
Blog posts tagged as 'interaction'

Connbox: prototyping a physical product for video presence with Google Creative Lab, 2011

At the beginning of 2011 we started a wide-ranging conversation with Google Creative Lab, discussing near-future experiences of Google and its products.

We’ve already discussed “Lamps”, our collaboration on conceptual R&D around computer vision, in a separate post.

They already had another brief in mind before approaching us: to create a physical product encapsulating Google’s voice/video chat services.

This brief became known as ‘Connection Box’ or ‘Connbox’ for short…


For six months through the spring and summer of 2011, a multidisciplinary team at BERG developed the brief – through research, strategic thinking, and hardware and software prototyping – into a believable technical and experiential proof of a product that could be taken to market.

It’s a very different set of outcomes from Lamps, and a different approach – although still rooted in material exploration, it’s much more centred on rapid product prototyping, to really understand what the experience of the physical device, service and interface could be.

As with our Lamps post, I’ve broken this long report of what was a very involving project for the entire studio into sections.


The Connbox backstory


The videophone has an unusually long cultural legacy.

It has been a very common feature of science fiction all the way back to the 1920s. As part of our ‘warm-up’ for the project, Joe put together a supercut of all the instances he could recollect from film and TV…

Videophones in film from BERG on Vimeo.

The video call is still often talked about as the next big thing in mobile phones (Apple used FaceTime as a central part of their iPhone marketing, while Microsoft bought Skype to bolster their tablet and phone strategy). But somehow video calling has been stuck in the ‘trough of disillusionment’ for decades. Furthermore, the videophone as a standalone product that we might buy in a shop has never become a commercial reality.

On the other hand, we can say that video calls have recently become common, but in a very specific context: people talking to laptops, constrained by the world as seen from a webcam and a laptop screen.


This kind of video calling has become synonymous with pre-arranged meetings, or pre-arranged high-bandwidth calls. It is very rarely about a quick question or hello, or a spontaneous connection, or an always-on presence between two spaces.

Unpacking the brief

The team at Google Creative Lab framed a high-level prototyping brief for us.

The company has a deep-seated interest in video-based communication, and of course, during the project both Google Hangouts and Google Plus were launched.

The brief placed a strong emphasis on working prototypes and live end-to-end demos. They wanted to, in the parlance of Google, “dogfood” the devices, to see how they felt in everyday use themselves.

I asked Jack to recall his reaction to the brief:

The domain of video conferencing products is staid and unfashionable.

Although video phones have lived large in the public imagination, no company has made a hardware product stick in the way that audio devices have. There’s something weirdly broken about transplanting behaviours associated with a phone onto video: synchronous talking, ringing or alerts when one person wants another’s attention, hanging up and picking up, etc.

Given the glamour and appetite for the idea, I felt that somewhere between presence and video a device type could emerge which supported a more successful and appealing set of behaviours appropriate to the form.

The real value in the work was likely to emerge in what vehicle designers call the ‘third read’. The idea of product having a ‘first, second and third read’ comes up a lot in the studio. We’ve inherited it by osmosis from product designer friends, but an excerpt from the best summation of it we can find on the web follows:

The concept of First, Second, Third Read which comes from the BMW Group automotive heritage in terms of understanding Proportion, Surface, and Detail.

The First Read is about the gesture and character of the product. It is the first impression.

Looking closer, there is the Second Read in which surface detail and specific touchpoints of interaction with the product confirm impressions and set up expectations.

The Third Read is about living with the product over time—using it and having it meet expectations…

So we’re not beginning with how the product looks or where it fits in a retail landscape, but designing from the inside out.

We start by understanding presence through devices and what video can offer, build out the behaviours, and then identify forms and hardware which support that.

To test and iterate this detail we needed to make everything, so that we could live with the behaviours and see them happen in the world.


Material Exploration


We use the term ‘material exploration’ to describe our early experimental work: an in-depth investigation of the subject through the properties, both innate and emergent, of the materials at hand. We’ve talked about it previously here and here.

What are the materials that make up video? They include the more traditional components and aspects of film – lenses, screens, projectors, field-of-view – as well as newer opportunities in the domains of facial recognition and computer vision.

Some of our early experiments looked at field-of-view – how could we start to understand where an always-on camera could see into our personal environment?

We also challenged the prevalent forms of video communication – which generally are optimised for tight shots of people’s faces. What if we used panoramic lenses and projection to represent places and spaces instead?


In the course of these experiments we used a piece of OpenFrameworks code developed by Golan Levin. Thanks Golan!

We also experimented with the visual, graphic representation of yourself and other people. We are used to the ‘picture-in-picture’ mode of video conferencing, where we see the other party but have an image of ourselves superimposed in a small window.

We experimented with breaking out the representation of yourself into a separate screen, so you could play with your own image, and position the camera for optimal or alternative viewpoints, or to actually look ‘through’ the camera to maintain eye contact, while still being able to look at the other person.


One of the main advantages of this – aside from obviously being able to direct a camera at things of interest to the other party – was to remove the awkwardness of the picture-in-picture approach to showing yourself superimposed on the stream of the person you are communicating with…

There were interaction and product design challenges in making a simpler, self-contained video chat appliance, amplified by the loss of things we take for granted on the desktop or touchscreen: the standard UI, windowing, inputs and outputs all had to be re-imagined as physical controls.

This is not a simple translation between a software and hardware behaviour, it’s more than just turning software controls into physical switches or levers.

It involves choosing what to discard, what to keep and what to emphasise.

Should the product allow ‘ringing’ or ‘knocking’ to kickstart a conversation, or should it rely on other audio or visual cues? How do we encourage always-on, ambient, background presence with the possibility of spontaneous conversations and ad-hoc, playful exchanges? Existing ‘video calling’ UI is not set up to encourage this, so what is the new model of the interaction?

To do this we explored in abstract some of the product behaviours around communicating through video and audio.

We began working with Durrell Bishop from LuckyBite at this stage, and he developed scenarios drawn as simple cartoons which became very influential starting points for the prototyping projects.

The cartoons feature two prospective users of an always-on video communication product – Bill and Ann…


This single panel from a larger scenario shows the moment Bill opens up a connection (effectively ‘going online’) and Ann sees this change reflected as a blind going up on Bill’s side of her Connbox.

Prototyping


Our early sketches, both on whiteboards and in these explorations, then informed two prototyping efforts – the first around the technical challenges of making a standalone product around Google voice/video, and the second focussed more on the experiential challenges of making a simple, pleasurable domestic video chat device.


For reasons that might become obvious, the technical exploration became nicknamed “Polar Bear” and the experience prototype “Domino”.

Prototype 1: A proof of technology called ‘Polar Bear’

In parallel with the work to understand behaviours we also began exploring end-to-end technical proofs.

We needed to see if it was possible to make a technically feasible video-chat product with components that would be believable for mass-production, using open-standard software.

Aside from this, it provided us with something to ‘live with’, to understand the experience of having an always-on video chat appliance in a shared social space (our studio).


Andy and Nick worked closely with Tom and Durrell from Luckybite on housing the end-to-end proof in a robust accessible case.

It looked like a polar bear to us, and the name stuck…


The software stack was designed so that the device behaved as an appliance: once paired with its counterpart, it would fire up a video connection to that device over wireless internet from the moment it was switched on, with no interface beyond the switch at the plug.
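A minimal sketch of that boot-to-call behaviour, in TypeScript – purely illustrative of the interaction model, not the real stack (described below); `dialPeer` is a hypothetical stand-in for the actual call setup:

```typescript
// Sketch of the appliance behaviour: from power-on, keep trying to reach
// the single paired counterpart. There is no other UI, so failure simply
// means "retry shortly". `dialPeer` is hypothetical.
type CallState = "idle" | "connecting" | "connected";

async function dialPeer(peerId: string): Promise<boolean> {
  // Hypothetical: negotiate and start the video session with the paired box.
  return false;
}

async function runAppliance(pairedPeerId: string): Promise<void> {
  let state: CallState = "idle";
  while (true) {
    if (state !== "connected") {
      state = "connecting";
      state = (await dialPeer(pairedPeerId)) ? "connected" : "idle";
    }
    // Re-check the connection every five seconds, forever.
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
}

runAppliance("polar-bear-counterpart");
```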


We worked with Collabora to implement the stack on Pandaboards: small form-factor development boards.


Living with Polar Bear was intriguing – sound became less important than visual cues.

It reminded us all of Matt Webb’s “Glancing” project back in 2003:

Every so often, you look up and look around you, sometimes to rest your eyes, and other times to check people are still there. Sometimes you catch an eye, sometimes not. Sometimes it triggers a conversation. But it bonds you into a group experience, without speaking.

Prototype 2: A product and experience prototype called “Domino”


We needed to come up with new kinds of behaviours for an always-on, domestic device.

This was the biggest challenge by far, inventing ways in which people might be comfortable opening up their spaces to each other, and on top of that, to create a space in which meaningful interaction or conversation might occur.

To create that comfort we wanted to make the state of the connection as evident as possible, and the controls over how you appear to others simple and direct.

The studio’s preoccupations with making “beautiful seams” suffused this stage of the work – our quest to create playful, direct and legible interfaces to technology, rather than ‘seamless’ systems that cannot be read or mastered.

In workshops with Luckybite, the team sketched out an approach where the state of the system corresponds directly to the physicality of the device.


The remote space that you are connecting with is shown on one screen housed in a block, and your own space is shown on another. To connect the spaces, the blocks are pushed together; to disconnect, they are pulled apart.
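As a sketch of how little ‘interface’ that leaves, the whole connection model reduces to a two-state machine driven by the physical arrangement of the blocks (TypeScript, with invented event names):

```typescript
// The Domino rule: the physical arrangement of the two blocks *is* the
// system state. Pushing them together connects; pulling them apart
// disconnects. No ringing, no hanging up.
type DominoEvent = "blocks-joined" | "blocks-separated";
type DominoState = "disconnected" | "connected";

function nextState(state: DominoState, event: DominoEvent): DominoState {
  switch (event) {
    case "blocks-joined":
      return "connected";
    case "blocks-separated":
      return "disconnected";
  }
}

console.log(nextState("disconnected", "blocks-joined")); // "connected"
```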

Durrell outlined a promising approach to the behaviour of the product in a number of very quick sketches during one of our workshops:


Denise further developed the interaction design principles in a detailed “rulespace” document, which we used to develop video prototypes of the various experiences. This strand of the project acquired the nickname ‘Domino’ – these early representations of two screens stacked vertically resembling the game’s pieces.


As the team started to design at a greater level of detail, they started to see the issues involved in this single interaction: Should this action interrupt Ann in her everyday routine? Should there be a sound? Is a visual change enough to attract Ann’s attention?

The work started to reveal more playful uses of the video connection, particularly being able to use ‘stills’ to communicate status. The UI also imagines the use of video filters to change the way that you are represented, going all the way to abstracting the video image altogether – becoming visualisations of audio or movement, or just pixellated blobs of colour. Other key features emerged, such as a ‘do not disturb’ blind that could be pulled down onscreen through a physical gesture, and the ability to ‘peek’ through it to let the other side know about our intention to communicate.
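To make one of those abstractions concrete, here is a hedged sketch of the ‘pixellated blobs of colour’ treatment, using only standard browser canvas APIs – our own reconstruction for illustration, not the project’s code:

```typescript
// Pixellate the local camera feed: draw each frame tiny, then scale it
// back up with image smoothing disabled - a classic trick that yields
// solid blocks of colour. The block size is an arbitrary choice.
async function startPixellatedSelfView(blockSize = 24): Promise<void> {
  const video = document.createElement("video");
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  document.body.appendChild(canvas);
  const ctx = canvas.getContext("2d")!;

  function draw(): void {
    const w = Math.max(1, Math.floor(canvas.width / blockSize));
    const h = Math.max(1, Math.floor(canvas.height / blockSize));
    ctx.imageSmoothingEnabled = false;
    ctx.drawImage(video, 0, 0, w, h);  // shrink the frame
    ctx.drawImage(canvas, 0, 0, w, h,  // blow it back up into blocks
                  0, 0, canvas.width, canvas.height);
    requestAnimationFrame(draw);
  }
  draw();
}
```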

Product/ID development


With Luckybite, we started working on turning Domino into something that would bridge the gap between experience prototype and product.


The product design seeks to make all of the interactions evident with minimum styling – but with flashes of Google’s signature colour-scheme.


The detachable camera, with a microphone that can be muted with a sliding switch, can be connected to a separate stand.


This allows it to be re-positioned and pointed at other views or objects.


This is a link back to our early ‘material explorations’ that showed it was valuable to be able to play with the camera direction and position.

Prototype 3: Testing the experience and the UI


The final technical prototypes in this phase made a bridge between the product design and experience thinking, and the technical explorations.

This manifested in early prototypes using Android handsets connected to servers.


Connbox: Project film


Durrell Bishop narrates some of the prototype designs that he and the team worked through in the Connbox project.

The importance of legible products


The Connbox design project had a strong thread running through it of making interfaces as evident and simple as possible, even when trying to convey abstract notions of service and network connectivity.

I asked Jack to comment on the importance of ‘legibility’ in products:

Connbox exists in a modern tradition of legible products, which shows the influence of Durrell Bishop. The best example I’ve come across that speaks to this thinking is an answering machine Durrell designed.

When messages are left on the answering machine they’re represented as marbles which gather in a tray. People play the messages by placing them in a small dip and when they’ve finished they replace them in the machine.


If messages are for someone else in the household they’re left in that person’s bowl for later. When you look at the machine, the system is clear, presented through its physical form. The whole state of the system is evident on the surface, as the form of the product.

Making technology seamless and invisible hides the control and state of the system – this path of thinking and design tries to place as much control as possible in the hands of the end-user by making interfaces evident.

In the prototype UI design, Joe created some lovely details of interaction fusing Denise’s service design sketches and the physical product design.

For instance, I love this detail where pressing the physical ‘still’ button causes a digital UI element to ‘roll’ out from the finger-press…


A very satisfying dial for selecting video effects/filters…


And here, where a physical sliding tab on top of the device creates the connection between two spaces…


This feels like a rich direction to explore in future projects: a kind of ‘reverse skeuomorphism’, where digital and physical affordances work together to do what each does best, rather than one merely imitating the other.

Conclusion: What might have been next?


At the end of this prototyping phase, the project was put on hiatus, but a number of directions seemed promising to us and Google Creative Lab.

Broadly speaking, the work was pointing towards new kinds of devices, not designed for our pockets but for our homes. Further explorations would have to be around the rituals and experience of use in a domestic setting.

Special attention would have to be given to the experience of set-up, particularly pairing or connecting the devices. Would this be done as a gift, easily configured and left perhaps for a relative who didn’t have a smartphone or computer? How could that be done in an intuitive manner that emphasised the gift, but left the receiver confident that they could not break the connection or the product? Could it work over a cellular radio connection, in places where no wireless broadband is available?


What cues could the physical product design give to both functionality and context? What might the correct ‘product language’ be for such a device, or family of devices, for them to be accepted into the home and not seen as intrusive technology?

G+ and Hangouts launched toward the end of the project, so unfortunately there wasn’t time in the project to accommodate these interesting new products.


However we did start to talk about ways to physicalize G+’s “Circles” feature, which emphasises small groups and presence – it seemed like a great fit with what we had already looked at. How might we create a product that connects you to an ‘inner circle’ of contacts and the spaces they were in?

Postscript: Then and Now – how technology has moved on, and where we’d start now


Since we started the Connbox project in the Spring of 2011, one could argue that we’ve seen a full cycle of Moore’s law improve the capabilities of available hardware, and certainly both industry and open-source efforts in the domain of video codecs and software have advanced significantly.

Making Connbox now would be a very different endeavour.

Here Nick comments on the current state-of-the-art and what would be our starting points were we (or someone else) to re-start the project today…

Since we wrapped up this project in 2011, there’s been one very conspicuous development in the arena of video chat, and that is the rise of WebRTC: a draft web standard from the W3C enabling browser-to-browser video chat without plugins.

As of early 2013, Google and Mozilla have demonstrated this system working in their nightly desktop browser builds, and recorded the first cross-browser video call. Ericsson are one of the first groups to have a mobile implementation available for Android and iOS in the form of their “Bowser” browser application.

WebRTC itself is very much an evolution of earlier work. The brainchild of Google Hangout engineers, this single standard is implemented using a number of separate components. The video and audio technology comes from Google in the form of the VP8 and iLBC codecs. The transport layer has incorporated libjingle which we also relied upon for our Polar Bear prototype, as part of the Farsight 2 stack.

Google is currently working on enabling WebRTC functionality in Chrome for Android, and once this is complete, it will provide the ideal software platform to explore and prototype Connbox ideas. What’s more, it actually provides a system which would be the basis of taking a successful prototype into full production.
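For a flavour of that starting point, here is a minimal sketch of a WebRTC call using the promise-based browser APIs as later standardised (the early-2013 implementations were vendor-prefixed and callback-based). The signalling channel that would pair two Connboxes is deliberately left abstract, as the standard itself leaves it:

```typescript
// One side of a WebRTC video call. `signal` carries the offer and ICE
// candidates to the peer over some out-of-band channel (a server, XMPP,
// etc. - hypothetical here, since WebRTC does not specify signalling).
async function startCall(signal: (msg: object) => void): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Send our camera and microphone to the peer.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Show the remote stream when it arrives - the 'other space' screen.
  pc.ontrack = (event) => {
    const video = document.createElement("video");
    video.srcObject = event.streams[0];
    video.autoplay = true;
    document.body.appendChild(video);
  };

  // Publish our half of the negotiation.
  pc.onicecandidate = (event) => event.candidate && signal({ candidate: event.candidate });
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signal({ offer });

  return pc;
}
```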

Notable precedents


While not exhaustive, here are some projects, products, research and thinking we referenced during the work…


Thanks

Massive thanks to Tom Uglow, Sara Rowghani, Chris Lauritzen, Ben Malbon, Chris Wiggins, Robert Wong, Andy Berndt and all those we worked with at Google Creative Lab for their collaboration and support throughout the project.

Thanks to all we worked with at Collabora and Future Platforms on prototyping the technology.

Big thanks to Oran O’Reilly who worked on the films with Timo and Jack.

The design behind How many really

How big really is now just over a year old, released just before I started work at BERG, and I still find myself totally engaged with the simplicity of the concept. It’s a solid, easy to digest punch of information that translates unknown quantities into something instantly recognisable. How many really is the second part of the experiment, and I was tasked with working on the design. This is a little write up of the design process.

We started off by following up a workshop Webb & Jones had run with the BBC to kick off the initial concept of examining quantity. James Darling, Matt Brown and I spent a week whiteboarding, sketching and iterating, trying to nail down some initial ideas.

The first thought was about the variables we could use to convey changes in quantity. Time, movement, zoom and scale were all identified as potentially useful.

We started to construct sentences that could tell a story, and break down into portions to allow new stories to slot in.

Looking at splitting grids into sections to show different variables.

We thought a bit about avatars, and how to use them in visual representations of data, in this case combining them with friends’ names and stories.

Looking at combining avatars with ‘bodies’. Bird suits, vehicles, polaroids.

An early narrative concept, setting up the story early on and sending you through a process of experience. We thought about pushing bits of stories to devices in real time.

After a bit more crunching and sketching, we broke everything down into two routes:

  • Scale – influenced by Powers of 10, used to compare your networks to increasing sizes of numbers,
  • Grouping / snapping – used to take your contacts and run them through a set of statistics, applying them personally to historical events and comparing them against similar events in different times.


What became clear after the sketching was the need to show a breadcrumb trail of information, to give the user a real sense of their scale compared to the numbers we were looking at. Eames’ Powers of 10 video achieves this – a set of steps, with consistent visual comparisons between each step.

Perfect for showing the relevance of one thing in relation to the next, or a larger collective group. But the variation in the stories we’d be showing meant that we didn’t want bespoke graphics for each individual scenario. We tested out a quick mockup in Illustrator using relatively sized, solid colour squares.

Despite the lack of rich textures and no visual indicators of your current position in the story, the impact was there. We added Facebook / Twitter avatars for signed in states, and worked on a colour palette that would sit well with BBC branding.

The next problem was dealing with non-signed in states. How many really was always designed to work with social networks, but we wanted it to be just as relevant with no Facebook or Twitter credentials – for classrooms, for example. We took a trip to the V&A to view the Isotype exhibition that was on at the time.

 

That’s 85-year-old iconography and infographic design that looks as relevant today as it did back then. A real sense of quantity through simple pictograms. Completely fantastic. We set about designing a stack of Isotype-influenced icons to work with the site when users weren’t signed into their social networks.

And the icons in context…

We used a bit of Isotype inspiration for the organisation of the grouping stories – evenly spaced grids of icons or avatars.

The rest of the site was intended to stay consistent with How big really. We used photography in place of bespoke graphics for the story panels, as the graphical output varies for each user.

How many really is an entirely different beast to How big really. Rather than each dimension being a solid, one shot hit, the value is in backing up simple visuals with interesting narratives. We spent almost as much time on the written aspect of stories as we did on the aesthetics and interaction. I hope it gives a little context to numbers and figures we often take for granted. Please do have a browse around!

Friday links: Comics, Space & Rizzle Kicks

Another Friday, another round-up of the various things that have been flying around the office mailing list this week.

Core 77 are running a feature on visualisations of The Metropolis in comics. Part 1 is all about the night:

Simon sent this around – a video from the camera mounted on each of space shuttle Endeavour’s rocket boosters:

Timo sent around the trailer for producer Amon Tobin’s live tour:

Matt Jones sent around Olafur Eliasson‘s latest exhibition ‘Your rainbow panorama‘ – a 360 degree viewing platform ‘suspended between the city and the sky’, which looks incredible.

Denise pointed us to this (via @antimega), a wonderful video of dust devils lifting plastic sheets from strawberry fields:

Finally, as the sun’s out here in London and music features fairly high on our agenda at 6pm on a beautiful Friday evening, Matt Webb sent around this video from Brighton based duo Rizzle Kicks – a superbly produced video, and quite a nice track as well. Enjoy!

Bells!

Our friends at Tellart made something lovely this week.

“Bells” lets you compose a tune using tiny digital toy bells on the web, which will then – through the magic of the internet, solenoids and electromagnetism – play out in their studio on ‘real’ tiny toy bells.

I chose to render a version of “Here Come The Warm Jets” by Brian Eno…

Playing Eno with http://bells.tellart.com/

And a few minutes later got to see Matt Cottam and Bruno ‘enjoying’ it in Providence, RI…

Playing Eno to Bruno

Nice!

Non-personal computing: sketching a multi-user UI for the iPad

The iPad feels like a household device.

Sofa computing: passable from person-to-person, parent-to-child… And sharable as a ‘multiplayer magic table’ surface, as discussed here previously.

Magic table games

And yet, at time-of-writing, it’s a personal computer.

While parents of my acquaintance have found work-arounds, such as placing their children’s favourite apps on specific ‘pages’ of the homescreen, it’s a device bound to a MacBook or iMac, and an iTunes account – ultimately to an individual, not a small group.

While travelling last month, my wife and I managed to use the iPad as our shared device by basically signing-in and out of our Google accounts. Do-able but laborious.

Switch seems like a useful step in the direction of “non-personal computing”, allowing multiple user accounts for browsing, with a single password for each.

But I thought I’d quickly sketch something that built on the ‘magic-table’ mock-ups I’d been playing with back in the summer – looking at enhancing the passable and sharable nature of the iPad as an object in and of the household.

Multi-user iPad UI Sketch

It’s pretty simple, and not much of a leap, frankly…

Multi-user iPad: Portrait

The ‘person-in-each-corner’ pattern can already be seen in iPad games such as Marble Mixer and Multipong, so this really just uses the corners of the device in tandem with the orientation sensors to select which of the – up to four* – different users wants to access their apps and settings on the device.
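To make that concrete, here’s a sketch of the corner-selection logic – combining where a tap lands with the current screen rotation to recover the physical corner of the device, so ‘my corner’ stays mine however the iPad is turned. The coordinate conventions are assumptions for illustration, not a spec:

```typescript
// Four users, one per physical corner of the device.
type User = 0 | 1 | 2 | 3;

// Corners are numbered clockwise from top-left in the device's natural
// (portrait) frame. `rotation` is the current screen rotation in degrees.
function userForTap(
  x: number, y: number,
  width: number, height: number,
  rotation: 0 | 90 | 180 | 270,
): User {
  const right = x > width / 2;
  const bottom = y > height / 2;
  // Corner as currently seen on screen, clockwise from top-left.
  const screenCorner = right ? (bottom ? 2 : 1) : (bottom ? 3 : 0);
  // Undo the rotation to recover the device's physical corner.
  const steps = rotation / 90;
  return ((screenCorner + steps) % 4) as User;
}

console.log(userForTap(10, 10, 768, 1024, 0)); // top-left corner -> user 0
```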

Activity notifications could be displayed alongside the names on the lockscreen so that you could quickly see at a glance if anything needed your attention.

Multi-user iPad: Landscape

And, if you wanted a little more privacy from the rest of your housemates or family, then just a standard iOS passcode dialog could be set.

Multi-user iPad: Passcode

That’s it really.

Just a quick sketch but something I wanted to get out of my head.

The individual nature of the UI and user-model of the iPad seems so at odds to me with its form-factor, the share-ability of its screen technology and its emergent context of use that I can imagine something (much more elegant than this) coming from Apple in the near-future.

Of course, they may just want to sell us all one each…


* as well as the four-user limit being a simple mapping to the number of corners the thing has, this seems like a very Apple constraint to me…

Magic tables, not magic windows

A while back, in 2007, I wrote about ‘a lost future’ of touch technology, and the rise of a world full of mobile glowing attention-wells.

“…it’s likely that we’re locked into pursuing very conscious, very gorgeous, deliberate touch interfaces – touch-as-manipulate-objects-on-screen rather than touch-as-manipulate-objects-in-the-world for now.”

It does look very much like we’re living in that world now – where our focus is elsewhere than our immediate surroundings – mainly residing through our fingers, in our tiny, beautiful screens.

Andrew Blum writes about this, amongst other things, in his excellent piece “Local Cities, Global Problems: Jane Jacobs in an Age of Global Change”:

Like a lot of things here, they are deeply connected to other places. Their attention is divided. And, by extension, so is ours. While this feeling is common to all cities over time, cell phones bring the tangible immediacy of the faraway to the street. Helped along by media and the global logistics networks that define our material lives, our moment-to-moment experience of the local has become increasingly global.

Recently, of course, our glowing attention wells have become larger.

We’ve been designing, developing, using and living with iPads in the studio for a while now, and undoubtedly they are fine products despite their drawbacks – but it wasn’t until our friend Tom Coates introduced me to a game called Marble Mixer that I thought they were anything other than the inevitable scaling of an internet-connected screen, and the much-mooted emergence of a tablet form-factor.

It led me to think they might be much more disruptive as magic tables than magic windows.

Marble Mixer is a simple game, well-executed. Where it sings is when you invite friends to play with you.

Each of you occupies a corner of the device and attempts to flick marbles into the goal-mouth against the clock – dislodging the others’ marbles.

Beautiful. Simple. But also – amazing and transformative!

We’re all playing with a magic surface!

When we’re not concentrating on our marbles, we’re looking each other in the eye – chuckling, tutting and cursing our aim – and each other.

There’s no screen between us, there’s a magic table making us laugh. It’s probably my favourite app to show off the iPad – including the ones we’ve designed!

It shows that the iPad can be a media surface to share, rather than a proscenium to consume through alone.

Russell Davies pointed this out back in February (before we’d even touched one) saying:

[GoGos]’d be the perfect counters for a board game that uses the iPad as the board. They’d look gorgeous sitting on there. We’d need to work out how to make the iPad think they were fingers – maybe some sort of electrostatic sausage skin – and to remember which was which.

GoGos on an iPhone

Inspired by Marble Mixer, and Russell’s writings – I decided to do a spot of rapid prototyping of a ‘peripheral’ for magic table games that calls out the shared-surface…

Magic table games

It’s a screen – but not a glowing one! Just a simple bit of foamboard cut so it obscures your fellow player’s share of the game board, for games like Battleships, or in this case – a mocked-up guessing-game based on your flickr contacts…

Magic table games

You’d have a few guesses to narrow it down… Are they male, do they have a beard etc…

Magic table games

Fun for all the family!

Anyway – as you can see – this is not so serious a prototype, but I can imagine some form of capacitive treatment to the bottom edge of the screen, perhaps varying the amount of secret territory each player reveals to the other, or capacitive counters as Russell suggests.

Aside from games though – the pattern of a portable shared media surface is surely worth pursuing.

As Paul Dourish put it in his book “Where the action is” – the goal would be

“interacting in the world, participating in it and acting through it, in the absorbed and unreflective manner of normal experience.”

Designing media and services for “little-ass” rather than “big-ass” magic tables might propel us into a future not so removed from the one I thought we might have lost…

Popular Science+

In December, we showed Mag+, a digital magazine concept produced with our friends at Bonnier.

Late January, Apple announced the iPad.

So today Popular Science, published by Bonnier and the largest science+tech magazine in the world, is launching Popular Science+ — the first magazine on the Mag+ platform, and you can get it on the iPad tomorrow. It’s the April 2010 issue, it’s $4.99, and you buy more issues from inside the magazine itself.

See Popular Science+ in the iTunes Store now.

Here’s Jack, speaking about the app, its background, and what we learned about art direction for magazines using Mag+.

Articles are arranged side by side. You swipe left and right to go between them. For big pictures, it’s fun to hold your finger between two pages, holding and moving to pan around.

You swipe down to read. Tap left to see the pictures, tap right to read again. These two modes of the reading experience are about browsing and drinking in the magazine, versus close reading.

Pull the drawer up with two fingers to see the table of contents and your other issues. Swipe right and left with two fingers to zip across pages to the next section. Dog-ear a page by turning down the top-right corner.
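Abstracted into code, that navigation model is a small state machine: horizontal gestures move between articles, vertical ones move within the current article, and taps switch between the two reading modes. This is our abstraction of the behaviour described above, not Mag+ source:

```typescript
interface MagazineState {
  article: number;              // index into the side-by-side articles
  scroll: number;               // vertical position within the article
  mode: "pictures" | "reading"; // browsing the imagery vs. close reading
}

type Gesture =
  | { kind: "swipe"; dx: number; dy: number }
  | { kind: "tap"; side: "left" | "right" };

function applyGesture(s: MagazineState, g: Gesture, articleCount: number): MagazineState {
  if (g.kind === "tap") {
    // Tap left for the picture layer, tap right to read.
    return { ...s, mode: g.side === "left" ? "pictures" : "reading" };
  }
  if (Math.abs(g.dx) > Math.abs(g.dy)) {
    // Horizontal swipe: previous/next article, clamped to the issue.
    const article = Math.min(articleCount - 1, Math.max(0, s.article - Math.sign(g.dx)));
    return { ...s, article, scroll: 0 };
  }
  // Vertical swipe: scroll within the current article.
  return { ...s, scroll: Math.max(0, s.scroll - g.dy) };
}
```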

There’s a store in the magazine. When a new issue comes out, you purchase it right there.

Editorial

Working with the Popular Science team and their editorial has been wonderful, and we’ve been working together to re-imagine the form of magazines. Art direction for print is so much about composition. There are a thousand tiny tweaks to tune a page to get it to really sing. But what does layout mean when readers can make the text disappear, when the images move across one another, and the page itself changes shape as the iPad rotates?

We discovered safe areas. We found little games to play with the reader, having them assemble infographics in the act of scrolling, and making pages that span multiple panes, only revealing themselves when the reader does a double-finger swipe to zoom across them.

It helps that Popular Science has great photography, a real variety of content, and an engaged and open team.

What amazes me is that you don’t feel like you’re using a website, or even that you’re using an e-reader on a new tablet device — which, technically, is what it is. It feels like you’re reading a magazine.

Apple made the first media device you can curl up with, and I think we’ve done it, and Popular Science, justice.

From concept to production

The story, for me, is that the design work behind the Mag+ concept video was strong enough to spin up a team to produce Popular Science+ in only two months.

Not only that, but an authoring system that understands workflow. And InDesign integration so art directors are in control, not technologists. And an e-commerce back-end capable of handling business models suitable for magazines. And a new file format, “MIB,” that strikes the balance between simple enough for anyone to implement, and expressive enough to let the typography, pictures, and layout shine. And it’s set up to do it all again in 30 days. And more.

It’s all basic, sure. But it’ll grow. We’ve built in ways for it to grow.

But we’ve always said that good design is rooted not just in doing good by the material, but in understanding the opportunities in the networks of organisations and people too.

A digital magazine is great, immersive content on the screen. But behind those pixels are creative processes and commercial systems that also have to come together.

Inventing something, be it a toy or new media, always means assembling networks such as these. And design is our approach on how to do it.

I’m pleased we were able to work with Popular Science and Bonnier, to get to a chance to do this, and to bring something new into the world.

Thanks!

Thank you to the BERG team for sterling work on El Morro these last two months, especially the core team who have sunk so much into this: Campbell Orme, James Darling, Lei Bramley, Nick Ludlam and Timo Arnall. Also Jack Schulze, Matt Jones, Phil Gyford, Tom Armitage, and Tom Taylor.

Thanks to the Popular Science team, Mike Haney and Sam Syed in particular, Mark Poulalion and his team from Bonnier, and of course Bonnier R&D and Sara Öhrvall, the grand assembler!

It’s a pleasure and a privilege to work with each and every one of you.

See also…

Mag+, a concept video on the future of digital magazines

I’ve got something I want to share with you.

We’ve been working with our friends at Bonnier R&D exploring the future of digital magazines. Bonnier publish Popular Science and many other titles.

Magazines have articles you can curl up with and lose yourself in, and luscious photography that draws the eye. And they’re so easy and enjoyable to read. Can we marry what’s best about magazines with the always connected, portable tablet e-readers sure to arrive in 2010?

This video prototype shows the Mag+ project’s take.

You can see this same video bigger on Vimeo.

The articles run in scrolls, not pages, and are placed side-by-side in a kind of mountain range (as we call it internally). Magazines still arrive in issues: people like the sense of completion at the end of each.

Mag+ in landscape

You flip through by shifting focus. Tap the pictures on the left of the screen to flip through the mag, tap text on the right to dive in.

Bedside manner

It is, we hope, like stepping into a space for quiet reading. It’s pleasant to have an uncluttered space. Let the Web be the Web. But you can heat up the words and pics to share, comment, and to dig into supplementary material.

Heated Mode

The design has an eye to how paper magazines can re-use their editorial work without having to drastically change their workflow or add new teams. Maybe if the form is clear enough then every mag, no matter how niche, can look gorgeous, be super easy to understand, and have a great reading experience. We hope so. That gets tested in the next stage, and rolled into everything learned from this, and feedback from the world at large! Join the discussion at the Bonnier R&D Beta Lab.

Recently there have been digital magazine prototypes by Sports Illustrated, and by Wired. It’s fascinating to see the best features of all of these.

Many teams at Bonnier have been involved in Mag+. This is a synthesis of so much work, research, and ideas. But I want to say in particular it’s a pleasure to collaborate with our friends at R&D. And here at BERG let me call out some specific credits: Jack Schulze, Matt Jones, Campbell Orme and Timo Arnall. Thanks all!

Treemap ToC

(See also Bonnier R&D’s Mag+ page, where you can leave comments and contact Bonnier, and the thoughts of Kicker Studio — who will be expanding the concept to robust prototype over the next few months in San Francisco! BERG’s attention has now moved to the social and wider services around Mag+ – we’ll be mapping those out and concepting – and we’re looking forward to working with all the teams into 2010. Awesome.)

Olinda interface drawings

Last week, Tristan Ferne who leads the R&D team in BBC Audio & Music Interactive gave a talk at Radio at the Edge (written up in Radio Today). As a part of his talk he discussed progress on Olinda.

Most of the design and conceptual work for the radio is finished now. We are dealing with the remaining technicalities of bringing the radio into the world. To aid Tristan’s presentation we drew up some slides outlining how we expect the core functionality to work when the radio manifests.

Social module

Social Module sequence

This animated sequence shows how the social module is expected to work. The radio begins tuned to BBC Radio 2. A light corresponding to Matt’s radio lights up on the social module. When the lit button is pressed, the top screen reveals that Matt is listening to Radio 6 Music; selecting it retunes the radio to that station.

Tuning

Tuning drawing

This detail shows how the list management will work. The radio has a dual rotary dial for tuning between the different DAB stations. The outer dial cycles through the full list of all the stations the radio has successfully scanned for. The inner dial filters the list down and cycles through the top five most listened to stations. We’ll write more on why we’ve made these choices when the radio is finished.
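As a sketch of that list logic (the data shapes here are invented for illustration – the radio’s actual firmware will differ):

```typescript
interface Station {
  name: string;
  secondsListened: number; // accumulated listening time, drives the top five
}

class Tuner {
  private index = 0;
  constructor(private stations: Station[]) {}

  // Outer dial: cycle through the full list of scanned stations.
  outerStep(direction: 1 | -1): Station {
    const n = this.stations.length;
    this.index = (this.index + direction + n) % n;
    return this.stations[this.index];
  }

  // Inner dial: cycle through only the five most-listened-to stations.
  innerStep(direction: 1 | -1): Station {
    const topFive = [...this.stations]
      .sort((a, b) => b.secondsListened - a.secondsListened)
      .slice(0, 5);
    const pos = Math.max(0, topFive.indexOf(this.stations[this.index]));
    const next = topFive[(pos + direction + topFive.length) % topFive.length];
    this.index = this.stations.indexOf(next);
    return next;
  }
}
```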

RFID icons

Earlier this year we hosted a workshop for Timo Arnall‘s Touch project. This was a continuation of the brief I set my students late last year, to design an icon or series of icons to communicate the use of RFID technology publicly. The students who took on the work wholeheartedly delivered some early results which I summarised here.

This next stage of the project involved developing the original responses to the brief into a small number of icons to be tested, by Nokia, with a pool of 25 participants to discover their responses. Eventually these icons could end up in use on RFID-enabled surfaces, such as mobile phones, gates, and tills.

Timo and I spent an intense day working with Alex Jarvis and Mark Williams. The intention for the day was to leave us with a series of images which could be used to test responses. The images needed consistency and fairly conservative limits were placed on what should be produced. Timo’s post on the workshop includes a good list of references and detailed outline of the requirements for the day.

I’m going to discuss two of the paths I was most involved with. The first is around how the imagery and icons can represent fields we imagine are present in RFID technology.

Four sketches exploring the presence of an RFID field

The following four sketches are initial ideas designed to explore how representation of fields can help imply the potential use of RFID. The images will evolve into the worked-up icons to be tested by Nokia, so the explorations are based around mobile phones.

I’m not talking about what is actually happening with the electromagnetic field induction and so forth. These explorations are about building on the idea of what might be happening and seeing what imagery can emerge to support communication.

The first sketch uses the pattern of the field to represent that information is being transferred.

Fields sketch 01

The two sketches below imply the completion of the communication by repeating the shape or symbol in the mind or face of the target. The sketch on the left uses the edge of the field (made of triangles) to indicate that data is being carried.

Fields sketch 02

I like this final of the four sketches, below, which attempts to deal with two objects exchanging an idea. It is really over-complex and looks a bit illuminati, but I’d love to explore this all more and see where it leads.

Fields sketch 03

Simplifying and working-up the sketches into icons

For the purposes of our testing, these sketches were attempting too much too early so we remained focused on more abstract imagery and how that might be integrated into the icons we had developed so far. The sketch below uses the texture of the field to show the communication.

Fields sketch 04

Retaining the mingling fields, these sketches became icons. Both of the results below imply interference and the meeting of fields, but they are also burdened by seeming atomic or planet-sized, and annoyingly (but perhaps appropriately) like credit card logos. Although I really like the imagery that emerges, I’m not sure how much it is doing to help think about what is actually happening.

Fields sketch 05

Fields sketch 06

Representing purchasing via RFID, as icons

While the first path was for icons simply to represent RFID being available, the second path was specifically about the development of icons to show RFID used for making a purchase (‘purchase’ is one of the several RFID verbs from the original brief).

There is something odd about using RFID tags. They leave you feeling uncertain, and distanced from the exchange or instruction. When passing through an automated mechanical (pre-RFID) ticket barrier, or using a coin-operated machine, the time the machine takes to respond feels closely related to the mechanism required to trigger it. Because RFID is so invisible, any timing or response feels arbitrary. When turning a key in a lock, this actually releases the door. When waving an RFID keyfob at a reader pad, one is setting off a hidden computational process which will eventually lead to a mechanical unlocking of the door.

Given the secretive nature of RFID, the approach that emerged was based on the next image, originally commissioned from me by Matt for a talk a couple of years ago. It struck me as very like using an RFID-enabled phone. The phone has a secret system for pressing secret buttons that you yourself can’t push.

Hand from Phone

Many of the verbs we are examining, like purchase, download or open, communicate really well through hands. The idea of representing RFID behaviours through images of hands emerging from phones, performing actions, has a great deal of potential. Part of the strength of the following images comes from the familiarity of the mobile phone as an icon – it side-steps some of the problems faced in attempting to represent RFID directly.

The following sketches deal with purchase between two phones.

Purchase hands sketch

Below are the two final icons that will go for testing. There is some ambiguity about whether coins are being taken or given, and I’m pleased that we managed to get something this unusual and bizarre into the testing process.

Hands purchase 01

Hands purchase 02

Alex submitted a poster for his degree work, representing all the material for testing from the workshop:

Outcomes

The intention is to continue iterations and build upon this work once the material has been tested (along with other icons). As another direction, I’d like to take these icons and make them situated, perhaps for particular malls or particular interfaces, integrating with the physical environment and language of specific machines.
