This website is now archived. To find out what BERG did next, go to www.bergcloud.com.

Blog posts tagged as 'video'

Connbox: prototyping a physical product for video presence with Google Creative Lab, 2011

At the beginning of 2011 we started a wide-ranging conversation with Google Creative Lab, discussing near-future experiences of Google and its products.

We’ve already discussed “Lamps”, our collaboration on conceptual R&D around computer vision, in a separate post.

Before approaching us, they already had another brief in mind: to create a physical product encapsulating Google’s voice/video chat services.

This brief became known as ‘Connection Box’ or ‘Connbox’ for short…


For six months through the spring and summer of 2011, a multidisciplinary team at BERG developed the brief – through research, strategic thinking, and hardware and software prototyping – into a believable technical and experiential proof of a product that could be taken to market.

It’s a very different set of outcomes from Lamps, and a different approach – although still rooted in material exploration, it’s much more centred around rapid product prototyping to really understand what the experience of physical device, service and interface could be.

As with our Lamps post, I’ve broken up this long report of what was a very involving project for the entire studio into sections.


The Connbox backstory


The videophone has an unusually long cultural legacy.

It has been a very common feature of science fiction all the way back to the 1920s. As part of our ‘warm-up’ for the project, Joe put together a super-cut of all of the instances he could recollect from film and TV…

Videophones in film from BERG on Vimeo.

The video call is still often talked about as the next big thing in mobile phones (Apple used FaceTime as a central part of their iPhone marketing, while Microsoft bought Skype to bolster their tablet and phone strategy). But somehow video calling has been stuck in the ‘trough of disillusionment’ for decades. Furthermore, the videophone as a standalone product that we might buy in a shop has never become a commercial reality.

On the other hand, video calls have recently become common, but in a very specific context: people talking to laptops, constrained by the world as seen from a webcam and a laptop screen.


This kind of video calling has become synonymous with pre-arranged meetings, or pre-arranged high-bandwidth calls. It is very rarely about a quick question or hello, or a spontaneous connection, or an always-on presence between two spaces.

Unpacking the brief

The team at Google Creative Lab framed a high-level prototyping brief for us.

The company has a deep-seated interest in video-based communication, and of course, during the project both Google Hangouts and Google Plus were launched.

The brief placed a strong emphasis on working prototypes and live end-to-end demos. They wanted to, in the parlance of Google, “dogfood” the devices, to see how they felt in everyday use themselves.

I asked Jack to recall his reaction to the brief:

The domain of video conferencing products is staid and unfashionable.

Although video phones have lived large in the public imagination, no company has made a hardware product stick in the way that audio devices have. There’s something weirdly broken about taking the behaviours associated with a phone – synchronous talking, ringing or alerts when one person wants another’s attention, hanging up and picking up, etc. – and transplanting them directly to video.

Given the glamour and appetite for the idea, I felt that somewhere between presence and video a device type could emerge which supported a more successful and appealing set of behaviours appropriate to the form.

The real value in the work was likely to emerge in what vehicle designers call the ‘third read’. The idea of a product having a ‘first, second and third read’ comes up a lot in the studio. We’ve inherited it by osmosis from product designer friends, but an excerpt from the best summation of it we can find on the web follows:

The concept of First, Second, Third Read which comes from the BMW Group automotive heritage in terms of understanding Proportion, Surface, and Detail.

The First Read is about the gesture and character of the product. It is the first impression.

Looking closer, there is the Second Read in which surface detail and specific touchpoints of interaction with the product confirm impressions and set up expectations.

The Third Read is about living with the product over time—using it and having it meet expectations…

So we’re not beginning with how the product looks or where it fits in a retail landscape, but designing from the inside out.

We start by understanding presence through devices and what video can offer, build out the behaviours, and then identify forms and hardware which support that.

To test and iterate this detail we needed to make everything, so that we could live with and see the behaviours happen in the world.


Material Exploration


We use the term ‘material exploration’ to describe our early experimental work: an in-depth engagement with the subject through the properties, both innate and emergent, of the materials at hand. We’ve talked about it previously here and here.

What are the materials that make up video? They include the more traditional components and concerns of film – lenses, screens, projectors, field-of-view – as well as newer opportunities in the domains of facial recognition and computer vision.

Some of our early experiments looked at field-of-view – how could we start to understand where an always-on camera could see into our personal environment?

We also challenged the prevalent forms of video communication – which generally are optimised for tight shots of people’s faces. What if we used panoramic lenses and projection to represent places and spaces instead?


In the course of these experiments we used a piece of OpenFrameworks code developed by Golan Levin. Thanks Golan!
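For the curious, the core of such a panoramic unwarp is a polar-to-cartesian mapping. Here is a minimal Python sketch of the idea (the function name and parameters are ours, not taken from Golan’s openFrameworks code):

```python
import math

def unwarp_map(cx, cy, r_inner, r_outer, out_w, out_h):
    """For each pixel (col, row) of the flat, unwarped strip, compute
    the source coordinate in the donut-shaped panoramic image whose
    centre is (cx, cy)."""
    mapping = {}
    for col in range(out_w):
        theta = 2 * math.pi * col / out_w            # angle around the lens
        for row in range(out_h):
            # row 0 samples the outer edge of the donut, the last row the inner
            r = r_outer - (r_outer - r_inner) * row / (out_h - 1)
            mapping[(col, row)] = (cx + r * math.cos(theta),
                                   cy + r * math.sin(theta))
    return mapping
```

A real implementation would then sample the source image at each mapped coordinate (with interpolation) to fill the strip.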

We also experimented with the visual, graphic representation of yourself and other people. We are used to the ‘picture-in-picture’ mode of video conferencing, where we see the other party but have an image of ourselves superimposed in a small window.

We experimented with breaking out the representation of yourself into a separate screen, so you could play with your own image, and position the camera for optimal or alternative viewpoints, or to actually look ‘through’ the camera to maintain eye contact, while still being able to look at the other person.


One of the main advantages of this – aside from obviously being able to direct a camera at things of interest to the other party – was to remove the awkwardness of the picture-in-picture approach to showing yourself superimposed on the stream of the person you are communicating with…

There were interaction and product design challenges in making a simpler, self-contained video chat appliance, amplified by the need to rethink things we take for granted on the desktop or touchscreen: the standard UI, windowing, inputs and outputs all had to be re-imagined as physical controls.

This is not a simple translation between a software and hardware behaviour, it’s more than just turning software controls into physical switches or levers.

It involves choosing what to discard, what to keep and what to emphasise.

Should the product allow ‘ringing’ or ‘knocking’ to kickstart a conversation, or should it rely on other audio or visual cues? How do we encourage always-on, ambient, background presence with the possibility of spontaneous conversations and ad-hoc, playful exchanges? Existing ‘video calling’ UI is not set up to encourage this, so what is the new model of the interaction?

To do this we explored in abstract some of the product behaviours around communicating through video and audio.

We began working with Durrell Bishop from LuckyBite at this stage, and he developed scenarios drawn as simple cartoons which became very influential starting points for the prototyping projects.

The cartoons feature two prospective users of an always-on video communication product – Bill and Ann…


This single panel from a larger scenario shows the moment Bill opens up a connection (effectively ‘going online’) and Ann sees this change reflected as a blind going up on Bill’s side of her Connbox.

Prototyping


Our early sketches, on whiteboards and in these explorations, then informed our prototyping efforts: the first around the technical challenges of making a standalone product for Google voice/video, the second more focussed on the experiential challenges of making a simple, pleasurable domestic video chat device.


For reasons that might become obvious, the technical exploration became nicknamed “Polar Bear” and the experience prototype “Domino”.

Prototype 1: A proof of technology called ‘Polar Bear’

In parallel with the work to understand behaviours we also began exploring end-to-end technical proofs.

We needed to see whether a video chat product was technically feasible using components believable for mass production and open-standard software.

Aside from this, it provided us with something to ‘live with’ – a way to understand the experience of having an always-on video chat appliance in a shared social space (our studio).


Andy and Nick worked closely with Tom and Durrell from Luckybite on housing the end-to-end proof in a robust accessible case.

It looked like a polar bear to us, and the name stuck…


The software stack was designed to behave like an appliance: once paired with its counterpart, a device would fire up a video connection over wireless internet as soon as it was powered on, with no interface beyond the switch at the plug.
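That lifecycle can be caricatured as a tiny state machine (a toy of our own, not the actual stack): the only user input is power, and everything else – joining the network, finding the paired device, reconnecting after a dropout – happens automatically.

```python
# States of a hypothetical auto-connecting video appliance.
# Power is the only user-facing control; all other transitions
# are driven by the network.
TRANSITIONS = {
    ("off",          "power_on"):   "joining_wifi",
    ("joining_wifi", "wifi_up"):    "finding_peer",
    ("finding_peer", "peer_found"): "streaming",
    ("streaming",    "link_lost"):  "finding_peer",  # silent auto-reconnect
}

def next_state(state, event):
    """Advance the appliance; unknown events leave the state unchanged."""
    if event == "power_off":
        return "off"
    return TRANSITIONS.get((state, event), state)
```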


We worked with Collabora to implement the stack on Pandaboards: small form-factor development boards.


Living with Polar Bear was intriguing – sound became less important than visual cues.

It reminded us all of Matt Webb’s “Glancing” project back in 2003:

Every so often, you look up and look around you, sometimes to rest your eyes, and other times to check people are still there. Sometimes you catch an eye, sometimes not. Sometimes it triggers a conversation. But it bonds you into a group experience, without speaking.

Prototype 2: A product and experience prototype called “Domino”


We needed to come up with new kinds of behaviours for an always-on, domestic device.

This was the biggest challenge by far, inventing ways in which people might be comfortable opening up their spaces to each other, and on top of that, to create a space in which meaningful interaction or conversation might occur.

To create that comfort we wanted to make the state of the connection as evident as possible, and the controls over how you appear to others simple and direct.

The studio’s preoccupations with making “beautiful seams” suffused this stage of the work – our quest to create playful, direct and legible interfaces to technology, rather than ‘seamless’ systems that cannot be read or mastered.

In workshops with Luckybite, the team sketched out an approach where the state of the system corresponds directly to the physicality of the device.


The remote space that you are connecting with is represented on one screen housed in a block, and the screen that shows your space is represented on another. To connect the spaces, the blocks are pushed together, and pulled-apart to disconnect.
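The rule that makes this legible is that system state is read directly off the physical arrangement, with no hidden modes. A few lines of Python (our own sketch, not project code) capture the idea:

```python
def connection_state(blocks_touching, blind_down):
    """Derive the Connbox state purely from what you can see and touch.
    There is nothing to remember: the physical arrangement *is* the state."""
    if not blocks_touching:
        return "disconnected"       # blocks pulled apart
    if blind_down:
        return "connected_blind"    # present, but not sharing video
    return "connected_live"         # blocks together, blind up
```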

Durrell outlined a promising approach to the behaviour of the product in a number of very quick sketches during one of our workshops:


Denise further developed the interaction design principles in a detailed “rulespace” document, which we used to develop video prototypes of the various experiences. This strand of the project acquired the nickname ‘Domino’ – these early representations of two screens stacked vertically resembling the game’s pieces.


As the team started to design at a greater level of detail, they started to see the issues involved in this single interaction: Should this action interrupt Ann in her everyday routine? Should there be a sound? Is a visual change enough to attract Ann’s attention?

The work started to reveal more playful uses of the video connection, particularly being able to use ‘stills’ to communicate about status. The UI also imagines use of video filters to change the way that you are represented, going all the way towards abstracting the video image altogether, becoming visualisations of audio or movement, or just pixellated blobs of colour. Other key features such as a ‘do not disturb blind’ that could be pulled down onscreen through a physical gesture emerged, and the ability to ‘peek’ through it to let the other side know about our intention to communicate.
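The most abstract end of that filter range – pixellated blobs of colour – amounts to averaging the frame down into coarse blocks. A minimal Python sketch (ours; it operates on a 2D list of grey values rather than real video):

```python
def pixellate(frame, block):
    """Reduce a frame (a list of rows of grey values) to one averaged
    value per block-by-block tile, discarding identifiable detail."""
    h, w = len(frame), len(frame[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            tile = [frame[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(sum(tile) // len(tile))
        out.append(row)
    return out
```

The larger the block size, the further the image slides from portrait towards pure presence.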

Product/ID development


With Luckybite, we started working on turning Domino into something that would bridge the gap between experience prototype and product.


The product design seeks to make all of the interactions evident with minimum styling – but with flashes of Google’s signature colour-scheme.


The detachable camera, with a microphone that can be muted with a sliding switch, can be connected to a separate stand.


This allows it to be re-positioned and pointed at other views or objects.


This is a link back to our early ‘material explorations’ that showed it was valuable to be able to play with the camera direction and position.

Prototype 3: Testing the experience and the UI


Final technical prototypes in this phase make a bridge between the product design and experience thinking and the technical explorations.

This manifested in early prototypes using Android handsets connected to servers.


Connbox: Project film


Durrell Bishop narrates some of the prototype designs that he and the team worked through in the Connbox project.

The importance of legible products


The Connbox design project had a strong thread running through it of making interfaces as evident and simple as possible, even when trying to convey abstract notions of service and network connectivity.

I asked Jack to comment on the importance of ‘legibility’ in products:

Connbox exists in a modern tradition of legible products, which shows the influence of Durrell Bishop. The best example I’ve come across that speaks to this thinking is the answering machine Durrell designed.

When messages are left on the answering machine they’re represented as marbles which gather in a tray. People play the messages by placing them in a small dip and when they’ve finished they replace them in the machine.


If messages are for someone else in the household they’re left in that person’s bowl for later. When you look at the machine the system is clear, presented through its physical form. The whole state of the system is evident on the surface, as the form of the product.

Making technology seamless and invisible hides the control and state of the system – this path of thinking and design tries to place as much control as possible in the hands of the end-user by making interfaces evident.

In the prototype UI design, Joe created some lovely details of interaction fusing Denise’s service design sketches and the physical product design.

For instance, I love this detail where pressing the physical ‘still’ button causes a digital UI element to ‘roll’ out from the finger-press…


A very satisfying dial for selecting video effects/filters…


And here, where a physical sliding tab on top of the device creates the connection between two spaces…


This feels like a rich direction to explore in future projects: a kind of ‘reverse skeuomorphism’, where digital and physical affordances work together to do what each does best, rather than one simply imitating the other.

Conclusion: What might have been next?


At the end of this prototyping phase, the project was put on hiatus, but a number of directions seemed promising to us and Google Creative Lab.

Broadly speaking, the work was pointing towards new kinds of devices, not designed for our pockets but for our homes. Further explorations would have to be around the rituals and experience of use in a domestic setting.

Special attention would have to be given to the experience of set-up, particularly pairing or connecting the devices. Would this be done as a gift, easily configured and left perhaps for a relative who didn’t have a smartphone or computer? How could that be done in an intuitive manner that emphasised the gift, but left the receiver confident that they could not break the connection or the product? Could it work over a cellular radio connection, in places where no wireless broadband is available?


What cues could the physical product design give to both functionality and context? What might the correct ‘product language’ be for such a device, or family of devices, for them to be accepted into the home and not seen as intrusive technology?

G+ and Hangouts launched toward the end of the project, so unfortunately there wasn’t time to accommodate these interesting new products.


However we did start to talk about ways to physicalise G+’s “Circles” feature, which emphasises small groups and presence – it seemed like a great fit with what we had already looked at. How might we create a product that connects you to an ‘inner circle’ of contacts and the spaces they were in?

Postscript: Then and Now – how technology has moved on, and where we’d start now


Since we started the Connbox project in the Spring of 2011, one could argue that we’ve seen a full cycle of Moore’s law improve the capabilities of available hardware, and certainly both industry and open-source efforts in the domain of video codecs and software have advanced significantly.

Making Connbox now would be a very different endeavour.

Here Nick comments on the current state-of-the-art and what would be our starting points were we (or someone else) to re-start the project today…

Since we wrapped up this project in 2011, there’s been one very conspicuous development in the arena of video chat, and that is the rise of WebRTC. WebRTC is a draft web standard from the W3C that enables browser-to-browser video chat without needing plugins.

As of early 2013, Google and Mozilla have demonstrated this system working in their nightly desktop browser builds, and recorded the first cross-browser video call. Ericsson are one of the first groups to have a mobile implementation available for Android and iOS in the form of their “Bowser” browser application.

WebRTC itself is very much an evolution of earlier work. The brainchild of Google Hangout engineers, this single standard is implemented using a number of separate components. The video and audio technology comes from Google in the form of the VP8 and iLBC codecs. The transport layer has incorporated libjingle which we also relied upon for our Polar Bear prototype, as part of the Farsight 2 stack.

Google is currently working on enabling WebRTC functionality in Chrome for Android, and once this is complete, it will provide the ideal software platform to explore and prototype Connbox ideas. What’s more, it actually provides a system which would be the basis of taking a successful prototype into full production.
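At the heart of WebRTC sits an offer/answer negotiation in which the two peers agree on a common codec such as VP8. A toy Python model of that handshake (ours alone; real WebRTC exchanges full SDP descriptions, not codec lists):

```python
def make_offer(codecs):
    """The caller advertises its codecs in preference order."""
    return {"type": "offer", "codecs": list(codecs)}

def make_answer(offer, supported):
    """The callee accepts the first offered codec it also supports."""
    for codec in offer["codecs"]:
        if codec in supported:
            return {"type": "answer", "codec": codec}
    return {"type": "answer", "codec": None}  # no common codec: call fails
```

In the real standard, the offer and answer are carried between browsers by whatever signalling channel the application provides.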

Notable precedents


While not exhaustive, here are some projects, products, research and thinking we referenced during the work…


Thanks

Massive thanks to Tom Uglow, Sara Rowghani, Chris Lauritzen, Ben Malbon, Chris Wiggins, Robert Wong, Andy Berndt and all those we worked with at Google Creative Lab for their collaboration and support throughout the project.

Thanks to all we worked with at Collabora and Future Platforms on prototyping the technology.

Big thanks to Oran O’Reilly who worked on the films with Timo and Jack.

Notes on videophones in film

(It’s good at the beginning of projects to research what’s come before, and Joe is pretty spectacular at finding references and explaining what’s interesting about each one. He’s done this for a few projects now, but we’ve never made his research public. Last year Joe put together a set of appearances of videophones in film. It’s a lovely collection, and it was a stimulating way to think around the subject! So I asked him to share it here. -Matt W)

Metropolis (1927)

Features wall-mounted analogue videophone. Joh Fredersen appears to use four separate dials to arrive at the correct frequency for the call. Two assign the correct call location and two smaller ones provide fine video tuning. He then picks up a phone receiver with one hand and uses the other to tap a rhythm on a panel that is relayed to the other phone and displayed as flashes of light to attract attention.

Transatlantic Tunnel (1935)

Features two very different pieces of industrial design at either end of the call.

This device displays similarities to the form of a TV set…

And this one has been designed to appear more like furniture. The screen is low down in a self-contained wooden unit designed for a seated caller.

Out of the Unknown (1965)

User’s own image is reflected back to them until a connection is made. Possibly to confirm that the camera is working correctly. The hexagonal screen is an extension of a mobile chair.

2001: A Space Odyssey (1968)

A public booth containing a large phone unit. The system communicates that it is in a ‘ready’ state through the screen. A call is made by entering a number into the type-pad, and a connection is established on pickup.

Colossus: The Forbin Project (1970)

We see five or six openly shared phones and connected screens sitting on a desk in the White House.

It is apparent that a single video feed can be broadcast to multiple screens in parallel, as below, or exclusively to a single one.

Space: 1999 (1975-1977)

The monotone blue phone screen has been designed into the very architecture of the craft.

And here requiring a key to connect.

Blade Runner (1982)

An outdoor, public phone service. Network information is displayed on screen implying that it is subject to change. When Deckard begins to dial a ‘transmitting’ notification appears. The cost of the call is shown when the receiver line is closed.

The screen is used as a canvas, covered in scrawled messages. A cross indicates the optimal position for the viewer’s head.

Back to the Future Part II (1989)

Marty McFly is contacted by Needles, his coworker. The video feed features personal information about the person in view – favourite drinks and hobbies.

Real-time message input can be expressed as video overlays.

Or push print-outs.

When not in use the screen displays a Van Gogh self portrait.

The Jetsons (1962-1988)

Videophones throughout the series. Rather than command desk space they lower from the ceiling when required. They appear to come with either a handset or microphone.

Star Trek II: The Wrath of Khan (1982)

Circular screen set into a square frame emerging from a pillar unit.

Gremlins 2: The New Batch (1990)

Shows an AT&T VideoPhone 2500 prototype with space for handwritten addresses. When the video feed is lost the system defaults to a voice exchange through the handset.

Star Trek Nemesis (2002)

Pop-up screen set into the desk. Appears when call is received. Only visible when required.

The Rock (1996)

One-way video stream displayed on multiple wall-inset screens.

The Simpsons: Lisa’s Wedding (1995)

A “Picturephone” uses a rotary dial to make calls. A camera housed in the device is distinctly visible in a trapezium above the screen. Set in the future, the device seems to be a new invention that Marge isn’t quite used to yet, as she visibly crosses her fingers guaranteeing that Homer will behave at the forthcoming wedding.

Moon (2009)

Ruggedised video phone for use in zero gravity. No function for hiding the outgoing video stream is evident, as Sam Bell uses his hand to cover the camera.

And finally… the super cut of all of them

Watch the videophones supercut on Vimeo.

Week 350

Week 350 from BERG on Vimeo.

Robot Readable World. The film.

I recently cut together a short film, an experiment in found machine-vision footage:

Robot readable world from Timo on Vimeo.

As robots begin to inhabit the world alongside us, how do they see and gather meaning from our streets, cities, media and from us? The robot-readable world is one of the themes that the studio has been preoccupied by recently. Matt Jones talked about it last year:

“The things we are about to share our environment with are born themselves out of a domestication of inexpensive computation, the ‘Fractional AI’ and ‘Big Maths for trivial things’ that Matt Webb has spoken about.

and

‘Making Things See’ could be the beginning of a ‘light-switch’ moment for everyday things with behaviour hacked into them. For things with fractional AI, fractional agency – to be given a fractional sense of their environment.

This film uses found-footage from computer vision research to explore how machines are making sense of the world. And from a very high-level and non-expert viewing, it seems very true that machines have a tiny, fractional view of our environment, that sometimes echoes our own human vision, and sometimes doesn’t.

For a long time I have been struck by just how beautiful the visual expressions of machine vision can be. In many research papers and Siggraph experiments that float through our inboxes, there are moments with extraordinary visual qualities, probably quite separate from and unintended by the original research. There is something about the crackly, jittery, yet often organic, insect-like or human quality of a robot’s interpretation of the world. It often looks unstable and unsure, and occasionally mechanically certain and accurate.

Of the film Warren Ellis says:

“Imagine it as, perhaps, the infant days of a young machine intelligence.”

The Robot-Readable World is pre-Cambrian at the moment, but machine vision is becoming a design material alongside metals, plastics and immaterials. It’s something we need to develop understandings and approaches to, as we begin to design, build and shape the senses of our new artificial companions.

Much of our fascination with this has been fuelled by James George’s beautiful experiments, Kevin Slavin’s lucid unpacking of algorithms and the work (above) by Adam Harvey developing a literacy within computer vision. Shynola are also headed in interesting directions with their production diary for the upcoming Red Men film, often crossing over with James Bridle’s excellent ongoing research into the aesthetics of contemporary life. And then there is the work of Harun Farocki in his Eye / Machine series that unpacks human-machine distinctions through collected visual material.

As a sidenote, this has reminded me that I was long ago inspired by Paul Bush’s ‘Rumour of true things’, which is ‘constructed entirely from transient images – including computer games, weapons testing, production line monitoring and marriage agency tapes’ – ‘a remarkable anthropological portrait of a society obsessed with imaging itself’. This found-footage tactic is fascinating: the process of gathering and selecting footage is an interesting R&D exercise, and cutting it all together reveals new meanings and concepts. Something to investigate as a method of research and communication.

“Sometimes the stories are the science…”

This is a blog post about a type of work we find successful – namely, video prototyping – and why we think it’s valuable.

We’ve made quite a few films in the last couple of years that have had some success in how they describe products, technologies and the contexts of their use in public.

We’re lucky enough to work with Timo Arnall, as creative director, who guides all of our film output and is central to the way that we’ve been able to use the moving image as part of our design process – more of which later.

Film is a great way to show things that have behaviour in them – and the software, services and systems that literally animate them.

Embedded in Time.

A skilled film-maker can get across the nature of that behaviour in a split-second with film – which would take thousands of words or ultra-clear infographics.

They can do this along with the bonuses of embedding humour, emotional resonance, context and a hundred other tacit things about the product.

Film is also an easy way to show things that don’t exist yet, or can’t exist yet – and make claims about them.

We’ve all seen videos by corporations and large design companies that are glossy and exciting illustrations of the new future products they’ll almost certainly never make.

Some are dire, some are intriguing-but-flawed, some are awesome-but-unbelievable.

This is fine!

More than fine!

Brilliant!

Ultimately they are communications – of brand and ambition – rather than legal promises.

Some of these communications, though, have enormous purchase on our dreams and ambitions for years afterwards – for better or for worse.

I’m thinking particularly of the Apple ‘Knowledge Navigator’ film of 1987, important in some of the invention it foreshadowed, even while some of the notions in it are now a little laughable.

It was John Sculley’s vision – not Jobs’s – and was quite controversial at the time.

Nevertheless, designers, technologists and businesses have pursued those ideas with greater and lesser success due to the hold that film had over the collective psyche of the technology industry for, say, 20 years.

Hugh Dubberly, who was working at Apple at the time, points out some of the influences on the film in a piece on his studio’s website:

“We began with as much research as we could do in a few days. We talked with Aaron Marcus and Paul Saffo. Stewart Brand’s book on the “Media Lab” was also a source—as well as earlier visits to the Architecture Machine Group. We also read William Gibson’s “Neuromancer” and Vernor Vinge’s “True Names”.

Of course Apple, the company that authored it, arguably went on to build it, to some extent, with the iPhone.

The gravity well of the knowledge navigator was enormous, and fittingly, Apple punched out of it first with real product.

As Andy Baio and Jason Kottke have pointed out – the predicted time horizon for some of the concepts realised in the iPhone 4S, and particularly Siri, was uncannily accurate.

This ‘communications gravity’ – the sheer weight of the ‘microfuture’ portrayed – shifts the discussion, shifts culture and its invention, just a little bit toward it.

They are what Webb calls (after Victor Papanek, I believe) ‘normative’ – they illustrate something we want to build toward.

They are also commercial acts – perhaps with altruistic or collegiate motives woven in – but commercial all the same.

They illustrate a desirable microfuture wherein Brand-X’s product or services are central.

Dubberly, in his piece about Knowledge Navigator, points out the importance of this – the influence the film had on the corporate imagination of the company, and of its competitors:

“What is surprising is that the piece took on a life of its own. It spawned half a dozen or more sequels within Apple, and several other companies made similar pieces. These pieces were marketing materials. They supported the sale of computers by suggesting that a company making them has a plan for the future.

One effect of the video was engendering a discussion (both inside Apple and outside) about what computers should be like. On another level, the videos became a sort of management tool.

They suggested that Apple had a vision of the future, and they prompted a popular internal myth that the company was “inventing the future.”

Very recently, we’ve seen the rise of two other sub-genres of concept video.

It’s very early days for both, but both are remarkable for the ‘communications gravity’ they generate for very different commercial endeavours.

First of all – what Bruce Sterling has called the ‘vernacular video’ – often of products in use – created for startups and small companies.

Adam Lisagor has been hailed as the leader in this genre by Fast Company – and his short films for the likes of Flipboard, Square and Jawbone have in many ways defined the vernacular in that space. They are short and understated – and very clear about the central benefit of the product or service. Perfect for sharing and re-sharing. Timo’s written about Adam’s work previously on his personal blog, and I’d agree with him when he says “He’s good at surfacing the joy and pleasure in some of the smallest interactions”. They serve as extraordinarily elegant pitches for products and services that are ‘real’, i.e. have usually already been made.

Secondly – the short videos that illustrate the product intentions of people on Kickstarter, often called ‘campaign videos’ – outlining a prototype or a feasibility investigation into making a product at small scale.

They are often very personal and emotive, but mix in somewhat of a documentary approach to making and construction around prototypes. They serve as invitations to support a journey.

So far, so what?

Video is a well-known way of communicating new or future products & services that reaches the mainstream – and we are seeing a boom in the amount of great short communication about design, invention and making with ever-higher production value as the tools of creation fall in cost, and the techniques of using them become available to small, nimble groups of creators.

Well, we think that’s just half of the potential of using video.

There is a great deal of potential in using video as a medium for design itself – not just communicating what’s been designed, or imagined.

Jack and Timo drew this for me a couple of months ago when we were discussing an upcoming project.

Public Prototyping = New Grammars

We were talking about the overlap between invention and storytelling that occurs when we make films, and how and why that seems to happen.

On the right is the ‘communications gravity’ that I’ve already talked about above – but the left-hand circle of the Venn is ‘product invention’.

During a project like Mag+ we used video prototyping throughout – in order to find what was believable, what seemed valuable, and how it might normalise into a mainstream product of worth.

In the initial workshopping stages we made very quick sketches with cut-up magazines, pasted together and filmed with an iPhone – but then played back on an iPhone to understand the quality of the layout and interaction on a small screen.

From these early animatics to discuss with our client at Bonnier R&D, we moved to the video prototype of the chosen route.

There were many iterations of the ‘material quality’ of the interface – we call it the ‘rulespace’ – the physics of the interactions, the responsiveness of the media – tuned in the animation and video until we had something that felt right, and that could communicate its ‘rightness’ in film.

You find what is literally self-evident.

You are faking everything except this ‘rulespace’ – it’s a block of wood, with green paper on it. But as we’ve written before, that gets you to intuitions about use and gesture – what will make you tired, what will feel awkward in public places, how it sits on the breakfast table.

Finding the rulespace is the thing that is the real work – and that is product invention through making a simulation.

Why we make models

We are making a model of how a product is, to the degree that we can in video. We subject it to as much rigour as we can in terms of the material and technological capabilities we think can be built.

It must not be magic, or else it won’t feel real.

I guess I’m saying sufficiently-advanced technology should be distinguishable from magic.

Some of that is about context – we try and illustrate a “universe-next-door” where the new product is the only novelty. Where there is still tea, and the traffic is still miserable.

This increases belief in our possible microfuture to be sure – but it also serves a purpose in our process of design and invention.

The context itself is a rulespace – that the surface and behaviour of the product must believably fit into for it to be successful. It becomes part of the material you explore. There are phenomena you discover that present obstacles and opportunities.

That leads me to the final, overlapping area of the Venn diagram above – “New Grammar”

This summer I read “The Nature Of Technology: What it is and how it evolves” by W. Brian Arthur. I picked it up after reading Dan Saffer’s review of it, so many thanks to him for turning me on to it.

In it, Arthur frames the relationship between ‘natural phenomena’, as discovered and understood by science, and technology as that which ‘programs phenomena to our use’.

“That a technology relies on some effect is general. A technology is always based on some phenomenon or truism of nature that can be exploited and used to a purpose. I say “always” for the simple reason that a technology that exploited nothing could achieve nothing.”

“Phenomena are the indispensable source from which all technologies arise. All technologies, no matter how simple or sophisticated, are dressed-up versions of the use of some effect—or more usually, of several effects.”

“Phenomena rarely can be used in raw form. They may have to be coaxed and tuned to operate satisfactorily, and they may work only in a narrow range of conditions. So the right combination of supporting means to set them up for the purpose intended must be found.”

“A technology is a phenomenon captured and put to use. Or more accurately I should say it is a collection of phenomena captured and put to use. I use the word “captured” here, but many other words would do as well. I could say the phenomenon is harnessed, seized, secured, used, employed, taken advantage of, or exploited for some purpose. To my mind though, “captured and put to use” states what I mean the best.”

“…technology is more than a mere means. It is a programming of phenomena for a purpose. A technology is an orchestration of phenomena to our use.”

This leads me to another use of film we find valuable – as documentary evidence and experimental probe. What Schulze calls ‘science on science’.

The work that he and Timo did on RFID, exploring its ‘material’ qualities through film, is a good example of this, I think.

It’s almost a nature documentary in a way, pointing and poking at a phenomenon in order to capture new (often visual) language to understand it.

Back to W. Brian Arthur:

“…phenomena used in technology now work at a scale and a range that casual observation and common sense have no access to.”

I think this is what Jack and Timo are trying to address with work such as ‘Immaterials’, and referring to in the centre of their Venn – creating new grammar is an important part of both design investigation and communication. It is an act of synthesis that can happen within, and be expressed through, the film-making process.

Arthur’s book goes on to underline the importance of such activities in invention:

“A new device or method is put together from the available components—the available vocabulary—of a domain. In this sense a domain forms a language; and a new technological artifact constructed from components of the domain is an utterance in the domain’s language. This makes technology as a whole a collection of several languages, because each new artifact may draw from several domains. And it means that the key activity in technology—engineering design—is a form of composition. It is expression within a language (or several).”

He goes on to quote Paul Klee on the importance of increasing the grammar we have access to:

“…even adepts can never fully keep up with all the principles of combination in their domain. One result of this heavy investment in a domain is that a designer rarely puts a technology together from considerations of all domains available. The artist adapts himself, Paul Klee said, to the contents of his paintbox. “The painter… does not fit the paints to the world. He fits himself to the paint.” As in art, so in technology. Designers construct from the domains they know.”

I think one of the biggest rewards of this sort of work is finding new grammar from other domains. Or what Arthur calls the importance of ‘redomaining’ in invention.

“The reason… redomainings are powerful is not just that they provide a wholly new and more efficient way to carry out a purpose. They allow new possibilities.”

“A change in domain is the main way in which technology progresses.”

“…a single practitioner’s new projects typically contain little that is novel. But many different designers acting in parallel produce novel solutions: in the concepts used to achieve particular purposes; in the choice of domains; in component combinations; in materials, architectures, and manufacturing techniques. All these cumulate to push an existing technology and its domain forward.”

“At the creative heart of invention lies appropriation, some sort of mental borrowing that comes in the form of a half-conscious suggestion.”

“…associates a problem with a solution by reaching into his store of functionalities and imagining what will happen when certain ones are combined.”

“Invention at its core is mental association.”

It’s not necessarily an end product we are after – that comes through more thinking through making. And it also comes from a collegiate conversation using new grammars that work unearths.

But to get a new language, a map, even if it’s just a pirate map, just a confident sketch in an emerging territory – is invaluable in order to provoke the mental association Arthur refers to.

We’re going to continue to experiment with video as a medium for research, design and communication.

Recent efforts like ‘Clocks for Robots’ are us trying to find something like a sketch, where we start a conversation about new grammar through video…

About a decade ago, I saw Oliver Sacks speak about his work at the Rockefeller Institute in NYC.

A phrase from his address has stuck with me ever since. He said of what he did – his studies, and then the writing of books aimed at popular understanding of those studies – that ‘…sometimes the stories are the science’.

Sometimes our film work is the design work.

Again this is a commercial act, and we are a commercial design studio.

But it’s also something that we hope unpacks the near-future – or at least the near-microfutures – into a public where we can all talk about them.

Ditto

We’re a design studio, so we like going to the degree shows that pop up around London this time of year — London has a number of extremely strong design courses, and seeing what the students are up to is always an inspiration. A couple of weeks ago it was the Goldsmiths Design 2011 Show, and my personal favourite there was Ditto by Matt House. This is what he says:

Copying is fundamental to development and social interaction, yet it is viewed negatively in education and creative fields. With new media, reproduction is engrained in culture allowing us to embrace this phenomenon. How do individuals respond when you reiterate, reprocess and reclaim their property? We are the generation that remix, parody and re-enact. Go henceforth and copy.

(He says it twice, naturally.)

So what did he do? At the core of his show piece was a performance: he sat opposite you wearing a bowler hat (like a Magritte), and copied exactly everything you did, while you were doing it. What you spoke, how you moved, what you drew, your expressions.

I have never felt anything so uncanny. You lose yourself in the mirror-feeling, and it gets confused in your head where free will comes from.

His work speaks directly to the nature of novelty and invention, culture and the individual, and to the creative act — particularly now, in the 21st century, where everything is copy-and-pastable, the whole world is a palette to be dabbed and painted using our new brushes. It’s a wonderful feeling, to be forced to encounter this insight so abruptly!

Anyway, Matt House recently put a new video online, where he literally puts my words and the words of Matt Jones in his own mouth.

He’s taken bits of our public talks and patched them into his own movements. It makes me think: where does character come from? Ideas? He is deliberately being a blank slate, and this accentuates the individuality of Jones, me, and him.

This is weirder for me than for you, I’m sure, but I love it, and here it is:

berg copy from Matt House on Vimeo.

Sensor-Vernacular

Consider this a little bit of a call-and-response to our friends through the plasterboard, specifically James’ excellent ‘moodboard for unknown products’ on the RIG-blog (although I’m not sure I could ever get ‘frustrated with the NASA extropianism space-future’).

There are some lovely images there – I’m a sucker for the computer-vision dazzle pattern as referenced in William Gibson’s ‘Zero History’ as the ‘world’s ugliest t-shirt’.

The splinter-camo planes are incredible. I think this is my favourite that James picked out though…

Although – to me – it’s a little bit 80′s-Elton-John-video-seen-through-the-eyes-of-a-‘Cheekbone’-stylist-too-young-to-have-lived-through-certain-horrors.

I guess – like NASA imagery – it doesn’t acquire that whiff-of-nostalgia-for-a-lost-future if you don’t remember it from the first time round. For a while, anyway.

Anyway. We’ll come back to that.

The main thing, is that James’ writing galvanised me to expand upon a scrawl I made during an all-day crit with the RCA Design Interactions course back in February.

‘Sensor-Vernacular’ is a current placeholder/bucket I’ve been scrawling for a few things.

The work that Emily Hayes, Veronica Ranner and Marguerite Humeau in RCA DI Year 2 presented all had a touch of ‘sensor-vernacular’. It’s an aesthetic born of the grain of seeing/computation.

Of computer-vision, of 3d-printing; of optimised, algorithmic sensor sweeps and compression artefacts.

Of LIDAR and laser-speckle.

Of the gaze of another nature on ours.

There’s something in the kinect-hacked photography of NYC’s subways that we’ve linked to here before, that smacks of the viewpoint of that other next nature, the robot-readable world.


Photo credit: obvious_jim

The fascination we have with how bees see flowers, revealing the animal link between senses and motives. That our environment is shared with things that see, with motives we have intentionally or unintentionally programmed into them.

As Kevin Slavin puts it – the things we have written that we can no longer read.

Nick’s been playing this week with http://code.google.com/p/structured-light/, and made this quick (like, in a spare minute he had) sketch of me…

The technique has been used for some pretty lovely pieces, such as this music video for Broken Social Scene.

In particular, for me, there is something in the loop of 3d-scanning to 3d-printing to 3d-scanning to 3d-printing which fascinates.

Rapid Form by Flora Parrot

It’s the lossy-ness that reveals the grain of the material and process. A photocopy of a photocopy of a fax. But atoms. Like the 80′s fanzines, or old Wonder Stuff 7″ single cover art. Or Vaughan Oliver, David Carson.

It is – perhaps – at once a fascination with the raw possibility of a technology, and – a disinterest, in a way, in anything but the qualities of its output. Perhaps it happens when new technology becomes cheap and mundane enough to experiment with, and break – when it becomes semi-domesticated but still a little significantly-other.

When it becomes a working material not a technology.

We can look back to the 80s, again, for an early digital-analogue: what one might term ‘Video-Vernacular’.

Talking Heads’ cover art for their album “Remain In Light” remains a favourite. Its video-grain / raw-Quantel aesthetic still has a heck of a punch.

I found this passage from its Wikipedia entry fascinating:

“The cover art was conceived by Weymouth and Frantz with the help of Massachusetts Institute of Technology Professor Walter Bender and his MIT Media Lab team.

Weymouth attended MIT regularly during the summer of 1980 and worked with Bender’s assistant, Scott Fisher, on the computer renditions of the ideas. The process was tortuous because computer power was limited in the early 1980s and the mainframe alone took up several rooms. Weymouth and Fisher shared a passion for masks and used the concept to experiment with the portraits. The faces were blotted out with blocks of red colour.

The final mass-produced version of Remain in Light boasted one of the first computer-designed record jackets in the history of music.”

Growing up in the 1980s, my life was saturated by Quantel.

Quantel were the company in the UK most associated with computer graphics and video effects. Their machines were absurdly expensive, but even in the few years since Weymouth and Fisher had harnessed a room full of computing to make an album cover, Moore’s law had shrunk a Quantel box to about the size of a fridge, as I remember.

Their brand name comes from ‘Quantized Television’.

Awesome.

As a kid I wanted nothing more than to play with a Quantel machine.

Every so often there would be a ‘behind-the-scenes’ feature on how telly was made, and I wanted to be the person in the dark, illuminated by screens, changing what people saw. Quantizing television and changing it before it arrived in people’s homes. Photocopying the photocopy.

Alongside that, one started to see BBC Model B graphics overlaid on video and TV. This was a machine we had in school, and even some of my posher friends had at home! It was a video-vernacular emerging from the balance point between new/novel/cheap/breakable/technology/fashion.

Kinects and Makerbots are there now. Sensor-vernacular is in the hands of fashion and technology now.

In some of the other examples James cites, one might even see ‘Sensor-Deco’ arriving…

Lo-Rez Shoe by United Nude

James certainly has an eye for it. I’m going to enjoy following his exploration of it. I hope he writes more about it, the deeper structure of it. He’ll probably do better than I have.

Maybe my response to it is in some ways as nostalgic as my response to NASA imagery.

Maybe it’s the hauntology of moments in the 80s when the domestication of video, computing and business machinery made things new, cheap and bright to me.

But for now, let me finish with this.

There’s both a nowness and nextness to Sensor-Vernacular.

I think my attraction to it – what ever it is – is that these signals are hints that the hangover of 10 years of ‘war-on-terror’ funding into defense and surveillance technology (where after all the advances in computer vision and relative-cheapness of devices like the Kinect came from) might get turned into an exuberant party.

Dancing in front of the eye of a retired-surveillance machine, scanning and printing and mixing and changing. Fashion from fear. Quantizing and surprising. Imperfections and mutations amplifying through it.

Beyonce’s bright-green chromakey socks might be the first, positive step into the real aesthetic of the early 21st century, out of the shadows of how it began.

Let’s hope so.

Friday Links: Light with character, some graphic design, and music videos

Sticky Light is an installation that projects a laser that sticks to lines and solid objects. There’s no camera – just a laser and a photodetector. It’s incredibly responsive, and completely captivating. The dot of light takes on a surprising amount of personality, darting around, occasionally getting lost and confused, and then suddenly slipping away to explore its surroundings when released.
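The mechanics described above – a steered laser and a single photodetector, no camera – suggest a simple dithering control loop. Here’s a toy simulation of one plausible approach; the brightness model and function names are mine, not the installation’s:

```python
import math

# A toy simulation of the Sticky Light idea: a laser dot steered by a
# single photodetector reading. Nothing here is from the actual
# installation -- it is a guess at one plausible control loop.

def brightness(x, y):
    """Simulated photodetector reading: a dark vertical line at x = 5
    reflects less light than the surrounding white surface."""
    return 1.0 - math.exp(-((x - 5.0) ** 2))  # ~0 on the line, ~1 off it

def step(x, y, r=0.3):
    """Dither the dot in a small circle of sample points, then move to
    the darkest one -- the dot 'sticks' to whatever absorbs the laser."""
    samples = []
    for k in range(8):
        a = 2 * math.pi * k / 8
        sx, sy = x + r * math.cos(a), y + r * math.sin(a)
        samples.append((brightness(sx, sy), sx, sy))
    _, nx, ny = min(samples)  # darkest neighbouring sample wins
    return nx, ny

# release the dot on a blank patch; it wanders onto the dark line
x, y = 2.0, 0.0
for _ in range(50):
    x, y = step(x, y)
```

Once the dot reaches the line it oscillates in place (and slides along it), which is roughly the darting, exploring character the installation shows.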

That such a nuanced impression of character could be formed from such a seemingly simple actor reminded me of Ken Perlin’s Polly: a prism that walks around a surface. That may not sound like much, but once you start playing with the various animation loops programmed into it, you might well change your mind. “Dejected” is heartbreaking. And yet: it’s a triangular prism. Marvellous.

Two pieces of graphic design that caught my eye. First, via Paul Mison, a spread from Marie Neurath’s Railways Under London. There’s a bit more on the output of the Isotype Institute, and some lovely examples of their work for children, over at the Science Project blog.

Kafka

Secondly, via Frank Chimero, this lovely selection of covers for Kafka’s books by Peter Mendelsund. Mendelsund has a great blogpost on the design of the covers for publisher Schocken.

Finally, two music videos with interesting visual treatments. Firstly, Echo Lake’s Young Silence, which used a Kinect’s depth camera to film the band. It’s not a raw output, of course – there’s a lot of visual processing and compositing of the co-ordinate data afterwards – but it makes the video very striking, and much like a low-budget take on Radiohead’s House Of Cards video, which was filmed with LIDAR.

And, to end, Chairlift’s Evident Utensil. This came up in discussion in the studio when we were talking about the aesthetics unique to video in the digital age, such as stabilisation, or as in these videos, what happens when keyframe data goes missing. The answer to the latter can be seen in the Chairlift video – and in several other examples of Datamoshing.

Media Surfaces: Incidental Media

Following iPad light painting, we’ve made two films of alternative futures for media. These continue our collaboration with Dentsu London and Timo Arnall. We look at the near future, a universe next door in which media travels freely onto surfaces in everyday life. A world of media that speaks more often, and more quietly.

Incidental Media is the first of two films.

The other film can be seen here.

Each of the ideas in the film treats the surface as a focus, rather than the channel or the content delivered. Here, media includes messages from friends and social services like foursquare or Twitter, as well as more functional messages from companies or services like banks or airlines, alongside traditional big ‘M’ Media (like broadcast or news publishing).

All surfaces have access to connectivity. All surfaces are displays responsive to people, context, and timing. If any surface could show anything, would the loudest or the most polite win? Surfaces which show the smartest, most relevant material in any given context will be the most warmly received.
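One way to imagine that politeness as a mechanism: each candidate message scores itself against the current context, and the surface shows the winner only if it clears a threshold – otherwise it stays quiet. The scoring weights and field names below are entirely invented for illustration:

```python
# A sketch of 'polite surfaces': candidate messages compete on
# contextual relevance, and the surface may show nothing at all.
# The weights and message/context fields are invented, not from BERG.

def pick_message(messages, context, threshold=0.5):
    """Return the most contextually relevant message, or None if
    nothing clears the politeness threshold."""
    def score(msg):
        s = 0.0
        if msg["audience"] == context["person"]:
            s += 0.6                      # addressed to whoever is present
        if msg["topic"] in context["interests"]:
            s += 0.3                      # matches their interests
        s += 0.1 if msg["urgent"] else 0.0
        return s
    best = max(messages, key=score, default=None)
    return best if best and score(best) >= threshold else None

context = {"person": "alice", "interests": {"travel"}}
messages = [
    {"audience": "alice", "topic": "travel", "urgent": False,
     "text": "Gate changed to B42"},
    {"audience": "anyone", "topic": "ads", "urgent": True,
     "text": "SALE NOW ON"},
]
pick_message(messages, context)  # the quiet, relevant message wins
```

The loud advert scores low despite its urgency; with nobody relevant present, the surface shows nothing.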

Unbelievably efficient

I recently encountered this mixing in surfaces. An airline computer spoke to me through SMS. This space is normally reserved for awkwardly typed highly personal messages from friends. Not a conversational interface with a computer. But now, those pixels no longer differentiate between friends, companies and services.

Mixing Media

How would it feel if the news ticker we see as a common theme in broadcast news programmes began to contain news from services or social media?

Media Surfaces mixed media

I like the look of it. The dominance of linear channel based screens is distorted as it shares unpredictable pixels and a graphic language with other services and systems.

Ambient listening

This screen listens to its environment and runs an image search against some of the words it hears. I’ve long wanted to see what happens if the subtitles feed from BBC television broadcast content was tied to an image search.

Media Surfaces ambient listening

It feels quite strange to have a machine ambiently listening to words uttered even if the result is private and relatively anodyne. Maybe it’s a bit creepy.
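As a rough sketch of that ambient-listening loop – assuming some speech-to-text engine and an image-search API, both of which are hypothetical stand-ins here rather than real services:

```python
# A hedged sketch of the 'ambient listening' screen: words heard in
# the room drive an image search. `image_search` is a placeholder for
# whatever backend you have; the stopword filter is a crude guess.

STOPWORDS = {"the", "a", "an", "and", "of", "to", "is", "it"}

def interesting_words(transcript):
    """Keep the searchable-looking words from a transcript, drop filler."""
    return [w for w in transcript.lower().split()
            if w not in STOPWORDS and len(w) > 3]

def ambient_screen(transcript, image_search):
    """Map each interesting overheard word to an image for the display."""
    return {w: image_search(w) for w in interesting_words(transcript)}

# usage with a fake search backend:
fake_search = lambda word: f"https://images.example/{word}.jpg"
ambient_screen("the weather is miserable and traffic is slow", fake_search)
```

Even this crude filter shows why the result feels anodyne but creepy: the screen only needs fragments, not understanding.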

Print can be quick

This sequence shows a common receipt from a coffee shop, and explores what happens when we treat print as a highly flexible, context-sensitive, connected surface – one that is super-quick by contrast to, say, video in broadcast.

Media Surfaces print can be quick 01

The receipt includes a mayorship notification from foursquare and three breaking headlines from the Guardian news feed. It turns the world of ticket machines, cash registers and chip-and-pin machines into a massive super-local, personalised system of print-on-demand machines. The receipt remains as insignificant and peripheral as it always has, unless you choose to read it.
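That print-on-demand receipt could be sketched as a layout function that mixes the purchase with whatever the feeds supply at print time. The feed functions below are hypothetical placeholders for foursquare and the Guardian, not real APIs:

```python
# A sketch of the context-sensitive receipt: the purchase plus a few
# lines pulled from personal and news feeds at the moment of printing.
# Shop name, feeds and layout are invented for illustration.

def render_receipt(items, feeds, width=32):
    """Lay out a narrow thermal-printer receipt, mixing the purchase
    with whatever each feed supplies right now."""
    lines = ["THE COFFEE SHOP".center(width), "-" * width]
    total = 0.0
    for name, price in items:
        total += price
        lines.append(f"{name:<{width - 6}}{price:>6.2f}")
    lines += ["-" * width, f"{'TOTAL':<{width - 6}}{total:>6.2f}", "-" * width]
    for feed in feeds:
        lines += [line[:width] for line in feed()]  # clip to paper width
    return "\n".join(lines)

# hypothetical stand-ins for the foursquare and Guardian feeds:
foursquare = lambda: ["You are now the mayor here!"]
guardian = lambda: ["Headline one", "Headline two"]
print(render_receipt([("Flat white", 2.40), ("Croissant", 1.80)],
                     [foursquare, guardian]))
```

The receipt stays as peripheral as ever; the feeds just ride along on paper that was being printed anyway.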

Computer vision

The large shop front shows a pair of sprites who lurk at the edges of the window frames. As pedestrians pass by or stand close, the pair steal colours from their clothes. The sketch assumes a camera to read passers-by and feed back their colour and position to the display.

Media Surfaces computer vision 01

Computer vision installations present interesting opportunities. Many installations demand high levels of attention or participation. These can often be witty and poetic, as shown here by Matt Jones in a point of sale around Lego.

We’ve drawn from great work from the likes of Chris O’Shea and his Hand from Above project to sketch something peripheral and ignorable, but still at scale. The installation could be played with by those having their colours stolen, but it doesn’t demand interaction. In fact I suspect it would succeed far more effectively for those viewing from afar with no agency over the system at all.
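A guess at how the colour-stealing might work, with no real computer-vision library assumed – the camera frame is just a grid of RGB tuples, sampled where a passer-by stands:

```python
# A sketch of the colour-stealing sprites: average the pixels in a
# small window of the camera frame around a detected passer-by, and
# hand that colour to the display. The frame format is invented.

def steal_colour(frame, x, y, radius=1):
    """Average the RGB pixels in a (2*radius+1)-square window
    around (x, y), clipped to the frame edges."""
    pixels = []
    for py in range(max(0, y - radius), min(len(frame), y + radius + 1)):
        row = frame[py]
        for px in range(max(0, x - radius), min(len(row), x + radius + 1)):
            pixels.append(row[px])
    n = len(pixels)
    return tuple(sum(c[i] for c in pixels) // n for i in range(3))

# a tiny 3x3 'frame' where someone in a red coat stands at (1, 1):
frame = [[(200, 10, 10)] * 3 for _ in range(3)]
steal_colour(frame, 1, 1)
```

In the real installation the hard part is the detection and tracking; the stealing itself is only ever a neighbourhood average like this.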

In contrast to a Minority Report future of aggressive messages competing for a conspicuously finite attention, these sketches show a landscape of ignorable surfaces capitalising on their context, timing and your history to quietly play and present in the corners of our lives.

Incidental Media is brought to you by Dentsu London and BERG. Beeker has written about the films here.

Thank you to Beeker Northam (Dentsu London), and Timo Arnall, Campbell Orme, Matt Brown, and Matt Jones!

Popular Science+

In December, we showed Mag+, a digital magazine concept produced with our friends at Bonnier.

Late January, Apple announced the iPad.

So today Popular Science, published by Bonnier and the largest science+tech magazine in the world, is launching Popular Science+ — the first magazine on the Mag+ platform, and you can get it on the iPad tomorrow. It’s the April 2010 issue, it’s $4.99, and you buy more issues from inside the magazine itself.

See Popular Science+ in the iTunes Store now.

Here’s Jack, speaking about the app, its background, and what we learned about art direction for magazines using Mag+.

Articles are arranged side by side. You swipe left and right to go between them. For big pictures, it’s fun to hold your finger between two pages, holding and moving to pan around.

You swipe down to read. Tap left to see the pictures, tap right to read again. These two modes of the reading experience are about browsing and drinking in the magazine, versus close reading.

Pull the drawer up with two fingers to see the table of contents and your other issues. Swipe right and left with two fingers to zip across pages to the next section. Dog-ear a page by turning down the top-right corner.

There’s a store in the magazine. When a new issue comes out, you purchase it right there.

Editorial

Working with the Popular Science team and their editorial has been wonderful, and we’ve been working together to re-imagine the form of magazines. Art direction for print is so much about composition. There are a thousand tiny tweaks that tune a page to get it to really sing. But what does layout mean when readers can make the text disappear, when the images move across one another, and the page itself changes shape as the iPad rotates?

We discovered safe areas. We found little games to play with the reader, having them assemble infographics in the act of scrolling, and making pages that span multiple panes, only revealing themselves when the reader does a double-finger swipe to zoom across them.

It helps that Popular Science has great photography, a real variety of content, and an engaged and open team.

What amazes me is that you don’t feel like you’re using a website, or even that you’re using an e-reader on a new tablet device — which, technically, is what it is. It feels like you’re reading a magazine.

Apple made the first media device you can curl up with, and I think we’ve done it, and Popular Science, justice.

From concept to production

The story, for me, is that the design work behind the Mag+ concept video was strong enough to spin up a team to produce Popular Science+ in only two months.

Not only that, but an authoring system that understands workflow. And InDesign integration so art directors are in control, not technologists. And an e-commerce back-end capable of handling business models suitable for magazines. And a new file format, “MIB,” that strikes the balance between simple enough for anyone to implement, and expressive enough to let the typography, pictures, and layout shine. And it’s set up to do it all again in 30 days. And more.

It’s all basic, sure. But it’ll grow. We’ve built in ways for it to grow.

But we’ve always said that good design is rooted not just in doing good by the material, but in understanding the opportunities in the networks of organisations and people too.

A digital magazine is great, immersive content on the screen. But behind those pixels are creative processes and commercial systems that also have to come together.

Inventing something, be it a toy or new media, always means assembling networks such as these. And design is our approach to doing it.

I’m pleased we were able to work with Popular Science and Bonnier, to get to a chance to do this, and to bring something new into the world.

Thanks!

Thank you to the BERG team for sterling work on El Morro these last two months, especially the core team who have sunk so much into this: Campbell Orme, James Darling, Lei Bramley, Nick Ludlam and Timo Arnall. Also Jack Schulze, Matt Jones, Phil Gyford, Tom Armitage, and Tom Taylor.

Thanks to the Popular Science team, Mike Haney and Sam Syed in particular, Mark Poulalion and his team from Bonnier, and of course Bonnier R&D and Sara Öhrvall, the grand assembler!

It’s a pleasure and a privilege to work with each and every one of you.
