
Blog posts tagged as 'interactions'

Lamps: a design research collaboration with Google Creative Labs, 2011

Preface

This is a blog post about a large design research project we completed in 2011 in close partnership with Google Creative Lab.

There wasn’t an opportunity for publication at the time, but it represented a large proportion of the studio’s efforts for that period – nearly everyone in the studio was involved at some point – so we’ve decided to document the work and its context here a year on.

I’m still really proud of it, and some of the end results the team produced are both thought-provoking and gorgeous.

We’ve been wanting to share it for a while.

It’s a long post covering a lot of different ideas, influences, side-projects and outputs, so I’ve broken it up into chapters… but I recommend you begin at the beginning…


Introduction

 


At the beginning of 2011 we started a wide-ranging conversation with Google Creative Lab, around near-future experiences of Google and its products.

Tom Uglow, Ben Malbon of Google Creative Lab with Matt Jones of BERG

During our discussions with them, a strong theme emerged. We were both curious about how it would feel to have Google in the world with us, rather than on a screen.

If Google wasn’t trapped behind glass, what would it do?

What would it behave like?

How would we react to it?

Supergirl, trapped behind glass

This traces back to our studio’s long preoccupation with embodied interaction, and to our explorations of computer vision and projection, technologies we’ve talked about previously under the banner of the “Robot-Readable World”.

Our project through the spring and summer of 2011 concentrated on making evidence around this – investigating computer vision and projection as ‘material’ for designing with, in partnership with Google Creative Lab.

Material Exploration

 


We find that treating ‘immaterial’ new technologies as if they were physical materials is useful in finding rules-of-thumb and exploring opportunities in their “grain”. We try as a studio to pursue this approach as much as someone trying to craft something from wood, stone, or metal.

Jack Schulze of BERG and Chris Lauritzen, then of Google Creative Lab

We looked at computer-vision and projection in a close relationship – almost as one ‘material’.

That material being a bound-together expression of the computer’s understanding of the world around it and its agency or influence in that environment.

Influences and starting points

 

One of the very early departure points for our thinking was a quote by (then-)Google’s Marissa Mayer at the Le Web conference in late 2010: “We’re trying to build a virtual mirror of the world at all times”

This quote struck a particular chord for me, reminding me greatly of the central premise of David Gelernter’s 1991 book “Mirror Worlds”.

I read “Mirror Worlds” while I was in architecture school in the 90s. Gelernter’s vision of shared social simulations based on real-world sensors, information feeds and software bots still seems incredibly prescient 20 years later.

Gelernter saw the power to easily build sophisticated, shared models of reality that everyone could see, use and improve as a potentially revolutionary technology.

What if Google’s mirror world were something out in the real world with us, that we could see, touch and play with together?

Seymour Papert – another incredibly influential computer science and education academic – also came to our minds. Not only did he maintain similar views about the importance of sharing and constructing our own models of reality, but he was also a pioneer of computer vision. In 1966 he sent the ‘Summer Vision Memo’: “Spend the summer linking a camera to a computer, and getting the computer to describe what it saw…”



Nearly fifty years on, we have Kinects in our houses, internet-connected face-tracking cameras in our pockets, and ‘getting the computer to describe (or at least react to) what it saw’ seems to be one of the most successful component tracks of the long quest for ‘artificial intelligence’.

Our thinking and discussion continued along this line, toward the cheapness and ubiquity of computer vision.

The $700 Lightbulb

 

Early on, Jack invoked the metaphor of a “$700 lightbulb”:

Lightbulbs and electric light went from a scientific curiosity to a cheap, accessible ubiquity in the late 19th and early 20th century.

What if lightbulbs were still $700?

We’d carry one around carefully in a case and screw it in when and where we needed light. They are not that expensive, of course, so we leave them screwed in wherever we want, and just flip the switch when we need light. Connected computers with eyes cost $500, and so we carry them around in our pockets.

But – what if we had lots of cheap computer vision, processing, connectivity and display all around our environments – like light bulbs?

Ubiquitous computing has, of course, been a long-held vision in academia, one which in some ways has been derailed by the popularity of the smartphone.

But smartphones are getting cheaper, Android is embedding itself in new contexts with I/O other than a touchscreen, and increasingly we keep our data in the cloud rather than in dedicated devices at the edge.

Ubiquitous computing has been seen by many in the past as a future of cheap, plentiful ‘throw-away’ I/O clients to the cloud.

It seems like we’re nearly there.

In 2003, I remember being captivated by Neil Gershenfeld’s vision of computing that you could ‘paint’ onto any surface:

“a paintable computer, a viscous medium with tiny silicon fragments that makes a pour-out computer, and if it’s not fast enough or doesn’t store enough, you put another few pounds or paint out another few square inches of computing.”

Professor Neil Gershenfeld of MIT

Updating this to the present-day, post-Web 2.0 world: if ‘it’s not fast enough or doesn’t store enough’, we request more resources from centralised, elastic compute-clouds.

“Clouds” that can see our context and our environment through sensors and computer vision, and that have a picture of us built up through our continued interactions with them, ready to deliver appropriate information on-demand.

To this we added the speculation that not only would computer vision be cheap and ubiquitous, but that excellent-quality projection would become as cheap and widespread as touchscreens in the near future.

This would mean that the cloud could act in the world with us, come out from behind the glass and relate what it sees to what we see.

In summary: computer vision, depth-sensing and projection can be combined as materials – so how can we use them to make Google services bubble through from the Mirror World into your lap?

How would that feel? How should it feel?

This is the question we took as our platform for design exploration.

“Lamps that see”

 

One of our first departure points was to fuse computer-vision and projection into one device – a lamp that saw.

Here’s a really early sketch of mine, showing a number of domestic lamps that see and understand their context, projecting and illuminating the surfaces around them with information and media in response.

We imagined that the type of lamp would inform the lamp’s behaviour – more static table lamps might be less curious or more introverted than an angle-poise, for instance.

Jack took the idea of the angle-poise lamp further, thinking about how servo-motors might allow the lamp to move around within the degrees of freedom its arm gives it on a desk: to inquire about its context with computer vision, track objects and people, and find surfaces that it can ‘speak’ onto with projected light.

Early sketches of “A lamp that sees” by Jack Schulze

Early sketches of “A lamp that sees” by Timo Arnall

Of course, in the back of our minds was the awesome potential for injecting character and playfulness into the angle-poise as an object – familiar to all from the iconic Pixar animation Luxo Jr.



And very recently, students from the University of Wellington in New Zealand created something very similar at first glance, although the projection aspect is missing here.

Alongside these sketching activities around proposed form and behaviour we started to pursue material exploration.

Sketching in Video, Code & Hardware

 


We’d been keenly following work by friends such as James George and Greg Borenstein in the space of projection and computer vision, and a number of projects in the domain emerged during the course of the project, but we wanted to understand it as ‘a material to design with’ from first principles.

Timo, Jack, Joe and Nick – with Chris Lauritzen (then of Google Creative Lab), and Elliot Woods of Kimchi & Chips – started a series of tests to investigate both the interactive and aesthetic qualities of the combination of projection and computer vision, which we started to call “Smart Light” internally.

First of all, the team looked at the different qualities of projected light on materials, and in the world.

This took the form of a series of very quick experiments, looking for different ways in which light could act in inhabited spaces and on surfaces, and interact with people and things.

In a lot of these ‘video sketches’ there was little technology beyond the projector and Photoshop – but they let us imagine, very quickly and at human scale, how a computer-vision-directed ‘smart light’ might behave, look and feel.

Here are a few example video sketches from that phase of the work:

Sketch 04 Sticky search from BERG on Vimeo.

Sketch 06: Interfaces on things from BERG on Vimeo.

One particularly compelling video sketch projected an image of a piece of media (in this case a copy of Wired magazine) back onto the media – the interaction and interference of one with the other is spellbinding at close-quarters, and we thought it could be used to great effect to direct the eye as part of an interaction.

Sketch 09: Media on media from BERG on Vimeo.

Alongside these aesthetic investigations, there were technical explorations: for instance, into using “structured light” techniques with a projector to establish a depth map of a scene…

Sketch 13: Structured light from BERG on Vimeo.
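As an aside, here’s a rough sketch in Python of how a simple binary structured-light pass works in principle. It is illustrative only, not the code used in the project, and real rigs typically use Gray codes and careful calibration: the projector displays a sequence of stripe patterns, the camera captures each one, and decoding the on/off sequence at every camera pixel recovers which projector column lit it, which is the correspondence needed to triangulate depth.

```python
# Minimal structured-light sketch (illustrative only, not the project's code).
# A projector shows log2(width) binary stripe patterns; a camera captures each.
# Decoding the per-pixel on/off sequence gives the projector column that lit
# each camera pixel -- the correspondence map needed to triangulate depth.
import numpy as np

PROJ_W, PROJ_H = 1024, 768
N_BITS = int(np.ceil(np.log2(PROJ_W)))  # 10 patterns for a 1024-wide projector

def stripe_patterns():
    """Yield one black/white pattern per bit of the projector column index."""
    cols = np.arange(PROJ_W)
    for bit in range(N_BITS - 1, -1, -1):          # coarse stripes first
        row = ((cols >> bit) & 1).astype(np.uint8) * 255
        yield np.tile(row, (PROJ_H, 1))            # PROJ_H x PROJ_W image

def decode(captures, white, black):
    """captures: list of camera frames (grayscale float arrays), one per
    pattern, in the same coarse-to-fine order. white/black: frames captured
    with the projector fully on / fully off, used to threshold each pixel.
    Returns the projector column index seen at every camera pixel."""
    threshold = (white + black) / 2.0
    column = np.zeros_like(white, dtype=np.int32)
    for frame in captures:
        bit = (frame > threshold).astype(np.int32)
        column = (column << 1) | bit
    return column  # feed into triangulation with a calibrated camera/projector
```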

Quickly, the team reached a point where more technical exploration was necessary, and built a test-rig that could be used to prototype a “Smart Light Lamp” comprising a projector, an HD webcam, a PrimeSense/Asus depth camera and bespoke software.

Elliot Woods working on early software for Lamps

At the time of the project, the Kinect SDK, now ubiquitous in computer-vision projects, was not officially available. The team plumped for the component approach over the integrated Kinect for a number of reasons, including wanting the possibility of using HD video in capture and projection.

Testing the Lamps rig from BERG on Vimeo.

Nick recalls:

Actually by that stage the OpenNI libraries were out (http://openni.org/), but the “official” Microsoft SDK wasn’t out (http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx). The OpenNI libraries were more focussed on skeletal tracking, and were difficult to get up and running.

Since we didn’t have much need for skeletal tracking in this project, we used very low-level access to the IR camera and depth sensor facilitated by various openFrameworks plugins. This approach gave us the correct correlation of 3D position, high definition colour image, and light projection to allow us to experiment with end-user applications in a unified, calibrated 3D space.
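To make concrete what a “unified, calibrated 3D space” buys you, here’s a minimal sketch, in Python/NumPy rather than the openFrameworks code the team actually wrote, and with made-up calibration numbers: treat the projector as an inverse camera, so a point seen by the depth sensor can be mapped to the projector pixel that will land light exactly on it.

```python
# Illustrative sketch (not the studio's code) of the core idea behind a
# calibrated camera+projector rig: the projector is modelled as an inverse
# pinhole camera, so any 3D point seen by the depth sensor can be mapped to
# the projector pixel that will land light on it.
import numpy as np

# Assumed/made-up calibration values for the sketch: projector intrinsics and
# its pose [R|t] relative to the depth camera (e.g. from a chessboard +
# structured-light calibration).
K_PROJ = np.array([[1400.0, 0.0, 640.0],
                   [0.0, 1400.0, 360.0],
                   [0.0,    0.0,   1.0]])
R = np.eye(3)                       # projector rotation w.r.t. depth camera
t = np.array([0.15, 0.0, 0.0])      # 15 cm baseline to the projector

def depth_pixel_to_point(u, v, z, fx, fy, cx, cy):
    """Back-project a depth-camera pixel (u, v) with depth z (metres)
    into a 3D point in the depth camera's coordinate frame."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def point_to_projector_pixel(p):
    """Map a 3D point (depth-camera frame) to projector pixel coordinates."""
    p_proj = R @ p + t              # into the projector's coordinate frame
    uvw = K_PROJ @ p_proj           # pinhole projection
    return uvw[:2] / uvw[2]         # perspective divide -> (u, v) in pixels

# Example: light up whatever is 1.2 m away at the centre of the depth image.
point = depth_pixel_to_point(320, 240, 1.2, fx=575.8, fy=575.8, cx=320, cy=240)
print(point_to_projector_pixel(point))
```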

The proto rig became a great test bed for us to start to explore high-level behaviours of Smart Light – rules for interaction, animation and – for want of a better term – ‘personality’.

Little Brain, Big Brain

 

One of our favourite things of the last few years is Sticky Light.

It’s a great illustration of how little a system needs to do, for us to ascribe personality to its behaviour.

We imagined that the Smart Light Lamp might manifest itself as a companion species in the physical world, a creature that could act as a go-between for you and the mirror-worlds of the digital.

We’ve written about digital companion species before: when our digital tools become more than just tools – acquiring their own behaviour, personality and agency.

Bit, Flynn’s digital companion from the original Tron

You might recall Bit from the original Tron movie, or the Daemons from the Philip Pullman “His Dark Materials” trilogy. Companions that are “on your side” but have abilities and senses that extend you.

We wanted the Lamp to act as a companion species for the mirror-worlds of data that we all live within, and that Google has set out to organise.

We wanted the lamp to act as a companion species that illustrated – through its behaviour – the powers of perception that Google has through computer vision, context-sensing and machine-learning.

Having a companion species that is a native of the cloud, but on your side, could make evident the vast power of such technologies in an intuitive and understandable way.

Long-running themes of the studio’s work are at play here – beautiful seams, shelf-evidence, digital companion species and BASAAP – which we tried to sum up in our Gardens and Zoos talk/blog post, which in turn was informed by the work we’d done in the studio on Lamps.

One phrase that came up again and again around this area of the lamp’s behaviour was “Big Brain, Little Brain”, i.e. the Smart Light companion would be the Little Brain, on your side, that understood you and the world immediately around you, and talked on your behalf to the Big Brain in ‘the cloud’.

This intentional division, this hopefully ‘beautiful seam’, would serve to emphasise your control over what you let the Big Brain know in return for its knowledge and perspective, and also make evident the sense (or nonsense) that the Little Brain makes of your world before it communicates that to anyone else.

One illustration we made of this is the following sketch of a ‘Text Camera’:

Text Camera from BERG on Vimeo.

Text Camera is about taking the inputs and inferences the phone makes about the world around it and turning them into a series of friendly questions that help to make clearer what it can sense and interpret.

It reports back on what it sees in text, rather than through a video. Your smartphone camera has a bunch of software to interpret the light it’s seeing around you – in order to adjust the exposure automatically. So, we look to that and see if it’s reporting ‘tungsten light’ for instance, and can infer from that whether to ask the question “Am I indoors?”.

Through the dialog we feel the seams – the capabilities and affordances of the smartphone, and start to make a model of what it can do.
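Here’s a toy sketch, in Python, of the kind of inference Text Camera plays with. The camera metadata fields (white_balance_hint and so on) are invented placeholders; real camera APIs expose similar hints under different names.

```python
# Toy sketch of the kind of inference Text Camera plays with. The metadata
# fields (white_balance_hint, exposure_time_s, iso, faces_detected) are
# invented placeholders: real devices expose similar hints through their
# camera APIs, but names and availability vary.
def questions_from_camera(meta):
    """Turn low-level camera hints into friendly, checkable questions."""
    questions = []
    if meta.get("white_balance_hint") == "tungsten":
        questions.append("Am I indoors, under electric light?")
    if meta.get("white_balance_hint") == "daylight":
        questions.append("Are we outside?")
    if meta.get("exposure_time_s", 0) > 1 / 30 and meta.get("iso", 0) > 800:
        questions.append("Is it quite dark in here?")
    if meta.get("faces_detected", 0) > 0:
        questions.append("Is someone with you?")
    return questions

print(questions_from_camera(
    {"white_balance_hint": "tungsten", "exposure_time_s": 1 / 20,
     "iso": 1600, "faces_detected": 1}))
```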

The Smart Light Companion in the Lamp could similarly create a dialog with its ‘owner’, so that the owner could start to build up a model of what its Little Brain could do, and where it had to refer to the Big Brain in the cloud to get the answers.

All of this serving to playfully, humanely build a literacy in how computer vision, context-sensing and machine learning interpret the world.

Rules for Smart Light

 


The team distilled all of the sketches, code experiments, workshop conversations and model-making into a few rules of thumb for designing with this new material – a platform for further experiments and invention we could use as we tried to imagine products and services that used Smart Light.

Reflecting our explorations, some of the rules-of-thumb are aesthetic, some are about context and behaviour, and some are about the detail of interaction.

24 Rules for smart light from BERG on Vimeo.

We wrote the ‘rules’ initially as a list of patterns that we saw as fruitful in the experiments. Our ambition was to evolve this in the form of a speculative “HIG” or Human Interface Guideline – for an imagined near-future where Smart Light is as ubiquitous as the touchscreen is now…


Smart Light HIG

  1. Projection is three-dimensional. We are used to projection turning a flat ‘screen’ into an image, but there is really a cone of light that intersects with the physical world all the way back to the projector lens. Projection is not the flatland display surfaces that we have become used to through cinema, tv and computers.
  2. Projection is additive. Using a projector we can’t help but add light to the world. Projecting black means that a physical surface is unaffected, projecting white means that an object is fully illuminated up to the brightness of the projector.
  3. Enchanted objects. Unless an object has been designed with blank spaces for projection, it should not have information projected onto it. Because augmenting objects with information is so problematic (clutter, space, text on text) objects can only be ‘spotlighted’, ‘highlighted’ or have their own image re-projected onto themselves.
  4. Light feels touchable (but it’s not). Through phones and pads we are conditioned into interacting with bright surfaces. It feels intuitive to want to touch, grab, slide and scroll projected things around. However, it is difficult to make it interactive.
  5. The new rules of depth. A lamp sees the world as a stream of images, but also as a three-dimensional space. There is no consistent surface to interact with, as there is in mouse- or touch-based systems: light hits any and all surfaces, and making them respond to ‘touch’ is hard. Finger-based interaction is very difficult to achieve with projection and computer vision; fingers are small, tracking them is technically demanding, and there is limited or no existing skeletal-recognition software for detecting hands.
  6. Smart light should be respectful. Projected light inhabits the world alongside us; it augments and affects the things we use every day. Unlike interfaces that are contained in screens, the boundaries of the lamp’s vision and projection are much more obscure. Lamps ‘look’ at the world through cameras, which means that they should be trustworthy companions.

Next, we started to create some speculative products using these rules, particularly focussed around the idea of “Enchanted Objects”.

Smart Light, Dumb Products

 


These are a set of physical products, based on digital media and services such as YouTube, Google Calendar and music streaming, that have no computation or electronics in them at all.

All of the interaction and media is served from a Smart Light Lamp that acts on the product surface to turn it from a block into an ‘enchanted object’.

Joe started with a further investigation of the aesthetic qualities of light on materials.

Projection materials from BERG on Vimeo.

This led to sketches exploring techniques of projection mapping at desktop scale. It’s something often seen at large scales, manipulating our perceptions of architectural facades with animated projected light, but we wanted to understand how it felt at the more intimate, human scale of projecting onto everyday objects.

In the final film you might notice some of the lovely effects this can create to attract attention to the surface of the object – guiding perhaps to notifications from a service in the cloud, or alerts in a UI.

Then some sketching in code: using computer vision to create optical switches that make or break a recognisable optical marker depending on movement. In a final product these markers could be invisible to the human eye but observable by computer vision. Similarly, tracking markers could provide controls for video navigation, calendar alerts and so on.

Fiducial switch from BERG on Vimeo.
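For the curious, here’s one way you might sketch an ‘optical switch’ today, using OpenCV’s ArUco fiducial markers in Python. This is an assumption on our part rather than the original code, and it needs OpenCV 4.7 or later for the ArUco detector API: a marker counts as ‘on’ while the camera can see it, so a dial or shutter that hides and reveals the marker becomes a physical control the lamp can read.

```python
# Sketch of an 'optical switch' using fiducial markers, roughly in the spirit
# of the film above but not the original code. Assumes opencv-python (or
# opencv-contrib-python) >= 4.7 for the ArUco detector API. A marker counts
# as "on" while the camera can see it; a dial or shutter that hides/reveals
# the marker becomes a physical switch the lamp can read.
import cv2

SWITCH_MARKER_ID = 7   # the marker printed on (or under) the physical control

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

capture = cv2.VideoCapture(0)
switch_on = False
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    visible = ids is not None and SWITCH_MARKER_ID in ids.flatten()
    if visible != switch_on:
        switch_on = visible
        print("switch", "ON" if switch_on else "OFF")  # trigger UI change here
    if cv2.waitKey(1) == 27:                            # Esc to quit
        break
capture.release()
```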

Joe worked with physical prototypes – first simple nets in card and then more finished models to uncover some of the challenges of form in relation to angles of projection and computer vision.

For instance in the Video object, a pulley system has to connect the dial the user operates to the marker that the Lamp sees, so that it’s not obscured from the computer vision software.

Here’s the final output from these experiments:

Dumb things, smart light from BERG on Vimeo.

This sub-project was a fascinating test of our Smart Light HIG – one which led to more questions and opportunities.

For instance, one might imagine that the physical product – as well as housing dedicated and useful controls for the service it is matched to – could act as a ‘key’ to be recognised by computer vision to allow access to the service.

What if subscriptions to digital services were sold as beautiful robot-readable objects, each carved at point-of-purchase with a wonderful individually-generated pattern to unlock access?

What happened next: Universe-B

 


From the distance of a year since we finished this work, it’s interesting to compare its outlook to that of the much-more ambitious and fully-realised Google Glass project that was unveiled this summer.

Google Glass inherits a vision of ubiquitous computing that has been pursued for decades.

It’s a technical challenge that academics and engineers in industry have so far failed to make compelling to the general populace. The Google team’s achievement in realising this vision is undoubtedly impressive. I can’t wait to try them! (hint, hint!)

It’s also a vision that is personal and, one might argue, introverted – where the Big Brain is looking at the same things as you and trying to understand them, but the results are personal, never shared with the people you are with. The result could be an incredibly powerful, but subjective overlay on the world.

In other words, the mirrorworld has a population of 1. You.

Lamps uses similar techniques of computer vision, context-sensing and machine learning, but its display is in the world: the cloud is painted onto the world. In the words of William Gibson, the mirrorworld is becoming part of our world – everted into the spaces we live in.

The mirrorworld is shared with you, and those you are with.

This brings with it advantages (collaboration, evidence) and disadvantages (privacy, physical constraints) – but perhaps consider it as a complementary alternative future… A Universe-B where Google broke out of the glass.


Postscript: the scenius of Lamps

 


No design happens in a vacuum, and culture has a way of bubbling up a lot of similar things all at the same time. We want to acknowledge that here, though this is by no means an exhaustive list. Some of these projects are precedents to our work, and some emerged during the nine months of the project or since.

Here is a selection of less-academic projects using projection and computer vision that Joe picked out from the last year or so:


Huge thanks to Tom Uglow, Sara Rowghani, Chris Lauritzen, Ben Malbon, Chris Wiggins, Robert Wong, Andy Berndt and all those we worked with at Google Creative Lab for their collaboration and support throughout the project.

Instruments of Politeness

We weren’t at SxSW, but some of our friends were – and via their twitter-exhaust this report by David Sherwin of FrogDesign from a talk by Intel’s Genevieve Bell popped up on our radar.

In her panel yesterday at South by Southwest, Genevieve Bell posed the following question: “What might we really want from our devices?” In her field research as a cultural anthropologist and Intel Fellow, she surfaced themes that might be familiar to those striving to create the next generation of interconnected devices. Adaptable, anticipatory, predictive: tick the box. However, what happens when our devices are sensitive, respectful, devout, and perhaps a bit secretive? Smart devices are “more than being context aware,” Bell said. “It’s being aware of consequences of context.”

Here’s a lovely quote from Genevieve:

“[Today’s devices] blurt out the absolute truth as they know it. A smart device [in the future] might know when NOT to blurt out the truth.”

This, in turn, reminded me of a lovely project called “Instruments of Politeness” that Steffen Fiedler did back in 2009, during a brief I helped run on the RCA Design Interactions course as part of T-Mobile’s ongoing e-Etiquette project.

These are the titular instruments – marvellous contraptions!

They’re a set of machines to fool context-aware devices and services – to enable you to tell little white lies with sensors.

For instance, cranking the handle of the machine above simulates something like a pattern of ‘walking’ in the accelerometer data of the phone, so if you told someone you were out running errands (when in fact you were lazing on the sofa) your data-trail wouldn’t catch you out…

Artificial Empathy

Last week, a series of talks on robots, AI, design and society began at London’s Royal Institution, with Alex Deschamps-Sonsino (late of Tinker and now of our friends RIG) giving a presentation on ‘Emotional Robots’, particularly the EU-funded research work of ‘LIREC‘ that she is involved with.

Alex Deschamps-Sonsino on Emotional Robots at the Royal Institution

It was a thought-provoking talk, and as a result my notebook pages are filled with reactions and thoughts to follow-up rather than a recording of what she said.

My notes from Alex D-S's 'Emotional Robots' talk at the RI

LIREC’s work is centred around an academic deconstruction of human emotional relations to each other, pets and objects – considering them as companions.

Very interesting!

These are themes dear to our hearts cf. Products Are People Too, Pullman-esque daemons and B.A.S.A.A.P.

Design principle #1

With B.A.S.A.A.P. in mind, I was particularly struck by the animal behaviour studies that LIREC members are carrying out, looking into how dogs learn and adapt as companions to their human owners, negotiating different contexts in an almost symbiotic relationship with their humans.

December 24, 2009_15-19

Alex pointed out that the dogs sometimes test their owners – taking their behaviour to the edge of transgression in order to build a model of how to behave.

13-February-2010_14.54

Adaptive potentiation – serious play! This led me off onto thoughts of Brian Sutton-Smith and both his books ‘Ambiguity of Play’ and ‘Toys as Culture’. The LIREC work made me imagine the beginnings of a future literature of how robots play to adapt and learn.

Supertoys (last all summer long) as culture!

Which led me to my question to Alex at the end of her talk – one which I formulated badly, I think, and might stumble over again in trying to write down clearly here.

In essence – dogs and domesticated animals model our emotional states, and we model theirs – to come to an understanding. There’s no direct understanding there – just simulations running in both our minds of each other, which leads to a working relationship usually.

14-February-2010_12.42

My question was whether LIREC’s approach of deconstructing and reconstructing emotions would be less successful than the ‘brute-force’ approach of simulating, in companion robots, the 17,000 or so years of domestication of wild animals.

Imagine genetic algorithms creating ‘hopeful monsters‘ that could be judged as more or less loveable and iterated upon…

Another friend, Kevin Slavin, recently gave a great talk at LIFT11 about the algorithms that surround and control our lives – algorithms that ‘we can write but can’t read’, and the complex behaviours they generate.

He gave the example of http://www.boxcar2d.com/ – that generates ‘hopeful monster’ wheeled devices that have to cross a landscape.

The little genetic algorithm that could

As Kevin says – it’s “Sometimes heartbreaking”.

Some succeed, some fail – we map personality onto them, and empathise with them when they get stuck.

I was also reminded of another favourite design-fiction of the studio – Bruce Sterling’s ‘Taklamakan’:

Pete stared at the dissected robots, a cooling mass of nerve-netting, batteries, veiny armor plates, and gelatin.
“Why do they look so crazy?”
“‘Cause they grew all by themselves. Nobody ever designed them.”
Katrinko glanced up.

Another question from the audience featured a wonderful term that I, at least, had never heard used before – “Artificial Empathy”.

Artificial Empathy, in place of Artificial Intelligence.

Artificial Empathy is at the core of B.A.S.A.A.P. – it’s what powers Kacie Kinzer’s Tweenbots, and it’s what Reeves and Nass were describing in The Media Equation to some extent, which of course brings us back to Clippy.

Clippy was referenced by Alex in her talk, and has been resurrected again as an auto-critique of current efforts to design and build agents and ‘things with behaviour’.

One thing I recalled, which I don’t think I’ve mentioned in previous discussions, was that back in 1997, when Clippy was at the height of his powers, I did something that we’re told (quite rightly, to some extent) no-one ever does: I changed the defaults.

You might not know, but there were several skins you could place on top of Clippy from his default paperclip avatar – a little cartoon Einstein, an ersatz Shakespeare… and a number of others.

I chose a dog, which promptly got named ‘Ajax’ by my friend Jane Black. I not only forgave Ajax every infraction, every interruption – but I welcomed his presence. I invited him to spend more and more time with me.

I played with him.

Sometimes we’re that easy to please.

I wonder if playing to that 17,000 years of cultural hardwiring is enough in some ways.

In the bar afterwards a few of us talked about this – and the conversation turned to ‘Big Dog’.

Big Dog doesn’t look like a dog; it looks more like a massive crossbreed of ED-209, the bottom half of a carousel horse and a Black & Decker Workmate. However, if you’ve watched the video then you probably, like most of the people in the bar, shouted at one point: “DON’T KICK BIG DOG!!!”

Big Dog’s movements and reactions – its behaviour in response to being kicked by one of its human testers (about 36 seconds into the video above) – are not expressed in a designed face, or with sad ‘Dreamworks’ eyebrows, but in pure reaction, which uncannily resembles the evasion and unsteadiness of a just-abused animal.

It’s heart-rending.

But I imagine (I don’t know) that it’s an emergent behaviour of its programming and design for other goals, e.g. reacting to and traversing irregular terrain.

Again like Boxcar2d, we do the work, we ascribe hurt and pain to something that absolutely cannot be proven to experience it – and we are changed.

So – we are the emotional computing power in these relationships – as LIREC and Alex are exploring – and perhaps we should design our robotic companions accordingly.

Or perhaps we let this new nature condition us – and we head into a messy few decades of accelerated domestication and renegotiation of what we love – and what we think loves us back.


P.S.: This post contains lots of images from our friend Matt Cottam’s wonderful “Dogs I Meet” set on Flickr, which makes me wonder about a future “Robots I Meet” set which might elicit such emotions…

Marking immaterials

Earlier in our involvement with Touch, Timo and I held a workshop with Alex Jarvis (currently at moo.com) and Mark Williams (now at Venture Three) to explore notation for RFID and the actions hidden in the readers.

One of my favourites that emerged from the day was this one.

Pay-coins.png

It shows how far we were reaching for metaphorical handles – around which to characterise the technology, relying on the verbs associated with the result of the interaction: to Pay, to Open, to Delete etc.

Physically the systems are very different and are more frequently represented by their envelope packaging, like the Oyster card. Branded systems have chosen to use characters; my favourite is the Suica Penguin.

suica.png

During the visualisation work, the cross sections in the readable volumes that emerged began to feel very strong visually. They capture an essential nature in the technology which is difficult to unearth with symbols based on metaphors.

Timo and I experimented with forms ranging from an almost typographic nature to a more strictly geometric shape.

experimental-icons.png

We settled on this most geometric version. It would be terrific to see this picked up and used as a symbol for the technology in public.

geometric-icon.png

A CC licensed pdf of the ‘geometric’ can be found in the Touch vaults.

Nearness

Last week Timo and I finished filming and editing Nearness. Earlier in the year BERG was commissioned by AHO/Touch to produce a series of explorations into designerly applications for RFID (more to come on what that means). Over the coming weeks BERG will be sharing the results of the work here and on the Touch blog.

The film Nearness explores interacting without touching. With RFID it’s proximity that matters, and actual contact isn’t necessary. Much of Timo’s work in the Touch project addresses the fictions and speculations in the technology. Here we play with the problems of invisibility and the magic of being close.

The work refers fondly to Fischli and Weiss’s film The Way Things Go and its controversial offspring, the Honda ‘Cog’ commercial. There are of course any number of awesome feats of domestic engineering on YouTube. Japanese culture has taken the form to its heart. My favourite examples are the bumpers in the kids’ science show Pythagora Switch. Here’s a clip.

Our twist is that the paired objects do not hit or knock, they touch without touching.

Olinda interface drawings

Last week, Tristan Ferne, who leads the R&D team in BBC Audio & Music Interactive, gave a talk at Radio at the Edge (written up in Radio Today). As a part of his talk he discussed progress on Olinda.

Most of the design and conceptual work for the radio is finished now. We are dealing with the remaining technicalities of bringing the radio into the world. To aid Tristan’s presentation we drew up some slides outlining how we expect the core functionality to work when the radio manifests.

Social module

Social Module sequence

This animated sequence shows how the social module is expected to work. The radio begins tuned to BBC Radio 2. A light corresponding to Matt’s radio lights up on the social module. When the lit button is pressed, the top screen reveals Matt is listening to Radio 6 Music, which is selected and the radio retunes to that station.
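A minimal sketch of that sequence in code, purely for illustration (the friends, stations and method names here are invented, not Olinda’s actual software):

```python
# Purely illustrative sketch of the social-module behaviour described above;
# the friends, stations and API are invented for the example.
class Radio:
    def __init__(self, station):
        self.station = station
        self.friends = {}                 # button index -> (name, their station)

    def update_friend(self, button, name, station):
        """A friend's radio reports what it is playing; light their button."""
        self.friends[button] = (name, station)
        print(f"button {button} lit: {name} is listening")

    def press(self, button):
        """Pressing a lit button reveals the friend's station and retunes."""
        name, station = self.friends[button]
        print(f"{name} is listening to {station}")
        self.station = station
        print(f"retuned to {station}")

radio = Radio("BBC Radio 2")
radio.update_friend(0, "Matt", "BBC 6 Music")
radio.press(0)
```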

Tuning

Tuning drawing

This detail shows how the list management will work. The radio has a dual rotary dial for tuning between the different DAB stations. The outer dial cycles through the full list of all the stations the radio has successfully scanned for. The inner dial filters the list down and cycles through the top five most listened to stations. We’ll write more on why we’ve made these choices when the radio is finished.
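Here’s an illustrative sketch of that dual-dial behaviour (invented station names and listening counts; not the radio’s actual firmware):

```python
# Illustrative sketch of the dual-dial tuning behaviour described above
# (invented station names and counts; not Olinda's actual firmware).
class Tuner:
    def __init__(self, scanned_stations, listen_counts):
        self.all_stations = scanned_stations              # full scanned list
        # inner dial: the five most-listened-to stations
        self.favourites = sorted(scanned_stations,
                                 key=lambda s: listen_counts.get(s, 0),
                                 reverse=True)[:5]
        self.outer_index = 0
        self.inner_index = 0

    def turn_outer(self, clicks):
        """Outer dial cycles through every scanned station."""
        self.outer_index = (self.outer_index + clicks) % len(self.all_stations)
        return self.all_stations[self.outer_index]

    def turn_inner(self, clicks):
        """Inner dial cycles through the top five most-listened stations."""
        self.inner_index = (self.inner_index + clicks) % len(self.favourites)
        return self.favourites[self.inner_index]

tuner = Tuner(["BBC Radio 2", "BBC 6 Music", "BBC Radio 4", "Kiss", "Absolute"],
              {"BBC 6 Music": 40, "BBC Radio 4": 25, "BBC Radio 2": 10})
print(tuner.turn_outer(1))   # next station in the full scanned list
print(tuner.turn_inner(1))   # next station among the favourites
```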

RFID icons

Earlier this year we hosted a workshop for Timo Arnall‘s Touch project. This was a continuation of the brief I set my students late last year, to design an icon or series of icons to communicate the use of RFID technology publicly. The students who took on the work wholeheartedly delivered some early results which I summarised here.

This next stage of the project involved developing the original responses to the brief into a small number of icons to be tested, by Nokia, with a pool of 25 participants to discover their responses. Eventually these icons could end up in use on RFID-enabled surfaces, such as mobile phones, gates, and tills.

Timo and I spent an intense day working with Alex Jarvis and Mark Williams. The intention for the day was to leave us with a series of images which could be used to test responses. The images needed consistency and fairly conservative limits were placed on what should be produced. Timo’s post on the workshop includes a good list of references and detailed outline of the requirements for the day.

I’m going to discuss two of the paths I was most involved with. The first is around how the imagery and icons can represent fields we imagine are present in RFID technology.

Four sketches exploring the presence of an RFID field

The following four sketches are initial ideas designed to explore how representation of fields can help imply the potential use of RFID. The images will evolve into the worked-up icons to be tested by Nokia, so the explorations are based around mobile phones.

I’m not talking about what is actually happening with the electromagnetic field induction and so forth. These explorations are about building on the idea of what might be happening and seeing what imagery can emerge to support communication.

The first sketch uses the pattern of the field to represent that information is being transferred.

Fields sketch 01

The two sketches below imply the completion of the communication by repeating the shape or symbol in the mind or face of the target. The sketch on the left uses the edge of the field (made of triangles) to indicate that data is being carried.

Fields sketch 02

I like this final of the four sketches, below, which attempts to deal with two objects exchanging an idea. It is really over complex and looks a bit illuminati, but I’d love to explore this all more and see where it leads.

Fields sketch 03

Simplifying and working-up the sketches into icons

For the purposes of our testing, these sketches were attempting too much too early so we remained focused on more abstract imagery and how that might be integrated into the icons we had developed so far. The sketch below uses the texture of the field to show the communication.

Fields sketch 04

Retaining the mingling fields, these sketches became icons. Both of the results below imply interference and the meeting of fields, but they are also burdened by seeming atomic or planet-sized, and annoyingly (but perhaps appropriately) like credit card logos. Although I really like the imagery that emerges, I’m not sure how much it is doing to help think about what is actually happening.

Fields sketch 05

Fields sketch 06

Representing purchasing via RFID, as icons

While the first path was for icons simply to represent RFID being available, the second path was specifically about the development of icons to show RFID used for making a purchase (‘purchase’ is one of the several RFID verbs from the original brief).

There is something odd about using RFID tags. They leave you feeling uncertain, and distanced from the exchange or instruction. When passing through an automated mechanical (pre-RFID) ticket barrier, or using a coin-operated machine, the time the machine takes to respond feels closely related to the mechanism required to trigger it. Because RFID is so invisible, any timing or response feels arbitrary. Turning a key in a lock actually releases the door. Waving an RFID keyfob at a reader pad sets off a hidden computational process which will eventually lead to a mechanical unlocking of the door.

Given the secretive nature of RFID, our approach to the download icons that emerged was based on the next image, originally commissioned from me by Matt for a talk a couple of years ago. It struck me as very like using an RFID-enabled phone. The phone has a secret system for pressing secret buttons that you yourself can’t push.

Hand from Phone

Many of the verbs we are examining, like purchase, download or open, communicate really well through hands. The idea of representing RFID behaviours through images of hands emerging from phones performing actions has a great deal of potential. Part of the strength of the following images comes from the familiarity of the mobile phone as an icon: it side-steps some of the problems faced in attempting to represent RFID directly.

The following sketches deal with purchase between two phones.

Purchase hands sketch

Below are the two final icons that will go for testing. There is some ambiguity about whether coins are being taken or given, and I’m pleased that we managed to get something this unusual and bizarre into the testing process.

Hands purchase 01

Hands purchase 02

Alex submitted a poster for his degree work, representing all the material for testing from the workshop:

Outcomes

The intention is to continue iterations and build upon this work once the material has been tested (along with other icons). As another direction, I’d like to take these icons and make them situated, perhaps for particular malls or particular interfaces, integrating with the physical environment and language of specific machines.

RFID Interim update

Last term during an interim crit, I saw the work my students had produced on the RFID icons brief I set some weeks ago. It was a good afternoon and we were lucky enough to have Timo Arnall from the Touch project and Younghee Jung from Nokia Japan join us and contribute to the discussion. All the students attending showed good work of a high standard, overall it was very rewarding.

I’ll write a more detailed discussion on the results of the work when the brief ends, but I suspect there may be more than I can fit into a single post, so I wanted to point at some of the work that has emerged so far.

All the work here is from Alex Jarvis and Mark Williams.

Alex began by looking at the physical act of swiping your phone or card over a reader. The symbol he developed was based on his observations of people slapping their Oyster wallets down as they pass through the gates on to the underground. Not a delicate, patient hover over the yellow disc, but a casual thud, expectant wait for the barrier to open, then a lurching acceleration through to the other side before the gates violently spasm shut.

RFID physical act 01

More developed sketches here…

RFID physical act 02

I suspect that this inverted tick will abstract really well; I like the thin line on the more developed version snapping the path of the card into 3D. It succeeds because it doesn’t worry too much about working as an instruction, and concentrates more on being a powerful cross-system icon that can be consistently recognised.

Verbs

The original brief required students to develop icons for the verbs: purchase, identify, enter (but one way), download, phone and destroy.

Purchase and destroy are the two of these verbs with the most far-reaching and least immediate consequences. The aspiration for this work is to make the interaction feel like a purchase, not a touch that triggers a purchase. This gives the interaction room to grow into the more complex ones that will be needed in the future.

This first sketch, on purchase, from Alex shows your stack of coins depleting; there’s something nice about the dark black arrow, which repeats as a feature throughout Alex’s developments.

RFID Purchase 01

Mark has also been tackling purchase; his sketches tap into currency symbols, again with a view to representing depletion. Such a blunt representation is attractive: it shouts “this will erode your currency!”

RFID Purchase 03

Mark explores some more on purchase here:

RFID Purchase 02

Purchase is really important. I can’t think of a system other than Oyster that takes your money so ambiguously. Most purchasing systems require you to enter PINs, sign things, swipe cards and so on, all really clear, unambiguous acts. All you have to do is wave at an Oyster reader and it costs you £2… maybe: the same act will open the barrier for free if you have a travelcard on there. Granted, passengers have already made a purchase to put the money on the card, but if Transport for London do want to extend their system for use as a digital wallet they will need to tackle this ambiguity.

Both Mark and Alex produced material looking at the symbols to represent destroy, for instances where swiping the reader would obliterate data on it, or render it useless. This might also serve as a warning for areas where RFID tags were prone to damage.

RFID Destroy 01

I like the pencil drawing to the top right that he didn’t take forward. I’ve adjusted the contrast over it to draw out some more detail. It’s important that he distinguished between representing the destruction of the object and the destruction of the data or contents.

Williams Destroy sketches

Mark’s sketches for destroy include the excellent mushroom cloud, but he also looks at an abstraction of data disassembly; it almost looks like the individual bits of data are floating off into oblivion. It’s not completely successful, since it also reminds me of broadcasting Wonka bars in Charlie and the Chocolate Factory and teleporting in Star Trek, but it’s nice nonetheless.

Drawing

This is difficult to show online, but Alex works with a real pen, at scale. He is seeing the material he’s developing at the same size it will be read at. Each mark he makes he is seeing and responding to as he makes it.

Jarvis Pen

He has produced some material with Illustrator, but it lacked any of the impact his drawings brought to the icons. Drawing with a pen really helps avoid the Adobe poisoning that comes from Illustrator defaults and the complexities of working out of scale with the zoom tool (you can almost smell the 1pt line widths and the 4.2333mm radius on the corners of the rounded rectangle tool). It forces him to choose every line and width, and to understand the successes and failures that come with those choices. Illustrator does so much for you it barely leaves you with any unique agency at all.

It is interesting to compare the students’ two approaches. Alex works bluntly with bold weighty lines and stubby arrows portraying actual things moving or downloading. Mark tends towards more sophisticated representations and abstractions, and mini comic strips in a single icon. Lightness of touch and branching paths of exploration are his preference.

More to come from both students and I’ll also post some of my own efforts in this area.

RFID and forced intimacy

S&W is here in Oslo with Timo Arnall’s Touch project for an RFID hacking workshop (check out that hand-drawn antenna field map). Yesterday was introductions, learning about RFID as a technology, and some preliminary explorations.

The work group met for breakfast today, and we discussed promising interactions and potential projects. One of the topics that came up was RFID tag visibility.

I know it’s obvious to state it, but RFID tags are generally hidden. To read a tag, first you have to find it with your reader. Design can make the location of the tag obvious… but wouldn’t it be interesting to embrace the invisibility constraint? Could we take advantage of the seeking behaviour that has to occur?

Consider a car showroom. Imagine no salespeople there, and no prices on the cars. When you entered the showroom, you would be given an RFID reader. There would be an RFID tag, holding the price, hidden somewhere on each car. You’d have to find the rough location of the tag to read the price.

Okay, this would be enormously annoying. But it would force you to step closer to the car, to examine the wing mirror to see if a tag was there, get up close to the paint-work on the door. What would happen?

When you get bright lights and noises in films, you feel excited whether the narrative of the film is exciting or not. Playing Project Rub on the Nintendo DS, the blowing interaction forces you to get adrenalised. Your body is tricked. Maybe getting really close to the car would be a kind of forced intimacy. You would feel better disposed to the car whether you liked it or not, simply by virtue of almost hugging it.

Okay, that’s car showrooms. What about parties for teenagers? We made up a Spin the Bottle type game. Oh, and gave it a pirate theme.

The scenario is this: everyone who comes to your house party gets a token that looks like a gold coin with an RFID tag in it (holding a unique ID). People at the party take turns with the RFID reader. They have to wear a pirate hat, and the reader looks like a buccaneer’s sword. Let’s say you have the sword. You press a button on the handle, and it chooses a random RFID tag. This is the tag you have to find, by going round to everyone at the party and sweeping them with the sword.

Now let’s say you’re a person with a coin. If you’re not too keen on the person wearing the pirate hat, you’d put the coin in your pocket, or under your collar, so it could be found quickly. But maybe if you fancy the person with the hat, you’d conceal the coin a little, to make it harder to find. Gosh. I think it could get a little bit sexy.

I like this game because it celebrates the invisibility of the RFID tags, the fact they have a short detection range, and that the range can be shortened with material. It’s a treasure hunt game, but it doesn’t matter whether you have the chosen coin or not–it can be flirty in any case. It supports social interactions (ahem) rather than displacing them.

We had other ludicrous game ideas: Croquet where the hoops had tags and the balls had readers, where the balls would speak what you had to do next. “You have taken 4 hits. Now get through hoop 2,” your ball would say. You wouldn’t need the rule book. We also considered playing penny football with RFID tags, the readers snapping onto the edges of the tables and recording the number of goals. They could show the score on LCD screens, and play cheering sounds whenever a goal was scored.

Both of these games feel as silly to me as Slapz, the electronic game that replaces the children’s game Red Hands (or “slaps”).

The forced intimacy treasure game feels just as silly, but much more fun.

Coke Happiness Factory

My favourite television ad at the moment is the Coca Cola one where the chap pushes his money into the vending machine and it triggers a sequence of magical adventures in a fantasy world, culminating in the fireworks-accompanied delivery of a cold bottle of Coke.

Coke Happiness Factory

I like to think that all vending machines look like this inside. It’s a great way to make special the purchasing act.

Aside from product-specific events, like drinking in Coke’s case, almost all our interactions with products are punctuated with moments like this: encountering, selecting, purchasing, showing off, selling. There are more. Each moment is an experience that good product design and advertising can make special, and use as a hook for brand communication. Each is a threshold at which, while the mere physical reality of the world doesn’t change, we’re taken from one life to another – perhaps from being content with our lot to feeling covetous, or from not owning something to having it. These thresholds may appear small but they carry tremendous weight and meaning for us, are important in our individual and social lives, and are opportunities for design.

Some companies understand this very well. Apple make products that are legendarily pleasurable to unbox. The laptops have a charged battery from the get-go; the iPod box opens to reveal the device like a pearl. Unboxing is a weblog celebrating precisely this experience. Tiffany use high quality packaging to protect the jewels – but also because the glamour of passing over this ownership threshold reflects back on the glamour of the jewellery. (Tiffany have also clothed models in the packaging paper, which builds up the glamourous associations.) As a very different kind of business, Amazon understands that it’s not just about selling books. It’s about being present for the customer for the duration of the life-cycle of the book, during browsing, discovering, learning about, wanting, purchasing, reviewing and finally selling on to someone else.

I think that discerning consumers – and we’re all more discerning now – delight in products when these acts are delightful, not because a product is like a cool big brother to us, or has a particular lifestyle we want to associate with.

Anyway, I like the Coke advert because it speaks directly to one of those acts, and also because it’s terribly pretty and vending machines are cool. Duncan’s TV Ad Land has more about Coke Happiness Factory and the team behind it (including a link to a large movie download), and you can watch the ad at YouTube.
