
Blog posts tagged as 'talks'

Breaking Out & Breaking In at Studio-X, NYC, Monday 30th April

Columbia University’s Studio-X NYC is hosting a fascinating evening that I’m going to be part of to close-out their “Breaking In & Breaking Out” virtual film festival focusing on the interplay of architecture with daring heists and escapes in the movies.

I’m going to speak a little bit about infrastructure, phones, watches, time and timetables – following on from this short post of mine from a while back.

But – mainly I’m going to listen – to the amazing line-up of actual serving and ex-FBI heist experts that has been assembled for the evening…

Hope to see you there.

An “Evening with BERG” at St. Bride’s, London, March 21st

On March 21st, St. Bride’s Type Library in London are hosting us for the rather-marvellously titled “An Evening with BERG”.

Timo is the headliner, but there’ll be a few of us from the studio, discussing our work and approach to it – and hopefully getting a good discussion going.

If you’d like to come along, the event page is here.

I’m rather hoping there will be bar stools and tumblers of scotch, a bit like Dave Allen used to have, while we tell tall-tales of injection-molding and machine intelligence…

Gardens and Zoos

This is a version of a talk that I gave at the “In Progress” event, staged by ‘It’s Nice That’ magazine.

It builds on some thoughts that I’ve spoken about at some other events in 2011, but I think this version put it forward in the way I’m happiest with.

Having said that, I took the name of the event literally – it’s a bit of a work-in-progress, still.

It might more properly be entitled ‘Pets & Pot-plants’ rather than ‘Gardens & Zoos’ – but the audience seemed to enjoy it, and hopefully it framed some of the things we’ve been thinking about and discussing in the studio over the last year or so, as we’ve been working on http://bergcloud.com and other projects looking at the near-future of connected products.

And – with that proviso… Here it is.

Let me introduce a few characters…

This is my frying pan. I bought it in Helsinki. It’s very good at making omelettes.

This is Sukie. She’s a pot-plant that we adopted from our friend Heather’s ‘Wayward Plants’ project, at the Radical Nature exhibit at the Barbican (where “In Progress” is!)


This is a puppy – we’ll call him ‘Bruno’.

I have no idea if that’s his name, but it’s from our friend Matt Cottam’s “Dogs I Meet” flickr set, and Matt’s dog is called Bruno – so it seemed fitting.


And finally, this is Siri – a bot.


And, I’m Matt Jones – a designer and one of the principals at BERG, a design and invention studio.


There are currently 13 of us – half-technologists, half-designers, sharing a room in East London where we invent products for ourselves and for other people – generally large technology and media companies.


This is Availabot, one of the first products that we designed – it’s a small connected product that represents your online status physically…


But I’m going to talk today about the near-future of connected products.

And it is a near-future, not far from the present.


In fact, one of our favourite quotes about the future is from William Burroughs: When you cut into the present, the future leaks out…


A place we like to ‘cut into the present’ is the Argos catalogue! Matt Webb’s talked about this before.

It’s really where you see Moore’s Law hit the high-street.

Whether it’s toys, kitchen gear or sports equipment – it’s getting hard to find consumer goods that don’t have software inside them.


This is a near-future where the things around us start to display behaviour – acquiring motive and agency as they act and react to the context around them according to the software they have inside them, and increasingly the information they get from (and publish back to) the network.

In this near-future, it’s very hard to identify the ‘U’ in ‘UI’ – that is, the User in User-Interface. It’s not so clear anymore what these things are. Tools… or something more.

Of course, I choose to illustrate this slightly-nuanced point with a video of kittens riding a Roomba that Matt Webb found, so you might not be convinced.


However, this brings us back to our new friends, the Bots.


By bot – I guess I mean a piece of software that displays a behaviour, that has motive and agency.


Let me show a clip about Siri, and how having bots in our lives might affect us [Contains Strong Language!]

Perhaps, like me – you have more sympathy for the non-human in that clip…


But how about some other visions of what it might be like to have non-human companions in our lives? For instance, the ‘daemons’ of Philip Pullman’s ‘His Dark Materials’ trilogy. They are you, but not you – able to reveal things about you and reveal things to you. Able to interact naturally with you and each other.


Creatures we’ve made that play and explore the world don’t seem that far-fetched anymore. This is a clip of work on juggling robot quadcopters by ETH Zurich.

Which brings me back to my earlier thought – that it’s hard to see where the User in User-Interfaces might be. User-Centred Design has been the accepted wisdom for decades in interaction design.

I like this quote that my friend Karsten introduced me to, by Prof Bertrand Meyer (coincidentally a professor at ETH) that might offer an alternative view…

A more fruitful stance for interaction design in this new landscape might be that offered by Actor-Network Theory?


I like this snippet from a formulation of ANT based on work by Geoff Walsham et al.

“Creating a body of allies, human and non-human…”

Which brings me back to this thing…

Which is pretty unequivocally a tool. No motive, no agency. The behaviour is that of its evident, material properties.


Domestic pets, by contrast, are chock-full of behaviour, motive, agency. We have a model of what they want, and how they behave in certain contexts – as they do of us, we think.

We’ll never know, truly of course.

They can surprise us.

That’s part of why we love them.


But what about these things?

Even though we might give them names, and have an idea of their ‘motive’ and behaviour, they have little or no direct agency. They move around by getting us to move them around, by thriving or wilting…

And – this occurred to me while doing this talk – what are houseplants for?

Let’s leave that one hanging for a while…


And come back to design – or more specifically – some of the impulses beneath it. To make things, and to make sense of things. This is one of my favourite quotes about that. I found it in an exhibition explaining the engineering design of the Sydney Opera House.

Making models to understand is what we do as we design.

And, as we design for slightly-unpredictable, non-human-centred near-futures we need to make more of them, and share them so we can play with them, spin them round, pick them apart and talk about what we want them to be – together.


I’ll just quickly mention some of the things we talk about a lot in our work. The things we think are important in the models and designs we make for connected products. The first one is legibility. That the product or service presents a readable, evident model of how it works to the world on its surface. That there is legible feedback, and you can quickly construct a theory of how it works through that feedback.


One of the least useful notions you come up against, particularly in technology companies, is the stated ambition that the use of products and services should be ‘seamless experiences’.

Matthew Chalmers has stated (after Mark Weiser, one of the founding figures of ‘ubicomp’) that we need to design “seamful systems, with beautiful seams”

Beautiful seams attract us to the legible surfaces of a thing, and allow our imagination in – so that we start to build a model in our minds (and appreciate the craft at work, the values of the thing, the values of those that made it, and how we might adapt it to our values – but that’s another topic)


Finally – this guy – who pops up a lot on whiteboards in the studio, or when we’re working with clients.

B.A.S.A.A.P. is a bit of an internal manifesto at BERG, and stands for Be As Smart As A Puppy – and it’s something I’ve written about at length before.


It stems from something robotics and AI expert Rodney Brooks said… that if we put the fifty smartest people in a room for fifty years, we’d be lucky if we made AIs as smart as a puppy.

We see this as an opportunity rather than a problem!

We’ve made it our goal to look to models of intelligence and emotional response in products and services other than emulating what we’d expect from humans.

Which is what this talk is about. Sort-of.

But before we move on, a quick example of how we express these three values in our work.

“Text Camera” is a very quick sketch of something that we think illustrates legibility, seamful-ness and BASAAP neatly.

Text Camera is about taking the inputs and inferences the phone makes about its surroundings and turning them into a series of friendly questions that help to make clearer what it can sense and interpret. It reports back on what it sees in text, rather than through a video feed.

Let me explain one of the things it can do as an example. Your smartphone camera has a bunch of software to interpret the light it’s seeing around you – in order to adjust the exposure automatically.

So, we look to that and see if it’s reporting ‘tungsten light’ for instance, and can infer from that whether to ask the question “Am I indoors?”.

Through the dialog we feel the seams – the capabilities and affordances of the smartphone, and start to make a model of what it can do.
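To make that concrete, here’s a minimal sketch in Python – invented for illustration, not the actual Text Camera code – of how camera metadata such as a ‘tungsten’ white-balance reading might be turned into those friendly questions:

```python
# Illustrative sketch only - not the real Text Camera implementation.
# Maps hypothetical camera metadata (white balance, exposure, face count)
# to the sort of friendly, legible questions Text Camera asks.

def questions_from_metadata(metadata):
    """Turn raw sensor inferences into human-readable questions."""
    questions = []

    # A 'tungsten' white balance usually means artificial, indoor lighting.
    if metadata.get("white_balance") == "tungsten":
        questions.append("Am I indoors?")

    # A long exposure suggests there isn't much light about.
    if metadata.get("exposure_time_s", 0) > 0.1:
        questions.append("Is it quite dark in here?")

    # The face detector reports how many faces it thinks it can see.
    if metadata.get("faces_detected", 0) > 0:
        questions.append("Am I looking at somebody?")

    return questions

if __name__ == "__main__":
    sample = {"white_balance": "tungsten", "exposure_time_s": 0.02, "faces_detected": 1}
    for question in questions_from_metadata(sample):
        print(question)
```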

So next, I want to talk a little about a story you might be familiar with – that of…

I hope that last line doesn’t spoil it for anyone who hasn’t seen it yet…

But – over the last year I’ve been talking to a lot of people about a short scene in the original 1977 Star Wars movie ‘A New Hope’ – where Luke and his Uncle Owen are attempting to buy some droids from the Jawas that have pulled up outside their farmstead.


I’ve become a little obsessed with this sequence – where the droids are presented like… Appliances? Livestock?

Or more troublingly, slaves?

Luke and Uncle Owen relate to them as all three – at the same time addressing them directly, aggressively and passive-aggressively. It’s such a rich mix of ways that ‘human and non-human actors’ might communicate.

Odd, and perhaps the most interesting slice of ‘science-fiction’ in what otherwise is squarely a fantasy film.

Of course Artoo and Threepio are really just…

Men in tin-suits, but our suspension of disbelief is powerful! Which brings me to the next thing we should quickly throw into the mix of the near-future…


This is the pedal of my Brompton bike. It’s also a yapping dog (to me at least).

Our brains are hard-wired to see faces – it’s part of a phenomenon called ‘Pareidolia’.

It’s something we’ve talked about before on the BERG blog, particularly in connection with Schooloscope. I started a group on flickr called “Hello Little Fella” to catalogue my pareidolic excesses (other facespotting groups are available).

This little fella is probably my favourite.

He’s a little bit ill, and has a temperature.

Anyway.

The reason for this particular digression is to point out that one of the prime materials we work with as interaction designers is human perception. We try to design things that work to take advantage of its particular capabilities and peculiarities.

I’m not sure if anyone here remembers the Apple Newton and the Palm Pilot?

The Newton was an incredible technological attainment for its time – recognising the user’s handwriting. The Palm instead forced us to learn a new type of writing (“Graffiti”).

We’re generally faster learners than our technology, as long as we are given something that can be easily approached and mastered. We’re more plastic and malleable – what we do changes our brains – so the ‘wily’ technology (and its designers) will seize upon this and use it…

All of which leaves me wondering whether we are working towards Artificial Empathy, rather than Artificial Intelligence in the things we are designing…

If you’ve seen this video of ‘Big Dog’, an all-terrain robot by Boston Dynamics – and you’re anything like me – then you flinch when its tester kicks it.

To quote from our ‘Artificial Empathy’ post:

Big Dog’s movements and reactions – its behaviour in response to being kicked by one of its human testers (about 36 seconds into the video above) – is not expressed in a designed face, or with sad ‘Dreamworks’ eyebrows, but in pure reaction – which uncannily resembles the evasion and unsteadiness of a just-abused animal.

Of course, before we get too carried away by artificial empathy, we shouldn’t forget what Big Dog is primarily designed for, and funded by…

Anyway – coming back to ‘wily’ tactics, here’s the often-referenced ‘Uncanny Valley’ diagram, showing the relationship between ever-more-realistic simulations of life, particularly humans and our ‘familiarity’ with them.

Basically, as we get ever closer to trying to create lifelike-simulations of humans, they start to creep us out.

It can perhaps be most neatly summed up as our reaction to things like the creepy, mocapped synthespians in the movie Polar Express…

The ‘wily’ tactic then would be to stay far away from the valley – aim to make technology behave with empathic qualities that aren’t human at all, and let us fill in the gaps as we do so well.

Which, brings us back to BASAAP, which as Rodney Brooks pointed out – is still really tough.

Bruno’s wild ancestors started to brute-force the problem of creating artificial empathy and a working companion-species relationship with humans through the long, complex process of domestication and selective-breeding…

…from the moment these kinds of eyes were first made towards scraps of meat held out at the edge of a campfire, somewhere between 12,000 and 30,000 years ago…

Some robot designers have opted to stay on the non-human side of the uncanny valley, notably perhaps Sony with AIBO.

Here’s an interesting study from 2003 that hints a little at what the effects of designing for ‘artificial empathy’ might be.

We’re good at holding conflicting models of things in our heads at the same time it seems. That AIBO is a technology, but that it also has ‘an inner life’.

Take a look at this blog, where an AIBO owner posts its favourite places, and laments:

“[he] almost never – well, make it never – leaves his station these days. It’s not for lack on interest – he still is in front of me at the office – but for want of preservation. You know, if he breaks a leg come a day or a year, will Sony still be there to fix him up?”

(One questioner after my talk asked: “What did the 25% of people who didn’t think AIBO was a technological gadget report it to be?” – Good question!)

Some recommendations of things to look at around this area: the work of Donna Haraway, esp. The Companion Species Manifesto.

Also, the work of Cynthia Breazeal, Heather Knight and Kacie Kinzer – and the ongoing LIREC research project that our friend Alexandra Deschamps-Sonsino is working with, which is looking to studies of canine behaviour and companionship to influence the design of bots and robots.

In science-fiction there’s a long, long list that could go here – but for now I’ll just point to the most-affecting recent thing I’ve read in the area, Ted Chiang’s novella “The Lifecycle of Software Objects” – which I took as my title for a talk partly on this subject at UX London earlier in the year.

In our own recent work I’d pick out Suwappu, a collaboration with Dentsu London as something where we’re looking to animate, literally, toys with an inner life through a computer-vision application that recognises each character and overlays dialogue and environments around them.

I wonder how this type of technology might develop hand-in-hand with storytelling to engage and delight – while leaving room for the imagination and empathy that we so easily project on things, especially when we are young.

Finally, I want to move away from the companion animal as a model, back to these things…

I said we’d come back to this! Have you ever thought about why we have pot plants? Why we have them in the corners of our lives? How did they get there? What are they up to?!?

(Seriously – I haven’t managed yet to find research or a cultural history of how pot-plants became part of our home life. There are obvious routes through farming, gardening and cooking – but what about ornamental plants? If anyone reading this wants to point me at some they’d recommend in the comments to this post, I’d be most grateful!)

Take a look at this – one of the favourite finds of the studio in 2011 – Sticky Light.

It is very beautifully simple. It displays motive and behaviour. We find it fascinating and playful. Of course, part of its charm is that it can move around of its own volition – it has agency.

Pot-plants have motives (stay alive, reproduce) and behaviour (grow towards the light, shrivel when not watered) but they don’t have much agency. They rely on us to move them into the light, to water them.

Some recent projects have looked to augment domestic plants with some agency – Botanicalls by Kati London, Kate Hartman, Rebecca Bray and Rob Faludi equips a plant not only with a network connection, but a Twitter account! Activated by sensors, it can report to you (and its followers) whether it is getting enough water. Some voice, some agency.

(I didn’t have time to mention it in the talk, but I’d also point to James Chambers’ evolution of the idea with his ‘Has Needs’ project, where an abused pot-plant not only has a network connection, but the means to advertise for a new owner on Freecycle…)

Here’s my botanical, which I chose to call Robert Plant…
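The Botanicalls pattern is simple enough to sketch: read a sensor, compare it to a threshold, say something short about the result. Here’s a rough, purely illustrative version – the sensor, thresholds and messages are all invented for the sketch, and it isn’t the real Botanicalls code, which runs on its own hardware and posts to Twitter directly:

```python
# Rough sketch of the Botanicalls pattern: read a sensor, compare to a
# threshold, say something short about it. Every name, number and message
# here is invented for illustration.

import random
import time

DRY_THRESHOLD = 300    # arbitrary units from a hypothetical moisture probe
CHECK_INTERVAL_S = 5   # kept short here; a real plant would check far less often

def read_soil_moisture():
    # Stand-in for an analogue read from a soil moisture sensor.
    return random.randint(0, 1023)

def post_status(message):
    # Stand-in for posting to a feed (Twitter, in Botanicalls' case).
    print(f"[plant says] {message}")

def keep_watch(checks=3):
    for _ in range(checks):
        if read_soil_moisture() < DRY_THRESHOLD:
            post_status("I'm thirsty - could somebody water me, please?")
        else:
            post_status("Nicely watered, thank you.")
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    keep_watch()
```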

So, much simpler systems than people or pets can find places in our lives as companions. Legible motives, limited behaviours and agency can elicit response, empathy and engagement from us.

We think this is rich territory for design as the things around us start to acquire means of context-awareness, computation and connectivity.

As we move from making inert tools – that we are unequivocally the users of – to companions, with behaviours that animate them – we wonder whether we should go straight from this…


…to this…

Namely, straight from things with predictable and legible properties and affordances, to things that try to have a peer-relationship with us – speaking with a human voice and making great technological leaps to relate to us in that way, but perhaps with the danger of entering the uncanny valley.

What if there’s an interesting space to design somewhere in-between?

This in part is the inspiration behind some of the thinking in our new platform Berg Cloud, and its first product – Little Printer.

We like to think of Little Printer as something of a ‘Cloud Companion Species’ that mediates the internet and the domestic, that speaks with your smartphone, and digests the web into delightful little chunks that it dispenses when you want.

Little Printer is the beginning of our explorations into these cloud-companions, and BERG Cloud is the means we’re creating to explore them.

Ultimately we’re interested in the potential for new forms of companion species that extend us. A favourite project for us is Natalie Jeremijenko’s “Feral Robotic Dogs” – a fantastic example of legibility, seamful-ness and BASAAP.

Natalie went to communities near reclaimed land that might still have harmful toxins present, and taught workshops where cheap (remember Argos?) robot dogs that could be bought for $30 or so were opened up and hacked to accommodate new sensors.

They were reprogrammed to seek the chemical traces associated with lingering toxins. Once released by the communities, they ‘sniff’ them out, waddling towards the highest concentrations – an immediate, tangible and legible visualisation of problem areas.

Perhaps most important was that the communities themselves were the ones taught to open the toys up, repurpose their motives and behaviour – giving them the agency over the technology and evidence they could build themselves.
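At heart, that ‘sniffing’ is a simple gradient-following loop. Here’s a toy sketch of the idea – the concentration field, sensor and movement are all stand-ins invented for illustration, nothing from the actual project:

```python
# Toy sketch of the 'sniffing' behaviour: sample a reading in a few
# directions, then step towards the strongest one. Sensor and movement
# are stand-ins, not real hardware.

import random

HEADINGS = ["north", "east", "south", "west"]

def read_sensor(position):
    # Stand-in for a chemical sensor: a made-up field peaking at (5, 5).
    x, y = position
    return 100.0 - ((x - 5) ** 2 + (y - 5) ** 2) + random.uniform(-1, 1)

def step(position, heading):
    x, y = position
    moves = {"north": (x, y + 1), "east": (x + 1, y),
             "south": (x, y - 1), "west": (x - 1, y)}
    return moves[heading]

def waddle(position, steps=20):
    for _ in range(steps):
        # Try each direction and pick the one with the strongest reading.
        best_heading = max(HEADINGS, key=lambda h: read_sensor(step(position, h)))
        position = step(position, best_heading)
        print(f"waddling {best_heading}, reading {read_sensor(position):.1f} at {position}")
    return position

if __name__ == "__main__":
    waddle((0, 0))
```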

In the coming world of bots – whether companions or not – we have to attempt to maintain this sort of open literacy. And it is partly the designer’s role to increase its legibility. Not only to beguile and create empathy – but to allow a dialogue.

As Kevin Slavin said about the world of algorithms growing around us: “We can write it but we can’t read it”.

We need to engage with the complexity and make it open up to us.

To make evident, seamful surfaces through which we can engage with puppy-smart things.

As our friend Chris Heathcote has put so well:

Thanks for inviting me, and for your attention today.


FOOTNOTE: Auger & Loizeau’s Domestic Robots.

I didn’t get the chance to reference the work of James Auger & Jimmy Loizeau in the talk, but their “Carnivorous Robots” project deserves study.

From the project website:

“For a robot to comfortably migrate into our homes, appearance is critical. We applied the concept of adaptation to move beyond the functional forms employed in laboratories and the stereotypical fictional forms often applied to robots. In effect creating a clean slate for designing robot form, then looking to the contemporary domestic landscape and the related areas of fashion and trends for inspiration. The result is that on the surface the CDER series more resemble items of contemporary furniture than traditional robots. This is intended to facilitate a seamless transition into the home through aesthetic adaptation, there are however, subtle anomalies or alien features that are intended to draw the viewer in and encourage further investigation into the object.”

And on robots performing as “Companion Species”

“In the home there are several established object categories each in some way justifying the products presence through the benefit or comfort they bring to the occupant, these include: utility; ornament; companionship; entertainment and combinations of the above, for example, pets can be entertaining and chairs can be ornamental. The simplest route for robots to enter the home would be to follow one of these existing paths but by necessity of definition, offering something above and beyond the products currently occupying those roles.”

James Auger is currently completing his PhD at the RCA on ‘Domestication of Robotics’ and I can’t wait to read it.

Some upcoming talks in November and December

Like buses it seems – none for ages, then loads all at once.

I’ll be speaking at a breakfast at NESTA discussing the “Internet Of Things” with the estimable Mr. Haque on November 22nd.

On the 25th November, there’s an incredible-sounding 2-day event staged by Intelligence Squared called the IF Conference, which I’ll be giving a short talk at. The line-up is diverse and strong – reminding me of the late lamented ETech…

After that I’ll be on a panel with various extremely clever folk discussing “Robot Futures” on December 1st at the Science Museum.

Finally (you’ll be glad to hear) I’ll be at the “In Progress” conference run by It’s Nice That, alongside Tom Uglow, Mills from UsTwo, James Bridle and other fine reprobates on December 9th in the Barbican.

All the events above that I’m participating in are in London, but Matt W. will, I think, be further afield soon – I’ll let him tell you about that…

Tomorrow’s World

We staged a small event for Internet Week Europe – a night of drinks and ten minute talks. We were totally surprised when it sold out in under ten minutes!

Tomorrow's World - Alice Taylor

Thanks to our great speakers: Alice Taylor, James Bridle, Karsten Schmidt, Fiona Romeo, Jamais Cascio, Russell Davies and Warren Ellis.

Tomorrow's World - Russell Davies

At the end of the night, I asked the packed little room at The Gopher Hole whether we should do it again – and the result was a resounding “yes” – so stay tuned in the new year!

Tomorrow's World - Fiona Romeo

Thanks to Beatrice, Kevin and all at The Gopher Hole, Penny Shaw at Internet Week Europe and most importantly everyone who came along on the night!

The Robot-Readable World

QR

I gave a talk at Glug London last week, where I discussed something that’s been on my mind at least since 2007, when I last talked about it briefly at Interesting.

It is rearing its head in our work, and in work and writings by others – so I thought I would give it another airing.

The talk at Glug London bounced through some of our work, and our collective obsession with Mary Poppins, so I’ll cut to the bit about the Robot-Readable World, and rather than try and reproduce the talk I’ll embed the images I showed that evening, but embellish and expand on what I was trying to point at.

Robot-Readable World is a pot to put things in, something that I first started putting things in back in 2007 or so.

At Interesting back then, I drew a parallel between the Apple Newton’s sophisticated, complicated hand-writing recognition and the Palm Pilot’s approach of getting humans to learn a new way to write, i.e. Graffiti.

The connection I was trying to make was that there is a deliberate design approach that makes use of the plasticity and adaptability of humans to meet computers (more than) half way.

Connecting this to computer vision and robotics I said something like:

“What if, instead of designing computers and robots that relate to what we can see, we meet them half-way – covering our environment with markers, codes and RFIDs, making a robot-readable world”

After that I ran a little session at FooCamp in 2009 called “Robot readable world (AR shouldn’t just be for humans)” which was a bit ill-defined and caught up in the early hype of augmented reality…

But the phrase and the thought has been nagging at me ever since.

I read Kevin Kelly’s “What Technology Wants” recently, and this quote popped out at me:

Three billion artificial eyes!

In zoologist Andrew Parker’s 2003 book “In the Blink of an Eye” he outlines ‘The Light Switch Theory’.

“The Cambrian explosion was triggered by the sudden evolution of vision” in simple organisms… active predation became possible with the advent of vision, and prey species found themselves under extreme pressure to adapt in ways that would make them less likely to be spotted. New habitats opened as organisms were able to see their environment for the first time, and an enormous amount of specialization occurred as species differentiated.”

In this light (no pun intended) the “Robot-Readable World” imagines the evolutionary pressure of those three billion (and growing) linked, artificial eyes on our environment.

It imagines a new aesthetic born out of that pressure.

As I wrote in “Sensor-Vernacular”

[it is an aesthetic…] Of computer-vision, of 3d-printing; of optimised, algorithmic sensor sweeps and compression artefacts. Of LIDAR and laser-speckle. Of the gaze of another nature on ours. There’s something in the kinect-hacked photography of NYC’s subways that we’ve linked to here before, that smacks of the viewpoint of that other next nature, the robot-readable world. The fascination we have with how bees see flowers, revealing animal link between senses and motives. That our environment is shared with things that see with motives we have intentionally or unintentionally programmed them with.



The things we are about to share our environment with are born themselves out of a domestication of inexpensive computation, the ‘Fractional AI’ and ‘Big Maths for trivial things’ that Matt Webb has spoken about this year (I’d recommend starting at his Do Lecture).

And, as he’s also said before – it is a plausible, purchasable near-future that can be read in the catalogues of discount retailers as well as the short stories of speculative fiction writers.

We’re in a present, after all, where a £100 point-and-shoot camera has the approximate empathic capabilities of an infant, recognising and modifying its behaviour based on facial recognition.



And where the number one toy last Christmas was a computer-vision eye that can sense depth and movement, detect skeletons, and is a direct descendant of techniques and technologies used for surveillance and monitoring.

As Matt Webb pointed out on twitter last year:



Ten years of investment in security measures funded and inspired by the ‘War On Terror’ have led us to this point, but what has been left behind by that tide is domestic, cheap and hackable.

Kinect hacking has become officially endorsed and, to my mind, the hacks are more fun than the games that have been published for it.

Greg Borenstein, who scanned me with a Kinect at FooCamp, is at the moment writing a book for O’Reilly called ‘Making Things See’.

It is a companion in some ways to Tom Igoe’s handbook to injecting behaviour into everyday things with Arduino and other hackable, programmable hardware, called “Making Things Talk”.

“Making Things See” could be the beginning of a ‘light-switch’ moment for everyday things with behaviour hacked into them. For things with fractional AI, fractional agency – to be given a fractional sense of their environment.

Again, I wrote a little bit about that in “Sensor-Vernacular”, and the above image by James George & Alexander Porter still pins that feeling for me.

The way the world is fractured from a different viewpoint, a different set of senses from a new set of sensors.

Perhaps it’s the suspicious look from the fella with the moustache that nails it.

And it’s a thought that was with me while I wrote that post that I want to pick at.

The fascination we have with how bees see flowers, revealing the animal link between senses and motives. That our environment is shared with things that see with motives we have intentionally or unintentionally programmed them with.

Which leads me to Richard Dawkins.

Richard Dawkins talks about how we have evolved to live ‘in the middle’ (http://www.ted.com/talks/richard_dawkins_on_our_queer_universe.html) and how our sensorium defines our relationship to this ‘Middle World’.

“What we see of the real world is not the unvarnished world but a model of the world, regulated and adjusted by sense data, but constructed so it’s useful for dealing with the real world.

The nature of the model depends on the kind of animal we are. A flying animal needs a different kind of model from a walking, climbing or swimming animal. A monkey’s brain must have software capable of simulating a three-dimensional world of branches and trunks. A mole’s software for constructing models of its world will be customized for underground use. A water strider’s brain doesn’t need 3D software at all, since it lives on the surface of the pond in an Edwin Abbott flatland.”

“Middle World – the range of sizes and speeds which we have evolved to feel intuitively comfortable with – is a bit like the narrow range of the electromagnetic spectrum that we see as light of various colours. We’re blind to all frequencies outside that, unless we use instruments to help us. Middle World is the narrow range of reality which we judge to be normal, as opposed to the queerness of the very small, the very large and the very fast.”

At the Glug London talk, I showed a short clip of Dawkins’ 1991 RI Christmas Lecture “The Ultraviolet Garden”. The bit we’re interested in starts about 8 minutes in – but the whole thing is great.

In that bit he talks about how flowers have evolved to become attractive to bees, hummingbirds and humans – all occupying separate sensory worlds…

Which leads me back to…



What’s evolving to become ‘attractive’ and meaningful to both robot and human eyes?

Also – as Dawkins points out

The nature of the model depends on the kind of animal we are.

That is, to say ‘robot eyes’ is like saying ‘animal eyes’ – the breadth of speciation in the fourth kingdom will lead to a huge breadth of sensory worlds to design within.

One might look for signs in the world of motion-capture special effects, where the chromakey acne and high-viz dreadlocks that transform Zoe Saldana into an alien giantess in Avatar could morph into fashion statements alongside Beyoncé’s chromasocks…

Or Takashi Murakami’s illustrative QR codes for Louis Vuitton.



That such a bluntly digital format as a QR code can be appropriated by a luxury brand like LV is notable in itself.

Since the talk at Glug London, Timo found a lovely piece of work featured by BLDGBLOG, by Diego Trujillo-Pisanty, a student on the Design Interactions course at the RCA that I sometimes teach on.

Diego’s project “With Robots” imagines a domestic scene where objects, furniture and the general environment have been modified for robot senses and affordances.

Another recent RCA project, this time from the Design Products course, looks at fashion in a robot-readable world.

Thorunn Arnadottir’s QR-code beaded dresses and sunglasses imagine a scenario where pop-stars inject payloads of their own marketing messages into the photographs taken by paparazzi via readable codes, turning the parasites into hosts.

But, such overt signalling to distinct and separate senses of human and robots is perhaps too clean-cut an approach.

Computer vision is a deep, dark specialism with strange opportunities and constraints. The signals that we design towards robots might be both simpler and more sophisticated than QR codes or other 2d barcodes.

Timo has pointed us towards Maya Lotanʼs work from Ivrea back in 2005. He neatly frames what may be the near-future of the Robot-Readable World:

Those QR ‘illustrations’ are gaining attention because they are novel. They are cheap, early and ugly computer-readable illustration, one side of an evolutionary pressure towards a robot-readable world. In the other direction, images of paintings, faces, book covers and buildings are becoming ‘known’ through the internet and huge databases. Somewhere they may meet in the middle, and we may have beautiful hybrids such as http://www.mayalotan.com/urbanseeder-thesis/inside/

In our own work with Dentsu – the Suwappu characters are being designed to be attractive and cute to humans and meaningful to computer vision.

Their bodies are being deliberately gauged to register with a computer vision application, so that they can interact with imaginary storylines and environments generated by the smartphone.

Back to Dawkins.

Living in the middle means that our limited human sensoriums and their specialised, superhuman robotic senses will overlap, combine and contrast.

Wavelengths we can’t see can be overlaid on those we can – creating messages for both of us.

SVK wasn’t created for robots to read, but it shows how UV wavelengths might be used to create an alternate hidden layer to be read by eyes that see the world in a wider range of wavelengths.

Timo and Jack call this “Antiflage” – a made-up word for something we’re just starting to play with.

It is the opposite of camouflage – the markings and shapes that attract and beguile robot eyes that see differently to us – just as Dawkins describes the strategies that flowers and plants have built up over evolutionary time to attract and beguile bees and hummingbirds – and exist in a layer of reality complementary to that which we humans sense and are beguiled by.

And I guess that’s the recurring theme here – that these layers might not be hidden from us just by dint of their encoding, but by the fact that we don’t have the senses to detect them without technological-enhancement.

I say a recurring theme as it’s at the core of the Immaterials work that Jack and Timo did with RFID – looking to bring these phenomena into our “Middle World” as materials to design with.

And while I present this as a phenomenon, and dramatise it a little into being an emergent ‘force of nature’, let’s be clear that it is a phenomenon to design for, and with. It’s something we will invent, within the frame of the cultural and technical pressures that force design to evolve.

That was the message I was trying to get across at Glug: we’re the ones making the robots, shaping their senses, and the objects and environments they relate to.

Hence we make a robot-readable world.

I closed my talk with this quote from my friend Chris Heathcote, which I thought goes to the heart of this responsibility.

There’s a whiff in the air that it’s not as far off as we might think.

The Robot-Readable World is pre-Cambrian at the moment, but perhaps in a blink of an eye it will be all around us.

This thought is a shared one – that has emerged from conversations with Matt Webb, Jack, Timo, and Nick in the studio – and Kevin Slavin (watch his recent, brilliant TED talk if you haven’t already), Noam Toran, James Auger, Ben Cerveny, Matt Biddulph, Greg Borenstein, James George, Tom Igoe, Kevin Grennan, Natalie Jeremijenko, Russell Davies, James Bridle (who will be giving a talk this October with the title ‘Robot-readable world’ and will no doubt take it to further and wilder places far more eloquently than I ever could), Tom Armitage and many others over the last few years.

If you’re tracking the Robot-Readable World too, let me know in comments here – or the hashtag #robotreadableworld.

Artificial Empathy

Last week, a series of talks on robots, AI, design and society began at London’s Royal Institution, with Alex Deschamps-Sonsino (late of Tinker and now of our friends RIG) giving a presentation on ‘Emotional Robots’, particularly the EU-funded research work of ‘LIREC’ that she is involved with.

Alex Deschamps-Sonsino on Emotional Robots at the Royal Institution

It was a thought-provoking talk, and as a result my notebook pages are filled with reactions and thoughts to follow-up rather than a recording of what she said.

My notes from Alex D-S's 'Emotional Robots' talk at the RI

LIREC’s work is centred around an academic deconstruction of human emotional relations to each other, pets and objects – considering them as companions.

Very interesting!

These are themes dear to our hearts cf. Products Are People Too, Pullman-esque daemons and B.A.S.A.A.P.

Design principle #1

With B.A.S.A.A.P. in mind, I was particularly struck by the animal behaviour studies that LIREC members are carrying out, looking into how dogs learn and adapt as companions with their human owners, and learn how to negotiate different contexts in an almost symbiotic relationship with their humans.


Alex pointed out that the dogs sometimes test their owners – taking their behaviour to the edge of transgression in order to build a model of how to behave.


Adaptive potentiation – serious play! Which led me off onto thoughts of Brian Sutton-Smith and both his books ‘The Ambiguity of Play’ and ‘Toys as Culture’. The LIREC work made me imagine the beginnings of a future literature of how robots play to adapt and learn.

Supertoys (last all summer long) as culture!

Which led me to my question to Alex at the end of her talk – which I formulated badly, I think, and might stumble again trying to write down clearly here.

In essence – dogs and domesticated animals model our emotional states, and we model theirs – to come to an understanding. There’s no direct understanding there – just simulations running in both our minds of each other, which leads to a working relationship usually.


My question was whether LIREC’s approach of deconstruction and reconstruction of emotions would be less successful than the ‘brute-force’ approach of simulating, in companion robots, the 17,000 years or so of domestication of wild animals.

Imagine genetic algorithms creating ‘hopeful monsters‘ that could be judged as more or less loveable and iterated upon…

Another friend, Kevin Slavin, recently gave a great talk at LIFT11 about the algorithms that surround and control our lives – we can write them, but we can’t ‘read’ the complex behaviours they generate.

He gave the example of http://www.boxcar2d.com/ – that generates ‘hopeful monster’ wheeled devices that have to cross a landscape.

The little genetic algorithm that could

As Kevin says – it’s “Sometimes heartbreaking”.

Some succeed, some fail – we map personality onto them, and empathise with them when they get stuck.
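For anyone who hasn’t played with it, the generate-score-mutate loop behind something like boxcar2d is easy to sketch. Here’s a toy version – the ‘car’ is just a list of numbers and the fitness function is a stand-in, nothing like the site’s real physics simulation – of how those hopeful monsters get bred:

```python
# Toy sketch of a generate-score-mutate loop, in the spirit of boxcar2d.
# The 'car' is a list of genes; fitness is a made-up stand-in for how far
# a vehicle gets across a landscape.

import random

GENOME_LENGTH = 8
POPULATION = 20
GENERATIONS = 30

def random_car():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LENGTH)]

def distance_travelled(car):
    # Stand-in fitness: rewards balanced, mid-sized 'wheels'.
    return sum(1.0 - abs(gene - 0.5) for gene in car)

def mutate(car, rate=0.2):
    # Nudge some genes a little; keep them within bounds.
    return [min(1.0, max(0.0, gene + random.gauss(0, 0.1))) if random.random() < rate else gene
            for gene in car]

def evolve():
    population = [random_car() for _ in range(POPULATION)]
    for generation in range(GENERATIONS):
        population.sort(key=distance_travelled, reverse=True)
        print(f"gen {generation:2d}: best distance {distance_travelled(population[0]):.2f}")
        # Keep the fittest quarter, fill the rest with mutated copies of them.
        survivors = population[: POPULATION // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POPULATION - len(survivors))]
    return population[0]

if __name__ == "__main__":
    evolve()
```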

I was also reminded of another favourite design-fiction of the studio – Bruce Sterling’s ‘Taklamakan’.

Pete stared at the dissected robots, a cooling mass of nerve-netting, batteries, veiny armor plates, and gelatin.
“Why do they look so crazy?”
“‘Cause they grew all by themselves. Nobody ever designed them.”
Katrinko glanced up.

Another question from the audience featured a wonderful term that I, at least, had never heard used before – “Artificial Empathy”.

Artificial Empathy, in place of Artificial Intelligence.

Artificial Empathy is at the core of B.A.S.A.A.P. – it’s what powers Kacie Kinzer’s Tweenbots, and it’s what Reeves and Nass were describing in The Media Equation to some extent, which of course brings us back to Clippy.

Clippy was referenced by Alex in her talk, and has been resurrected again as an auto-critique of current efforts to design and build agents and ‘things with behaviour’.

One thing I recalled which I don’t think I’ve mentioned in previous discussions was that back in 1997, when Clippy was at the height of his powers – I did something that we’re told (quite rightly to some extent) no-one ever does – I changed the defaults.

You might not know, but there were several skins you could swap for Clippy’s default paperclip avatar – a little cartoon Einstein, an ersatz Shakespeare… and a number of others.

I chose a dog, which promptly got named ‘Ajax’ by my friend Jane Black. I not only forgave Ajax every infraction, every interruption – but I welcomed his presence. I invited him to spend more and more time with me.

I played with him.

Sometimes we’re that easy to please.

I wonder if playing to that 17,000 years of cultural hardwiring is enough in some ways.

In the bar afterwards a few of us talked about this – and the conversation turned to ‘Big Dog’.

Big Dog doesn’t look like a dog – more like a massive crossbreed of ED-209, the bottom half of a carousel horse and a Black & Decker Workmate. However, if you’ve watched the video then you probably, like most of the people in the bar, shouted at one point – “DON’T KICK BIG DOG!!!”.

Big Dog’s movements and reactions – its behaviour in response to being kicked by one of its human testers (about 36 seconds into the video above) – is not expressed in a designed face, or with sad ‘Dreamworks’ eyebrows, but in pure reaction – which uncannily resembles the evasion and unsteadiness of a just-abused animal.

It’s heart-rending.

But, I imagine (I don’t know), it’s an emergent behaviour of its programming and design for other goals, e.g. reacting to and traversing irregular terrain.

Again like Boxcar2d, we do the work, we ascribe hurt and pain to something that absolutely cannot be proven to experience it – and we are changed.

So – we are the emotional computing power in these relationships – as LIREC and Alex are exploring – and perhaps we should design our robotic companions accordingly.

Or perhaps we let this new nature condition us – and we head into a messy few decades of accelerated domestication and renegotiation of what we love – and what we think loves us back.


P.S.: This post contains lots of images from our friend Matt Cottam’s wonderful “Dogs I Meet” set on Flickr, which makes me wonder about a future “Robots I Meet” set which might elicit such emotions…

Destination: Botworld!

Last week saw the first of a series of talks on robots, artificial-intelligence and design at London’s Royal Institution, curated by Ben Hammersley. Our friend Alex Deschamps-Sonsino presented the work of the EU-funded LIREC project in a talk called ‘Emotional Robots’.

I took a bunch of notes which were reactions rather than a recording, and my thoughts will hopefully bubble up here soon…

My notes from Alex D-S's 'Emotional Robots' talk at the RI

However, I hardly have time to collect my thoughts – because this week (Wednesday 16th) it’s m’colleague Matt Webb speaking – giving a talk entitled “Botworld: Designing for the new world of domestic A.I.”.

If the conversations we’ve had about it are any guide, it should be a corker. There are still tickets available, so hopefully we’ll see you there on Wednesday and for a bot-fuelled beer in the RI bar afterward.

Matt Webb speaking in February about the future, robots, and artificial intelligence

Ben Hammersley is curating a series of three lectures at the Royal Institution of Great Britain during February. The RI is a 200-year-old research and public lecture organisation for science. Much of Faraday’s work on electricity was done there.

One of the lectures is with me!

All three lectures are at 7pm, and they are…

  1. Uncanny & lovable: The future of emotional robots, by Alexandra Deschamps-Sonsino (also of @iotwatch on Twitter, where she tracks the emerging Internet of Things). This is on the 10th, this coming Thursday.
  2. Botworld: Designing for the new world of domestic A.I. — I’m giving this lecture! My summary is below. It’s on Wednesday 16th February.
  3. Finally, A.I. will kill us all: post-digital geopolitics, with Ben Hammersley, series curator and Editor-at-large of Wired UK magazine. Date: Thursday 24th February.

You’ll need to book if you want to come, so get to it!

My talk is going to build on a few themes I’ve been exploring recently at a couple of talks and on my personal blog.

Botworld: Designing for the new world of domestic A.I.

Back in the 1960s, we thought the 21st century was going to be about talking robots, and artificial intelligences we could chat with and play chess with like people. It didn’t happen, and we thought the artificial intelligence dream was dead.

But somehow, a different kind of future snuck up on us. One of robot vacuum cleaners, virtual pets that chat amongst themselves, and web search engines so clever that we might as well call them intelligent. So we got our robots, and the world is full of them. Not with human intelligence, but with something simpler and different. And not as colleagues, but as pets and toys.

Matt looks at life in this Botworld. We’ll encounter a zoo of beasts: telepresence robots, big maths, mirror worlds, and fractional A.I. We’ll look at signals from the future, and try to figure out where it’s going.

We’ll look at questions like: what does it mean to relate emotionally to a silicon thing that pretends to be alive? How do we deal with this shift from ‘Meccano’ to ‘The Sims’? And what are the consequences, when it’s not just our toys and gadgets that have fractional intelligence… but every product and website?

Matt digs into history and sci-fi to find lessons on how to think about and recognise Botworld, how to design for it, and how to live in it.

I’ll be going to Alex’s and Ben’s too. I hope to see you there.

Tom at Interesting North, 13th November

I’m going to be speaking at Interesting North in Sheffield on Saturday 13th November. Alongside a great lineup of speakers, I’ll be giving a talk called – at the moment – Five Things Rules Do, which I’ve summarised thus:

The thing that makes games Games isn’t joypads, or scores, or 3D graphics, or little bits of cardboard, or many-sided dice. It’s the rules and mechanics beating in their little clockwork hearts. That may be a somewhat dry reduction of thousands of years of fun, but my aim is to celebrate and explore the many things that games (and other systemic media) do with the rules at their foundation. And, on the way, perhaps change your mind about exactly what rules are for.

It’s already sold out, but if you’ve got a ticket – perhaps see you there!
