
Artificial Empathy

Last week, a series of talks on robots, AI, design and society began at London’s Royal Institution, with Alex Deschamps-Sonsino (late of Tinker and now of our friends RIG) giving a presentation on ‘Emotional Robots’, particularly the EU-funded research work of ‘LIREC’ that she is involved with.

Alex Deschamps-Sonsino on Emotional Robots at the Royal Institution

It was a thought-provoking talk, and as a result my notebook pages are filled with reactions and thoughts to follow up rather than a recording of what she said.

My notes from Alex D-S's 'Emotional Robots' talk at the RI

LIREC’s work is centred around an academic deconstruction of human emotional relations to each other, pets and objects – considering them as companions.

Very interesting!

These are themes dear to our hearts – cf. Products Are People Too, Pullman-esque daemons and B.A.S.A.A.P.

Design principle #1

With B.A.S.A.A.P. in mind, I was particularly struck by the animal behaviour studies that LIREC members are carrying out, looking into how dogs learn and adapt as companions with their human owners, and learn how to negotiate different contexts in an almost symbiotic relationship with their humans.


Alex pointed out that the dogs sometimes test their owners – taking their behaviour to the edge of transgression in order to build a model of how to behave.


Adaptive potentiation – serious play! Which led me off onto thoughts of Brian Sutton-Smith and both his books ‘Ambiguity of Play’ and ‘Toys as Culture’. The LIREC work made me imagine the beginnings of a future literature of how robots play to adapt and learn.

Supertoys (last all summer long) as culture!

Which led me to my question to Alex at the end of her talk – which I formulated badly, I think, and may stumble over again trying to write down clearly here.

In essence – dogs and domesticated animals model our emotional states, and we model theirs – to come to an understanding. There’s no direct understanding there – just simulations of each other running in both our minds, which usually leads to a working relationship.


My question was whether LIREC’s approach of deconstruction and reconstruction of emotions would be less successful than the ‘brute-force’ approach of simulating the 17,000 or so years of domestication of wild animals in companion robots.

Imagine genetic algorithms creating ‘hopeful monsters’ that could be judged as more or less loveable and iterated upon…

Another friend, Kevin Slavin, recently gave a great talk at LIFT11 about the algorithms that surround and control our lives – algorithms that ‘we can write but can’t read’, and the complex behaviours they generate.

He gave the example of http://www.boxcar2d.com/, which generates ‘hopeful monster’ wheeled devices that have to cross a landscape.

The little genetic algorithm that could

As Kevin says – it’s “Sometimes heartbreaking”.

Some succeed, some fail – we map personality and empathise with them when they get stuck.
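For what it’s worth, the loop behind something like Boxcar2d is the classic genetic-algorithm shape: score a population, keep the fittest, breed and mutate, repeat. Here’s a minimal sketch in Python of what a ‘loveable hopeful monsters’ version might look like – with the big caveat that the loveability() score below is entirely hypothetical; in a real system that judgement could only come from humans.

    import random

    # Hypothetical sketch: each 'monster' is a genome of behavioural
    # parameters (gaze time, approach speed, wobble...). The made-up
    # loveability() score below stands in for human judgements.

    GENOME_LENGTH = 8
    POPULATION_SIZE = 20
    MUTATION_RATE = 0.1

    def random_genome():
        return [random.random() for _ in range(GENOME_LENGTH)]

    def loveability(genome):
        # Placeholder fitness: in reality this would be people rating
        # the robot's behaviour, not a function anyone can write down.
        return -sum((g - 0.5) ** 2 for g in genome)

    def crossover(a, b):
        # Splice two parent genomes at a random point.
        point = random.randrange(1, GENOME_LENGTH)
        return a[:point] + b[point:]

    def mutate(genome):
        # Nudge some parameters with small Gaussian noise.
        return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
                for g in genome]

    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for generation in range(100):
        # Keep the most loveable half; breed and mutate to refill.
        population.sort(key=loveability, reverse=True)
        survivors = population[:POPULATION_SIZE // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POPULATION_SIZE - len(survivors))]
        population = survivors + children

    print("Most loveable genome so far:", population[0])

Everything interesting hides inside loveability() – which is rather the point: we would be the fitness function.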

I was also reminded of another favourite design-fiction of the studio – Bruce Sterling’s ‘Taklamakan’:

Pete stared at the dissected robots, a cooling mass of nerve-netting, batteries, veiny armor plates, and gelatin.
“Why do they look so crazy?”
“‘Cause they grew all by themselves. Nobody ever designed them.”
Katrinko glanced up.

Another question from the audience featured a wonderful term that I, at least, had never heard used before – “Artificial Empathy”.

Artificial Empathy, in place of Artificial Intelligence.

Artificial Empathy is at the core of B.A.S.A.A.P. – it’s what powers Kacie Kinzer’s Tweenbots, and it’s what Reeves and Nass were describing to some extent in The Media Equation, which of course brings us back to Clippy.

Clippy was referenced by Alex in her talk, and has been resurrected as an auto-critique of current efforts to design and build agents and ‘things with behaviour’.

One thing I recalled, which I don’t think I’ve mentioned in previous discussions, was that back in 1997, when Clippy was at the height of his powers, I did something that we’re told (quite rightly, to some extent) no-one ever does – I changed the defaults.

You might not know, but you could swap Clippy’s default paperclip avatar for several other skins – a little cartoon Einstein, an ersatz Shakespeare… and a number of others.

I chose a dog, which promptly got named ‘Ajax’ by my friend Jane Black. I not only forgave Ajax every infraction, every interruption – but I welcomed his presence. I invited him to spend more and more time with me.

I played with him.

Sometimes we’re that easy to please.

I wonder if playing to that 17,000 years of cultural hardwiring is enough in some ways.

In the bar afterwards a few of us talked about this – and the conversation turned to ‘Big Dog’.

Big Dog doesn’t look like a dog – more like a massive crossbreed of ED-209, the bottom half of a carousel horse and a Black & Decker Workmate. However, if you’ve watched the video then you probably, like most of the people in the bar, shouted at one point – “DON’T KICK BIG DOG!!!”.

Big Dog’s movements and reactions – its behaviour in response to being kicked by one of its human testers (about 36 seconds into the video above) – are not expressed in a designed face, or with sad ‘Dreamworks’ eyebrows, but in pure reaction, which uncannily resembles the evasion and unsteadiness of a just-abused animal.

It’s heart-rending.

But I imagine (I don’t know) that it’s an emergent behaviour of its programming and design for other goals, e.g. reacting to and traversing irregular terrain.

Again, as with Boxcar2d, we do the work – we ascribe hurt and pain to something that absolutely cannot be proven to experience it – and we are changed.

So – we are the emotional computing power in these relationships – as LIREC and Alex are exploring – and perhaps we should design our robotic companions accordingly.

Or perhaps we let this new nature condition us – and we head into a messy few decades of accelerated domestication and renegotiation of what we love – and what we think loves us back.


P.S.: This post contains lots of images from our friend Matt Cottam’s wonderful “Dogs I Meet” set on Flickr, which makes me wonder about a future “Robots I Meet” set that might elicit such emotions…

4 Comments and Trackbacks

  • 1. tamberg said on 18 February 2011...

    Even robots that are just remote placeholders for humans can evoke emotions, e.g. http://twitter.com/#!/tamberg/status/35020879490596864 where a coffee ordering telepresence bot asks for help in a coffee shop because it’s not tall enough to see what’s on display. Cheers, tamberg

  • 2. tamberg said on 18 February 2011...

    http://www.youtube.com/watch?v=mz4FshiMu3U (comment markup seems to disfigure Twitter links)

  • 3. Alex Tolley said on 21 February 2011...

    “My question was whether LIREC’s approach of deconstruction and reconstruction of emotions would be less successful than the ‘brute-force’ approach of simulating the 17,000 years or so domestication of wild animals in companion robots. ”

    Bear in mind that a tiny minority of animals were domesticated (cf. Jared Diamond’s “Guns, Germs and Steel”). House pets are even fewer. Almost certainly there is a complex interplay between the genetic predisposition of some animals to accept or like human company, their ability to create some communication with us, and our predisposition to some types of animals and our ability to communicate and feel comfortable with them.

    Therefore it seems to make intuitive sense that we should at least start with characteristics that appear to be found in pets we like. Even a tribble-level response (warm, furry, purring in response to stroking) would be one set of characteristics, with little need for modification.

    It seems to me that both dogs and cats have relatively few behavioral states that we, as humans, overload with meaning in response to our own states. Maybe some fairly “basic” programming and evolved stimulus-response conditioning would be enough?

  • Trackback: Bookmarks for February 18th from 08:18 to 14:26 | Sumit Paul-Choudhury 15 July 2011

    […] Artificial Empathy – Blog – BERG – 'Artificial Empathy', from @moleitau: (cc @firepile, @aeromenthe) […]
