Last week, a series of talks on robots, AI, design and society began at London’s Royal Institution, with Alex Deschamps-Sonsino (late of Tinker and now of our friends RIG) giving a presentation on ‘Emotional Robots’, particularly the EU-funded research work of ‘LIREC’ that she is involved with.
It was a thought-provoking talk, and as a result my notebook pages are filled with reactions and thoughts to follow up rather than a record of what she said.
LIREC’s work is centred around an academic deconstruction of human emotional relationships with each other, with pets and with objects – considering them all as companions.
With B.A.S.A.A.P. in mind, I was particularly struck by the animal behaviour studies that LIREC members are carrying out, looking into how dogs learn and adapt as companions to their human owners, and how they learn to negotiate different contexts in an almost symbiotic relationship with their humans.
Alex pointed out that the dogs sometimes test their owners – taking their behaviour to the edge of transgression in order to build a model of how to behave.
Adaptive potentiation – serious play! Which led me off onto thoughts of Brian Sutton-Smith and his books ‘The Ambiguity of Play’ and ‘Toys as Culture’. The LIREC work made me imagine the beginnings of a future literature on how robots play to adapt and learn.
Supertoys (last all summer long) as culture!
Which led me to my question to Alex at the end of her talk – one I think I formulated badly, and may stumble over again in writing down here.
In essence: dogs and domesticated animals model our emotional states, and we model theirs, to come to an understanding. There’s no direct understanding there – just simulations of each other running in both our minds, which usually leads to a working relationship.
My question was whether LIREC’s approach of deconstruction and reconstruction of emotions would be less successful than the ‘brute-force’ approach of simulating, in companion robots, the 17,000 or so years of domestication of wild animals.
Imagine genetic algorithms creating ‘hopeful monsters’ that could be judged as more or less loveable and iterated upon…
Another friend, Kevin Slavin, recently gave a great talk at LIFT11 about the algorithms that surround and control our lives – algorithms that ‘we can write but can’t read’, and the complex behaviours they generate.
He gave the example of http://www.boxcar2d.com/, which generates ‘hopeful monster’ wheeled devices that have to cross a landscape.
As Kevin says, it’s “sometimes heartbreaking”.
Some succeed, some fail – we map personality onto them, and empathise with them when they get stuck.
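For the curious, the loop behind this kind of evolution is surprisingly small. Here’s a minimal sketch of the select-mutate-repeat cycle that sites like Boxcar2d run – with the important caveat that the fitness function below is a hypothetical stand-in (a simple numeric target), not the real physics simulation that scores how far a car travels:

```python
import random

# Hypothetical stand-in for a physics-based score: a "creature" is a
# list of genes, and fitness rewards genomes whose genes sum to TARGET.
# (Boxcar2d instead scores the distance a generated car travels.)
TARGET = 10.0

def fitness(genes):
    return -abs(sum(genes) - TARGET)  # higher (closer to zero) is better

def mutate(genes, rate=0.1):
    # Small random perturbation of every gene.
    return [g + random.gauss(0, rate) for g in genes]

def evolve(pop_size=20, genome_len=5, generations=100):
    # Start with a population of random 'hopeful monsters'.
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]        # selection
        children = [mutate(random.choice(survivors))   # variation
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Nobody designs the winners here; they just grow. Swap the toy fitness function for “how loveable did people rate this robot?” and you have the brute-force domestication idea in miniature.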
I was also reminded of another favourite design-fiction of the studio – Bruce Sterling’s ‘Taklamakan’:
Pete stared at the dissected robots, a cooling mass of nerve-netting, batteries, veiny armor plates, and gelatin.
“Why do they look so crazy?”
“‘Cause they grew all by themselves. Nobody ever designed them.”
Katrinko glanced up.
Another question from the audience featured a wonderful term that I, at least, had never heard used before – “Artificial Empathy”.
Artificial Empathy, in place of Artificial Intelligence.
Artificial Empathy is at the core of B.A.S.A.A.P. – it’s what powers Kacie Kinzer’s Tweenbots, and it’s what Reeves and Nass were describing, to some extent, in The Media Equation – which of course brings us back to Clippy.
Clippy was referenced by Alex in her talk, and has been resurrected again as an auto-critique of current efforts to design and build agents and ‘things with behaviour’.
One thing I recalled, which I don’t think I’ve mentioned in previous discussions, is that back in 1997, when Clippy was at the height of his powers, I did something that we’re told (quite rightly, to some extent) no-one ever does: I changed the defaults.
You might not know, but there were several skins you could swap in for Clippy’s default paperclip avatar – a little cartoon Einstein, an ersatz Shakespeare, and a number of others.
I chose a dog, which promptly got named ‘Ajax’ by my friend Jane Black. I not only forgave Ajax every infraction, every interruption – but I welcomed his presence. I invited him to spend more and more time with me.
I played with him.
Sometimes we’re that easy to please.
I wonder if playing to that 17,000 years of cultural hardwiring is enough in some ways.
In the bar afterwards a few of us talked about this – and the conversation turned to ‘Big Dog’.
Big Dog doesn’t look like a dog – more like a massive crossbreed of ED-209, the bottom half of a carousel horse and a Black & Decker Workmate. However, if you’ve watched the video then you probably, like most of the people in the bar, shouted at one point: “DON’T KICK BIG DOG!!!”.
Big Dog’s movements and reactions – its behaviour in response to being kicked by one of its human testers (about 36 seconds into the video above) – are not expressed in a designed face, or with sad ‘Dreamworks’ eyebrows, but in pure reaction, which uncannily resembles the evasion and unsteadiness of a just-abused animal.
But I imagine (I don’t know) that it’s an emergent behaviour of its programming and design for other goals, e.g. reacting to and traversing irregular terrain.
Again, as with Boxcar2d, we do the work: we ascribe hurt and pain to something that absolutely cannot be proven to experience it – and we are changed.
So – we are the emotional computing power in these relationships – as LIREC and Alex are exploring – and perhaps we should design our robotic companions accordingly.
Or perhaps we let this new nature condition us – and we head into a messy few decades of accelerated domestication and renegotiation of what we love – and what we think loves us back.
P.S.: This post contains lots of images from our friend Matt Cottam’s wonderful “Dogs I Meet” set on Flickr, which makes me wonder about a future “Robots I Meet” set which might elicit similar emotions…