
Blog posts tagged as 'robots'

Gardens and Zoos

This is a version of a talk that I gave at the “In Progress” event, staged by ‘It’s Nice That’ magazine.

It builds on some thoughts that I’ve spoken about at some other events in 2011, but I think this version puts them forward in the way I’m happiest with.

Having said that, I took the name of the event literally – it’s a bit of a work-in-progress, still.

It might more properly be entitled ‘Pets & Pot-plants’ rather than ‘Gardens & Zoos’ – but the audience seemed to enjoy it, and hopefully it framed some of the things we’ve been thinking about and discussing in the studio over the last year or so, as we’ve been working on http://bergcloud.com and other projects looking at the near-future of connected products.

And – with that proviso… Here it is.

Let me introduce a few characters…

This is my frying pan. I bought it in Helsinki. It’s very good at making omelettes.

This is Sukie. She’s a pot-plant that we adopted from our friend Heather’s ‘Wayward Plants’ project, at the Radical Nature exhibit at the Barbican (where “In Progress” is!)


This is a puppy – we’ll call him ‘Bruno’.

I have no idea if that’s his name, but it’s from our friend Matt Cottam’s “Dogs I Meet” flickr set, and Matt’s dog is called Bruno – so it seemed fitting.


And finally, this is Siri – a bot.


And, I’m Matt Jones – a designer and one of the principals at BERG, a design and invention studio.


There are currently 13 of us – half-technologists, half-designers, sharing a room in East London where we invent products for ourselves and for other people – generally large technology and media companies.


This is Availabot, one of the first products that we designed – it’s a small connected product that represents your online status physically…


But I’m going to talk today about the near-future of connected products.

And it is a near-future, not far from the present.


In fact, one of our favourite quotes about the future is from William Burroughs: “When you cut into the present, the future leaks out…”


A place we like to ‘cut into the present’ is the Argos catalogue! Matt Webb’s talked about this before.

It’s really where you see Moore’s Law hit the high-street.

Whether it’s toys, kitchen gear or sports equipment – it’s getting hard to find consumer goods that don’t have software inside them.


This is a near-future where the things around us start to display behaviour – acquiring motive and agency as they act and react to the context around them, according to the software they have inside them and, increasingly, the information they get from (and publish back to) the network.

In this near-future, it’s very hard to identify the ‘U’ in ‘UI’ – that is, the User in User-Interface. It’s not so clear anymore what these things are. Tools… or something more.

Of course, I choose to illustrate this slightly-nuanced point with a video of kittens riding a Roomba that Matt Webb found, so you might not be convinced.


However, this brings us back to our new friends, the Bots.


By bot – I guess I mean a piece of software that displays a behaviour, that has motive and agency.


Let me show a clip about Siri, and how having bots in our lives might affect us [Contains Strong Language!]

Perhaps, like me – you have more sympathy for the non-human in that clip…


But how about some other visions of what it might be like to have non-human companions in our lives? For instance, the ‘daemons’ of Philip Pullman’s ‘His Dark Materials’ trilogy. They are you, but not you – able to reveal things about you and reveal things to you. Able to interact naturally with you and each other.


Creatures we’ve made that play and explore the world don’t seem that far-fetched anymore. This is a clip of work on juggling robot quadcopters by ETH Zurich.

Which brings me back to my earlier thought – that it’s hard to see where the User in User-Interfaces might be. User-Centred Design has been the accepted wisdom for decades in interaction design.

I like this quote that my friend Karsten introduced me to, by Prof Bertrand Meyer (coincidentally a professor at ETH) that might offer an alternative view…

A more fruitful stance for interaction design in this new landscape might be that offered by Actor-Network Theory?


I like this snippet from a formulation of ANT based on work by Geoff Walsham et al.

“Creating a body of allies, human and non-human…”

Which brings me back to this thing…

Which is pretty unequivocally a tool. No motive, no agency. The behaviour is that of its evident, material properties.


Domestic pets, by contrast, are chock-full of behaviour, motive, agency. We have a model of what they want, and how they behave in certain contexts – as they do of us, we think.

We’ll never truly know, of course.

They can surprise us.

That’s part of why we love them.


But what about these things?

Even though we might give them names, and have an idea of their ‘motive’ and behaviour, they have little or no direct agency. They move around by getting us to move them around, by thriving or wilting…

And – this occurred to me while doing this talk – what are houseplants for?

Let’s leave that one hanging for a while…


And come back to design – or more specifically – some of the impulses beneath it. To make things, and to make sense of things. This is one of my favourite quotes about that. I found it in an exhibition explaining the engineering design of the Sydney Opera House.

Making models to understand is what we do as we design.

And, as we design for slightly-unpredictable, non-human-centred near-futures we need to make more of them, and share them so we can play with them, spin them round, pick them apart and talk about what we want them to be – together.


I’ll just quickly mention some of the things we talk about a lot in our work. The things we think are important in the models and designs we make for connected products. The first one is legibility. That the product or service presents a readable, evident model of how it works to the world on its surface. That there is legible feedback, and you can quickly construct a theory of how it works through that feedback.


One of the least useful notions you come up against, particularly in technology companies, is the stated ambition that the use of products and services should be ‘seamless experiences’.

Matthew Chalmers has stated (after Mark Weiser, one of the founding figures of ‘ubicomp’) that we need to design “seamful systems, with beautiful seams”.

Beautiful seams attract us to the legible surfaces of a thing, and allow our imagination in – so that we start to build a model in our minds (and appreciate the craft at work, the values of the thing, the values of those that made it, and how we might adapt it to our values – but that’s another topic)


Finally – this guy – who pops up a lot on whiteboards in the studio, or when we’re working with clients.

B.A.S.A.A.P. is a bit of an internal manifesto at BERG, and stands for Be As Smart As A Puppy – and it’s something I’ve written about at length before.


It stems from something robotics and AI expert Rodney Brooks said… that if we put the fifty smartest people in a room for fifty years, we’d be lucky if we made AIs as smart as a puppy.

We see this as an opportunity rather than a problem!

We’ve made it our goal to look to other models of intelligence and emotional response in products and services, rather than emulating what we’d expect from humans.

Which is what this talk is about. Sort-of.

But before we move on, a quick example of how we express these three values in our work.

“Text Camera” is a very quick sketch of something that we think illustrates legibility, seamful-ness and BASAAP neatly.

Text Camera is about taking the inputs and inferences the phone makes about what it sees around it, and turning them into a series of friendly questions that help to make clearer what it can sense and interpret. It reports back on what it sees in text, rather than through a video feed.

Let me explain one of the things it can do as an example. Your smartphone camera has a bunch of software to interpret the light it’s seeing around you – in order to adjust the exposure automatically.

So we look at that, see if it’s reporting ‘tungsten light’ for instance, and from that can infer whether to ask the question “Am I indoors?”.
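As a concrete illustration of that kind of mapping, here’s a minimal sketch in Python – not the Text Camera app itself; the camera-hint function, its values and the thresholds are all invented for the example – turning what the auto-exposure reports into friendly questions:

```python
# A minimal sketch of the Text Camera idea, not the app itself: map whatever
# the camera's auto-exposure reports to friendly, answerable questions.
# read_camera_hints() and the thresholds are invented for illustration.

def read_camera_hints():
    # Stand-in for the real camera API: imagine it returns the white-balance
    # preset and a rough exposure value the auto-exposure has settled on.
    return {"white_balance": "tungsten", "exposure_value": 5.0}

def questions_from_hints(hints):
    questions = []
    if hints.get("white_balance") == "tungsten":
        questions.append("Am I indoors?")
    elif hints.get("white_balance") == "daylight":
        questions.append("Are we outside on a bright day?")
    if hints.get("exposure_value", 10.0) < 3.0:
        questions.append("Is it getting dark where we are?")
    return questions

if __name__ == "__main__":
    for question in questions_from_hints(read_camera_hints()):
        print(question)
```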

Through the dialog we feel the seams – the capabilities and affordances of the smartphone, and start to make a model of what it can do.

So next, I want to talk a little about a story you might be familiar with – that of…

I hope that last line doesn’t spoil it for anyone who hasn’t seen it yet…

But – over the last year I’ve been talking a lot to people about a short scene in the original 1977 Star Wars movie ‘A New Hope’ – where Luke and his Uncle Owen are attempting to buy some droids from the Jawas that have pulled up outside their farmstead.


I’ve become a little obsessed with this sequence – where the droids are presented like… Appliances? Livestock?

Or more troublingly, slaves?

Luke and Uncle Owen relate to them as all three – at the same time addressing them directly, aggressively and passive-aggressively. It’s such a rich mix of ways that ‘human and non-human actors’ might communicate.

Odd, and perhaps the most interesting slice of ‘science-fiction’ in what otherwise is squarely a fantasy film.

Of course Artoo and Threepio are really just…

Men in tin-suits, but our suspension of disbelief is powerful! Which brings me to the next thing we should quickly throw into the mix of the near-future…


This is the pedal of my Brompton bike. It’s also a yapping dog (to me at least)

Our brains are hard-wired to see faces; it’s part of a phenomenon called ‘Pareidolia’.

It’s something we’ve talked about before on the BERG blog, particularly in connection with Schooloscope. I started a group on Flickr called “Hello Little Fella” to catalogue my pareidolic excesses (other facespotting groups are available).

This little fella is probably my favourite.

He’s a little bit ill, and has a temperature.

Anyway.

The reason for this particular digression is to point out that one of the prime materials we work with as interaction designers is human perception. We try to design things that work to take advantage of its particular capabilities and peculiarities.

I’m not sure if anyone here remembers the Apple Newton and the Palm Pilot?

The Newton was an incredible technological attainment for its time – recognising the user’s handwriting. The Palm instead forced us to learn a new type of writing (“Graffiti”).

We’re generally faster learners than our technology, as long as we are given something that can be easily approached and mastered. We’re more plastic and malleable – what we do changes our brains – so the ‘wily’ technology (and its designers) will seize upon this and use it…

All of which leaves me wondering whether we are working towards Artificial Empathy, rather than Artificial Intelligence in the things we are designing…

If you’ve seen this video of ‘Big Dog’, an all-terrain robot by Boston Dynamics – and you’re anything like me – then you flinch when its tester kicks it.

To quote from our ‘Artificial Empathy’ post:

Big Dog’s movements and reactions – its behaviour in response to being kicked by one of its human testers (about 36 seconds into the video above) – is not expressed in a designed face, or with sad ‘Dreamworks’ eyebrows, but in pure reaction – which uncannily resembles the evasion and unsteadiness of a just-abused animal.

Of course, before we get too carried away by artificial empathy, we shouldn’t forget what Big Dog is primarily designed for, and funded by…

Anyway – coming back to ‘wily’ tactics, here’s the often-referenced ‘Uncanny Valley’ diagram, showing the relationship between ever-more-realistic simulations of life, particularly humans, and our ‘familiarity’ with them.

Basically, as we get ever closer to trying to create lifelike-simulations of humans, they start to creep us out.

It can perhaps be most neatly summed up as our reaction to things like the creepy, mocapped synthespians in the movie Polar Express…

The ‘wily’ tactic then would be to stay far away from the valley – aim to make technology behave with empathic qualities that aren’t human at all, and let us fill in the gaps as we do so well.

Which brings us back to BASAAP – which, as Rodney Brooks pointed out, is still really tough.

Bruno’s wild ancestors started to brute-force the problem of creating artificial empathy and a working companion-species relationship with humans through the long, complex process of domestication and selective-breeding…

…from the first time these kinds of eyes were turned towards scraps of meat held out at the edge of a campfire, somewhere between 12,000 and 30,000 years ago…

Some robot designers have opted to stay on the non-human side of the uncanny valley, notably perhaps Sony with AIBO.

Here’s an interesting study from 2003 that hints a little at what the effects of designing for ‘artificial empathy’ might be.

We’re good at holding conflicting models of things in our heads at the same time it seems. That AIBO is a technology, but that it also has ‘an inner life’.

Take a look at this blog, where an AIBO owner posts its favourite places, and laments:

“[he] almost never – well, make it never – leaves his station these days. It’s not for lack on interest – he still is in front of me at the office – but for want of preservation. You know, if he breaks a leg come a day or a year, will Sony still be there to fix him up?”

(One questioner after my talk asked: “What did the 25% of people who didn’t think AIBO was a technological gadget report it to be?” – Good question!)

Some recommendations of things to look at around this area: the work of Donna Haraway, esp. The Companion Species Manifesto.

Also, the work of Cynthia Breazeal, Heather Knight and Kacie Kinzer – and the ongoing LIREC research project that our friend Alexandra Deschamps-Sonsino is working with, which is looking to studies of canine behaviour and companionship to influence the design of bots and robots.

In science-fiction there’s a long, long list that could go here – but for now I’ll just point to the most-affecting recent thing I’ve read in the area, Ted Chiang’s novella “The Lifecycle of Software Objects” – which I took as my title for a talk partly on this subject at UX London earlier in the year.

In our own recent work I’d pick out Suwappu, a collaboration with Dentsu London as something where we’re looking to animate, literally, toys with an inner life through a computer-vision application that recognises each character and overlays dialogue and environments around them.

I wonder how this type of technology might develop hand-in-hand with storytelling to engage and delight – while leaving room for the imagination and empathy that we so easily project on things, especially when we are young.

Finally, I want to move away from the companion animal as a model, back to these things…

I said we’d come back to this! Have you ever thought about why we have pot plants? Why we have them in the corners of our lives? How did they get there? What are they up to?!?

(Seriously – I haven’t managed yet to find research or a cultural history of how pot-plants became part of our home life. There are obvious routes through farming, gardening and cooking – but what about ornamental plants? If anyone reading this wants to point me at some they’d recommend in the comments to this post, I’d be most grateful!)

Take a look at this – one of the favourite finds of the studio in 2011 – Sticky Light.

It is very beautifully simple. It displays motive and behaviour. We find it fascinating and playful. Of course, part of its charm is that it can move around of its own volition – it has agency.

Pot-plants have motives (stay alive, reproduce) and behaviour (grow towards the light, shrivel when not watered) but they don’t have much agency. They rely on us to move them into the light, to water them.

Some recent projects have looked to augment domestic plants with some agency – Botanicalls by Kati London, Kate Hartman, Rebecca Bray and Rob Faludi equips a plant not only with a network connection, but a twitter account! Activated by sensors it can report to you (and its followers) whether it is getting enough water. Some voice, some agency.

(I didn’t have time to mention it in the talk, but I’d also point to James Chambers’ evolution of the idea with his ‘Has Needs’ project, where an abused pot-plant not only has a network connection, but the means to advertise for a new owner on Freecycle…)
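As a rough sketch of the general Botanicalls-style pattern (not the project’s actual code – the sensor reading, the posting function and the threshold below are all placeholders), the loop is about as simple as connected products get:

```python
# A rough sketch of the Botanicalls-style pattern, not the project's code:
# check a soil-moisture reading against a threshold and give the plant a
# small voice. read_soil_moisture() and post_status() are placeholders for
# a real sensor and a real posting API (Twitter, or anything else).
import time

DRY_THRESHOLD = 0.25  # invented value: below this, the plant complains

def read_soil_moisture():
    # Placeholder for an analogue sensor, normalised 0.0 (bone dry) to 1.0 (soaked)
    return 0.18

def post_status(message):
    # Placeholder for posting the message somewhere its followers will see it
    print("plant says:", message)

if __name__ == "__main__":
    while True:
        if read_soil_moisture() < DRY_THRESHOLD:
            post_status("I'm thirsty - could somebody water me, please?")
        time.sleep(60 * 60)  # check once an hour
```

Swap the placeholder functions for a real moisture probe and a real posting API and you have the bones of a plant with some voice, some agency.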

Here’s my botanical, which I chose to call Robert Plant…

So, much simpler systems than people or pets can find places in our lives as companions. Legible motives, limited behaviours and agency can elicit response, empathy and engagement from us.

We think this is rich territory for design as the things around us start to acquire means of context-awareness, computation and connectivity.

As we move from making inert tools – that we are unequivocally the users of – to companions, with behaviours that animate them – we wonder whether we should go straight from this…


…to this…

Namely, straight from things with predictable and legible properties and affordances, to things that try to have a peer-relationship with us – speaking with a human voice and making great technological leaps to relate to us in that way, but perhaps with a danger of entering the uncanny valley.

What if there’s an interesting space to design somewhere in-between?

This in part is the inspiration behind some of the thinking in our new platform BERG Cloud, and its first product – Little Printer.

We like to think of Little Printer as something of a ‘Cloud Companion Species’ that mediates the internet and the domestic, that speaks with your smartphone, and digests the web into delightful little chunks that it dispenses when you want.

Little Printer is the beginning of our explorations into these cloud-companions, and BERG Cloud is the means we’re creating to explore them.

Ultimately we’re interested in the potential for new forms of companion species that extend us. A favourite project for us is Natalie Jeremijenko’s “Feral Robotic Dogs” – a fantastic example of legibility, seamful-ness and BASAAP.

Natalie went to communities near reclaimed land that might still have harmful toxins present, and taught workshops where cheap (remember Argos?) robot dogs that could be bought for $30 or so were opened up and hacked to accommodate new sensors.

They were reprogrammed to seek the chemical traces associated with lingering toxins. Once released by the communities, they ‘sniff’ them out, waddling towards the highest concentrations – an immediate, tangible and legible visualisation of problem areas.

Perhaps most important was that the communities themselves were the ones taught to open the toys up and repurpose their motives and behaviour – giving them agency over the technology, and evidence they could build for themselves.

In the coming world of bots – whether companions or not – we have to attempt to maintain this sort of open literacy. And it is partly the designer’s role to increase its legibility. Not only to beguile and create empathy – but to allow a dialogue.

As Kevin Slavin said about the world of algorithms growing around us: “We can write it but we can’t read it”.

We need to engage with the complexity and make it open up to us.

To make evident, seamful surfaces through which we can engage with puppy-smart things.

As our friend Chris Heathcote has put it so well:

Thanks for inviting me, and for your attention today.


FOOTNOTE: Auger & Loizeau’s Domestic Robots.

I didn’t get the chance to reference the work of James Auger & Jimmy Loizeau in the talk, but their “Carnivorous Robots” project deserves study.

From the project website:

“For a robot to comfortably migrate into our homes, appearance is critical. We applied the concept of adaptation to move beyond the functional forms employed in laboratories and the stereotypical fictional forms often applied to robots. In effect creating a clean slate for designing robot form, then looking to the contemporary domestic landscape and the related areas of fashion and trends for inspiration. The result is that on the surface the CDER series more resemble items of contemporary furniture than traditional robots. This is intended to facilitate a seamless transition into the home through aesthetic adaptation, there are however, subtle anomalies or alien features that are intended to draw the viewer in and encourage further investigation into the object.”

And on robots performing as “Companion Species”:

“In the home there are several established object categories each in some way justifying the products presence through the benefit or comfort they bring to the occupant, these include: utility; ornament; companionship; entertainment and combinations of the above, for example, pets can be entertaining and chairs can be ornamental. The simplest route for robots to enter the home would be to follow one of these existing paths but by necessity of definition, offering something above and beyond the products currently occupying those roles.”

James Auger is currently completing his PhD at the RCA on ‘Domestication of Robotics’ and I can’t wait to read it.

The Robot-Readable World


I gave a talk at Glug London last week, where I discussed something that’s been on my mind at least since 2007, when I last talked about it briefly at Interesting.

It is rearing its head in our work, and in work and writings by others – so I thought I would give it another airing.

The talk at Glug London bounced through some of our work, and our collective obsession with Mary Poppins, so I’ll cut to the bit about the Robot-Readable World, and rather than try and reproduce the talk I’ll embed the images I showed that evening, but embellish and expand on what I was trying to point at.

Robot-Readable World is a pot to put things in, something that I first started putting things in back in 2007 or so.

At Interesting back then, I drew a parallel between the Apple Newton’s sophisticated, complicated hand-writing recognition and the Palm Pilot’s approach of getting humans to learn a new way to write, i.e. Graffiti.

The connection I was trying to make was that there is a deliberate design approach that makes use of the plasticity and adaptability of humans to meet computers (more than) half way.

Connecting this to computer vision and robotics I said something like:

“What if, instead of designing computers and robots that relate to what we can see, we meet them half-way – covering our environment with markers, codes and RFIDs, making a robot-readable world”

After that I ran a little session at FooCamp in 2009 called “Robot readable world (AR shouldn’t just be for humans)” which was a bit ill-defined and caught up in the early hype of augmented reality…

But the phrase and the thought have been nagging at me ever since.

I read Kevin Kelly’s “What technology wants” recently, and this quote popped out at me:

Three billion artificial eyes!

In zoologist Andrew Parker’s 2003 book “In the blink of an eye” he outlines ‘The Light Switch Theory’.

“The Cambrian explosion was triggered by the sudden evolution of vision” in simple organisms… “active predation became possible with the advent of vision, and prey species found themselves under extreme pressure to adapt in ways that would make them less likely to be spotted. New habitats opened as organisms were able to see their environment for the first time, and an enormous amount of specialization occurred as species differentiated.”

In this light (no pun intended) the “Robot-Readable World” imagines the evolutionary pressure of those three billion (and growing) linked, artificial eyes on our environment.

It imagines a new aesthetic born out of that pressure.

As I wrote in “Sensor-Vernacular”

[it is an aesthetic…] Of computer-vision, of 3d-printing; of optimised, algorithmic sensor sweeps and compression artefacts. Of LIDAR and laser-speckle. Of the gaze of another nature on ours. There’s something in the kinect-hacked photography of NYC’s subways that we’ve linked to here before, that smacks of the viewpoint of that other next nature, the robot-readable world. The fascination we have with how bees see flowers, revealing the animal link between senses and motives. That our environment is shared with things that see with motives we have intentionally or unintentionally programmed them with.



The things we are about to share our environment with are born themselves out of a domestication of inexpensive computation, the ‘Fractional AI’ and ‘Big Maths for trivial things’ that Matt Webb has spoken about this year (I’d recommend starting at his Do Lecture).

And, as he’s also said before – it is a plausible, purchasable near-future that can be read in the catalogues of discount retailers as well as the short stories of speculative fiction writers.

We’re in a present, after all, where a £100 point-and-shoot camera has the approximate empathic capabilities of an infant, recognising and modifying its behaviour based on facial recognition.



And where the number one toy last Christmas was a computer-vision eye that can sense depth and movement, detect skeletons, and is a direct descendant of techniques and technologies used for surveillance and monitoring.

As Matt Webb pointed out on twitter last year:



Ten years of investment in security measures funded and inspired by the ‘War On Terror’ have led us to this point, but what has been left behind by that tide is domestic, cheap and hackable.

Kinect hacking has become officially endorsed and, to my mind, the hacks are more fun than the games that have been published for it.

Greg Borenstein, who scanned me with a Kinect at FooCamp, is at the moment writing a book for O’Reilly called ‘Making Things See’.

It is a companion in some ways to Tom Igoe’s handbook on injecting behaviour into everyday things with Arduino and other hackable, programmable hardware, “Making Things Talk”.

“Making Things See” could be the beginning of a ‘light-switch’ moment for everyday things with behaviour hacked into them. For things with fractional AI, fractional agency – to be given a fractional sense of their environment.

Again, I wrote a little bit about that in “Sensor-Vernacular”, and the above image by James George & Alexander Porter still pins that feeling for me.

The way the world is fractured from a different viewpoint, a different set of senses from a new set of sensors.

Perhaps it’s the suspicious look from the fella with the moustache that nails it.

And it’s a thought that was with me while I wrote that post that I want to pick at.

The fascination we have with how bees see flowers, revealing the animal link between senses and motives. That our environment is shared with things that see with motives we have intentionally or unintentionally programmed them with.

Which leads me to Richard Dawkins.

Richard Dawkins talks about how we have evolved to live ‘in the middle’ (http://www.ted.com/talks/richard_dawkins_on_our_queer_universe.html), and our sensorium defines our relationship to this ‘Middle World’.

“What we see of the real world is not the unvarnished world but a model of the world, regulated and adjusted by sense data, but constructed so it’s useful for dealing with the real world.

“The nature of the model depends on the kind of animal we are. A flying animal needs a different kind of model from a walking, climbing or swimming animal. A monkey’s brain must have software capable of simulating a three-dimensional world of branches and trunks. A mole’s software for constructing models of its world will be customized for underground use. A water strider’s brain doesn’t need 3D software at all, since it lives on the surface of the pond in an Edwin Abbott flatland.

“Middle World – the range of sizes and speeds which we have evolved to feel intuitively comfortable with – is a bit like the narrow range of the electromagnetic spectrum that we see as light of various colours. We’re blind to all frequencies outside that, unless we use instruments to help us. Middle World is the narrow range of reality which we judge to be normal, as opposed to the queerness of the very small, the very large and the very fast.”

At the Glug London talk, I showed a short clip of Dawkins’ 1991 RI Christmas Lecture “The Ultraviolet Garden”. The bit we’re interested in starts about 8 minutes in – but the whole thing is great.

In that bit he talks about how flowers have evolved to become attractive to bees, hummingbirds and humans – all occupying separate sensory worlds…

Which leads me back to…



What’s evolving to become ‘attractive’ and meaningful to both robot and human eyes?

Also – as Dawkins points out

The nature of the model depends on the kind of animal we are.

That is, to say ‘robot eyes’ is like saying ‘animal eyes’ – the breadth of speciation in the fourth kingdom will lead to a huge breadth of sensory worlds to design within.

One might look for signs in the world of motion-capture special effects, where the chromakey acne and high-viz dreadlocks that transform Zoe Saldana into an alien giantess in Avatar could morph into fashion statements alongside Beyoncé’s chromasocks…

Or Takashi Murakami’s illustrative QR codes for Louis Vuitton.



That such a bluntly digital format as a QR code can be appropriated by a luxury brand such as LV is notable in itself.

Since the talk at Glug London, Timo found a lovely piece of work featured on BLDGBLOG, by Diego Trujillo-Pisanty, who is a student on the Design Interactions course at the RCA that I sometimes teach on.

Diego’s project “With Robots” imagines a domestic scene where objects, furniture and the general environment have been modified for robot senses and affordances.

Another recent RCA project, this time from the Design Products course, looks at fashion in a robot-readable world.

Thorunn Arnadottir’s QR-code beaded dresses and sunglasses imagine a scenario where pop stars inject payloads of their own marketing messages into the photographs taken by paparazzi via readable codes, turning the parasites into hosts.

But such overt signalling to the distinct and separate senses of humans and robots is perhaps too clean-cut an approach.

Computer vision is a deep, dark specialism with strange opportunities and constraints. The signals that we design towards robots might be both simpler and more sophisticated than QR codes or other 2d barcodes.

Timo has pointed us towards Maya Lotan’s work from Ivrea back in 2005. He neatly frames what may be the near-future of the Robot-Readable World:

Those QR ‘illustrations’ are gaining attention because they are novel. They are cheap, early and ugly computer-readable illustration, one side of an evolutionary pressure towards a robot-readable world. In the other direction, images of paintings, faces, book covers and buildings are becoming ‘known’ through the internet and huge databases. Somewhere they may meet in the middle, and we may have beautiful hybrids such as http://www.mayalotan.com/urbanseeder-thesis/inside/

In our own work with Dentsu – the Suwappu characters are being designed to be attractive and cute to humans and meaningful to computer vision.

Their bodies are being deliberately gauged to register with a computer vision application, so that they can interact with imaginary storylines and environments generated by the smartphone.

Back to Dawkins.

Living in the middle means that our limited human sensoriums and their specialised, superhuman robotic senses will overlap, combine and contrast.

Wavelengths we can’t see can be overlaid on those we can – creating messages for both of us.

SVK wasn’t created for robots to read, but it shows how UV wavelengths might be used to create an alternate hidden layer to be read by eyes that see the world in a wider range of wavelengths.

Timo and Jack call this “Antiflage” – a made-up word for something we’re just starting to play with.

It is the opposite of camouflage – the markings and shapes that attract and beguile robot eyes that see differently to us – just as Dawkins describes the strategies that flowers and plants have built up over evolutionary time to attract and beguile bees and hummingbirds, and exist in a layer of reality complementary to that which we humans sense and are beguiled by.

And I guess that’s the recurring theme here – that these layers might not be hidden from us just by dint of their encoding, but by the fact that we don’t have the senses to detect them without technological-enhancement.

I say a recurring theme as it’s at the core of the Immaterials work that Jack and Timo did with RFID – looking to bring these phenomena into our “Middle World” as materials to design with.

And while I present this as a phenomenon, and dramatise it a little into being an emergent ‘force of nature’, let’s be clear that it is a phenomenon to design for, and with. It’s something we will invent, within the frame of the cultural and technical pressures that force design to evolve.

That was the message I was trying to get across at Glug: we’re the ones making the robots, shaping their senses, and the objects and environments they relate to.

Hence we make a robot-readable world.

I closed my talk with this quote from my friend Chris Heathcote, which I thought goes to the heart of this responsibility.

There’s a whiff in the air that it’s not as far off as we might think.

The Robot-Readable World is pre-Cambrian at the moment, but perhaps in a blink of an eye it will be all around us.

This thought is a shared one – that has emerged from conversations with Matt Webb, Jack, Timo, and Nick in the studio – and Kevin Slavin (watch his recent, brilliant TED talk if you haven’t already), Noam Toran, James Auger, Ben Cerveny, Matt Biddulph, Greg Borenstein, James George, Tom Igoe, Kevin Grennan, Natalie Jeremijenko, Russell Davies, James Bridle (who will be giving a talk this October with the title ‘Robot-readable world’ and will no doubt take it to further and wilder places far more eloquently than I ever could), Tom Armitage and many others over the last few years.

If you’re tracking the Robot-Readable World too, let me know in comments here – or the hashtag #robotreadableworld.

Sensor-Vernacular

Consider this a little bit of a call-and-response to our friends through the plasterboard, specifically James’ excellent ‘moodboard for unknown products’ on the RIG-blog (although I’m not sure I could ever get ‘frustrated with the NASA extropianism space-future’).

There are some lovely images there – I’m a sucker for the computer-vision dazzle pattern as referenced in William Gibson’s ‘Zero History’ as the ‘world’s ugliest t-shirt’.

The splinter-camo planes are incredible. I think this is my favourite that James picked out though…

Although – to me – it’s a little bit 80’s-Elton-John-video-seen-through-the-eyes-of-a-‘Cheekbone’-stylist-too-young-to-have-lived-through-certain-horrors.

I guess – like NASA imagery – it doesn’t acquire that whiff-of-nostalgia-for-a-lost-future if you don’t remember it from the first time round. For a while, anyway.

Anyway. We’ll come back to that.

The main thing is that James’ writing galvanised me to expand upon a scrawl I made during an all-day crit with the RCA Design Interactions course back in February.

‘Sensor-Vernacular’ is a current placeholder/bucket term I’ve been scrawling down for a few things.

The work that Emily Hayes, Veronica Ranner and Marguerite Humeau in RCA DI Year 2 presented all had a touch of ‘sensor-vernacular’. It’s an aesthetic born of the grain of seeing/computation.

Of computer-vision, of 3d-printing; of optimised, algorithmic sensor sweeps and compression artefacts.

Of LIDAR and laser-speckle.

Of the gaze of another nature on ours.

There’s something in the kinect-hacked photography of NYC’s subways that we’ve linked to here before, that smacks of the viewpoint of that other next nature, the robot-readable world.


Photo credit: obvious_jim

The fascination we have with how bees see flowers, revealing the animal link between senses and motives. That our environment is shared with things that see with motives we have intentionally or unintentionally programmed them with.

As Kevin Slavin puts it – the things we have written that we can no longer read.

Nick’s been playing this week with http://code.google.com/p/structured-light/, and made this quick (like, in a spare minute he had) sketch of me…

The technique has been used for some pretty lovely pieces, such as this music video for Broken Social Scene.

In particular, for me, there is something in the loop of 3d-scanning to 3d-printing to 3d-scanning to 3d-printing which fascinates.

Rapid Form by Flora Parrot

It’s the lossy-ness that reveals the grain of the material and process. A photocopy of a photocopy of a fax. But atoms. Like the 80’s fanzines, or old Wonder Stuff 7″ single cover art. Or Vaughn Oliver, David Carson.

It is – perhaps – at once a fascination with the raw possibility of a technology, and – a disinterest, in a way, in anything but the qualities of its output. Perhaps it happens when new technology becomes cheap and mundane enough to experiment with, and break – when it becomes semi-domesticated but still a little significantly-other.

When it becomes a working material not a technology.

We can look back to the 80s, again, for an early digital-analogue: what one might term ‘Video-Vernacular’.

Talking Heads’ cover art for their album “Remain In Light” remains a favourite. Its video-grain / raw-Quantel aesthetic still packs a heck of a punch.

I found this fascinating from its Wikipedia entry:

“The cover art was conceived by Weymouth and Frantz with the help of Massachusetts Institute of Technology Professor Walter Bender and his MIT Media Lab team.

Weymouth attended MIT regularly during the summer of 1980 and worked with Bender’s assistant, Scott Fisher, on the computer renditions of the ideas. The process was tortuous because computer power was limited in the early 1980s and the mainframe alone took up several rooms. Weymouth and Fisher shared a passion for masks and used the concept to experiment with the portraits. The faces were blotted out with blocks of red colour.

The final mass-produced version of Remain in Light boasted one of the first computer-designed record jackets in the history of music.”

Growing up in the 1980s, my life was saturated by Quantel.

Quantel were the company in the UK most associated with computer graphics and video effects. And even though their machines were absurdly expensive, in the few years since Weymouth and Fisher had harnessed a room full of computing to make an album cover, Moore’s law meant that a Quantel box was down to about the size of a fridge, as I remember.

Their brand name comes from ‘Quantized Television’.

Awesome.

As a kid I wanted nothing more than to play with a Quantel machine.

Every so often there would be a ‘behind-the-scenes’ feature on how telly was made, and I wanted to be the person in the dark, illuminated by screens, changing what people saw. Quantizing television and changing it before it arrived in people’s homes. Photocopying the photocopy.

Alongside that, one started to see BBC Model B graphics overlaid on video and TV. This was a machine we had in school, and even some of my posher friends had at home! It was a video-vernacular emerging from the balance point between new/novel/cheap/breakable/technology/fashion.

Kinects and Makerbots are there now. Sensor-vernacular is in the hands of fashion and technology now.

In some of the other examples James cites, one might even see ‘Sensor-Deco’ arriving…

Lo-Rez Shoe by United Nude

James certainly has an eye for it. I’m going to enjoy following his exploration of it. I hope he writes more about it, the deeper structure of it. He’ll probably do it better than I have.

Maybe my response to it is in some ways as nostalgic as my response to NASA imagery.

Maybe it’s the hauntology of moments in the 80s when the domestication of video, computing and business machinery made things new, cheap and bright to me.

But for now, let me finish with this.

There’s both a nowness and nextness to Sensor-Vernacular.

I think my attraction to it – whatever it is – is that these signals are hints that the hangover of 10 years of ‘war-on-terror’ funding into defence and surveillance technology (where, after all, the advances in computer vision and the relative cheapness of devices like the Kinect came from) might get turned into an exuberant party.

Dancing in front of the eye of a retired-surveillance machine, scanning and printing and mixing and changing. Fashion from fear. Quantizing and surprising. Imperfections and mutations amplifying through it.

Beyoncé’s bright-green chromakey socks might be the first, positive step into the real aesthetic of the early 21st century, out of the shadows of how it began.

Let’s hope so.

Friday Links – Kinects, jittergifs, and robots

Chris O’Shea’s Body Swap is a Kinect-based installation that lets two people control “paper cut-outs” of one another. Especially fun, as the video proves, with two people of very different height – and the provision of music to encourage acting and play is a nice touch.


Photo credit: obvious_jim

Another Kinect-related link: this Flickr set shows what happens when you map depth data (from a Kinect sensor) to a traditional digital camera photograph – and then pivot and distort it in three dimensions. The above image is probably my favourite, but the whole set is worth a look – if only for the way the set progresses through increasingly distorted takes on the original photographs.

3ERD is a tumblelog of jitter-gif photographs from Matt Moore. He’s using a stereoscopic compact camera (a bit like, say, the Fuji W1) to take stereoscopic images – but then turning the left and right image into a two-frame animated gif. The results are uncanny. It’s hard to comprehend that both frames were taken at the same time, however simple the idea may seem; the translation of two images separated in space into two images separated by time is a strange one to wrap your head around. A little slice of bullet-time.
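If you fancy trying the trick yourself, here’s a rough sketch using Pillow – the filenames and frame timing are just examples – that turns the left and right frames of a stereo pair into a two-frame looping GIF:

```python
# A rough sketch of the jitter-gif trick: save the left and right frames of a
# stereo pair as a two-frame looping GIF, so the parallax reads as a wobble
# in time. Uses Pillow; the filenames and timing are just examples.
from PIL import Image

left = Image.open("stereo_left.jpg")
right = Image.open("stereo_right.jpg")

left.save(
    "jitter.gif",
    save_all=True,          # write an animated GIF rather than a single frame
    append_images=[right],  # the second frame of the pair
    duration=150,           # milliseconds per frame - tune for the wobble
    loop=0,                 # loop forever
)
```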

Teriyaki blog

50 Watts’ Space Teriyaki is a wonderful collection of Japanese futurist art and imagery from the seventies and eighties. It veers between the bleak and the gynaecological; throughout, though, there’s a fascinating use of colour and form.

And finally: a robot arm, repurposed into a physical feedback system for a racing computer game. It brings a whole new meaning to “force feedback”.

A few links for your Friday

Matt Jones sent this lovely bit of musical mojo – “a collaborative music and spoken word project conceived by Darren Solomon from Science for Girls” –  to the studio a couple of weeks ago, and I immediately spent at least twenty minutes playing with it. Hypnotic.

Matt Webb found this gorgeous isometric map of Hong Kong. I’ve not yet been to Hong Kong, but looking at it from this perspective, the immense density of the city started to sink in. Look at all those high rise buildings smushed in together!

Via Alice Taylor’s round-up of Toy Fair USA we discovered Kauzbots. How great are these? You get a cuddly handcrafted robot toy and support a good cause at the same time. I think several people I know may be getting these as gifts this year.

Finally, in case you missed it yesterday, the last Discovery space shuttle mission launch:

We’ve been sending humans into space for fifty years now, and there are two main thoughts that usually occur to me whenever I reflect on the fact of space flight: 1) “WTF?! We send people into space! There are people LIVING in space on the International Space Station! Un-effing-believable!” and 2) In the 1960s people expected by now that we’d have colonised the moon and interplanetary travel would be no big deal. What happened? Why aren’t we there yet?

Artificial Empathy

Last week, a series of talks on robots, AI, design and society began at London’s Royal Institution, with Alex Deschamps-Sonsino (late of Tinker and now of our friends RIG) giving a presentation on ‘Emotional Robots’, particularly the EU-funded research work of ‘LIREC’ that she is involved with.

Alex Deschamps-Sonsino on Emotional Robots at the Royal Institution

It was a thought-provoking talk, and as a result my notebook pages are filled with reactions and thoughts to follow-up rather than a recording of what she said.

My notes from Alex D-S's 'Emotional Robots' talk at the RI

LIREC’s work is centred around an academic deconstruction of human emotional relations to each other, pets and objects – considering them as companions.

Very interesting!

These are themes dear to our hearts cf. Products Are People Too, Pullman-esque daemons and B.A.S.A.A.P.

Design principle #1

With B.A.S.A.A.P. in mind, I was particularly struck by the animal behaviour studies that LIREC members are carrying out, looking into how dogs learn and adapt as companions with their human owners, and learn how to negotiate different contexts in an almost symbiotic relationship with their humans.

December 24, 2009_15-19

Alex pointed out that the dogs sometimes test their owners – taking their behaviour to the edge of transgression in order to build a model of how to behave.

13-February-2010_14.54

Adaptive potentiation – serious play! Which led me off onto thoughts of Brian Sutton-Smith and both his books ‘The Ambiguity of Play’ and ‘Toys as Culture’. The LIREC work made me imagine the beginnings of a future literature of how robots play to adapt and learn.

Supertoys (last all summer long) as culture!

Which led me to my question to Alex at the end of her talk – which I formulated badly I think, and might stumble again here to write down clearly.

In essence – dogs and domesticated animals model our emotional states, and we model theirs – to come to an understanding. There’s no direct understanding there – just simulations running in both our minds of each other, which leads to a working relationship usually.

14-February-2010_12.42

My question was whether LIREC’s approach of deconstruction and reconstruction of emotions would be less successful than the ‘brute-force’ approach of simulating the 17,000 years or so of domestication of wild animals in companion robots.

Imagine genetic algorithms creating ‘hopeful monsters’ that could be judged as more or less loveable and iterated upon…

Another friend, Kevin Slavin recently gave a great talk at LIFT11, about the algorithms that surround and control our lives – that ‘we can write but can’t read’ the complex behaviours they generate.

He gave the example of http://www.boxcar2d.com/ – that generates ‘hopeful monster’ wheeled devices that have to cross a landscape.

The little genetic algorithm that could

As Kevin says – it’s “Sometimes heartbreaking”.

Some succeed, some fail – we map personality and empathise with them when they get stuck.
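For anyone who hasn’t played with boxcar2d, the loop underneath it is roughly this – a generic, hedged sketch in Python rather than its actual code – random ‘hopeful monster’ designs get scored, the fittest survive, and mutated copies fill the next generation:

```python
# A generic, hedged sketch of the evolutionary loop behind things like
# boxcar2d - not its actual code. Random 'hopeful monster' designs are
# scored, the fittest survive, and mutated copies fill the next generation.
# Here a 'design' is just a list of numbers and fitness() is a stand-in.
import random

def fitness(design):
    # Placeholder for "how far did this car get across the landscape?"
    return -sum((x - 0.5) ** 2 for x in design)

def mutate(design, rate=0.1):
    # Copy the design with a little random noise added to each gene
    return [x + random.gauss(0, rate) for x in design]

population = [[random.random() for _ in range(8)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)  # best designs first
    survivors = population[:5]                  # selection
    offspring = [mutate(random.choice(survivors))
                 for _ in range(len(population) - len(survivors))]
    population = survivors + offspring          # next generation

print("best design:", [round(x, 2) for x in max(population, key=fitness)])
```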

I was also reminded of another favourite design-fiction of the studio – Bruce Sterling’s ‘Taklamakan’.

Pete stared at the dissected robots, a cooling mass of nerve-netting, batteries, veiny armor plates, and gelatin.
“Why do they look so crazy?”
“‘Cause they grew all by themselves. Nobody ever designed them.”
Katrinko glanced up.

Another question from the audience featured a wonderful term that I, at least, had never heard used before – “Artificial Empathy”.

Artificial Empathy, in place of Artificial Intelligence.

Artificial Empathy is at the core of B.A.S.A.A.P. – it’s what powers Kacie Kinzer’s Tweenbots, and it’s what Reeves and Nass were describing in The Media Equation to some extent, which of course brings us back to Clippy.

Clippy was referenced by Alex in her talk, and has been resurrected again as an auto-critique of current efforts to design and build agents and ‘things with behaviour’.

One thing I recalled which I don’t think I’ve mentioned in previous discussions was that back in 1997, when Clippy was at the height of his powers – I did something that we’re told (quite rightly to some extent) no-one ever does – I changed the defaults.

You might not know, but there were several skins you could place on top of Clippy from his default paperclip avatar – a little cartoon Einstein, an ersatz Shakespeare… and a number of others.

I chose a dog, which promptly got named ‘Ajax’ by my friend Jane Black. I not only forgave Ajax every infraction, every interruption – but I welcomed his presence. I invited him to spend more and more time with me.

I played with him.

Sometimes we’re that easy to please.

I wonder if playing to that 17,000 years of cultural hardwiring is enough in some ways.

In the bar afterwards a few of us talked about this – and the conversation turned to ‘Big Dog’.

Big Dog doesn’t look like a dog – more like a massive crossbreed of ED-209, the bottom half of a carousel horse and a Black & Decker Workmate. However, if you’ve watched the video then you probably, like most of the people in the bar, shouted at one point – “DON’T KICK BIG DOG!!!”.

Big Dog’s movements and reactions – its behaviour in response to being kicked by one of its human testers (about 36 seconds into the video above) – is not expressed in a designed face, or with sad ‘Dreamworks’ eyebrows, but in pure reaction – which uncannily resembles the evasion and unsteadiness of a just-abused animal.

It’s heart-rending.

But, I imagine (I don’t know), it’s an emergent behaviour of its programming and design for other goals, e.g. reacting to and traversing irregular terrain.

Again like Boxcar2d, we do the work, we ascribe hurt and pain to something that absolutely cannot be proven to experience it – and we are changed.

So – we are the emotional computing power in these relationships – as LIREC and Alex are exploring – and perhaps we should design our robotic companions accordingly.

Or perhaps we let this new nature condition us – and we head into a messy few decades of accelerated domestication and renegotiation of what we love – and what we think loves us back.


P.S.: This post contains lots of images from our friend Matt Cottam’s wonderful “Dogs I Meet” set on Flickr, which makes me wonder about a future “Robots I Meet” set which might elicit such emotions…

Destination: Botworld!

Last week saw the first of a series of talks on robots, artificial-intelligence and design at London’s Royal Institution, curated by Ben Hammersley. Our friend Alex Deschamps-Sonsino presented the work of the EU-funded LIREC project in a talk called ‘Emotional Robots’.

I took a bunch of notes which were reactions rather than a recording, and my thoughts will hopefully bubble up here soon…

My notes from Alex D-S's 'Emotional Robots' talk at the RI

However, I hardly have time to collect my thoughts – because this week (Wednesday 16th) it’s m’colleague Matt Webb speaking – giving a talk entitled “Botworld: Designing for the new world of domestic A.I.”.

If the conversations we’ve had about it are any guide, it should be a corker. There are still tickets available, so hopefully we’ll see you there on Wednesday and for a bot-fuelled beer in the RI bar afterward.

Matt Webb speaking in February about the future, robots, and artificial intelligence

Ben Hammersley is curating a series of three lectures at the Royal Institution of Great Britain during February. The RI is a 200-year-old research and public lecture organisation for science. Much of Faraday’s work on electricity was done there.

One of the lectures is with me!

All three lectures are at 7pm, and they are…

  1. Uncanny & lovable: The future of emotional robots, by Alexandra Deschamps-Sonsino (also of @iotwatch on Twitter, where she tracks the emerging Internet of Things). This is on the 10th, this coming Thursday.
  2. Botworld: Designing for the new world of domestic A.I. — I’m giving this lecture! My summary is below. It’s on Wednesday 16th February.
  3. Finally, A.I. will kill us all: post-digital geopolitics, with Ben Hammersley, series curator and Editor-at-large of Wired UK magazine. Date: Thursday 24th February.

You’ll need to book if you want to come, so get to it!

My talk is going to build on a few themes I’ve been exploring recently at a couple of talks and on my personal blog.

Botworld: Designing for the new world of domestic A.I.

Back in the 1960s, we thought the 21st century was going to be about talking robots, and artificial intelligences we could chat with and play chess with like people. It didn’t happen, and we thought the artificial intelligence dream was dead.

But somehow, a different kind of future snuck up on us. One of robot vacuum cleaners, virtual pets that chat amongst themselves, and web search engines so clever that we might as well call them intelligent. So we got our robots, and the world is full of them. Not with human intelligence, but with something simpler and different. And not as colleagues, but as pets and toys.

Matt looks at life in this Botworld. We’ll encounter a zoo of beasts: telepresence robots, big maths, mirror worlds, and fractional A.I. We’ll look at signals from the future, and try to figure out where it’s going.

We’ll look at questions like: what does it mean to relate emotionally to a silicon thing that pretends to be alive? How do we deal with this shift from ‘Meccano’ to ‘The Sims’? And what are the consequences, when it’s not just our toys and gadgets that have fractional intelligence… but every product and website?

Matt digs into history and sci-fi to find lessons on how to think about and recognise Botworld, how to design for it, and how to live in it.

I’ll be going to Alex’s and Ben’s too. I hope to see you there.

Friday Links: Light painting

This Friday: a collection of links from the studio mailing-list, all about light painting.


Image: Poésie by kaalam on Flickr

Julian Breton’s work as Kaalam has already featured on the blog but it’s too beautiful not to include again in today’s collection of links. Influenced by Arabic script, he paints delicate, abstract calligraphy into his photographs as they are being exposed. There’s more on his Flickr profile and his website.


Sophie Clements’ stunning film Evensong captures a series of moving light-patterns in Argyll. Mounted on rigs such as spinning wheels, the lights interact with their environment in magical ways: dancing around poles, reflecting in pools. It’s striking to see light painting such as this in moving, rather than still, images.


Nils Völker has been building a robot for creating coloured light drawings. Once the pattern is programmed into it, it trundles around the floor, turning its light on and off as necessary, tracing the pattern whilst a camera takes a long exposure. Whilst not as pretty as Kaalam’s work, there’s something interesting in automating this kind of work. It’s also strange to see this machine at work, as this video testifies: whilst it works, you can’t really see what it’s doing. It only makes sense when viewed as a long exposure.


Photo: IBR Roomba Swarm in the Dark IV by IBRoomba

Völker’s robot drew the patterns it was told to. But light painting techniques can also reveal the behaviours of smarter robots. The above picture comes from the Roomba Art group on Flickr – where people upload long exposures of their automated vacuum cleaners having attached lights to them. This image shows seven Roombas – each with a different colour LED – working all at once; you can see their starting points in the middle of the room, and the odd collision. It’s a very pretty remnant of robots at work. The rest of the pool is great, too.


Photos: Light Sphere with Right Arm and Cigarette Lighter and Arcs with Arms and Candles by Caleb Charland

Caleb Charland’s images take a variety of approaches to light painting. Some are multiple exposures; some are long-duration, single exposures. Some are very much about the artist’s presence in the image (albeit in ghostly ways); in others, the artist is largely absent. They’re all lovely, though; I particularly like his use of naked flames in his images.


Justin Quinnell’s six-month exposure of the Clifton Suspension Bridge could be described as light painting using the sun. The duration of the exposure allows you to see the sun’s transit shift with the seasons. Justin has more long-exposure pinhole photography at his website.

B.A.S.A.A.P.

Design principle #1

The above is a post-it note, which as I recall is from a workshop at IDEO Palo Alto I attended while I was at Nokia.

And, as I recall, it was probably either Charlie Schick or Charles Warren who scribbled this down and stuck it on the wall as I was talking about what was a recurring theme for me back then.

Recently I’ve been thinking about it again.

B.A.S.A.A.P. is short for Be As Smart As A Puppy, which is my short-hand for a bunch of things I’ve been thinking about… Ooh… since 2002 or so I think, and a conversation in a California car-park with Matt Webb.

It was my term for a bunch of things that encompass some third-rail issues for UI designers, like proactive personalisation and interaction, examined in the work of Reeves and Nass, exemplified by (and forever-after vilified as) Microsoft’s Bob and Clippy (RIP). A bunch of things about bots and daemons, and conversational interfaces.

And lately, a bunch of things about machine learning – and for want of a better term, consumer-grade artificial intelligence.

BASAAP is my way of thinking about avoiding the ‘uncanny valley’ in such things.

Making smart things that don’t try to be too smart and fail, and indeed, by design, make endearing failures in their attempts to learn and improve. Like puppies.

Cut forward a few years.

At Dopplr, Tom Insam and Matt B. used to astonish me with links and chat about where the leading-edge of hackable, commonly-employable machine learning was heading.

Startups like Songkick and Last.fm, amongst others, were full of smart cookies making use of machine learning, data-mining and a bunch of other techniques I’m not smart enough to remember (let alone reference), to create reactive, anticipatory systems from large amounts of data in a certain domain.

Now, machine-learning is superhot.

The web has become a web-of-data, data-mining technology is becoming a common component of services, and processing power on tap in the cloud means that experimentation is cheap. The amount of data available makes things possible that were impossible a few years ago.

I was chatting with Matt B. again this weekend about writing this post, and he told me that the algorithms involved are old. It’s just that the data and the processing power is there now to actually get to results. Google’s Peter Norvig has been quoted as saying “All models are wrong, and increasingly you can succeed without them.”

Things like Hunch are making an impression in the mainstream. Google Priority Inbox, launched recently, makes the utility of such approaches clear.

BASAAP services are here.

BASAAP things are on the horizon.

As Mike Kuniavsky has pointed out – we are past the point of “Peak MHz”:

…driving ubiquitous computing, as their chips become more efficient, smaller and cheaper, thus making them increasingly easier to include into everyday objects.

This is ApriPoco by Toshiba. It’s a household robot.

It works by picking up signals from standard remote controls and asking you what you are doing, to which you are supposed to reply in a clear voice. Eventually it will know how to turn on your television, switch to a specific channel, or play a DVD simply by being told. This system solves the problem that conventional speech recognition technology has with some accents or words, since it is trained by each individual user. It can send signals from IR transmitters in its arms, and has cameras in its head with which it can identify specific users.
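Here’s a toy sketch in Python of that learn-by-asking loop – emphatically not Toshiba’s software; the IR capture and transmit calls are placeholders – just to show how little machinery the basic idea needs:

```python
# A toy sketch of an ApriPoco-style learn-by-asking loop - not Toshiba's
# software. capture_ir() and send_ir() are placeholders for the robot's IR
# receiver and the transmitters in its arms.
learned_commands = {}  # maps a raw IR code to the label the user gave it

def capture_ir():
    # Placeholder: pretend we just overheard a remote-control button press
    return "0x20DF10EF"

def send_ir(ir_code):
    # Placeholder: blast the remembered code back out at the television
    print("sending IR code:", ir_code)

def on_ir_signal(ir_code):
    # Ask about codes we haven't seen before; recognise the ones we have
    if ir_code not in learned_commands:
        label = input("I saw you press a button - what were you doing? ")
        learned_commands[ir_code] = label.strip().lower()
    else:
        print("Ah, so that's how you", learned_commands[ir_code])

def do(label):
    # Replay a remembered code on request, e.g. do("turn on the television")
    for ir_code, known_label in learned_commands.items():
        if known_label == label.lower():
            send_ir(ir_code)
            return True
    return False

if __name__ == "__main__":
    on_ir_signal(capture_ir())    # first time: asks and remembers
    do("turn on the television")  # later: replays it, if that's what you said
```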

Not perhaps the most pressing need that you have in your house, but interesting nonetheless.

Imagine this not as a device, but as an actor in your home.

The face-recognition is particularly interesting.

My £100 camera has a ‘smile-detection’ mode, which is becoming common. It can also recognise more faces than a 6-month-old human child. Imagine this then, mixed with ApriPoco, registering and remembering smiles and laughter.

Go further, plug it into the internet. Into big data.

As Tom suggested on our studio mailing list: recognising background chatter of people not paying attention. Plugged into something like Shownar, constantly updating the data of what people are paying attention to, and feeding back suggestions of surprising and interesting things to watch.

Imagine a household of hunchbots.

Each of them working across a little domain within your home. Each building up tiny caches of emotional intelligence about you, cross-referencing them with machine learning across big data from the internet. They would make small choices autonomously around you, for you, with you – and do it well. Surprisingly well. Endearingly well.

They would be as smart as puppies.

Hunch-Puppies…?

Ahem.

Of course, there’s the other side of domesticated intelligences.

Matt W.’s been tracking the bleed of AI into the Argos catalogue, particularly the toy pages, for some time.

They do their little swarming thing and have these incredibly obscure interactions

The above photo of toys from Argos he took was given the title: “They do their little swarming thing and have these incredibly obscure interactions”

That might be part of the near-future: being surrounded by things that are helping us, but whose workings we struggle to build a model of in our minds. That we can’t directly map to our own behaviour. A demon-haunted world. This is not so far from most people’s experience of computers (and we’re back to Reeves and Nass), but we’re talking about things that change their behaviour based on their environment and their interactions with us, and that have a certain mobility and agency in our world.

I’m reminded of the work of Rodney Brooks and the BEAM approach to robotics, although hopefully more AIBO than Runaways.

Again, staying on the puppy side of the uncanny valley is a design strategy here – as is the guidance within Adam Greenfield’s “Everyware”: how to think of design for ubiquitous systems that behave as sensing, learning actors in contexts beyond the screen.

Adam’s book is written as a series of theses (to be nailed to the door of a corporation or two?), and thinking of his “Thesis #37” in connection with BASAAP intelligences in the home of the near-future amuses me in this context:

“Everyday life presents designers of everyware with a particularly difficult case because so very much about it is tacit, unspoken, or defined with insufficient precision.”

This cuts both ways in a near-future world of domesticated intelligences, and that might be no bad thing. Think of the intuitions and patterns – the state machine – your pets build up of you, and vice-versa. You don’t understand pets as tools, even if they perform ‘job-like’ roles. They don’t really know what we are.

We’ll never really understand what we look like from the other side of the Uncanny Valley.

Mechanical Dog Four-Leg Walking Type

What is this going to feel like?

Non-human actors in our home, that we’ve selected personally and culturally. Designed and constructed but not finished. Learning and bonding. That intelligence can look as alien as staring into the eye of a bird (ever done that? Brrr.) or as warm as looking into the face of a puppy. New nature.

What is that going to feel like?

We’ll know very soon.
