Robot Readable World. The film.

I recently cut together a short film, an experiment in found machine-vision footage:

Robot readable world from Timo on Vimeo.

As robots begin to inhabit the world alongside us, how do they see and gather meaning from our streets, cities, media and from us? The robot-readable world is one of the themes that the studio has been preoccupied by recently. Matt Jones talked about it last year:

“The things we are about to share our environment with are born themselves out of a domestication of inexpensive computation, the ‘Fractional AI’ and ‘Big Maths for trivial things’ that Matt Webb has spoken about.”

and

“‘Making Things See’ could be the beginning of a ‘light-switch’ moment for everyday things with behaviour hacked-into them. For things with fractional AI, fractional agency – to be given a fractional sense of their environment.”

This film uses found footage from computer vision research to explore how machines are making sense of the world. From a very high-level, non-expert viewing, it seems that machines have a tiny, fractional view of our environment, one that sometimes echoes our own human vision and sometimes doesn’t.

For a long time I have been struck by just how beautiful the visual expressions of machine vision can be. In many research papers and SIGGRAPH experiments that float through our inboxes, there are moments with extraordinary visual qualities, probably quite separate from and unintended by the original research. There is something about the crackly, jittery, yet often organic, insect-like or human quality of a robot’s interpretation of the world. It often looks unstable and unsure, and occasionally mechanically certain and accurate.
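As an aside for the technically curious: a lot of footage of this kind comes from surprisingly compact recipes. The sketch below is not the source of any clip in the film, just a minimal, assumed example in Python using OpenCV’s Shi-Tomasi corner detector and Lucas-Kanade optical flow, drawing the sort of jittery point overlays that appear throughout it.

    # Minimal sketch: Shi-Tomasi corners + Lucas-Kanade optical flow,
    # drawn over live video. Not the source of any clip in the film.
    # Assumes OpenCV (cv2) and a webcam or video file at VIDEO_SOURCE.
    import cv2

    VIDEO_SOURCE = 0  # 0 = default webcam; a file path also works

    def detect_corners(gray):
        # Shi-Tomasi corner detection: the small set of points the
        # machine actually "looks at" in each frame.
        return cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=7)

    cap = cv2.VideoCapture(VIDEO_SOURCE)
    ok, frame = cap.read()
    if not ok:
        raise SystemExit("Could not read from video source")

    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    prev_pts = detect_corners(prev_gray)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        if prev_pts is not None and len(prev_pts) > 0:
            # Lucas-Kanade optical flow: follow last frame's corners
            # into this frame.
            next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
                prev_gray, gray, prev_pts, None)
            for new, old, st in zip(next_pts, prev_pts, status):
                if st[0] == 1:
                    x1, y1 = map(int, old.ravel())
                    x2, y2 = map(int, new.ravel())
                    # Draw a short motion vector for each tracked corner.
                    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 1)
                    cv2.circle(frame, (x2, y2), 2, (0, 255, 0), -1)

        cv2.imshow("robot-readable", frame)
        if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
            break

        prev_gray = gray
        prev_pts = detect_corners(gray)  # re-detect so points don't drift away

    cap.release()
    cv2.destroyAllWindows()

Pointing something like this at an ordinary street scene produces exactly that unstable, flickering, insect-like reading of the world.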

Of the film Warren Ellis says:

“Imagine it as, perhaps, the infant days of a young machine intelligence.”

The Robot-Readable World is pre-Cambrian at the moment, but machine vision is becoming a design material alongside metals, plastics and immaterials. It’s something we need to develop understandings and approaches to, as we begin to design, build and shape the senses of our new artificial companions.

Much of our fascination with this has been fuelled by James George’s beautiful experiments, Kevin Slavin’s lucid unpacking of algorithms and the work (above) by Adam Harvey developing a literacy within computer vision. Shynola are also headed in interesting directions with their production diary for the upcoming Red Men film, often crossing over with James Bridle’s excellent ongoing research into the aesthetics of contemporary life. And then there is the work of Harun Farocki in his Eye / Machine series that unpacks human-machine distinctions through collected visual material.

As a sidenote, this has reminded me that I was long ago inspired by Paul Bush’s ‘Rumour of true things’, which is ‘constructed entirely from transient images – including computer games, weapons testing, production line monitoring and marriage agency tapes’ and ‘a remarkable anthropological portrait of a society obsessed with imaging itself’. This found-footage tactic is fascinating: the process of gathering and selecting footage is an interesting R&D exercise, and cutting it all together reveals new meanings and concepts. Something to investigate, as a method of research and communication.

8 Comments and Trackbacks

  • 1. TechTropes - Matt said on 7 February 2012...

    Really interesting stuff.

    Have you seen this earlier article about using “Dazzle” makeup patterns to fool facial recognition systems?

    http://www.todayandtomorrow.net/2010/03/31/computer-vision-dazzle-makeup/

  • 2. Timo Arnall said on 7 February 2012...

    Matt, yep, that’s Adam Harvey’s work linked above.

  • 3. Synne Skjulstad said on 8 February 2012...

    Fascinating, and somewhat uncanny…
    Great work!

  • Trackback: Video>> | Library Test Kitchen 14 February 2012

    […] This is all found machine vision footage, cut together. Read more here. […]

  • Trackback: Hur ser robotar världen? | Robotnyheter 21 April 2012

[…] In the short film ”Robot Readable World”, Timo Arnall has put together a number of clips showing how the algorithms in our computers and robots see […]

  • Trackback: Pick of the Week 41 16 August 2012

[…] more about the film in BERG’s blog. […]

  • Trackback: Militant Consumers and Drones: Mainstreaming Cyber-sex, Androids and AI « Camryn Rothenbury 17 August 2012

[…] & CREDITS: Feature image sourced from CNN. Read about ROBOT READABLE WORLD. Still image from David Cronenberg’s “Videodrome”. […]

  • Trackback: No to NoUI – Timo Arnall 13 March 2013

    […] our work with interface technologies such as RFID and computer vision, we’ve discovered that it takes a lot of work to make sense of the technologies as design […]
