This website is now archived. To find out what BERG did next, go to www.bergcloud.com.

Blog posts tagged as 'sensor-vernacular'

I am (not) sitting in a room

I first came across Alvin Lucier’s “I am sitting in a room” through the Strictly Kev/Paul Morley masterpiece mix “Raiding The 20th Century”.

It’s an incredibly simple but powerful piece that becomes hypnotic and immersive as his speech devolves into a drone through the feedback loop he sets up in the performance.


The space that he performs in becomes the instrument – the resonant frequencies of the room feeding back into the loop.

From Wikipedia:

I am sitting in a room (1969) is one of composer Alvin Lucier’s best known works, featuring Lucier recording himself narrating a text, and then playing the recording back into the room, re-recording it. The new recording is then played back and re-recorded, and this process is repeated. Since all rooms have characteristic resonance or formant frequencies (e.g. different between a large hall and a small room), the effect is that certain frequencies are emphasized as they resonate in the room, until eventually the words become unintelligible, replaced by the pure resonant harmonies and tones of the room itself. The recited text describes this process in action—it begins “I am sitting in a room, different from the one you are in now. I am recording the sound of my speaking voice,” and the rationale, concluding, “I regard this activity not so much as a demonstration of a physical fact, but more as a way to smooth out any irregularities my speech might have,” referring to his own stuttering.
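For the technically curious, the convergence Lucier describes can be sketched numerically. The following is a hypothetical toy model (all numbers invented, nothing to do with real acoustics): a near-flat ‘speech’ spectrum is multiplied, generation after generation, by a room response peaked at one resonant frequency, until only that resonance survives.

```python
import numpy as np

# Toy model of Lucier's feedback loop (hypothetical numbers throughout):
# each generation = play the recording into the room, re-record it,
# i.e. multiply the spectrum by the room's frequency response.
rng = np.random.default_rng(0)
n = 256
speech = 1.0 + 0.1 * rng.random(n)            # near-flat 'speech' spectrum
freqs = np.arange(n)
room = 0.2 + np.exp(-((freqs - 60) ** 2) / (2 * 8.0 ** 2))  # resonance at bin 60

spectrum = speech.copy()
for generation in range(40):                   # forty play-back/re-record passes
    spectrum = spectrum * room                 # the room colours the sound
    spectrum = spectrum / spectrum.max()       # normalise, as a recorder's gain would

# The words are gone; only the room's resonant frequency remains.
print(int(np.argmax(spectrum)))                # → 60
```

Repeated multiplication is all it takes: whichever frequency the room favours, however slightly, eventually swamps everything else.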

Playing around with the Kinect/MakerBot set-up at Foo set me thinking of Lucier’s piece, and how a sensor-vernacular interpretation could play out as a playful installation…

First, we need a Kinect chandelier.

Then, we scan the original ‘room’ with it.

Next, we print a new space using a concrete printer.

Which we then scan with the Kinect chandelier…

And so on…
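The loop above can be sketched as a toy simulation – a hypothetical model (the scan noise, blur kernel and grid size are all made up, and nothing like real Kinect or concrete-printer behaviour) – where each generation scans the ‘room’ with sensor noise and prints it back with a little material slump:

```python
import numpy as np

# Toy model of the scan -> print -> scan loop (all parameters hypothetical):
# the 'room' is a 2D occupancy grid; scanning quantises it with noise,
# printing blurs it, and detail erodes generation by generation.
rng = np.random.default_rng(1)

def scan(room, noise=0.05):
    """Kinect-style scan: a noisy, thresholded reading of the room."""
    return (room + rng.normal(0.0, noise, room.shape) > 0.5).astype(float)

def print_room(scan_data):
    """Concrete-printer pass: a crude 3x3 box blur softens every edge."""
    padded = np.pad(scan_data, 1, mode="edge")
    h, w = scan_data.shape
    return sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

room = np.zeros((32, 32))
room[8:24, 8:24] = 1.0             # the original 'room': a square hall

for generation in range(10):       # scan, print, scan, print…
    room = print_room(scan(room))
```

After ten generations the crisp square has smeared into something softer and stranger – the feedback-baroque, in miniature.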

One could imagine the degradation of the structure over the generations of scanning and printing might become quite beautiful or grotesque – a kind of feedback-baroque. And, as we iterate, printing spaces one after the other, we might generate a sensor-vernacular Park Güell.

If anyone wants to give us an airship hangar and a massive concrete printer this summer, please let us know!

SVK vs Kinect

I’m at FooCamp, and I’ve brought a copy of SVK fresh off the press to show some of the folk here.

The Make space at the O’Reilly HQ has Greg Borenstein showing the possibilities of Kinect-hacking, and we were playing around with pointing the UV torch that will come with SVK at the Kinect sensor… and found it punches a hole in the point cloud of depth data that the Kinect’s infra-red structured light senses…

The SVK as seen by a Kinect

It’s quite a striking ‘sensor-vernacular’ image. My folk-physics explanation (at midnight last night, after some Lagunitas IPA) was that the UV was cancelling out the IR – but it might just be that any LED torch would have the same effect, simply blowing out the sensor. I can’t find anything online as yet…
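For what it’s worth, spotting the hole in software is straightforward, assuming the common Kinect convention that a depth value of 0 marks pixels the sensor couldn’t resolve (the frame below is entirely faked for illustration – the blob size and depth values are invented):

```python
import numpy as np

# Sketch: find the 'hole' the torch punches in a Kinect depth frame.
# Assumes the usual convention that depth == 0 marks unresolvable pixels.
def find_holes(depth):
    """Return a boolean mask of invalid pixels and the hole fraction."""
    mask = depth == 0
    return mask, mask.mean()

# Fake 480x640 frame: valid depth everywhere except a torch-shaped blob.
depth = np.full((480, 640), 1200, dtype=np.uint16)      # ~1.2 m everywhere
yy, xx = np.ogrid[:480, :640]
depth[(yy - 240) ** 2 + (xx - 320) ** 2 < 50 ** 2] = 0  # the blown-out spot

mask, fraction = find_holes(depth)
print(f"{fraction:.1%} of the frame is unreadable")
```

The same mask is what most Kinect visualisers use to render those black voids in the point cloud – the hole isn’t drawn, it’s simply data the sensor never got.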

The SVK as seen by a Kinect

I’m away from my physics-punching colleagues, who would put me right in short order I’m sure (as, probably, someone here at Foo will today) – but if you have any thoughts…

Update from Greg in email:

For the record, my bet is that the torch’s universe-hole-punching power came from its plastic sleeve. Things that are reflective tend to do that by bouncing the IR back directly at the Kinect and giving it a blind spot. Check out the hole where Max Ogden’s eye should be in the scan we made of him last night:

Max wears glasses, and his lenses’ reflection caused this.

You can actually do wickedly clever things with a mirror and the Kinect to pull different bits of space into the depth image. It’s like being able to cut-and-paste space. For example, Kyle McDonald is trying to use the trick to scan all sides of an object at once.

I like using it to make wormholes…

Sensor-Vernacular

Consider this a little bit of a call-and-response to our friends through the plasterboard, specifically James’ excellent ‘moodboard for unknown products’ on the RIG-blog (although I’m not sure I could ever get ‘frustrated with the NASA extropianism space-future’).

There are some lovely images there – I’m a sucker for the computer-vision dazzle pattern as referenced in William Gibson’s ‘Zero History’ as the ‘world’s ugliest t-shirt’.

The splinter-camo planes are incredible. I think this is my favourite that James picked out though…

Although – to me – it’s a little bit 80s-Elton-John-video-seen-through-the-eyes-of-a-‘Cheekbone’-stylist-too-young-to-have-lived-through-certain-horrors.

I guess – like NASA imagery – it doesn’t acquire that whiff-of-nostalgia-for-a-lost-future if you don’t remember it from the first time round. For a while, anyway.

Anyway. We’ll come back to that.

The main thing is that James’ writing galvanised me to expand upon a scrawl I made during an all-day crit with the RCA Design Interactions course back in February.

‘Sensor-Vernacular’ is a current placeholder/bucket I’ve been scrawling for a few things.

The work that Emily Hayes, Veronica Ranner and Marguerite Humeau presented in RCA DI Year 2 all had a touch of ‘sensor-vernacular’. It’s an aesthetic born of the grain of seeing/computation.

Of computer-vision, of 3d-printing; of optimised, algorithmic sensor sweeps and compression artefacts.

Of LIDAR and laser-speckle.

Of the gaze of another nature on ours.

There’s something in the kinect-hacked photography of NYC’s subways that we’ve linked to here before, that smacks of the viewpoint of that other next nature, the robot-readable world.


Photo credit: obvious_jim

The fascination we have with how bees see flowers reveals the link between senses and motives. Our environment is shared with things that see with motives we have, intentionally or unintentionally, programmed them with.

As Kevin Slavin puts it – the things we have written that we can no longer read.

Nick’s been playing this week with http://code.google.com/p/structured-light/, and made this quick (like, in a spare minute he had) sketch of me…

The technique has been used for some pretty lovely pieces, such as this music video for Broken Social Scene.

In particular, for me, there is something in the loop of 3d-scanning to 3d-printing to 3d-scanning to 3d-printing which fascinates.

Rapid Form by Flora Parrot

It’s the lossy-ness that reveals the grain of the material and process. A photocopy of a photocopy of a fax. But atoms. Like 80s fanzines, or old Wonder Stuff 7″ single cover art. Or Vaughan Oliver, David Carson.

It is – perhaps – at once a fascination with the raw possibility of a technology, and a disinterest, in a way, in anything but the qualities of its output. Perhaps it happens when new technology becomes cheap and mundane enough to experiment with, and break – when it becomes semi-domesticated but still a little significantly-other.

When it becomes a working material not a technology.

We can look back to the 80s, again, for an early digital-analogue: what one might term ‘Video-Vernacular’.

Talking Heads’ cover art for their album “Remain In Light” remains a favourite. Its video-grain/raw-Quantel aesthetic still has a heck of a punch.

I found this fascinating in its Wikipedia entry:

“The cover art was conceived by Weymouth and Frantz with the help of Massachusetts Institute of Technology Professor Walter Bender and his MIT Media Lab team.

Weymouth attended MIT regularly during the summer of 1980 and worked with Bender’s assistant, Scott Fisher, on the computer renditions of the ideas. The process was tortuous because computer power was limited in the early 1980s and the mainframe alone took up several rooms. Weymouth and Fisher shared a passion for masks and used the concept to experiment with the portraits. The faces were blotted out with blocks of red colour.

The final mass-produced version of Remain in Light boasted one of the first computer-designed record jackets in the history of music.”

Growing up in the 1980s, my life was saturated by Quantel.

Quantel were the company in the UK most associated with computer graphics and video effects. Their machines were absurdly expensive – but in the few years since Weymouth and Fisher had harnessed a room full of computing to make an album cover, Moore’s law meant a Quantel box had shrunk to about the size of a fridge, as I remember.

Their brand name comes from ‘Quantized Television’.

Awesome.

As a kid I wanted nothing more than to play with a Quantel machine.

Every so often there would be a ‘behind-the-scenes’ feature on how telly was made, and I wanted to be the person in the dark, illuminated by screens, changing what people saw. Quantizing television and changing it before it arrived in people’s homes. Photocopying the photocopy.

Alongside that, one started to see BBC Model B graphics overlaid on video and TV. This was a machine we had in school, and even some of my posher friends had at home! It was a video-vernacular emerging from the balance point between new/novel/cheap/breakable/technology/fashion.

Kinects and MakerBots are there now. Sensor-vernacular is in the hands of fashion and technology.

In some of the other examples James cites, one might even see ‘Sensor-Deco’ arriving…

Lo-Rez Shoe by United Nude

James certainly has an eye for it. I’m going to enjoy following his exploration of it. I hope he writes more about it, the deeper structure of it. He’ll probably do better than I have.

Maybe my response to it is in some ways as nostalgic as my response to NASA imagery.

Maybe it’s the hauntology of moments in the 80s when the domestication of video, computing and business machinery made things new, cheap and bright to me.

But for now, let me finish with this.

There’s both a nowness and nextness to Sensor-Vernacular.

I think my attraction to it – whatever it is – is that these signals hint that the hangover of ten years of ‘war-on-terror’ funding into defence and surveillance technology (which, after all, is where the advances in computer vision and the relative cheapness of devices like the Kinect came from) might get turned into an exuberant party.

Dancing in front of the eye of a retired-surveillance machine, scanning and printing and mixing and changing. Fashion from fear. Quantizing and surprising. Imperfections and mutations amplifying through it.

Beyoncé’s bright-green chromakey socks might be the first, positive step into the real aesthetic of the early 21st century, out of the shadows of how it began.

Let’s hope so.
