

Links from around the studio: paper computing, computer vision, simple and small

Matt J was busy running Papercamp last Saturday. One of my favourite things to emerge from the day was Basil Safwat’s Processing.A4. It’s computational cardboard; you follow the instructions on it to replicate the output of the Substrate Processing script.

Troika have launched their new artwork, Shoal, in Toronto.

Spanning a 50-metre-long corridor, 467 fish-like objects wrapped in iridescent colours and suspended from the ceiling rotate rhythmically around their own axes to display the movements and interdependency typical of a school of fish.

Beautiful, though.

Niklas Roy’s My Little Piece of Privacy is a delightful computer-vision project:

My workshop is located in an old storefront with a big window facing towards the street. In an attempt to create more privacy inside, I’ve decided to install a small but smart curtain in that window. The curtain is smaller than the window, but an additional surveillance camera and an old laptop provide it with intelligence: The computer sees the pedestrians and locates them. With a motor attached, it positions the curtain exactly where the pedestrians are.

I really enjoyed his video of it – first the project shown as-is, and then a detailed explanation of what the computer is “seeing”. Through both parts, the hilarity of the little, jerkily moving curtain is never lost.

I’ve been enjoying dataists – a new blog about the science and interpretation of data – a great deal recently. Today’s post about What Data Visualisation Should Do is particularly good:

…yesterday I focused on three key things – I think – data visualization should do:
1. Make complex things simple
2. Extract small information from large data
3. Present truth, do not deceive

The emphasis is added to highlight the goal of all data visualization; to present an audience with simple small truth about whatever the data are measuring.

That felt like a nice addition to some of the topics covered in Matt J’s talk at citycamp and my own talk on data from a few weeks ago – but do read the whole post; it’s an insightful piece of writing.

Finally, some stop-motion animation. Our friends Timo Arnall and Matt Cottam recently linked to the videos some of their students at the Umeå Institute of Design produced during their week working on stop-motion techniques. They’re all charming – it’s hard to single any of them out – but the dancing radio (above) was a particular favourite.

“Post-Digital Printed Augmented Reality”

PaperCamp 2 was on Saturday. It was ace.

PaperCamp is all about, well, paper. As the PaperCamp 1 wiki says,

“whether that’s looking at material possibilities of paper itself, connecting paper to the internet and vice-versa with things like 2d-barcodes, RFIDs or exotic things like printing with conductive inks… it’s about the fact that paper hasn’t gone away in the digital age – it’s become more useful, more abundant and in some cases gone and got itself bionic superpowers.”

Roo and Ben have done a couple of lovely write-ups. As Ben said, “Stuff is happening at the moment that I feel we’ll look back upon and enjoy saying, I was there. Papercamp was one of those.”

Anyway, I rambled a bit about some things, and gave everyone a few behind-the-scenes glimpses of some ideas we’ve had around paper over the last few months at BERG, so I thought I’d share a few slides here. But first, here’s some stuff we like.

Making and thinking with paper

I remember doing tons of these Albers paper studies at school. You know, just lovely bits of material exploration. Finding the inherent properties of things. And there was loads of it going on throughout the day on Saturday.

Cutting and folding paper is a common way of exploring the material of other ideas too. Say, mathematics. There must be something in the crossover of immaterial and material, and the ease and immediacy of making as thinking. Say, people like Gerry Stormer, who makes gorgeous “Origamic Architecture”…

… and David Huffman (the same David Huffman who invented Huffman coding), who is really into curved folds.

And then of course, there’s this magical self-folding origami that came out of MIT and Harvard a few months back. It’s clumsy but mind-blowing. During the talk, Ben told us that it works because “it’s covered in stuff”.

Paper and data and storytelling

Seeing as we’re reading from screens more and more in our everyday lives, maybe the pressure is coming off books and paper as things that need to impart information. They’re being freed up as something we can do new things with. And, of course, the web has given us loads of new ways to feed information back on to (and into) paper itself.

It makes me think of things like Bruno Munari’s brilliant Look Into My Eyes

[Image: Bruno Munari’s “Look Into My Eyes”]

…or the colossal Star Wars pop-up book by Matthew Reinhart, which I think would be loads better if it had no text.

What if we ignored printing altogether, and imagined what we could do with just data and paper? Datadecs (by our mates RIG and Andy Huntington) looks at this idea in a typically charming way (OK, they’re not made of paper, but you know).

And Nick O’Leary’s Paper Graphs point at some similar loveliness. Now I’ve got my Christmas decorations, I want data-driven presents under the tree too.

[Image: a prototype paper pie chart]

Then there are the projects that take the best bits of everything: the web; printing; paper; maps; context of use and so on. James Bridle’s A Wide Arm Of Sea is a fully immersive, locative, pervasive, contextually-aware, haptic, 3D augmented reality experience. All printed on a bit of newspaper.

Some recent BERGian papery thinking

We’ve been dancing with a few ideas around paper in some recent projects here at BERG too. They haven’t made it into the world (yet, maybe), but we thought they’d be worth sharing.

One idea that didn’t make it past the drawing board for BBC Dimensions was Post-Digital Printed Augmented Reality. Or, Sticking A Bit Of Paper In Front Of Your Face.

If we knew, say, the height of the Saturn V rocket, roughly how tall you are, and how far away the horizon is, maybe we could make a paper sextant to help you imagine where the tip of the rocket would be if it were in front of you.

Or how about the silhouette of a Spitfire zooming over you, x distance away at x altitude?

Or how about how big the Great Pyramid of Giza would be if it were on the horizon?

I know, I know. But it could totally work!
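For the curious, the geometry really is simple enough to jot down. Here’s a back-of-envelope sketch in Python – the eye height and Earth radius are my own assumptions for illustration, not figures from the project:

```python
import math

EARTH_RADIUS_M = 6_371_000   # mean Earth radius
SATURN_V_M = 110.6           # height of a Saturn V stack

def horizon_distance(eye_height_m):
    """How far away the horizon is for an observer at this eye height."""
    return math.sqrt(2 * EARTH_RADIUS_M * eye_height_m)

def elevation_angle(object_height_m, distance_m):
    """Angle above the horizontal at which an object's tip would appear."""
    return math.degrees(math.atan2(object_height_m, distance_m))

d = horizon_distance(1.7)    # eyes roughly 1.7m up gives a ~4.7km horizon
print(f"horizon: {d / 1000:.1f} km away")
print(f"Saturn V on the horizon: tip {elevation_angle(SATURN_V_M, d):.1f} degrees up")
```

Mark that angle on a bit of folded card held at arm’s length, and there’s your paper sextant.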

Also, here are some of the experiments around the cut-out-and-keep schools we did for Schooloscope back in July. We already had all the parameters in place to draw a picture of a school on the site, so why not use the same variables to draw data-driven pop-up postcards that fold up and lock together without needing glue? We didn’t get time to look at this in as much detail as we’d hoped to at the time, but it’s there in the idea drawer.

Getting the fold to lock into place nicely was important – the thinking being that this could be a little souvenir that could sit nicely on a desk or wherever. You can imagine the little school sitting on a road, maybe with the sky painted in behind, or pointing to other schools nearby, and so on. We think there’s something really exciting in combining dry datasets with the graphic language of cereal boxes, or Pokemon cards or whatever.

Anyway. I told you this would be a bit of a ramble. Hopefully it points at bigger, cleverer, juicier things happening around paper and the web. I can’t wait to see what else pops up (sorry) over the coming months.

Multiple Screens

Earlier in the week, Matt W asked if there were any games that took advantage of outputting to more than one screen – not necessarily side-by-side screens used to increase the field of view, but different screens that perform totally different functions.

I pointed out that there was some precedent – although not a lot – and what began as a conversation quickly became a list that was worth sharing and explaining a bit.

[Image: Forza Motorsport running across three side-by-side screens]

This isn’t the kind of thing Matt meant. Whilst it’s definitely a part of this conversation, the Forza Motorsport series’ use of multiple monitors to increase the field of view is the kind of thing that’s not actually very interesting. It doesn’t alter the game in any significant way. It’s also a brute force solution: each screen is rendered by its own Xbox, and all the consoles are slaved together over a local network.

I think what Matt meant was separate screens performing different functions.

At the very simplest level, second screens can act as contextual displays – parts of the HUD or interface broken out to their own display.

[Image: Supreme Commander running across two monitors]

The strategy game Supreme Commander allows players to use a second monitor for a zoomed-out tactical map. Rather than reducing the map to the corner of the screen (as many strategy games do), or forcing the player to constantly zoom in and out, the second screen provides permanent context for what’s going on on the primary screen.

[Image: the Sega Dreamcast VMU]

A similar type of contextual screen can be seen on the Sega Dreamcast. The VMU memory unit was designed as a miniature console itself, with a screen and set of controls. When docked with the joypad, it acted as a second screen in the player’s hands.

The VMU was not used as effectively in the role of “second screen” as it might have been, although there were exceptions. Resident Evil: Code Veronica, for instance, used the VMU to display the player character’s health (which was otherwise only visible in the status menu).

[Image: spoof Nintendo DS with many screens]

Of course, there’s a limit to how many secondary screens are sensible; shortly after the announcement of the Nintendo DS, the above spoof was widely circulated. It’s a good point: lots of little screens right next to each other aren’t very different from one big screen.

The most interesting usage of multiple screens is in their capacity to affect gameplay itself. What sort of games would you design when players can have different viewports onto the world?

[Image: Pac-Man VS]

Pac-Man VS is my favourite answer to that question so far. It’s four-player Pac-Man, on the Nintendo Gamecube. Three players play ghosts: they play on the TV, with Gamecube pads. They have a 3D-ish view of a limited part of the map, and a radar in the bottom-right showing where the others are.

The fourth player is Pac-Man; they don’t use a Gamecube joypad. Instead, they play on a Gameboy Advance, plugged into the Gamecube with a connection lead:

[Image: Gameboy Advance connected to the Gamecube]

The Gameboy screen shows the Pac-Man player the entire map. Pac-Man’s superpower over the ghosts is context; he has knowledge of the whole map. The ghosts are more powerful, but can’t see nearly so much.

Here’s a nice video of it all playing out, the Gameboy screen on the left, the TV on the right.

It’s marvellous: fun, social, and utterly ingenious. There were a few other games for the linkup cable designed around players having their own screens – Final Fantasy: Crystal Chronicles and Zelda: Four Swords are the obvious examples – but Pac-Man VS remains the stand-out, for me.

[Image: Scrabble on the iPad]

One recent example of this sort of approach is Scrabble on the iPad, which lets you use the pad as a board, and other iOS devices for each player to hide their tiles. But it feels so unimaginative: the secondary screens feel like they’ve been used simply because it was possible; they’re no more than direct analogues for real-world objects. (It’s also an absurdly expensive way to play Scrabble.)

Nintendo’s DS focused on the usage of a secondary screen as context and extra information – but in a parallel universe, I’m sure there’s a DS that looks much like this:

[Image: an imaginary two-player DS with screens facing each player]

This imaginary console affords all manner of games based on hidden knowledge and incomplete views of the world. And, just like a tandem, it looks wrong without someone else playing with you; it indicates how it wants to be used, inviting a second player.

My imaginary console is entirely symmetrical in its design. It’d be a shame to only encourage games that gave symmetrical abilities to both players, in the same way as games like Guess Who? or Battleships. Asymmetric games – where players have very different abilities or viewpoints, much like Pac-Man VS above – are, for me, a more interesting notion to explore with multiple screens. Imagine games where players not only have very different abilities or tasks, but also play on totally different types of screen from one another.

Super Mario Galaxy demonstrated a co-operative approach to asymmetric play. Rather than being another avatar in the world alongside Mario, a second player could use their Wiimote to scoop up star bits as they passed. They did nothing else, and could drop in and out when they liked; theirs was a purely additive role. It allowed a player with different capabilities – or attention – to drop in and out of the game, always helping, but never critical to Mario’s success.

To extend that idea to screens: what are the gameplay modes for a friend with a touchscreen tablet, whilst I’m playing on a console attached to the TV? Mechanic to my racing driver? Coach to my football team? Evil overlord planting traps in the dungeon I’m exploring?

I don’t know yet. This at least feels like the start of a useful catalogue of multiple-screen play. And as screens become smarter, and “screen” and “device” increasingly become synonyms for one another, multiple-screen play feels like an exciting and ripe area to explore.

Open Data for the Arts – Human Scale Data and Synecdoche

This is a short talk that I gave as part of a 45-minute workshop with Matthew Somerville at The Media Festival Arts 2010. As part of a session on how arts and cultural bodies can use open data, I talked about what I felt open data was, and what the more interesting opportunities it affords are.

What is open data?

I’d describe “open data” as: “Making your information freely available for reuse in practical formats with no licensing requirements.”

It’s not just sticking some data on a website; it’s providing it in some kind of data format (be it CSV, XML, JSON or RDF, via files or an API) for the express purpose of being re-used. The more practical the format, the better.
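To make “practical” concrete, here’s the same made-up record serialised two ways from Python. The fields are invented for illustration, not from any real catalogue:

```python
import csv, io, json

# One hypothetical catalogue record, serialised two ways.
record = {"title": "The Magic Flute", "year": 2010, "venue": "Main Stage"}

print(json.dumps(record))            # JSON: {"title": "The Magic Flute", ...}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=record.keys())
writer.writeheader()                 # CSV: a header row, then a data row
writer.writerow(record)
print(buf.getvalue())
```

Either of those is immediately useful to a developer in a way a web page is not.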

You can still own the copyright; you can still claim credit. That doesn’t stop the data being open. But open data shouldn’t require payment.

More importantly:

What isn’t open data?

It’s not just sticking up web pages and saying it’s open because you won’t tell me off for scraping it.

It’s not any specific format. One particular crowd will tell you that open data has to be RDF, for instance. That is one format it can be, but it doesn’t have to be.

The success of your open data platform depends on how useful people will find it.

How do I know if it’s useful?

A good rule of thumb for “good open data” – and by “good”, I mean “easy for people to use” – is something I’ve seen referred to as “The P Test”, which can be paraphrased as:

“You can do something interesting with it – however simple – in an hour, in a language beginning with P.”

Making something super-simple in an hour in Perl/PHP/Python (or a similar simple scripting language that doesn’t begin with P, like Ruby or JavaScript) is a good first goal for an open data set. If a developer can’t do something simple in that little time, why would they spend longer really getting to grips with your information? This, for me, is a problem with RDF: it’s very representative of information, as a data format, but really, it’s bloody hard to use. If I can’t do something trivial in an hour, I’m probably going to give up.
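In that spirit, here’s the sort of hour-one script the P Test has in mind – a minimal Python sketch that pulls down a CSV and does one trivially interesting thing with it. The URL and column name are placeholders, not a real dataset:

```python
import csv
import urllib.request

# Hypothetical open-data endpoint - swap in a real CSV URL.
URL = "https://example.org/open-data/productions.csv"

with urllib.request.urlopen(URL) as response:
    rows = list(csv.DictReader(response.read().decode("utf-8").splitlines()))

# Something simple but interesting: how many productions per year?
counts = {}
for row in rows:
    counts[row["year"]] = counts.get(row["year"], 0) + 1

for year, n in sorted(counts.items()):
    print(year, n)
```

Fifteen-odd lines, well inside the hour. If your data makes this hard, it isn’t really open yet.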

What are the benefits of open data?

The big benefit of open data is that it gets your “stuff” in more places. Your brand isn’t a logo, and it isn’t a building; it’s this strange hybrid of all manner of things, and your information is part of that. That information might be a collection, or a catalogue, or a programme. Getting that information in more places helps spread your brand.

As well as building your profile, open data can also build collaboration and awareness. I can build something out of someone else’s information as a single developer messing around, sure – but I can also build products around it that stand alone, and add value.

[Image: Schooloscope]

For instance, Schooloscope. Schooloscope looks at data about UK schools and puts it together to give you a bigger picture. A lot of reporting about schools focuses on academic performance. Schooloscope is more interested in a bigger picture, looking at pupil happiness and change over time. We built the site around DFE data, Edubase data, and Ofsted reports. We’re building a product in its own right on top of other people’s data, and if the product itself is meaningful and worthwhile… then that’s good both for the product and for the source data – not to mention that data’s originators.

But for me, the biggest thing about open data is: it helps grow the innovation culture in your organisation.

The number-one user of open data should be you. By which I mean: if your information is now more easily accessible via an API (for instance), it makes it easier to build new products on top of it. You don’t have to budget for building interfaces to your data, because you’ve done it already: you have a great big API. So the cost of innovation goes down.

(A short note on APIs: when you build an API, build good demos. When I can see what’s possible, that excites me, as a developer, to make more things. Nothing’s worse than a dry bucket of data with no examples.)

Similarly: the people who can innovate have now grown in number. If you’ve got information as CSV – say, your entire catalogue, or every production ever – then there’s nothing to stop somebody armed with Excel genuinely doing something useful. So, potentially, your editorial team, your marketing team, your curators can start exploring or using that information with no-one mediating, and that’s interesting. The culture begins to move to one where data is a given, rather than something you have to request from a technical team that might take ages.

And, of course, every new product that generates data needs to keep making it open. Nothing’s worse than static open data – data that’s 12 or 18 months old, and gets updated once a year as part of a “big effort” – rather than just adding a day to a project to make sure its information is available via the API.

What’s the benefit for everyone else?

This is just a short digression about something that really interests me. Because here’s the thing: when somebody says “open data”, and “developers using your information”, we tend to imagine things like this:

[Image: a map covered in red location dots]

Schuyler Erle called the above kind of map “red dot fever”: taking geolocated data and just putting it all on a map, without any thought. This isn’t design, this isn’t a product, this is just a fact. And it’s about as detached from real people as, to be honest, the raw CSV file was.

So I think one thing that open data allows people to do is make information human-scale. By which I mean: make it relevant, make it comprehensible, move it from where the culture might be to where *I* am.

And that lets me build an ongoing relationship with something that might have been incomprehensible.

I should probably show you an example.

[Image: the Tower Bridge Twitter bot]

This is a Twitter bot that I built. It tells you when Tower Bridge is opening and closing. I stole the data from their website.

Or rather: Tower Bridge itself tells you when it’s opening and closing. Things on Twitter talk in the first person, so it should be itself. It becomes another voice in my Twitter stream, not just some bot intruding like a foghorn.

It exposes a rhythm. I built it because I used to work near Tower Bridge – I saw it every day. I liked the bot most when I was out of London; I’d see it opening and closing and know that London was still going on, still continuing. It has a silly number of followers, but not many of them interact with it daily. And yet – when you do, it’s useful; some friends found it helpful for reminding them not to leave the office for a bit.

And: you learn just how many times it opens and closes – not as a number, but viscerally, by seeing it message you.
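A bot like this needs very little machinery. As a rough illustration only – the feed URL and its JSON shape are invented, and the posting step is left as a stub rather than tied to any particular Twitter library – the whole loop is something like:

```python
import json
import time
import urllib.request

FEED_URL = "https://example.org/tower-bridge/lifts.json"  # hypothetical feed

def fetch_lifts():
    """Fetch the scheduled bridge lifts (assumed: a JSON list of dicts)."""
    with urllib.request.urlopen(FEED_URL) as response:
        return json.load(response)

def tweet(message):
    """Stub - wire this up to whichever Twitter client you like."""
    print(message)

while True:
    now = time.time()
    for lift in fetch_lifts():
        if 0 <= lift["opens_at"] - now < 60:   # opening in the next minute
            tweet("I am opening for the " + lift["vessel"] + ".")
        if 0 <= lift["closes_at"] - now < 60:  # closing in the next minute
            tweet("I am closing after the " + lift["vessel"] + ".")
    time.sleep(60)
```

Almost all of the design lives in the voice – the first person – rather than in the code.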

[Image: the Low Flying Rocks Twitter bot]

This is Low Flying Rocks by my friend Tom Taylor. It’s a bot that scrapes NASA data about asteroids passing within 0.2 AU of Earth (an AU being the distance from the Earth to the Sun). That’s quite close! What you discover is a) there are lots of asteroids passing quite close, and b) we know that they’re there. You learn both about the universe, and a little bit about our capacity to understand it. And you learn it not in some big glut of information, but slowly, as a trickle.

It feels more relevant because it’s at my scale.

And that leads to my final point.

Synecdoche

I want to talk about synecdoche, because I think that’s what this kind of Twitter bot is.

Synecdoche’s a term from literature, best explained as “the part representing a whole”. That’s a terrible explanation. It’s better explained with some examples:

“A hundred keels cut the ocean”; “keel” stands for “ship”. “The herd was a hundred head strong”; “head” stands for “cow”.

So: for me, Tower Bridge is synecdoche, for the Thames, for London, for the city, for home. Low Flying Rocks is synecdoche not only for the scale of the universe, all the activity in the solar system, the earth’s place in that – but also for NASA, for science, for discovery.

Synecdoche allows you to make big, terrifying data, human-scale.

I was thinking, to wrap this session up, about a piece of data I’d like if I was building a Twitter bot, and I decided that what I’d love would be: what the curtain at the Royal Opera House was doing.

[Image: the curtain at the Royal Opera House]

It sounds boring at first: it’s going to go up and down a few times in a performance. That means once an evening, and perhaps the odd matinee.

But it’s also going to go up and down for tech rehearsals. And fire tests. And who knows what else. It’s probably going up and down quite a lot.

And, as that burbles its way into my chat stream, it tells me a story: you may only think there’s a production a day in the theatre, but really, the curtain never stops moving; the organisation never stops working, even when you’re not there. I didn’t learn that by reading it in a book; I learned it by feeling it, and not even by feeling all of it – just a tiny little bit. That talking robot told me a story. This isn’t about instrumenting things for the sake of it; it’s about instrumenting things to make them, in one particular way, more real.

Yes, from your end, it’s making APIs and CSV and adding extra functionality to existing projects that are probably under tight budgets. But it allows for the things you couldn’t have planned for.

Open Data allows other people to juxtapose and invent, and tell stories, and that’s exciting.

Light Painting with an HTC Desire

Janine Pauke has been emulating the light-painting technique we used in Making Future Magic. Instead of an iPad, she’s been using her mobile phone, and slicing her own 3D models up.

We found her pictures on Flickr yesterday and were delighted.

Her results are just lovely. A small, neon spaceship flies through a house; the otherworldly glow of the phone’s screen is juxtaposed with the warm tungsten bulbs of the everyday world.

And, of course, by painting in the world, you capture all the details of the world in the background. A bemused cat by the stairs; the bright lights above a stove; a blurry arm, dragging the phone through the air.

It’s great to see someone else using the technique so effectively. Beautiful pictures, Janine!

All photographs © Janine Pauke.

Quinn Norton on cyborgs

To celebrate the 50th anniversary of the word “cyborg” entering English, Tim Maly is running the #50cyborgs project: 50 essays about cyborgs.

Quinn Norton just posted hers, 50 years of cyborgs: I have not the words.

She starts like this:

For a sense of place to my moment, I will tell you I am on a wireless keyboard, swinging on a homemade swing on the first floor in the three story high living room of the person that would be my it’s complicated on Facebook if I had a Facebook.

My computer itself is on the second floor. As I type these words into the air I have no way of knowing for sure that they are not ephemeral, nothing to confirm my progress and therefore distract me from my thoughts. I strongly suspect that for all the weirdness of the moment, they are (in fact) among the least ephemeral words penned by mankind.

My emphasis. Awesome.

Then, birth control pills: The modified were women, and the environment was men.

Then, quoting theory: Cyborgs not only disrupt orderly power structures and fixed interests but also signify a challenge to settled politics, which assumes that binary oppositions or identities are natural distinctions.

Then, I don’t think we’ll ever notice the age of cyborgs, because we do these things one at a time.

Quinn ends by looking for new language, for ways to talk about the world of cyborgs we already live in, and the kind of un-cyborgs coming into being that we didn’t expect.

Read it all. This is terrific.

Friday Links: Screens In The World

For this Friday, a selection of links from around the studio about screens-in-the-world.

This video is the output of the TAT Open Innovation project – an exploration of the future of screen technology. Of course, more than ever, “screen” is becoming interchangeable with “device”, as this video explores the actions and interactions made possible by new kinds of device, both mobile and static.

And here’s Freescale Semiconductor’s vision of a screen-driven future. Smart mirrors and see-through tablets are increasingly popular tropes of the future right now.

[Image: transparent coffee-table screen from Iron Man 2]

[Image: transparent handheld screen from Iron Man 2]

Two more examples of transparent screens – one portable, one embedded in the environment – from Perception’s work on the visual effects for Iron Man 2. Such tropes aren’t just limited to concept videos; they’re also a part of popular culture.

Chris O’Shea’s Hand From Above makes a playful use of giant, public screens. These screens are so often passive, broadcasting devices. It’s strange and jarring – in an exciting way – to see them interacting with us. It’s like they can see.

Keiichi Matsuda’s Domestic Robocop envisages an augmented-reality future where the augmentation outweighs the reality. Practically every surface in Matsuda’s imagined kitchen has the capacity to become a screen – most of which end up displaying advertising, generating income for the homeowner.

There’s an overlap I’m beginning to see here: between “screens everywhere”, “everything being a screen”, and what we’re currently calling augmented reality. Thinking on that, I can’t help but return to this lovely video from our friend and collaborator Timo Arnall. It doesn’t matter how the map appears on the street. For the woman on the bench, the ground in front of her is the most sensible place for the map to appear. Large pieces of information can make good use of large spaces. Why not, then, make the “screen” as big as possible, and use the environment itself?

Making Future Magic: the book

There were an awful lot of photos taken for the Making Future Magic video that BERG and Dentsu London launched last week; Timo reckons he took somewhere in the region of 5,500 shots. Stop-frame animation is a very costly process in the first instance, but as the source we were shooting was hand-held (albeit with locked-off cameras) and had only the most rudimentary motion control (chalk lines, black string and audio progress cues), if a frame was poorly exposed, obscured or fumbled, it left the sequence largely unusable. This meant that a lot was left on the cutting room floor.

In addition, we amassed a stack of incidental pictures of props, setups, mistakes, 3D tests and amphibious observers during the film’s creation.

Clicking through these pictures, it was clear that a book collecting some of them, offering little behind-the-scenes glimpses alongside the finished graded stills used in the final edit, was the way forward. As well as offering a platform for some of the shots that didn’t make the final cut, the static prints want to be pored over, allowing the finer details and shades (the animations themselves had textures and colours burnt into them prior to shooting, to add a disruptive quality) to come through.

Our copies arrived today from Blurb. The print quality and stock are fantastic – especially considering it’s an on-demand service – and for us it’s great to have a little summary of a project that doesn’t require any software or legacy codecs to view, and will remain ‘as is’. We’ve made the book available to the public in two formats; you can get your hands on the hardcover edition here, and the softcover here.

More images of the book are up here.

Week 276

Each week, Kari spends 5 minutes with each person in the studio recording what they’ve been up to. We do this so nobody has to keep time-sheets. Here’s my week.

Last Tuesday (the 14th), we launched the short film Making Future Magic. It hit 400,000 views in 2 days (it’s currently over double that). The video was picked up by Gizmodo, Stephen Fry, and William Gibson. I wasn’t on the film team, but helped with the launch preparation and saw it come together. The day Cam, Timo and Jack hit on the techniques that went on to become stop-motion light painting, it was electric just to have them in the room.

Also last Tuesday afternoon, we had the kick-off meeting for Project Blacklight. It has been slow to start, this one, as it’s a pretty unusual enterprise for us. One quirk is that the financials aren’t completely fixed yet, and they have to be before I continue conversations with potential backers and advertisers. The print tests and quotes over the next couple of weeks will firm those up. Blacklight should make for an exciting start to 2011.

On Wednesday I had a meeting with a potential new client with Matt Jones. This particular client is interested in our product invention workshop, which we run either standalone or as a prelude to pretty much all our design work. It’s 3 days of intense knowledge download, concepting and co-creation, and sketching. The client ends up with around 5 “microbriefs,” which is what we call the sketches and descriptions of the products or services we come up with around their business and brief. They then take those briefs off to their existing agencies and internal teams, or ask us to make a proposal for one or more of them. (BBC Dimensions started this way, one of a half dozen products to come out of an invention workshop aimed at history storytelling and digital.)

I had a catch-up with Nick over lunch, covering everything from my current thoughts about the studio’s direction, to his progress meeting iOS developers, and what weird ideas are tickling him at the moment (I’ll make sure our proposals steer in that direction). It was really good. So I’m going to spend 45 minutes with everyone in the studio, individually, every two weeks on Wednesdays. It’s funny how, even in a small room, you can miss chances to really spend time together.

Jack and Matt J had a long-anticipated getting-to-know-you meeting with another possible client in the afternoon, and we spent an hour after that chewing over possibilities.

But mainly on Wednesday I was working on my talk for the Do Lectures, which was in Wales. I spoke on Thursday evening, and went from sci-fi, to the early years of electrification, to the idea that is really making me bubble at the moment: Fractional A.I. This riffs off Dave Winer’s application of fractional horsepower to the Web, where he says that new products can be made by taking an old one and scaling it down.

What if we had fractional artificial intelligence? This is another way of saying Matt J’s maxim to Be As Smart As a Puppy, and also a topic I covered in my Mobile Monday Amsterdam talk What comes after mobile. It’s a topic I’m fleshing out.

Thursday and Friday were talks, walks in the Welsh countryside (there’s a beautiful river there, and you can take a short hike up the gorge – lots of ancient woodland and slate landscapes), late-night conversations, and inspiration. You should watch the 2010 videos when they’re up.

Whilst I was away, a project proposal was accepted, and we’ll start that project off this week.

Euan Semple gave me a lift back to London on Sunday night, and I waited at Slough railway station for a train. While there, I found a stuffed dog in a box. The dog is called “Station Jim,” and he used to raise money for charity. He was quite a character by all accounts, and died in the closing years of the 1800s. I mentioned Station Jim on Twitter… and @stationjim replied! Fractional A.I. indeed. We had a little chat.

Monday, yesterday, we had a kick-off meeting for the next stage of Project Barringer. Andy is working with us a day a week to produce a pretty significant strand of the project. It’s nicely complex – lots of different skills and people involved – and a good blend of design and hard tech. But risky. So the next two stages are: prototype; detailed costings for production. We’ll have to do some pretty serious analysis at every stage of this one.

In the afternoon I caught up on a few projects. I wanted to get an update on the next film (it’s going well — the team have just been meeting to discuss the last few bits of copy), and Tom and Matt B have been working on league tables for Schooloscope and those are tantalisingly close now. I went out with Jack in the evening to run through contracts. After 40 minutes discussing “worst case scenarios” I got home a bit grouchy. It’s funny the ways in which work affects your personal life. Not just emails arriving late at night, feeling tired from working hard, or elated after a launch, but subtle emotional spillover. I try to keep an eye on that. I’m undecided yet whether a high level of self-knowledge is an advantage or hindrance for the kind of invention and design we do. But it’s important for general wellbeing.

Which brings us to today.

This morning we’ve had our All Hands, during which we had our first project updates from active new product development. These projects are like invisible people, so they deserve to have their say about their week’s activity.

I’ve set up, with Kari, an old laptop to run Dropbox. We’ve pretty much entirely shifted to Dropbox for file-sharing from our in-studio server, but that means our archives aren’t up to date. So: archiving.

A few copies of the Making Future Magic book arrived in the post (print on demand; designed by Cam. Very pretty). And I pointed Matt B at Tunecore because we’d like to put the film music on iTunes.

Jack and Matt J are at a workshop on Wednesday and Thursday, so I’ll help them prep that later. I think I’m sneaking in a massage after lunch (lunch is with some iOS developers, so we can keep them in mind for future projects). And this afternoon and over the rest of the week, I am way behind on keeping project proposals moving through the pipeline, so I want to concentrate on that. There are a bunch. Oh, and emails: way behind on those too. I have a little list of people to whom I really owe a Hello.

Last: Jack, Matt J and I were going to go out for dinner with an Interesting Person tonight, but that’s been moved to tomorrow. I can still make it — I’m not sure about the other two.

Otherwise, generally thinking about what’s happening next, and seeing where I can nudge or smooth the way as appropriate. To be honest, that’s most of my time.

So that’s my week!

Making Future Magic – a bit about the music

Some of you might have seen this film we released with our friends from Dentsu London the other day. At the time of writing, it’s had over half a million views. Whoa.

Also, a few people have been asking about the music we used, so I thought I’d chat a little bit about it. We wrote it ourselves, here in the studio. I pasted it all together, with direction and input from Schulze, Timo, Beeker and the rest of the Dentsu crew.

Some of the best bits about working at BERG are how everyone, despite having particular specialist skills, gleefully ignores boundaries, disciplines, labels and predefined processes, and allows themselves space to just run with things when they get excited. Deciding to do the music for the first Making Future Magic film ourselves was one of those moments.

“Yeah, so who are your influences then?”

About ten days ago, after the animation had reached a final(ish) edit, I happened to overhear Schulze, Timo and Cam batting a few ideas around about potential soundtrack music. I hadn’t really been involved in the project so far, but at this point I dropped what I was doing, went a bit Barry from High Fidelity, and started throwing some MP3s at them.

Over that afternoon, we chewed on some of Aphex Twin’s prepared piano robotics; the sinister, codeine-fuelled fizzes of Oneohtrix Point Never; the anodyne, bleepy piano washes of Swod and Jan Jelinek; the fuzzy felt collages of The Focus Group; the tranquil-yet-demented drone of Mandelbrot Set; Finnish free jazz kraut-metallers Circle; ultra-hip dub-glitchers Mount Kimbie; the electric guitar symphonies of Glenn Branca; some Eno-squelched dulcimer by Laraaji; downright weirdness by Basil Kirchin, and of course the obligatory Reichs, Glasses and Rileys. Maybe a dash of Yellow Magic Orchestra at the end, too, just for sheer melodic charm and natty suits.

That weekend, on a long train journey, and with a few hours to kill, I was listening back to the tunes we’d picked out, and thought I’d sketch out some musical ideas to accompany a few clips of the current edit, just as a little exercise. Like loads of people I know, I do enjoy a bit of noodling around with things like Ableton Live, Logic, Beatmaker on the iPhone and so on. So I had a crack at it.

On the Monday morning, everyone had a listen, and nudged me to do a little more, just to see where it went. Gradually, things began to firm up into a “proper job”. I’d never written music for a film (or anything else, for that matter) ever before, but hey, everyone knows the best way to learn something is simply to set a risky week-away deadline involving potential public ridicule. So here went nothing.

Designing the Music – first sketches

We all know that a lot of the unseen (yet most satisfying) work in design goes into getting rid of things. Tidying up. Wielding Occam’s Razor. Making things unnoticeable. Getting things under the hood working so well you forget they’re there. All that good stuff. There are obvious parallels to this in music, but I guess this applies even more so to making soundtracks.

Not your rousing, whistle-able belters of your Williamses or Morricones; I’m thinking more about Bernstein’s work for the Eames films, John Cameron’s haunting soundtrack to Kes, anything on the KPM label, or, say, Clint Mansell, whose Moon soundtrack got quite a rinsing here in the studio last year. There’s a quiet unselfishness to this type of music which I’m really drawn to – it’s kind of half-there, beckoning you to invent accompanying stories and pictures in your head, and sometimes it’s at its best when you don’t really notice it. I imagine this rings lots of little bells in the heads of anyone involved in design or making things – it definitely does for me.

As I say, I’d never really written any music before, so I pretty much used these little scraps of what I know about design (and what I love about film music) as a way in. Finding the grain of a material and playing with it; hitting on an idea and not getting in the way of it; looking for patterns; making references to other, familiar concepts; using broad brush strokes first, then (quite literally) tuning and polishing – all the usual approaches, really. The same way we’d work with any (im)material here at BERG.

So, here are the first three sketches I did. The visual glitchiness of the animation was the main thing I wanted to complement, so I went outside, made some little field recordings on my phone, chucked them all into the computer, then pressed record and left it on. I assembled the samples into a few rhythms, teased out little patterns of pitch, timbre and so on, and eventually, after a few hours, out popped a few bits and pieces. It took me about 6 hours of jamming to come up with three one-minute ideas. Told you I was new at this.

That was a bit Chris Isaak meets Twin Peaks. Bland. Nah. Next.

A po-faced Radiohead rip-off. Cheesy moody piano. Banal drum-and-bass-by-numbers rhythm. Overall, nah.

We all sat up at this one. Warm, bubbly ARPy synths; Reichy scales and patterns; plinky, poppy glockenspiels; pentatonic scales giving off a subtle whiff of J-Pop (which might sit nicely with the Dentsu folks), and it had the most potential to grow melodically. Tick!

Building out the musical structure

After that, it was time to work out how this sketch would evolve to fit across the whole film. The first task was to build the scaffolding we wanted to hang everything off, by translating the timing of each visual cut into bars and beats, which I did with a metronome and a few big sheets of paper. I grabbed Schulze, talked about where we wanted the main narrative pivots to be, and stuck those on post-it notes.
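The arithmetic behind that translation is simple enough that you could script it rather than do it on paper. A rough sketch, assuming 4/4 time and an invented tempo – neither figure is from the actual film:

```python
TEMPO_BPM = 120        # assumed tempo, not the film's actual one
BEATS_PER_BAR = 4      # assumed 4/4 time

def cut_to_bar_and_beat(seconds):
    """Map a visual cut's timestamp onto a (bar, beat) position."""
    total_beats = seconds * TEMPO_BPM / 60
    bar = int(total_beats // BEATS_PER_BAR) + 1   # bars count from 1
    beat = total_beats % BEATS_PER_BAR + 1        # beats count from 1
    return bar, beat

for cut in [0.0, 7.5, 31.2]:                      # invented cut times
    bar, beat = cut_to_bar_and_beat(cut)
    print(f"cut at {cut:5.1f}s -> bar {bar}, beat {beat:.1f}")
```

The metronome-and-paper version did the same job, just more sociably.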

Since we had three sections to work with (Making, Future and Magic), everything pretty much finished itself after that. We’d built the scaffolding, so now all that was needed was the rest of the building – from the main zones down to furniture, textures, colours and so on. I blocked in the main themes and some large areas of texture, then just worked my way down to polishing little details. I don’t know much about how composers work, but this bit wasn’t all that different from how we usually get from whiteboards and post-its down to pixels and working code.

Jack and Timo were still making edits to the film as I was composing, so I needed to leave a bit of slack here and there to adjust to their timings. I made little modular loops of different lengths (3, 4, and 5 notes, in different rhythms, at different speeds), which meant I could cut or extend little phrases here and there, ignoring strict time signatures as needed. Again, just simple, common sense stuff, really.

The final mix

After 3 or 4 days of tuning and polishing, we had an overall structure everyone was pretty happy with, so we got in touch with the chaps at Resonate to help us mix and master everything – the proper, detailed tuning. Big big thanks to Liam and Andy for being super helpful at such short notice! Aside from treating a novice like me very kindly, they brought a level of clarity and depth to the mix way beyond what my ears had previously heard. Here are the before and after versions. Spot the difference!

Before mixing:

Mixed and mastered:

And of course here’s the finished film.

Overall, the music took us about 6 or 7 days. A mere blip compared to the weeks of late nights that went into the animation, but a nice example of how when the studio is simmering nicely, everyone’s interests, hobbies and hunches tend to bubble to the surface and happily get put to use, all in the name of doing Good Stuff.
