This website is now archived.

Blog posts tagged as 'materialexploration'

New Nature: a brief to Goldsmiths Design students

"Death To Fiction" minibrief, Goldsmiths Design

The project we ran in the spring with the Goldsmiths Design BA course was not ‘live’ in the sense of being shaped by a commercial client’s needs, but it was an approximation of the approach we take in the studio when working with clients on new product generation and design consultancy.

It was also an evolution of a brief that we have run before at SVA in New York with Durrell Bishop – but with the luxury of having much more time to get into it.

Our brief was in two parts – representing techniques that we use in the early stages of projects.

The first half: “Death To Fiction” stems from our love for deconstructing technologies, particularly cheap everyday ones to find new opportunities.

It’s a direct influence from Durrell – and the techniques he used while teaching Schulze, Joe Malia and others at the RCA. It’s also something very familiar to many craftspeople: having at least some knowledge of a lot of different materials and techniques, which can then inform deeper investigation, or enable more confident leaps of invention later in the process. It owes a lot, too, to our friend Matt Cottam’s “What is a Switch?” brief that he’s run at RISD, Umea, CIID and Aho…

We asked the students to engage with everyday technology and manufactured, designed goods as if it were nature.

“The Anthropocene” has been proposed by ecologists, geologists and geographers to describe the epoch marked by the domination of human influence on the Earth’s systems – seams of plastic kettles and Tesco “Bags For Life” will be discovered in millions of years’ time by the distant descendants of Tony Robinson’s Time Team.

There is no split between nature and technology in the anthropocene. So, we ask – what happens if you approach technology with the enthusiasm and curiosity of the amateur naturalist of old – the gentlemen and women who trotted the globe in the last few centuries with sturdy boots, travel trunks and butterfly nets – hunting, collecting, studying, dissecting, breeding and harnessing the nature around them?

The students did not disappoint.

Like latter-day Linnaeans, or a troop of post-digital Deptford Darwins, they headed off into New Cross and took the poundstretchers and discount DIY stores as their Galapagos.

After two weeks I returned to see what they had done, and was blown away.

Berg: New Nature brief

Chewing gum, alarm clocks, key-finders, locks, Etch A Sketches, speakers, headphones, lighters, wind-up toys and more – all had been pulled apart, scrutinised, labelled, diagrammed, tortured, tested, reconstructed…

"Death To Fiction" minibrief, Goldsmiths Design

"Death To Fiction" minibrief, Goldsmiths Design

Berg: New Nature brief

And – perhaps most importantly – I had the feeling that the materials had not only been understood, but that the invention in communicating what they had learnt displayed a confidence in this ‘new nature’ that would really stand them in good stead for the next part of the project, and for future projects too.

Berg: New Nature brief

It was all great work, and lots of work – the smile didn’t leave my face for at least a week – but a few projects stood out for me.

"Death To Fiction" minibrief, Goldsmiths Design

"Death To Fiction" minibrief, Goldsmiths Design

Charlotte’s investigations of disposable cameras, Helen’s thought-provoking examination of pregnancy tests, Tom’s paper speakers (which he promised had worked!), Simon’s unholy pairings of pedometers and drills, Liboni and Adam’s thorough dissections of ultrasonic keyfinders, and the brilliant effort to understand how quartz crystals regulate time by baking a crystal, wiring it to a multimeter and whacking it with a hammer!

"Death To Fiction" minibrief, Goldsmiths Design

Hefin Jones’ deconstruction of the MagnaDoodle, and his (dramatic, hairdryer-centric) reconstruction of its workings, was a particularly fine effort.

The second half of the brief asked the students to assess the insights and opportunities they had from their material exploration and begin to combine them, and place them in a product context – inventing new products, services, devices, rituals, experiences.

We’ve run this process with students before in a brief we call “Hopeful Monsters”, which begins with a kind of ‘exquisite corpse’ mixing and breeding of devices, affordances, capabilities, materials and contexts to spur invention.

We’d pinched that drawing technique way back in 2007 for Olinda from Matt Ward, head of the design course at Goldsmiths, so it only seemed fitting that he should lead that activity in a workshop during the second phase of the brief.

Berg: New Nature brief

The students organised themselves into teams for this part of the brief, and produced some lovely, varied work. What was particularly pleasing to me was that they remained nimble and experimental in this phase of the project, not seizing upon a big idea and then dogmatically trying to build it, but allowing the process of making to inform the way they achieved the goals they set themselves.

We closed the project with an afternoon of presentations at The Gopher Hole (thanks to Ossie and Beatrice for making that happen!) where the teams presented back their concepts. All the teams documented their research online as they went, and many opted to explain their inventions in short films.

Here’s a selection:

A special mention to the ‘Roads Mata’ team, who for me really went the extra mile in creating something believably buildable and desirable – to the extent that I think my main feedback to them was that they should get on KickStarter.

There were sparks of lovely invention throughout all the student groups – some teams had more trouble recognising them than others, but as Linus Pauling once said “To have a good idea you have to have a lot of ideas”, and that certainly wasn’t a problem.

I wonder what everyone would have come up with if we had a slightly longer second design phase to the project, or introduced a more constrained brief goal to design for. It might have enabled some of the teams to close in on something either through iteration or constraint.

Next time!

As it was, I hope that the methods the brief introduced stay with the group, and that the curiosity, energy and ability to think through making that they obviously all have grow in confidence and output through the coming years.

They will be a force to be reckoned with if so.

The Hopeful Monsters of New York


We’re wrapping up our week teaching at SVA on the interaction course tomorrow.

It’s been an amazingly fun week – with an excellent group of students throwing themselves into material explorations, generative drawing, prototyping behaviour and surfaces and more.

It’s like Sterling’s cave of Taklamakan, made from post-it notes and acetates.

We’ve set up a little blog for the week where we’re posting the work as it’s produced, and have put the briefs etc. up there too.

Havasu: a material exploration of conversational interfaces


This November, I made a robot. It’s called Havasu.

Havasu is a robot that helps you find out what films are on when, and then organise your friends to go. You talk to Havasu through instant messenger.

The purpose of the project was to be a material exploration into conversational interfaces. The Havasu project page explains more:

The goal of the project was to explore ways of interacting that aren’t menus or GUIs; manners of interaction more like dialogue, or polite listening, facilitated by agents or AIs that could not really be called “smart”. It’s very much in line with the notions of “Fractional AI” and “BASAAP” that we are interested in.

I’ve written about material exploration before. The previous explorations I’ve conducted in code were all about large datasets – TV listings for Shownar, schools for Schooloscope.

Havasu isn’t an exploration of data, though: it’s an exploration of a type of UI. We were seeking to pin down the qualities that make an interface conversational. Was it the use and understanding of natural language? Was it the way a conversation flowed? Was it that it was conducted in a more casual manner than an interface that constantly demands your full attention? The easiest way to find those answers was to build our own conversational UI.
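To make the distinction concrete, here is a minimal sketch of the kind of rule-based, stateful reply loop a conversational interface needs – unlike a menu, a follow-up question only makes sense because the bot remembers what you were talking about. The class, rules and listings below are hypothetical illustrations, not Havasu’s actual code:

```ruby
# A toy conversational interface: pattern-matched replies plus just
# enough state (the film under discussion) for a dialogue to flow.
class FilmBot
  def initialize(listings)
    @listings = listings   # e.g. { "Moon" => ["18:30", "21:00"] }
    @topic    = nil        # the film currently under discussion
  end

  # Produce a reply to one incoming message, remembering context so
  # "what times?" works without the user restating the film.
  def reply(message)
    case message
    when /when is (.+?)\??\z/i
      @topic = Regexp.last_match(1)
      times_for(@topic)
    when /what times\??/i
      @topic ? times_for(@topic) : "Which film do you mean?"
    else
      "Sorry, I only know about film times."
    end
  end

  private

  def times_for(title)
    times = @listings[title]
    times ? "#{title} is on at #{times.join(' and ')}." :
            "I can't find #{title} in the listings."
  end
end

bot = FilmBot.new("Moon" => ["18:30", "21:00"])
puts bot.reply("When is Moon?")   # the question sets the topic
puts bot.reply("what times?")     # answered from remembered context
```

Even a sketch this small surfaces the questions in the paragraph above: how much natural language matters, and how much is really about conversational state.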

Havasu was built in three weeks. Matt W and I agreed on a fairly simple set of goals for the project:

  • it had to be a conversational interface
  • about film screenings
  • that could be released publicly

You can talk to Havasu yourself: it’s available from any Jabber-compatible instant messaging client (such as Google Talk). It’s very much not a complete product; it has some rough edges, to say the least.

The material exploration within Havasu wasn’t just writing the code to build the bot: it was interacting with that interface, seeing where it succeeded and failed, and using what we’d learned by both making Havasu and playing with it to determine what makes an interface conversational.

The conclusions we came to – and the suggestions for how the Havasu bot could be improved – are on the Havasu project page.

Hopeful Monsters and the Trough Of Disillusionment

Last Saturday, Matt Webb and I hosted a short session at O’Reilly FooCamp 2010, in Sebastopol, California.

The title was “Mining the Trough of Disillusionment”, referring to the place in the Gartner “Hype Cycle” where we find inspiration – where technologies languish once they have become mundane, cheap and widely available, but are no longer seen as exciting ‘bullet-points’ on the side of products.

For instance, RFID was down in the trough when Jack and Timo did their ‘Nearness’ and ‘Immaterials’ work, and many of the components of Availabot are trough-dwellers, enabling them to be cheap and widely-available for both experimentation and production.

We weren’t presenting the Gartner reports as ‘science’, but they do offer an interesting perspective on the socio-technical ‘weather’ that surrounds us and condenses into the products and services we use.

In the session we examined the last five years of the hype cycle reports Gartner has published – it’s kind of fascinating: there are some very strange decisions about what is included and excluded, and about how buzzwords morph over time.

After that we brainstormed with the group which technologies they thought had fallen, perhaps irrevocably, into the trough. It was fun to get so many ‘alpha geeks’ thinking about gamma things…

Having done so – we had a discussion about how they might breed or be re-contextualised in order to create interesting new products.

These “hopeful monsters” often sound ridiculous on first hearing, but when you pick at them they illustrate ways in which a forgotten or unfashionable technology can serve a need or create desire.

Or they can expose a previously unexploited affordance or feature of the technology – that was not brought to the fore by the original manufacturers or hype that surrounded it. By creating a chimera, you can indulge in some material exploration.

The list we generated is below, if you’d like to join in…

It was a really fun session that threw up some promising avenues – and some new product ideas for us… Thanks to all who attended and participated!

"Trough of disillusionment" session, Foo10

  • Mobsploitation (a.k.a. Crowdsourcing…)
  • Artificial Intelligence
  • <512mb thumbdrives
  • Blinking Lights (esp. in shoes)
  • Singing Chips (esp. in greetings cards)
  • Desktop Web Apps
  • Cameras
  • Accelerometers
  • MS Office Apps
  • Physical Keyboards
  • Mice
  • Cords & Wires in general
  • Non-Smart Phones
  • RSS
  • Semantic Web
  • Offline…
  • Compact Discs
  • Landline Phones
  • Command Lines & Text UIs
  • Privacy
  • P2P
  • MUDs & MOOs
  • Robot Webcams & Sousveillance
  • Google Wave
  • Adobe Flash
  • Kiosks
  • Municipal Wifi
  • QR Codes
  • Pager/Cellphone Vibrator motors
  • Temporary Autonomous Zones

Toiling in the data-mines: what data exploration feels like

Matt’s mentioned in the past few summaries of weeks that I’ve been working on ‘material exploration’ for a project called Ashdown. I wanted to expand a little on what material exploration looks like for code, and what it feels like to me, because at times it is strange and foreign territory. This is my second material exploration of data for BERG, the first being at the beginning of the Shownar project.

There are several aspects to this post. Partly, it’s about what material explorations look like when performed with data. Partly, it’s about the role of code as a tool to explore data. We don’t write about code much on the site, because we’re mainly interested in the products we produce and the invention involved in them, but it’s sometimes important to talk about processes and tools, and this, I feel, is one of those times. At the same time, as well as talking about technical matters, I wanted to talk a little about what the act of doing this work feels like.

Programmers very rarely talk about what their work feels like to do, and that’s a shame. Material explorations are something I’ve really only done since I’ve joined BERG, and both times have felt very similar – in that they were very, very different to writing production code for an understood product. They demand code to be used as a sculpting tool, rather than as an engineering material, and I wanted to explain the knock-on effects of that: not just in terms of what I do, and the kind of code that’s appropriate for that, but also in terms of how I feel as I work on these explorations. Even if the section on the code itself feels foreign, I hope that the explanation of what it feels like is understandable.

Material explorations

BERG has done material explorations before – they were a big part of our Nokia Personalisation project, for instance – and the value of them is fairly immediate when the materials involved are things you can touch.

But Ashdown is a software project for the web – its substrate is data. What’s the value of a material exploration with an immaterial substrate? What does it look like to perform such explorations? And isn’t a software project usually defined before you start work on it?

Not always. Invention comes from design, and until the data’s been exposed to designers in a way that they can explore it, and manipulate it, and come to an understanding of what design is made possible by the data, there essentially is no product. To invent a product, we need to design, and to design, we need to explore the material. It’s as simple as that.

There’s a lot of value in this process. We know, at a high level, what the project’s about: in the case of Ashdown, Matt’s described it as “a project to bring great user experience to UK education data”. The high level pitch for the project is clear, but we need to get our hands mucky with the data to answer some more significant questions about it: what will it do? What will it feel like to use? What are the details of that brief?

The goals of material exploration

There are several questions that the material exploration of data seeks to answer:

  • What’s available: what datasets are available? What information is inside them? How easy are they to get hold of – are they available as formatted datasets, or will they need scraping? Are they freely available, or will they need licensing?
  • What’s significant: it’s all very well to have a big mass of data, but what’s actually significant within it? This might require datamining, or other statistical analysis, or getting an expert eye on it.
  • What’s interesting: what are the stories that are already leaping out of the data? If you can tell stories with the data, chances are you can build compelling experiences around it.
  • What’s the scale: getting a good handle on the order of magnitude helps you begin to understand the scope of the project, and the level of detail worth going into. Is the vast scale of the information what’s important, or is the ability to cherry-pick deep, vertical slices from it more useful? That answer varies from project to project.
  • What’s feasible: this goes hand in hand with understanding the scale; it’s useful to know how long basic tasks like parsing or importing take, so you can gauge the pace the application can move at and spot any blockers to a realistic application. There is lots of scope to improve performance later, but knowing the limitations of processing the dataset early on helps inform design decisions.
  • Where are the anchor points: this ties into “what’s significant”, but essentially: what are the points you keep coming back to – the core concepts within the datasets, that will become primary objects not just in the code but in the project design?
  • What does it afford?: by which I mean: what are the obvious hooks to other datasets, applications, or processes? Having location data affords geographical visualisation – maps – and also lets you explore proximity; having details of Local Education Authorities lets you explore local politics. What other ideas immediately leap to mind from exploring the data?
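The “what’s feasible” question is often answerable with a few minutes of timing. A sketch of the idea, using Ruby’s standard Benchmark module: time a basic task on a small sample and extrapolate (the CSV sample here is invented for illustration):

```ruby
# Time a basic parsing task on a 10,000-row sample, then extrapolate
# to a full-sized dataset to see whether the approach is workable.
require 'benchmark'
require 'csv'

# Build a small synthetic CSV: one header row plus 10,000 data rows.
sample = ([["name", "value"]] + Array.new(10_000) { |i| ["row#{i}", i] })
           .map { |r| r.join(',') }.join("\n")

seconds = Benchmark.realtime do
  CSV.parse(sample, headers: true)
end

rows_per_second = (10_000 / seconds).round
puts "~#{rows_per_second} rows/sec; a 2M-row dataset would take " \
     "roughly #{(2_000_000.0 / rows_per_second / 60).round(1)} minutes"
```

A number like that, however rough, tells you early whether an overnight batch job is fine or whether the import pipeline itself will be a design constraint.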

To explore all these ideas, we need to shape the data into something malleable: we need to apply a layer of code on the top of it. And it can’t just exist as code: we also need the beginnings of a website.

This won’t be the final site – or even the final code – but it’s the beginnings of a tool that can explain the available data, and help designers, developers, and other project stakeholders explore it; that’s why it’s available, as early as possible, as an actual site.

To do this, the choice of tools is somewhat important, but more important still is the approach: keeping the code malleable, ensuring no decisions are too binding, and not editorialising. “Show everything” has become a kind of motto for this kind of work: because no-one else knows the dataset yet, it’s never worth deeming things “not worth sharing”. Everything gets a representation on the site, and then informed design decisions can be made by the rest of the team.

What does the code for such explorations look like?

It’s a bit basic. Not simple, but we’re not going to do anything clever: architecture is not the goal here. It will likely inform the final architecture, and might even end up being re-used, but the real goal is to get answers out of the system as fast as possible, and explore the scale of the data as widely as possible.

That means doing things like building temporary tables or throwaway models where necessary: speed is more important than normalisation, and, after all, how are you going to know how to structure the data until you’ve explored it?
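A sketch of what a “throwaway model” can look like in Ruby: one flat, denormalised record per row, loaded straight from the raw file – no schema design, no joins, just enough structure to start asking questions. The field names and figures are invented for illustration, not Ashdown’s real data:

```ruby
# Deliberately denormalised: the LEA lives on every row rather than in
# its own table, because speed of questioning beats normalisation here.
require 'csv'

School = Struct.new(:name, :lea, :pupils)

RAW = <<~DATA
  name,lea,pupils
  Hilltop Primary,Lewisham,210
  Riverside Secondary,Lewisham,940
  Oakfield Primary,Greenwich,185
DATA

schools = CSV.parse(RAW, headers: true).map do |row|
  School.new(row['name'], row['lea'], Integer(row['pupils']))
end

# Quick answers fall out immediately, which is the whole point:
by_lea = schools.group_by(&:lea)
puts by_lea.transform_values { |s| s.sum(&:pupils) }  # pupils per LEA
```

When the exploration later reveals how the data really wants to be structured, a flat model like this is cheap to throw away.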

Also, because we’re working with very large chunks of data, it’s important that any long-running processes – scrapers, parsers, processors – are really granular, and able to pick up where they left off. My processing tasks usually do only one thing, and have to be run in order, but that’s better than one long, complex process that can’t be restarted: if that falls over in the middle, a lot of time (a valuable resource at these early stages) is wasted.
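The resumability described above can be as simple as a checkpoint file: each item is recorded as it completes, so a crash mid-run wastes only one item, not the whole batch. A sketch, with a hypothetical checkpoint filename:

```ruby
# A single-purpose, resumable processing task: skip anything already
# checkpointed, and checkpoint each item as soon as it finishes.
require 'set'
require 'tmpdir'

class ResumableTask
  def initialize(checkpoint_path)
    @checkpoint_path = checkpoint_path
    @done = File.exist?(checkpoint_path) ?
              Set.new(File.readlines(checkpoint_path, chomp: true)) :
              Set.new
  end

  # Process only items not yet checkpointed; return what was processed.
  def run(items)
    processed = []
    items.each do |id|
      next if @done.include?(id)
      processed << id                                  # real work goes here
      File.open(@checkpoint_path, 'a') { |f| f.puts(id) }
      @done << id
    end
    processed
  end
end

checkpoint = File.join(Dir.tmpdir, 'ashdown_checkpoint.txt')  # hypothetical
File.delete(checkpoint) if File.exist?(checkpoint)

first  = ResumableTask.new(checkpoint).run(%w[a b c])
second = ResumableTask.new(checkpoint).run(%w[a b c d])  # simulated restart
puts first.inspect   # all three processed on the first run
puts second.inspect  # only the new item after the "restart"
```

An overnight scrape built from steps like this can die at 3am and still only cost you the item it died on.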

It’s also important that there’s a suitably malleable interface to the data for you, the developer. For me, that’s a REPL/console of some sort – something slightly higher level than a MySQL terminal, that lets you explore codified representations of data (models) rather than just raw information. Shownar was built in PHP, and whilst it was, for many reasons, the right choice of platform for the project, I missed having a decent shell interface onto the system. On Ashdown, I’m working in Rails, and already the interactive console has made itself indispensable. For a good illustration of the succinct power of REPLs, and why they’re a useful thing to have around for data exploration, it’s definitely worth reading Simon Willison’s recent post on why he likes Redis.


Visualisation is a really important part of the material exploration process. When it comes to presenting our explorations, it’s not enough just to have big lists and vast, RESTful interfaces on top of blobs of data: that’s still not a very effective translation of the stories the data tells. Right now, we don’t need to be fussy about what we visualise: it’s worth sticking graphs anywhere and everywhere we can, just to start exploring new representations of the data. It’s also useful to start learning what sort of visual representations suit the data: some data just doesn’t make as much sense in a graph as in a table, and that’s OK – but it’s good to find out now.

Because now isn’t the time to be shaving too many yaks, when it comes to visualisation libraries and tools the ones that are fastest, or that you are most familiar with, are probably the best. For that reason, I like libraries that only touch the client-side, such as the Google Charts API, or gRaphael (which I’ve been using to good effect recently). Interactive graphs, of the kind gRaphael makes trivial, are more than just eye candy: with large datasets it’s actually really useful to be able to mouse around a pie chart and find out which slice corresponds to which value.

Visualisation isn’t just a useful lens on the data for designers; it can be hugely beneficial for developers. A recent example of the usefulness of visualisation for development work in progress comes from a video behind the scenes on Naughty Dog’s PS3 game Uncharted 2: Among Thieves. About twenty seconds in, you can see a developer playing the game with a vast amount of telemetry overlaid, reacting as he plays. It’s not pretty, but it provides an immediate explanation of how gameplay affects the processors of the console, and is clearly an invaluable debugging tool.

What data exploration feels like

It often feels somewhat pressured: time is tight, and whilst an hour spent going down the wrong alley is fine, a day spent fruitlessly is somewhat less practical. At the same time, without doing this exploration, you won’t even know what is “fruitless”. It can be frightening to feel so directionless, and overcoming that fear – trusting that any new information is the goal – is tough, but important to making progress.

It can also be overwhelming. Shownar ended up with a massive dataset; Ashdown’s is huge already. That dataset – its meaning, its structure – gets stuck in your head, and it’s easy to lose yourself to it. That often makes it harder to explain to others – you start talking in a different language – so it becomes critical to get it out of your head and onto screens.

It also feels lonely in the data-mines at times. Not because you’re the only person working on it, but because no-one else can speak the language you do; the deeper you get into the data, the harder you have to work to communicate it, and the quicker you forget how little anyone else on the project knows.

Invention becomes difficult: being bogged down in the mechanics of Making It Work often makes it hard to have creative ideas about what you can do with the data, or new ways of looking at it. Questions from others help – a few simple questions about the data open enough avenues to keep me busy all day. One thing we tried was to ensure that I made a “new graph” every day; each graph should take only about thirty minutes to code and produce, but it ensures that I don’t spend all my time writing processing or scraping code.

At times, the code you’re writing can feel a bit string-and-glue – not the robust, Quality Code you’d like to be writing as a developer. I’d like to TATFT, but this isn’t the place for it: we’re sculpting and carving at the moment, and the time for engineering is later. For now, getting it on the screen is key, and sometimes that means sacrifices. You learn to live with it – but just make sure you write the tests for the final product.

There are a lot of pregnant pauses. For Ashdown, I’ve had long-running processes running overnight on Amazon EC2 servers. Until I come in the next day, I have no idea if it worked, and even if it did work, whether or not it’ll be useful. As such, the work is bursty – there’s code, and a pause to gather results, and then a flurry of code, and then more gathering. All I’ve learned to date is: that’s the rhythm of exploration, and you learn to deal with it.

What emerges at the end of this work?

For starters, a better understanding of the data available: what there is, how to represent it, what the core concepts are. Sometimes, core concepts are immediately obvious – it’s likely that “schools” are going to be a key object in Ashdown. Sometimes, they’re compound; the core concept in Shownar turned out to be “shows”, but how the notion of a ‘show’ was represented in the data turned out to be somewhat complex. As part of these core concepts, the beginnings of a vocabulary for the application emerge.

Technically, you’ve got the beginnings of a codebase and a schema, but much of that might be redundant or thrown out in future; you shouldn’t bet on this, but it’s a nice side effect. You also might, as a side effect of building a site, have the beginnings of some IA, but again, don’t bet on it: that’s something for designers to work on.

You should also have a useful tool for explaining the project to colleagues, stakeholders, and anyone coming onto the project new – and that tool will allow everyone else to gain insight into just what’s possible with the data available. Enabling creativity, providing a tool for non-developers to explore the data, is the key goal of such exploration. And that leads into a direction and brief for the final piece of software – and it’s a brief that you can be confident in, because it’s derived from exploration of the data, rather than speculation.

And then, the invention can begin.
