
Making Future Magic: light painting with the iPad

“Making Future Magic” is the goal of Dentsu London, the creative communications agency. We made this film with them to explore this statement.

(Click through to Vimeo to watch in HD!)

We’re working with Beeker Northam at Dentsu, using their strategy to explore how the media landscape is changing. From Beeker’s correspondence with us during development:

“…what might a magical version of the future of media look like?”

and

“…we [Dentsu] are interested in the future, but not so much in science fiction – more in possible or invisible magic”

We have chosen to interpret that brief by exploring how surfaces and screens look and work in the world. We’re finding playful uses for the increasingly ubiquitous ‘glowing rectangles’ that inhabit the world.

iPad light painting with painter

This film is a literal, aesthetic interpretation of those ideas. We like typography in the world, we like inventing new techniques for making media, we want to explore characters and movement, we like light painting, we like photography and cinematography as methods to explore and represent the physical world of stuff.

We made this film with the brilliant Timo Arnall (who we’ve worked with extensively on the Touch project) and videographer extraordinaire Campbell Orme. Our very own Matt Brown composed the music.

Light painting meets stop-motion

We developed a specific photographic technique for this film. Through long exposures we record an iPad moving through space to make three-dimensional forms in light.

First we create software models of three-dimensional typography, objects and animations. We render cross sections of these models, like a virtual CAT scan, making a series of outlines of slices of each form. We play these back on the surface of the iPad as movies, and drag the iPad through the air to extrude shapes captured in long exposure photographs. Each 3D form is itself a single frame of a 3D animation, so each long exposure still is only a single image in a composite stop frame animation.
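To make the “virtual CAT scan” idea concrete, here is a rough sketch of the slicing step – not our actual production pipeline, just an illustration in Python (assuming numpy and Pillow are available): voxelise a simple stand-in form and write out one black-and-white image per cross-section, ready to be compiled into a movie for playback on the tablet.

```python
# A minimal sketch of the "virtual CAT scan" slicing idea (not the actual
# production pipeline): voxelise a simple 3D form, then write one image per
# horizontal slice. Played back full-screen on a tablet moved steadily through
# a long exposure, each slice lights up the layer of air it passes through,
# extruding the form in light.
import numpy as np
from PIL import Image

SIZE = 256     # resolution of each slice, in pixels
SLICES = 120   # number of cross-sections, i.e. frames in the playback movie

# Build a voxel grid containing a sphere as a stand-in for the real
# typography / object models.
z, y, x = np.mgrid[0:SLICES, 0:SIZE, 0:SIZE]
cx, cy, cz, r = SIZE / 2, SIZE / 2, SLICES / 2, SIZE / 3
inside = ((x - cx) ** 2 + (y - cy) ** 2 +
          ((z - cz) * (SIZE / SLICES)) ** 2) < r ** 2

for i in range(SLICES):
    # Each slice is white where the form intersects this height, black elsewhere.
    frame = (inside[i] * 255).astype(np.uint8)
    Image.fromarray(frame, mode="L").save(f"slice_{i:04d}.png")
```

Keeping the playback rate of those frames matched to a steady hand movement is what keeps the extruded form in proportion.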

Each frame is a long exposure photograph of 3-6 seconds. 5,500 photographs were taken. Only half of these were used for the animations seen in the final edit of the film.

There are lots of photographic experiments and stills in the Flickr stream.

Future reflection

light painting the city with Matt Jones

The light appears to boil because of small deviations in the path of the iPad between shots. In some shots the light shapes appear suspended in a kind of aerogel. This is produced by the black areas of the iPad screen, which aren’t entirely dark; the effect depends on the balance between exposure, the speed of the movies and the screen angle.

We’ve compiled the best stills from the film into a print-on-demand Making Future Magic book which you can buy for £32.95/$59.20. (Or get the softcover for £24.95/$44.20.)

My piece on iPad magazines for Icon’s September 2010 issue.

Icon September Issue: piece on (near-)future of digital magazines by me

Outgoing editor Justin McGuirk asked me to write a little about the near-future of digital magazines for Icon #87. In it I talk a bit about the challenges of the new context magazines find themselves in as a media form, as well as things we think we learned during the Mag+ project.

They’ve kindly allowed us to republish it here.

Since the launch of the Apple iPad six months ago, the world of digital magazines has seen fevered activity and hyperbolic punditry.

Big names such as Wired, Vanity Fair, Time and Popular Science (which our studio, BERG, helped to bring to the iPad with the Mag+ platform) have released editions into the App Store and made proclamations that it’s the future of magazines.

However, the very term “digital magazine” smacks of “horseless carriage”, Marshall McLuhan’s term for an in-between technology that is quickly obsolete. While nothing is certain about the future of any media, there is no doubt that the digital tablet form will grow in popularity, with the iPad being joined later this year by numerous other (possibly cheaper) competitors mainly powered by Google’s Android operating system.

So, what does the future really hold for digital magazines? We can identify some challenges and some opportunities. One certainty is that the manner by which we discover and purchase magazines will be given a hefty thump by the switch to digital. We are in a world of search rather than browse – which perhaps in turn leads to a change in the role of cover design, from “buy me, look what’s inside” to “you know what’s inside, but here is an incredible, evocative image”. In many ways it’s a return to the “classic” magazine covers of the 1950s and 60s, privileging the desirability of the object itself rather than shouting about every feature.

The bounded “object-ness” of the magazine embedded in the world of the endless, restless internet is seen by most as an anachronism, but it is also one of its greatest attributes. Research we received from our client Bonnier as part of the brief for the Mag+ concept indicated that people really were attached to the magazine as a form of media that creates a bubble of time to indulge in reading – and as a contrast to other, faster forms of media.

Meeting this need – while acknowledging the breadth, speed and interconnectedness of the internet – is a design condition that has not been satisfied fully by the current crop of digital magazine offerings, our efforts included. But stay tuned.

Another change in what we might term the “attention economics” of digital magazines is that their new neighbours in the app ecosystem are not other magazines, but games, spreadsheets, supermarket delivery apps, photography apps and so on. One device is now the conduit for vastly different activities and experiences.

And yet – at least in the current user-interface paradigm of Apple and Google – they all get pretty much the same real estate on screen. You have to decide between killing time with a magazine, playing Angry Birds or ordering your Ocado delivery based on the same visual evidence.

Perhaps future iterations of mobile and tablet operating systems will have a more media-led approach, as evidenced by the new Windows Phone 7 mobile operating system (yes, that’s right, Microsoft has made a more media-centric user interface than Apple) – leading to magazine icons being bigger or more varied on the media surface.

Still, having such vastly different neighbours nestling so close creates a new context for an old form that has heavier production costs than its new competitors. A casual game developed by five people commands the same attention as a magazine produced by 25. That is remarkably imbalanced, and don’t expect these attention economics to stand. The production and form of the magazine cannot fail to be affected. Internet-native publishers such as gadget expert Gizmodo, fashion maven The Sartorialist or critically minded gamer Rock, Paper, Shotgun are smaller and nimbler. And eventually they’ll be able to publish to the same canvas as the big boys and girls – and be able to charge for their expert curation and commentary.

Which brings us to some of what I’ve started to call “two-star problems”. In the consumocracy of the App Store, star ratings are all, and unfortunately most of the current magazine offerings have only two stars, compared to the four- or five-star world of games and other apps. Even Wired and, I’m sad to say, Popular Science garner a “must-try-harder” three stars. Consumer dismay at customer service, reliability, consistency, pricing and the overall offer seems to lead to these relatively low ratings. Consumers’ expectations are determined by the value they see offered by software producers compared to traditional media producers.

So where to head? What are the opportunities? I think they are supplementary to what magazine publishers see as their existing strengths in writing, curation and design. They will emerge from their less glamorous but equally deep knowledge of subscriptions, service and “belonging”.

Take the best of what you understand of your readership, and of the decade or so that many magazines have spent on the internet, and look to exploit the social technologies of the web, rather than rush to present your content as an isolated recapitulation of a mid-1990s CD-ROM.

Create hybrids and experiment – not with the empty (and costly) spectacle of embedding jarring 3D and video, but with data, visualisation, sociality, location-based services, semantic technologies.

There’s no reason that the feel of a well-designed, valuable, curated object shouldn’t be complemented when placed properly in the roaring, sparkling stream of the internet. And experiment not just with editorial content, but also with advertising. I’d rather have a live link to the latest Amazon price for a camera than a spinning 3D video of it.

Tablets promise to be transformative – in their context of use and how well they can display content – but they do not wish away the disruptive challenge (and opportunity) the internet presents to magazine publishers.

This is the beginning of a tumultuously exciting time for magazines and those who produce them – not an end to the “free-for-all” of the web as many would love to believe. More experimentation, not less, is what’s called for. As a reader and a designer, I’m looking forward to that.

Friday Links: Light painting

This Friday: a collection of links from the studio mailing-list, all about light painting.


Image: Poésie by kaalam on Flickr

Julien Breton’s work as Kaalam has already featured on the blog, but it’s too beautiful not to include again in today’s collection of links. Influenced by Arabic script, he paints delicate, abstract calligraphy into his photographs as they are being exposed. There’s more on his Flickr profile and his website.

evensong.jpg

Sophie Clements’ stunning film Evensong captures a series of moving light-patterns in Argyll. The lights are mounted on rigs such as spinning wheels, and there’s a magic in the way they interact with their environment: dancing around poles, reflecting in pools. It’s striking to see light painting such as this in moving images rather than stills.

lightdraw.jpg

Nils Völker has been building a robot for creating coloured light drawings. Once the pattern is programmed into it, it trundles around the floor, turning its light on and off as necessary, tracing the pattern whilst a camera takes a long exposure. Whilst not as pretty as Kaalam’s work, there’s something interesting in automating this kind of drawing. It’s also strange to see this machine at work, as this video testifies: whilst it works, you can’t really see what it’s doing. It only makes sense when viewed as a long exposure.


Photo: IBR Roomba Swarm in the Dark IV by IBRoomba

Völker’s robot drew the patterns it was told to. But light painting techniques can also reveal the behaviours of smarter robots. The above picture comes from the Roomba Art group on Flickr, where people attach lights to their automated vacuum cleaners and upload long exposures of them at work. This image shows seven Roombas – each with a different colour LED – working all at once; you can see their starting points in the middle of the room, and the odd collision. It’s a very pretty remnant of robots at work. The rest of the pool is great, too.


Photos: Light Sphere with Right Arm and Cigarette Lighter and Arcs with Arms and Candles by Caleb Charland

Caleb Charland’s images take a variety of approaches to light painting. Some are multiple exposures; some are long-duration, single exposures. Some are very much about the artist’s presence in the image (albeit in ghostly ways); in others, the artist is largely absent. They’re all lovely, though; I particularly like his use of naked flames in his images.

sun-over-clifton.jpg

Justin Quinnell’s six-month exposure of the Clifton Suspension Bridge could be described as light painting using the sun. The duration of the exposure allows you to see the sun’s transit shift with the seasons. Justin has more long-exposure pinhole photography at his website.

The surprisingness of what we say about ourselves

Google Scribe is autocomplete meets word processing. It looks at everything you’ve typed so far, and predicts what you’re going to say next. For example, Scribe believes I am now about to write: they are not the only one who can not afford to pay for the cost of the project is to develop a new generation of protein database.

I feel like I’m connected to the spirit world, except that the spirit world is an amalgam of a billion Internet users and Google’s massive server farm.

What I like about Scribe is that you can see how surprising each word is. If Google can’t predict what you’re about to say, what you’re saying is truly novel.

So:

At the bottom of every page on this website, there’s a little statement about ourselves: BERG is a design consultancy, working hands-on with companies to research and develop their technologies and strategy, primarily by finding opportunities in networks and physical things.

I made a chart of word-by-word surprisingness: given the statement so far, could Scribe predict what would come next?

Here are the results:

Google Scribe surprisingness of BERG's studio description

I learn that about half of the statement is exactly what Google’s spirit world expects, which goes to show it could be more concise, with a higher signal-to-noise ratio.

Use this technique to avoid redundancy in speech or writing.
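Google Scribe has since been retired, so repeating the experiment today needs a stand-in. The sketch below is just that – an assumption, not the original method – scoring each word of the statement with a small open language model (GPT-2, via the Hugging Face transformers library): the lower the probability the model assigns a word given everything before it, the more surprising it is.

```python
# A hedged, modern stand-in for the Google Scribe experiment: ask a small
# language model how probable each word of the studio description is, given
# the words before it. Low probability = surprising.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

statement = ("BERG is a design consultancy, working hands-on with companies "
             "to research and develop their technologies and strategy, "
             "primarily by finding opportunities in networks and physical things.")

ids = tokenizer(statement, return_tensors="pt").input_ids[0]
with torch.no_grad():
    logits = model(ids.unsqueeze(0)).logits[0]   # shape: (sequence length, vocabulary)
probs = torch.softmax(logits, dim=-1)

# The prediction at position i-1 is for token i, so the first token gets no score.
for i in range(1, len(ids)):
    p = probs[i - 1, ids[i]].item()
    word = tokenizer.decode(ids[i : i + 1])
    flag = "  <- surprising" if p < 0.05 else ""
    print(f"{word!r:>20}  p={p:.3f}{flag}")
```

Any autoregressive model would do here; the point is simply per-word probability given the prefix.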

Matt Webb speaking at The Do Lectures, September 16th, Cardigan, Wales

Matt’s going to be giving a talk on “Old Sci-Fi & Little Robots” at this year’s Do Lectures, in a tent in West Wales!

Having been to a previous Do, I’m very jealous. It’s a fantastic event, and all the videos from past Do Lectures are online – it’s a bit like TED but with fewer presidents or rockstars and more mud…

Ludichocolate

I’m very enamoured of Cadbury’s “Spots Vs Stripes” chocolate bars.

Spots Vs Stripes

There are all sorts of adverts around London giving them a big push – covering every bus-stop, little loops on urban-screens – and a fancy website full of the latest social-casual-game-flash-o-rama. But, the purity and brilliance of the chocolate bar itself is what really stands out.

You unwrap it (carefully… this was a bit of a point-of-failure with my first one…) and you discover three chunks of chocolate: one with spots on it, one with stripes on – and one labelled ‘winner’.

Spots Vs Stripes

Inside the wrapper is a challenge – to share with a friend – each of you adopting the side of spots or stripes. The winner, naturally, gets the ‘winner’ chunk at the end.

Brilliant.

To see play and small-group-sharing designed into something everyday like this is inspirational. Amusingly, Cadburys appear to have been awarded the role of ‘Official Treat Provider’ by the London 2012 Olympics.

Spots Vs Stripes

The treat of Spots Vs Stripes is the play it affords, as much as the chocolate…

B.A.S.A.A.P.

Design principle #1

The above is a post-it note, which as I recall is from a workshop at IDEO Palo Alto I attended while I was at Nokia.

And, as I recall, it was probably either Charlie Schick or Charles Warren who scribbled this down and stuck it on the wall as I was talking about what was a recurring theme for me back then.

Recently I’ve been thinking about it again.

B.A.S.A.A.P. is short for Be As Smart As A Puppy, which is my short-hand for a bunch of things I’ve been thinking about… Ooh… Since 2002 or so I think, and a conversation in a California car-park with Matt Webb.

It was my term for a bunch of things that encompass some third-rail issues for UI designers – like proactive personalisation and interaction – examined in the work of Reeves and Nass, and exemplified by (and forever after vilified as) Microsoft’s Bob and Clippy (RIP). A bunch of things about bots, daemons and conversational interfaces.

And lately, a bunch of things about machine learning – and for want of a better term, consumer-grade artificial intelligence.

BASAAP is my way of thinking about avoiding the ‘uncanny valley’ in such things.

Making smart things that don’t try to be too smart and fail, and indeed, by design, make endearing failures in their attempts to learn and improve. Like puppies.

Cut forward a few years.

At Dopplr, Tom Insam and Matt B. used to astonish me with links and chat about where the leading-edge of hackable, commonly-employable machine learning was heading.

Startups like Songkick and Last.fm, amongst others, were full of smart cookies making use of machine learning, data-mining and a bunch of other techniques I’m not smart enough to remember (let alone reference) to create reactive, anticipatory systems from large amounts of data in a certain domain.

Now, machine-learning is superhot.

The web has become a web-of-data, data-mining technology is becoming a common component of services, and processing power on tap in the cloud means that experimentation is cheap. The amount of data available makes things possible that were impossible a few years ago.

I was chatting with Matt B. again this weekend about writing this post, and he told me that the algorithms involved are old. It’s just that the data and the processing power are there now to actually get to results. Google’s Peter Norvig has been quoted as saying “All models are wrong, and increasingly you can succeed without them.”

Things like Hunch are making an impression in the mainstream. Google Priority Inbox, launched recently, makes the utility of such approaches clear.

BASAAP services are here.

BASAAP things are on the horizon.

As Mike Kuniavsky has pointed out – we are past the point of “Peak MHz”:

“…driving ubiquitous computing, as their chips become more efficient, smaller and cheaper, thus making them increasingly easier to include into everyday objects.”

This is ApriPoco by Toshiba. It’s a household robot.

It works by picking up signals from standard remote controls and asking you what you are doing, to which you are supposed to reply in a clear voice. Eventually it will know how to turn on your television, switch to a specific channel, or play a DVD simply by being told. This system solves the problem that conventional speech recognition technology has with some accents or words, since it is trained by each individual user. It can send signals from IR transmitters in its arms, and has cameras in its head with which it can identify specific users.

Not perhaps the most pressing need that you have in your house, but interesting nonetheless.

Imagine this not as a device, but as an actor in your home.

The face-recognition is particularly interesting.

My £100 camera has a ‘smile-detection’ mode, which is becoming common. It can also recognise more faces than a six-month-old human child. Imagine this then, mixed with ApriPoco, registering and remembering smiles and laughter.

Go further, plug it into the internet. Into big data.

As Tom suggested on our studio mailing list: recognising background chatter of people not paying attention. Plugged into something like Shownar, constantly updating the data of what people are paying attention to, and feeding back suggestions of surprising and interesting things to watch.

Imagine a household of hunchbots.

Each of them working across a little domain within your home. Each building up tiny caches of emotional intelligence about you, cross-referencing them with machine learning across big data from the internet. They would make small choices autonomously around you, for you, with you – and do it well. Surprisingly well. Endearingly well.

They would be as smart as puppies.

Hunch-Puppies…?

Ahem.

Of course, there’s the other side of domesticated intelligences.

Matt W.’s been tracking the bleed of AI into the Argos catalogue, particularly the toy pages, for some time.

They do their little swarming thing and have these incredibly obscure interactions

The above photo he took of toys from Argos was given the title: “They do their little swarming thing and have these incredibly obscure interactions”

That might be part of the near-future: being surrounded by things that are helping us, but whose workings we struggle to build a mental model of. Things we can’t directly map to our own behaviour. A demon-haunted world. This is not so far from most people’s experience of computers (and we’re back to Reeves and Nass), but we’re talking about things that change their behaviour based on their environment and their interactions with us, and that have a certain mobility and agency in our world.

I’m reminded of the work of Rodney Brooks and the BEAM approach to robotics, although hopefully more AIBO than Runaways.

Again, staying on the puppy side of the uncanny valley is a design strategy here – as is the guidance within Adam Greenfield’s “Everyware”: how to think of design for ubiquitous systems that behave as sensing, learning actors in contexts beyond the screen.

Adam’s book is written as a series of theses (to be nailed to the door of a corporation or two?), and thinking of his “Thesis #37” in connection with BASAAP intelligences in the home of the near-future amuses me in this context:

“Everyday life presents designers of everyware with a particularly difficult case because so very much about it is tacit, unspoken, or defined with insufficient precision.”

This cuts both ways in a near-future world of domesticated intelligences, and that might be no bad thing. Think of the intuitions and patterns – the state machine – your pets build up of you, and vice-versa. You don’t understand pets as tools, even if they perform ‘job-like’ roles. They don’t really know what we are.

We’ll never really understand what we look like from the other side of the Uncanny Valley.

Mechanical Dog Four-Leg Walking Type

What is this going to feel like?

Non-human actors in our home, that we’ve selected personally and culturally. Designed and constructed but not finished. Learning and bonding. That intelligence can look as alien as staring into the eye of a bird (ever done that? Brrr.) or as warm as looking into the face of a puppy. New nature.

What is that going to feel like?

We’ll know very soon.

Patina

leicam4.jpg

I saw this picture via The Online Photographer a few days ago. It’s a Leica M4, being sold second-hand right now on eBay, for the premium prices such cameras command.

I loved the wear at the edges, where the black paint has been worn away to reveal the brass underneath. It’s not broken; it hasn’t been mistreated. It’s just been well-used in its 35-odd-year lifespan.

And, in some ways, it’s more attractive for its wear. This isn’t a camera that’s been locked away in its packaging by an over-protective collector; it’s been well-used for its intended purpose. Part of the attraction to such an object isn’t just the aesthetic quality of its patina: there’s also something attractive about the action that wear represents. As a photographer, I’m attracted to this wear because in some ways, it represents the act of photography.

I’m not sure I’m explaining this well. Here’s another example.

prayer-feet.jpg

I was looking through my links for other articles about wear and patina, and I found this Reuters photograph from last year. It’s of the floor of a Tibetan monastery, where, over twenty years of daily prayer, Hua Chi has worn his own footprints into the floor.

He has knelt in prayer so many times that his footprints remain deeply, perfectly ingrained on the temple’s wooden floor.

Every day before sunrise, he arrives at the temple steps, places his feet in his footprints and bends down to pray a few thousand times before walking around the temple.

The footprints are three centimeters (1.2 inches) deep where the balls of his feet have pressed into the wood.

1.2 inches of prayer. There’s something beautiful about the smooth imprints of a human foot worn into wood. But the wear itself also comes to symbolise the action that led to it: in this case, Hua Chi’s prayers.

Patina is the effect of actions made solid: photography into worn paint, prayer into a worn floor. It is verbing turned into a noun.

Shared Lives

Nouns and verbs. That reminded me of this post about “The Life Of Products” by Matt W, from nearly four years ago. Matt wrote:

Products are not nouns but verbs. A product designed as a noun will sit passively in a home, an office, or pocket. It will likely have a focus on aesthetics, and a list of functions clearly bulleted in the manual… but that’s it.

Products can be verbs instead, things which are happening, that we live alongside. We cross paths with our products when we first spy them across a crowded shop floor, or unbox them, or show a friend how to do something with them. We inhabit our world of activities and social groups together… a product designed with this in mind can look very different.

Wear is, of course, both a noun and a verb. It’s the verb that inevitably happens through use, and it’s the noun that the verb leaves behind. Patina is the history of a product written into its skin.

And, of course, it takes time for wear to occur. Objects start their lives pure, unworn, ready to be both used and shaped by that use. In Products are People Too, Matt’s 2007 talk from Reboot 9, he said:

Products exist over time. We meet them, we hang out with them, we live life together.

Patina is a sign of a life shared.

tarnished-laptop.jpg

Here’s a life I’ve shared.

This is my three-and-a-half year old laptop. It’s my second aluminium Mac, and, just as with my previous laptop, the surface has tarnished right underneath where my palms rest. It’s not a fault – that black speckling is just what happens when perspiration meets aluminium. It’s not as beautiful as the Leica, or the monastery floor – but it’s not as ugly as cracked and chipped plastic.

I think that might be one reason I’ve kept it quite so long: the material and form of the exterior have encouraged me to hold onto the laptop. Certainly much longer than if it had been poorly constructed, becoming damaged rather than worn.

In his talk at Frontiers of Interaction in 2009, Matt J showed this photograph of Howies’ “Hand-Me-Down jacket”.

It’s a jacket that’s designed to last. Howies ensure they have the materials to repair it, encouraging the owner to mend the jacket rather than throw it out. Inside the jacket is the label above: name tags to last several generations, indicating periods of ownership.

The label is surprising because it serves as a reminder that the product will last. The encouragement to pass something on, and to measure ownership in years, acts as a reminder that there’s no reason to throw the jacket out.

It seems absurd to have to be reminded of that.

But: how many essentially functional pieces of clothing have you or I thrown out? How many items that could be repaired have ended up in the bin? How many objects have never had the time to acquire a patina – thrown out before their time was truly up?

It’s sad that we have to be reminded that objects can last. I cannot deny that there’s a role for inexpensive, cheaply-manufactured, and somewhat disposable products – but they shouldn’t condition us into thinking that’s how all products are.

Designing things that want to be kept

I read an article – which, alas, I can’t find a link to at the moment – about the disposal and lifespan of mobile phones in the USA. The most shocking item in it was that, when questioned as to the lifespan of a mobile phone, most Americans responded with “about 24 months”. A mobile phone may not last like a Leica or a Stradivarius… but it’ll last a good bit longer than two years before it’s beyond use.

24 months was, of course, the length of common cellphone contracts. And so, as contracts expired, and network providers told their customers they were eligible for a new phone, they began to assume there had to be something wrong with the old phone. And it would go in the bin.

When the patina an object gains is attractive, it acts as an encouragement to keep it. Good jeans really come into their own as they wear down and develop creases, rips, rough patches. It’s why my favourite pair say something along the lines of “wash me as little as possible!” inside.

It’s important to note: the wear I’m discussing isn’t related to things breaking. Things break because they’re worn out, or poorly designed, or used inappropriately. Patina is that wear which comes from entirely “correct” usage of a product. That usage might be intense – a professional guitarist’s instrument will acquire patina far faster than mine will – but it is, nonetheless, the intended usage of the object.

I’m not sure patina can be designed. After all, it’s a product of the relationship between product and owner.

The form it takes can be shaped – by the materials used in a product, by the nature and frequency of operations that an owner might perform. I suppose that a product can be designed to age gracefully, to wear attractively; it’s just the exact nature of that wear that’s out of a designer’s hands.

In considering the patina a product might develop, you of course have to ask a series of interesting questions: about longevity, about sustainability, about materials, about manufacturing. Going beyond “peak X” and towards “resilient X”, as Matt J said. But I think the most interesting questions – at the very heart of that consideration – are emotional ones. “What if someone adores your product? What if someone really does want to make a product a part of their life? What will your product look like when it’s been worn into the ground by virtue of its own success?”

I don’t think there are single answers to those questions, but they’re great questions to have to consider.

(One answer, which leaps to mind for me, can be found in The Velveteen Rabbit – one of those children’s books that manages to be, of course, both profoundly sad and yet uplifting with it. The toy rabbit in question discovers that if his owner loves him enough, he becomes real. Products are people, too, right there in 1920s children’s books).

jim-marshall-m4.jpg

Another Leica M4 to end with: this one belonging to the photographer Jim Marshall, noted for his music photography since the 60s. (If you don’t know the name, you’ll almost certainly know his work).

Marshall made so many striking images with this camera and others like it, and, in that making, gave it its unique patina. It’s a camera as rock’n’roll as the subjects it shot. Somewhere in that wear – buried in the scuff-marks, the scratches, the flaked paint – are Jimi Hendrix, John Coltrane, Johnny Cash at Folsom Prison: the “life lived together” of Marshall and his camera.

Are you an iOS developer in London?

There’s no particular project quite yet, but we’re talking with a number of people about iPhone and iPad work. More than we can handle with our usual crew if it all comes in.

So I want to expand our iOS developers network!

If you develop for iPhone or iPad, and would be interested in working with BERG on short or longer contracts, please do get in touch to introduce yourself, and we’ll keep your details on file. London-or-nearby folks only… we’re a tight-knit studio, and we really like it when people working together are sitting in the same room.

Email Nick at nl at berglondon dot com and please include your CV, a list of apps in the App Store and what you did on them (bonus points if you were the sole developer), and the name of the coolest app installed on your iPhone. (My current favourites are calvetica and Little Uzu.)

And we’ll keep you in mind whenever something comes up!

Recruiters: we’re happy to hear from you, but please ensure your candidates would be cool with contract work, and that you include their answers to the extra questions above. Thanks.

Matt Webb speaking at Mobile Monday Amsterdam, September 6th

Last one of the parish notices. Matt Webb will be in Amsterdam alongside our good friend and occasional colleague Timo Arnall, and longstanding friend-of-BERG Tom Hume – giving a talk on design at the Mobile Monday event next Monday, 6th September.
