This website is now archived.

Blog posts tagged as 'typography'

Guardian Headliner: The newspaper that looks back at you…

Headliner is an experiment in online reading that BERG conducted in a short project with The Guardian. It uses face detection and term extraction to create “a newspaper that looks back at you”.

Headliner: Final Prototype

It’s part of a series of experiments and prototypes that The Guardian is initiating with internal and external teams. You can try it for yourself here:

Headliner: Final Prototype

Jack led the project and we got a dream-team of past collaborators to work with on it: Phil Gyford, who had already done loads of thoughtful designing of new reading experiences for the Guardian with his ‘Today’s Guardian’ project, and brilliant designer James King, who we had worked with previously on the Here & There Maps.

I asked Jack, Phil and James to share their thoughts about the process and the prototype:

Jack Schulze:

Faces come up in news articles a lot; editors exercise artistry in picking photos of politicians or public figures at their most desperate. Subjects caught glancing in the wrong direction or grimacing are used to great effect and drama alongside headlines.

Headliner makes use of face detection to highlight the eyes in news photographs. It adds a second lens to the existing photo, dramatising and exaggerating the subject. It allows audiences to read more meaning into the headline and context.

Headliner: Final Prototype

Graphically, Headliner departs from the rules and constraints news has inherited from print. It references the domain’s aesthetic through typography but adopts a set of behaviours and structures only available in browsers and on the web.

Phil Gyford:

We wanted to retain much of what makes Today’s Guardian a good reading experience but find more in the text and images that we could use to make it less dry. We decided to rely solely on the material we can get from the Guardian’s API, alongside other free services and software.

We looked at various ways of extracting useful data from the text of articles. It had been some years since I’d last dabbled with term extraction and I was surprised that it didn’t seem wildly better than I remembered. We settled on using the free Calais API to pull useful terms out of articles, but it’s quite hit and miss — some place names and people’s names that seem obvious to us are missed, and other words are erroneously identified as significant. But it gave us a little something extra which we could use to treat text, and also to guess at what an article was about: whether it was focused on a person or a place, for example.
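The hit-and-miss nature of term extraction is easy to reproduce. This toy heuristic is not the Calais API, just an illustration of the idea: it treats runs of capitalised words as candidate terms, and — exactly as the article describes — it both finds real names and erroneously flags ordinary words.

```python
import re

def extract_terms(text):
    """Toy 'term extraction': treat runs of capitalised words as
    candidate names. Like the real services, it is hit and miss --
    it flags ordinary sentence-initial words as significant, and
    misses names with unusual casing."""
    return re.findall(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", text)
```

Running it on “The minister met Angela Merkel in Berlin.” correctly finds “Angela Merkel” and “Berlin”, but also flags “The” — the same kind of false positive Phil describes.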

We wanted to do more with the articles’ images, and focusing on faces seemed most interesting. We initially used the API to identify faces in the images and, in particular, the eyes. This worked really well, and with a bit of rotating and cropping using PIL we could easily make inevitably small pictures of eyes. (All the article text and images are pre-processed periodically on the back-end using Python, to create a largely static and fast front-end that just uses HTML, CSS and JavaScript.)
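The rotate-and-crop step boils down to some simple geometry on the two eye coordinates a face detector returns. A sketch of that geometry (the function name, padding factor and box proportions are illustrative, not the prototype’s actual values; the real pipeline applied the result with PIL’s `Image.rotate()` and `crop()`):

```python
import math

def eye_crop(left_eye, right_eye, pad=0.6):
    """Given (x, y) eye centres from a face detector, return the
    rotation angle (in degrees) that levels the eyes, plus a crop
    box around them. Illustrative geometry only."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    # Angle of the line between the eyes: rotating the image by
    # this amount makes the eyes horizontal.
    angle = math.degrees(math.atan2(ry - ly, rx - lx))
    dist = math.hypot(rx - lx, ry - ly)
    cx, cy = (lx + rx) / 2, (ly + ry) / 2
    # Pad the box proportionally to the inter-eye distance, so
    # the crop scales with the size of the face in the photo.
    half_w = dist * (0.5 + pad)
    half_h = dist * pad
    return angle, (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

Because everything is proportional to the inter-eye distance, the same code works on faces of any size in the source photo.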

 experiments

Unfortunately for us, was bought by Facebook, which promptly announced the imminent closure of the API. We replaced it with OpenCV in Python, which is trickier, and we don’t yet have it working quite as well as’s detection did, but it’s a good, free alternative under our control.
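A minimal sketch of what an OpenCV-based replacement can look like, using the stock Haar cascades that ship with OpenCV (the `cv2.data.haarcascades` path is a convenience of modern `opencv-python` builds and an assumption here — the prototype’s actual code isn’t shown in the post). It degrades to an empty result if OpenCV isn’t installed:

```python
# Eye detection with OpenCV's bundled Haar cascades -- a sketch of
# the kind of pipeline that replaced's hosted detection.
try:
    import cv2
except ImportError:
    cv2 = None

def detect_eyes(image_path):
    """Return a list of (x, y, w, h) eye rectangles, or [] if
    OpenCV is unavailable or the image can't be read."""
    if cv2 is None:
        return []
    img = cv2.imread(image_path)
    if img is None:
        return []
    # Haar cascades run on greyscale images.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    # detectMultiScale returns bounding rectangles; the scale factor
    # and neighbour count here are common starting values to tune.
    return [tuple(r) for r in cascade.detectMultiScale(gray, 1.1, 5)]
```

As the post says, Haar cascades are trickier than a hosted service: they need tuning per image set and produce more false positives than’s detector did.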

Enlarging the cropped eye images looked great: eyes that seemed guilty or angry or surprised, emphasising the choices of picture editors, stared out at you below a headline. We tried giving these images a halftone effect, to emphasise the newspaper printing context of the stories, but unfortunately it didn’t work well with such tiny source images. (Here’s the code for the halftoning effect though.)

Headliner: Early Graphic Studies

Browsers treated the drastically zoomed images differently. Chrome and Safari tried to smooth the grossly enlarged images out, which sometimes worked well, but we liked the effect in Firefox, which we could force to show the now-huge pixels using `image-rendering: -moz-crisp-edges;`. The chunky pixels made a feature of the cropped portions of images being so small, and we wanted to show this very raw material on all browsers. This was easily done on the front-end using the excellent Close Pixelate JavaScript library.
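What the crisp-edges rendering amounts to is nearest-neighbour enlargement: each source pixel becomes a solid block rather than being interpolated into its neighbours. A toy version over a list-of-rows “image”, just to show the idea (the browser and Close Pixelate do this on real bitmaps, of course):

```python
def pixelate_upscale(pixels, factor):
    """Nearest-neighbour enlargement: each source pixel becomes a
    factor x factor block of identical values, giving the chunky
    look of `image-rendering: -moz-crisp-edges` rather than the
    smoothing browsers apply by default."""
    out = []
    for row in pixels:
        # Repeat each pixel horizontally...
        big_row = [p for p in row for _ in range(factor)]
        # ...then repeat the whole row vertically.
        out.extend(list(big_row) for _ in range(factor))
    return out
```

Enlarging a 2×2 image by a factor of 2 yields a 4×4 grid of solid blocks — no new intermediate values are invented, which is exactly why tiny eye crops keep their raw, chunky character.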

If we didn’t have any detected eyes to use, we didn’t only want to enlarge the supplied photo — we wanted some variety and to use more of the data we’d extracted from the text. So, if we’d determined that the article was probably focused on a place, we used Google’s Static Maps API to display a satellite image centred on the location Calais had identified.
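Hooking a Calais-identified place up to a satellite image is essentially just URL construction against the Static Maps API. A sketch, assuming the documented parameter names (`center`, `zoom`, `size`, `maptype`; current versions of the API also require a `key`, which the 2012 prototype may not have needed):

```python
from urllib.parse import urlencode

def satellite_url(lat, lng, zoom=8, size="640x400", api_key=None):
    """Build a Google Static Maps URL for a satellite image centred
    on a detected location. Zoom and size defaults are illustrative,
    not the prototype's actual settings."""
    params = {
        "center": f"{lat},{lng}",
        "zoom": zoom,
        "size": size,
        "maptype": "satellite",
    }
    if api_key:
        params["key"] = api_key
    return "" + urlencode(params)
```

The resulting URL can be dropped straight into an `<img src="…">` on the static front-end, which fits the pre-processed, no-server-logic architecture Phil describes.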

Headliner: Final Prototype

We put all that together with a front-end based, for speed, on the original Today’s Guardian code, but heavily tweaked. We make images as big as we possibly can — take advantage of that huge monitor! — and enlarge the headlines (with the help of FitText) to make the whole thing more colourful and lively, and an interesting browsing experience.
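FitText’s sizing rule itself is tiny: font size is the container width divided by (compressor × 10), clamped between a minimum and maximum. Expressed in Python for illustration (the default bounds here are made up, not the prototype’s settings):

```python
def fit_text_size(container_width, compressor=1.0,
                  min_size=16.0, max_size=96.0):
    """The headline-sizing rule FitText applies on resize: scale the
    font with the container width, divided by (compressor * 10),
    clamped to a sensible range. A higher compressor shrinks the
    text relative to its container."""
    return max(min(container_width / (compressor * 10), max_size),
               min_size)
```

So an 800px-wide column gets an 80px headline, while very wide or very narrow containers hit the clamps instead of producing absurd sizes.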

James King:

To start with, we were most interested in how we might integrate advertisements more closely into the fabric of the news itself. Directing the reader’s attention towards advertising is a tricky problem to deal with.

Headliner: Design Development

One of the more fanciful ideas we came up with was to integrate eye-tracking into the newspaper (with support for webcams) so that it would respond to your gaze and serve up contextually relevant ads based on what you were reading at any particular moment.

Headliner: Design Development

This idea didn’t get much further than a brief feasibility discussion with Phil who determined that, given the tight deadline, building this would be unlikely! What did survive however, was the idea that the newspaper looks back at you.

Eyes are always interesting. Early on, we experimented with cropping a news photo closely around the eyes and presenting it alongside a headline. This had quite a dramatic effect.

Headliner: Design Development

In the same way that a news headline can often grab the attention but remain ambiguous, these “eye crops” of news photos could convey emotion but not the whole story. Who the eyes belong to, where the photo is taken and other details remain hidden.

Headliner: Design Development

In the same way that we were summarising the image, we thought about summarising the story, to see if we could boil a long story down to a digestible 500 words. So we investigated some auto-summarising tools only to find that they didn’t do such a good job of selecting the essence of a story.

Headliner: Design Development

Perhaps they take a lot of customisation, or need to be trained with the right vocabulary, but often the output would be comical or nonsensical. We did discover that Open Calais did a reasonably reliable job of selecting phrases within text and guessing whether each referred to a person, a place, an organisation and so on. While we felt that Open Calais wasn’t good enough to draw inferences from the article, we felt we could use it to emphasise important phrases in the headlines and standfirsts.
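The auto-summarisers the team tried were typically extractive: score each sentence by how many of the document’s frequent words it contains, then keep the top scorers. A toy version of that approach (not any specific tool they used) also shows why the output can read as disjointed — sentences are lifted verbatim with no rewriting between them:

```python
import re
from collections import Counter

def summarise(text, n_sentences=2):
    """Naive frequency-scored extractive summariser: keep the
    sentences that share the most words with the document's
    overall vocabulary, in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    # Crude stopword filter: ignore very short words.
    freq = Counter(w for w in words if len(w) > 3)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in top)
```

On a four-sentence story it keeps the two sentences densest in the story’s recurring words, but note it has no idea whether they still make sense next to each other — the “comical or nonsensical” failure mode James describes.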

Typographically, it made sense to use Guardian Egyptian for the headlines, although we did explore some other alternatives such as Carter One – a lovely script face available as a free Google font.

Headliner was a two-week experiment to explore the graphic possibilities of machine-learning and computer vision applied to content.

Not everything works all the time, it’s a prototype after all – but it hints at some interesting directions for new types of visual presentation where designers, photo editors and writers work with algorithms as part of their creative toolbox.

Making Future Magic: the book

There were an awful lot of photos taken for the Making Future Magic video that BERG and Dentsu London launched last week; Timo reckons he took somewhere in the region of 5,500 shots. Stop-frame animation is a very costly process in the first instance, but as the source we were shooting was hand-held (albeit with locked-off cameras) and had only the most rudimentary of motion control (chalk lines, black string and audio progress cues), if a frame was poorly exposed, obscured or fumbled, it left the sequence largely unusable. This meant that a lot was left on the cutting room floor.

In addition, we amassed a stack of incidental pictures of props, setups, mistakes, 3D tests and amphibious observers during the film’s creation.

Clicking through these pictures, it was clear that a book collecting some of these pictures, offering little behind-the-scenes glimpses alongside the finished graded stills used in the final edit, was the way forward. As well as offering a platform for some of the shots that didn’t make the final cut, the static prints want to be pored over, allowing for the finer details and shades (the animations themselves had textures and colours burnt into them prior to shooting, so as to add a disruptive quality) to come through.

Our copies arrived today from Blurb. The print quality and stock are fantastic – especially considering it’s an on-demand service – and for us it’s great to have a little summary of a project that doesn’t require any software or legacy codecs to view and will remain ‘as is’. We’ve made the book available to the public in two formats; you can get your hands on the hardcover edition here, and the softcover here.

More images of the book are up here.

Making Future Magic: light painting with the iPad

“Making Future Magic” is the goal of Dentsu London, the creative communications agency. We made this film with them to explore this statement.

(Click through to Vimeo to watch in HD!)

We’re working with Beeker Northam at Dentsu, using their strategy to explore how the media landscape is changing. From Beeker’s correspondence with us during development:

“…what might a magical version of the future of media look like?”


…we [Dentsu] are interested in the future, but not so much in science fiction – more in possible or invisible magic

We have chosen to interpret that brief by exploring how surfaces and screens look and work in the world. We’re finding playful uses for the increasingly ubiquitous ‘glowing rectangles’ that inhabit the world.

iPad light painting with painter

This film is a literal, aesthetic interpretation of those ideas. We like typography in the world, we like inventing new techniques for making media, we want to explore characters and movement, we like light painting, we like photography and cinematography as methods to explore and represent the physical world of stuff.

We made this film with the brilliant Timo Arnall (who we’ve worked with extensively on the Touch project) and videographer extraordinaire Campbell Orme. Our very own Matt Brown composed the music.

Light painting meets stop-motion

We developed a specific photographic technique for this film. Through long exposures we record an iPad moving through space to make three-dimensional forms in light.

First we create software models of three-dimensional typography, objects and animations. We render cross sections of these models, like a virtual CAT scan, making a series of outlines of slices of each form. We play these back on the surface of the iPad as movies, and drag the iPad through the air to extrude shapes captured in long exposure photographs. Each 3D form is itself a single frame of a 3D animation, so each long exposure still is only a single image in a composite stop frame animation.
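For a simple form like a sphere, the virtual CAT scan reduces to computing the outline at each slice height. A sketch of that slicing step (the real pipeline rendered slice outlines of arbitrary 3D typography and played them back as a movie on the iPad while the shutter stayed open):

```python
import math

def sphere_slices(radius, n_slices):
    """Virtual 'CAT scan' of a sphere: for each horizontal slice,
    compute the radius of the circular outline to display on the
    screen at that height. Played back in sequence while the iPad
    moves through the air, these outlines extrude the full 3D form
    into a long-exposure photograph."""
    slices = []
    for i in range(n_slices):
        # z runs from -radius to +radius, sampling slice centres.
        z = -radius + (2 * radius) * (i + 0.5) / n_slices
        # Circle radius at height z, from r^2 + z^2 = radius^2.
        slices.append(math.sqrt(max(radius * radius - z * z, 0.0)))
    return slices
```

The outline is narrow at the poles and widest at the equator, which is why the playback speed and the hand’s steadiness matter so much: any drift between slices shows up as the “boiling” in the finished frames.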

Each frame is a long exposure photograph of 3-6 seconds. 5,500 photographs were taken. Only half of these were used for the animations seen in the final edit of the film.

There are lots of photographic experiments and stills in the Flickr stream.

Future reflection

light painting the city with Matt Jones

The light appears to boil because there are small deviations in the path of the iPad between shots. In some shots the light shapes appear suspended in a kind of aerogel. This is produced by the black areas of the iPad screen, which aren’t entirely dark; the effect depends on the balance between exposure, the speed of the movies and the screen angle.

We’ve compiled the best stills from the film into a print-on-demand Making Future Magic book which you can buy for £32.95/$59.20. (Or get the softcover for £24.95/$44.20.)
