
Guardian Headliner: The newspaper that looks back at you…

Headliner is an experiment in online reading that BERG conducted in a short project with The Guardian. It uses face detection and term extraction to create “a newspaper that looks back at you”.

Headliner: Final Prototype

It’s part of a series of experiments and prototypes that The Guardian is initiating with internal and external teams. You can try it for yourself here: http://headliner.guardian.co.uk

Headliner: Final Prototype

Jack led the project, and we got a dream team of past collaborators to work on it with us: Phil Gyford, who had already done loads of thoughtful design of new reading experiences for the Guardian with his ‘Today’s Guardian’ project, and the brilliant designer James King, who we had worked with previously on the Here & There Maps.

I asked Jack, Phil and James to share their thoughts about the process and the prototype:

Jack Schulze:

Faces come up in news articles a lot; editors exercise artistry in picking photos of politicians or public figures at their most desperate. Subjects caught glancing in the wrong direction or grimacing are used to great effect and drama alongside headlines.

Headliner makes use of face detection to highlight the eyes in news photographs. It adds a second lens to the existing photo, dramatising and exaggerating the subject. It allows audiences to read more meaning into the headline and context.

Headliner: Final Prototype

Graphically, Headliner departs from the rules and constraints that news has inherited from print. It references the domain’s aesthetic through typography, but adopts a set of behaviours and structures only available in browsers and on the web.

Phil Gyford:

We wanted to retain much of what makes Today’s Guardian a good reading experience but find more in the text and images that we could use to make it less dry. We decided to rely solely on the material we can get from the Guardian’s API, alongside other free services and software.
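Fetching that material looks roughly like the sketch below, written against the Guardian’s public Content API; the exact fields requested are illustrative assumptions, not necessarily what the prototype asked for.

```python
import requests

# A sketch of pulling articles from the Guardian's Content API.
# The endpoint is the public one; the fields requested here are an
# assumption, not necessarily what the prototype used.
resp = requests.get("https://content.guardianapis.com/search", params={
    "api-key": "YOUR_API_KEY",
    "show-fields": "headline,standfirst,body,thumbnail",
    "page-size": 20,
})
resp.raise_for_status()

for article in resp.json()["response"]["results"]:
    print(article["webTitle"], article["webUrl"])
```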

We looked at various ways of extracting useful data from the text of articles. It had been some years since I’d last dabbled with term extraction and I was surprised that it didn’t seem wildly better than I remembered. We settled on using the free Calais API to pull useful terms out of articles, but it’s quite hit and miss — some places and people’s names that seem obvious to us are missed, and other words are erroneously identified as significant. But it gave us a little something extra which we could use to treat text, and also to guess at what an article was about: whether it was focused on a person or a place, for example.
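Calling Calais looked roughly like this; the endpoint, headers and response fields below are reconstructed from the REST interface of the time, so treat the details as assumptions rather than a working recipe for today’s service.

```python
import requests

# A sketch of term extraction with the free Calais API as it worked
# at the time. Endpoint, headers and response fields are
# reconstructions and may not match the service today.
CALAIS_URL = "http://api.opencalais.com/tag/rs/enrich"

def extract_entities(text, api_key):
    resp = requests.post(CALAIS_URL, data=text.encode("utf-8"), headers={
        "x-calais-licenseID": api_key,
        "Content-Type": "text/raw",
        "Accept": "application/json",
    })
    resp.raise_for_status()
    entities = [v for v in resp.json().values()
                if isinstance(v, dict) and v.get("_typeGroup") == "entities"]
    # A crude "what is this article about?" guess: lots of Person hits
    # suggest a person story, City/Country hits a place story.
    people = [e for e in entities if e.get("_type") == "Person"]
    places = [e for e in entities if e.get("_type") in ("City", "Country")]
    return people, places
```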

We wanted to do more with the articles’ images, and focusing on faces seemed most interesting. We initially used the Face.com API to identify faces in the images and, in particular, the eyes. This worked really well, and with a bit of rotating and cropping using PIL we could easily produce the inevitably small pictures of eyes. (All the article text and images are pre-processed periodically on the back-end using Python, to create a largely static and fast front-end that just uses HTML, CSS and JavaScript.)
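The rotate-and-crop step might look something like this in modern Pillow (not the original PIL); the eye coordinates come from whichever detector is in use, and the padding proportions are illustrative guesses.

```python
import math
from PIL import Image

def crop_eyes(img, left_eye, right_eye, pad=0.6):
    """Level the eye line, then crop a strip around the eyes.
    Eye coordinates come from the face detector; the padding factors
    are illustrative. Needs a recent Pillow for rotate()'s center=."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    cx, cy = (lx + rx) / 2, (ly + ry) / 2
    # Rotating by the eye-line angle around the eyes' midpoint makes
    # the eyes horizontal while keeping the midpoint fixed.
    angle = math.degrees(math.atan2(ry - ly, rx - lx))
    img = img.rotate(angle, center=(cx, cy))
    half_w = math.hypot(rx - lx, ry - ly) * (0.5 + pad)
    half_h = half_w * 0.3
    # crop() pads with black if the box runs off the image edge.
    return img.crop((int(cx - half_w), int(cy - half_h),
                     int(cx + half_w), int(cy + half_h)))
```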

Antero face.com experiments

Unfortunately for us, Face.com were bought by Facebook and promptly announced the imminent closure of their API. We replaced it with OpenCV using Python, which is trickier, and we don’t yet have it working quite as well as Face.com’s detection did, but it’s a good, free alternative under our control.
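In OpenCV the detection step is a Haar cascade, something like the sketch below; this is the general approach rather than the project’s exact code, and the tuning values are common defaults.

```python
import cv2

# Haar-cascade eye detection with OpenCV. cv2.data.haarcascades points
# at the cascade files bundled with the opencv-python packages; the
# tuning values here are common defaults, not the project's.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("article-photo.jpg")
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns (x, y, w, h) boxes. scaleFactor and minNeighbors trade off
# recall against false positives -- the fiddling Face.com spared us.
eyes = eye_cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in eyes:
    print("eye at", x, y, w, h)
```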

Enlarging the cropped eye images looked great: eyes that seemed guilty or angry or surprised, emphasising the choices of picture editors, stared out at you below a headline. We tried giving these images a halftone effect, to emphasise the newspaper printing context of the stories, but unfortunately it didn’t work well with such tiny source images. (Here’s the code for the halftoning effect though.)
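For the curious, a crude halftone in PIL works along these lines: one black dot per cell, sized by the cell’s darkness. This is a sketch of the general technique; the linked code is the real thing.

```python
from PIL import Image, ImageDraw

def halftone(img, cell=8, scale=4):
    """Very rough halftone: one black dot per cell on a white ground,
    with the dot radius driven by the cell's mean darkness. A sketch
    only; see the linked code for the effect actually tried."""
    grey = img.convert("L")
    w, h = grey.size
    out = Image.new("L", (w * scale, h * scale), 255)
    draw = ImageDraw.Draw(out)
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            box = grey.crop((x, y, min(x + cell, w), min(y + cell, h)))
            pixels = list(box.getdata())
            mean = sum(pixels) / len(pixels)
            r = (1 - mean / 255) * cell * scale / 2  # darker => bigger dot
            cx, cy = (x + cell / 2) * scale, (y + cell / 2) * scale
            draw.ellipse((cx - r, cy - r, cx + r, cy + r), fill=0)
    return out
```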

Headliner: Early Graphic Studies

Browsers treated the drastically zoomed images differently. Chrome and Safari tried to smooth the grossly enlarged images out, which sometimes worked well, but we liked the effect in Firefox, which we could force to show the now-huge pixels using `image-rendering: -moz-crisp-edges;`. The chunky pixels made a feature of the cropped portions of images being so small, and we wanted to show this very raw material on all browsers. This was easily done on the front-end using the excellent Close Pixelate JavaScript library.
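Server-side, the same chunky-pixel look can be had with a nearest-neighbour upscale in PIL; the prototype did this client-side, so the snippet below is just an equivalent for illustration.

```python
from PIL import Image

# Nearest-neighbour resampling keeps the hard pixel edges that
# -moz-crisp-edges exposes in Firefox. The prototype pixelated
# client-side with Close Pixelate; this is a back-end equivalent.
eyes = Image.open("eye-crop.jpg")
big = eyes.resize((eyes.width * 10, eyes.height * 10), Image.NEAREST)
big.save("eye-crop-big.png")
```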

If we didn’t have any detected eyes to use, we didn’t want to simply enlarge the supplied photo; we wanted some variety, and to use more of the data we’d extracted from the text. So, if we’d determined that the article was probably focused on a place, we used Google’s Static Maps API to display a satellite image centred on the location Calais had identified.
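Building that image is just a URL. The parameters below follow the public Static Maps API; the place string is whatever Calais identified, and the example values are illustrative.

```python
from urllib.parse import urlencode

# Build a Google Static Maps URL for a place name extracted from the
# article. Parameters follow the public Static Maps API; "sensor" was
# required at the time (today the API wants a key instead).
def satellite_url(place, width, height, zoom=6):
    return "https://maps.googleapis.com/maps/api/staticmap?" + urlencode({
        "center": place,
        "zoom": zoom,
        "size": "%dx%d" % (width, height),
        "maptype": "satellite",
        "sensor": "false",
    })

print(satellite_url("Reykjavik, Iceland", 640, 400))
```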

Headliner: Final Prototype

We put all that together with a front-end based, for speed, on the original Today’s Guardian code, but heavily tweaked. We make images as big as we possibly can — take advantage of that huge monitor! — and enlarge the headlines (with the help of FitText) to make the whole thing more colourful and lively, and an interesting browsing experience.

James King:

To start with, we were most interested in how we might integrate advertisements more closely into the fabric of the news itself. Directing the reader’s attention towards advertising is a tricky problem to deal with.

Headliner: Design Development

One of the more fanciful ideas we came up with was to integrate eye-tracking into the newspaper (using the reader’s webcam) so that it would respond to your gaze and serve up contextually relevant ads based on what you were reading at any particular moment.

Headliner: Design Development

This idea didn’t get much further than a brief feasibility discussion with Phil, who determined that, given the tight deadline, building it was unlikely! What did survive, however, was the idea that the newspaper looks back at you.

Eyes are always interesting. Early on, we experimented with cropping a news photo closely around the eyes and presenting it alongside a headline. This had quite a dramatic effect.

Headliner: Design Development

In the same way that a news headline can often grab the attention but remain ambiguous, these “eye crops” of news photos could convey emotion but not the whole story. Who the eyes belong to, where the photo was taken and other details remain hidden.

Headliner: Design Development

In the same way that we were summarising the image, we thought about summarising the story, to see if we could boil a long piece down to a digestible 500 words. So we investigated some auto-summarising tools, only to find that they didn’t do a very good job of selecting the essence of a story.

Headliner: Design Development

Perhaps they take a lot of customisation, or need to be trained with the right vocabulary, but often the output would be comical or nonsensical. We did discover that Open Calais did a reasonably reliable job of selecting phrases within text and guessing whether each referred to a person, a place, an organisation and so on. While we felt that Open Calais wasn’t good enough to draw inferences from the article, we felt we could use it to emphasise important phrases in the headlines and standfirsts.

Typographically, it made sense to use Guardian Egyptian for the headlines, although we did explore some alternatives such as Carter One – a lovely script face available as a free Google font.

Headliner was a two-week experiment to explore the graphic possibilities of machine learning and computer vision applied to content.

Not everything works all the time (it’s a prototype, after all), but it hints at some interesting directions for new types of visual presentation, where designers, photo editors and writers work with algorithms as part of their creative toolbox.
