Due to a lack of time and a lack of inspiration, I asked my Berg colleagues to help write my blog entry this week. Inspired by a recent NPR Pop Culture Happy Hour podcast, I asked them what they would consider their pop culture “comfort foods”: music, movies, books, TV shows, games, etc. that they return to time and again because they are comfortable and familiar, bring them back to a happy place, or create a certain feeling in them. NPR’s Linda Holmes described it as things that “we turn to when we get into a cultural rut and want to reawaken our love of the things we love, as it were.”
I can think of so many things that fit in this category for me. Here are a few:
The Sound of Music (both the film and the soundtrack)
Pride and Prejudice – both the book and the films (both the Colin Firth & Jennifer Ehle version and the Keira Knightley & Matthew Macfadyen version)
The West Wing
Buffy the Vampire Slayer
U2 – Achtung Baby (brings me right back to my first year at university)
Hem – Eveningland
A House Like a Lotus and A Ring of Endless Light by Madeleine L’Engle
I’m happy to say that everyone in the studio humoured my request. Here’s what they had to say:
Winnie the Pooh
Midnight Run, must have watched it 50 times. The most re-watchable film of all time.
Also Rhubarb and Custard (As a kid I slept under the animation table at Bob Godfrey Studios on Neal St, still remember Bob doing the voices).
I still return to many of Kieslowski’s films, they were formative in my understanding of film.
Indiana Jones And The Last Crusade
“Can’t buy a thrill” by Steely Dan
Yo La Tengo – Little Honda just for the distortion sound if nothing else
Any video of Sister Rosetta Tharpe I can find
The drum battle where Steve Gadd (he starts at 2.45 in the clip below) launches a stomp attack on Vinnie Colaiuta and Dave Weckl and their supple wrists.
Asimov’s “Robots of Dawn”
Mystery Science Theatre 3000 episodes
Underworld’s “Second Toughest In The Infants”
Once Upon a Time in the West, which has the single best concentrated set piece scene of any film at any period in history. It is beautiful, epic, speaks truth to humans, society, and history, and I can watch it infinitely.
Starship Troopers, the book, and actually any sci-fi stories from the 1940s to the 1970s I can find in second hand shops or Project Gutenberg
‘F-Zero’ / ‘Unirally’ for games
Songs in the Key of Life by Stevie Wonder for music
‘C’était un Rendezvous’ for moving image
Ladder of Years by Anne Tyler
And the films: My Neighbour Totoro (for the scene at the bus stop)
Bourne Ultimatum (for the scene at Waterloo station)
This nutrigrain ad from a billion years ago, which I don’t think ever got aired but gets better every time you watch it:
So how about you? What are your culture “comfort foods”?
Chris O’Shea’s Body Swap is a Kinect-based installation that lets two people control “paper cut-outs” of one another. Especially fun, as the video proves, with two people of very different height – and the provision of music to encourage acting and play is a nice touch.
Another Kinect-related link: this Flickr set shows what happens when you map depth data (from a Kinect sensor) to a traditional digital camera photograph – and then pivot and distort it in three dimensions. The above image is probably my favourite, but the whole set is worth a look – if only for the way the set progresses through increasingly distorted takes on the original photographs.
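The basic technique is simple enough to sketch. Assuming you have a depth map already aligned to the photograph, each pixel can be back-projected into a 3D point with a pinhole-camera model, and the resulting point cloud pivoted to view the photo from impossible angles. A rough Python/NumPy sketch – the focal length here is an invented value, not the Kinect’s real calibration:

```python
import numpy as np

def to_point_cloud(rgb, depth, focal=525.0):
    """Back-project each pixel into 3D using an aligned depth map.

    rgb:   (H, W, 3) colour image
    depth: (H, W) depth values aligned to the photo
    focal: assumed focal length in pixels (hypothetical value)
    """
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / focal          # pinhole camera model
    y = (v - cy) * z / focal
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colours = rgb.reshape(-1, 3)
    return points, colours

def rotate_y(points, angle):
    """Pivot the cloud about the vertical axis to view it from the side."""
    c, s = np.cos(angle), np.sin(angle)
    r = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return points @ r.T
```

Viewed even a few degrees off-axis, gaps open up behind foreground objects – which is exactly the distortion the set plays with.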
3ERD is a tumblelog of jitter-gif photographs from Matt Moore. He’s using a stereoscopic compact camera (a bit like, say, the Fuji W1) to take stereoscopic images – but then turning the left and right image into a two-frame animated gif. The results are uncanny. It’s hard to comprehend that both frames were taken at the same time, however simple the idea may seem; the translation of two images separated in space into two images separated by time is a strange one to wrap your head around. A little slice of bullet-time.
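If you have a left/right pair to hand, producing one of these jitter gifs is only a few lines of work. A sketch using the Pillow imaging library (the file names are placeholders):

```python
from PIL import Image

def wiggle_gif(left_path, right_path, out_path, delay_ms=120):
    """Turn a stereo pair into a two-frame 'jitter' gif.

    The left and right exposures were taken at the same instant from
    slightly different positions; alternating them rapidly converts
    that spatial offset into apparent motion.
    """
    left = Image.open(left_path).convert("P", palette=Image.ADAPTIVE)
    right = Image.open(right_path).convert("P", palette=Image.ADAPTIVE)
    left.save(
        out_path,
        save_all=True,
        append_images=[right],  # second frame: the other eye's view
        duration=delay_ms,      # time each frame stays on screen
        loop=0,                 # loop forever
    )
```

The only real design choice is the frame delay: around a tenth of a second seems to be where the stereo offset reads as wobble rather than flicker.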
50 Watts’ Space Teriyaki is a wonderful collection of Japanese futurist art and imagery from the seventies and eighties. It veers between the bleak and gynaecological; throughout, though, there’s a fascinating use of colour and form.
UFO On Tape (iTunes link) is a game for iOS that simulates tracking a UFO with a video-camera. The magic is in the game’s total commitment to an aesthetic: the grainy, fuzzy simulated video; the panicked advice from a girl next to you; and, best of all, the way the iPhone embodies the video camera – it is, after all, also a camera itself – as you fling it around, oblivious to the real world, tracking an imaginary flying saucer. Good stuff.
Last week, Matt J gave a crit to final-year students in Wassim Jabi’s ‘Digital Tectonics Studio’ at the Welsh School of Architecture. He shared this footage of a model by Tom Draper, exploring the idea of a mechanical screen in front of a building that would cast shadows similar to a dragon curve fractal. In order to explore how this might work – what it’d feel like to experience those shadows, how you might mechanically create those shadows out of rods – he had to build. Thinking through making. There are also some lovely photos of the model.
Line Block by Korean designers Junbeom So, Lee Ji Eun, Yi-Seo Hyeon, Heo-Hyeoksu and Jeong Minhui proposes an alternative to cable tangles: power cables that can be joined together through tongue-and-groove rubber. I also liked that, in the cross-section, the cable is a surprised little fella. (via Yanko Design)
These links are a bit late because last Friday I was at The Design Of Understanding – a day-long conference at the St Bride Library. It was a cracking event, with lots to chew over – I’ll see if I can get my notes up soon.
Friend-of-Berg Chris Heathcote talked about New New Media – a swift overview of ubicomp and other aspects of situated computing. One highlight was when he took apart the common example of coffee shops offering you a discount as you walk by, asking:
…what ratted on you? Your Nike+ talking shoes, using a credit card nearby, your car number plate being recognised, your phone reporting your location, or your Oyster card informing the system that you’ve just come out of Oxford Circus tube?
The whole example is good – but I liked the idea of ubiquitous computing devices tattling on you, like naughty children; Chris’ use of “ratted” reminds us that such behaviours can be as much a hindrance as a help. The full talk is definitely worth your time.
Matt B wrote about Music For Shuffle this week: a single composition made out of many audio files, designed to be played in random orders on any device. And, of course, when I say “wrote about”, I also mean composed. You should go and listen to it right now!
Matt explained more:
I set myself a half-day project to write music specifically for shuffle mode – making use of randomness to try and make something more than the sum of its parts. The ever-brilliant Russell Davies (who works a few desks away at the BRIG) sowed the seed of the idea in my head around January 2011.
Over an hour or so, I wrote a series of short, interlocking phrases (each formatted as an individual MP3) that can be played in any order and still (sort of) make musical sense.
Brilliant. Matt’s notes on influences and the process behind the composition make for great reading: as ever, there’s a lot of thought and insight there, expressed succinctly, and lots of nice jumping-off points within his notes.
Another form of light-painting, this time from Daito Manabe. By firing a laser at a wall coated in fluorescent paint, an image appears. As subsequent “passes” of the laser describe elements closer to the foreground of the image, those areas of the wall are “activated” again and stay brighter; the elements towards the rear of the image stay darker. It takes a while to process what’s happening when you first see it, but the moment it all clicks into place feels great.
Chris Harrison’s Abracadabra is a prototype interface for very small devices. What might a rich interface for a device too small for a touchscreen look like? Harrison’s interface is based upon magnets: a tiny magnet on the fingertip, detected by a two-axis magnetometer in the device – providing enough sensitivity to track movements in a horizontal plane, as well as a “clicking” action in the z-axis. Extending the space of physical interaction outside the device makes a lot of sense, and it’ll be interesting to see where this kind of interface goes in the next few years.
Fizzogs popped up on the studio mailing list last week, and there followed a brief reminiscence for Ken Garland’s work for Galt Toys, which included the marvellous Connect. Matt J brought his copy in; even the box is gorgeous:
Simple, well designed games, with lovely graphic design and colours, that still manage – very much – to be toys to be played with.
Unless the behaviours and personalities of these things that compute are designed well enough the things that are not so good about them or unavoidable have the potential to come across as flaws in the object’s character, break the suspension of disbelief and do more harm than good. Running out of batteries, needing a part to be replaced or the system crashing could be seen as getting sick, dying – or worse – the whole thing could be so ridiculous and annoying that it gets thrown out on its ear before long.
There’s lots of other nice points in here; too many to quote. Notably, I liked the idea of considering what an object’s Attract Mode might be; similarly, using role-playing/method-acting/improv as sources of experience in designing subtle experiences. Good stuff.
Earlier in the week, Matt W asked if there were any games that took advantage of outputting on more than one screen. Not necessarily the usage of side-by-side screens to increase the field of view, either – but different screens that perform totally different functions.
I pointed out that there was some precedent – although not a lot – and what began as a conversation quickly became a list that was worth sharing and explaining a bit.
This isn’t the kind of thing Matt meant. Whilst it’s definitely a part of this conversation, the Forza Motorsport series’ use of multiple monitors to increase the field of view is the kind of thing that’s not actually very interesting. It doesn’t alter the game in any significant way. It’s also a brute force solution: each screen is rendered by its own Xbox, and all the consoles are slaved together over a local network.
I think what Matt meant was separate screens performing different functions.
At the very simplest level, second screens can act as contextual displays – parts of the HUD or interface broken out to their own display.
The strategy game Supreme Commander allows players to use a second monitor for a zoomed-out tactical map. Rather than reducing the map to the corner of the screen (as many strategy games do), or forcing the player to constantly zoom in and out, the second screen provides a permanent context for what’s going on on the primary screen.
A similar type of contextual screen can be seen on the Sega Dreamcast. The VMU memory unit was designed as a miniature console itself, with a screen and set of controls. When docked with the joypad, it acted as a second screen in the player’s hands.
The VMU was not used as effectively in the role of “second screen” as it might have been, although there were exceptions. Resident Evil: Code Veronica, for instance, used the VMU to display the player character’s health (which was otherwise only visible in the status menu).
Of course, there’s a limit to how many secondary screens are sensible; shortly after the announcement of the Nintendo DS, the above spoof was widely circulated. It’s a good point: lots of little screens right next to each other aren’t very different from one big screen.
The most interesting usage of multiple screens is in their capacity to affect gameplay itself. What sort of games would you design when players can have different viewports onto the world?
Pac-Man VS is my favourite answer to that question so far. It’s four-player Pac-Man, on the Nintendo Gamecube. Three players play ghosts: they play on the TV, with Gamecube pads. They have a 3D-ish view of a limited part of the map, and a radar in the bottom-right to know where the others are.
The fourth player is Pac-Man; they don’t use a Gamecube joypad. Instead, they play on a Gameboy Advance, plugged into the Gamecube with a connection lead:
The Gameboy screen shows the Pac-Man player the entire map. Pac-Man’s superpower over the ghosts is context; he has knowledge of the whole map. The ghosts are more powerful, but can’t see nearly so much.
It’s marvellous: fun, social, and utterly ingenious. There were a few other games for the linkup cable designed around players having their own screens – Final Fantasy: Crystal Chronicles and Zelda: Four Swords are the obvious examples – but Pac-Man VS remains the stand-out, for me.
One recent example of this sort of approach is Scrabble on the iPad, which lets you use the pad as a board, and other iOS devices for each player to hide their tiles. But it feels so unimaginative: the secondary screens feel like they’ve been used simply because it was possible; they’re no more than direct analogues for real-world objects. (It’s also an absurdly expensive way to play Scrabble.)
Nintendo’s DS focused on the usage of a secondary screen as context and extra information – but in a parallel universe, I’m sure there’s a DS that looks much like this:
This imaginary console affords all manner of games based on hidden knowledge and incomplete views of the world. And, just like a tandem, it looks wrong without someone else playing with you; it indicates how it wants to be used, inviting a second player.
My imaginary console is entirely symmetrical in its design. It’d be a shame to only encourage games that gave symmetrical abilities for both players, in the same way as games like Guess Who? or Battleships. Asymmetric games – where players have very different abilities, or viewpoints, much like Pac Man VS above – are, for me, a more interesting notion to explore with multiple screens. Imagine games where players may have not only very different abilities or tasks to one another, but also might be played on totally different types of screen from one another.
Super Mario Galaxy demonstrated a co-operative approach to asymmetric play. Rather than being another avatar in the world alongside Mario, a second player could use their Wiimote to scoop up star bits as they passed. They did nothing else, and could drop in and out when they liked; theirs was a purely additive role. It allowed a player with different capabilities – or attention – to drop in and out of the game, always helping, but never being critical to Mario’s success.
To extend that idea to screens: what are the gameplay modes for a friend with a touchscreen tablet, whilst I’m playing on a console attached to the TV? Mechanic to my racing driver? Coach to my football team? Evil overlord planting traps in the dungeon I’m exploring?
I don’t know yet. This at least feels like the start of a useful catalogue of multiple-screen play. And as screens become smarter, and “screen” and “device” increasingly become synonyms for one another, the world of multiple-screen play feels like an exciting, and ripe area to explore.
I was lucky enough to be invited to take part in the Wonderlab a few weeks ago. The official site described it like so:
[An event that brings] together some of the smartest creatives from the digital, gaming, theatre and performance fields, to spend three days exploring where digital tools and the ethos of play will take us next.
Ever since I got back from it, though, I’ve mainly been asked what the Lab actually was.
Now that I’ve decompressed from the intensity of those three days, it’s easier to both write about the event itself, and answer that question. The short video above may provide some hints, but might also just look like a bunch of grownups talking and playing games. It deserves a more detailed explanation.
The Lab was a small event, with 10 invited participants from a variety of backgrounds – performers, artists, designers, technical types. We all were, however, connected by our interest in play or games. Given the tiny size, and that it was invite-only, it doesn’t feel fair to label it as a conference. And though there was a great deal of freedom in our discussions and sessions over the three days, the Lab differed from a conference in that a definite outcome was required: as a group, we had to present “our findings” – whatever they’d turn out to be – in the format of a card game.
You could have called it a three-day game-design workshop, except it’s not entirely fair to call it a workshop, either: the format of our conclusion may have been dictated, but what conclusion we were aiming for was not clear to begin with. We had a trajectory, the event shaped by tiny, five-minute talks from each of the participants and a range of guest speakers, all talking about something that “blew their mind”, and leading into subsequent discussion. We had a few sessions where we raised topics we felt relevant to the discussion of play and games, and as the Lab went on, definite themes emerged. And then, we would have to stop talking, and make things – tiny, prototype games to prove a point; collaborative rulesets in a session of Nomic; slowly putting what we “thought” and “believed” into practice. And then, from a practical session, back to discussion and analysis.
The term “Lab” eventually proved to be the most succinct explanation of affairs. It was a space that encouraged both exploration and experimentation, not favouring one over the other, and definitely emphasising the value of thinking through making. By the end of the three days, we’d designed about two-and-a-half games each, and explored countless others. Nothing focuses the mind like having to put your discoveries and beliefs into physical, playable form.
The Lab fostered a growing literacy of games, considering “literacy” as Alan Kay did – the ability to read and write in a given medium. Early on, we played a simple parlour game called Chairs: the goal being to stop a slowly walking player from sitting down on the last available chair by moving between chairs yourselves. It’s a simple game, and yet as a group, we were terrible at it. But after the initial burst of hilarity, we took it apart: what’s going on, why are we failing, what are the simple guidelines to ensure success? We were still lousy even with our newly considered perspective – and I would love to build an AI simulation just to prove how dumbly you can play and still win the game – but we were beginning to understand our lousiness. And thus the Lab continued: talks, discussions, or games would be presented, taken apart, put back together. I valued being asked to prove or embody a belief; the test was not to succeed, but merely to try.
What did I actually get out of it that I can explain in a concrete sense?
One overriding theme was the ethics of game-design. It’s a huge topic, especially in this post-Jesse-Schell universe, and we explored it very thoroughly in some of the sessions. By the end, we’d designed both a game you could only lose, and a game where everybody would win. We created rules that were, in the real world, entirely unethical, but within the closed system of the game we were playing not only ethical but effectively irrelevant. We considered ethics of structured, rule-based play – games themselves – versus the ongoing act of unstructured play.
For someone so interested in games as systemic media (to quote Eric Zimmerman), I was surprised by how enamoured I became of the performative aspects of games. That was no doubt partly down to the insight brought to our sessions by the numerous participants with performance backgrounds. In my notes, I wrote:
Games don’t have to be performance-based, but games that don’t afford performance are weaker for it
This is, I guess, what Matt J has previously described as toyetics – it’s the fun you can have with a system, the ways it affords non-structured play, the ways it encourages you to interact with other people in a social capacity. It’s the fun you can have just playing. Games aren’t just rules – they’re rules you can play with, and the best games often afford the best play.
I also finally became convinced of the value of MDA [pdf] as a framework for understanding games; previously, it had never really clicked with me. In particular, I came to appreciate the value of rules and Mechanics emerging from Dynamics – often in the form of exploration or improvisation. If the act of play isn’t fun, or challenging, or interesting, why should a game that demands non-fun actions be any good at all? Guest speaker Tassos Stevens put this much better than I currently can in his wonderful short talk, Make Believe:
Game arises from play. A ruleset crystallises a set of actions distilled from an experience of play. That crystal can be popped in your pocket to be played with again and again, any time, any place, with anyone entranced by its sparkle. It gets chipped and scratched, then rubbed and polished… the very best thing about it is that if we want to, we can smash it up and grind it into paste to make believe anew.
What we were doing at the lab was learning how to make games arise.
The game we eventually presented, to a small invited audience, was Couple Up: a site-specific parlour-game, based on getting the guests invited to the final session from one room (where they were socialising) to another (where there was booze). It used cards as a social token, but the game was played in conversations between players. From the video above, it might seem slight and whimsical; it’s certainly a little bit broken, and needs some revisions. But at its core are a few things we wanted to explore: designing ethical games; designing games that force you to learn to “read” them; games that afford performance; games that exploit hidden knowledge (both on the part of the players and the game-makers). That exploration happened not only in the making, but also in watching our guests play the game, and subsequently discuss it with us afterwards.
The standard of discussion and the quality of the participants and speakers throughout the Lab was fantastic. The fact that we were reined in – asked to stop talking and start explaining ourselves through making – was an important challenge, and a visible reminder of the value of thinking through making. And, of course, though our subject matter was play through the lens of games of all forms, I can already see the ways many of the lessons I learned apply to my work in design.
Some of the output of the Lab was very immediate – new colleagues, new ideas to take away. Some of it is lodged in my brain, not taking form right now, but burning away, and will no doubt nag me for the rest of the year. It acted like so many of my favourite conferences – not a reminder of things that I’ve failed to do in my work, or things that have to change immediately, but things to be thinking about in the long term, and to be incorporated into future output. Not One Big Idea, but a hundred ideas, percolating away, growing and mutating until they’re ready to use. I’ll be making use of what I learned in so many projects, and so much work, from here on out.
As such: it was a privilege to take part; thanks to Margaret, Miranda, Alex, and everyone at Hide & Seek who organised the event, not to mention LIFT and the Jerwood Foundation for their support – and, most of all, to the other participants, who all brought something wonderful to the mix.
(You can see all the participants’ short talks, and a succession of more general videos, at the Wonderlab 2010 YouTube channel. My own five-minute talk, on the German boardgame Waldschattenspiel, is here.)
“…it’s likely that we’re locked into pursuing very conscious, very gorgeous, deliberate touch interfaces – touch-as-manipulate-objects-on-screen rather than touch-as-manipulate-objects-in-the-world for now.”
It does look very much like we’re living in that world now – where our focus is elsewhere than our immediate surroundings – mainly residing through our fingers, in our tiny, beautiful screens.
Like a lot of things here, they are deeply connected to other places. Their attention is divided. And, by extension, so is ours. While this feeling is common to all cities over time, cell phones bring the tangible immediacy of the faraway to the street. Helped along by media and the global logistics networks that define our material lives, our moment-to-moment experience of the local has become increasingly global.
Recently, of course, our glowing attention wells have become larger.
We’ve been designing, developing, using and living with iPads in the studio for a while now, and undoubtedly they are fine products despite their drawbacks – but it wasn’t until our friend Tom Coates introduced me to a game called Marble Mixer that I thought they were anything other than the inevitable scaling of an internet-connected screen, and the much-mooted emergence of a tablet form-factor.
It led me to think they might be much more disruptive as magic tables than magic windows.
Marble Mixer is a simple game, well-executed. Where it sings is when you invite friends to play with you.
Each of you occupies a corner of the device and attempts to flick marbles into the goal-mouth against the clock – dislodging the others’ marbles.
Beautiful. Simple. But also – amazing and transformative!
We’re all playing with a magic surface!
When we’re not concentrating on our marbles, we’re looking each other in the eye – chuckling, tutting and cursing our aim – and each other.
There’s no screen between us, there’s a magic table making us laugh. It’s probably my favourite app to show off the iPad – including the ones we’ve designed!
It shows that the iPad can be a media surface to share, rather than a proscenium to consume through alone.
[GoGos]’d be the perfect counters for a board game that uses the iPad as the board. They’d look gorgeous sitting on there. We’d need to work out how to make the iPad think they were fingers – maybe some sort of electrostatic sausage skin – and to remember which was which.
Inspired by Marble Mixer, and Russell’s writings – I decided to do a spot of rapid prototyping of a ‘peripheral’ for magic table games that calls out the shared-surface…
It’s a screen – but not a glowing one! Just a simple bit of foamboard cut so it obscures your fellow player’s share of the game board, for games like Battleships, or in this case – a mocked-up guessing-game based on your flickr contacts…
You’d have a few guesses to narrow it down… Are they male, do they have a beard etc…
Fun for all the family!
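The guessing half of that game is just repeated filtering: each yes/no answer throws away the inconsistent candidates. A tiny sketch, with invented contacts and attributes:

```python
# A sketch of the guessing game's core: each yes/no answer (ideally)
# halves the pool of candidate contacts. Names and attributes invented.
contacts = [
    {"name": "Alice", "male": False, "beard": False, "glasses": True},
    {"name": "Bob",   "male": True,  "beard": True,  "glasses": False},
    {"name": "Carol", "male": False, "beard": False, "glasses": False},
    {"name": "Dan",   "male": True,  "beard": False, "glasses": True},
]

def narrow(candidates, attribute, answer):
    """Keep only the contacts consistent with one yes/no answer."""
    return [c for c in candidates if c[attribute] == answer]
```

Two questions – “are they male?”, “do they have a beard?” – already get this little pool down to one person.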
Anyway – as you can see – this is not so serious a prototype, but I can imagine some form of capacitive treatment to the bottom edge of the screen, perhaps varying the amount of secret territory each player revealed to each other, or capacitive counters as Russell suggests.
Aside from games though – the pattern of a portable shared media surface is surely worth pursuing.
it feels way more like the future than the fitbit because it’s cheap, fashiony and simple.
The Replay is $20. It doesn’t need any connectivity to share your fitness scores – a code appears on the Replay’s screen and you type it into the S2H website. It makes a smiley face when you’ve done enough exercise. And that rubber bracelet is clearly designed to be replaced/customised/given away as a freebie.
Russell’s post has lots more detail and insight. As well as the device, I liked Russell’s use of “fashiony” as a watchword: something that feels fun and now and a little bit pop. Or to use a metaphor: the Replay isn’t Ikea, it’s American Apparel. For something like the Replay, I think that’s a good quality to have.
Makedo looks like a fun take on construction toys: “a set of connectors for creating things from the stuff around you“. It’s a construction set made only of connectors and hinges; the raw materials are left for you to find. The video above has some good examples of its possibilities. My only doubt is whether Makedo is toy-ish enough; the website makes it seem targeted more at an older, crafting audience. But there’s a charm and inventiveness in both the toy, and the play it enables, that I like, and I think that makes it worth a link. (Via Alice Taylor, who saw Makedo at the Toy Fair).
I think this was my favourite thing I saw this week: a downloadable game for Nintendo’s DSi. The aim of the game is to find letters hidden in 3D scenes, styled a bit like a cardboard toy theatre, by tilting the device around. The video you need to see is the second one down on this page – I can’t embed it. It’s mindboggling: a game all about perspective and visual trickery, which looks utterly beautiful. Even more impressively: the DSi has no accelerometer, just two 640×480 cameras – so all that movement is being calculated through motion tracking.
I was mainly taken with how beautiful it was, though. The only sad thing: I don’t read Japanese, so I have no idea what it’s called. I hope it comes out in the English-speaking world soon.
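Camera-only motion sensing of this kind boils down to estimating, frame by frame, how the image has shifted. A brute-force toy version in Python/NumPy – nothing like the DSi’s actual implementation, but it shows the principle:

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Estimate how the camera moved between two greyscale frames by
    finding the integer (dy, dx) offset that best aligns them.

    A brute-force search over small offsets; np.roll wraps at the
    edges, which is fine for a toy but wrong for real video.
    """
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(prev, (dy, dx), axis=(0, 1))
            err = float(np.mean((shifted - curr) ** 2))
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

A real tracker would use sub-pixel matching and feature points rather than exhaustive search, but the output is the same kind of signal an accelerometer would otherwise provide.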
Image: taken from Amos Topping’s slide of Radiolarians
Finally: scratching and drumming with a set of holographic heads (via Scott Beale). This is a live performance of Chris Cairns’ Neurosonics Audiomedical Labs inc, and the live element elevates it from “nifty video effects” to something far more ingenious. It made me laugh, too.
Augmented Reality Link Of The Week #1: Scope, by Frantz Lasorne. Scope is an AR tabletop wargame, played with special markers and (in a nice touch) any toys you have lying around. The interface and “game” elements are all projected onto the scene through the goggles.
I like this because it’s consistent and realistic in its use of AR: it makes sense to wear goggles or some other kind of apparatus, because you’re an army commander surveying a battlefield. And I like that reality is genuinely being augmented here: the AR element is interface and head-up display, as opposed to some 3D element pretending to be real but clearly failing at that. AR is, quite rightly, part of the novelty of Scope.
And from the sublime to the ridiculous, as it were. This is Tribal DDB Asia’s “3D McNuggets Dip“, “The first 3D Augmented reality dipping game with McNuggets”.
It’s AR as pure novelty: a marker to be used with a Flash webcam app, dragging an AR McNugget around a screen much like you might with a mouse, the sole novelty in the proposition being AR. It’s barely AR; it’s more Marker As Interface – much closer in implementation to the way a Wii Remote might be used.
Excitingly, they’ve been targeted not at existing eReaders, nor at a simplified eReader aimed at children, but at a device with a touchscreen that many kids already own: the Nintendo DS.
It’s a deal between publishers Egmont Press and Penguin, with games company EA. The titles are priced at £24.99 – nearly the cost of a full DS game, but each cartridge has “6-8″ titles on it, which cuts the cost per book down to that of a paperback. And then, of course, there’s all the supplemental material.
I like the idea of Flips (as the titles are known) because they’re basically nothing new: an existing product retargeted simply by aiming at a new, simpler, cheaper platform – and one that many kids already have. There’s nothing complex here in the software or the strategy, but if the implementation’s good, then perhaps they’ll be a success.
Sure, the DS screen isn’t as easy on the eyes as a Kindle’s, and the resolution is lower, but that might be less of an issue for ten- and eleven- year olds.
It’ll be interesting to see how they sell; it’ll also be interesting to see if it sparks interest in reading, and also where they’ll be stocked: games shops are likely to carry them, but will bookshops as well? We’ll find out in December, just in time for the Christmas rush.
And, finally, a small piece of gaming nostalgia that made me smile: the 1978 Atari catalogue, featuring titles for the VCS/2600. I like it if only for its emphasis on anything but the game screens, instead focusing on the large amounts of commissioned art. That cover brings nothing to mind so much as Mr Benn, and reminds me of the escapism – the different outfits one can wear – that computer games have always had at their heart.
When I wrote about Text In The World over on my personal blog a few weeks ago, our colleague Matt Jones left a comment:
“preparing us for AR” (augmented reality)
And this got me thinking about the ways that design and media can educate us about what future technologies might be like, or prepare us for large paradigm shifts. What sort of products really are “preparing” us for Augmented Reality?
A lot of the consumer-facing output of Augmented Reality at the moment tends to focus on combining webcams with specifically marked objects; Julian Oliver’s levelHead is one of the best-known examples:
But when AR really hits, it’s going to be because the technology it’s presented through has become much more advanced; it won’t just be webcams and monitors, but embedded in smart displays, or glasses, or even the smart contact lenses of Warren Ellis’ Clatter.
So whilst it’s interesting to play with the version of the technology we have today, there’s a lot of value to be gained from imagining what the design of fully-working AR systems might look like, unfettered by current-day technological constraints. And we can do that really well in things like videos, toys, and games.
Here’s a lovely video from friend and colleague of Schulze & Webb, Timo Arnall:
Timo’s video imagines using an AR map in an urban environment. I particularly like how he emphasises that there are few limitations on scale when it comes to projecting AR – and the most convenient size for certain applications might be “as big as you can make it”. Hence projecting the map across the entire pavement.
Here’s another nice example: the Nearest Tube application for the iPhone 3GS:
This is perhaps a more exciting interpretation of what AR could be, and what AR devices might be (not to mention a working, real-world example): the iPhone becomes a magic viewfinder on the world, a Subtle Knife that can cut through dimensions to show us the information layer sitting on top of the world. It helps that it’s both useful and pretty, too.
Games are a great way of getting ready for the interfaces that technologies like AR afford. Here’s a clip I put together from EA Redwood Shores’ Dead Space, illustrating the game UI:
Dead Space has no game HUD; rather, the HUD is projected into the environment of the game as a manifestation of the UI of the hero’s protective suit. It means the environment can be designed as a realistic, functional spaceship, and then all the elements necessary for a game – readouts, inventories, not to mention guidelines as to which doors are locked or unlocked – can be manifested as overlay. It’s a striking way to place all the game’s UI into the world, but it’s also a great interpretation of what futuristic AR user interfaces might be like.
Finally, a toy that never fails to make me smile – the Tuttuki Bako:
This is Matt Jones playing with a Tuttuki Bako in our studio. You place your finger into the hole in the box, and then use it to control a digital version of your finger on screen in a variety of games. It’s somewhat uncanny to watch, but serves as a great example of a somewhat different approach to augmented realities – the idea that our bodies could act as digital prosthetics.
All these examples show different ways of exploring an impending, future technology. Whilst much of the existing, tangible work in the AR space is incremental, building upon available technology, it’s likely that the real advances in it will be from technology we cannot yet conceive. Given that, it makes sense to also consider concepting from a purely hypothetical design perspective – trying things out unfettered by technological limitations. The technology will, after all, one day catch up.
What’s exciting is that this concept and design work is not always to be found in the work of design studios or technologists; it also appears in software, toys, and games that are readily consumable. In their own way, they are perhaps doing a better job of educating the wider world about AR (or other new technologies) than innumerable tech demos with white boxes.