

Lamps: a design research collaboration with Google Creative Lab, 2011

Preface

This is a blog post about a large design research project we completed in 2011 in close partnership with Google Creative Lab.

There wasn’t an opportunity for publication at the time, but it represented a large proportion of the studio’s efforts for that period – nearly everyone in the studio was involved at some point – so we’ve decided to document the work and its context here a year on.

I’m still really proud of it, and some of the end results the team produced are both thought-provoking and gorgeous.

We’ve been wanting to share it for a while.

It’s a long post covering a lot of different ideas, influences, side-projects and outputs, so I’ve broken it up into chapters… but I recommend you begin at the beginning…


Introduction

 


At the beginning of 2011 we started a wide-ranging conversation with Google Creative Lab, around near-future experiences of Google and its products.

Tom Uglow, Ben Malbon of Google Creative Lab with Matt Jones of BERG

During our discussions with them, a strong theme emerged. We were both curious about how it would feel to have Google in the world with us, rather than on a screen.

If Google wasn’t trapped behind glass, what would it do?

What would it behave like?

How would we react to it?

Supergirl, trapped behind glass

This traces back to our studio’s long preoccupation with embodied interaction, and to our explorations of the technologies of computer vision and projection, which we’ve talked about previously under the banner of the “Robot-Readable World”.

Our project through the spring and summer of 2011 concentrated on making evidence around this – investigating computer vision and projection as ‘material’ for designing with, in partnership with Google Creative Lab.

Material Exploration

 


We find that treating ‘immaterial’ new technologies as if they were physical materials is useful in finding rules-of-thumb and exploring opportunities in their “grain”. We try as a studio to pursue this approach as much as someone trying to craft something from wood, stone, or metal.

Jack Schulze of BERG and Chris Lauritzen, then of Google Creative Lab

We looked at computer-vision and projection in a close relationship – almost as one ‘material’.

That material being a bound-together expression of the computer’s understanding of the world around it and its agency or influence in that environment.

Influences and starting points

 

One of the very early departure points for our thinking was a quote from Marissa Mayer (then of Google) at the Le Web conference in late 2010: “We’re trying to build a virtual mirror of the world at all times.”

This quote struck a particular chord for me, reminding me greatly of the central premise of David Gelernter’s 1991 book “Mirror Worlds”.

I read “Mirror Worlds” while I was in architecture school in the 90s. Gelernter’s vision of shared social simulations based on real-world sensors, information feeds and software bots still seems incredibly prescient 20 years later.

Gelernter saw the power to simply build sophisticated, shared models of reality that all could see, use and improve as a potentially revolutionary technology.

What if Google’s mirror world were something out in the real world with us, that we could see, touch and play with together?

Seymour Papert – another incredibly influential computer science and education academic – also came to our minds. Not only did he hold similar views about the importance of sharing and constructing our own models of reality, he was also a pioneer of computer vision: in 1966 he sent the ‘Summer Vision Memo’ – “Spend the summer linking a camera to a computer, and getting the computer to describe what it saw…”



Nearly fifty years on, we have Kinects in our houses and internet-connected, face-tracking cameras in our pockets, and ‘getting the computer to describe (or at least react to) what it saw’ seems to be one of the most successful tracks of the long quest for ‘artificial intelligence’.

Our thinking and discussion continued along this line, toward the cheapness and ubiquity of computer vision.

The $700 Lightbulb

 

Early on, Jack invoked the metaphor of a “$700 lightbulb”:

Lightbulbs and electric light went from a scientific curiosity to a cheap, accessible ubiquity in the late 19th and early 20th century.

What if lightbulbs were still $700?

We’d carry one around carefully in a case and screw it in when/where we needed light. They are not, so we leave them screwed in wherever we want, and just flip the switch when we need light. Connected computers with eyes cost $500, and so we carry them around in our pockets.

But – what if we had lots of cheap computer vision, processing, connectivity and display all around our environments – like light bulbs?

Ubiquitous Computing has of course been a long-held vision in academia, one which has in some ways been derailed by the popularity of the smartphone.

But smartphones are getting cheaper, Android is embedding itself in new contexts with I/O beyond the touchscreen, and increasingly we keep our data in the cloud rather than on dedicated devices at the edge.

Ubiquitous computing has long been seen by many as a future of cheap, plentiful, ‘throw-away’ I/O clients to the cloud.

It seems like we’re nearly there.

In 2003, I remember being captivated by Neil Gershenfeld’s vision of computing that you could ‘paint’ onto any surface:

“a paintable computer, a viscous medium with tiny silicon fragments that makes a pour-out computer, and if it’s not fast enough or doesn’t store enough, you put another few pounds or paint out another few square inches of computing.”

Professor Neil Gershenfeld of MIT

Updating this to the present-day, post-Web-2.0 world: if ‘it’s not fast enough or doesn’t store enough’, we request more resources from centralised, elastic compute clouds.

“Clouds” that can see our context and environment through sensors and computer vision, and that build up a picture of us through our continued interactions with them, in order to deliver appropriate information on demand.

To this we added the speculation that not only would computer vision be cheap and ubiquitous, but that excellent-quality projection would become as cheap and widespread as touchscreens in the near future.

This would mean that the cloud could act in the world with us, come out from behind the glass and relate what it sees to what we see.

In summary: computer vision, depth-sensing and projection can be combined as materials – so how can we use them to make Google services bubble through from the Mirror World into your lap?

How would that feel? How should it feel?

This is the question we took as our platform for design exploration.

“Lamps that see”

 

One of our first departure points was to fuse computer-vision and projection into one device – a lamp that saw.

Here’s a really early sketch of mine in which a number of domestic lamps see and understand their context, projecting onto and illuminating the surfaces around them with information and media in response.

We imagined that the type of lamp would inform the lamp’s behaviour – a more static table lamp might be less curious, or more introverted, than an angle-poise, for instance.

Jack took the idea of the angle-poise lamp further, thinking about how servo-motors might let the lamp move within the degrees of freedom its arm gives it on a desk – inquiring about its context with computer vision, tracking objects and people, and finding surfaces it can ‘speak’ onto with projected light.

Early sketches of “A lamp that sees” by Jack Schulze

Early sketches of “A lamp that sees” by Timo Arnall

Of course, in the back of our minds was the awesome potential for injecting character and playfulness into the angle-poise as an object – familiar to all from the iconic Pixar animation Luxo Jr.



And very recently, students from Victoria University of Wellington in New Zealand created something very similar at first glance, although the projection aspect is missing there.

Alongside these sketching activities around proposed form and behaviour we started to pursue material exploration.

Sketching in Video, Code & Hardware

 


We’d been keenly following work by friends such as James George and Greg Borenstein in the space of projection and computer vision – and a number of projects in the domain emerged during the course of our own work – but we wanted to understand it as ‘a material to design with’ from first principles.

Timo, Jack, Joe and Nick – with Chris Lauritzen (then of Google Creative Lab) and Elliot Woods of Kimchi & Chips – started a series of tests to investigate both the interactive and aesthetic qualities of the combination of projection and computer vision, which we started to call “Smart Light” internally.

First of all, the team looked at the different qualities of projected light on materials, and in the world.

This took the form of a series of very quick experiments, looking for different ways in which light could act in inhabited spaces, act on surfaces, and interact with people and things.

In a lot of these ‘video sketches’ there was little technology in use beyond a projector and Photoshop – but they let us imagine very quickly, and at human scale, how a computer-vision-directed ‘smart light’ might behave, look and feel.

Here are a few example video sketches from that phase of the work:

Sketch 04 Sticky search from BERG on Vimeo.

Sketch 06: Interfaces on things from BERG on Vimeo.

One particularly compelling video sketch projected an image of a piece of media (in this case a copy of Wired magazine) back onto the media – the interaction and interference of one with the other is spellbinding at close-quarters, and we thought it could be used to great effect to direct the eye as part of an interaction.

Sketch 09: Media on media from BERG on Vimeo.

Alongside these aesthetic investigations there were technical explorations – for instance, into using “structured light” techniques with a projector to establish a depth map of a scene…

Sketch 13: Structured light from BERG on Vimeo.
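
The gray-code flavour of structured light is simple enough to sketch: project a sequence of striped binary patterns, and the on/off sequence each camera pixel sees spells out which projector column it is looking at – from which depth follows by triangulation. Below is a minimal, illustrative sketch of the encode/decode step in Python with numpy. It is not the studio’s actual code; a real rig would threshold photographs of each projected pattern rather than decode the ideal patterns directly.

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Binary stripe patterns that, projected in sequence, label
    each projector column with its Gray code."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                        # binary -> Gray code
    bits = (gray[None, :] >> np.arange(n_bits)[:, None]) & 1
    return bits.astype(np.uint8)                     # shape: (n_bits, width)

def decode_gray(bits):
    """Recover projector column indices from the observed bit stack."""
    n_bits = bits.shape[0]
    gray = (bits.astype(np.int64) * (1 << np.arange(n_bits))[:, None]).sum(axis=0)
    # Gray -> binary via the standard shift-and-XOR folding trick.
    binary = gray.copy()
    shift = 1
    while shift < n_bits:
        binary ^= binary >> shift
        shift <<= 1
    return binary

width, n_bits = 1024, 10                             # 2**10 projector columns
patterns = gray_code_patterns(width, n_bits)
# In a real rig each pattern is projected, photographed and thresholded;
# here we just decode the ideal patterns to confirm the round trip.
assert (decode_gray(patterns) == np.arange(width)).all()
```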

Quickly, the team reached a point where more technical exploration was necessary, and built a test rig that could be used to prototype a “Smart Light Lamp” comprising a projector, an HD webcam, a PrimeSense/Asus depth camera and bespoke software.

Elliot Woods working on early software for Lamps

At the time of the project, the Kinect SDK – now ubiquitous in computer vision projects – was not officially available. The team plumped for the component approach over integrating a Kinect for a number of reasons, including wanting the possibility of using HD video in both capture and projection.

Testing the Lamps rig from BERG on Vimeo.

Nick recalls:

Actually by that stage the OpenNI libraries were out (http://openni.org/), but the “official” Microsoft SDK wasn’t out (http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx). The OpenNI libraries were more focussed on skeletal tracking, and were difficult to get up and running.

Since we didn’t have much need for skeletal tracking in this project, we used very low-level access to the IR camera and depth sensor facilitated by various openFrameworks plugins. This approach gave us the correct correlation of 3D position, high definition colour image, and light projection to allow us to experiment with end-user applications in a unified, calibrated 3D space.
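
That “unified, calibrated 3D space” comes down to familiar pinhole-camera maths: back-project each depth pixel into 3D through the depth camera’s intrinsics, move it into the projector’s coordinate frame, then project it through the projector’s intrinsics. Here is a rough sketch of the chain in Python/numpy – the studio’s rig was openFrameworks/C++, and the matrices below are made-up placeholders for values you would get from calibration:

```python
import numpy as np

# Placeholder intrinsics: focal lengths (fx, fy) and principal point
# (cx, cy) would come from calibrating the depth camera and projector.
K_depth = np.array([[570.0,   0.0, 320.0],
                    [  0.0, 570.0, 240.0],
                    [  0.0,   0.0,   1.0]])
K_proj  = np.array([[1500.0,    0.0, 640.0],
                    [   0.0, 1500.0, 400.0],
                    [   0.0,    0.0,   1.0]])
# Rigid transform from depth-camera space to projector space
# (rotation R and translation t, also recovered during calibration).
R = np.eye(3)
t = np.array([0.12, 0.0, 0.0])    # e.g. projector 12cm to the right

def depth_pixel_to_projector(u, v, depth_m):
    """Map one depth-image pixel (u, v) at depth_m metres to the
    projector pixel that lands on the same physical point."""
    ray = np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    point = ray * depth_m             # 3D point in the depth-camera frame
    point = R @ point + t             # ...now in the projector frame
    uvw = K_proj @ point              # project through the projector
    return uvw[:2] / uvw[2]           # perspective divide -> pixel coords

# Light aimed at this pixel will hit whatever the depth camera saw at (320, 240).
print(depth_pixel_to_projector(320, 240, 1.5))
```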

The proto rig became a great test bed for us to start to explore high-level behaviours of Smart Light – rules for interaction, animation and – for want of a better term – ‘personality’.

Little Brain, Big Brain

 

One of our favourite things of the last few years is Sticky Light.

It’s a great illustration of how little a system needs to do, for us to ascribe personality to its behaviour.

We imagined that the Smart Light Lamp might manifest itself as a companion species in the physical world, a creature that could act as a go-between for you and the mirror-worlds of the digital.

We’ve written about digital companion species before: when our digital tools become more than just tools – acquiring their own behaviour, personality and agency.

Bit, Flynn’s digital companion from the original Tron

You might recall Bit from the original Tron movie, or the Daemons from the Philip Pullman “His Dark Materials” trilogy. Companions that are “on your side” but have abilities and senses that extend you.

We wanted the Lamp to act as companion species for the mirror-worlds of data that we all live within, and Google has set out to organise.

We wanted the lamp to act as a companion species that illustrated – through its behaviour – the powers of perception that Google has through computer vision, context-sensing and machine-learning.

Having a companion species that is a native of the cloud, but on your side, could make evident the vast power of such technologies in an intuitive and understandable way.

Long-running themes of the studio’s work are at play here – beautiful seams, shelf-evidence, digital companion species and BASAAP – which we tried to sum up in our Gardens and Zoos talk/blog post, which in turn was informed by the work we’d done in the studio on Lamps.

One phrase that came up again and again around this area of the lamp’s behaviour was “Big Brain, Little Brain” – i.e. the Smart Light companion would be the Little Brain: on your side, understanding you and the world immediately around you, and talking on your behalf to the Big Brain in ‘the cloud’.

This intentional division – this hopefully ‘beautiful seam’ – would serve to emphasise your control over what you let the Big Brain know in return for its knowledge and perspective, and also to make evident the sense (or nonsense) that the Little Brain makes of your world before it communicates that to anyone else.

One illustration we made of this is the following sketch of a ‘Text Camera’:

Text Camera from BERG on Vimeo.

Text Camera is about turning the inputs and inferences the phone makes about its surroundings into a series of friendly questions that help make clear what it can sense and interpret.

It reports back on what it sees in text, rather than through video. Your smartphone camera runs a bunch of software to interpret the light it sees around you, in order to adjust the exposure automatically. So we look at that and, if it reports ‘tungsten light’ for instance, we can infer from it whether to ask the question “Am I indoors?”.

Through the dialog we feel the seams – the capabilities and affordances of the smartphone, and start to make a model of what it can do.
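
If you want to play with the idea yourself, the pattern is easy to approximate. Here’s a hypothetical sketch in Python with OpenCV – it can’t read the exposure metadata a phone camera has, so it estimates crude signals (brightness, colour cast, edge density) from a single webcam frame and turns them into Text Camera-style questions. The thresholds and the questions themselves are invented for illustration:

```python
import cv2
import numpy as np

def friendly_questions(frame_bgr):
    """Turn crude image statistics into Text Camera-style questions."""
    questions = []
    if frame_bgr.mean() < 60:                      # very dim frame overall
        questions.append("Is it dark in here, or am I in a bag?")
    b, g, r = frame_bgr.mean(axis=(0, 1))          # per-channel means (BGR)
    # A strong red/orange cast is roughly what auto white balance would
    # call 'tungsten light' - a hint that we're probably indoors.
    if r > b * 1.3:
        questions.append("The light looks warm - am I indoors?")
    elif b > r * 1.3:
        questions.append("That's a cool, blueish light - can I see the sky?")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if cv2.Canny(gray, 100, 200).mean() < 1.0:     # almost no edges found
        questions.append("Everything is smooth - am I pointed at a wall?")
    return questions

cap = cv2.VideoCapture(0)    # default webcam
ok, frame = cap.read()
cap.release()
if ok:
    for q in friendly_questions(frame):
        print(q)
```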

The Smart Light Companion in the Lamp could similarly create a dialog with its ‘owner’, so that the owner could start to build up a model of what its Little Brain could do, and where it had to refer to the Big Brain in the cloud to get the answers.

All of this serves, playfully and humanely, to build a literacy in how computer vision, context-sensing and machine learning interpret the world.

Rules for Smart Light

 


The team distilled all of the sketches, code experiments, workshop conversations and model-making into a few rules of thumb for designing with this new material – a platform for further experiments and invention we could use as we tried to imagine products and services that used Smart Light.

Reflecting our explorations, some of the rules-of-thumb are aesthetic, some are about context and behaviour, and some are about the detail of interaction.

24 Rules for smart light from BERG on Vimeo.

We wrote the ‘rules’ initially as a list of patterns that we saw as fruitful in the experiments. Our ambition was to evolve this in the form of a speculative “HIG” or Human Interface Guideline – for an imagined near-future where Smart Light is as ubiquitous as the touchscreen is now…


Smart Light HIG

  1. Projection is three-dimensional. We are used to projection turning a flat ‘screen’ into an image, but there is really a cone of light that intersects with the physical world all the way back to the projector lens. Projection is not the flatland display surface we have become used to through cinema, TV and computers.
  2. Projection is additive. Using a projector we can’t help but add light to the world. Projecting black means a physical surface is unaffected; projecting white means an object is fully illuminated, up to the brightness of the projector. (There’s a minimal sketch of this in code after the list.)
  3. Enchanted objects. Unless an object has been designed with blank spaces for projection, it should not have information projected onto it. Because augmenting objects with information is so problematic (clutter, space, text on text), objects can only be ‘spotlighted’, ‘highlighted’ or have their own image re-projected onto themselves.
  4. Light feels touchable (but it’s not). Through phones and pads we are conditioned into interacting with bright surfaces. It feels intuitive to want to touch, grab, slide and scroll projected things around. However, it is difficult to make projected light genuinely interactive.
  5. The new rules of depth. A lamp sees the world as a stream of images, but also as a three-dimensional space. There is no consistent interaction surface as in mouse- or touch-based systems: light hits any and all surfaces, and making them respond to ‘touch’ is hard. Finger-based interaction is very difficult to achieve with projection and computer vision – fingers are small, tracking them is technically demanding, and there is little or no existing skeletal-recognition software for hands.
  6. Smart light should be respectful. Projected light inhabits the world alongside us; it augments and affects the things we use every day. Unlike interfaces contained in screens, the boundaries of a lamp’s vision and projection are much more obscure. Lamps ‘look’ at the world through cameras, which means they must be trustworthy companions.
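
The additive rule is particularly easy to demonstrate (as promised in rule 2 above). A minimal numpy sketch of why a projector can only ever add light – projected black leaves the surface looking as it does under ambient light, while projected white saturates it towards the projector’s maximum:

```python
import numpy as np

def project(surface, projected, projector_gain=0.8):
    """Additive model of projected light on a real surface.

    surface:   how the surface looks under ambient light (0..1 floats)
    projected: the image the projector throws (0..1 floats)
    Light only ever adds - there is no way to darken the surface.
    """
    return np.clip(surface + projector_gain * projected, 0.0, 1.0)

ambient = np.full((4, 4), 0.3)     # a dim grey surface
black   = np.zeros((4, 4))         # projecting black...
white   = np.ones((4, 4))          # ...and projecting white

assert np.allclose(project(ambient, black), ambient)   # unaffected
print(project(ambient, white).max())                   # saturated: 1.0
```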

Next, we started to create some speculative products using these rules, particularly focussed around the idea of “Enchanted Objects”.

Smart Light, Dumb Products

 


These are a set of physical products based on digital media and services – YouTube viewing, Google Calendar, music streaming – that have no computation or electronics in them at all.

All of the interaction and media is served from a Smart Light Lamp that acts on the product surface to turn it from a block into an ‘enchanted object’.

Joe started with a further investigation of the aesthetic qualities of light on materials.

Projection materials from BERG on Vimeo.

This led to sketches exploring techniques of projection mapping at desktop scale. Projection mapping is something often seen at large scale, manipulating our perception of architectural facades with animated projected light, but we wanted to understand how it felt at the more intimate, human scale of projecting onto everyday objects.
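
The building block of projection mapping at any scale is a homography: a perspective warp that pre-distorts flat content so it lands squarely on a physical face of an object. A rough sketch with OpenCV follows – the corner coordinates are made-up stand-ins for points you would pick by hand, or find with computer vision:

```python
import cv2
import numpy as np

# Content we want to appear on one face of an object on the desk.
content = np.zeros((300, 400, 3), np.uint8)
cv2.putText(content, "HELLO", (40, 170),
            cv2.FONT_HERSHEY_SIMPLEX, 3, (255, 255, 255), 8)

# Where that face's corners appear in projector pixels - in practice
# you'd click these by eye or detect them with computer vision.
src = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])
dst = np.float32([[512, 210], [860, 260], [840, 560], [500, 500]])

# Homography that maps the flat content onto the (tilted) physical face.
H = cv2.getPerspectiveTransform(src, dst)
frame = cv2.warpPerspective(content, H, (1280, 800))  # projector-sized

cv2.imshow("projector", frame)   # fullscreen this window on the projector
cv2.waitKey(0)
```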

In the final film you might notice some of the lovely effects this can create to attract attention to the surface of the object – guiding perhaps to notifications from a service in the cloud, or alerts in a UI.

Then some sketching in code – using computer vision to create optical switches that make or break a recognisable optical marker depending on movement. In a final product these markers could be invisible to the human eye but observable by computer vision. Similarly, tracking markers could provide controls for video navigation, calendar alerts and so on. (A present-day sketch of the technique follows the video below.)

Fiducial switch from BERG on Vimeo.
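
Marker tracking of this kind is nearly off the shelf today. As a present-day approximation of the fiducial switch (not the code used in the film), here’s a sketch using OpenCV’s ArUco module, assuming opencv-contrib-python 4.7 or later – the ‘switch’ is simply whether a known marker is currently visible to the camera:

```python
import cv2

# ArUco marker detection as an optical switch: covering or revealing
# marker 0 toggles the 'switch' the lamp is watching.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)
switch_on = False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    visible = ids is not None and 0 in ids.flatten()
    if visible != switch_on:                 # state changed: fire the switch
        switch_on = visible
        print("switch", "ON" if switch_on else "OFF")
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("lamp's view", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```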

Joe worked with physical prototypes – first simple nets in card and then more finished models to uncover some of the challenges of form in relation to angles of projection and computer vision.

For instance, in the video object, a pulley system has to connect the dial the user operates to the marker the Lamp sees, so that the marker isn’t obscured from the computer vision software.

Here’s the final output from these experiments:

Dumb things, smart light from BERG on Vimeo.

This sub-project was a fascinating test of our Smart Light HIG – one that led to more questions and opportunities.

For instance, one might imagine that the physical product – as well as housing dedicated and useful controls for the service it is matched to – could act as a ‘key’ to be recognised by computer vision to allow access to the service.

What if subscriptions to digital services were sold as beautiful robot-readable objects, each carved at point-of-purchase with a wonderful individually-generated pattern to unlock access?

What happened next: Universe-B

 


From the distance of a year since we finished this work, it’s interesting to compare its outlook with that of the much more ambitious and fully realised Google Glass project that was unveiled this summer.

Google Glass inherits a vision of ubiquitous computing that has been striven after for decades.

As a technical challenge, it’s one that academics and engineers in industry have long failed to make compelling to the general populace. The Google team’s achievement in realising this vision is undoubtedly impressive. I can’t wait to try them! (hint, hint!)

It’s also a vision that is personal and, one might argue, introverted – where the Big Brain is looking at the same things as you and trying to understand them, but the results are personal, never shared with the people you are with. The result could be an incredibly powerful, but subjective overlay on the world.

In other words, the mirrorworld has a population of 1. You.

Lamps uses similar techniques of computer vision, context-sensing and machine learning but its display is in the world, the cloud is painted on the world. In the words of William Gibson, the mirrorworld is becoming part of our world – everted into the spaces we live in.

The mirrorworld is shared with you, and those you are with.

This brings with it advantages (collaboration, evidence) and disadvantages (privacy, physical constraints) – but perhaps consider it as a complementary alternative future… A Universe-B where Google broke out of the glass.


Postscript: the scenius of Lamps

 


No design happens in a vacuum, and culture has a way of bubbling up a lot of similar things all at the same time. This isn’t an exhaustive list, but we want to acknowledge that context! Some of these projects are precedents to our work, and some emerged in the nine months of the project or since.

Here is a selection of less-academic projects using projection and computer vision that Joe picked out from the last year or so:


Huge thanks to Tom Uglow, Sara Rowghani, Chris Lauritzen, Ben Malbon, Chris Wiggins, Robert Wong, Andy Berndt and all those we worked with at Google Creative Lab for their collaboration and support throughout the project.

Friday Links

We haven’t had Friday Links for a while, so I’ll keep this relatively compact and to the point. A few things that have been flying around the office mailing list in the last few weeks, in no particular order…

1. The Sinclair ZX failing to pass modern EM testing

2. 2030 Megatrends

3. Simulating the qualities of CRT displays in emulations of old software

4. Lytro cameras now allow you to change perspective as well as focus (without any new hardware, and with all existing Lytro images) – which is a bizarre but brilliant experience

5. Microsoft’s Smartglass working outside of gaming, allowing a second screen for media content:

Using your second screen device, The Dark Knight Rises Xbox SmartGlass experience offers:

  • Exclusive content
  • Storyboards
  • Trivia
  • Quotes from the film’s director and actors
  • A closer look at The Dark Knight Rises’ vehicles, gadgets and characters
  • Time-synced information about the epic conclusion to the Dark Knight legend
  • Behind the scenes look at Bane’s in-flight hijacking, the stadium disaster and more

6. Wireless, e-ink supermarket shelf labels

7. ATOMS

ATOMS give kids of any age the ability to make their toys DO things. And not just new toys – ATOMS were built to work with the stuff kids already have, like LEGOs, costumes, stuffed animals, Barbies and action figures. ATOMS don’t require any electronics skills or programming experience – or supervision from a parent with an engineering degree. In fact, because of the tiny electronics built into each one, kids can make all sorts of cool stuff within 5 minutes of taking ATOMS out of the box.

8. Rethinking publishing

9. http://www.gifart.org/ (that’s GIF art) / + bonus Christmas GIFs

10. And finally – James Darling’s last ever link to the list, Bad Kids’ Jokes.

 

Have a good weekend.

Week 390 and 391

Ten things that happened in the last ten days at BERG:

1) After an intensely busy time for the whole Little Printer team, and in particular Nick, Alice, Fraser, Helen and Alex, we finally began shipping. We celebrated with bunting! Today we started seeing the first pictures of Little Printers in the wild. Massively, massively rewarding. And, whilst we monitor our servers and publications and customer service channels, obviously, we’re also planning our next bits of functionality and shipments. More of that over on the BERG Cloud blog in due course!

2) Joe, Matt Webb and Matt Ward went to São Paulo for a workshop. It is currently 23 degrees warmer there than in London.

3) We said goodbye to James Darling and Pawel Pietryka. They are off to new exciting things, via Jägermeister and beer. You should hire them both, if not for their amazing respective technical and design skills, then for their killer fashion instincts. They brought the Hip to BERG, in that they look like they’d be at home in an episode of Blossom from 1991. I miss them greatly.

4) Jack moved house. He must have lost his razor in the process; he is now the hairiest he has ever been. It’s quite something. In contrast, Matt Webb was on CNN last week talking about Little Printer looking very well coiffed indeed.

5) Andy returned from his most recent bout of The Babies (four people have caught The Babies this year at BERG, and Matt Jones is hoping to return next week after his case) and then lost half a tooth.

6) Eddie, Matt and Ollie continued to bring Lamotte a stage closer to real life with some focussed client workshopping (which involved a fair bit of giggling).

7) Saguaro was passed round for us all to play with and test, and it’s a beautiful thing. As that project nears its finish, we must say ‘see you later’ to Matt Walton and Phil Gyford this week, too. Hopefully, on the upside, that also means we can have some more cake.

8) The mighty Neil and Greg Borenstein wrapped up project Auryn last week – a challenging project, but one we learned a lot from.

9) Neil, Luke, Denise and Jack are all hard at work finishing up Paiute, for which Alice’s internet-famous nails are going to make another special appearance tomorrow.

10) Kari organised our Christmas dinner and also got us a brand new recycling bin!

 

Friday links

It’s Friday. I wasn’t expecting that… It’s been a busy week. We’ve got demos due any minute, so here are some links to give you a five-minute break from your work too.

This week, Eddie shared a link to a game created for the Norwegian milk company Litago, which uses players’ photographs to create playable platforming levels.

Andy shared a link ‘in equal parts OMG and WTF’: the papers selection for this year’s SIGGRAPH.

James found this Warner Brothers site hiding at the back of the internet attic.

Phil G shared Soundslice, made by Adrian Holovaty, who co-created Django, ChicagoCrime.org and EveryBlock. It has a smooth interface that lets you annotate guitar videos to help you learn new songs.

And Saar shared this a little while ago, but I’ve only just had time to watch it: Dumb Ways to Die.

I should go. My head’s on fire. Have a great weekend.

Week 389

It’s the three hundred and eighty-ninth week since BERG began. Simon Pearson was down to do weeknotes, but he is away with Matt Webb at Future London demoing Little Printer, so I have swapped with him. Matt Jones is still away on paternity leave, looking after two tiny humans. Andy is back after his stint of paternity leave, and the studio also welcomes Kari, who is back after what seems like a very long time away following her case of The Babies.

By project, this week is as follows:

Paiute work continues. Pawel has done some very slick-looking graphics for it; Denise even did a weird squeal when she saw them (a pretty good sign). Jack continues to shepherd this project to its conclusion, which I hope won’t be too soon, because that possibly means Pav will have to go on to another contract in another place. And if Pav goes, then who is going to get the pardy started?

BERG Cloud and Little Printer are very close to shipping indeed. I am putting the final flourishes to the front-end, James is getting his head into logging for the many bridges that will soon be connecting, Simon is coordinating factories and shipping containers, Helen and Kari are preparing themselves to give the best damned customer service this side of Newspaper Club, Alex is sending me the kind of ‘could this be two pixels lower, please, sorry, thanks’ emails that I know mean our remote site is going to look really good.
Nick is lining up interviewees for the position of The Next James Darling. If you think you fill this spot, head over here for more information.

Neil (now ships with custom BERG shop coat) is working on Auryn with studio friend Greg Borenstein in New York. This involves a lot of waving things at screens, much to my entertainment.

Director of Consulting, Mark Cridge, is taking care of business. He continues to show the office how the big boys do it in an array of shirt jumper combos and the occasional blazer. I keep trying to call his bluff and see if he is wearing a shumper, but as yet his shirts remain gloriously unstitched to his jumpers. What a pro.

Saguaro is beginning to look like a real thing which is lovely to see. Phil is implementing Alex’s design, the writing continues apace, all presided over by Matt Walton.

Sinawava is in the final push. Joe is doing a lot of gleeful dancing which I keep trying to capture a GIF of, but he’s too damn fast. Leo and Nick are finishing off the software, and Paul South continues to astonish with the things he pulls out of the bag.

Matt Thomas, Ollie Wright and Eddie Shannon are still grafting on Lamotte. There is a lot of very charming character design happening at that end of the room, which I enjoy watching evolve.

As always then, a busy studio.

Friday Links

Welcome to Friday Links. Today I am going to provide the links along with wild assertions about which bigger categories of BERG interest they fit into.

Internet of Things Meta Watch

As the concept of Internet of Things continues to seep into more people’s brains (A waiting list of 57 for the next IoTLDN, with Matt Webb speaking!), naturally, the common patterns get pulled out and satirised. isittheinternetofthings.com does this well.

Music Hacking Watch

It’s been great to see Music Hack Days move from purely data-driven (e.g. listening data) hacks to more artistic ones. My favourite meeting of the two so far has to be The Infinite Jukebox.

Influencing Culture Watch

“Design is about cultural invention” is, to me, one of Jack’s founding principles for BERG. So you can imagine how excited we were to see that the new Avengers comic has Tony Stark owning a device that sounds awfully like a Little Printer.

Browser War Watch

As someone who wasn’t ‘in the business’ during the first browser wars, I’m more used to years of IE6 apathy, so watching the new browser war unfold, with adverts everywhere, is very odd. But accompanying it is the commissioning of some great HTML experiments, like Chrome Experiments’ ‘Stars’, this week’s example.

Infrastructure Hacking Watch

Any time I see a bit of software toying with massive infrastructure, I get tingles. This week’s tingle supplied by Randomized Consumerism.

Atom Watch

Shipping atoms is hard. Sitting in my tower of software, watching the hardware being developed for Little Printer was a very eye-opening experience; I hold a new reverence for atoms. Read about Jawbone’s process of developing Up to get a little bit of this.

3D Printing Watch

3D Printing. Proper good 3D printing.

Software Defined Radio Watch

Software Defined Radio seems to be moving rapidly from hobbyists to something that could be quite game-changing for many products. See its first strides here.

‘Plussing’ Watch

We’re a big fan of Walt Disney’s ‘plussing’. So we admire this.

Toys with AI Watch

Speaking of Walt Disney… loosely… there’s a new company making some promising looking AI toys.

Have a great weekend.

Week 387

There are four major projects in the studio at the moment.
There are twenty three people working hard to create beautiful and inventive products.
There has been a studio reshuffle so everyone has new neighbours.
We’ve seen exciting drawing sessions, electronic prototyping, cardboard models, coding, meetings, group lunches, woolly hats, graphics, new shoes, 3D prints, business strategies, pints, project planning, engineering, Frazzles crisps, CAD models, Lemsip, PCBs, presentations, GUIs, video games, character illustration, information design and more.
All to a milling-machine soundtrack.
Oh, and we’ve just found out that we’ve been drinking double-strength tea.
Probably a major reason the studio is such a busy and exciting place to be.

Friday Links

This is my first Friday Links. First up is attention to detail, which is something we all take very seriously.

Joe found this brief DVD extra from the excellent (and very Scottish) Brave, in which Pixar went to extraordinary levels of detail, right down to the modelling of individual threads.

A slightly longer example of this level of detail was picked out by James from back in the day on Wall-E.

It’s always nice to see something that we’ve put out in the world get picked up and adapted somewhere else. Timo found this library of objects inspired by our own iPad light painting.

This example even has an audio guide track to help with the painting.

Running Man from Hugo Baptista on Vimeo.

I’m writing this whilst BERG drive time plays in the background, so it is fitting that two musical links tickled our fancy this week. The first, found by Fraser, claims that:

“Now, for the first time in history, this compilation uses innovative digital techniques to convert historic “pictures of sound” dating back as far as the Middle Ages directly into meaningful audio.”

Not sure what that actually means, but it’s worth a look at dust-digital.com/feaster.

At the other end of the spectrum, the good folks at Google Creative Lab have put together another Chrome experiment so you can ‘jam’ in real time at jamwithchrome.com. I have it on good authority that if you hold down the keys A, C, I and D at the same time you get a 303 drum machine as an easter egg (but that may not be true).

All around us we see the cost of new technology dropping precipitously, not least with the Txtr Beagle e-reader, which costs a mere £8 and which Jack found recently reviewed in the Guardian.

We were really impressed, until James noted that:

“The txtr beagle can be offered at such a low price because its cost will be subsidised by mobile carriers. The beagle itself won’t be sold individually; you’ll only be able to get one by purchasing it when you sign up for a mobile phone contract on specific carriers.”

So how much does it cost really?

Finally, I’m very relieved to say that even your jeans can now get involved in social media.

Denise found this over at PSFK, and I’m sure you will be glad to hear you can share your ‘happiness level’, with eight different modes to choose from – how did we ever cope until now?

Join our Little Printer and BERG Cloud team

We’re looking for someone to join our engineering team to work on Little Printer and BERG Cloud!

This is a full-time role in our London studio, and is primarily focused on the backend systems that power Little Printer and all other BERG Cloud products. You’ll need a strong core knowledge of Ruby on Rails, and a few other skills we’ve listed in our Stack Overflow Careers job listing.

http://careers.stackoverflow.com/jobs/26841/backend-ruby-engineer-web-connected-physical-berg

Little Printer is the first of many ideas we want to bring to life. If you’re the kind of person who’s fascinated by the prospect of writing code not only for the Web but for actual physical objects, then we’d love to hear from you.

Email us at jobs@berglondon.com to get in touch.

Friday Links

This week we’ve clicked on:

The ‘two-screen’ video game Forza Horizon and its partner ‘SmartGlass’ app.

http://www.joystiq.com/2012/10/30/forza-horizon-smartglass-experience-app-now-available-on-windo/

App-controlled lighting from Philips.

http://www.pocket-lint.com/news/48209/philips-hue-led-bulb-customises-your-homes-lighting

BUY DREAM NOW! Suidobashi Heavy Industry is offering the chance to buy a functional mecha suit.

http://suidobashijuko.jp

If this dream is yours, it’s likely to have been shaped by the long lineage of ‘Mobile Suits’ in manga and anime such as Gundam or Evangelion.

Cousin of The Caterpillar P-5000 Powered Work Loader in Aliens.

Which is still inspiring Halloween costumes twenty-six years later.

http://i.imgur.com/0Mh6G.jpg

We read how a Videogame God Inspired a Twitter Doppelgänger — and Resurrected His Career

http://www.wired.com/gamelife/2012/10/ff-peter-molyneux/

Also, http://www.webe.at posted a dramatic launch video for their customisable calendars.

http://vimeo.com/51447275

Which rivals Kanye West’s ‘All of the Lights’ for the title of most intense psychedelic typographic bombardment.

http://www.youtube.com/watch?v=HAfFfqiYLp0

A striking reference to the title sequence of Enter The Void.

http://www.youtube.com/watch?v=dL0lNGXoP8E

And we’ll leave it at that.
