
Blog posts tagged as 'machinelearning'

B.A.S.A.A.P.

Design principle #1

The above is a post-it note which, as I recall, comes from a workshop at IDEO Palo Alto that I attended while I was at Nokia.

And, as I recall, it was probably either Charlie Schick or Charles Warren who scribbled this down and stuck it on the wall as I was talking about what was a recurring theme for me back then.

Recently I’ve been thinking about it again.

B.A.S.A.A.P. is short for Be As Smart As A Puppy, which is my short-hand for a bunch of things I’ve been thinking about… Ooh… Since 2002 or so I think, and a conversation in a California car park with Matt Webb.

It was my term for a bunch of things that encompass some third-rail issues for UI designers, like proactive personalisation and interaction – examined in the work of Reeves and Nass, and exemplified by (and forever-after vilified as) Microsoft’s Bob and Clippy (RIP). A bunch of things about bots and daemons, and conversational interfaces.

And lately, a bunch of things about machine learning – and for want of a better term, consumer-grade artificial intelligence.

BASAAP is my way of thinking about avoiding the ‘uncanny valley’ in such things.

Making smart things that don’t try to be too smart and fail, and indeed, by design, make endearing failures in their attempts to learn and improve. Like puppies.

Cut forward a few years.

At Dopplr, Tom Insam and Matt B. used to astonish me with links and chat about where the leading-edge of hackable, commonly-employable machine learning was heading.

Startups like Songkick and Last.fm, amongst others, were full of smart cookies making use of machine learning, data-mining and a bunch of other techniques I’m not smart enough to remember (let alone reference) to create reactive, anticipatory systems from large amounts of data in a certain domain.

Now, machine-learning is superhot.

The web has become a web-of-data, data-mining technology is becoming a common component of services, and processing power on tap in the cloud means that experimentation is cheap. The amount of data available makes things possible that were impossible a few years ago.

I was chatting with Matt B. again this weekend about writing this post, and he told me that the algorithms involved are old. It’s just that the data and the processing power are there now to actually get to results. Google’s Peter Norvig has been quoted as saying “All models are wrong, and increasingly you can succeed without them.”

Things like Hunch are making an impression in the mainstream. Google’s Priority Inbox, launched recently, makes the utility of such approaches clear.

BASAAP services are here.

BASAAP things are on the horizon.

As Mike Kuniavsky has pointed out – we are past the point of “Peak MHz”:

…driving ubiquitous computing, as their chips become more efficient, smaller and cheaper, thus making them increasingly easier to include into everyday objects.

This is ApriPoco by Toshiba. It’s a household robot.

It works by picking up signals from standard remote controls and asking you what you are doing, to which you are supposed to reply in a clear voice. Eventually it will know how to turn on your television, switch to a specific channel, or play a DVD simply by being told. This sidesteps the problem that conventional speech recognition technology has with some accents or words, since the system is trained by each individual user. It can send signals from IR transmitters in its arms, and has cameras in its head with which it can identify specific users.
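
Very roughly, the loop seems to be: spot an unfamiliar IR code, ask the human what just happened, remember the pairing, and replay it later on request. Here’s a minimal sketch of that loop in Python – every name in it (the IR and speech objects especially) is a hypothetical stand-in, not Toshiba’s actual interface:

    class RemoteLearner:
        """Sketch of an ApriPoco-style learn-by-watching loop."""

        def __init__(self, ir, speech):
            self.ir = ir          # hypothetical IR receive/transmit hardware
            self.speech = speech  # hypothetical microphone + voice wrapper
            self.commands = {}    # spoken phrase -> raw IR code

        def observe(self):
            # A remote was used nearby: capture the code, ask for a label.
            code = self.ir.receive()
            self.speech.say("What did you just do?")
            phrase = self.speech.listen()      # e.g. "turned on the telly"
            self.commands[phrase] = code       # trained by this user alone

        def obey(self, phrase):
            # Replay a learned code on request; admit ignorance otherwise.
            code = self.commands.get(phrase)
            if code is None:
                self.speech.say("I don't know how to do that yet.")
            else:
                self.ir.transmit(code)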

Perhaps not the most pressing need in your house, but interesting nonetheless.

Imagine this not as a device, but as an actor in your home.

The face-recognition is particularly interesting.

My £100 camera has a ‘smile-detection’ mode, which is becoming common. It can also recognise more faces than a six-month-old human child can. Imagine this, then, mixed with ApriPoco, registering and remembering smiles and laughter.
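
For a flavour of how commodity smile-detection works, here’s a rough sketch using the stock Haar cascades that ship with OpenCV’s opencv-python package – a hobbyist-grade equivalent, certainly not the camera’s actual firmware:

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    smile_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_smile.xml')

    def smiling_faces(frame):
        """Return bounding boxes of faces that appear to be smiling."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hits = []
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face = gray[y:y + h, x:x + w]
            # The smile cascade is noisy; a high minNeighbors value keeps
            # it from firing on every mouth-shaped shadow.
            if len(smile_cascade.detectMultiScale(face, 1.7, 22)) > 0:
                hits.append((x, y, w, h))
        return hits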

Go further, plug it into the internet. Into big data.

As Tom suggested on our studio mailing list: recognising background chatter of people not paying attention. Plugged into something like Shownar, constantly updating the data of what people are paying attention to, and feeding back suggestions of surprising and interesting things to watch.

Imagine a household of hunchbots.

Each of them working across a little domain within your home. Each building up tiny caches of emotional intelligence about you, cross-referencing them with machine learning across big data from the internet. They would make small choices autonomously around you, for you, with you – and do it well. Surprisingly well. Endearingly well.

They would be as smart as puppies.

Hunch-Puppies…?

Ahem.

Of course, there’s the other side of domesticated intelligences.

Matt W. has been tracking the bleed of AI into the Argos catalogue, particularly the toy pages, for some time.

The photo above, of toys from Argos, was given the title: “They do their little swarming thing and have these incredibly obscure interactions”.

That might be part of the near-future: being surrounded by things that are helping us, whose workings we struggle to build a mental model of – behaviour we can’t directly map onto our own. A demon-haunted world. This is not so far from most people’s experience of computers (and we’re back to Reeves and Nass), but we’re talking about things that change their behaviour based on their environment and their interactions with us, and that have a certain mobility and agency in our world.

I’m reminded of the work of Rodney Brooks and the BEAM approach to robotics, although hopefully more AIBO than Runaway.

Again, staying on the puppy side of the uncanny valley is a design strategy here – as is the guidance within Adam Greenfield’s “Everyware”: how to think of design for ubiquitous systems that behave as sensing, learning actors in contexts beyond the screen.

Adam’s book is written as a series of theses (to be nailed to the door of a corporation or two?), and thinking of his “Thesis #37” in connection with BASAAP intelligences in the home of the near-future amuses me in this context:

“Everyday life presents designers of everyware with a particularly difficult case because so very much about it is tacit, unspoken, or defined with insufficient precision.”

This cuts both ways in a near-future world of domesticated intelligences, and that might be no bad thing. Think of the intuitions and patterns – the state machine – your pets build up of you, and vice-versa. You don’t understand pets as tools, even if they perform ‘job-like’ roles. They don’t really know what we are.

We’ll never really understand what we look like from the other side of the Uncanny Valley.

Mechanical Dog Four-Leg Walking Type

What is this going to feel like?

Non-human actors in our home, that we’ve selected personally and culturally. Designed and constructed but not finished. Learning and bonding. That intelligence can look as alien as staring into the eye of a bird (ever done that? Brrr.) or as warm as looking into the face of a puppy. New nature.

What is that going to feel like?

We’ll know very soon.

Links for a Monday Morning

  • How come I’d never heard of WikiReader before? It’s a $99 device for reading Wikipedia from a memory card. There’s no network connection of any form, just a micro-SD slot, meaning it’ll last for a year on three AA batteries. You can update it at any point by downloading a new dump; if that’s not possible, you can get a new dump sent to you on a memory card for a small fee. It’s got a touch-screen, so the UI doesn’t have to be localised – foreign keyboards can be implemented in software. And, best of all, it has a hardware button marked “Random” – capturing one of the hidden joys of Wikipedia.

    It feels like a nice companion to an e-reader: book in one hand, universal lookup device in the other, and not a network connection in sight. The chunky form-factor also makes it really robust and immediate; something I’d consider slinging in a bag, especially for trips abroad. It’s designed by Openmoko, and available now.

  • Another open-source product making its debut in hardware is the OpenOffice.org Mouse. Eighteen buttons, an analogue joystick… I admit to sucking my teeth in disbelief when I first saw it; the comparisons that have been made to the Homer seem justified.

    But take a step back, and consider it more slowly, and perhaps it’s not the car-crash it seems; instead, its problems are more subtle. Chris Messina has a sharp take on this:

    What I worry about, however, is that pockets of the open source community continue to largely be defined and driven by complexity, exclusivity, technocracy, and machismo… so far I’ve seen little indication that open source developers take seriously the need for simpler, easier, and more intuitive future-forward interfaces. Perhaps I’m wrong or just uninformed, but so long as products like the OpenOfficeMouse continue to characterize the norm in open source design, I’m not likely going to be able to soon recommend open source solutions to anyone but the most advanced and privileged users.

    Friend of BERG Phil Gyford picks up a similar point:

    The problem isn’t that it has appalling design — it’s poor and uninspired, but it’s not the worst thing ever.

    The fundamental problem is that the product is aiming for two very specific, probably unreconcilable, niche audiences (hard-core gamers and hard-core office workers) while associating itself with a brand (OpenOffice) that wants to be completely mainstream.

    I think Phil’s right. If this was pitched as, say, an EVE Online mouse, I’d probably go “oh, that makes sense for a game and UI that complex”. But for a brand trying to be taken seriously as a mainstream alternative to expensive office suites, this seems misguided, and only perpetuates preconceived notions of Open Source’s attitude towards design.

  • Schulze has bought a new car, and trust him to buy the only car I’ve seen with its own font. That is: not a font designed for the car, but a font made by the car.

    Toyota motion-captured an iQ from overhead using software written in openFrameworks, and used it to generate a handwriting font built out of careful cornering and handbrake turns. It feels like the opposite of DHL’s fake GPS art: Toyota are keen to show the software and prove it actually works. Best of all, they’ll even let you download – and use – the font itself.

  • I couldn’t let a round-up of links go without a mention of James Bridle’s recreation of MENACE, Donald Michie’s learning machine that plays noughts-and-crosses using only matchboxes and beads, which he first demonstrated at Playful two weeks ago.

    James was kind enough to bring his MENACE to a recent BERG drinks evening, and it drew the gasps it thoroughly deserves; 301 matchboxes is an imposing piece of computing. (There’s a code sketch of MENACE’s bead-learning scheme just after this list.)

  • And, finally, a nice little piece of what you might call design research: Giles Turnbull investigates nomenclatures for Lego bricks, surveying a selection of children he knows:

    This language of Lego isn’t just something our family has invented; every Lego-building family must have its own vocabulary. And the words they use (mostly invented by the children, not the adults) are likely to be different every time. But how different? And what sort of words?

    Hence, a survey. I asked fellow parents to donate their children for a few minutes, and name a selection of Lego pieces culled from the Lego parts store.

    Lovely. (Personally, I called a Brick 1×1 a “one-bobble” and a plate 1×1 a “flat one-bobble”).
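
And, as promised, here’s roughly what MENACE’s bead-learning scheme looks like as code – a bare Python sketch, assuming the caller supplies a board representation, and with the symmetry-folding bookkeeping of the physical matchboxes left out:

    import random
    from collections import defaultdict

    boxes = defaultdict(dict)   # board state -> {move: bead count}
    history = []                # (state, move) beads drawn this game

    def choose_move(state, legal_moves, initial_beads=4):
        box = boxes[state]
        for m in legal_moves:               # stock a new box with beads
            box.setdefault(m, initial_beads)
        beads = [m for m, n in box.items() for _ in range(n)]
        # (An empty box means the real MENACE resigns – not handled here.)
        move = random.choice(beads)         # draw a bead at random
        history.append((state, move))
        return move

    def learn(result):
        # Reinforce after the game: more beads of the drawn colour for a
        # win, one extra for a draw, confiscate the bead for a loss.
        delta = {'win': 3, 'draw': 1, 'loss': -1}[result]
        for state, move in history:
            boxes[state][move] = max(0, boxes[state][move] + delta)
        history.clear()                     # ready for the next game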
