Public Touchscreens and Logical Gestures

Let's gesture on it

After reading an interesting article by Neil Clavin (@neilclavin) here: http://bit.ly/GCtOGO via an urbanscale tweet (@urbanscale), I was prompted to write a little retort on the importance of understanding gestures when understanding and designing interactions. And when I say gestures I'm not talking post-Apple pinches and swipes, I'm talking physical behaviours that emphasise the action being undertaken. Old school gestures.

Read the original article and then my response:

Whilst I agree that public touch screens are actually a very unappealing concept when you begin to look at their day-to-day use, making the leap to “preferably touchless interactions” seems to ignore the tactile nature of being human.

I wonder if a better direction is to seek out interactions that are appropriate to the intended outcome. I'd say the reality of most touchless interfaces (such as the Oyster card) still involves actual contact. In fact I saw an older woman vigorously banging her Oyster card on the sensor the other day, presumably in the hope that a bit more physicality would improve the functionality.

Above all, successful contactless systems are about a logical gesture. This is why QR is such a 'WTF' experience: it doesn't have a pre-existing behaviour attached to it (and of course the technology is clunky as hell, and the content it reveals is usually crap).

So touchless = good. Touchless everything = no thanks.

From Adam Clavin's Walkshop: Analysing QR codes

I could also have gone into the potential difficulties that users with poor eyesight might have with a touchless interface – but this is a moot point, not least because I've actually got no specific evidence that it's better or worse. My instinct is that it's probably worse, but who knows.

What this really got me thinking about is this notion of carrying over instincts and behaviours from the old way of doing things. I'm forever debating this issue as a graphic designer: using the vernacular of the past to describe the present. Or using the vernacular of the present to introduce the future.

It depends on your perspective.

So using the funny visual cues of today can help people into the future, but how do they limit people? The UK road sign system's icon for 'speed camera' is perhaps the oddest representation of a camera possible. It's almost surreal.

And this is just visual stuff. What about gestures and behaviours? What learned behaviour do we carry forward to the post digital age? Do we invent new things? Or approximate the old? And what is the value in either?

I don’t have any suggestions yet, maybe other people do. Gesture me an answer if you know something that I don’t. Just make sure it’s a gesture I understand.

Machine Etiquette

Recently, while working at IDEO, I've been involved in a project that began to explore the potential impact of smart devices. With ever-cheapening hardware and the widespread adoption of smartphones, we are approaching a time when all kinds of devices will find a voice. Not only will they talk to us but they'll talk to each other, all the time. So what does it mean?

Send in the robots

Most people I've spoken to tend to jump to a worst-case scenario ("why would I want to talk to my dishwasher?"), imagining a world where we have to deal not only with the complexity of relationships with people but also with devices. Of course this is not such a stretch of the imagination, and now that we have social networks that simultaneously support humans and robots (and every permutation of connection between the two), we have a strangely level playing field upon which these games can take place.

Ultimately this becomes a question of new rules of etiquette; after all, the need to maintain an active relationship with a device hasn't really existed before now. So who will define these new rules?

A home full of devices talking to each other

I'm a graphic/interaction designer, and while this emerging need for machine etiquette is very clear to me, it's not clear whose remit it falls under. Is it the product designer? The UX specialist? The interface designer? Or perhaps there's a new field needed here: the Device Behaviourist (or something along those lines, Behaviour Designer?). Someone is needed to give these dumb devices smart voices, and a sense of appropriateness. It certainly becomes an expansive challenge when you consider the range of situations and locations a device may find itself in and then need to communicate. The nuance of social interaction takes a lifetime to perfect, so how will my toaster fare?

Good news

On the plus side there are some distinct advantages that come with smarter appliances, when you consider that they can also communicate with other smart devices via the internet. Suddenly the isolated coffee machine can track stock, record energy use, download new firmware, store favourites, suggest tips, diagnose its own faults and handle who knows how many other menial tasks that you'd probably not spend your days doing.

Making use of the data collected

It's easy to imagine any number of scenarios like this in the short term, but more interestingly, in the longer term the machine can also collect data over an extended period and build up enough history to make some pretty smart suggestions.

And who's better placed to monitor how much energy you waste each time you boil the kettle than the kettle itself?

So I'm generally open to the idea of new smart devices, but only if they've learnt some manners first.

Remembering faces and names

It's a fairly common story: you're introduced to someone and almost immediately forget their name. But you don't forget the face. In fact you're not actually forgetting the name, you're just failing to make the connection between the signifier (the name) and the signified (the face). The areas for faces and names aren't even in the same half of the brain, so we really don't stand a chance.

The reason for this is an evolutionary quirk: the part of the brain that remembers faces is not the same as the part that remembers names. So while it feels like you're making some deep-rooted error, it is in fact just the way your brain is put together.

We tend to remember faces; after all, our visual centres developed much earlier than our language centres, and recognising friend from foe meant survival. Labels (descriptors), however, form a much smaller part of our later-developed language capability, and we normally use vocabulary within communicative structures and patterns such as speech. In short, it's easy to forget a name in isolation.

So next time you meet someone, make sure you remember the shortcomings of your brain's wiring.