From Wayfinding to Interaction Design

Originally published on Medium – https://medium.com/design-ux/3e7fef5a6512

Before joining IDEO as an Interaction Designer, I worked for one of the more influential wayfinding design companies — Applied Information Group (now called Applied). Wayfinding is the process of planning and making journeys through spaces; wayfinding design companies develop systems to help make this planning and journey-making easier. These systems come in all shapes and sizes, and can cover area naming, signage design, cartography, defining route networks and installing new landmarks to give an area more character. At Applied Information Group we worked on everything from simple internal building systems for hospitals to complex city-wide, multi-modal schemes that encompassed every mode of transport that the city offered.

The underpinning principles of the systems we designed were always the same, and we could draw on a great depth of research and academic study to inform our design; Kevin Lynch’s work on the mental models that people form when navigating the city goes back to the early 1960s. Of course the act of wayfinding is as old as the hills, so solutions were usually found in supporting innate behaviours rather than inventing new ones.

While I won’t go through all of the principles in this article, there are a few I’ve found to be useful in my move from wayfinding design to interaction design. I realise that it may not seem that a train station and a website have much in common, and so before I share some knowledge from the world of wayfinding, here are some of the similarities between the two:

1. Helping people get around complex spaces

If you break down a website, a city, a hospital, or an app, they can all be thought of as complex spaces that people travel through — spaces that are generally so complex that without some help users wouldn’t be able to navigate them confidently. A book, on the other hand, is not very complex — which is why I’m not comparing wayfinding to graphic design.

Interaction design and wayfinding design both seek ways to make it easier for people to understand these physical and virtual spaces.

2. Supporting journeys

When moving through complex spaces we make journeys: a sequence of decision points, going from point A to B to C. Both interaction designers and wayfinding designers need to think in terms of journeys rather than isolated points of interaction. Journeys are complex and sequential: the decision I make at point A will affect the rest of my journey (it might become impossible to get to point C, for example). A journey could be the series of screens I encounter when viewing and retweeting a message on Twitter, or it could be a walk to the train station.

3. Creating solutions for a wide range of people

When you’re designing information for a transit authority, your potential audience is everyone. When designing Vancouver’s public transit system, we didn’t focus on demographics or market segments because everyone rides the bus, or at least anyone can. People with physical disabilities, young people, tourists, daily commuters, elderly people — the list is endless. For interaction designers — and the digital world we design for — the audience can be just as demanding. Though most projects and clients have a particular audience segment in mind, viewing the full spectrum of users and balancing their (often) contradictory needs is a daily challenge.

4. Prototyping and piloting

Related to all of the points above, prototyping and piloting are crucial to the design process. (I’ve also written about the importance of prototyping here.) When I talk about prototyping I’m talking about a range of different levels of detail, from the rough-and-ready cardboard mock-up to detailed near-final working versions. Problems and solutions are often so complex that you’ve got to make it real for people to be able to join the design process.

A page from the Legible London ‘Yellow Book’

What Can Interaction Designers Learn from Wayfinding?

Hopefully you’re with me about the similarities. So what lessons can interaction designers learn from them? Below are some of the more interesting principles and methods that I’ve found applicable to both fields. For the designers out there, hopefully these will trigger some thoughts about how to improve your designs; for non-designers, they should give a glimpse into the difference between good and bad design.

1. Progressive Disclosure

If you give me every detail of a journey at the beginning, the chances are I won’t be able to process, store, and retrieve it. For example, when you ask for directions, after the fourth or fifth instruction you start to glaze over and struggle to remember what the first instruction was (Do I take the third left or the second left?).

Progressively disclosing information helps the end user by reducing the amount of information they have to deal with. The flip side is that we need to do a lot more work as designers to make sure everything fits together.

2. Consistency vs. Monotony

One of the best weapons for tackling complexity is consistency. As soon as I spot a familiar pattern in an environment, I can spend less time analysing and more time navigating. This is quite intuitive to most designers: uniform design means less “visual noise.” This becomes really important when you think about the baseline noise in a city. One of the key principles of the Legible London scheme (on which we collaborated with Lacock Gullam for TfL) was to remove signs that were made obsolete by the new system – rather than adding more and more.

But. There is a flip side: monotony. It’s a delicate balance to strike, but it’s good to remember that people may be using the service or tools daily or even hourly. So if there’s room for a little variation or even humour, then don’t be afraid of it.

3. Glanceable vs. Queriable

When I’m confirming I’m going the right way I only need to glance at information, but when I’m lost I need to query on a deeper level.

If your users will be shifting between these modes, then your interface should support them. This is the antithesis of “one size fits all.” Generally, serving up the most pertinent information in a digestible format requires a lot of analysis of what the user might actually need.

Wayfinding and Interaction Design

I expect that these points will already resonate with interaction designers, and that’s because much of the thinking involved in wayfinding is instinctive, based on the way our brains are wired up. Hopefully you’ll find something useful in the observations above, and at the very least they should give you some useful analogies when trying to explain your interaction design job to your friends.

As Twitter’s tendrils are gradually retracted

A sad email arrived this morning from API rerouting service If This Then That (IFTTT):

“In recent weeks, Twitter announced policy changes that will affect how applications and users like yourself can interact with Twitter’s data. As a result of these changes, on September 27th we will be removing all Twitter Triggers”

To those of us who use IFTTT this is a bit of a blow. Currently I have a routine saved on IFTTT (also known as a recipe) which grabs any of my starred tweets and sends them to my Instapaper account. It’s a simple process: when I tap the little star on a tweet with a link to an article, Instapaper sees the tweet, downloads a text-only version of the linked article and saves it for offline viewing. I usually read my recent articles when travelling to work on the train, where the internet connection isn’t so good. I go from starred tweets to interesting mini-magazine on my iPhone in seconds.
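For the curious, the plumbing behind that recipe is simple enough to sketch. Here’s a rough Python approximation of what IFTTT was doing on my behalf, assuming Twitter’s v1.1 favourites endpoint and Instapaper’s Simple API; the credentials are placeholders, and this is an illustration rather than a drop-in replacement:

```python
import requests
from requests_oauthlib import OAuth1

TWITTER_FAVES = "https://api.twitter.com/1.1/favorites/list.json"
INSTAPAPER_ADD = "https://www.instapaper.com/api/add"  # Instapaper's 'Simple API'

# OAuth credentials for the Twitter API (placeholders).
auth = OAuth1("consumer_key", "consumer_secret",
              "access_token", "access_token_secret")

# Fetch recent favourites, asking for entities so shortened links come expanded.
faves = requests.get(TWITTER_FAVES, auth=auth,
                     params={"count": 20, "include_entities": "true"}).json()

for tweet in faves:
    for link in tweet.get("entities", {}).get("urls", []):
        # Instapaper fetches the page itself and stores a text-only copy
        # for offline reading.
        requests.post(INSTAPAPER_ADD, data={
            "username": "me@example.com",
            "password": "my-instapaper-password",
            "url": link["expanded_url"],
        })
```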

For me to set up this kind of cross-posting system myself would have been at least a few hours’ work. On IFTTT it took all of two minutes. But now, as Twitter cuts off these third-party apps, my recipe is gone forever. Worse still, the web gets a little less useful. In fact Twitter gets a little less useful; in return, it hopes to monetise more of its traffic by having all the data run through its own site (and not through these third-party companies).

“The result of people using a system and shaping it to suit them”

But isn’t this actually a bit damaging? Maybe not immediately, but at some point this might make people think twice about link sharing. I’m probably being a bit crotchety here, but the bigger point holds true: the way Twitter has grown is down to early adopters who jumped on the service and made it their own. Most of the recognisable features of the service (using the @ sign to reply, hashtags, retweets) are the result of people using a system and shaping it to suit them.

But as Twitter gradually kills off its deep-rooted connections to other services, just like a plant having its root structure damaged, the whole entity becomes weaker. And as Twitter is quite a big deal, I fear the whole of the web will feel this shock. Basically, it’s a bit like that scene in Avatar when they chop the big tree down. Sort of.

And how am I going to get my mini-magazine now?


This article owes a lot to this excellent post on the same subject from a couple of months ago: What Twitter Could Have Been by Dalton Caldwell.

QR codes and the end of the Turing Age

Much has been written about the New Aesthetic (NA) over the last few months, and like many I’ve been watching and waiting, wondering what might come of it. Just as thoughts were beginning to come together in my mind, Bruce Sterling’s essay on Wired.com sharply focussed many people’s thinking. The most interesting result of that essay has been the response of the community and the clearer definition of various elements of the movement.

Sterling’s extended treatise calls for more thought and consideration, and pitches the NA as the next significant movement after postmodernism and the 20th century; for me the NA is less an artistic movement and more a dawning realisation, a connecting of disparate dots, each of which is a reality of living with networked digital tools. Whether the NA truly represents the next movement in art after postmodernism is for someone else to answer. I ask myself, ‘what does the NA mean today?’


Taming Lions and Domesticating Cats

The continuing domestication of high technology is really what’s at the heart of the NA. As so many articles rightly point out, many of the things that get grouped under the umbrella term have been around for years. It’s a strange set of cultural and technological touch points, from satellite imagery to the vectorised artefacts of 3D photography: newish things and oldish things. What holds them together is the supernatural view of the world they give us. We can look over the planet from a mile above (flight), we can look through the walls of a building and see what’s happening inside (x-ray vision), we can communicate instantly with people on the other side of the planet (telepathy), we can control objects remotely (telekinesis). I could go on. I almost wonder if our penchant for superhero films will begin to wane, now that we all have little supermen in our pockets.

And as these SmartPhones (SuperPhones) infiltrate our day-to-day lives, they are leading the charge of pervasive technology. Automatic vacuum cleaners, Kinects, drone-copters: they’re all making their way in the world. My guess is, if you’re sitting somewhere in the western world, you probably have 100 sensors of various types within 10 metres of you. Which is an amazing/alarming/alluring thing in itself; the fact that they can all talk to each other as well is a very 21st-century state of affairs.

All of this technology has been domesticated and subsumed into the everyday, and by small increments we’ve been joined by a symbiotic species – we call them ‘devices’ and ‘widgets’ and ‘do-dahs’. We’ve begun to acknowledge the presence of these new things by adjusting our environment to suit them – albeit in a clunky way. The QR code heralds an interesting era where we share the visual landscape with our new robot friends, building in visual affordances for Computer Vision that make no sense to us at all, but that our smartphones absolutely love. As time goes by and Computer Vision improves, these QR codes and whatever follows them will disappear, or perhaps there will be a lasting remnant – just as even the most advanced CGI effects in films remain identifiable, somehow otherworldly.


After Turing

Seeing our new robot compatriots as a different species is of course a bit spurious, but it might set some rules for understanding how to interact with them. The dream of Artificial Intelligence was to replace the human brain with technology, to build a thinking machine. The dawning reality is that this was probably the wrong thing to attempt; after all, what do we gain from a machine that is like a human? What a wasted opportunity. The classic test of a thinking machine was defined by Alan Turing: in short, if you could chat with a computer and be fooled into thinking it was a human, then we could all deem AI a success. What Turing hadn’t factored in was the adaptation by humans when communicating with machines; our ability to meet them halfway changed our expectations of an interaction with a machine.

In fact we’ve already reached the point where spam messages can’t be distinguished from real messages, and people are falling in love with chat bots. Not because the machines got smart, but because we all exist on the same networks, and because these networks let everybody in. My Twitter feed is populated by friends, famous people and robots. They all get my attention, whether they pass the Turing test or not.

Ultimately we can leave the AI experiment in the 20th century and start to think about what we could better use robots for. And these decisions will be made by the content consumers, not the content providers. More and more we’re building tools and small pieces that other people can assemble themselves, to construct their own personalised spaces. Of course this isn’t a new thing in the old physical world, but it is a new thing for the Network Age we now live in.


Enter the Network Age

So here we stand, us looking at the robots and the robots looking at us. Each trying to understand the other. Sharing spaces, being shaped by each other. What should we be doing to shape and disrupt and embrace this new world? To me there seem to be a few things to think about.

Firstly, maintaining some visible signs of the systems we use feels important. Perhaps not in an overt way, but something to help people be aware of the robots having an impact on their world. It’s very easy to hide away the algorithms and snippets of code that set the boundaries around our lives, especially as they become more complex, but people are becoming more code-fluent, so perhaps there will be ways to keep things near the surface. Ultimately we can stop trying to humanise robots; what we need is to invent new personalities and behaviours that suit the machines themselves.

Secondly, and particularly for designers and content providers, we all need to be aware of the complex and shifting landscape our output will become a part of. This probably manifests itself as a combination of having a clear voice and a realistic expectation of the impact we might make. Let people appropriate your content and let them tell their own stories; forget trying to shape their reactions, because they’ll do what they want. It’s hard to imagine how you can begin to legislate for the ways your stuff will bubble up in the world, so focus on making content better suited to serendipity. Distribution is no longer your problem.

And I suppose the best way to finish is to encourage people to ask more questions, and try to answer them publicly. Share the knowledge.

Public Touchscreens and Logical Gestures

Let’s gesture on it

After reading an interesting article by Neil Clavin (@neilclavin) here: http://bit.ly/GCtOGO, via an Urbanscale tweet (@urbanscale), I was prompted to write a little retort on the importance of understanding gestures when understanding and designing interactions. And when I say gestures I’m not talking about post-Apple pinches and swipes, I’m talking about physical behaviours that emphasise the action being undertaken. Old-school gestures.

Read the original article and then my response:

Whilst I agree that public touch screens are actually a very unappealing concept when you begin to look at their day-to-day use, making the leap to “preferably touchless interactions” seems to ignore the tactile nature of being human.

I wonder if a better direction is to seek out interactions that are appropriate to the intended outcome. I’d say the reality of most touchless interfaces (such as the Oyster card) still involves actual contact. In fact I saw an older woman vigorously banging her Oyster card on the sensor the other day, presumably in the hope that a bit more physicality would improve the functionality.

Above all, successful contactless systems are about a logical gesture. This is why QR is such a ‘WTF’ experience: it doesn’t have a pre-existing behaviour attached to it (and of course the technology is as clunky as hell, and the content it reveals is usually crap).

So touchless = good. Touchless everything = no thanks.

From Adam Clavin's Walkshop: Analysing QR codes

I could also have gone into the potential difficulties that users with poor eyesight might have with a touchless interface, but this is a moot point, not least because I’ve got no specific evidence that it’s better or worse. My instinct is that it’s probably worse, but who knows.

What this really got me thinking about is this notion of carrying over instincts and behaviours from the old way of doing things. I’m forever debating this issue as a graphic designer: using the vernacular of the past to describe the present. Or using the vernacular of the present to introduce the future.

It depends on your perspective.

So using the funny visual cues of today can help people into the future, but how does it limit people? The UK road sign system icon for ‘speed camera’ is perhaps the oddest representation of a camera possible. It’s almost surreal.

And this is just visual stuff. What about gestures and behaviours? What learned behaviours do we carry forward to the post-digital age? Do we invent new things? Or approximate the old? And what is the value in either?

I don’t have any suggestions yet; maybe other people do. Gesture me an answer if you know something that I don’t. Just make sure it’s a gesture I understand.

Machine Etiquette

Recently, while working at IDEO, I’ve been involved in a project that began to explore the potential impact of smart devices. With ever-cheapening hardware and the widespread adoption of smartphones, we are approaching a time when all kinds of devices will find a voice. Not only will they talk to us but they’ll talk to each other, all the time. So what does it mean?

Send in the robots

Most people that I’ve spoken to tend to jump to a worst-case scenario (“why would I want to talk to my dishwasher?”), imagining a world where we have to deal not only with the complexity of relationships with people but also with devices. Of course this is not such a stretch of the imagination, and now that we have social networks that simultaneously support humans and robots (and every permutation of connection between the two), we have a strangely level playing field upon which these games can take place.

Ultimately this becomes a question of new rules of etiquette; after all, the need to maintain a more active relationship with a device hasn’t really existed before now. So who will define these new rules?

A home full of devices talking to each other

I’m a graphic/interaction designer, and while this emerging need for machine etiquette is very clear to me, it’s not clear whose remit it falls under. Is it the product designer? The UX specialist? The interface designer? Or perhaps there’s a new field needed here: the Device Behaviourist (or something along those lines, a Behaviour Designer?). Someone is needed to give these dumb devices smart voices, and a sense of appropriateness. It certainly becomes an expansive challenge when you consider the range of situations and locations that a device may find itself in and then need to communicate from. The nuance of social interactions takes a lifetime to perfect, so how will my toaster fare?

Good news

On the plus side, there are some distinct advantages that come with smarter appliances when you consider that they can also communicate with other smart robots via the internet. Suddenly the isolated coffee machine can track stock, record energy use, download new firmware, store favourites, suggest tips, diagnose its own faults and take on who knows how many other menial tasks that you’d probably not spend your days doing.

Making use of the data collected

It’s easy to imagine any number of scenarios like this in the short term, but more interestingly, over the longer term the machine can collect readings and build up enough data to make some pretty smart suggestions.

And who’s better placed to monitor how much energy you waste each time you boil the kettle than the kettle itself?
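To make that concrete, here’s a toy sketch of the kind of bookkeeping a kettle could do. Everything in it is hypothetical (the class, the tariff, the advice); the point is how little data a device needs before it can say something useful:

```python
from datetime import datetime

class SmartKettle:
    """An entirely hypothetical kettle that audits its own energy waste."""

    def __init__(self, tariff_per_kwh=0.14):  # assumed electricity tariff, GBP/kWh
        self.tariff = tariff_per_kwh
        self.boils = []  # (timestamp, litres boiled, litres actually used)

    def record_boil(self, litres_boiled, litres_used):
        self.boils.append((datetime.now(), litres_boiled, litres_used))

    def suggestion(self):
        # Heating a litre of water from 20C to 100C takes roughly 0.093 kWh
        # (4186 J/kg.K x 80 K), ignoring heat losses.
        wasted = sum(boiled - used for _, boiled, used in self.boils)
        cost = wasted * 0.093 * self.tariff
        return (f"You've boiled {wasted:.1f} litres more water than you've used, "
                f"costing roughly £{cost:.2f}. Try filling me a little less.")

kettle = SmartKettle()
kettle.record_boil(litres_boiled=1.5, litres_used=0.3)  # one mug from a full kettle
kettle.record_boil(litres_boiled=1.0, litres_used=0.3)
print(kettle.suggestion())
```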

So I’m generally open to the idea of new smart devices, but only if they’ve learnt some manners first.