From Wayfinding to Interaction Design

Originally published on Medium – https://medium.com/design-ux/3e7fef5a6512

Before joining IDEO as an Interaction Designer, I worked for one of the more influential wayfinding design companies — Applied Information Group (now called Applied). Wayfinding is the process of planning and making journeys through spaces; wayfinding design companies develop systems to help make this planning and journey-making easier. These systems come in all shapes and sizes, and can cover area naming, signage design, cartography, defining route networks and installing new landmarks to give an area more character. At Applied Information Group we worked on everything from simple internal building systems for hospitals to complex city-wide, multi-modal schemes that encompassed every mode of transport that the city offered.

The underpinning principles of the systems we designed were always the same, and we could draw on a great depth of research and academic study to inform our design; Kevin Lynch’s work on the mental models that people form when navigating the city goes back to the early 1960s. Of course the act of wayfinding is as old as the hills, so solutions were usually found in supporting innate behaviours rather than inventing new ones.

While I won’t go through all of the principles in this article, there are a few I’ve found to be useful in my move from wayfinding design to interaction design. I realise that it may not seem that a train station and a website have much in common, and so before I share some knowledge from the world of wayfinding, here are some of the similarities between the two:

1. Helping people get around complex spaces

If you break down a website, a city, a hospital, or an app, they can all be thought of as complex spaces that people travel through — spaces that are generally so complex that without some help users wouldn’t be able to navigate them confidently. A book, on the other hand, is not very complex — which is why I’m not comparing wayfinding to graphic design.

Interaction design and wayfinding design both seek ways to make it easier for people to understand these physical and virtual spaces.

2. Supporting journeys

When moving through complex spaces we make journeys made up of a sequential series of decision points — going from point A to B to C. Both interaction designers and wayfinding designers need to think in terms of journeys, not isolated points of interaction. Journeys are complex and sequential: the decision I make at point A will affect the rest of my journey (it might become impossible to get to point C, for example). A journey could be the series of screens I encounter when viewing and retweeting a message on Twitter, or it could be a walk to the train station.

3. Creating solutions for a wide range of people

When you’re designing information for a transit authority, your potential audience is everyone. When designing Vancouver’s public transit system, we didn’t focus on demographics or market segments because everyone rides the bus, or at least anyone can. People with physical disabilities, young people, tourists, daily commuters, elderly people — the list is endless. For interaction designers — and the digital world we design for — the audience can be just as demanding. Though most projects and clients have a particular audience segment in mind, viewing the full spectrum of users and balancing their (often) contradictory needs is a daily challenge.

4. Prototyping and piloting

Related to all of the points above, prototyping and piloting are crucial to the design process. (I’ve also written about the importance of prototyping here.) When I talk about prototyping I’m talking about a range of different levels of detail, from the rough-and-ready cardboard mock-up to detailed near-final working versions. Problems and solutions are often so complex that you’ve got to make it real for people to be able to join the design process.

A page from the Legible London ‘Yellow Book’

What Can Interaction Designers Learn from Wayfinding?

Hopefully you’re with me on the similarities. So what lessons can interaction designers learn from them? Below are some of the more interesting principles and methods that I’ve found applicable to both fields. For the designers out there, hopefully they will trigger some thoughts about how to improve your designs. For non-designers they should give you a glimpse into the difference between good and bad design.

1. Progressive Disclosure

If you give me every detail of a journey at the beginning, the chances are I won’t be able to process, store, and retrieve it. For example, when you ask for directions, after the fourth or fifth instruction you start to glaze over and struggle to remember what the first instruction was (Do I take the third left or the second left?).

Progressively disclosing information helps the end user by reducing the amount of information they have to deal with. The flip side is that we need to do a lot more work as designers to make sure everything fits together.
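For the designers who also build, here’s a minimal sketch of the idea in code (the route, steps and names are all my own hypothetical illustration): the full journey exists up front, but the traveller is only ever shown the instruction they need right now, with a small preview of what comes next.

```python
# A sketch of progressive disclosure for directions: the whole route
# is known in advance, but only one instruction is surfaced at a time.

class Route:
    def __init__(self, steps):
        self.steps = steps
        self.position = 0

    def current_instruction(self):
        """The one instruction the user needs right now."""
        return self.steps[self.position]

    def preview(self):
        """A glimpse of the next step, for reassurance."""
        nxt = self.position + 1
        return self.steps[nxt] if nxt < len(self.steps) else "You have arrived"

    def advance(self):
        """Called when the user confirms they've completed a step."""
        if self.position < len(self.steps) - 1:
            self.position += 1

route = Route([
    "Exit the station onto Main Street",
    "Take the third left onto Elm Road",
    "Cross the square diagonally",
    "The library is on your right",
])

print(route.current_instruction())  # only the first step is shown
route.advance()
print(route.current_instruction())  # now the second, and so on
```

The designer’s extra work mentioned above lives in `advance()` and `preview()`: deciding when a step is “done” and how much of the future to hint at is exactly the fitting-together problem.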

2. Consistency vs. Monotony

One of the best weapons for tackling complexity is consistency. As soon as I spot a familiar pattern in an environment, I can spend less time analysing and more time navigating. This is quite intuitive to most designers: uniform design means less “visual noise.” This becomes really important when you think about the baseline noise in a city. One of the key principles of the Legible London scheme (on which we collaborated with Lacock Gullam for TfL) was to remove signs that were made obsolete by the new system – rather than adding more and more.

But. There is a flip side: monotony. It’s a delicate balance to strike, but it’s good to remember that people may be using the service or tools daily or even hourly. So if there’s room for a little variation or even humour, then don’t be afraid of it.

3. Glanceable vs. Queriable

When I’m confirming I’m going the right way I only need to glance at information, but when I’m lost I need to query on a deeper level.

If your users will be shifting between these modes, then your interface should support them. This is the antithesis of “one size fits all.” Generally, serving up the most pertinent information in a digestible format requires a lot of analysis of what the user might actually need.
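As a sketch of what supporting both modes might mean in practice (all the names, data and wording here are invented for illustration): the same underlying journey data can serve both, with the glanceable view stripped right down and the queriable view carrying the full detail.

```python
# One underlying dataset, two presentation modes: a glance gives just
# enough to confirm you're on track, a query gives the full detail.

journey = {
    "line": "Northern",
    "direction": "Southbound",
    "next_stop": "Angel",
    "stops_remaining": 4,
    "eta_minutes": 9,
    "calling_points": ["Angel", "Old Street", "Moorgate", "Bank"],
}

def glance(j):
    # The minimum needed to answer "am I going the right way?"
    return f"{j['line']} {j['direction']} | next: {j['next_stop']}"

def query(j):
    # Full detail for when the traveller is lost or planning ahead.
    points = ", ".join(j["calling_points"])
    return (f"{j['line']} line, {j['direction'].lower()}. "
            f"{j['stops_remaining']} stops remaining ({points}), "
            f"about {j['eta_minutes']} minutes.")

print(glance(journey))
print(query(journey))
```

The hard design work is deciding which two or three fields earn a place in `glance` — everything else waits behind the query.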

Wayfinding and Interaction Design

I expect that these points will already resonate with interaction designers, and that’s because much of the thinking involved in wayfinding is instinctive, based on the way human brains are wired up. Hopefully you’ll find something useful in the observations above; at the very least they should give you some useful analogies when trying to explain your interaction design job to your friends.

Prototyping and Boundary Objects

I work at IDEO in London, where a big part of the way we work is the idea of ‘building to think’. We refer to this way of working specifically because it’s a proven method for sharing ideas and moving towards a solution more quickly than simply talking and planning. We look for ways to prototype our emerging ideas as often and as soon as possible. This isn’t always an easy thing to convince clients to do, and more often than not it represents an alien way of working.

Prototyping an entire retail environment, in paper and cardboard

Lots of my colleagues have written about the importance of prototyping and building to think elsewhere. It is a central tenet of the IDEO philosophy, and it translates into the mantra “Stop Talking, Start Making”. Watch IDEO founding father David Kelley talking about it in the General Assembly video below:

But why is building/making/prototyping better than talking when it comes to the creative process? Why shouldn’t we be able to simply talk through a problem and agree on a solution? Well, in the world of cross-disciplinary teams, specialist practitioners and clients, the reality is that no two groups speak the same language. Coders speak in code, graphic designers talk in pictures, and project managers, business designers and photographers all see the world in different ways. In a perfect world the best practitioners can talk across disciplines, but even then no one can talk across all disciplines. We all benefit from finding common ground and common points of reference; they help us calibrate the way we work together. And that’s where prototypes come in: a thing we can all gather around. In fact there is a sociological concept that sums up what a prototype is: a Boundary Object.

Three people talking about a house

Boundary Object

A boundary object is a ‘thing’ that is defined enough that several communities can recognise it as the same thing, yet flexible enough that each community can use it according to their own needs. In the conceptual sense they can be abstract or concrete, but either way they exist outside of people’s heads. A conversation can’t be a boundary object, for example, as it doesn’t live beyond the people having it. An annual report, however, is a boundary object, as it lives by itself. There’s more about boundary objects here.

The beauty of a prototype/boundary object is that you can show it to a group of people and they can all stand around and point at it, interact with it, build on it and above all see the inherent complexity. It’s also much easier to understand the trade-offs and decisions that have been made – even in a very low-resolution prototype. Where a conversation could take literally hours to deliver consensus, a prototype can shortcut the process.

The real power of a process like this is that it forces everyone involved to consider their area of input alongside everyone else’s, so it’s harder for decisions to be made in isolation.

Better Prototyping

So if we understand the importance of prototypes as a way to facilitate communication between different groups, what can the theory of Boundary Objects tell us to help us make better prototypes? In reading around for this blog post I came across a recent paper on the subject (which also references Mr. Kelley). It goes into much more detail than I have here, but it pulls out some key benefits of the prototype approach:

– Prototypes are a manifestation for feedback, for clients and designers alike
– Prototypes improve the team experience, by building confidence and emotional engagement
– Prototypes converge thinking

If we focus on these three key facets, it’s easy to see how our prototypes can be designed to suit each of these needs:

1. Build prototypes that are incomplete and demand feedback
2. Prototype for the benefit of your team and your clients
3. Make prototypes early and iterate rapidly

There’s much more to be said about the world of boundary objects and the process of prototyping; hopefully understanding some of the theory that underpins it will help convince more clients to embrace the power of the prototype.

Moving the peanut

I’m currently working on a project for a very big and (probably) slow-moving client. In many ways these are the most challenging projects you face as a designer because of the propensity toward inertia: where smaller companies can be more agile and adopt change quickly, generally the bigger they come the slower they move.

In this light a colleague of mine at IDEO passed on a bit of wisdom that she had received years earlier about the mindset to adopt when working on these projects:

Moving the peanut forward one inch.

It’s a curious metaphor, but for some reason it chimed with me. The notion is that sometimes even a very small step in the right direction is all that can be achieved. Most importantly, these very small steps by very big companies are actually much more significant than they might seem. At the moment this mindset is all that’s keeping me sane.

Why the advice invokes a peanut is anyone’s guess.

QR codes and the end of the Turing Age

Many things have been written about the New Aesthetic (NA) over the last few months, and like many I’ve been watching and waiting, wondering what might come out of it. Just as thoughts were beginning to come together in my mind, Bruce Sterling’s essay on Wired.com sharply focussed many people’s thinking, and the most interesting result of this essay is the response of the community and the clearer definition of various elements of this movement.

Sterling’s extended treatise calls for more thought and consideration, and pitches the NA as the next significant movement after postmodernism and the 20th century. For me the NA is less an artistic movement and more a dawning realisation, a connecting of disparate dots, each of which is a reality of living with networked digital tools. Whether the NA truly represents the next movement in art after postmodernism is for someone else to answer; I ask myself ‘what does the NA mean today?’.


Taming Lions and Domesticating Cats

The continuing domestication of high technology is really what’s at the heart of the NA. As so many articles rightly point out, many of the things that get grouped under the umbrella term have been around for many years. It’s a strange set of cultural and technological touch points, from satellite imagery to vectorised artefacts of 3D photography – newish things and oldish things – and what holds them together is the supernatural view of the world they give us. We can look over the planet from a mile above (flight), we can look through the walls of a building and see what’s happening inside (x-ray vision), we can communicate instantly with people on the other side of the planet (telepathy), we can control objects remotely (telekinesis). I could go on. I almost wonder if our penchant for superhero films will begin to wane; we all have little supermen in our pockets now.

And as these smartphones (superphones) infiltrate our day-to-day lives, they are leading the charge of pervasive technology. Automatic vacuum cleaners, Kinects, drone-copters: they’re all making their way in the world. My guess is, if you’re sitting somewhere in the western world, you probably have 100 sensors of various types within 10 metres of you. Which is an amazing/alarming/alluring thing in itself; the fact they can all talk to each other as well is a very 21st-century state of affairs.

All of this technology has been domesticated and subsumed into the everyday, and by small increments we’ve been joined by a symbiotic species – we call them ‘devices’ and ‘widgets’ and ‘do-dahs’. We’ve begun to acknowledge the presence of these new things by adjusting our environment to suit them – albeit in a clunky way. The QR code heralds an interesting era where we share the visual landscape with our new robot friends, building in visual affordances for computer vision that make no sense to us at all, but that our smartphones absolutely love. As time goes by and computer vision improves, these QR codes and whatever follows them will disappear – or perhaps there will be a lasting remnant, just as even the most advanced CGI effects in films are identifiable: they remain otherworldly.


After Turing

Seeing our new robot compatriots as a different species is of course a bit spurious, but it might set some rules for understanding how to interact with them. The dream of Artificial Intelligence was to replace the human brain with technology, to build a thinking machine. The dawning reality is that this was probably the wrong thing to attempt; after all, what do we gain from a machine like a human? What a wasted opportunity. The classic test of a thinking machine was defined by Alan Turing: in short, if you could chat with a computer and be fooled into thinking it was human, then we could all deem AI a success. What Turing hadn’t factored in was the adaptation by humans when communicating with machines; our ability to meet them half way changed our expectation of an interaction with a machine.

In fact we’ve already passed the point where spam messages can’t be distinguished from real messages, and people are falling in love with chat bots. Not because the machines got smart, but because we all exist on the same networks, and because these networks let everybody in. My Twitter feed is populated by friends, famous people and robots. They all get my attention, whether they pass the Turing test or not.

Ultimately we can leave the AI experiment in the 20th century and start to think about what we could better use robots for. And these decisions will be made by the content consumers, not the content providers. More and more we’re building tools and small pieces that other people can assemble themselves, to construct their own personalised spaces. Of course this isn’t a new thing for the old physical world, but it is a new thing for the Network Age we now live in.


Enter the Network Age

So here we stand, us looking at the robots and the robots looking at us. Each trying to understand the other. Sharing spaces, being shaped by each other. What should we be doing to shape and disrupt and embrace this new world? To me there seem to be a few things to think about.

Firstly, maintaining some visible signs of the systems we use feels important. Perhaps not in an overt way, but something to help people be aware of the robots having an impact on their world. It’s very easy to hide away the algorithms and snippets of code that set the boundaries around our lives, especially as they become more complex – but people are becoming more code-fluent, and so perhaps there will be ways to keep things near the surface. Ultimately we can stop trying to humanise robots – what we need is to invent new personalities and behaviours that suit the machines themselves.

Secondly, and particularly for designers and content providers, we all need to be aware of the complex and shifting landscape our output will become a part of. This probably manifests itself as a combination of having a clear voice and a realistic expectation of the impact we might make. Let people appropriate your content and let them tell their own stories – forget trying to shape their reactions, they’ll do what they want. It’s hard to imagine how you can begin to legislate for the ways your stuff will bubble up in the world, so focus on making better content suited to serendipity. Distribution is no longer your problem.

And I suppose the best way to finish is to encourage people to ask more questions, and try to answer them publicly. Share the knowledge.

The Art Of Storytelling

Many moons ago I bought tickets for The Story conference (run by @matlock). When I first heard about it and read the blurb, something chimed about the idea of focussing on storytelling – not media, or platform, or industry, or any number of other ways of dividing and conquering, but the general idea of storytelling. I wonder if it planted a seed in some ways, because recently the notion of storytelling has come back to me from several different sources, in that zeitgeisty way things sometimes do.

So why storytelling?

Well, for a start storytelling is quite a unique coming together of expert, audience and knowledge. Obviously there are many ways that these three things can combine – the teacher and the student, the preacher and the congregation, the singer and the crowd. But none of them are quite the same as storytelling: the art of storytelling is the transfer of knowledge along with context. In fact context often plays as big a role as the raw content. Maybe even more so…

Expert + Content + Audience

Context is all about what happened around an event, rather than the event itself; it’s extra data (‘metadata’, to borrow from the world of the web), and the right context can really make something relevant. Perhaps above all else, the relevance of information is what makes it ‘stick’. So if you tell a good story you transmit data, and make it stick.

As a designer I deal in the communication of complex ideas to other people on a daily basis, and this is where the art of storytelling comes into its own. Whilst design isn’t always about pure innovation, it always includes an element of it; the process of design naturally leads to new ‘things’, and new things inherently come without context. This is where storytelling comes in: it allows you to explain something new, but with some kind of context. Context means relevance; relevance means it’ll stick. And in the best cases it sticks so well that the audience becomes the expert and will tell the story to others.

So long live the art of storytelling, especially in the world of design.

Machine Etiquette

Recently, while working at IDEO, I’ve been involved in a project that began to explore the potential impact of smart devices. With ever-cheapening hardware and the widespread adoption of smartphones, we are approaching a time when all kinds of devices will find a voice. Not only will they talk to us but they’ll talk to each other, all the time. So what does it mean?

Send in the robots

Most people I’ve spoken to tend to jump to a worst-case scenario (“why would I want to talk to my dishwasher?”), imagining a world where we not only have to deal with the complexity of relationships with people but also with devices. Of course this is not such a stretch of the imagination, and now that we have social networks that simultaneously support humans and robots (and every permutation of connection between the two), we have a strangely level playing field upon which these games can take place.

Ultimately this becomes a question of new rules of etiquette; after all, the need to maintain a more active relationship with a device hasn’t really existed before now. So who will define these new rules?

A home full of devices talking to each other

I’m a graphic/interaction designer, and while this emerging need for machine etiquette is very clear to me, it’s not clear whose remit it falls under. Is it the product designer? The UX specialist? The interface designer? Or perhaps there’s a new field needed here: the Device Behaviourist (or something along those lines – Behaviour Designer?). Someone is needed to give these dumb devices smart voices, and a sense of appropriateness. It certainly becomes an expansive challenge when you consider the range of situations and locations that a device may find itself in and then need to communicate. The nuance of social interactions takes a lifetime to perfect, so how will my toaster fare?

Good news

On the plus side there are some distinct advantages that come with smarter appliances, when you consider that they can also communicate with other smart robots via the internet. Suddenly the isolated coffee machine can track stock, record energy use, download new firmware, store favourites, suggest tips, diagnose its own faults and perform who knows how many other menial tasks that you’d probably not spend your days doing.

Making use of the data collected

It’s easy to imagine any number of scenarios like this in the short term, but more interestingly, in the longer term the machine can retrieve data over a longer period and build up enough of it to make some pretty smart suggestions.

And who’s better placed to monitor how much energy you waste each time you boil the kettle than the kettle itself?
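A rough sketch of how that might look (the figures, names and the simple physics shortcut are all my own illustration, not a real product): the kettle logs how much it boils versus how much actually gets poured, and totals the energy that went into water nobody used.

```python
# A hypothetical kettle that logs each boil and notices, over time,
# that you routinely heat more water than you pour.

def wasted_energy_joules(boiled_litres, poured_litres,
                         start_temp_c=20.0, boil_temp_c=100.0):
    # Energy to heat water: E = m * c * dT, with c ~ 4186 J/(kg*C)
    # and 1 litre of water ~ 1 kg.
    wasted_litres = max(boiled_litres - poured_litres, 0.0)
    return wasted_litres * 4186 * (boil_temp_c - start_temp_c)

# (boiled, poured) in litres, one entry per boil
boil_log = [
    (1.5, 0.3),  # filled the kettle, poured one mug
    (1.0, 0.3),
    (1.2, 0.6),
]

total_wasted = sum(wasted_energy_joules(b, p) for b, p in boil_log)
print(f"Wasted roughly {total_wasted / 3.6e6:.2f} kWh over "
      f"{len(boil_log)} boils — try filling only what you need.")
```

Even this toy version shows the shape of the suggestion: the kettle doesn’t need to be clever, just patient enough to accumulate the data and polite enough to mention it once.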

So I’m generally open to the idea of new smart devices, but only if they’ve learnt some manners first.