Inspired by similar email lists like Reilly Brennan’s Future Of Transportation and Data Machina, my goal was to look beyond the immediate challenges of UX and the design patterns of UI (after all there are plenty of people offering links to great articles in these areas). Instead I want to consider the other forces at work on our industry.
My hope is that by looking a little further we might be inspired by technology, psychology, business and the role we play in the organizations we work for.
It’s a simple start, and it’s pushing me to evaluate what I call inspiration, but I’m going to keep digging for new things and sharing them.
As a little taster here are a few of the interesting links from the last few emails:
Interaction Design is a young profession. In its short history it has been in a state of continuous reshaping, redefinition and stratification. The chart below gives a quick idea of just how much this has turned us into a group of individuals tagged by acronyms and overlapping with adjacent disciplines.
What’s striking is how quickly this change has happened to the Interaction Designer. This change is driven by technology, and the pace of technological change continues to increase.
Today, the Interaction Designer—and I count myself as one—is the person you call in to design the ways that people engage with digital systems. We’ve developed increasingly dynamic interfaces to back-ends that are largely static. The potential for more dynamic front-ends is driven by advances to the interface technology; it’s characterised by the move from keyboard to mouse to touch-screen and voice.
The static back-ends we connect to are servers, computers and databases that only change when the user or system owner changes them. In the future these back-ends will have increasing artificial intelligence, and this marks a seismic shift for our profession.
The dynamic potential of the front-end will be matched by the new dynamism in the back-end.
There are many back-end systems that could be called ‘dynamic’ or ‘active’ today, but this shouldn’t become a discussion about semantics; it’s more important to consider how changes on the system side (not the user interface side) will push the Interaction Designer into new territory. AI-powered products and services will mean far more exchanges between smart people and smart systems. In the past we focussed all of our energy on the design of dynamic interfaces; we’ll now need to account for dynamic back-end systems too.
At first it will look like the same Interaction Design we’ve been doing for 30 years; we’ll use the same tools, the same paradigms and the same principles. However, the changes will become gradually clearer as we look to benefit from the potential of the new technology. It’s impossible to say where the Interaction Designer will be in another 30 years (in fact it’s impossible to say where we’ll be in 5 years), but one thing that is certain is that the new phase of Interaction Design will require us to change, and change quickly.
Today, designers who can code are highly valued. A designer who can visually, architecturally manipulate code is rare. In the future we’ll need designers who can go further and be literate in the back-end systems too. Or, perhaps we’ll need a new role—someone who can sit between the front- and back-ends acting as a translator between the designer of interface and the designer of the AI system.
Systems are changing, becoming intelligent
The change Interaction Design is about to go through will be characterised by increased access to Artificial Intelligence. It won’t be a centralised intelligence in a few places, but a broad distribution of intelligence around our (currently) static systems, which will make it harder to spot at first.
This distributed artificial intelligence isn’t how science fiction has imagined it so far—forget the image of a singular ‘brain’ able to answer any question. In fact, forget any notion of human-like intelligence. Instead, think about a million smart thermostats observing patterns of heat control, combining them with the weather forecast and deciding to adjust your heating one hundred times a day to save you money.
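The thermostat example can be sketched in a few lines. This is a toy illustration, not any real product’s logic; the function name, thresholds and inputs are all invented.

```python
# Toy sketch of narrow, distributed intelligence: a thermostat that
# nudges its target temperature based on occupancy and a forecast.
def adjust_setpoint(preferred_temp, house_empty, forecast_temp):
    """Return a target temperature for the next heating interval."""
    if house_empty:
        return preferred_temp - 4  # nobody home: run cooler to save money
    if forecast_temp >= preferred_temp:
        return None  # a warm day is coming: skip heating entirely
    return preferred_temp  # otherwise hold the preferred temperature

print(adjust_setpoint(21, house_empty=True, forecast_temp=5))
print(adjust_setpoint(21, house_empty=False, forecast_temp=25))
```

Multiply a decision this small by a hundred intervals a day and a million homes, and you have the kind of quiet, narrow intelligence the paragraph above describes.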
The cognification that Kevin Kelly refers to is a very narrow type of intelligence. It will be somewhat disappointing at first as small steps make incremental improvements. But the effect will magnify as many things each gain their own intelligence, multiplying across the dozens of products and services you encounter every day.
In the future, as things become intelligent, products and services will be even more complex to interface with; these systems will change between the times that users interact with them.
The very use of these intelligent systems will allow them to grow and evolve. For users this will be simultaneously exciting, unsettling, alien and empowering. We will need Interaction Designers to navigate these choppy waters, to advocate for users and to make the best use of new technology.
We will need to ask new questions: how will it feel to use something which has altered itself since you last used it? The key word here is ‘itself’. Users are probably already used to variable and unpredictable experiences with software, but generally that change is triggered by human action. In the future there won’t be a human in the loop.
Cognification also suggests other, more nuanced changes to products. Not only will they be in a state of on-going, algorithmically informed self-improvement, they will start to be opinionated and express personality. These are not new things for digital systems per se, but they will become more commonplace, as underlying smartness becomes the norm. In some cases personality will be key to the success of the system, in other cases the cold logic of a computer system will be preferred.
Interfaces that are both a front-end user experience and a learning opportunity for smart systems are new; it’s not yet clear how different it will feel for users to be active participants in the continuous building of machine intelligences far more powerful than themselves, where today they are just passive receivers of a system’s output.
Interaction designers will be working with smarter, opinionated products existing on new non-screen-based interfaces. These products will change through use, and the more popular they are the more they will change.
Everything here talks to the dynamism of the back-end. The nature of technology also means that the front-end interfaces we’re designing are also changing. This is why the change we’re about to experience is so seismic.
Interfaces becoming cloud powered
One of the bigger driving forces in this new age of interaction is the proliferation of wireless connectivity in smaller and smaller devices. When a device is permanently connected to the Internet it benefits from two things. The first is remote access to data: when devices don’t need hard drives to store data, their physical size can shrink. At a certain point devices become so small that having a screen or a keyboard doesn’t make sense either, calling for more novel forms of interaction.
Ironically, ‘novel’ interfaces here are very familiar to people: voice is becoming a feasible way of interacting; gestures and motion tracking are also starting to become more common. In theory we already know how to use these interfaces—how to talk, how to move—but we need to quickly learn how to design them to best serve the people using them.
The second benefit of connectivity to the cloud is access to the higher computing power of the server: running things that would otherwise be too time- or power-intensive for a personal device with limited RAM and hard drive space. When I search Google on my iPhone, all of the power sits on the Google server; my phone is little more than a portal to the back end.
These increasingly small, increasingly powerful tools and services drive novel interfaces like voice and gesture, but they also suggest an increasingly confusing future where we spend most of our time trying to figure out how to use things rather than actually using them. Some are attempting to define principles for these new interfaces: Intercom’s first attempt at Principles for Bot Design suggests some interesting and perhaps unexpected directions for text/chat interface experiences.
In addition to the two benefits of cloud connectivity above, a less clear effect will be one of interconnectivity: as more products and services become connected they will expect to talk to each other. Products will rely on other products. We’re already seeing an explosion in API companies that simply provide the back-end plumbing for others to build with. What will it mean for services to be intertwined with each other in this way? How will we design for graceful failure when failure happens to a third party service that the user is unaware of?
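One way to design for that kind of graceful failure is to fall back to the last known-good value when a third-party call fails. A minimal sketch, with a hypothetical rate service standing in for the third-party dependency (all names here are invented for illustration):

```python
# Graceful degradation sketch: when the third-party service is down,
# serve the last cached value instead of surfacing a raw error.
_cache = {"gbp_usd": 1.30}  # last successfully fetched value

def fetch_live_rate():
    # Stand-in for a real third-party API call that is currently failing.
    raise TimeoutError("third-party rate service unreachable")

def get_rate():
    try:
        rate = fetch_live_rate()
        _cache["gbp_usd"] = rate  # refresh the cache on success
        return rate, "live"
    except Exception:
        # The user sees a slightly stale number, not a failure screen.
        return _cache["gbp_usd"], "cached"

rate, source = get_rate()
print(rate, source)
```

The design question the paragraph raises is exactly this: what does the user see in the `"cached"` branch, given that they never knew the third-party service existed?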
There are many benefits for the Interaction Designer in the future, and many more challenges to tackle.
Cognification + Cloud: distributed AI
The two forces of cognification and cloud power suggest a coming era of distributed AI accessed through new interfaces. This will obviously have ramifications beyond the discipline of Interaction Design, but the Interaction Designer will be key to helping people make sense of the changes coming our way.
Distributed AI—by its nature—will be everywhere; it will be cheap and it will become expected by users. But while the distribution will be wide, the individual use cases for products will be narrow.
Narrow AI is being applied to diverse and complex human problems like sentencing criminals and deciding which patients should be discharged from hospitals. Equally, each smart product will have a narrow remit and look to tackle specific problems.
The individual systems will be highly specific, but the Interaction Designer’s skills will need to diversify. There’s already a need for us to be competent in a range of skills: wire-framing, visual design, coding, animation and sketching are often expected. The future will call for an understanding of writing, logic, psychology, machine learning and behavioural economics, and probably a dozen more skills we can’t even imagine at the moment.
Growing these skills will call for collaboration with different experts, and for each of us to become more inquisitive about the new adjacent fields we’ll be working with. As part of this change I’ve started to collect interesting links together in a fortnightly email—Future Interaction Designer—please subscribe and submit your own links. It’s an experiment to see if I can keep up with the rapid pace myself, and I hope many of you will contribute.
When will it start?
The examples above are all current products and services—so this is the world we’re already living in. In fact some of the examples here aren’t even that new. So the question is not when will it start, but when will you start. When will you start to add to your knowledge, start to collaborate and start to help the rest of us navigate the choppy waters?
As the cost of data storage drops and we continue to see the democratisation of AI tools like TensorFlow from Google, we have the choice to ignore the future or play a part in it. This future will accelerate toward us whether we like it or not.
This post was originally published on the 30th August. Since then I’ve been contacted by x.ai who dispute the claims made by the Bloomberg article and the summary I gave. Having reviewed the article and rebuttal by x.ai I’ve decided to amend this article.
The section below on x.ai has now been updated. More to follow on this fascinating and evolving area.
As a designer it’s important to be aware of the current state of cutting edge technology and experience in your discipline area.
As an interaction designer this is increasingly difficult, as the lines blur between the technologies that underpin the things we experience each day. Looking at the average digital product or service, it’s now all but impossible to distinguish between a public beta, a stable service, one half of an A/B test and an untested prototype.
This makes it difficult to know where the cutting edge really is. Staying up to date requires that we’re all aware of what’s technologically feasible. In the pre-software world of design it was easier to assume that real meant feasible. However, it’s no longer safe to assume that seeing and interacting with something in the real world means it’s a safe example of technology that you could suggest in your own work.
Real doesn’t mean feasible any more
Feasibility is the measure of how possible it is to build a design with today’s technology. In the design world it’s a term tied to our industrial heritage: a time when success was all about engineering a solution and mass producing it. Feasibility in the age of software is much murkier, much less easy to define.
As designers we still need to strive for feasibility on behalf of our clients. Although we might not be the ones engineering the impossible, it might be us who are specifying the impossible. It’s really important to see how some of the most cutting-edge experiences we’re inspired by might be entirely unfeasible. Below are a few examples.
1. Kickstarter’s fail rate
A case in point is the very existence of Kickstarter – a service built on the notion of a yet-to-exist product being available for preorder. Listers are encouraged to make it look as real as possible to encourage people to invest.
The success rate of fulfilled products is an impressive 91%, but that still means 9% of successfully funded items will never actually see the light of day. This of course doesn’t include the projects that simply don’t make their funding goal at all: a much higher number, at 56%.
Amazon would be quite a different service if half of the products were listed as ‘unavailable’ and one in ten products ordered didn’t actually arrive.
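For anyone wanting to check the maths, the two failure rates compound:

```python
# Using the figures above: 56% of projects miss their funding goal,
# and 9% of the funded ones never ship.
funded = 1 - 0.56          # share of all listings that hit their goal
fulfilled = funded * 0.91  # share of all listings that actually ship
print(round(fulfilled, 2))
```

So only around 40% of the products you see listed on Kickstarter ever become real things you could hold, which is the sense in which ‘real’ on a Kickstarter page doesn’t mean feasible.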
2. Lyft Carpool
San Francisco-born ride-hailing service Lyft recently announced it was rolling back its pooled-ride feature ‘Lyft Carpool’ – where drivers were encouraged to pick up someone travelling the same route.
It’s yet to be seen if the Waze equivalent Waze Carpool will also survive. But either way it’s interesting to see features rolled out and rolled back again.
Uber itself is an interesting example, as leaked information earlier this month showed it had lost $1.2bn in the early part of this year.
3. x.ai accused of using humans
Automated assistant service x.ai garnered some less-than-positive press earlier this year when Bloomberg asserted that there were perhaps more humans in the loop than the layman might assume when thinking of an AI system. Bloomberg, it seems, had misinterpreted the role of the ‘trainer’ of the machine (a common practice in AI systems).
Whether you believe the Bloomberg article or x.ai’s counterpoint, the reality is that it’s difficult to know which side to take when the simplified, understandable version may actually be very misleading. When most people assume that the state of AI is defined by services like x.ai, it’s important to realise that all is not what it might seem…
What it means for design
So as designers what can we do? It’s not realistic for us to see inside each of the companies we’re looking to for inspiration. Even if we could, few of us could really interrogate their tech or business models (although these are both areas that designers should be making themselves more literate in).
There are a few things we can try to do though…
Firstly we can recognise that the examples above are themes that will be repeated. We can keep reminding ourselves and our clients that real doesn’t mean feasible. We can make ourselves aware that if it looks too good to be true, it might well be.
Secondly we can try to be more diligent. We can try to investigate the latest tech announcements, to see if they are too good to be true. Don’t just quote the tweet about the next big thing, read the article, track down the source, cross reference the claims being made.
Thirdly, we can be inspired by the impossible. Design is not solely about operating within the constraints of technology; it’s about pushing the boundaries in search of what’s best for the people we’re designing for. So don’t take this article as a suggestion to be less ambitious; take it as a recommendation to be more professional.
You’re a designer. You’re designing an app, so who’s your biggest competitor? Who should you be inspired by in your design?
How about the Domino’s Pizza app?
Of course it’s tempting to look to the cutting edge of visual and UI design; you could spend your time sifting through Dribbble for slices of beautiful visuals. But your audience are thinking about other slices. Deep-pan double-pepperoni slices.
If you haven’t spoken to your customers, how will you know which other apps they’re spending most time with? Uber is an amazing service with a great app, but are your users using it? Maybe they are, or maybe they use their phone in a different way.
I single Domino’s out not because I want free pizza – in fact, to be clear, I don’t remotely endorse their product. But that’s the whole point: it doesn’t matter what I like – I’m just the designer.
Empathy, not really
It’s also worth saying that this isn’t so much about empathy – although putting yourself in the shoes of your customers is vital – it’s more about understanding your customers’ expectations.
The best examples of interface design aren’t found in your industry.
If you’re designing for a bank, a shop or a car company you should be looking at Domino’s to understand what users’ expectations will be. It doesn’t matter if you are the best banking app in the world; you sit on users’ phones alongside Domino’s. And Domino’s have a great app.
Apps like CityMapper, Candy Crush and Snapchat are setting the bar. The good news: they can all give you great inspiration for your work.
The next time you meet with your users, ask them which apps are making them smile. Then get using them yourself. Learn what’s setting your users’ expectations and, more importantly, understand where they’re spending their time, because ultimately this is what you’re competing for – their time.
At IDEO we’re putting high resolution prototypes out in the real world to test ideas and design with real people in real situations.
In other words, we’re shipping product. Invoking the methodologies of a startup we can put our designs out in the wild and see how they respond under every day use.
This is great because it means we can move far more quickly than our clients could. We launch in days, not months.
But when you ship product – even if it’s secretly a high resolution prototype – you start encountering other issues.
Features = barriers to use
Every single feature you put into an app becomes a barrier to use.
This should feel humbling/frustrating/counterintuitive to every interaction designer that reads this post. But every time you add functionality to a service you complicate it and force your users to make decisions. These decisions are, in part, evaluations of the service as a whole.
Do I really want to store my photos on this website? Do I really trust these guys to deliver on time? Do I really want to play this free game?
Questions your users are asking right now
The only reason Google is where it is today is because they stripped everything away from the experience and basically launched an MVP. There was zero friction from their interface, they got out of the way and before you realised you’d arrived on their site you’d already interacted and got value from them. And they still do the same today.
include only the features that people need to accomplish their goals
Google’s advice to developers
This problem is best highlighted by an innocuous feature we added to the app we’re building at the moment. We thought it would be useful to have the app capture a photo of each user. We could add it to their profile and it would help us track people and the data we’re collecting.
But adding a photo is a hurdle, and a big one at that. People didn’t want to add one. Bear in mind that we’re paying our participants to be in our trial – so we have some licence to ask them to do certain things – but it still forced people to stop and evaluate.
Worse for us, an incomplete sign up profile meant no data being captured and no design iteration happening.
When building new experiences, especially the crucial interactions around sign-up and onboarding, make sure you have as little as possible standing between your user and the core experience you want them to get to.
Every feature you add will cost you users and reduce your growth rate. Even if you think the feature is cool/nice/important, think twice about its relevance and whether it blocks your users from getting to the heart of your product.
Every feature you add is a hurdle for your next new user.
As ever, a thoughtful piece of design from BERG has kicked off some interesting analysis and writing by the design community.
Sitting in a more useful corner of ‘the smart home’, their Cloudwash washing machine concept/prototype is an interesting instantiation of a ‘mod con’ becoming connected.
While most of the chatter has focused on the choice of device and what it says about men (and don’t get me wrong, I think this is a tremendously interesting area of discussion – go and read Rachel Coldicutt’s post), the thing that really caught my attention was the role of a single-function button.
Single Function Buttons
In an era of touch screens the presence of buttons becomes more noticeable. The iPhone’s mute button; the turn-page button on a Nook ebook reader; the Nest’s dial control.
What do they have in common? They all represent such a fundamentally important control for the device that they get their own button. (Incidentally, I don’t include on/off buttons in this group – I’m focussing on the functionality once the device is on.)
These buttons also allow for more tactile interaction, a learnable physical behaviour. In the case of the iPhone it’s easy to switch mute on and off without even taking your phone from your pocket. On the nook my gaze never leaves the page. Click.
On BERG’s washing machine, the button that is given its own sole function is the notification override. It makes a lot of sense: notifications are very useful, yet have the potential to become a big annoyance.
Buttons for Interaction Designers
The consideration of physical buttons by interaction designers is more important now than ever before. Touch screens have become the most pervasive format (perhaps even the default format) in such a short period that it’s easy to forget the point of difference a physical button brings.
What functionality deserves a button? What type of button is best? What is the button’s default state? What does it sound like? How does it feel?
From an accessibility perspective physical buttons are also tremendously important. The reason you don’t get touchscreen ATMs? They’d be tough to use for those with limited vision. Equally, those with restricted dexterity may also benefit from something more tactile and forgiving than a capacitive iPhone screen.
A final thought on the role of buttons from the new product developer at BERG (and recently Luckybite), Durrell Bishop.
The marble phone is an investigation into an entirely physical interface to an answer phone. Each new message is stored on a marble which rolls out onto the top of the machine when a message is left. You then listen to the message by placing the marble on the playback cup.
It’s easy to see at a glance if you have messages without a little flashing light (or notification). In fact it allows the technology to manifest as a piece of sculpture.
No buttons, no screen.
Worth thinking about when you next start designing the interactions or interface, on screen or with buttons.
Recently, someone asked me about the design of a news app. It got me thinking about how to tackle the problem.
An App for a Gap
Like the majority of the content I consume on my phone, I use it to fill the gaps in the day. Those moments in between: waiting for a train, sitting on the bus, procrastinating while a file downloads. When I read the news in these situations it somehow feels productive, like I’m optimizing an otherwise empty moment.
But these gaps are irregular: they aren’t always easy to plan for, and my particular mood at each moment is always different. So how would you design for these pockets of time? Maybe you’d serve up content based on the time it takes to read. Or maybe each article would have a 1-minute, 2-minute and 5-minute summary? Or could each article be condensed to a paragraph, so I could easily digest as many articles as I have time for? Maybe the app would ask how long I wanted to read for, then serve up a selection of articles to perfectly fill the gap.
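The ‘fill the gap’ idea could be prototyped with a simple greedy selection. A sketch, with invented article data; a real app would weigh relevance and mood as well as reading time:

```python
# Greedily pick articles whose combined reading time fits the minutes
# the reader says they have, shortest reads first.
def fill_gap(articles, minutes):
    """articles: list of (title, read_time_minutes); returns titles that fit."""
    chosen, remaining = [], minutes
    for title, read_time in sorted(articles, key=lambda a: a[1]):
        if read_time <= remaining:
            chosen.append(title)
            remaining -= read_time
    return chosen

articles = [("Budget analysis", 5), ("Transfer gossip", 1), ("Op-ed", 3)]
print(fill_gap(articles, 4))  # the 1- and 3-minute reads fill a 4-minute wait
```

Even this crude version makes the editorial implication concrete: someone has to produce and tag those reading-time estimates for every story.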
This would have a big impact on journalism itself; it would require writers and editors to think in different versions for each story. Maybe a burden, or maybe an opportunity.
I feel most efficient on my phone when I’m re-routing stuff. Save this for later, forward that email, take a photo of my receipt. My favourite IFTTT recipe copies the link from a tweet to my Instapaper account: when I see an interesting article I favourite the tweet for later offline reading. Terribly efficient.
I want the same with my news app, but more refined. I want a button to save for offline reading, add to Evernote, save to Dropbox, share with a Google Group. Not just a retweet button (get with it). In fact, why not just set up an IFTTT channel and let me build my own recipes?
Curated Chaos, Personalised Bubbles
“It’s amazing that the amount of news that happens in the world every day always just exactly fits the newspaper” – Jerry Seinfeld
Curating the world’s news down to fit a newspaper is one thing; reducing it further to an app is even tougher. While the temptation is to skim off the popular stories from the main paper, does this give a balanced view of the news? Does this selection suit my personal taste? Is popularity the right measure for judging which stories I get served up?
Another approach is personalization. If it is curated to my personal taste (through some complicated learning algorithm), how do I escape the bubble of my own previous preferences? Will you occasionally surprise me with a story from a section I don’t normally read? Will you preserve the editorial tone amidst the individualization?
Perhaps through the design of the content you can let me be the filter of a large number of stories; like a newspaper, as my eye skims over, it’ll catch on the headline that suits my mood.
Newspapers are yet to crack the business model for news apps. Buying each article seems punitive, and paying for a day’s content at a time leaves me with the opposite feeling of buying a newspaper (once I buy the paper, I can read the articles whenever I like).
Another direction is the Spotify/Netflix subscription model – complete access for a monthly fee. But this too seems wrong, perhaps because the price point for these services is close to the unit price of the old media (a CD or DVD). Instinctively, £10 a month for millions of CDs is a no-brainer. But then these services are selling the same thing over and over again; newspapers need to generate a new collection of articles every day.
The paywall model makes money, but it doesn’t suit the irregular reader. At least with a newspaper I buy it when I need it.
So how do you make money from a news app? Maybe you sell functionality: offline reading, summary mode, ad blocking. But this needs to be packaged in a way that feels like I’m unlocking extras, rather than paying to release arbitrary restrictions.
Or maybe you make the price of content so low that people won’t even think twice about buying it. 1p per article. Unarguably good value, so low that people won’t think twice about reading a second, third, fourth article. Just one more article. Oh, go on then.
So Much Data, So Little Time
Is there a more frustrating feeling than watching more data flow past you than you can process? Seeing your inbox fill up, trying to get it back to zero. More tweets in an hour than a person could process in a year. A symptom of the mass-media age.
I’d hate my news app to give me that feeling. I feel really guilty throwing away a newspaper that I haven’t read every single part of. Another app on my home screen with waiting notifications? No thanks.
Can you imagine a newspaper that fills up with new stories quicker than you can read them? At least with the newspaper I have the clear feeling of having ‘finished’ it. Is there a way that my app can convey this feeling? Is there a daily edition, or does this run at odds with the very nature of a live, streaming device?
Ultimately I don’t want to be playing inbox zero with another app.
These are just a few thoughts swimming around in my head, what do you think? Agree, Disagree?
In the last few weeks we’ve been experimenting with domestic sensors – the first wave of commercial products in the Smart Home area of consumer electronics. As part of our research I’ve been immersed in a world where my plants send me emails and the occasional push notification warns when CO2 levels rise above 1000ppm in our studio (whatever that means).
Admittedly it’s been a slightly contrived situation, with half a dozen devices all running in close proximity, and mainly in the office rather than at home. But even so, there have been some interesting findings and I thought I’d share them here for future reference.
Emerging principles and questions
1. Simplest Setup imaginable
Most of the systems and sensors I’ve played with are designed to make use of your home wifi network, which means you need to grant them access to your router and give them permission to use your Internet connection. On the surface this seems fairly straightforward, but when you have to run through half a dozen steps – including disconnecting and reconnecting to your network, downloading apps and finding the optimal sensor position – you quickly see how prohibitive it will be to those less technically competent and patient.
Some of the smarter systems let you pair your devices using QR codes as a shortcut (this is actually a pretty good use of the format), or some other method that doesn’t involve lots of smartphone typing, but the reality is that unless your new system is pre-paired it’s a fairly awkward series of stages to go through. One exception is a system that promises to work straight out of the box; it uses Bluetooth and cellular 2G data rather than wifi for its connectivity, but even this solution comes with its own limitations.
Questions for designers: how long does the new system take to get up and running? Will I need a smartphone to set it up? How much fun is the setup procedure? How technically confident will I need to be to install it?
2. Meet one need
As with many technology-driven solutions, there is a desire to pack many features into new products. More features = more functionality, right? Wrong. And doubly wrong in this world of new technology: these new products can be overwhelming enough by themselves, and multi-functioning, open-ended systems are even more complex and require a lot of learning by the user.
We could take a hint from the ‘App’ world here and make sure we are focusing on solving one problem at a time. There’s a clear analogy with the so-called Modern Conveniences or ‘mod cons’ of the 40s and 50s (the first wave of home technology products): each did one job and it was easy to grasp what the benefits would be — a washing machine washed clothes, a toaster toasted bread. Meeting one need at a time is crucial to new users’ understanding and adoption of these new systems.
Questions: how would you explain your new system to a 5-year-old or an 85-year-old? What current need are you meeting? How many functions does the product have? Is that too many?
3. Give actionable insights
It’s common for these new sensing devices to feed back information about their environment, but very few of them go a step further and suggest what the user should do with that information. One environmental sensor we experimented with was able to show the CO2 level in the room; not only that, it sent a warning message as the level went past 1000ppm. But what does that mean? Is it high? Is it dangerous? Later it would send another warning message when the level had passed 2000ppm. Was this serious? Should we evacuate?
Turning raw data into useful information is only one part of the process; the crucial further step is telling the user that 1000ppm is too high and that opening a window or door might be a good idea. Done well, this helps the user learn much more quickly what the new sensor data actually means.
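In code, that further step can be as simple as a small lookup table mapping raw readings to plain-language advice. A sketch, assuming the 1000ppm and 2000ppm thresholds from the sensor above – the band labels and wording are illustrative, not official air-quality guidance:

```python
# Illustrative CO2 bands: (upper limit in ppm, label, suggested action).
CO2_BANDS = [
    (1000, "OK", "Air quality is fine."),
    (2000, "High", "CO2 is above 1000ppm - consider opening a window or door."),
    (float("inf"), "Very high", "CO2 is above 2000ppm - ventilate the room now."),
]

def co2_advice(ppm: int) -> str:
    """Turn a raw ppm reading into a label plus an actionable suggestion."""
    for limit, label, action in CO2_BANDS:
        if ppm < limit:
            return f"{label}: {action}"

print(co2_advice(1450))  # advice for a mid-range reading
```

The point isn’t the thresholds themselves but that every message pairs the number with something the user can actually do.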
Questions: what should your user do with the data your system provides? How important or serious is this data? How should/could the user respond to this data?
4. Fail very gracefully
Most of the sensors out there today rely on other technology to function – typically this means joining your home wifi network and often connecting to the internet through it. It’s a certainty that at some point these other systems will fail; the question is how the new sensor product handles that failure. These systems must be designed for the imperfect world they will go into: when something goes wrong, how much of the full functionality can you still provide? And when things do go wrong, don’t bombard the user with error and warning messages. (For seven days after I removed the sensor, one of my plant pots was still emailing me demanding more water.)
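One way to avoid a week of nagging emails is to back off between repeated warnings: send the first one promptly, then wait progressively longer before repeating it. A minimal sketch – the class name and intervals are my own invention, not any product’s behaviour:

```python
import time

class PoliteAlerter:
    """Send the first warning promptly, then back off exponentially
    instead of nagging the user every hour (illustrative sketch)."""

    def __init__(self, base_interval=3600, max_interval=86400):
        self.base = base_interval      # first repeat after 1 hour
        self.max = max_interval        # never wait longer than 1 day
        self.repeats = 0
        self.next_allowed = 0.0

    def should_notify(self, now=None) -> bool:
        now = time.time() if now is None else now
        if now < self.next_allowed:
            return False               # stay quiet, we warned recently
        delay = min(self.base * (2 ** self.repeats), self.max)
        self.repeats += 1
        self.next_allowed = now + delay
        return True

    def reset(self):
        """Call when the fault clears (or the sensor is removed)."""
        self.repeats = 0
        self.next_allowed = 0.0
```

The `reset()` path matters as much as the backoff: a system that notices its sensor has gone away can stop emailing about it entirely.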
It does raise the question of how error messages could become more ambient; just because I’ve given something the power to send me push notifications doesn’t mean I want to hear from it every hour.
Questions: how much of the system can keep running during a break in connectivity? Does it really need to connect to the internet? When things start to go wrong, how much can the system hide from the user?
5. Manage sensor fatigue
In the coming years we are going to invite dozens of self-aware devices into our homes, and they will each have a voice to express their needs, observations and concerns. It’s going to be a hectic place to live if each of these new systems isn’t aware of the current ‘noise’ levels of its new home. Systems should at least be conscious of this, if not actively able to adjust to better ‘fit in’.
I have an oven at home with an alarm clock that I never got round to programming; occasionally I’ll accidentally nudge into it and then it’ll beep at me. So I turn it off. In a house with 10 things that might all start beeping at me, the quietest devices will probably remain switched on the longest. A reversal of the old adage that “the wheel that squeaks loudest gets the oil”.
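One way to sketch that awareness of household ‘noise’ levels is a shared attention budget: devices ask before beeping, and low-priority alerts are dropped once the house has already been noisy recently. Everything here – the class, the cap, the window – is hypothetical, just to make the idea concrete:

```python
from collections import deque

class AttentionBudget:
    """Household-wide cap on audible alerts: low-priority beeps are
    dropped once the house has been noisy recently (illustrative)."""

    def __init__(self, max_alerts=3, window=3600):
        self.max_alerts = max_alerts   # alerts allowed per window
        self.window = window           # window length in seconds
        self.recent = deque()          # timestamps of recent alerts

    def request(self, now, priority="low") -> bool:
        # Forget alerts that have aged out of the window.
        while self.recent and now - self.recent[0] > self.window:
            self.recent.popleft()
        if priority == "high" or len(self.recent) < self.max_alerts:
            self.recent.append(now)
            return True
        return False  # stay quiet: the house is already beeping enough
```

A genuinely urgent alert (a smoke alarm, say) bypasses the budget; the oven clock has to wait its turn.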
But there is a counter concern that if these systems do too much by themselves that eventually people will become numb to the things that affect their environments: don’t fiddle with the complex air conditioning system for 5 minutes, just open the window.
Questions for designers: how many other systems will your new one be functioning alongside? How much extra stress do you load onto the user in the management of the new system? How will the new system be welcomed into a household?
Finally, this is a personal plea to keep some of the fun in these new systems. It’s all too easy to celebrate the technology and the amazing data it can produce, but something that makes me smile will earn its place in my home. Apple does this very well, even if it does get carried away with it sometimes.
Questions for designers: is there an opportunity for a bit of fun? Will it make me smile?
I hope some of this might be useful to designers working in this field; there certainly needs to be a dialogue about what we feel is right and wrong in this new world. I’d love to know what people think.