Building on unfeasible ground – An Amendment

This post was originally published on the 30th August. Since then I’ve been contacted by x.ai who dispute the claims made by the Bloomberg article and the summary I gave. Having reviewed the article and rebuttal by x.ai I’ve decided to amend this article.

The section below on x.ai has now been updated. More to follow on this fascinating and evolving area.

As a designer it’s important to be aware of the current state of cutting edge technology and experience in your discipline area.

As an interaction designer this is increasingly difficult, as the lines blur between the technologies that underpin the things we experience each day. Looking at the average digital product or service, it’s now all but impossible to distinguish between a public beta, a stable service, one half of an A/B test and an untested prototype.

This makes it difficult to know where the cutting edge really is. Staying up to date with things requires that we’re all aware of what’s technologically feasible. In the pre-software world of design it was easier to assume that real meant feasible. However, it’s no longer safe to assume that seeing and interacting with something in the real world means it’s a safe example of technology that you could suggest in your own work.

Real doesn’t mean feasible any more

Feasibility is the measure of how possible it is to build a design with today’s technology. In the design world it’s a term tied to our industrial heritage: a time when success was all about engineering a solution and mass producing it. Feasibility in the age of software is much murkier, much less easy to define.

As designers we still need to strive for feasibility on behalf of our clients. Although we might not be the ones engineering the impossible, it might be us who are specifying the impossible. It’s really important to see how some of the most cutting edge experiences we’re inspired by might be entirely unfeasible. Below are a few examples.

1. Kickstarter’s fail rate

A case in point is the very existence of Kickstarter – a service built on the notion of a yet-to-exist product being available for preorder. Listers are encouraged to make their product seem as real as possible to persuade people to invest.

The fulfilment rate of successfully funded products is an impressive 91%, but that still means 9% of funded items will never actually see the light of day. This of course doesn’t include the projects that simply don’t make their funding goal – a much higher 56%.
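
To put those two figures together (a back-of-the-envelope sketch that simply multiplies the numbers above, assuming they’re independent), roughly four in ten listed projects ever make it into a backer’s hands:

```python
# Rough sketch: chance that a project listed on Kickstarter ever ships,
# using the figures above (56% never fund, 9% of funded projects never deliver).
p_funded = 1 - 0.56      # probability a listed project reaches its goal
p_delivered = 0.91       # probability a funded project actually ships

p_ships = p_funded * p_delivered
print(f"Chance a listed project ever ships: {p_ships:.0%}")  # roughly 40%
```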

Kickstarter Fail Rate

Amazon would be quite a different service if half of its products were listed as ‘unavailable’ and one in ten orders never actually arrived.

2. Lyft Carpool

San Francisco-born ride-hailing service Lyft recently announced it was rolling back its pooled ride feature ‘Lyft Carpool’ – where drivers were encouraged to pick up someone on the same route.

Lyft Carpool

It’s yet to be seen whether the Waze equivalent, Waze Carpool, will survive. But either way it’s interesting to see features rolled out and rolled back again.

Waze Carpool

Uber itself is an interesting example as leaked information earlier this month showed it had lost $1.2bn in the early part of this year.

3. x.ai accused of using humans

Automated assistant service x.ai garnered some less than positive press earlier this year when Bloomberg asserted that there were perhaps more humans in the loop than the layman might assume when thinking of an AI system. Bloomberg, it seems, misinterpreted the role of the machine’s ‘trainer’ (a common practice in AI systems).

x.ai

Whether you believe the Bloomberg article or x.ai’s counterpoint, the reality is that it’s difficult to know which side to take when the simplified, understandable version may actually be very misleading. When most people assume that the state of AI is defined by services like x.ai, it’s important to realise that all is not what it might seem…

What it means for design

So as designers what can we do? It’s not realistic for us to see inside each of the companies we’re looking to for inspiration. Even if we could, few of us could really interrogate their tech or business models (although these are both areas that designers should be making themselves more literate in).

There are a few things we can try to do though…

Firstly we can recognise that the examples above are themes that will be repeated. We can keep reminding ourselves and our clients that real doesn’t mean feasible. We can make ourselves aware that if it looks too good to be true, it might well be.

Secondly we can try to be more diligent. We can try to investigate the latest tech announcements, to see if they are too good to be true. Don’t just quote the tweet about the next big thing, read the article, track down the source, cross reference the claims being made.

Thirdly we can also be inspired by the impossible. Design is not solely about operating within the constraints of technology, it’s about pushing the boundaries in search of what’s best for the people we’re designing for. So don’t take this article as a suggestion to be less ambitious, take it as a recommendation to be more professional.

Single Function Buttons

Washing Machines

As ever, a thoughtful piece of design from BERG has kicked off some interesting analysis and writing by the design community.

Sitting in a more useful corner of ‘the smart home’, their Cloudwash washing machine concept/prototype is an interesting instantiation of a ‘mod con’ becoming connected.

Single Function Button

While most of the chatter has focused on the choice of device and what it says about men (and don’t get me wrong, I think this is a tremendously interesting area of discussion, go and read Rachel Coldicutt’s post) the thing that really caught my attention was the role of a single function button.


Single Function Buttons

In an era of touch screens the presence of buttons becomes more noticeable. The iPhone’s mute button; the page-turn button on a Nook e-reader; the Nest’s dial control.

Nest Thermostat
iPhone silent switch
Nook, next page button

What do they have in common? They all represent such a fundamentally important control for the device that they get their own button. (Incidentally, I don’t include on/off buttons in this group – I’m focussing on the functionality once the device is on.)

These buttons also allow for more tactile interaction, a learnable physical behaviour. In the case of the iPhone it’s easy to switch mute on and off without even taking your phone from your pocket. On the Nook my gaze never leaves the page. Click.

On BERG’s washing machine the button that is given its own sole function is the notification override. It makes a lot of sense: notifications are very useful, yet have the potential to become a big annoyance.

buttons

Buttons for Interaction Designers

The consideration of physical buttons by interaction designers is more important now than ever before. Touchscreens have become the most pervasive format (perhaps even the default format) in such a short period that it’s easy to forget the point of difference a physical button brings.

What functionality deserves a button? What type of button is best? What is the button’s default state? What does it sound like? How does it feel?

From an accessibility perspective physical buttons are also tremendously important. The reason that you don’t get touchscreen ATMs? They’d be tough to use for those with limited vision. Equally, those with restricted dexterity may benefit from something more tactile and forgiving than a capacitive iPhone screen.

But physical buttons also bring different challenges: they fail, they stick, they break. In fact, all the more reason to keep them for the most important functions. You can read more on how hard atoms are compared to pixels here.

More buttons, less buttons, no buttons

A final thought on the role of buttons from the new product developer at BERG (and recently Luckybite), Durrell Bishop.

Marble Answer Machine

The marble phone is an investigation into an entirely physical interface to an answer phone. Each new message is stored on a marble which rolls out onto the top of the machine when a message is left. You then listen to the message by placing the marble on the playback cup.

It’s easy to see at a glance if you have messages without a little flashing light (or notification). In fact it allows the technology to manifest as a piece of sculpture.

No buttons, no screen.

Worth thinking about when you next start designing an interaction or interface, whether on screen or with buttons.

Some thoughts on the design of a news app

Recently, someone asked about the design of a news app. It got me thinking about how to tackle the problem.

An App for a Gap

Like the majority of the content I consume on my phone, I use it to fill the gaps in the day. Those moments in between: waiting for a train, sitting on the bus, procrastinating while a file downloads. When I read the news in these situations it somehow feels productive, like I’m optimizing an otherwise empty moment.

But these gaps are irregular, they aren’t always easy to plan for and my particular mood at each moment is always different. So how would you design for these pockets of time? Maybe you’d serve up content based on the time it takes to read. Or maybe each article would have a 1 minute, 2 minute and 5 minute summary? Or could each article be condensed to a paragraph, so I could easily digest as many articles as I have time for? Maybe the app would ask how long I wanted to read for, then serve up a selection of articles to perfectly fill the gap.
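
As a rough sketch of that last idea (the articles, word counts and reading speed below are all made-up assumptions), the app could greedily pick stories whose combined reading time fits the minutes I say I have:

```python
# Hypothetical sketch: pick articles to fill a stated reading-time budget.
# Word counts and the 200-words-per-minute reading speed are assumptions.
WORDS_PER_MINUTE = 200

articles = [
    {"title": "Election result analysis", "words": 1200},
    {"title": "Transport strike latest", "words": 400},
    {"title": "Science: new exoplanet", "words": 700},
    {"title": "Football round-up", "words": 300},
]

def fill_the_gap(articles, minutes_available):
    """Greedily pick stories until the reading-time budget is used up."""
    budget = minutes_available * WORDS_PER_MINUTE
    picked = []
    for article in sorted(articles, key=lambda a: a["words"]):
        if article["words"] <= budget:
            picked.append(article["title"])
            budget -= article["words"]
    return picked

print(fill_the_gap(articles, minutes_available=5))
```

A real version would weigh editorial priority as well as length, but the shape of the problem is the same.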

This would have a big impact on journalism itself: it would require writers and editors to think in different versions of each story. Maybe a burden, or maybe an opportunity.

Super Productivity

I feel most efficient on my phone when I’m re-routing stuff. Save this for later, forward that email, take a photo of my receipt. My favourite IFTTT recipe copies the link from a tweet to my Instapaper account: when I see an interesting article I favourite the tweet for later offline reading. Terribly efficient.

IFTTT recipe

I want the same with my news app, but more refined. I want a button to save for offline reading, add to Evernote, save to Dropbox, share with a Google group. Not just a retweet button (get with it). In fact, why not just set up an IFTTT channel and let me build my own recipes?
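
Roughly what I mean by building my own recipes, as a sketch (the destinations and handler functions here are invented for illustration, not a real IFTTT or news-app API):

```python
# Hypothetical sketch: user-configurable "save" actions for a news app.
# Destination names and handlers are illustrative, not a real API.
def send_to_instapaper(article):
    print(f"Saved to Instapaper: {article['url']}")

def send_to_evernote(article):
    print(f"Clipped to Evernote: {article['title']}")

def send_to_dropbox(article):
    print(f"Dropped into Dropbox: {article['title']}.pdf")

# My own "recipes": which action each saved article should trigger.
MY_RECIPES = {
    "read later": send_to_instapaper,
    "archive": send_to_evernote,
    "keep a copy": send_to_dropbox,
}

def save(article, action):
    MY_RECIPES[action](article)

save({"title": "Why buttons matter", "url": "https://example.com/buttons"}, "read later")
```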

Curated Chaos, Personalised Bubbles

“It’s amazing that the amount of news that happens in the world every day always just exactly fits the newspaper” – Jerry Seinfeld

Curating the world’s news down to fit a newspaper is one thing; reducing it further to fit an app is even tougher. While the temptation is to skim off the popular stories from the main paper, does this give a balanced view of the news? Does this selection suit my personal taste? Is popularity the right measure for judging which stories I get served up?

Another approach is personalization. If it is curated to my personal taste (through some complicated learning algorithm), how do I escape the bubble of my own previous preferences? Will you occasionally surprise me with a story from a section I don’t normally read? Will you preserve the editorial tone amidst the individualization?

Perhaps through the design of the content you can let me be the filter of a large number of stories; like a newspaper, as my eye skims over, it’ll catch on the headline that suits my mood.

Money Making

Newspapers are yet to crack the business model for news apps. Buying each article seems punitive, and paying for a day’s content at a time leaves me with the opposite feeling of buying a newspaper (once I buy the paper, I can read the articles when I like).

Another direction is the Spotify/Netflix subscription model – complete access for a monthly fee. But this too seems wrong, perhaps because the price point for these services is close to the unit price of the old media (a CD or DVD). Instinctively £10 a month for millions of CDs is a no-brainer. But then these services are selling the same thing over and over again; newspapers need to generate a new collection of articles every day.

The paywall model makes money, but it doesn’t suit the irregular reader. At least with a newspaper I buy it when I need it.

So how do you make money from a news app? Maybe you sell functionality: offline reading, summary mode, ad blocking. But this needs to be packaged in a way that feels like I’m unlocking extras, rather than paying to release arbitrary restrictions.

In-app functional upgrade
The app ‘Paper’ allows users to upgrade the functionality with in-app purchases. It feels less punitive.

Or maybe you make the price of content so low that people won’t even think twice about buying it. 1p per article. Unarguably good value, so low that people won’t think twice about reading a second, third, fourth article. Just one more article. Oh, go on then.


So Much Data, So Little Time

Is there a more frustrating feeling than watching more data flow past you than you can process? Seeing your inbox fill up, trying to get it back to zero. More tweets in an hour than a person could process in a year. A symptom of the mass media age. etc.

I’d hate my news app to give me that feeling. I feel really guilty throwing away a newspaper that I haven’t read every single part of. Another app on my home screen with waiting notifications, no thanks.


Can you imagine a newspaper that fills up with new stories quicker than you can read them? At least with the newspaper I have the clear feeling of having ‘finished’ it. Is there a way that my app can convey this feeling? Is there a daily edition, or does this run at odds with the very nature of a live streaming device?

Ultimately I don’t want to be playing inbox zero with another app.

In Conclusion

These are just a few thoughts swimming around in my head. What do you think? Agree? Disagree?

Living with Sensors

In the last few weeks we’ve been experimenting with domestic sensors – the first wave of commercial products in the Smart Home area of consumer electronics. As part of our research I’ve been immersed in a world where my plants send me emails and the occasional push notification warns when CO2 levels rise above 1000ppm in our studio (whatever that means).

Plant Sensor
This is my ‘Smart Plant’; the white object that resembles a golf club is a wifi-connected sensor.

Admittedly it’s been a slightly contrived situation; with half a dozen devices all running in close proximity, and mainly in the office rather than at home. But even having said that, there have been some interesting findings and I thought I’d share them here for future reference.


Emerging principles and questions

1. Simplest Setup imaginable

Most of the systems and sensors I’ve played with are designed to make use of your home wifi network, which means you need to grant them access to your router and give them permission to use your Internet connection. On the surface this seems fairly straightforward, but when you have to run through half a dozen steps – disconnecting and reconnecting to your network, downloading apps, positioning the sensor optimally – you quickly see how prohibitive it will be for those who are less technically competent or patient.

Some of the smarter systems let you pair your devices using QR codes as a shortcut (this is actually a pretty good use of the format), or some other method that doesn’t involve lots of smartphone typing, but the reality is that unless your new system is pre-paired it’s a fairly awkward series of stages to go through. One exception is a system that promises to work straight out of the box; it uses bluetooth and cellular 2G data rather than wifi for its connectivity, but even this solution comes with its own limitations.
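
Part of why QR pairing works is that there’s already a widely supported plain-text format for wifi credentials, so a companion app can grab the network details in a single scan rather than asking the user to type them. A minimal sketch (the network name and password are placeholders, and it assumes the third-party ‘qrcode’ Python package):

```python
# Sketch: encode wifi credentials in the common "WIFI:" QR payload format
# so a sensor's companion app can join the network with a single scan.
# SSID/password here are placeholders; requires the 'qrcode' package.
import qrcode

ssid = "MyHomeNetwork"
password = "correct-horse-battery-staple"

payload = f"WIFI:T:WPA;S:{ssid};P:{password};;"
qrcode.make(payload).save("wifi-pairing.png")
```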

Questions for designers: how long does the new system take to get up and running? Will I need a smartphone to set it up? How much fun is the set-up procedure? How technically confident will I need to be to install it?

2. Meet one need

As with many technology-driven solutions, there is a desire to pack many features into new products. More features = more functionality, right? Wrong. And doubly wrong in this world of new technology: these new products can be overwhelming enough by themselves, and multi-functioning, open-ended systems are even more complex and require a lot of learning by the user.

We could take a hint from the app world here and make sure we are focusing on solving one problem at a time. There’s a clear analogy with the so-called modern conveniences or ‘mod cons’ of the 40s and 50s (the first wave of home technology products): each did one job and it was easy to grasp what the benefit would be. A washing machine washed clothes, a toaster toasted bread. Meeting one need at a time is crucial to new users’ understanding and adoption of these new systems.

Questions: how would you explain your new system to a 5-year-old or an 85-year-old? What current need are you meeting? How many functions does the product have? Is that too many?

3. Give actionable insights

It’s common for these new sensing devices to feed back information about their environment, but very few of them go a step further and suggest what the user should do with that information. One environmental sensor we experimented with was able to show the CO2 level in the room; not only that, it sent a warning message as the level went past 1000ppm. But what does that mean? Is it high? Is it dangerous? Later it would send another warning message when the level had passed 2000ppm. Was this serious? Should we evacuate?

Turning raw data into useful information is only one part of the process; the crucial further step is to tell the user that 1000ppm is too high and that opening a window or door might be a good idea. If this is done well the user will learn much more quickly what the new sensor data actually means.
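
A minimal sketch of the difference (the thresholds and advice text are my own assumptions, not guidance from any particular sensor maker): the reading arrives paired with something to do about it.

```python
# Sketch: turn a raw CO2 reading into an actionable message.
# Threshold values and advice text are illustrative assumptions.
def co2_advice(ppm: int) -> str:
    if ppm < 1000:
        return f"{ppm} ppm - air quality is fine, nothing to do."
    if ppm < 2000:
        return f"{ppm} ppm - getting stuffy, open a window or door."
    return f"{ppm} ppm - very high, ventilate the room now."

print(co2_advice(1150))  # "1150 ppm - getting stuffy, open a window or door."
```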

Questions: what should your user do with the data your system provides? How important or serious is this data? How should/could the user respond to this data?

WeMo Sensor kit

4. Fail very gracefully

Most of the sensors out there today are reliant on other technology to function; typically this means joining your home wifi network and often connecting to the internet through it. It’s a certainty that at some point these other systems will fail – the question is how the new sensor product handles this failure. These systems must be designed for the imperfect world they will go into: when something goes wrong, how much of the full functionality can you still provide? And when things do go wrong, don’t bombard the user with error and warning messages. (For seven days after I removed the sensor, one of my plant pots was still emailing me to demand more water.)

It does raise the question of how error messages could become more ambient; just because I’ve given something the power to send me push notifications doesn’t mean I want to hear from it every hour.
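
One simple pattern for this (a sketch under assumed behaviour, not how any of the products I tried actually work) is to let failure alerts back off and then go quiet until the sensor recovers:

```python
# Sketch: back off repeated failure alerts instead of emailing every hour.
# The intervals and alert cut-off are illustrative assumptions.
import time

def alert_user(message):
    print(message)  # stand-in for an email or push notification

def watchdog(check_sensor, max_alerts=3):
    """Alert on failure with a growing gap, then go quiet until recovery."""
    delay, alerts_sent = 3600, 0          # start by checking hourly
    while True:
        if check_sensor():
            delay, alerts_sent = 3600, 0  # healthy again: reset the back-off
        elif alerts_sent < max_alerts:
            alert_user("Sensor unreachable - will check again later.")
            alerts_sent += 1
            delay *= 2                    # wait twice as long before nagging again
        time.sleep(delay)
```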

Questions: how much of the system can keep running during a break in connectivity? Does it really need to connect to the internet? When things start to go wrong, how much can the system hide from the user?

5. Manage sensor fatigue

In the coming years we are going to invite dozens of self-aware devices into our homes, and they will each have a voice to express their needs, observations and concerns. It’s going to be a hectic place to live if each of these new systems isn’t aware of the current ‘noise’ level of its new home. Systems should at least be conscious of this, if not actively able to adjust to better ‘fit in’.

I have an oven at home with an alarm clock that I never got round to programming; occasionally I’ll accidentally nudge it and then it’ll beep at me. So I turn it off. In a house with 10 things that might all start beeping at me, the quietest devices will probably remain switched on for longest. A reversal of the old adage that “the wheel that squeaks loudest gets the oil”.
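
One speculative way to handle this (a sketch of my own, not something any current product offers) is to give the whole household a shared notification budget that every device has to draw from, with an override for genuine emergencies:

```python
# Speculative sketch: a shared daily notification budget for the whole home,
# so ten chatty devices can't each beep as if they were the only one.
class HouseholdBudget:
    def __init__(self, per_day=5):
        self.per_day = per_day
        self.sent_today = 0

    def request(self, device, message, urgent=False):
        """Allow the notification only if it's urgent or budget remains."""
        if urgent or self.sent_today < self.per_day:
            self.sent_today += 1
            print(f"[{device}] {message}")
            return True
        return False  # stay quiet; the household has heard enough today

budget = HouseholdBudget(per_day=2)
budget.request("plant sensor", "Soil is a little dry")
budget.request("oven", "Clock not set")
budget.request("kettle", "Descale soon")              # suppressed
budget.request("smoke alarm", "Smoke detected!", urgent=True)
```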

But there is a counter-concern: if these systems do too much by themselves, people will eventually become numb to the things that affect their environment. Don’t fiddle with the complex air conditioning system for 5 minutes, just open the window.

Questions for designers: how many other systems will your new one be functioning alongside? How much extra stress do you load onto the user in the management of the new system? How will the new system be welcomed into a household?

6. Smile

Finally, this is a personal plea to keep some of the fun in these new systems. It’s all too easy to celebrate the technology and the amazing data that it can produce, but something that makes me smile will earn its place in my home. Apple does this very well, even if it does get carried away with it sometimes.

Questions for designers: is there an opportunity for a bit of fun? Will it make me smile?

I hope some of this might be useful to designers working in this field; there certainly needs to be a dialogue about what we feel is right and wrong in this new world. I’d love to know what people think.

QR codes and the end of the Turing Age

Many things have been written about the New Aesthetic (NA) over the last few months; like many I’ve been watching and waiting, wondering what might come out of it. Just as thoughts were beginning to come together in my mind, Bruce Sterling’s essay on Wired.com sharply focussed many people’s thinking, and the most interesting result of this essay is the response of the community and the clearer definition of various elements of this movement.

Sterling’s extended treatise calls for more thought and consideration and pitches the NA as the next significant movement after postmodernism and the 20th Century; for me the NA is less an artistic movement and more of a dawning realisation and connecting of disparate dots, each of which is a reality of living with networked digital tools. Whether NA truly represents the next movement in art after Postmodernism is for someone else to answer, I ask myself ‘what does the NA mean today?’.


Taming Lions and Domesticating Cats

The continuing domestication of high technology is really what’s at the heart of the NA; as so many articles rightly point out, many of the things that get grouped under the umbrella term have been around for many years. It’s a strange set of cultural and technological touch points from satellite imagery to vectorised artefacts of 3D photography – newish things and oldish things – and what holds them together is the supernatural view of the world they give us. We can look over the planet from a mile above (flight), we can look through the walls of a building and see what’s happening inside (x-ray vision), we can communicate instantly with people on the other side of the planet (telepathy), we can control objects remotely (telekinesis). I could go on. I almost wonder if our penchant for superhero films will begin to wane; we all have little supermen in our pockets now.

And as these SmartPhones (SuperPhones) infiltrate our day-to-day lives, they are leading the charge of pervasive technology. Automatic vacuum cleaners, Kinects, Drone-copters. They’re all making their way in the world. My guess is, if you’re sitting somewhere in the western world, you probably have 100 sensors of various types within 10 metres of you. Which is an amazing/alarming/alluring thing in itself; the fact they can all talk to each other as well is a very 21st Century state of affairs.

All of this technology has been domesticated and subsumed into the everyday, and by small increments we’ve been joined by a symbiotic species – we call them ‘devices’ and ‘widgets’ and ‘do-dahs’. We’ve begun to acknowledge the presence of these new things by adjusting our environment to suit them – albeit in a clunky way. The QR code heralds an interesting era where we share the visual landscape with our new robot friends, building in visual affordances for Computer Vision that make no sense to us at all, but that our smartphones absolutely love. As time goes by, and Computer Vision improves, these QR codes and whatever follows them will disappear, or perhaps there will be a lasting remnant – just as even the most advanced CGI effects in films are identifiable, they remain otherworldly.


After Turing

Seeing our new robot compatriots as a different species is of course a bit spurious, but it might set some rules for understanding how to interact with them. The dream of Artificial Intelligence was to replace the human brain with technology, to build a thinking machine. The dawning reality is that this was probably the wrong thing to attempt; after all, what do we gain from a machine that’s just like a human? What a wasted opportunity. The classic test of a thinking machine was defined by Alan Turing: in short, if you could chat with a computer and be fooled into thinking it was a human, then we could all deem AI a success. What Turing hadn’t factored in was the adaptation by humans when communicating with machines; our ability to meet them halfway changed our expectation of an interaction with a machine.

In fact we’ve already passed the point where spam messages can’t be distinguished from real messages, and people are falling in love with chat bots. Not because the machines got smart, but because we all exist on the same networks, and because these networks let everybody in. My Twitter feed is populated by friends, famous people and robots. They all get my attention, whether they pass the Turing test or not.

Ultimately we can leave the AI experiment in the 20th century and start to think about what we could better use robots for. And these decisions will be made by the content consumers, not the content providers. More and more we’re building tools and small pieces that other people can assemble themselves, to construct their own personalised spaces. Of course this isn’t a new thing for the old physical world, but it is a new thing for the Network Age we now live in.


Enter the Network Age

So here we stand, us looking at the robots and the robots looking at us. Each trying to understand the other. Sharing spaces, being shaped by each other. What should we be doing to shape and disrupt and embrace this new world? To me there seem to be a few things to think about.

Firstly, maintaining some visible signs of the systems we use feels important. Perhaps not in an overt way, but something to help people be aware of the robots having an impact on their world. It’s very easy to hide away the algorithms and snippets of code that set the boundaries around our lives, especially as they become more complex – but people are becoming more code-fluent, so perhaps there will be ways to keep things near the surface. Ultimately we can stop trying to humanise robots – what we need is to invent new personalities and behaviours that suit the machines themselves.

Secondly, and particularly for designers and content providers, we all need to be aware of the complex and shifting landscape our output will become a part of. This probably manifests itself as a combination of having a clear voice and a realistic expectation of the impact we might make. Let people appropriate your content and let them tell their own stories – forget trying to shape their reactions, they’ll do what they want. It’s hard to imagine how you can begin to legislate for the ways your stuff will bubble up in the world, so focus on making better content suited to serendipity. Distribution is no longer your problem.

And I suppose the best way to finish is to encourage people to ask more questions, and try to answer them publicly. Share the knowledge.

Public Touchscreens and Logical Gestures

Let’s gesture on it

After reading an interesting article by Neil Clavin (@neilclavin) here: http://bit.ly/GCtOGO via an urbanscale tweet (@urbanscale) I was prompted to write a little retort on the importance of understanding gestures when understanding and designing interactions. And when I say gestures I’m not talking post-Apple pinches and swipes, I’m talking physical behaviours that emphasise the action being undertaken. Old school gestures.

Read the original article and then my response:

Whilst I agree that public touch screens are actually a very unappealing concept when you begin to look at their day-to-day use, making the leap to “preferably touchless interactions” seems to ignore the tactile nature of being human.

I wonder if a better direction is to seek out interactions that are appropriate to the intended outcome. I’d say the reality of most touchless interfaces (such as the Oyster card) still involves actual contact. In fact I saw an older woman vigorously banging her Oyster card on the sensor the other day, presumably in the hope that a bit more physicality would improve the functionality.

Above all, successful contactless systems are about a logical gesture. This is why QR is such a ‘WTF’ experience; because it doesn’t have a pre-existing behaviour attached to it (and of course the technology is as clunky as hell, and the content it reveals is usually crap).

So touchless = good. Touchless everything = no thanks.

From Adam Clavin's Walkshop: Analysing QR codes

I could also have gone into the potential difficulties that users with poor eyesight might have with a touchless interface – but this is a moot point, more so because I’ve actually got no specific evidence that it’s better or worse. My instinct is that it’s probably worse, but who knows.

What this really got me thinking about is this notion of carrying over instincts and behaviours from the old way of doing things. I’m forever debating this issue as a graphic designer; using the vernacular of the past to describe the current. Or. Using the vernacular of the present to introduce the future.

It depends on your perspective.

So using the funny visual cues of today can help people into the future, but how does it limit people? The UK road sign system icon for ‘speed camera’ is perhaps the oddest representation of a camera possible. It’s almost surreal.

And this is just visual stuff. What about gestures and behaviours? What learned behaviour do we carry forward to the post digital age? Do we invent new things? Or approximate the old? And what is the value in either?

I don’t have any suggestions yet, maybe other people do. Gesture me an answer if you know something that I don’t. Just make sure it’s a gesture I understand.

Machine Etiquette

Recently, while working at IDEO, I’ve been involved in a project that began to explore the potential impact of smart devices. With ever-cheapening hardware and the widespread adoption of smartphones, we are approaching a time when all kinds of devices will find a voice. Not only will they talk to us but they’ll talk to each other, all the time. So what does it mean?

Send in the robots

Most people that I’ve spoken to tend to jump to a worst case scenario (“why would I want to talk to my dishwasher?”), imagining a world where we not only have to deal with the complexity of relationships with people but also devices. Of course this is not such a stretch of the imagination; now that we have social networks that simultaneously support humans and robots (and every permutation of connection between the two), we have a strangely level playing field upon which these games can take place.

Ultimately this becomes a question of new rules of etiquette; after all, the need to maintain a more active relationship with a device hasn’t really existed before now. So who will define these new rules?

A home full of devices talking to each other

I’m a graphic/interaction designer and while this emerging need for machine etiquette is very clear to me, it’s not clear whose remit this falls under. Is it the product designer? The UX specialist? The interface designer? Or perhaps there’s a new field needed here, the Device Behaviorist (or something along those lines, Behaviour Designer?). Someone is needed to give these dumb devices smart voices, and a sense of appropriateness. It certainly becomes an expansive challenge when you consider the range of situations and locations that a device may find itself in and then need to communicate. The nuance of social interaction takes a lifetime to perfect, so how will my toaster fare?

Good news

On the plus side there are some distinct advantages that come with smarter appliances, when you consider that they can also communicate with other smarter robots via the internet. Suddenly the isolated coffee machine can track stock, record energy use, download new firmware, store favourites, suggest tips, diagnose its own faults and who knows how many other menial tasks that you’d probably not spend your days doing.

Making use of the data collected

It’s easy to imagine any number of scenarios like this in the short term, but more interestingly, over the longer term the machine can collect data and build up enough history to make some pretty smart suggestions.

And who’s better placed to monitor how much energy you waste each time you boil the kettle than the kettle itself?
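
As a back-of-the-envelope sketch (standard figures for water; the fill volumes are made up), the kettle could even put a number on the energy wasted by over-filling:

```python
# Sketch: estimate energy wasted by over-filling the kettle.
# Uses the specific heat of water (~4186 J/kg.K); volumes are assumptions.
SPECIFIC_HEAT_WATER = 4186   # J per kg per degree C
TAP_TEMP_C = 15
BOIL_TEMP_C = 100

def boil_energy_kwh(litres):
    joules = litres * SPECIFIC_HEAT_WATER * (BOIL_TEMP_C - TAP_TEMP_C)
    return joules / 3.6e6    # convert joules to kWh

filled, needed = 1.5, 0.3    # litres boiled vs litres actually used
wasted = boil_energy_kwh(filled - needed)
print(f"Energy wasted this boil: {wasted:.3f} kWh")
```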

So I’m generally open to the idea of new smart devices, but only if they’ve learnt some manners first.

QR code train timetables


From First Capital Connect, scan the QR code to get a timetable and line map.

This is actually a bit useful: it essentially replaces the little paper maps with a PDF you can save to your device, and for once the whole experience is mobile-optimised.

Well done.

The link, incidentally, takes you here: http://www.kadfire.com/webtts/fcc/d/btn/btn.pdf

It’s ironic that the poster is actually the weak link here – sorry for the bad picture above, but as you might see the alignment of information is awful.

Ruddy, downright awful mess.

The service seems to be supplied, at least in part, by http://www.kadfire.com/. I must look them up…

Idiot’s guide to QR codes

I hope there is a minor silver lining to be found in all this QR code nonsense: with every QR code that is shoved out into the world we move slightly closer to something replacing them.


This bizarre bus ad basically seems to be trying to explain what a QR code is. Odd, eh?

Paid for by Clear Channel, the owner of the advertising space. When you scan the bloody thing you just get one line of text:

‘Password is: smartphones’

No idea what the bloody hell is going on to be honest. Pretty much sums up the weirdness of these ugly little blighters.