[From Pocket-lint (“the largest independent gadget news and reviews site in the UK”)]
The touchy-feely future of the user interface
FUTURE WEEK: How the machines will come out and meet us in 2015
31 March 2010 12:00 GMT / By Dan Sung
The user interface is big business right now. In truth, it always was, but it’s taken the mass popularisation of the iPhone to bring it to the public agenda. Until then, a good interface was one you didn’t notice. If no one mentioned it, it was doing its job, letting the user perform the task at hand with minimal fuss. But when the famous talky tablet turned up, it brought with it something that would change this principle for consumers everywhere – a touchscreen.
Of course, Apple was by no means the first to use a touchscreen, but it was the first to get it right. Its responsiveness and its multitouch technology showed people there was an alternative way of getting your message across to a machine other than hammering away at keys, and, as a result, we now expect more. As well as minimal fuss, we desire maximal simplicity, and with it a certain sense of grace and beauty too. According to user experience guru Gus Desbarats, it’s about time too.
“We have allowed ourselves to come out and meet the machines more than half way”, he says, “but now they’re going to have to come out and meet us”.
Gus is the chairman of leading UK technology industrial design house TheAlloy and has been working in the field for over 25 years. His projects have included everything from the Sinclair C5 to the BT Homehub and even the FUSE future mobile concept we reported on at CES 2010. He also happens to have won an Oscar for technical achievement, which makes him our pick of the crop as the person to talk to about UIs. We sat down with the man for Future Week and picked his brains on just how machines might be reaching back to humanity by 2015.
Touch
One might think that we’ve already got to grips with this sense on the UI front and that there’s little room left for manoeuvre, what with multitouch and haptic feedback both commonplace. Not so.
“We’ve only just realised that pecking at bits of plastic might not be the best way of interfacing with machines,” says Gus. “There’s more richness in a tactile interface than we’ve uncovered and miles more sophistication we can implement. Obviously the FUSE is a great example of this but there’s also just more subtlety we can hone with what there already is”.
The FUSE, of course, introduced the idea of adding an analogue dimension to touch with pressure – a squeezable interface – but the same can work for feedback too. As we explore touch further over the next few years, we’ll see haptics follow suit, with different levels and pulses of vibration offering a much bigger language of communication between ourselves and machines. Add to that gesture control – which we see at a primitive level in laptop touchpads and, more interestingly, in apps like Android’s Gesture – and suddenly there’s a huge vocabulary to work with. All this makes for a more intuitive and subtle relationship which can only work in our favour.
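To give a flavour of what that bigger haptic language might look like to a developer, here’s a minimal sketch using Android’s stock Vibrator API. The activity name and the meaning of each pattern below are invented for illustration; only the Vibrator calls themselves are the real platform API.

```java
import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.os.Vibrator;

// A minimal sketch of a richer haptic "vocabulary" on Android.
// The class name and pattern meanings are illustrative; only the
// Vibrator API is the real platform call. Requires the
// android.permission.VIBRATE permission in the manifest.
public class HapticDemoActivity extends Activity {

    // Patterns alternate off/on durations in milliseconds,
    // beginning with an initial delay before the first buzz.
    private static final long[] GENTLE_NUDGE = {0, 40};
    private static final long[] URGENT_ALERT = {0, 200, 100, 200, 100, 400};

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Vibrator vibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
        // Play the "urgent" pattern once (-1 means don't repeat);
        // a gentler event would use GENTLE_NUDGE instead.
        vibrator.vibrate(URGENT_ALERT, -1);
    }
}
```

Even with nothing more than timed pulses and a repeat flag, you can see how distinct vibration signatures could carry different meanings, much as ringtones already do for sound.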
Multitouch itself can also improve in a similar way. It’s well suited to handheld devices of 4 inches or less, but as screens get bigger, there’s suddenly room for all your digits to get involved as well.
Israeli company N-Trig has developed a four-finger and stylus system called DuoSense which lets you do just this. If you can stomach the saccharine of the production, the demo video of just what it can do is well worth a watch. As it happens, DuoSense has been around for a year or two already, so expect this kind of multi-finger multitouch to arrive in our gadgets well before 2015.
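The software side of this needs surprisingly little plumbing. Here’s a minimal sketch of reading several fingers at once with Android’s MotionEvent API; the listener class is our own invention for illustration, but getPointerCount() and the indexed coordinate getters are the real calls.

```java
import android.util.Log;
import android.view.MotionEvent;
import android.view.View;

// A minimal sketch of tracking every finger on the screen at once.
// The class is illustrative; MotionEvent's pointer-index methods are
// the real Android API.
public class MultiTouchLogger implements View.OnTouchListener {

    @Override
    public boolean onTouch(View view, MotionEvent event) {
        // Each "pointer" is one finger currently touching the screen.
        for (int i = 0; i < event.getPointerCount(); i++) {
            Log.d("MultiTouch", "finger " + i + " at "
                    + event.getX(i) + ", " + event.getY(i));
        }
        return true; // consume the event
    }
}
```

Attach it to any View with setOnTouchListener() and each extra finger simply turns up as another pointer index – as with DuoSense, the hardware, not the software, is the limiting factor.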
With the arrival of Windows 7, touch is now a part of computers too, so we’re bound to be seeing this kind of thing more and more. There are tablets and laptops, and this year we’ve seen some incredibly exciting prototypes of touch technology that should be a part of consumer technology in 5 years’ time – Skinput, which turns our bodies into touchscreens, and the Light Touch, which turns any surface at all into one. By 2015, they’ll probably even be able to sell the Microsoft Surface too.
Sight
We’ve been staring at computer and mobile phone screens for a long time now, and it’s only of late that there have been more significant advances in the way we use our eyes to work with machines. Once displays went colour, there was really very little change. Sure, we upped the resolution a little for a cleaner experience, but what our eyes have been crying out for is a way to move the UI away from the flat monitor entirely and into something more immersive.
“We haven’t leveraged sight enough yet”, admits Gus. “Our computer screens are in a completely flat environment but behind them is such a very rich one. Our natural instincts are about where things are in a spatial world. Now we’re seeing 3D come in, involving our senses more, and in five years we’ll have holograms and projectors to free ourselves from these flat screens. That’s the next thing we’ll be seeing from Star Trek come to life – the holodeck”.
“The technology to build things like this already exists. Very soon there’ll be enough cheap memory to record our entire lives in high resolution from birth to death. People might even have large digital annexes in their houses with enough information to create whatever kinds of visual fields we wish. And, of course, we will have more gesture controlled content very soon, with the Tai Chi-style talking to machines that we’ve seen in Minority Report. The technology is there. We just need to create the environments”.
As exciting as it sounds, of course, it will take some time not only to arrive but, importantly, to catch on. As we’ve already said, touch was around a long time before Apple and, similarly, we experimented with the idea of the all-encompassing visual environment before with Virtual Reality. The difference is that the technology has to be smooth and seamless enough, and the experience useful and rewarding enough, for something new like this to catch on.
VR was clunky, with heavy, wired kit. Soften the approach, though – enhance the visual environment we already have rather than creating a whole new one – and you’ve got the workable, useful UI in a 360-degree world that Augmented Reality provides. The experience is still on a screen, but the screen itself is moved and shaped to wherever we need it, and now gives relevance to its locale.
Speak to any futurologist and they’ll tell you AR will be big well before 2015. Glasses, windscreens, mirrors, pico projectors and perhaps even contact lenses will bring the machine’s voice closer to our normal worlds. It’s not hard to see how this is a precursor to creating entirely generated environments in the way that VR prematurely had a go at 20 years ago in the consumer space.
For now, though, there are simple examples of ways we can improve what we do with flat screens. The JDome projects games onto a curved surface which users play inside, giving a more involving, real-life experience, and it might be these kinds of techniques that are first used for Minority Report-style interfaces. After all, our arms work all around us, not just in a 2D plane.
Of course, the other option is: why bother using arms at all? Let the machines come right up to our faces. Let them track our eye movements so we don’t even have to use a mouse. Just drag and drop by looking – though whether this is something we’ll want them to do in the future is another matter.
Sound
Again, with sound, there are two sides to the story: the machines making sounds for us, and us making sounds for the machines. Whichever way you look at it, it’s an important area as far as Gus is concerned.
“Devices with better sounds have a richer, more compelling experience. It’s well documented. Sound is the biggest problem in home cinema and is the source of most of the disappointment for people. Over the next five years, we’re going to see some amazing stuff in this field, with people stuffing more and more of that clear orchestral sound into ever tinier boxes and the likes of smartphones too”.
“The idea of audio branding is going to be big as well. You might get an Audi car door opening with a certain three-note jingle, for example, that people will then begin to associate with these products and bring them into our lives on a deeper level”.
On the one hand, having more of our senses advertised to more often seems quite abhorrent; on the other, having your car bleep a chirpy hello as you step out for the morning commute might, for some, make a pleasant addition.
Where we’d all probably like to see sound improved as an interface between man and machine is in voice activation. There’s already plenty from Google, among others, in this area – voice recognition in Android lets users fill in any field on the phone by talking – and the good news from Gus is that there’s plenty more of this to come.
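To show just how accessible that hook already is, here’s a hedged sketch using Android’s stock RecognizerIntent. The helper class and request code are our own invention; the intent action and extras are the platform’s real speech API.

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;

import java.util.ArrayList;

// A minimal sketch of Android's built-in speech recognition – the kind
// of hook that lets voice fill in a text field. The helper class and
// request code are illustrative; the RecognizerIntent constants are real.
public class VoiceInputHelper {

    public static final int REQUEST_SPEECH = 42; // arbitrary request code

    // Launch the system speech recogniser from an Activity.
    public static void startListening(Activity activity) {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
        activity.startActivityForResult(intent, REQUEST_SPEECH);
    }

    // Call from onActivityResult to pull out the most likely transcription.
    public static String bestMatch(Intent data) {
        ArrayList<String> results =
                data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        return (results == null || results.isEmpty()) ? null : results.get(0);
    }
}
```

The recogniser hands back a ranked list of guesses, so the phone can drop the top match straight into whichever field has focus.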
“There are already very good algorithms for voice recognition and in five years we’re going to see a lot more of it but we won’t necessarily be using it all the time. It’s about trying to visualise the context of use. It’s only valuable if it’s faster and more reliable than typing in a given situation. Typing is fairly slow compared to voice, but pushing a single button is quicker. If you take the context of driving, then voice control becomes highly valuable but, on the train, some might see the idea of voice as more intrusive even if it happens to functionally be a better option”.
“It’s going to be difficult to find the killer app for voice but it might be something simple or even a combination of input mediums. Saying, ‘George, dial’ might feel a little unnatural for people, but saying ‘George, squeeze’ or just ‘George’ and then squeezing your FUSE-type phone or making a gesture has more of an emotional content to it that might catch on”.
Smell
It’s hard to think of many non-natural interactions with smell in our lives that are purpose-built for communication. It’s no doubt a sense of huge importance to us, but it’s massively underused and underdeveloped, and almost completely absent in technology and UIs. Perhaps the only place it’s been implemented, and to great effect, is in the artificial odour given to household gas. That smell we now associate with methane has saved countless lives.
“Smell is an incredibly powerful sense for us that connects to the subconscious very quickly indeed”, says Gus. “I think we will see it used increasingly over the next five years as engineers look to involve our senses but only on a fairly rudimentary level. The trouble is that it’s very hard to model mathematically. It operates on random diffusion and can’t be predicted in the same way as the precise mapping of optics”.
This is, of course, the very reason it works so well to warn of gas leaks. If a colour additive were used instead, it would be too diffuse in the air to spot; by the time you could see it, the concentration of the noxious chemicals would already be strong enough to overwhelm you.
Taking it one step further, there’s a very good chance we’ll find more smell alerts in the future. Pungent odours could be used to warn the deaf and blind, or we could all find ourselves waking up to smell alarm clocks. Sadly, though, it’s unlikely to be to the aroma of hot buttered toast, green meadows or freshly baked bread.
“The problem is that the chemistry of smell is just too complex”, says Gus. “We’ll have these things one day but it would take a smell printer of sorts and that would mean having all the ingredients to make all the possible smells on demand. It would be the equivalent of needing hundreds of thousands of toner cartridges at the moment, so I imagine we’re a good 50-100 years away from that right now”.
Of course, that wouldn’t rule out devices that could emit a single odour. Imagine, for example, some kind of AR frame or small nostril-mounted unit tucked away up your hooter that could amplify a particular smell by triggering the release of more particles when it comes into contact with just one or two in its environment. It could make it much easier for a pig to track down truffles, a dog to find the scent of a kidnapped child, or even for us to detect pheromone changes in each other. That’d certainly add a twist to hanging out in bars.
Taste
Of all the underused senses in our interactions with machines, taste has to be the number one. There’s not an awful lot of licking gadgets that goes on. In fact, there’s none. We save our most private sense for food and for sex, and the second of those is really about touch anyway – it just so happens to involve the organ we also use for tasting.
Sadly, any idea of online food shopping where we get to try the ingredients before putting them in the shopping cart suffers the same problem as smell. For the near future, it’s simply going to be impossible to create a library of synthetic flavours on demand, even if we were interested in the idea of licking our computer displays. At the end of the day, we’d probably rather just trust that broccoli tastes like broccoli, and those for whom it might be useful, such as epicures or wine connoisseurs, would probably find the idea of choosing by man-made flavours offensive anyway.
So, the only UI we really have to work with our tongues is one that, like kissing, is an extension of touch – but this time it’s enabling the blind to see. Allow the video to explain…
Conclusions
While it may be a shame that our senses of smell and taste will probably remain very much absent from our interactions with machines come 2015, it’s at least positive to know that there’s still plenty of space to explore with sight, touch and even sound too.
By 2015, we can expect multitouch to have become polytouch, gesture control to be bigger than ever, and both voice activation and touch options to be more or less ubiquitous. Proper, full holographic environments will probably be a push within five years, but expect us to be writing lots of news about them, with a view to their reaching consumers a short time later. Until then, it’s back to pecking the plastic.