Why the human body will be the next computer interface

[From Fast Company’s Co.Design, where the story includes additional images and reader comments]

[Image: Hand sensor]

Fjord charts the major innovations of the past, and predicts a future of totally intuitive “micro gestures and expressions” that will control our devices.

By Andy Goodman and Marco Righetto (of Fjord)
March 5, 2013

By now you’ve probably heard a lot about wearables, living services, the Internet of Things, and smart materials. As designers working in these realms, we’ve begun to think about even weirder and wilder things, envisioning a future where evolved technology is embedded inside our digestive tracts, sense organs, blood vessels, and even our cells.

As a service design consultancy, we focus on how systems and services work rather than on static products. We investigate hypothetical futures through scenarios that involve production and distribution chains and how people will use advanced technology. Although scientifically grounded, the scenarios we propose are based less on established facts than on observation; they are designed to spark a dialogue around technologies that are still in science labs.

To see the future, first we must understand the past. Humans have been interfacing with machines for thousands of years. We seem to be intrinsically built to desire this communion with the made world. This blending of the mechanical and biological has often been described as a “natural” evolutionary process by such great thinkers as Marshall McLuhan in the ’50s and more recently Kevin Kelly in his seminal book What Technology Wants. So by looking at the long timeline of computer design we can see waves of change and future ripples. Here’s our brief and apocryphal history of the human-computer interface.

1801: the first programmable machine

Let’s skip the abacus and the Pascal adding machine and move straight to the 19th-century Jacquard loom. Though not a computer, it used punched cards to change the operation of the machine, an idea foundational to the invention of computing. For the first time, people could change the way a machine worked by giving it instructions.

1943: Colossus valve computer

Colossus was critical not only for its role in winning WWII but for being the world’s first programmable electronic digital computer. Tommy Flowers’s Colossus can be described as the first modern computing machine. It used banks of valves acting as on-off switches to make complex calculations that no human could manage in any kind of realistic timeframe. Without it, the codebreakers at Bletchley Park would never have cracked the high-level Nazi messages encrypted by the Lorenz cipher machines, the teleprinter counterparts to the more famous Enigma.

1953: the FORTRAN punch card

Very similar in spirit to the Jacquard loom’s punched cards, here was a system that could order a machine to perform many different calculations and functions. Prior to FORTRAN, a machine was in practice set up to perform a single function, and the input merely changed the pattern of that function. Now we had entered a world of multifunctional thinking machines.

So really it was a century and a half before the fundamental paradigm of computing changed, but one thing stayed constant: In order to use these machines we had to become like them. We had to think like them and talk to them in their language. They were immobile, monolithic, mute devices, which demanded our sacrifice at the twin altars of engineering and math. Without deep knowledge, they were impenetrable and useless lumps of wood, metal, and plastic. What happened next (in a truly “ahead of its time” invention) was the first idea that began the slow shift in emphasis to a more human-centric way of interfacing with the machine.

1961: the first natural computer interface

Something in the zeitgeist demanded that the ’60s see a humanistic vision appear in the rapidly expanding sphere of computer engineering. And so, in a typically counterintuitive move, the RAND Corporation, that bastion of secretive government and military invention, created the first tablet-and-pen interface. It remained hidden for many years as a military secret, but this was definitively the first computer interface built around a natural activity: drawing with a pen on paper.

1979: touch screens appear on the horizon

Although the Fairlight CMI, with its screen operated by a light pen, gave the first glimpse of touch-screen-style interaction, it was many years before the technology was affordable. The Fairlight cost around $20,000 and was out of reach for everyone apart from the likes of Stevie Wonder, Duran Duran, and Thomas Dolby. Equally remarkable, beyond the novelty of drawing directly on the screen, was that it used the comprehensible interface of the musical score. However, it was still pretty damn complicated to work, and was thus neither truly democratic nor humanistic: almost all musicians needed to hire a programmer to create their synthetic soundscapes.

1980: a regression occurs

Although hugely important in popularizing the home computer, MS-DOS, with its bare green text glowing on a black screen, showed only the barest hint of human warmth. A layer down, though, there were important ideas, such as accessible help systems and an easily learned command interface that gave access to the logical but not very user-friendly hierarchy of the file system. All told, however, it was a retrogressive step, back to the exposed machine interface.

1984: a more human space

Apple took all the innovations from the stubbornly uncommercial minds at Xerox PARC and made them work for a mass audience. The mouse was an incredibly concise invention, bringing ergonomic touch to the desktop interface. Put a mouse in the hands of a novice and they almost immediately grasp the analogy between movement on the plane of the desk and the corresponding movement of the pointer on the screen. That mapping was the result of much experimentation with the exact gearing ratio, but it felt natural and effortless. The iconographic approach to the interface, meanwhile, was also a massive step toward an intuitive computer world that closely resembled familiar physical objects.
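
As a thought experiment, here is a minimal sketch of what such a gearing ratio might look like in code; the gain values and speed thresholds are invented for illustration and are not Apple’s actual acceleration curve.

```python
# Illustrative sketch of a pointer "gearing ratio" (acceleration curve).
# The gain values below are invented; real systems tune such curves by experiment.

def pointer_delta(mouse_delta_mm: float) -> float:
    """Map a physical mouse movement (mm) to an on-screen movement (pixels).

    Slow, precise movements get a low gain; fast flicks get a higher gain,
    so the hand never has to travel as far as the pointer does.
    """
    speed = abs(mouse_delta_mm)
    if speed < 2:        # fine positioning: nearly 1:1
        gain = 1.0
    elif speed < 10:     # ordinary movement: moderate amplification
        gain = 2.5
    else:                # fast flick across the desk: strong amplification
        gain = 4.0
    return mouse_delta_mm * gain

if __name__ == "__main__":
    for delta in (1, 5, 20):
        print(f"{delta} mm of mouse travel -> {pointer_delta(delta):.0f} px on screen")
```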

Then, for a long time nothing happened; except, that is, iteration after iteration of the same metaphors of the Macintosh, each a further refinement or regression, depending on the quality of the design.

2007: touch screen computing finally arrives

The iPhone was not the first touch screen by any means, but it was the most significant, demonstrating that we really wanted to get our hands on, even inside, the interface, as if yearning to touch the actual data and feel the electrons passing through the display. We were now almost making contact, just a thin sheet of glass between us. Paradoxically, the visual metaphors had hardly changed in over 20 years. Maybe this was all that was needed.

2009: Kinect blows it all wide open again

Of course, just when everything seems stable and static, a wild and unpredictable event occurs. Kinect (and let’s not forget the honorable Wii) showed a new way of interacting in which the body becomes the controller. The game format allows a one-to-one relationship between the physical body and the virtual body: a leg movement corresponds to a kick on screen; a wave of a hand becomes a haymaker knocking out your opponent. This is very satisfying and instantly accessible, but in the end it is no good for anything more complex than role-playing.

2011: Siri, the no-interface interface

For the third time in 30 years, Apple took an existing, poorly implemented technology and made it work, properly, for the masses. Siri does work, and it is a leap forward in terms of precision. But it is hard to argue that it is any more sophisticated than a 1980s text-based adventure: combine a few verbs and nouns and it comes back with a relevant response. Siri still understands you no better than those primitive text parsers did.
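
For anyone who never played one, those old parsers worked roughly like the hypothetical sketch below: match a known verb and a known noun, return a canned response, and understand nothing else. The vocabulary and replies are invented for illustration.

```python
# A minimal sketch of the kind of verb-noun parser used by 1980s text
# adventures. The vocabulary and responses here are invented.

RESPONSES = {
    ("open", "door"): "The door creaks open.",
    ("take", "lamp"): "You pick up the brass lamp.",
    ("play", "music"): "A tinny tune fills the room.",
}

def parse(command: str) -> str:
    """Find one known verb and one known noun; everything else is ignored."""
    words = command.lower().split()
    verbs = {v for v, _ in RESPONSES}
    nouns = {n for _, n in RESPONSES}
    verb = next((w for w in words if w in verbs), None)
    noun = next((w for w in words if w in nouns), None)
    return RESPONSES.get((verb, noun), "I don't understand that.")

if __name__ == "__main__":
    print(parse("Please open the door"))   # The door creaks open.
    print(parse("Sing me a song"))         # I don't understand that.
```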

So, when laid out on a timeline, it is clear that we have dramatically shifted the meeting point of man and machine. It is now almost entirely weighted toward the human languages of symbols, words, and gestures. Nevertheless, that last inch seems to be a vast chasm that cannot be bridged. We have yet to devise interfaces that effortlessly give us what we want and need. We still have to learn some kind of rules and deal with an interpretation layer that is never wholly natural.

A predictive world of sensors

Some early attempts at predictive interaction already exist: the Japanese vending machine that recognizes the age and sex of the user and presents choices based on that demographic breakdown, and the brilliant but scary ability of McDonald’s to predict, with 80% accuracy, what you are going to order based on the car you drive. The latter was needed so the fast-food chain could shave down the unacceptable 30-second wait while your drive-thru order was prepared.

The sensor world that makes these kinds of predictive systems possible will only become richer and more precise. Big data will inform on-demand services, providing maximum efficiency and total customization. It will be a convincing illusion of perfect adaptation to need. However, there are still three more phases of this evolution that we see as being necessary before the machine really becomes domesticated.

The first evolutionary leap is almost upon us: embedding technology in our bodies. This finally achieves the half-acknowledged desire not only to touch machines but to have them inside us. Well, maybe that’s pushing it a bit; don’t think that we’re going to become cyborgs. But great artists like David Cronenberg have imagined what it would be like to have machines embedded in humans and what kinds of advantages that could bring. Dramatic embellishments aside, the path is clear: Beyond mechanical hips and electric hearts, we will put intelligences inside us that can monitor, inform, aid, and heal.

Embedded tech brings a new language of interaction

The new language will be ultra-subtle and totally intuitive, built not on crude body movements but on subtle expressions and micro-gestures. The relationship is akin to that between the computer mouse and the screen: the Mac interface would never have worked if you had needed to move the mouse the same distance as the pointer moved on the screen. It would have been annoying and deeply unergonomic. The same goes for the gestural interface. Why swipe your whole arm when you can just rub your fingers together? What could be more natural than staring at something to select it, or nodding to approve something? This is the world that will become possible when we have hundreds of tiny sensors mapping every movement, outside and within our bodies. For privacy, you will be able to use imperceptible movements, or even hidden ones such as flicking your tongue across your teeth.

Think about this scenario: You see someone you like at a party; his social profile is immediately projected onto your retina. Great, a 92% match. By staring at him for two seconds, you trigger a pairing protocol. He knows you want to pair because you are now glowing slightly red in his retinal display. Then you slide your tongue over your left incisor and press gently. This makes his left incisor tingle slightly. He responds by touching it. The pairing protocol is complete.
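
Read purely as an interaction design sketch, that scenario is a small state machine driven by micro-gesture events. The code below is entirely hypothetical; every event name and threshold is invented to show how such a pairing handshake might be modeled.

```python
# Hypothetical sketch of the pairing handshake described above, modeled as a
# tiny state machine. Every event name and threshold here is invented.

GAZE_THRESHOLD_S = 2.0  # sustained eye contact needed to start pairing


class PairingProtocol:
    def __init__(self):
        self.state = "idle"

    def on_gaze(self, seconds: float):
        # Staring at someone for two seconds triggers the pairing request.
        if self.state == "idle" and seconds >= GAZE_THRESHOLD_S:
            self.state = "request_sent"       # he now sees you glow red

    def on_incisor_press(self):
        # Sliding your tongue over your left incisor confirms your side.
        if self.state == "request_sent":
            self.state = "awaiting_response"  # his incisor tingles

    def on_partner_touch(self):
        # He touches his own incisor in response: pairing complete.
        if self.state == "awaiting_response":
            self.state = "paired"


if __name__ == "__main__":
    p = PairingProtocol()
    p.on_gaze(2.3)
    p.on_incisor_press()
    p.on_partner_touch()
    print(p.state)  # paired
```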

What is lovely about these micro-gestures and expressions is that they are totally intuitive. Who doesn’t stare a second too long at someone they fancy? And licking your lips is a spontaneously flirtatious gesture. The possible interactions are almost limitless and move us closer and closer to a natural human-computer interface. At that point, the really intriguing thing is that the interface has virtually disappeared: the screens are gone, and the input devices are dispersed around the body.

What we will explore in the next article is the endgame of this kind of technology, as we bring the organic into the machine and create a symbiotic world where DNA, nanobots, and synthetic biology are orchestrated to create the ultimate learning devices. We will also explore the role of the designer when there is no interface left to design: Will designers become choreographers and storytellers instead? Or will they disappear from the landscape entirely, replaced by algorithmic processes, artificial intelligence, and gene sequencing? What we can say for sure is that the speed of change is accelerating so rapidly that the advanced interface technologies we marvel at today will seem as outdated as FORTRAN before we have time to draw breath.


Comments

One response to “Why the human body will be the next computer interface”


  1. Already we are very dependent on our technology. A glitch or disconnection can cause a lot of stress, disrupting the fast-paced lives that technology has afforded us. If the human body will be the next computer interface, we will be vulnerable on a whole new level.
