[From Wired; more information is available at Patently Apple]
[Image: An illustration showing a combined system using target tracking and a touchscreen. Image: Free Patents Online.]
Apple Granted Patent for High-Concept Input Techniques
By Christina Bonnington
July 24, 2012
Apple was granted a sweeping, multipronged patent today by the United States Patent and Trademark Office — but you may need a degree in theoretical physics to divine exactly which technologies the patent is protecting. The patent, “Method for providing human input to a computer,” addresses both new and existing ways we interact with touchscreen devices, and covers everything from computer interfaces, to Kinect-style gaming and virtual-reality gloves, to touch input for vehicles.
“This is a classic submarine patent,” General Patent Corporation CEO Alexander Poltorak told Wired, referring to patents filed before November 2000, which remain secret until they’re granted. Hidden for years, a submarine patent can be significantly updated and amended until it finally emerges — at which point the whole world might be tripping over its patented technology. “It was just issued this year, but if you look at the history, patent continuation after continuation, it was originally filed in 1995. It has been submerged under water in the U.S. patent office for a long time,” Poltorak said.
Apple acquired this patent from a Canadian inventor named Timothy Pryor. Today’s granted patent dates back to an application filed in 2009, and although extensive, Poltorak says it’s not actually broad per se: “It describes a lot of things, but each claim is, in its own way, very specific,” he said.
The patent describes a number of input techniques for computer systems, including optical techniques for detecting surface distortion from a physical input like the touch of a finger. One major distinction of this type of input is that it would register displacement information along the X, Y and Z axes, as opposed to just X and Y. “No known commercial devices can do this, and a limited technology set exists for this purpose—especially over large extensive screen or pad areas,” the patent description states.
Apparently, the patented technique would allow for “a potential ‘four’ or ‘five dimensional’ capability” rather than just two-dimensional input sensitivity. (It’s nice to know the USPTO recognizes the fifth dimension, right?) Regardless, the patent states the technology would allow you to press harder or softer to change the degree of input, like when you’re drawing a line.
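To make the idea concrete, here is a minimal sketch of what pressure-sensitive line drawing could look like in code. Everything here — the function name, the normalized Z range, the width limits — is invented for illustration and is not drawn from the patent itself:

```python
# Hypothetical sketch: map a Z-axis displacement (how hard a finger
# presses) to a stroke width, so pressing harder while drawing a line
# produces a thicker stroke. Ranges and names are illustrative only.

def stroke_width(z_displacement, min_width=1.0, max_width=10.0, max_z=1.0):
    """Linearly map a normalized Z displacement to a line width."""
    z = max(0.0, min(z_displacement, max_z))  # clamp to the sensor's range
    return min_width + (max_width - min_width) * (z / max_z)

# A light touch yields a thin line; a firm press, a thick one.
print(stroke_width(0.1))
print(stroke_width(0.9))
```

The point is simply that a third sensed axis turns a binary touch into a continuous control, which is what the patent's “degree of input” language appears to describe.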
Apple’s invention could also detect complex area “signatures” instead of merely points, differentiating input from the palm of the hand, for example, which could help blind and visually impaired people use a device. The technology would also be able to dynamically store and remember input signatures, either physically or in memory.
The method also accounts for tactile feedback, such as vibrations or a burst of air. This feedback would be particularly useful for controlling things in an environment like the inside of a car’s cabin, where you need to keep your eyes on the road, according to the patent.
“The capability of the invention to be ergonomically and ‘naturally’ compatible with human data entry is a major feature,” the patent description states. Although the patent largely discusses touch inputs, the techniques described aren’t limited to touchscreen devices: “Sensing of the screen is non contact, and the sensing screen can be as simple as a piece of plate glass, or a wall.”
Apple’s technology could also be used for gaming applications. One of the proposed input methods would allow a player to play a game as he would a physical sport, along the lines of the Microsoft Kinect platform.
Does all of this sound a bit confusing? You’re not alone. The patent uses inscrutable language, and it’s debatable whether the patent office even knew what it was granting. “It’s not easy to unravel what this patent covers, and whether it’s valid or not,” Poltorak said.
On Tuesday, Apple was also granted a patent relating to providing information based on object recognition, which could appear in future iPhones.
The system would first detect your current environment based on data from the phone’s camera, an IR sensor, or an RFID tag reader. You could then switch the mode to “Museum” or “Restaurant” to search for pieces of art in a museum or locate nearby restaurants, respectively. The system could also keep a log of previously identified objects, which the user could compile into an album.
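The flow described above — pick a mode, identify objects accordingly, and log what was found — can be sketched in a few lines. The mode table, class, and method names below are all hypothetical, chosen only to illustrate the described behavior:

```python
# Hypothetical sketch of the mode-switching recognition flow: each mode
# handles an identified object differently, and every identification is
# appended to a log the user could later compile into an album.

MODE_ACTIONS = {
    "Museum": lambda obj: f"Looked up artwork: {obj}",
    "Restaurant": lambda obj: f"Found nearby restaurant: {obj}",
}

class RecognitionSession:
    def __init__(self, mode):
        self.mode = mode
        self.log = []  # history of previously identified objects

    def identify(self, obj):
        result = MODE_ACTIONS[self.mode](obj)
        self.log.append(obj)  # retained so the user can build an album
        return result

session = RecognitionSession("Museum")
print(session.identify("Starry Night"))
print(session.log)
```

The design point is that the sensor data only suggests a context; the user-selected mode decides how a recognized object is interpreted.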