Technology News | June 2006

High-Tech Sign Language Could Replace the Mouse
Dawn C. Chmielewski - LATimes


G-speak's technology allows users to interact with computers using hand gestures.
John Underkoffler wants to build a better mouse.

Working in a downtown Los Angeles loft, the co-founder of G-speak is developing technology to replace the ubiquitous computer mouse with a more natural interface: human hands.

Riffs on Underkoffler's technology have featured prominently in movies and television shows. The 2002 sci-fi film "Minority Report," for example, included a dramatic scene in which Tom Cruise gestures with gloved hands to navigate and manipulate crime scene information on translucent screens.

That scene was the product of movie industry special effects. But it marked the birth of G-speak.

Underkoffler moved to Los Angeles to serve as a science and technology advisor on the Steven Spielberg movie. He came from the Massachusetts Institute of Technology Media Lab, where he was a doctoral candidate researching new ways for people to interact with computers.

"The film consultancy stuff in a way was a detour," Underkoffler said. "I'd been building stuff like G-speak before 'Minority Report,' and it was very natural to return to that after. Except that the movie also had the interesting effect of, in some sense, validating a specific form of those ideas. Audiences really responded to those sequences. So in a weird way we'd been able to use the film mechanism to prototype a technology."

In May 2005, Underkoffler and other former MIT colleagues pooled their own money and formed G-speak to find commercial applications for what's called gestural technology. The eight-person company has contracts with aerospace companies and federal agencies interested in the technology. It won't disclose revenue.

"It seems to us that manipulating things with our hands is a fundamental way to interact with the world," said Kwindla Hultman Kramer, G-speak's director of business development.

To demonstrate how the technology works, Underkoffler donned a pair of black gloves with reflective beads attached to the back of the hand, the thumb and two fingers. He drew a pair of heavy black curtains to block light from entering the loft and stepped onto the center of a makeshift stage, surrounded by a scaffolding holding eight infrared-sensitive cameras.

The cameras use light reflected off the BB-sized beads to calculate the position of Underkoffler's hands and interpret each of about two dozen core gestures as a computer command. Facing a 16-foot screen, he held his hands like a kid brandishing imaginary twin six-shooters (index finger pointed, thumb raised) to tag objects on screen, then pressed the thumb down to grab an object, in this case a letter of the alphabet.
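
The article does not describe G-speak's software internally, but the pipeline it sketches (camera images to marker positions to a recognized pose, and from there to a command) can be illustrated with a toy Python example. Everything below, from marker names to thresholds, is an invented placeholder rather than G-speak's actual code.

import numpy as np

# Purely illustrative: classify a "pistol" hand pose (index finger extended,
# thumb raised) from 3-D marker positions recovered by a camera array.
# Marker names, thresholds and pose labels are invented, not G-speak's.
def classify_pose(markers):
    """markers: dict mapping marker name -> np.array([x, y, z]) in meters."""
    back = markers["back_of_hand"]
    index_tip = markers["index_tip"]
    thumb_tip = markers["thumb_tip"]
    middle_tip = markers["middle_tip"]
    index_extended = np.linalg.norm(index_tip - back) > 0.15   # finger pointed
    middle_curled = np.linalg.norm(middle_tip - back) < 0.08   # fist closed
    thumb_raised = (thumb_tip - back)[1] > 0.03                # y axis is "up"
    if index_extended and middle_curled and thumb_raised:
        return "point"            # the six-shooter pose: tag an object
    if index_extended and middle_curled:
        return "point_and_grab"   # thumb pressed down: grab the tagged object
    return "unknown"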

Holding his hands vertically, thumbs touching, and gesturing to the left like a Little League coach signaling a base runner to lead off first base sent the data sweeping to the left edge of the screen. Making an OK sign at the screen with thumb and index finger caused the text to form a circle.
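
In the same illustrative spirit, mapping a recognized gesture to a screen command could be as simple as a lookup table. The gesture and command names below are invented for the sketch and are not part of any real G-speak interface.

# Illustrative only: once a pose or motion is recognized, mapping it to a
# display command can be a simple lookup. Names are invented placeholders.
GESTURE_COMMANDS = {
    "point_and_grab": "grab_object",     # thumb pressed: pick up a letter
    "sweep_left":     "push_data_left",  # two-handed sweep: data slides left
    "ok_sign":        "arrange_circle",  # OK sign: text reflows into a circle
}

def command_for(gesture):
    """Translate a recognized gesture into a display command, or None."""
    return GESTURE_COMMANDS.get(gesture)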

Underkoffler said gestures were a more nuanced way to interact with images.

He displayed a panoramic scene of downtown shot from the roof of G-speak's building. He panned the landscape to the left with a simple sweeping gesture and zoomed in on a distant skyscraper by pushing his hand toward the screen.
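
For a rough, hypothetical sense of how that might translate into code, lateral hand displacement could drive pan and a push toward the screen could drive zoom; the gains and axis convention below are assumptions made for the sketch, not details reported here.

# Illustrative mapping from hand motion to pan and zoom of a panorama.
# Gains and the axis convention (z toward the screen) are assumptions.
PAN_GAIN = 2000.0   # pixels of pan per meter of lateral hand movement
ZOOM_GAIN = 4.0     # zoom change per meter of push toward the screen

def update_view(view, hand_prev, hand_now):
    """view: dict with 'pan_x' (pixels) and 'zoom' (scale >= 1).
    hand_prev, hand_now: (x, y, z) hand positions in meters."""
    dx = hand_now[0] - hand_prev[0]   # lateral sweep -> pan
    dz = hand_now[2] - hand_prev[2]   # push toward the screen -> zoom in
    view["pan_x"] += PAN_GAIN * dx
    view["zoom"] = max(1.0, view["zoom"] * (1.0 + ZOOM_GAIN * dz))
    return view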

The technology affords TiVo-like control over video. Underkoffler held his hands upright in a "stop" gesture, and footage of traffic moving through a nearby intersection froze. A chopping motion to the left, hands parallel to the floor, reversed the footage; a chop to the right fast-forwarded it.
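
That behavior could be modeled as a tiny state holder mapping gestures to a playback rate; again, the gesture names and rates below are illustrative assumptions, not G-speak's actual mappings.

# Illustrative transport control: gestures set a playback rate.
class VideoTransport:
    def __init__(self):
        self.rate = 1.0               # 1.0 = normal play, 0.0 = frozen

    def on_gesture(self, gesture):
        if gesture == "open_palms":   # hands upright: stop
            self.rate = 0.0
        elif gesture == "chop_left":  # chop to the left: reverse
            self.rate = -2.0
        elif gesture == "chop_right": # chop to the right: fast-forward
            self.rate = 2.0
        return self.rate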

Unlike a computer mouse, whose motion is limited to a flat two-dimensional plane, this system of gestures takes advantage of the position of the hands and fingers in space to manipulate images in three dimensions.
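
The difference is easy to see in a schematic sketch (not drawn from G-speak): a mouse delta moves an object only within a plane, while a tracked hand position supplies all three axes each frame.

# Schematic contrast, not G-speak code: a mouse delta moves an object only
# within a plane, while a tracked hand supplies all three axes every frame.
def move_with_mouse(obj_pos, mouse_dx, mouse_dy, gain=0.01):
    """obj_pos: [x, y, z]; mouse deltas affect only the x-y plane."""
    return [obj_pos[0] + gain * mouse_dx,
            obj_pos[1] + gain * mouse_dy,
            obj_pos[2]]               # depth stays untouched

def move_with_hand(obj_pos, hand_prev, hand_now):
    """The object follows the hand through all three axes."""
    return [obj_pos[i] + (hand_now[i] - hand_prev[i]) for i in range(3)]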

Kramer said the technology had attracted interest from Hollywood studios, which are considering ways to incorporate it into film editing, and the video game industry. It also has potential applications in other areas, including computer-aided design, medical imaging, air-traffic control, shipping logistics and homeland security.

Alex McDowell, production designer on "Minority Report" and several other films, said the G-speak technology has applications in the pre-visualization process of filmmaking, in which three-dimensional virtual environments are created on a computer to help the director evaluate where to place characters or the camera within a scene.

The director currently relies on an intermediary, a character modeler or camera operator, to maneuver the camera or manipulate the placement of characters using a keyboard and mouse, McDowell said. Gestural technology would let the director use his hands to manipulate objects within the computer-generated space or position the camera just so.

"That's a paramount kind of use case for getting rid of the mouse," said McDowell, who is working on "Bee Movie," a 3-D animated film for DreamWorks. "I'm in exactly the same situation when I'm working with 3-D. If we're all working in a space that was G-speak friendly, I could gesture to where I want to go. I can physically work in that space, I can grab that figure or gesture to change that to red."

But gestural technology is not without its critics.

Ed Chi, senior research scientist at Palo Alto Research Center, which did pioneering work on the mouse, said new interfaces based on speech or hand gestures have limitations. Researchers discovered one of the most obvious drawbacks while developing virtual reality "caves," 3-D computer environments that let people explore the Lascaux caves or the ceiling of the Sistine Chapel.

"It turns out people are lazy," Chi said. "They don't want to move a lot to interact with their technology."

Other obstacles can be more problematic. For common human behaviors such as gestures or speech to replace the mouse, computers need to be placed in a particular "mode" to respond. Otherwise, the computer won't know to ignore hand gestures made during casual conversation, or words spoken while answering a phone call.
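
One common workaround, sketched hypothetically below, is a "clutch": the system interprets poses only while the user has deliberately engaged gesture mode, so hands waving in conversation are ignored. The engage and disengage poses here are invented for illustration.

# Hypothetical "clutch" for the mode problem: poses are interpreted only
# while gesture mode is deliberately engaged.
class GestureEngine:
    def __init__(self):
        self.engaged = False

    def on_pose(self, pose):
        if pose == "engage_clutch":       # e.g. a deliberate two-fisted pose
            self.engaged = True
            return None
        if pose == "disengage_clutch":
            self.engaged = False
            return None
        return pose if self.engaged else None   # outside the mode, ignore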

It was just such a problem that derailed a promising new technology that used the glance of an eye to control the mouse cursor, Chi said.

"A small issue like this is enough to potentially kill a technology," Chi said.

One of the biggest obstacles may be inertia.

Charles Wolf, a computers and peripherals analyst with Needham & Co., said there is a reason the computer mouse has survived all these years despite other improvements to the computer: it's easy to use.

"The new technology would really have to be compelling to get people to switch," Wolf said. "It would have to be dramatically easier to use, and I don't know if any such thing exists, despite the claims of this new company."


