

Computing with a wave of the hand (w/ Video)

Media Lab researchers demonstrate a laboratory mockup of a thin-screen LCD display with built-in optical sensors. Photo: Matthew Hirsch, Douglas Lanman, Ramesh Raskar, Henry Holtzman

(PhysOrg.com) -- The iPhone's familiar touch screen uses capacitive sensing, in which the proximity of a finger alters the electrical field between sensors in the screen. A competing approach, which uses embedded optical sensors to track the movement of the user's fingers, is just now coming to market. But researchers at MIT's Media Lab have already figured out how to use such sensors to turn displays into giant lensless cameras. On Dec. 19 at Siggraph Asia -- a recent spinoff of Siggraph, the premier graphics research conference -- the MIT team is presenting the first application of its work: a display that lets users manipulate on-screen images using hand gestures.

Many other researchers have been working on such gestural interfaces, which would, for example, allow computer users to drag windows around a screen simply by pointing at them and moving their fingers, or to rotate a virtual object with a flick of the wrist. Some large-scale gestural interfaces have already been commercialized, such as those developed by the Media Lab's Hiroshi Ishii, whose work was the basis for the system that Tom Cruise's character uses in the movie Minority Report.

But "those usually involve having a roomful of expensive cameras or wearing tracking tags on your fingers," says Matthew Hirsch, a PhD candidate at the Media Lab who, along with Media Lab professors Ramesh Raskar and Henry Holtzman and visiting researcher Douglas Lanman, developed the new display. Some experimental systems, such as Microsoft's Project Natal, instead use small cameras embedded in a display to capture gestural information. But because the cameras are offset from the center of the screen, they don't work well at short distances, and they can't provide a seamless transition from gestural to touch interactions. Cameras set far enough behind the screen can provide that transition, as they do in Microsoft's SecondLight, but they add to the display's thickness and require costly hardware to render the screen alternately transparent and opaque. "The goal with this is to be able to incorporate the gestural display into a thin LCD device" -- like a cell phone -- "and to be able to do it without wearing gloves or anything like that," Hirsch says.

The Media Lab system requires an array of liquid crystals, as in an ordinary LCD display, with an array of optical sensors right behind it. The liquid crystals serve, in a sense, as a lens, displaying a black-and-white pattern that lets light through to the sensors. But that pattern alternates so rapidly with whatever the LCD is otherwise displaying -- the list of apps on a smart phone, for instance, or the virtual world of a video game -- that the viewer never notices it.

The simplest way to explain how the system works, Lanman says, is to imagine that, instead of an LCD, an array of pinholes is placed in front of the sensors. Light passing through each pinhole will strike a small block of sensors, producing a low-resolution image. Since each pinhole image is taken from a slightly different position, all the images together provide a good deal of depth information about whatever lies before the screen. An array of liquid crystals could simulate a sheet of pinholes simply by displaying a pattern in which, say, the central pixel in each 19-by-19 block of pixels is white (transparent) while all the others are black.
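The pinhole arrangement Lanman describes is simple to sketch in code. The NumPy snippet below is only an illustration of the idea, not the team's software; the number of blocks is an arbitrary choice, and only the 19-by-19 block size comes from the article:

```python
import numpy as np

# One 19x19 block with a single transparent ("white") pixel at its center.
block = 19
tile = np.zeros((block, block))
tile[block // 2, block // 2] = 1.0  # the pinhole

# Tile the block across a small hypothetical screen (3 x 4 blocks).
blocks_y, blocks_x = 3, 4
mask = np.tile(tile, (blocks_y, blocks_x))

# Each pinhole passes light to the small patch of sensors behind its
# block, giving one low-resolution image per block. The catch: only
# 1 pixel in 361 is open, so very little light gets through.
print(mask.sum() / mask.size)  # -> about 0.00277, i.e. 1/361
```

Because each block's image is taken from a slightly different position, comparing the shifted low-resolution images is what yields the depth information the article mentions.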

The problem with pinholes, Lanman explains, is that they allow very little light to reach the sensors, so they require exposure times that are too long to be practical. So the LCD instead displays a pattern in which each 19-by-19 block is subdivided into a regular arrangement of black-and-white rectangles of different sizes. Since the white rectangles cover as much of each block as the black ones, the blocks pass much more light.

The 19-by-19 blocks are all adjacent to each other, however, so the images they pass to the sensors overlap in a confusing jumble. But the pattern of black-and-white squares allows the system to computationally disentangle the images, capturing the same depth information that a pinhole array would, but capturing it much more quickly.
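The disentangling step can be illustrated with a one-dimensional toy model. The sketch below is hypothetical code, not from the paper: it uses a length-7 maximal-length (m-)sequence in place of the team's 19-by-19 rectangle pattern, but it captures the same principle -- a mask that is about half open jumbles the scene by convolution, and because the mask is known, a second convolution undoes the jumble exactly:

```python
import numpy as np

# A length-7 m-sequence stands in for one row of the mask: roughly half
# its entries are open, so it passes far more light than a pinhole, yet
# it remains exactly invertible.
mask = np.array([1, 1, 1, 0, 1, 0, 0], dtype=float)

# A toy one-dimensional "scene" in front of the screen.
scene = np.array([0.0, 0.0, 5.0, 1.0, 0.0, 2.0, 0.0])

# Each open mask element casts a shifted copy of the scene onto the
# sensors, so the sensors record the circular convolution of the two:
# the overlapping "confusing jumble".
captured = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))

# Decoding kernel: the mask remapped to +/-1 and circularly reversed.
# For an m-sequence, convolving the mask with this kernel gives
# 4 * delta, so one more convolution disentangles the jumble.
decode = np.roll((2.0 * mask - 1.0)[::-1], 1)
recovered = np.real(np.fft.ifft(np.fft.fft(captured) * np.fft.fft(decode))) / 4.0

print(np.allclose(recovered, scene))  # True: the scene is recovered exactly
```

A plain random pattern would not guarantee this invertibility, which is why coded-aperture systems use carefully structured masks; the real display applies the same idea in two dimensions and, by keeping the per-block images separate, preserves the angular information needed for depth.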

LCDs with built-in optical sensors are so new that the Media Lab researchers haven't been able to procure any yet, but they mocked up a display in the lab to test their approach. Like some existing touch screen systems, the mockup uses a camera some distance from the screen to record the images that pass through the blocks of black-and-white squares. But it provides a way to determine whether the algorithms that control the system would work in a real-world setting. In experiments in the lab, the researchers showed that they could manipulate on-screen objects using hand gestures and move seamlessly between gestural control and ordinary touch screen interactions.

Of the current crop of experimental gestural interfaces, "I like this one because it's really integrated into the display," says Paul Debevec, director of the Graphics Laboratory at the University of Southern California's Institute for Creative Technologies, whose doctoral thesis led to the innovative visual effects in the movie The Matrix. "Everyone needs to have a display anyway. And it is much better than just figuring out where the fingertips are or a kind of motion-capture situation. It's really a full three-dimensional image of the person's hand that's in front of the display."

Indeed, the researchers are already exploring the possibility of using the new system to turn the display into a high-resolution camera. Instead of capturing low-resolution three-dimensional images, a different pattern of black-and-white squares could capture a two-dimensional image at a specific focal depth. Since the resolution of that image would be proportional to the number of sensors embedded in the screen, it could be much higher than that of the images captured by a conventional webcam.

Darkening all but the central pixel in a 19-by-19 block turns an array of liquid crystals into a pinhole camera; but a pattern of black-and-white rectangles of different sizes passes much more light while providing a way to computationally disentangle overlapping images. Diagrams: Matthew Hirsch, Douglas Lanman, Ramesh Raskar, Henry Holtzman

Raskar, who directs the Media Lab's Camera Culture Group, stresses that the work has even broader implications than simply converting displays into cameras. In the history of computation, he says, "intelligence moved from the mainframe to the desktop to the mobile device, and now it's moving into the screen." The idea that "every pixel has a computer behind it," he says, offers opportunities to reimagine how humans and computers interact.

"It's kind of the hallmark of a lot of Ramesh's work," says Debevec. "He comes up with crazy cameras with the guts hanging out of them and strange arrangements of different mechanics in something that at first you're wondering, 'Well, why would you do that?' No one quite does things the way that he does because no one else thinks the way he does. Then you start to understand it and you realize that there's actually a very interesting new thing happening."

Provided by Massachusetts Institute of Technology

Citation: Computing with a wave of the hand (w/ Video) (2009, December 11) retrieved 8 July 2025 from /news/2009-12-video_1.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
