Researchers Showcase Holographic TV Concept
The Consumer Electronics Show in Las Vegas in January 2011 was abuzz with a slew of prototype 3-D TVs, but if new research from the MIT Media Lab is any indication, holographic TVs could be close behind.
At the Society of Photo-Optical Instrumentation Engineers' (SPIE) Practical Holography conference in San Francisco the weekend of Jan. 23, members of Michael Bove's Object-Based Media Group presented a new system that can capture visual information using off-the-shelf electronics, send it over the Internet to a holographic display, and update the image at rates approaching those of feature films.
In November, researchers at the University of Arizona made headlines with an experimental holographic-video transmission system that used 16 cameras to capture data and whose display refreshed every two seconds. The new MIT system uses only one data-capture device - the new Kinect camera designed for Microsoft's Xbox gaming system - and averages about 15 frames per second. Moreover, the MIT researchers didn't get their hands on a Kinect until the end of December, and only in the week before the conference did they double the system's frame rate from seven to 15 frames per second. They're confident that with a little more time, they can boost the rate even higher, to the 24 frames per second of feature films or the 30 frames per second of TV, rates that create the illusion of continuous motion.
The difference between holograms and the type of 3-D images becoming common in movie theaters is frequently overlooked, Bove says. During a screening of, say, the 3-D version of Avatar, viewers on the far-left aisle of the theater see the same image that viewers on the far-right aisle do. That image may have depth, but it's filmed from a single perspective. As a viewer moves around a hologram, however, his or her perspective on the depicted object changes continuously, just as it would if the object were real.
A standard 3-D movie camera captures light bouncing off of an object at two different angles, one for each eye. But in the real world, light bounces off of objects at an infinite number of angles. Holographic video systems use devices that produce so-called diffraction fringes, fine patterns of light and dark that can bend the light passing through them in predictable ways. A dense enough array of fringe patterns, each bending light in a different direction, can simulate the effect of light bouncing off of a three-dimensional object.
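The bending Bove's group relies on follows the standard diffraction-grating relation, d·sin θ = mλ: the finer the fringe spacing d, the farther light of wavelength λ is deflected. A minimal sketch of that relationship, using illustrative numbers that are not from the article:

```python
import math

def deflection_angle_deg(fringe_pitch_m, wavelength_m=633e-9, order=1):
    """First-order deflection from the grating equation d*sin(theta) = m*lambda.
    Finer fringes (smaller pitch) bend the light through a larger angle."""
    s = order * wavelength_m / fringe_pitch_m
    if abs(s) > 1:
        raise ValueError("no propagating diffraction order at this pitch/wavelength")
    return math.degrees(math.asin(s))

# Illustrative only: red laser light and 2-micrometer fringes give roughly an
# 18-degree first-order deflection.
print(deflection_angle_deg(2e-6))
```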
The challenge with real-time holographic video is taking video data - in the case of the Kinect, the light intensity of image pixels and, for each of them, a measure of distance from the camera - and, on the fly, converting that data into a set of fringe patterns. Bove and his grad students - James Barabas, David Cranor, Sundeep Jolly and Dan Smalley - have made that challenge even tougher by limiting themselves to off-the-shelf hardware.
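The gist of that conversion can be shown with a toy calculation. The sketch below is a generic point-source approach, not the Media Lab group's actual algorithm, and every parameter value in it is an illustrative assumption: each camera pixel is treated as a point emitter sitting at the depth the Kinect reports for it, and the fringe pattern it would produce is summed across one line of the display.

```python
import numpy as np

def hologram_line(intensity, depth, num_samples=2048, pitch=2e-6, wavelength=633e-9):
    """Toy fringe computation for one horizontal hologram line.

    intensity, depth: 1-D arrays with one entry per camera pixel (depth in meters).
    Each pixel is modeled as a point source; its spherical wavefront, interfering
    with a plane reference wave, contributes a chirp-like fringe pattern, and the
    sum over all pixels is the line of samples that would drive the display."""
    k = 2 * np.pi / wavelength                                  # optical wavenumber
    x = (np.arange(num_samples) - num_samples / 2) * pitch      # sample positions along the hologram line
    x_src = np.linspace(x[0], x[-1], len(intensity))            # lateral position assigned to each pixel
    line = np.zeros(num_samples)
    for amp, z, xs in zip(intensity, depth, x_src):
        r = np.sqrt((x - xs) ** 2 + z ** 2)                     # distance from the point source to each sample
        line += amp * np.cos(k * r)                             # interference term with the reference beam
    return line

# Toy usage: a single bright pixel reported half a meter from the camera.
fringes = hologram_line(np.array([1.0]), np.array([0.5]))
```

Summing millions of such contributions for every frame, at video rates, is the load the group pushes onto commodity graphics hardware.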
"Really, the focus of our work in digital holography - and I think this makes us pretty much unique among the very small community of people in the world even doing holovideo - is that we?re trying to make a consumer product," Bove says. "So we've been saying, How do you make it as cheap as possible ? take advantage of hardware and standards and software and everything else that already exists?? Because that's the quickest way to bring it to market."
In the group's lab setup, the Kinect feeds data to an ordinary laptop, which relays it over the Internet. At the receiving end, a PC with three commercial graphics processing units - GPUs - computes the diffraction patterns.
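A bare-bones sketch of the transmitting half of such a pipeline is below; the hostname, port, and wire format are placeholders rather than details reported by the researchers, and the dummy arrays stand in for whatever the Kinect driver actually delivers.

```python
import socket
import struct

import numpy as np

def send_frame(sock, color, depth):
    """Ship one captured frame (per-pixel color plus per-pixel depth) to the
    rendering PC. Framing is hypothetical: an 8-byte big-endian length header
    followed by the raw color bytes and the raw depth bytes."""
    payload = color.tobytes() + depth.tobytes()
    sock.sendall(struct.pack("!Q", len(payload)) + payload)

if __name__ == "__main__":
    # Stand-in data at Kinect-like resolution; a real capture loop would pull
    # these arrays from the camera driver instead of allocating zeros.
    color = np.zeros((480, 640, 3), dtype=np.uint8)
    depth = np.zeros((480, 640), dtype=np.uint16)
    with socket.create_connection(("render-pc.example", 9000)) as sock:  # placeholder host/port
        send_frame(sock, color, depth)
```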
The one component of the researchers' experimental system that can't be bought at an electronics store for a couple hundred dollars is the holographic display itself. It's the result of decades of research that began with MIT's Stephen Benton, who built the first holographic video display in the late 1980s. (When Benton died in 2003, Bove's group inherited the holographic-video project.) The current project uses a display known as the Mark-II, a successor to Benton's original display that both Benton's and Bove's groups helped design. But Bove says that his group is developing a new display that is much more compact, produces larger images, and should also be cheaper to manufacture. (Bove and his students reported on an early version of the display at the same SPIE conference four years ago.)
Mark Lucente, director of display products for Zebra Imaging in Austin, Texas, which is commercializing holographic displays for videoconferencing applications, says that his company's prospective customers are often uncomfortable with the sheer computational intensity of holographic video. "It's very daunting," he says. "1.5 gigabytes per second are being generated on the fly." By demonstrating that off-the-shelf components can keep up with the computational load, Lucente says, Bove's group is "helping show that it's within the realm of possibility." Indeed, he says, "by taking a video game and using it as an input device, [Bove] shows that it's a hop, skip and a jump away from reality."
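For a sense of scale, a back-of-the-envelope calculation shows how quickly a figure like that accumulates; the per-frame sample count and byte depth below are purely illustrative assumptions, not specifications from Zebra Imaging or MIT.

```python
# Illustrative only: if each frame of fringe patterns amounts to 50 million
# one-byte samples and the display refreshes 30 times per second, the data
# rate already reaches 1.5 GB/s.
samples_per_frame = 50_000_000
bytes_per_sample = 1
frames_per_second = 30
print(samples_per_frame * bytes_per_sample * frames_per_second / 1e9, "GB/s")
```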
When the Media Lab researchers demonstrate their new technology at the conference in San Francisco, another grad student in Bove's group, Edwina Portocarrero, sporting a cowled tunic and a wig with side buns, will re-enact the scene from the first Star Wars movie in which a hologram of Princess Leia implores Obi-Wan Kenobi to re-join the battle against the evil empire. The resolution of the real hologram won't be nearly as high as that of the special-effects hologram in the movie, but as Bove points out, "Princess Leia wasn't being transmitted in real time. She was stored."