Researchers have used clever mathematical solutions to make a 3-D camera so simple, cheap and power-efficient that it could one day be incorporated into handheld devices such as cellphones.

Researchers at the Massachusetts Institute of Technology said Thursday that they have essentially “optimized” human-computer interaction in a paper that will be presented at the Institute of Electrical and Electronics Engineers’ International Conference on Acoustics, Speech and Signal Processing in March.

“3-D acquisition has become a really hot topic,” Vivek Goyal, a co-author of the study, said in a statement. “In consumer electronics, people are very interested in 3-D for immersive communication, but then they’re also interested in 3-D for human-computer interaction.”

“Sensing is always hard,” Goyal said, “and rendering it is easy.”

Indeed, there are already 3-D movies, and phone manufacturer HTC recently began marketing a 3-D phone that features a 3-D display and a 3-D camera. However, the camera on the HTC EVO 3D is not exactly a 3-D camera. Instead, the 3-D effect is created with two separate cameras on the back of the phone that capture two 2-D images, producing a binocular disparity much like the combined perception of the left and right eyes.
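As a side note on how binocular disparity encodes distance: a point's apparent horizontal shift between the left and right images shrinks as the point recedes, so depth can be recovered as focal length times camera baseline divided by disparity. A minimal sketch with made-up numbers, not HTC's specifications:

```python
# Sketch: classic stereo geometry. The same scene point appears at
# slightly different horizontal positions in the left and right
# images; that shift (in pixels) is the disparity.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    # depth = focal length * baseline / disparity
    return focal_px * baseline_m / disparity_px

# A 10-pixel disparity with a 700-pixel focal length and a 2 cm
# baseline places the point 1.4 meters from the cameras.
print(stereo_depth(700.0, 0.02, 10.0))  # 1.4
```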

The technology created by researchers at MIT is more like Microsoft’s Kinect, the device that lets video gamers control games with physical gestures instead of controllers, and other more sophisticated depth-sensing devices.

Kinect produces a visual map of a scene using information about the distance to individual objects. Researchers at MIT have applied Kinect’s concept of visual mapping to computer interfaces, navigation systems for miniature helicopters, holographic video transmitters and a number of other technologies.

The newly developed system uses the “time of flight” of light particles to determine depth. A pulse of infrared laser light is fired at a scene, and a camera measures the time it takes for the light to return from objects at different distances.
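The arithmetic behind time of flight is simple: the round-trip time, multiplied by the speed of light and halved, gives the distance. A minimal sketch in Python (the function name and sample value are illustrative, not from the paper):

```python
# Sketch: converting a measured round-trip time of an infrared pulse
# into depth. Depth = (speed of light * round-trip time) / 2, since
# the pulse travels out and back.

C = 299_792_458.0  # speed of light, meters per second

def depth_from_round_trip(t_seconds: float) -> float:
    return C * t_seconds / 2.0

# A pulse that returns after about 6.67 nanoseconds bounced off
# something roughly one meter away.
print(depth_from_round_trip(6.67e-9))  # ~1.0
```

This also shows why the timing must be so precise: a meter of depth corresponds to only a few nanoseconds of round-trip time.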

The “time of flight” concept is well established and already underlies many depth-sensing technologies. Traditional light detection and ranging (LIDAR) systems fire a series of pulses, each precisely corresponding to a point in a grid, and separately measure their time of return; the new system dispenses with that approach and is more efficient and affordable.

A traditional LIDAR camera costs thousands of dollars, but the researchers have devised a system that uses only a single light detector, a one-pixel camera, instead of the customary array of sensors that each precisely register the arriving light.

Researchers said “clever mathematical tricks” helped them create the new, affordable device.

The first trick, compressed sensing, uses a single laser whose light passes through a series of randomly generated checkerboard patterns. Remarkably, that provides enough information for algorithms to reconstruct a two-dimensional image from the light intensities measured by just a single pixel.

The number of laser flashes, and thus the number of checkerboard patterns, needed to build a decent depth map was only 5 percent of the number of pixels in the final image, researchers said.

A traditional LIDAR system, by comparison, would need to send out a separate laser pulse for every pixel, making the new system significantly more cost-effective.
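The single-pixel idea can be sketched in a few lines of code. The toy below is not MIT's algorithm: it masks a small, sparse scene with random plus-or-minus-one patterns (two complementary 0/1 masks, differenced, give the same effect), records one total-intensity reading per pattern, and reconstructs the scene with a basic sparse-recovery iteration (ISTA). It uses a higher sampling rate than the 5 percent quoted above, which depends on larger images and the full reconstruction machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: an 8x8 "image" with only a few bright pixels (sparse).
n_side = 8
N = n_side * n_side
x_true = np.zeros(N)
x_true[rng.choice(N, size=4, replace=False)] = 1.0

# Each measurement masks the scene with a random +/-1 pattern and sums
# the result: one number per pattern from a single-pixel detector.
M = 24  # still far fewer measurements than the 64 pixels
A = rng.choice([-1.0, 1.0], size=(M, N))
y = A @ x_true

# Sparse recovery via iterative soft-thresholding (ISTA).
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
lam = 0.1
x = np.zeros(N)
for _ in range(500):
    z = x - A.T @ (A @ x - y) / L  # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage

print("recovered bright pixels:", np.flatnonzero(x > 0.5))
print("true bright pixels:     ", np.flatnonzero(x_true))
```

With far fewer readings than pixels, the sparsity assumption is what makes the reconstruction well posed.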

The researchers used a technique called parametric signal processing to add a third dimension to the depth map. For simplicity, they assumed that all scene surfaces facing the camera were flat planes, because calculating how light bounces off a flat surface is easier than calculating how it bounces off curved surfaces.

Researchers found that the parametric algorithm created a very accurate depth map from very limited visual information.
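The paper's parametric algorithm is not spelled out in the article, but the value of the planar assumption is easy to illustrate: a plane's depth profile is fully described by three numbers, so a handful of noisy samples pins down the depth at every pixel. A toy sketch under that assumption, with all values invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical flat surface facing the camera: z = a*x + b*y + c.
a_true, b_true, c_true = 0.02, -0.01, 2.0

# A dozen noisy depth samples at random pixel locations.
xs = rng.uniform(0, 64, size=12)
ys = rng.uniform(0, 64, size=12)
zs = a_true * xs + b_true * ys + c_true + rng.normal(0.0, 0.005, size=12)

# Least-squares fit of the three plane parameters.
design = np.column_stack([xs, ys, np.ones_like(xs)])
(a, b, c), *_ = np.linalg.lstsq(design, zs, rcond=None)

# A dense 64x64 depth map follows from just those three parameters.
gx, gy = np.meshgrid(np.arange(64), np.arange(64))
depth_map = a * gx + b * gy + c
print(f"fitted plane: a={a:.4f}, b={b:.4f}, c={c:.4f}")
```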

The algorithm is so simple that it can run on the type of processor ordinarily found in smartphones. By contrast, to interpret the data provided by the Kinect, the Xbox needs the extra processing power of a graphics processing unit, a powerful but expensive piece of special-purpose hardware.

“This is a brand-new way of acquiring depth information,” Yue Lu, an assistant professor of electrical engineering at Harvard University, said in a statement. “It’s a very clever way of getting this information.”

However, one obstacle to putting the newly developed system in a handheld device could be the difficulty of emitting light pulses of adequate intensity without draining the battery, Lu added.

Qualcomm, the telecom giant, has awarded the MIT research team a $100,000 grant through its 2011 Innovation Fellowship program.