A new twist on 3D imaging technology could one day enable your self-driving car to spot a child in the street half a block away, let you answer your smart phone from across the room with a wave of your hand, or play “virtual tennis” on your driveway.
The new system, developed by researchers at the University of California, Berkeley, can remotely sense objects up to 30 feet away, 10 times farther than current low-power laser systems can manage. With further development, the technology could yield smaller, cheaper 3D imaging systems with exceptional range for potential use in self-driving cars, smart phones and interactive video games such as Microsoft’s Kinect, all without big, bulky boxes of electronics or optics.
The new system relies on LIDAR (light detection and ranging), a 3D imaging technology that uses laser light to measure the distances to objects around it. LIDAR systems of this type illuminate an object with laser light and determine how far away it is from changes in the frequency of the light reflected back. LIDAR can be used to help self-driving cars avoid obstacles halfway down the street, or to help video games tell when you are jumping, pumping your fists or swinging a “racket” at an imaginary tennis ball across an imaginary court.
Current lasers used in high-resolution LIDAR imaging, however, can be large, power-hungry and expensive. Gaming systems require big, bulky boxes of equipment, and you must stand within a few feet of the sensor for them to work properly. Bulk is also a problem for driverless cars such as Google’s, which must carry a large 3D camera on their roofs.
The researchers sought to shrink the size and power consumption of LIDAR systems without sacrificing their range.
Figure 1. Conceptual vision for an integrated 3D camera with multiple pixels using the FMCW laser source. Credit: Behnam Behroozpour
In their new system, the team used a type of LIDAR called frequency-modulated continuous-wave (FMCW) LIDAR, which they felt would give their imager good resolution at lower power consumption. A system of this type shines “frequency-chirped” laser light, whose frequency sweeps steadily up or down over time, at an object and measures changes in the frequency of the light reflected back. Because the reflected chirp is delayed by its round trip to the target, it arrives at a slightly different frequency from the light just leaving the laser, and that frequency difference reveals the distance.
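To make that relationship concrete, here is a minimal Python sketch of ideal FMCW ranging: the beat frequency between the outgoing and reflected chirps grows linearly with target distance. The chirp bandwidth and period below are illustrative assumptions, not parameters of the Berkeley system.

```python
# A minimal sketch of FMCW ranging, assuming an ideal linear chirp and a
# single stationary target. All parameter values are illustrative
# assumptions, not specifications of the Berkeley system.

C = 3.0e8           # speed of light, m/s
BANDWIDTH = 100e9   # chirp bandwidth, Hz (assumed)
PERIOD = 1e-3       # chirp period, s (assumed)

def beat_frequency(range_m: float) -> float:
    """Beat tone produced when the reflected chirp is mixed with the
    outgoing one: f_b = chirp_slope * round_trip_delay."""
    chirp_slope = BANDWIDTH / PERIOD       # Hz per second
    round_trip_delay = 2.0 * range_m / C   # seconds
    return chirp_slope * round_trip_delay

def range_from_beat(f_beat_hz: float) -> float:
    """Invert the relation to recover distance from a measured beat tone."""
    return f_beat_hz * C * PERIOD / (2.0 * BANDWIDTH)

if __name__ == "__main__":
    for r in (1.0, 3.0, 9.0):  # meters; 30 feet is about 9.1 m
        fb = beat_frequency(r)
        print(f"range {r:.1f} m -> beat {fb / 1e6:.2f} MHz -> "
              f"recovered {range_from_beat(fb):.2f} m")
```

Under these assumed numbers, a target at the system’s 30-foot limit produces a beat tone of a few megahertz, which is easy to digitize, one reason FMCW suits low-power electronics.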
To avoid the drawbacks of size, power and cost, the Berkeley team exploited a class of lasers called MEMS-tunable VCSELs. MEMS (microelectromechanical systems) parts are tiny micrometer-scale machines that, in this case, sweep the frequency of the laser light to produce the chirp, while VCSELs (vertical-cavity surface-emitting lasers) are an inexpensive type of semiconductor laser that is easy to integrate and consumes little power. By driving the MEMS device at its resonance, the natural frequency at which the structure vibrates, the researchers were able to amplify the system’s signal without a large expense of power.
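The payoff of operating at resonance can be seen with a textbook driven-oscillator model: near its resonant frequency, a lightly damped MEMS structure moves roughly Q times farther than it would under the same drive applied statically, so a small electrical signal yields a large tuning sweep. The resonance frequency and quality factor below are assumed values for illustration, not measured properties of the team’s device.

```python
# A minimal sketch of why driving a MEMS tuning element at resonance
# saves power: a damped harmonic oscillator driven at its resonant
# frequency deflects roughly Q times farther than under a static drive
# of the same amplitude. Q and the resonance are assumed values.

import math

def displacement_gain(drive_hz: float, resonance_hz: float, q: float) -> float:
    """Amplitude of a driven damped oscillator relative to its static
    (DC) deflection: 1 / sqrt((1 - r^2)^2 + (r / Q)^2), r = drive/resonance."""
    r = drive_hz / resonance_hz
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (r / q) ** 2)

RESONANCE_HZ = 500e3  # assumed MEMS mechanical resonance
Q = 100.0             # assumed quality factor

for f in (50e3, 250e3, 500e3):
    gain = displacement_gain(f, RESONANCE_HZ, Q)
    print(f"drive {f / 1e3:5.0f} kHz -> displacement gain {gain:6.1f}x")
```

At the assumed Q of 100, the structure deflects a hundred times farther at resonance than at low drive frequencies, which is how the chirp amplitude can grow without a matching growth in drive power.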
Figure 2. 3D schematic showing MEMS-electronic-photonic heterogeneous integration. Credit: Niels Quack
The team’s next plans include integrating the VCSEL, photonics and electronics into a chip-scale package. Consolidating these parts should open up possibilities for applications that haven’t been invented yet, including the ability to use your hand, Kinect-like, to silence your ringtone from 30 feet away.
UC Berkeley’s Behnam Behroozpour will present the research team’s work at CLEO 2014, being held 8-13 June in San Jose, California, US. Presentation AW3H.2, entitled “Method for Increasing the Operating Distance of MEMS LIDAR beyond Brownian Noise Limitation,” will take place Wednesday, 11 June at 4:45 p.m. in Room 210H of the San Jose Convention Center.