Society is becoming increasingly automated, with sensor-packed electronic devices set to dominate everyday life.
3D sensors can be one of two general types: contact or non-contact. A contact sensor scans an object by touching it, while a non-contact sensor scans objects by measuring radiation coming from them. If the sensor emits its own radiation and measures the reflection off the object, it is classified as active; if it measures only the object's natural radiation, such as reflected visible light, it is classified as passive. Examples of active sensors are those using infrared beams or radar, while an example of a passive sensor is a dual stereo camera.
Active sensors can be fixed or handheld, depending on the application and how much precision is required. Handheld sensors detect and scan objects using triangulation: the system projects a laser dot onto a surface and measures it with a camera, accounting for the known distance between the laser emitter and the camera. Fixed sensors can also use triangulation, which is accurate at short range, but for longer distances sensors instead measure the time it takes for light to reflect off the scanned object, a technique known as time-of-flight.
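The triangulation geometry described above reduces to a simple ratio: depth is the camera's focal length times the emitter-camera baseline, divided by how far the laser dot shifts in the image. A minimal sketch, with illustrative numbers rather than any real sensor's parameters:

```python
# Sketch of laser-triangulation depth estimation. The focal length, baseline
# and disparity values below are made-up examples, not from a real device.

def triangulation_depth(focal_length_px: float, baseline_m: float,
                        disparity_px: float) -> float:
    """Depth from the pixel offset of the laser dot on the camera sensor.

    focal_length_px: camera focal length expressed in pixels
    baseline_m:      distance between the laser emitter and the camera
    disparity_px:    horizontal shift of the laser dot in the image
    """
    if disparity_px <= 0:
        raise ValueError("dot not detected, or object effectively at infinity")
    return focal_length_px * baseline_m / disparity_px

# A dot shifted 40 px, with a 600 px focal length and a 6 cm baseline:
z = triangulation_depth(600.0, 0.06, 40.0)
print(round(z, 3))  # 0.9 (meters)
```

Because the disparity shrinks as the object moves away, small pixel errors translate into large depth errors at range, which is why triangulation suits short distances and time-of-flight takes over at longer ones.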
3D depth sensing systems using infrared light, such as Microsoft's Kinect, have five main components: illumination sources, controlling optics, an optical bandpass filter, a depth camera and firmware.
Illumination sources are typically LEDs, laser diodes or vertical-cavity surface-emitting lasers (VCSELs) that generate infrared or near-infrared light. The source emits pulses of patterned light that reflect off the scene in view. This light is normally invisible to users and is often optically modulated to enable higher-resolution results. By modulating the signal, the system introduces a signature that is easier for the receiver to pick out from ambient light.
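Why modulation helps can be shown with a toy correlation receiver: steady ambient light has no match with the known on/off pattern, while the modulated echo does. All values here are invented for illustration:

```python
# Toy model of modulated-illumination detection. The receiver correlates what
# it sees against the known emitted pattern, so constant ambient light
# contributes almost nothing. Sample values are made up for illustration.

def correlate(received, pattern):
    """Zero-mean correlation of received samples with the known pattern."""
    mean = sum(received) / len(received)
    return sum((r - mean) * p for r, p in zip(received, pattern))

pattern = [1, -1, 1, -1, 1, -1]   # known on/off modulation signature
ambient = [5, 5, 5, 5, 5, 5]      # constant sunlight: no signature
echo    = [6, 4, 6, 4, 6, 4]      # ambient plus the modulated reflection

print(correlate(ambient, pattern))  # 0.0  -- ambient light is rejected
print(correlate(echo, pattern))     # 6.0  -- the signature stands out
```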
Narrow bandpass near-infrared filters with very high transmission in the desired band allow for deep optical blocking elsewhere. Only reflected light that matches the illuminating light frequency reaches the light sensor, eliminating ambient and other stray light that would degrade performance. Efficient filtering is critical to achieving satisfactory system performance under adverse light conditions. Limiting the light that gets to the sensor eliminates unnecessary data unrelated to the 3D depth sensing task at hand. Combined with noise-reducing software algorithms, this dramatically reduces the processing load on the firmware.
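The filtering stage can be sketched as a simple gate on wavelength: only light inside the pass band reaches the depth sensor. The 850 nm center and 10 nm half-width below are illustrative assumptions, not the specifications of any particular filter:

```python
# Toy model of the narrow bandpass filter stage. The 850 nm band is an
# assumed example value; real filters are specified per system.

def bandpass(wavelengths_nm, center_nm=850.0, half_width_nm=10.0):
    """Keep only wavelengths within the pass band; block everything else."""
    lo, hi = center_nm - half_width_nm, center_nm + half_width_nm
    return [w for w in wavelengths_nm if lo <= w <= hi]

# Mixed scene light: ambient visible light plus the near-infrared return.
incoming = [450.0, 550.0, 620.0, 848.0, 851.0, 853.0, 700.0]
print(bandpass(incoming))  # [848.0, 851.0, 853.0]
```

Everything the filter blocks is data the firmware never has to process, which is the processing-load saving the paragraph above describes.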
3D sensor bandpass filters built with low-angle-shift (LAS) technology keep the pass band narrow even for light arriving at an angle, giving the system a cleaner signal to process and thereby improving the system's signal-to-noise ratio. The reflected, filtered light is then detected by a depth camera, a high-performance optical receiver that converts the light into an electrical signal and sends it to a processor.
That processor is one or more very-high-speed application-specific integrated circuits (ASICs) or digital signal processor (DSP) chips running permanently installed software known as firmware; it converts the electrical signal into a format that can be understood by an application such as video game software.
3D sensors are used in a variety of computing applications, including gaming, interface controls and imaging systems. A common 3D sensor application is video game gesture control. Microsoft's Kinect camera is used with the company's Xbox gaming platform and is also available with a standalone software development kit for Windows applications. The Kinect camera projects near-infrared light onto objects, and its sensor measures the time it takes for the light to return – "time-of-flight" – to determine how far away an object is. In addition to gesture control, the measurements are accurate enough to allow facial recognition.
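The time-of-flight measurement described above is a direct application of the speed of light: the pulse travels out and back, so distance is c times the round-trip time divided by two. A minimal sketch:

```python
# Time-of-flight distance: light travels out and back, so the distance to
# the object is c * t / 2. A round trip of ~6.67 ns corresponds to ~1 m.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to the object from the measured round-trip time."""
    return C * round_trip_s / 2.0

print(round(tof_distance(6.671e-9), 3))  # 1.0 (meters)
```

The nanosecond timescales involved are why time-of-flight systems need the very fast receivers and processing chips described earlier.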
The most common use of 3D sensors is in consumer electronics, more specifically cellphones. Infrared (IR) proximity sensors located near the earpiece detect a user's ear, telling the phone to disable the display during calls. This helps users avoid accidentally muting or hanging up a call with their cheek. Use of multimodal biometrics is also on the rise, as passwords are expected to become obsolete by 2020. Cellphones may soon see the introduction of iris recognition technology, eliminating the need for passwords.
[Image: A multitude of sensors enable conceptual facial recognition technologies]
As car automation technology continues to evolve with more multisensory capabilities, vehicles will better understand what's happening across a wider variety of situations and respond appropriately. Lidar is an emerging automotive sensing technology that has long been used in space, agriculture and geographical mapping -- think radar, but with light. It is now being applied to semi- and fully autonomous driving, again relying on filter technology. Lidar gives cars awareness of their full surroundings: laser light projects from the car and bounces off an external object, and the returning light passes through a specialized filter before hitting a sensor, which feeds data to the car and driver. Driver monitoring inside the car employs similar 3D depth-sensing technology at shorter range to recognize the driver and adapt to in-cabin conditions. Soon, cars will automatically recognize drivers using biometrics.
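Each lidar return is a beam angle plus a measured range; to build the environmental map described above, the car converts these polar measurements into points in its own coordinate frame. A simplified 2D sketch (real automotive lidar also sweeps vertically):

```python
# Converting one lidar return (beam angle + measured range) into a point in
# the car's coordinate frame. A 2D horizontal-scan simplification; the
# "x forward, y left" frame convention is an assumption for illustration.

import math

def lidar_point(range_m: float, azimuth_rad: float):
    """Convert a single return to (x, y), with x forward and y to the left."""
    return (range_m * math.cos(azimuth_rad), range_m * math.sin(azimuth_rad))

# An object 10 m away, 30 degrees to the left of straight ahead:
x, y = lidar_point(10.0, math.radians(30.0))
print(round(x, 2), round(y, 2))  # 8.66 5.0
```

Accumulating thousands of such points per sweep yields the point cloud the vehicle's software uses to detect obstacles.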
Another area that uses 3D sensors is prototyping and modeling. Scanned objects can be recreated using high-tech manufacturing processes such as 3D printing, and the resulting 3D models are used for archiving and preservation purposes. To enhance the educational experience, museums are increasingly turning to 3D technologies to capture moments in time. The Smithsonian scans some of the 137 million items in its collection and is looking for ways to make the images available to a wider audience.
Written by Markus Bilger, Product Line Manager Director, Consumer Products, Viavi Solutions