

Whether you were searching for a new component or system supplier for a machine vision or scientific instrumentation application, or looking to attend one or more of the numerous technical seminars held alongside the exhibition, you would not have been disappointed by last week’s Photonex 2015 event at the Ricoh Arena in Coventry, UK.

On the show floor, a record number of manufacturers and distributors demonstrated their wares, making the exhibition the largest in ten years. The “Enlighten” and “Vision in Action” conferences brought together speakers from industry and academia who delivered insightful presentations on subjects as varied as optical metrology, bio-imaging and machine vision.

Vision in the driving seat

Although many of the attendees at the show may not have been directly involved in developing autonomous vehicles, a keynote speech at the UK Industrial Vision Association’s (UKIVA) “Vision in Action” conference by Dr. Will Maddern, a senior researcher in the Mobile Robotics Group at the University of Oxford, drew more interest than any previous Photonex vision event.

In the keynote, entitled “Vision for autonomous driving: challenges and opportunities”, Dr. Maddern explained how conventional autonomous vehicles have used spinning Lidar scanners to enable them to navigate their surroundings. However, since these Lidar systems are typically twice the cost of the vehicles that they are mounted on, Dr. Maddern’s group has been experimenting with the use of computer vision systems to make future autonomous cars less expensive and more widely available.

Dr. Maddern’s team does employ a 2D Lidar system mounted on a vehicle, but only in an initial pass to capture a dense 3D map of the environment. By mounting a panoramic color camera on the vehicle as well, it is then possible to add color information to the map captured by the 2D Lidar system (see image above).

Once the maps are created, image classification software can remove temporary objects, such as parked cars, from the scene.

A vehicle fitted with several stereo vision systems -- but without a Lidar system -- can then be used to extract depth data at points of interest in a scene as the vehicle moves through it. By matching features found in the scene across successive frames, the trajectory of the vehicle through the environment can be determined to a high degree of accuracy. And by correlating the data from the cameras with the previously acquired 3D map, it is possible to ascertain the location of the vehicle in the environment down to millimeter precision.
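
For readers curious what that feature-based approach looks like in practice, the sketch below uses OpenCV’s ORB features and essential-matrix pose recovery to estimate frame-to-frame motion. The file names and camera intrinsics are hypothetical, and this is a generic illustration of visual odometry rather than the Oxford group’s own pipeline.

```python
import cv2
import numpy as np

# Assumed camera intrinsics (focal length and principal point, in pixels).
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Two consecutive frames from one camera of the rig (hypothetical files).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe features in both frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(prev, None)
kp2, des2 = orb.detectAndCompute(curr, None)

# Match binary descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Recover the rotation and (unit-scale) translation between the two frames;
# in a stereo rig, the known baseline fixes the absolute scale.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Rotation:\n", R, "\nTranslation direction:", t.ravel())
```

Chaining these frame-to-frame estimates yields the vehicle’s trajectory; matching against features stored in the prior 3D map then pins down its absolute position.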

Dr. Maddern said that in addition to their obvious use in autonomous vehicles on Earth, his team had also recently trialed such stereo vision systems in a prototype Mars Rover. Once deployed, the vision systems successfully guided the Rover autonomously across the barren landscape of the Atacama Desert in South America, despite the scarcity of identifying features in the surroundings.

Robot vision slices produce

The 3D theme continued to garner interest from Photonex attendees as Paul Wilson, the Managing Director of Scorpion Vision, described how 3D stereo vision could be gainfully employed in robot vision applications. He, too, touched on how such 3D systems could lower costs -- in his case, by automating the manual processes currently used in food production.

One of the points raised by Mr. Wilson during his talk was the need to capture the 3D structure of objects such as produce highly accurately. This, he said, is often achieved by projecting patterns of structured light onto the surface of an object. Images of the illuminated object can then be captured by a pair of cameras and processed to create an accurate 3D rendition of its surface.

At the Scorpion Vision booth itself, the company was discussing robotic systems that it had recently been developing with one of its OEMs to fully automate the cutting of produce. Traditionally, this has been a manual process in which trained operators use mechanical knives to slice produce into individual pieces, which often leads to waste.

The new vision-based systems developed by Scorpion Vision use a laser with a pattern projector to illuminate the produce with a random texture. The pattern is then captured using a pair of cameras integrated into a single unit along with the laser source. The data from the cameras is transferred to a PC, which compares the two images and calculates the surface co-ordinates of the produce. These are then used to create a cutting path, which can be transferred to a robotic controller so that a water cutter on the robot’s end effector moves across the produce, slicing it into sections. Alternatively, knowing the 3D co-ordinates of the produce, the robot can pick it up and guide it under a stationary water cutter to perform a similar cutting operation.
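
The core geometry here is ordinary stereo triangulation: once the projected texture lets the two views be matched pixel for pixel, depth falls out of the disparity. The sketch below, with an assumed focal length, baseline and a synthetic disparity map standing in for real captures, shows how per-pixel surface coordinates are recovered; it illustrates the general technique, not Scorpion Vision’s implementation.

```python
import numpy as np

f = 1200.0              # focal length in pixels (assumed)
B = 0.10                # stereo baseline in metres (assumed)
cx, cy = 640.0, 360.0   # principal point in pixels (assumed)

# Disparity map: per-pixel horizontal shift of matched features between the
# two cameras (synthetic values here, in place of real matched images).
disparity = np.random.uniform(20.0, 40.0, size=(720, 1280))

# Triangulate: depth is inversely proportional to disparity.
v, u = np.indices(disparity.shape)
Z = f * B / disparity
X = (u - cx) * Z / f
Y = (v - cy) * Z / f
points = np.dstack([X, Y, Z])   # per-pixel surface coordinates in metres

# A cutting path could then be planned over this surface and handed to the
# robot controller, e.g. as a sequence of (X, Y, Z) waypoints.
print(points.shape)
```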

According to Scorpion Vision’s Mr. Wilson, the new systems will not only replace existing manual procedures, but will make the process of cutting produce more repeatable and lead to less waste. Mr. Wilson said that he had a number of such vision-based robotic food production systems in the works and that Scorpion Vision will be making further announcements on those applications over the next couple of months.

High speed and high accuracy

Returning to the show floor, the theme of 3D imaging was again a subject for conversation at the Odos Imaging booth. There, Ritchie Logan, the company’s VP of Business Development, was on hand to discuss the way in which the company is expanding the markets for its 3D Time of Flight camera.

Traditionally, vendors of 3D Time of Flight cameras that use a pulsed light modulation technique have targeted industrial applications in logistics automation, where the +/- 1 cm depth resolution they offer at 30 frames/s over working distances of 0.5 to 5 m is acceptable. But the engineers at Odos Imaging also recognized that the architecture of their time of flight 3D vision system could be repurposed to create a camera able to capture images at frame rates between 450 and 17,900 frames/s, enabling the company to enter the high-speed image capture marketplace as well.
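
The arithmetic behind pulsed time of flight is worth spelling out: distance is half the round-trip time of the light pulse, so centimeter-level depth resolution demands timing resolution on the order of tens of picoseconds. The figures in the sketch below are illustrative, not Odos Imaging specifications.

```python
C = 299_792_458.0   # speed of light in m/s

def tof_distance(round_trip_s):
    """Target distance from a pulse's round-trip time (pulsed time of flight)."""
    return C * round_trip_s / 2.0

# A target at the far end of the 0.5-5 m working range returns the pulse
# after roughly 33 ns.
print(f"{2 * 5.0 / C * 1e9:.1f} ns round trip at 5 m")

# +/- 1 cm of depth corresponds to about 67 ps of round-trip timing resolution.
print(f"{2 * 0.01 / C * 1e12:.0f} ps per cm of depth")
```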

Where ambient illumination is insufficient in a particular high speed application -- or for ultra high speed applications requiring the use of strobe illumination -- the new camera can be interfaced to the company’s pulsed laser illumination modules. These deliver intense pulses of light shorter than 200 ns onto a high velocity object, enabling sharp images to be captured with negligible motion blur.


Odos representatives were discussing how the camera had been used to capture images of a 7.62 mm projectile as it exited the muzzle of a rifle barrel (see image above). Configured with dual illumination modules and a band-pass filter centered at 880 nm, the camera captured a single image when triggered by the projectile making contact with a ballistic foil. Even with a very high speed object travelling at 750 m/s, the short illumination pulse largely eliminated motion blur.
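
A quick calculation, using only the figures quoted above, shows why such a short pulse freezes the bullet: the blur is roughly the distance the projectile travels while the illumination is on.

```python
v = 750.0        # projectile velocity in m/s (from the text)
pulse = 200e-9   # illumination pulse duration in s (upper bound from the text)

# Motion blur is roughly the distance travelled during the pulse.
blur = v * pulse
print(f"Motion blur: {blur * 1e3:.2f} mm")   # 0.15 mm -- negligible at this scale
```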

For those applications where measuring depth more accurately is an absolute necessity, Dr. Russell Evans, the Managing Director of Omniscan, was at the show to highlight the advantages of the MikroCAD system from Berlin-based GFMesstechnik. This system, he said, can capture the surface topography of large parts that would be beyond the reach of confocal or white light interferometry systems.

Dr. Evans added that the MikroCAD system -- which is distributed by his company in the UK -- could, for example, be used to measure the complete sealing surface of an engine cylinder head, resolving the height of components such as the valves down to micrometer accuracy. To do so, the system projects fringes of light onto the surface of an object and captures an image of those fringes, which is then analyzed to produce a full height map of the object.
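
Fringe projection systems commonly recover height by phase-shifting analysis: several images of the same fringes, each shifted by a quarter period, yield the phase (and hence the height) at every pixel. The sketch below shows the standard four-step version with synthetic data; it is a generic illustration, not GFMesstechnik’s actual algorithm, and the calibration factor is assumed.

```python
import numpy as np

# Synthetic surface phase standing in for a real measurement.
phase_true = np.random.uniform(-np.pi, np.pi, size=(480, 640))

# Four images of the same fringes, each shifted by 90 degrees.
I = [0.5 + 0.4 * np.cos(phase_true + k * np.pi / 2) for k in range(4)]

# Wrapped phase at every pixel from the four shifted images.
phase = np.arctan2(I[3] - I[1], I[0] - I[2])

# After unwrapping, height is proportional to phase, with the constant set
# by the projection geometry (0.05 mm per radian assumed here).
height_mm = 0.05 * phase
print(f"height range: {height_mm.min():.3f} to {height_mm.max():.3f} mm")
```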

Aside from such stand-alone systems, GFMesstechnik also offers a range of 3D sensors that can be integrated into OEM automation systems. The sensors, which sport an on-board processor, can not only capture 3D data but also process it, enabling the distances of points on a surface to be measured accurately. This data can then be transferred to a PC for further analysis by a user’s own application program.

Welding with vision

Although Swiss camera manufacturer Photon Focus was not exhibiting at the show per se, Dr. Peter Schwider, the company’s Chief Technical Officer, was at the booth of its UK distributor, Clearview Imaging, to discuss the company’s cameras, which are widely used in welding inspection applications.

According to Dr. Schwider, what makes the cameras distinctive is their so-called “linear-logarithmic” response curve. In low light, the response of the sensors in the camera is linear, while at higher light intensities the response is logarithmic. The point at which the logarithmic compression commences, and the strength of that compression, can be adjusted by the user in software.
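
As a rough illustration of such a response curve, the toy model below is linear up to a user-set knee and logarithmically compressed above it, with the knee position and compression strength as the adjustable parameters the text describes. The functional form is an assumption for illustration, not Photon Focus’s actual pixel transfer function.

```python
import numpy as np

def linlog_response(intensity, knee=0.1, strength=50.0):
    """Toy linear-logarithmic response: linear below `knee`, log above it."""
    intensity = np.asarray(intensity, dtype=float)
    # Above the knee, continue from the linear value at the knee but grow
    # only logarithmically, so bright regions are compressed, not clipped.
    excess = np.maximum(intensity - knee, 0.0)
    compressed = knee + np.log1p(strength * excess) / strength
    return np.where(intensity <= knee, intensity, compressed)

# Four decades of input intensity map into a narrow output range.
print(linlog_response([0.01, 0.1, 1.0, 10.0, 100.0]))
```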

One company that has taken advantage of this feature is Swiss-based Souvis, which has developed an inspection system for the quality control of weld seams.


The Souvis 5000 inspection system automatically monitors welding and brazing seams by analyzing both 2D and 3D images captured by cameras from Photon Focus.

Mounted on a robot to follow a particular weld seam, a Souvis imaging head combines both a 3D and a 2D imaging system to capture images of the weld seam.

The 3D system projects a laser line onto the surface of the welded seam and captures the image of the reflected line using a Photon Focus 2D camera. Using triangulation, the system can then calculate the distance from the object to the camera and create a 3D profile of it. Here, the camera captures data in the logarithmic range, enabling the system to detect the bright regions of the seam illuminated by the laser with a high degree of accuracy.
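
The triangulation step can be sketched in a few lines. Under an assumed geometry in which the laser sheet runs parallel to the camera’s optical axis at a known lateral offset, the image row at which the line appears maps directly to the distance of the illuminated surface; the numbers below are illustrative, not Souvis system parameters.

```python
f = 1000.0        # focal length in pixels (assumed)
baseline = 0.08   # lateral camera-to-laser-sheet offset in metres (assumed)
cy = 240.0        # principal row of the image in pixels (assumed)

def range_from_row(row):
    """Distance to the laser-lit surface from the line's image row.

    For a laser sheet parallel to the optical axis at offset `baseline`,
    a surface at distance Z images the line at pixel offset
    d = f * baseline / Z, so Z = f * baseline / d.
    """
    d = row - cy                 # pixel offset of the detected laser line
    return f * baseline / d

# A line detected 100 px below the principal row lies at ~0.8 m; sweeping the
# head along the seam builds up the 3D profile row by row.
print(f"{range_from_row(340.0):.3f} m")
```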

A second Photon Focus camera, meanwhile, is set up to capture images of the structure of the welded seam and the surrounding material in the linear range of the camera’s transfer characteristic. By combining the 2D and 3D approaches, the Souvis system can detect welding defects such as pores down to 50 µm at inspection speeds of up to 30 m/min.

Coming up

While the number of visitors on the second day of the show appeared a little lower than on the opening day, there is no doubt that Photonex remains a key event in the UK calendar for engineers to learn more about new technologies such as 3D imaging.

By Dave Wilson, Senior Editor, Novus Light Technologies Today

Labels: Photonex, 3D imaging, scientific instrumentation, Lidar, time-of-flight, sensors, automation, illumination
