During the 17th EMVA Business Conference in mid-May in Copenhagen, a high-level panel discussion titled “How will Embedded Vision impact our future?” attracted the attention of the more than 100 attendees.
Already with the first question, on how they define embedded vision and what they see as the main difference from “traditional” machine vision, opinions diverged among the five panelists. Colin Pearce, CEO of Active Silicon, stated that with the concept of smart cameras the embedded approach has been around in the industry for almost 15 years, and that the current focus of embedded vision lies in the System-on-Chip (SoC) approach. Mark Williamson, Managing Director and Director of Corporate Marketing at Stemmer Imaging, agreed and added that the design of smart cameras started at the component level, whereas with today’s embedded systems design starts at the subsystem level, which drives down development costs. Some disagreement came from Dr. Olaf Munkelt, CEO of MVTec Software, who said that embedded vision has a much broader scope of applications than “just” machine vision. Andreas Franz, CEO at Framos, joined this opinion, saying that the focus of embedded vision solutions lies in addressing new customer groups who are not imaging experts and have no interest in the physics behind imaging. Arwyn Roberts, CTO at Surface Inspection, had a different perspective and saw embedded vision as a new architectural paradigm for machine vision or computer vision technology. With it, the selection of components would change; it would be an economic shift in what is feasible at what price. He stated further: “Also, industrial vision is just one application of vision technology. Most of what is coming to us in embedded vision is coming from outside of that, e.g., from Silicon Valley or big companies creating incredible IP at low cost. Our challenge is how to exploit that.”
On the question of whether the ‘traditional’ machine vision companies, by gaining new markets, or the embedded device companies would benefit more from the trend toward embedded vision, the panelists pointed out that everyone can benefit by delivering new value to the marketplace. Andreas Franz stated that the difference Framos sees at the moment is that companies like Sony and Intel add new functionality at the semiconductor level. Many embedded devices would take a bottom-up path, as they were not originally intended to serve the high-end machine vision market. For Dr. Munkelt, this year’s Embedded World show in Nuremberg marked a paradigm shift, as even traditional machine builders found their way to the formerly “techie” show to see whether there are embedded vision solutions that address their problems. “People from both ends are looking for new solutions now” is what he observed. Mark Williamson pointed toward the massive market developing outside the traditional machine vision segment, such as in autonomous driving or the retail sector, from which predominantly component manufacturers would benefit. Andreas Franz further stated that the one thing experienced machine vision players could export to mass markets such as autonomous driving is application-specific know-how. It might therefore be worthwhile to look into these markets, without forgetting the considerable risks.
Benefit to the end user
For Colin Pearce, the end user will be the one to benefit most, thanks to the growing range of applications. He also said that embedded vision could become a disruptive technology, as it might even replace camera-based systems in industry, such as a flat-glass panel inspection system that recently switched to embedded technology.
‘Can every vision problem soon be solved by embedded vision?’ was the next question, and the panelists’ answers may have eased some fears in the audience that embedded vision will soon be the only surviving technology. Framos’ CEO Andreas Franz made a point by simply stating “Machine vision does not need to be fixed, it is already working”, and Dr. Munkelt joined in, saying that “classic” machine vision will survive as there is still demand for high-end solutions with the corresponding cameras and frame grabbers. “However”, he added, “a lot of applications that were considered high-end a few years ago can probably be tackled by embedded vision soon.” Colin Pearce said that “certain applications are perfect to be solved by embedded vision, e.g., drones”, but added that PC-based systems will always be there, as they are easy and quick and many programmers are still on hand to create algorithms. For Arwyn Roberts, one of the big opportunities of the embedded approach “is the ability to take the application-specific know-how and push it toward the sensor.” However, to be able to leverage and integrate embedded vision, he sees the need for more standardization.
“Embedded vision will touch us all and become so ubiquitous that there will be no way around it” was a common statement on the question of whether machine vision companies will be able to exist without embedded vision products in the future. Andreas Franz added that he sees much bigger application areas for embedded vision ahead. Software solutions and algorithms for embedded vision are also coming from Silicon Valley and China that fit the requirements of low- to mid-end machine vision systems. This poses the question to established players of how to differentiate themselves from these “good-enough” embedded solutions. Still, the panelists also said that while embedded vision will creep into factory-floor applications, the whole automation segment is too conservative to be open to disruptive changes.
This paved the way for the next question: whether smartphone solutions from Silicon Valley will solve all machine vision problems in the future. The panelists agreed that the machine vision market is too small and fragmented for companies such as Apple and Amazon to focus on. Instead, these companies would invent, and the machine vision industry would benefit, for example by building standards on top of USB and Ethernet. As Arwyn Roberts put it: “What Silicon Valley companies basically deliver is technology to mass markets. And we, as an industry, need to choose which technologies to pick and standardize for the applications relevant to our industry.” Andreas Franz likewise noted that surface inspection will never be done with a smartphone. Instead, it would rather be the other way around, according to Colin Pearce: “For example, Nvidia developed a GPU for the computer gaming industry that turned out to be quite useful for machine vision as well.” Similar developments may soon be seen with System-on-Chip (SoC) solutions that are currently developed for smartphones but might be of great value for the machine vision industry as well.
The 18th EMVA Business Conference will take place from 25 to 27 June 2020 in Sofia, Bulgaria.
Written by Andreas Breyer, Senior Editor, Germany, Novus Light Technologies Today