AI and ML Offer a Completely New Way to Solve Problems

Interview with Markus Levy, Director of AI and Machine Learning Technologies, NXP Semiconductors

“In the medium and long term, we want to integrate some level of AI acceleration engine into many of our components. We even go so far as to explore integrating cost-effective, special hardware blocks on the low-end MCUs.” – Markus Levy


Question: Are AI and Machine Learning buzzwords or is the hype behind AI/ML real?

Markus: The AI and ML buzzwords are certainly hyped, but there's more substance behind them than the hype that surrounded IoT five years ago. This is not meant to undermine the value of IoT, which has led to many cool new application areas, but IoT was more or less an evolution of embedded. AI/ML, on the other hand, represents a market segment that provides an entirely new way to solve problems. Remember that machine learning, in the form of neural networks and classical algorithms, has been around for decades. However, prior to recent advances in machine learning frameworks, no one knew how to practically train deep neural networks, and doing so on commodity hardware was impractical, so the field remained relatively dormant. Now the amount of research has exploded, and so has the availability of open-source and proprietary options to train, optimize, and deploy machine learning models. Today we see many more companies investing resources and money in AI/ML development, and this is just the tip of the iceberg. As with any cutting-edge technology where people 'smell' money, there will be winners and losers.


Question: How do you see AI proliferating into mainstream technology?

Markus: First, note that for AI/ML to proliferate across many application areas and become mainstream technology, it must be simplified to the point that models can be trained, and inference engines developed and deployed, with a Python script or pull-down menu, rather than burdening the developer with creating complex mathematical algorithms. Programming for AI/ML is often referred to as Software 2.0: instead of traditional programming methods, it relies on existing neural networks, frameworks, and classical ML libraries. The effort shifts from writing logic to determining weights and parameters, i.e., training models.
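The Software 2.0 idea, that the program's behavior lives in learned weights rather than hand-written rules, can be shown with a deliberately tiny sketch (an illustration in plain Python, not NXP code): a perceptron learns the logical OR function from labeled examples instead of anyone coding the rule.

```python
# Hypothetical illustration: the "program" is the learned weights, not
# hand-coded logic. A tiny perceptron learns the OR function from data.
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred            # training adjusts weights, not code
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# OR truth table as labeled training data
xs = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [0, 1, 1, 1]
w, b = train_perceptron(xs, ys)
```

The same training code learns a different "program" if you only change the labels, which is exactly the shift Markus describes: developer effort goes into data and training rather than algorithm design.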

Regarding the particular benefits of AI, fundamentally this is an area where the data is gold. In other words, AI/ML is more or less useless unless the application developer can collect, and subsequently label, data that is used for training the models. Assuming that the application developer is able to train the model, the benefits of AI can be applied in a wide range of applications targeted at vision, voice, and anomaly detection.


Question: Where does NXP stand in AI/ML leadership?

Markus: NXP is recognized as a leader in the AI environment because we address AI applications directly from both a software and a complementary hardware perspective. The first ingredient for successful deployment of AI is a scalable portfolio of hardware platforms that gives developers a choice in performance, power, and price. Whether an ML algorithm can run on a low-end MCU or needs a high-end applications processor is driven by the acceptable inference latency and the memory footprint. Software is the second ingredient required for AI deployment, and here NXP has chosen a path that provides a variety of open-source options, which we refer to as the eIQ™ machine learning software development environment. Proprietary solutions for machine learning deployment still offer viable approaches, but they must be able to keep up with the rapid progress we see happening in open source. Therefore, our mission is to enable open-source options such as TensorFlow Lite, GLOW, Arm® NN, and others, and to apply device-specific optimizations as needed to achieve more competitive results.


Question: What are the applications covered in NXP’s AI/ML enablement environment?

Markus: As I mentioned earlier, we focus on vision, voice, and anomaly detection – of course this is very broad and at a high level represents most applications residing at the edge. Vision can be subdivided into applications such as face recognition and object detection and can even cross over into the anomaly detection domain. Voice encompasses keyword detection for Alexa™-like applications, as well as limited natural language processing (limited because of the restrictions on memory capacity on edge devices). And anomaly detection spans applications such as predictive maintenance and monitoring through a wide variety of sensors to detect normal and abnormal circumstances.
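As a generic sketch of the anomaly-detection category Markus mentions (not NXP's implementation), many predictive-maintenance setups reduce to learning a baseline from normal sensor readings and flagging readings that deviate strongly from it, for example with a simple z-score test:

```python
import statistics

def detect_anomalies(baseline, readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the
    mean of the normal baseline (a classical, non-neural ML approach)."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [abs(r - mean) / stdev > threshold for r in readings]

# Vibration-like sensor baseline near 1.0; a spike at 5.0 is abnormal
baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98]
flags = detect_anomalies(baseline, [1.02, 5.0, 0.97])
```

In practice the baseline model can be as simple as this or as complex as an autoencoder; the pattern of "learn normal, flag deviations" is the same, and the model sizes involved are small enough to fit on edge devices.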







Question: What are the features of NXP chips that make them uniquely suited for AI/ML applications?

Markus: A scalable portfolio lends itself well to these AI applications. It all comes down to how much performance is required (in other words, the inference time) and the amount of memory available. People will say they want more performance and fancy ML accelerators, but cost is always the overriding factor. For example, NXP products with integrated DSPs are an excellent choice for voice and vision applications. The integrated DSP handles keyword detection, allowing the device to remain in low-power mode until the appropriate keyword is detected. At the other end of the spectrum, NXP offers the i.MX 8 and Layerscape families of processors, with integrated GPUs, multiple CPU cores, and DSP accelerators, giving applications the ability to perform heterogeneous computing and run multiple machine learning algorithms in parallel. Next-generation devices will include dedicated machine learning accelerators, yielding unprecedented performance levels.
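The wake-on-keyword flow described above, a small always-on detector gating a larger pipeline, can be sketched generically (hypothetical function names, in plain Python rather than DSP firmware):

```python
# Hypothetical sketch of DSP-style keyword gating: a cheap detector runs
# on every audio frame; the expensive pipeline runs only after the wake
# word is heard, so the device can otherwise stay in low-power mode.
WAKE_WORD = "hey_device"

def cheap_keyword_detector(frame):
    # Stand-in for a small always-on model running on the DSP
    return frame == WAKE_WORD

def process_audio(frames):
    awake = False
    handled = []
    for frame in frames:
        if not awake:
            if cheap_keyword_detector(frame):
                awake = True          # wake the main cores / full model
        else:
            handled.append(frame)     # full voice pipeline would run here
    return handled

commands = process_audio(["noise", "hey_device", "turn_on_lights"])
```

The design point is the asymmetry: the always-on path must be cheap enough to run continuously in a low-power domain, while the accurate-but-expensive path only wakes when needed.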


Question: What is NXP’s eIQ machine learning enablement?

Markus: At the base level, eIQ is a collection of open-source technologies for deploying machine learning applications. These include run-time engines such as TensorFlow and TensorFlow Lite, network parsers and dedicated inference engines such as Arm NN, and support for libraries such as OpenCV. While all these technologies can be obtained from sources such as GitHub, NXP's eIQ makes them easier to use, provides detailed documentation, and integrates them into our development environments (MCUXpresso for microcontrollers, and Yocto/Linux® for i.MX application processors).


Question: What is planned next for NXP hardware to support AI/ML applications?

Markus: The medium- and long-term goals are to include some form of AI hardware acceleration engines in many of our devices, even specialized hardware blocks on the lowest-end MCUs.






