Arrow Electronic Components Online

Vision sensing solutions for intelligent mobile robots

Robotics | 17 Apr 2026
Image: A modern wheeled agricultural robot, designed for automated crop management, navigates between rows of lush green plants inside a brightly sunlit greenhouse.

With the rapid advancement of smart manufacturing and automated logistics, Autonomous Mobile Robots (AMRs) and Automated Guided Vehicles (AGVs) are increasingly becoming essential core equipment in modern factories and warehousing systems. To achieve autonomous navigation, environmental perception, and precise obstacle avoidance, vision sensing technology plays an increasingly critical role in robotic systems. By integrating high-resolution image sensors, depth vision modules, and edge AI algorithms, robots can identify objects, pathways, and personnel in their environment in real time, thereby enhancing operational safety and task efficiency. Addressing the complexities of diverse application scenarios, this article introduces how to combine advanced vision sensing hardware with high-performance data processing architectures, along with vision sensing solutions from onsemi.

Unmanned robots incorporating mobility, perception and connectivity capabilities

Autonomous Mobile Robots (AMRs) and Automated Guided Vehicles (AGVs) are unmanned robots that combine mobility, perception, and connectivity to transport loads of various weights and sizes, among other functions. Such robots are usually battery powered, typically at 12 V to 48 V. These systems can interact with humans to varying extents depending on their technology and intended use, from simply operating safely around people to high levels of cooperation and collaboration.
 
As the industry moves closer to reaching Industry 5.0 - the next phase of digitalization in the manufacturing sector - there is increasing need for human-machine interaction and improvements in robotics. Smart robots can range from robotic arms to wheeled autonomous delivery robots to fully walking humanoids. Smart robots differ from traditional industrial robots in their use of many different sensors, AI, and advanced algorithms to interact with their environment, detect obstacles, and cooperate with humans and other machines.
 
Advantages of deploying autonomous robots include increased productivity and efficiency: they take over repetitive and/or time-consuming tasks, freeing human workers to focus on value-added activities. At lighter payloads, the system can be operated from a 12 V battery, while a higher voltage (e.g., 48 V) can be used to reduce operating current and thus reduce wire size and cost. Currently, smart robots are primarily deployed in controlled environments such as warehouses and production plants, but there is a growing trend toward outdoor operation.
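The wiring benefit of a higher bus voltage follows directly from I = P / V. A minimal sketch (the 480 W load figure is a hypothetical example, not from the article) comparing the current drawn at the two battery voltages mentioned above:

```python
# Illustrative only: compare supply current for the same load power
# at the two battery voltages discussed above (12 V and 48 V).
def supply_current(power_w: float, voltage_v: float) -> float:
    """Current drawn from the battery, I = P / V (conversion losses ignored)."""
    return power_w / voltage_v

load_power = 480.0  # hypothetical drive-system load in watts

i_12v = supply_current(load_power, 12.0)  # 40.0 A
i_48v = supply_current(load_power, 48.0)  # 10.0 A

# Quadrupling the bus voltage cuts the current to a quarter, so thinner,
# cheaper wiring can carry the same power.
print(i_12v, i_48v)
```

Since resistive loss in the harness scales with I squared, the 48 V bus also wastes far less power in the cables for a given wire gauge.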
 
Thanks to advances in artificial intelligence (AI) technology, the global smart robot market is experiencing significant expansion. AI advancements enable the construction of more sophisticated autonomous robots that can be deployed not only in warehouses but also outdoors and in less controlled environments. The autonomous robot segment offers solutions for various industries such as e-commerce, manufacturing, and healthcare, and can be highly customized.
 
While the segment of existing solutions such as collaborative robots is expected to grow at a compound annual growth rate (CAGR) of 32% from 2023 to 2030, many new, innovative, and more complex use cases are emerging. It is anticipated that self-driving forklifts will lead market growth in the e-commerce sector in the coming years. Other established robot types include warehouse robots, projected to grow at a CAGR of 19% from 2023 to 2030, and delivery robots, expected to grow at 30.3% over the same period.
 
The emerging Industry 5.0, also known as the Fifth Industrial Revolution, represents a new phase of industrialization in which humans work alongside advanced AI-powered robots. This new phase builds upon Industry 4.0 and is enabled by developments in information technology, such as artificial intelligence, automation, big data analytics, the Internet of Things (IoT), machine learning, and robotics.
 
The primary advantage of Industry 5.0 lies in creating higher-value employment opportunities. By automating manufacturing process management, human workers can dedicate more time to creativity, efficient business solutions, and value-added tasks. This leads to increased productivity and motivation. The growing emphasis on sustainability and resilience means businesses can become more agile and flexible while positively impacting society. Industry 5.0 is being developed across numerous sectors, including manufacturing, healthcare, education, agriculture, and beyond.

Image: Architecture diagram of a smart robot system, showing power flow from AC input through a battery charger to a 12-48 V battery, smart protection, and DC/DC conversion, with functional blocks for motion control, CPU and communication, lighting and sound, and sensors such as image, ultrasonic, and LiDAR.

Indirect time-of-flight vision sensing applications for precise distance measurement

onsemi has developed comprehensive solutions for intelligent mobile robot applications, with robotic systems consisting of several interconnected sub-blocks. Main sub-blocks include battery management, motion control, sensing, and CPU. These sub-block solutions are highly application-dependent: robots operating only indoors require fewer sensors, while robots with robotic arms need more motor inverters. Due to space constraints, this article will focus on vision sensing solutions.
 
Indirect Time-of-Flight (iToF) technology represents a crucial vision sensing application, measuring the phase shift between emitted modulated light and its reflection to accurately calculate distance. This method enables precise 3D imaging and depth perception, making it ideal for various industrial applications. These sensors can be used stand-alone or complementarily with other depth sensing techniques to enhance data accuracy. In robotic systems, they can calculate the depth required for robotic arms to grasp and manipulate objects. Another potential application involves creating environmental maps for floor navigation.
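The phase-to-distance relation behind iToF can be sketched in a few lines. Because the light travels to the target and back, a full 2*pi phase wrap corresponds to half a modulation wavelength, giving d = c * phase / (4 * pi * f_mod). The 100 MHz modulation frequency below is a hypothetical value for illustration, not a figure from the article:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_rad: float, f_mod_hz: float) -> float:
    """Distance from the measured phase shift of the modulated light.

    Round trip: d = c * phase / (4 * pi * f_mod)
    """
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum distance before the phase wraps (distance at phase = 2*pi)."""
    return C / (2 * f_mod_hz)

f_mod = 100e6  # hypothetical 100 MHz modulation frequency

print(unambiguous_range(f_mod))       # ~1.5 m before phase wrap-around
print(itof_distance(math.pi, f_mod))  # ~0.75 m at half a phase cycle
```

This also shows the basic trade-off: a higher modulation frequency improves depth resolution but shortens the unambiguous range, which is why practical systems often combine multiple modulation frequencies.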
 
onsemi's Hyperlux ID depth iToF sensors are designed to enhance machine vision applications through their advanced depth sensing capabilities, covering up to 10 times the distance of other iToF solutions. They offer multiple advantages: first, they provide high depth accuracy, which is essential for tasks requiring precise 3D mapping and object detection. Second, their ability to operate at high frame rates ensures reliable performance in dynamic environments, easily capturing fast-moving objects. Additionally, these sensors are optimized for low power consumption, making them suitable for battery-powered and multi-sensor systems.
 
The Hyperlux ID depth iToF sensors maintain robust performance even under challenging conditions such as low-light and high dynamic range environments, ensuring reliable operation across diverse industrial settings. This makes them an excellent choice for enhancing efficiency, accuracy, and safety in machine vision applications.
 
Depth calculation requires resources including FPGAs or MCUs, frame memory, and high-speed interfaces (for sensors exceeding 1 MP). With depth maps output directly from the sensor, fewer external devices are needed, resulting in lower computational requirements, reduced power consumption, and more relaxed interface speeds.

Image: Flowchart of depth calculation with an iToF sensor system: a laser illuminator emits light toward an object, the reflected light returns to the iToF sensor, and the sensor output feeds the depth calculation.

Image sensors designed for exceptional depth sensing and imaging

Taking onsemi's AF013X image sensor series as an example: the AF013X Smart iToF 1.2 MP CMOS sensor family is specifically engineered for superior depth sensing and imaging. These sensors combine a 1/3.2-inch optical format with stacked BSI global shutter technology and advanced 3.5 µm pixels. They deliver exceptional low-light and ambient-light performance, enhanced NIR response at 850 nm and 940 nm wavelengths (QE > 40%), dual laser operation for increased depth range, and laser eye safety monitoring functionality.
 
Key characteristics of Hyperlux ID depth iToF sensors include high precision, providing accurate distance measurements crucial for applications requiring precise 3D mapping and object detection. Additionally, their high frame rates (60-100 fps) capture fast-moving objects, ensuring reliable performance in dynamic environments. Their low power consumption, optimized for energy efficiency, makes them ideal for battery-powered and multi-sensor systems.
 
Furthermore, Hyperlux ID depth iToF sensors excel under demanding conditions, demonstrating outstanding performance in low-light and high dynamic range environments, ensuring dependable operation across various industrial settings. Their on-chip dual laser driver controls and modulation frequencies up to 200 MHz deliver precise and reliable depth measurements, while integrated on-chip laser eye safety thresholds guarantee safe operation in all environments.
 
The AF0130 sensor is equipped with on-chip depth processing ASIC, which rapidly calculates depth, confidence, and intensity maps from laser-modulated exposures, making it ideal for applications requiring fast and accurate depth data. The AF0131 sensor offers identical high performance but without on-chip depth processing functionality. This version is perfect for solutions preferring off-chip depth calculations, offering flexibility for customized processing requirements.
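For a sensor like the AF0131 that leaves depth processing to the host, the classic off-chip approach is four-phase demodulation: the scene is exposed at 0, 90, 180, and 270 degree demodulation offsets, and depth, confidence, and intensity maps are derived from the four raw frames. The sketch below is the textbook algorithm under that assumption, not onsemi's actual processing pipeline:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_four_phases(a0, a90, a180, a270, f_mod_hz):
    """Textbook 4-tap iToF demodulation (illustrative, not onsemi's pipeline).

    a0..a270: raw exposures at 0/90/180/270 degree demodulation offsets,
    arrays of the same shape. Returns (depth_m, confidence, intensity) maps.
    """
    i = a0.astype(float) - a180.astype(float)   # in-phase component
    q = a90.astype(float) - a270.astype(float)  # quadrature component

    phase = np.arctan2(q, i) % (2 * np.pi)      # wrapped phase in [0, 2*pi)
    depth = C * phase / (4 * np.pi * f_mod_hz)  # round-trip phase -> distance
    confidence = np.hypot(i, q) / 2.0           # modulation amplitude
    intensity = (a0 + a90 + a180 + a270) / 4.0  # average received light

    return depth, confidence, intensity
```

Running this per pixel over megapixel frames at video rate is exactly the workload that the AF0130's on-chip depth ASIC removes from the host processor.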
 
onsemi also provides the AF0130 evaluation kit - AF0130CSSM30SMKAH3-GEVK, which utilizes onsemi's innovative iToF sensor featuring a 1.2 MP 1/3-inch BSI global shutter. It works with the laser driver board to program VCSEL modulation settings for illumination. The sensor board can interface with the Demo3 evaluation board, and system configuration is accomplished using onsemi's popular DevSuite software, which provides comprehensive setup interfaces and flexibility for evaluating sensor features and capabilities.

Image: Product shot of a circuit board on a white background, featuring a large central chip, gold mounting holes, various connectors, and printed text including 'TUSOYO' and 'V1.0 2021-06-10'.

Machine vision and stereoscopic vision solutions supporting inspection and image recognition

Optical sensors can be employed for depth sensing, orientation detection, or providing additional robot capabilities such as inspection or image recognition. Multiple image sensors (IS) and IS processors may be found in different robotic subsystems, and they are the only sensing solution capable of color detection. Using optical sensors, systems can detect obstacles, enhance safety, or read out information (e.g., from barcodes).
 
Image sensors with a global shutter capture pixel data from the entire frame at the same instant, thus avoiding motion artifacts. This makes them particularly useful when clear images are required - for instance, when robots are moving and need the sharpest possible images. Rolling shutter sensors offer higher dynamic range, enabling better performance under poor or changing lighting conditions. Image sensors can also incorporate High Dynamic Range (HDR) technology, which helps produce clearer images in high-contrast scenes. HDR is essential for applications with uncontrolled lighting.
 
Machine vision equips machines with the ability to "see" and understand the visual world, using cameras and image processing software to extract information from images and videos, enabling machines to perform tasks that would otherwise require human visual inspection. onsemi offers multiple image sensor series, including the AR0234CS image sensor - a 1/2.6-inch 2.3 MP CMOS digital image sensor featuring a 1920 x 1200 active pixel array. It offers leading global shutter efficiency, superior low-light and IR performance, and supports auto exposure, windowing, and row/column skip modes.
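As a toy illustration of "extracting information from images", the sketch below locates the centroid of a bright feature in a synthetic grayscale frame sized to the AR0234CS's 1920 x 1200 active array. The threshold value and the patch position are arbitrary assumptions; real machine-vision pipelines use far more robust detectors (barcode decoders, trained classifiers, etc.):

```python
import numpy as np

def locate_bright_blob(frame: np.ndarray, threshold: int = 200):
    """Toy machine-vision step: return the (x, y) centroid of pixels
    at or above `threshold` in an 8-bit grayscale frame, or None if
    no pixel qualifies."""
    ys, xs = np.nonzero(frame >= threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic 1200 x 1920 frame with a bright 10 x 10 patch standing in
# for a detected feature such as a fiducial marker.
frame = np.zeros((1200, 1920), dtype=np.uint8)
frame[600:610, 900:910] = 255

print(locate_bright_blob(frame))  # (904.5, 604.5)
```

Even this trivial pipeline turns 2.3 million pixel values into a single actionable coordinate, which is the essence of what a machine-vision stage contributes to a robot's control loop.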
 
Another option, the AR0822 image sensor, is a 1/1.8-inch 8 MP CMOS digital image sensor with a 3840x2160 active pixel array. It supports rolling shutter, enhanced NIR response, and features on-board eHDR functionality with high sensitivity and low read noise. It also supports intelligent linearization to reduce motion artifacts and LED flicker.

Conclusion

Vision sensing technology has become a fundamental enabler for intelligent mobile robots to achieve advanced autonomous capabilities. By integrating high-performance image sensors, depth sensing technologies, and edge AI computing platforms, robots can not only perceive their surroundings in real time but also perform precise positioning, navigation, and object identification, further enhancing overall operational efficiency and safety. As AI algorithms, image processing chips, and sensor technologies continue to advance, onsemi's vision sensing solutions deliver exceptional resolution, lower latency, and reduced power consumption. These innovations will continue to support intelligent mobile robots in fields such as smart manufacturing, logistics warehousing, and service robotics, underpinning broader industrial advancement.

Article Tags

Global
Robotics
Internet of Things (IoT)
Artificial Intelligence (AI)
Machine Learning
