Vision-guided robot systems

A vision-guided robot (VGR) system is a robot fitted with one or more cameras that act as sensors, providing a secondary feedback signal that lets the robot controller move more accurately to a variable target position. VGR is rapidly transforming production processes by making robots highly adaptable and easier to implement, while dramatically reducing the cost and complexity of the fixed tooling previously associated with the design and setup of robotic cells, whether for material handling, automated assembly, agricultural applications,[1] life sciences, or other uses.[2]

In one classic, though dated, example of VGR in industrial manufacturing, the vision system (camera and software) locates products fed at random onto a recycling conveyor. The vision system reports the exact coordinates of the components spread beneath the camera's field of view, enabling the robot arm(s) to position the attached end effector (gripper) over a selected component and pick it from the conveyor belt. The conveyor may stop under the camera while the part's position is determined, or, if the cycle time allows, a component can be picked without stopping the conveyor: an encoder fitted to the conveyor tracks the moving component, and its feedback signal is used to keep the vision and motion control loops synchronized.
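The encoder-based tracking scheme described above can be sketched roughly as follows; all names, units, and values here are illustrative assumptions, not any vendor's actual API:

```python
# Illustrative sketch of encoder-synchronized conveyor tracking.
# The calibration constant and positions below are invented for the example.

MM_PER_COUNT = 0.05  # conveyor travel per encoder count (assumed calibration)

def current_part_position(x_at_detection_mm, count_at_detection, count_now):
    """Project a part's position downstream from where vision detected it,
    using the encoder delta accumulated since the image was taken."""
    travel_mm = (count_now - count_at_detection) * MM_PER_COUNT
    return x_at_detection_mm + travel_mm

def counts_until_pick(x_at_detection_mm, count_at_detection, count_now, pick_x_mm):
    """How many more encoder counts until the part reaches the pick point,
    which the motion controller can use to schedule the pick."""
    remaining_mm = pick_x_mm - current_part_position(
        x_at_detection_mm, count_at_detection, count_now)
    return remaining_mm / MM_PER_COUNT

# A part imaged at x = 120 mm when the encoder read 10_000 counts has
# travelled 25 mm by the time the encoder reads 10_500, i.e. x = 145 mm.
x = current_part_position(120.0, 10_000, 10_500)
```

The key point is that the vision result is stamped with the encoder count at exposure time, so the motion loop can extrapolate the part's position without re-imaging.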

Such functionality is now common in the field of vision-guided robotics (VGR). It is a fast-growing, rapidly evolving technology that has proved economically advantageous in countries with high manufacturing overheads and skilled-labor costs, reducing manual intervention, improving safety, increasing quality, and raising productivity, among other benefits.[3][4][5]

The expansion of vision-guided robotic systems is part of the broader growth within the machine vision market, which is expected to grow to $17.72 billion by 2028. This growth can be attributed to the increasing demand for automation and precision, as well as the broad adoption of smart technologies across industries.

Vision systems for robot guidance

Camera lens for machine vision

A vision system comprises a camera and a microprocessor or computer, with associated software. This is a very broad definition covering many different types of systems that solve a wide variety of tasks. Vision systems can be implemented in virtually any industry for any purpose: for quality control, checking dimensions, angles, colour, or surface structure, or for object recognition as used in VGR systems.

A camera can be anything from a standard compact camera system with an integrated vision processor to more complex laser sensors and high-resolution, high-speed cameras. Combinations of several cameras that build up a 3D image of an object are also available.
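For a rectified two-camera (stereo) pair, depth is recovered from the standard triangulation relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity of a matched feature. A minimal sketch, with illustrative numbers:

```python
def stereo_depth_mm(focal_px, baseline_mm, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d.
    focal_px: focal length in pixels; baseline_mm: separation of the two
    cameras; disparity_px: horizontal pixel shift of the same feature
    between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_mm / disparity_px

# Example numbers (invented): f = 800 px, B = 60 mm, d = 16 px -> Z = 3000 mm
z = stereo_depth_mm(800.0, 60.0, 16.0)
```

Note the inverse relation: nearby objects produce large disparities and are measured precisely, while depth resolution degrades with distance.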

Limitations of a vision system

Integrating a vision system always carries the risk that the camera fails to meet the expectations set for the system; in most cases this is caused by a lack of knowledge on the part of the integrator or machine builder. Many vision systems can be applied successfully to virtually any production activity, provided the user knows exactly how to set up the system parameters. That set-up, however, demands considerable expertise from the integrator, and the number of possibilities can make the solution complex. Lighting in industrial environments is another major downfall of many vision systems.

Overcoming lighting constraints with 3D vision

An advantage of 3D vision technology is its relative independence from lighting conditions. Unlike 2D systems, which rely on specific lighting for accurate imaging, 3D vision systems can perform reliably under a variety of lighting scenarios, because 3D imaging captures spatial information that is less sensitive to contrast and shadows than the intensity images 2D systems depend on.

In recent years, start-ups have appeared offering software that simplifies the programming and integration of these 3D systems, making them more accessible to industry. By leveraging 3D vision technologies, robots can navigate and perform tasks in environments with dynamic or uncontrolled lighting, which significantly expands their applications in real-world settings.

VGR approaches

Vision guidance systems typically fall into two categories: stationary camera mount, or robot-arm-mounted camera. A stationary camera is typically mounted on a gantry or other structure from which it can observe the entire robot cell area. This approach has the advantage that the camera's position is fixed and known, providing a stable point of reference for all activity within the cell. Its disadvantages are the additional infrastructure cost and the occasional obstruction of its view by the robot arm. It also typically requires large image files (5 megapixels or more), since the image must cover the entire work area.
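In practice, the stationary camera's stable point of reference is established by a one-time calibration that maps image pixels to robot base-frame coordinates. For a camera looking straight down at a planar work area, this can be as simple as an affine transform; the calibration values below are invented for illustration:

```python
import numpy as np

# Assumed result of a one-time calibration for a downward-facing fixed
# camera over a planar work area: a 2x2 linear part (scale and axis flip)
# plus a translation offset. All numbers are made up for the example.
A = np.array([[0.2, 0.0],
              [0.0, -0.2]])   # mm per pixel; image y axis flipped vs. robot y
t = np.array([350.0, 120.0])  # robot-frame XY of image pixel (0, 0), in mm

def pixel_to_robot(u, v):
    """Map an image pixel (u, v) to robot base-frame XY in millimetres."""
    return A @ np.array([u, v], dtype=float) + t

xy = pixel_to_robot(640, 480)  # centre pixel of a hypothetical 1280x960 image
```

A real cell would fit A and t by least squares from several known pixel/robot point pairs, and would use a full homography or 3D hand-eye calibration when the camera is not perpendicular to the work plane.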

These may be 2D or 3D cameras, although the vast majority of installations (as of 2019) use 2D machine vision cameras offered by companies such as Keyence, Basler, Sick, Datalogic, Cognex, and many others. Emerging players such as Leopard Imaging, Pickit3D, Zivid, and Photoneo offer 3D cameras for stationary use, and Cognex acquired EnShape to add 3D capabilities to its lineup as well. Stationary-mount 3D cameras create large image files and point clouds that require substantial computing resources to process.

A camera mounted on a robot arm has its own advantages and disadvantages. Some 3D cameras are simply too large to be practical on a robot, but Pickit 3D's Xbox cameras and 2D cameras such as Robotiq's wrist camera are compact and light enough not to meaningfully reduce the robot's available working payload. An arm-mounted camera has a smaller field of view and can operate successfully at lower resolution, even VGA, because it surveys only a fraction of the work cell at any point in time, which leads to faster image processing.

However, arm-mounted cameras, whether 2D or 3D, typically suffer from XYZ disorientation: because they are continually moving, they have no way of knowing the robot arm's position. The typical workaround is to interrupt each robot cycle long enough for the camera to take another image and reorient itself. This pause is visible in essentially all published videos of arm-mounted camera performance, whether 2D or 3D, and can as much as double cycle times.
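A back-of-the-envelope accounting shows how the stop-and-reimage workaround can roughly double cycle time; the durations below are made-up round numbers, not measurements:

```python
# Illustrative cycle-time accounting for an arm-mounted camera that must
# stop, settle, and re-image each cycle. All durations are invented.

def cycle_time_s(move_s, grip_s, settle_s=0.0, reimage_s=0.0):
    """One pick cycle: move toward the part, optionally settle the arm and
    take a fresh image to reorient, then grip."""
    return move_s + settle_s + reimage_s + grip_s

continuous     = cycle_time_s(move_s=1.0, grip_s=0.5)                # no pause
stop_and_shoot = cycle_time_s(move_s=1.0, grip_s=0.5,
                              settle_s=0.5, reimage_s=1.0)           # with pause
ratio = stop_and_shoot / continuous  # 2.0 with these example numbers
```

The actual penalty depends on vibration settle time and image acquisition/processing time relative to the motion itself, but the structure of the overhead is the same.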

Pickit 3D's Xbox camera has been arm-mounted for some applications. While capable of more complex 3D tasks such as bin picking, it still requires the stop-take-a-picture reorientation described above; its 3D awareness does not help with that problem.

Visual Robotics claims to eliminate this cycle interruption with their "Vision-in-Motion" capabilities. Their system combines a 2D imager with internal photogrammetry and software to perform 3D tasks at high speed, owing to the smaller image files. The company claims a pending patent covering techniques for ensuring the camera knows its location in 3D space without stopping to get reoriented, leading to substantially faster cycle times. While much faster than other 3D approaches, it is not likely to be able to handle the more complex 3D tasks a true stereo camera can. On the other hand, many 3D applications require relatively simple object identification easily supported by the technique. To date, their ability to visually pick objects in motion (e.g. items on a conveyor) using an arm-mounted camera appears to be unprecedented.

Conversely, inbolt offers a platform-independent 3D-vision-based robotic guidance system that integrates a 3D camera, advanced algorithms, and what the company describes as the fastest point-cloud-processing AI currently available. The system is designed to handle high-frequency data efficiently, allowing real-time tracking: the robot can adjust to variations in the position and orientation of objects within its field of view. This adaptability makes it well suited to unstructured and unplanned environments where precision and flexibility are essential, and by enabling robots to operate without mechanical constraints it also eliminates the need for expensive jigs and fixtures.

These new solutions are changing the manufacturing paradigm by catering to the evolving needs of modern production processes.

VGR systems benefits

Traditional automation means serial production with large batch sizes and limited flexibility. Complete automation lines are usually built around a single product, or possibly a small family of similar products that can run on the same line. If a component is changed or a completely new product is introduced, this usually forces large changes to the automation process: in most cases new component fixtures are required, with time-consuming setup procedures. If components are delivered to the process by traditional hoppers and vibratory feeders, new bowl-feeder tooling or additional bowl-feeder tops are required. When different products must be manufactured on the same process line, the cost of pallets, fixtures, and bowl feeders can be a large part of the investment. Other considerations include space constraints, storage of change parts and spare components, and changeover time between products.

VGR systems can run side by side with very little mechanical setup; in the most extreme cases a gripper change is the only requirement, and the need to position components at a set pick-up position is eliminated. With its vision system and control software, a VGR system can handle different types of components: parts of various geometries can be fed to the system in any random orientation and be picked and placed without any mechanical changes to the machine, resulting in quick changeover times. Other features and benefits of VGR systems are:[6]

  • Switching between products and batch runs is software-controlled and very fast, with no mechanical adjustments.
  • High residual value, even if production is changed.
  • Short lead times and short payback periods.
  • High machinery efficiency, reliability, and flexibility.
  • Possibility to integrate a majority of secondary operations, such as deburring, clean blowing, washing, and measuring.
  • Reduced manual work.
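The software-controlled changeover amounts to selecting a different product "recipe" (vision template plus gripper parameters) rather than swapping physical tooling. A hypothetical sketch, with all names and values invented:

```python
# Hypothetical product recipes illustrating software-only changeover:
# switching batches loads a new vision template and gripper parameters
# instead of exchanging fixtures or bowl-feeder tooling.

RECIPES = {
    "bracket_a": {"vision_model": "bracket_a.tpl",
                  "grip_width_mm": 42.0, "grip_force_n": 30.0},
    "bracket_b": {"vision_model": "bracket_b.tpl",
                  "grip_width_mm": 55.0, "grip_force_n": 45.0},
}

def changeover(product_id):
    """Return the parameter set the cell loads for a new batch run."""
    try:
        return RECIPES[product_id]
    except KeyError:
        raise ValueError(f"no recipe for {product_id!r}") from None

params = changeover("bracket_b")  # the entire "retooling" step
```

In a real cell the recipe would also carry pick/place waypoints and inspection tolerances, but the principle is the same: changeover is a lookup, not a mechanical rebuild.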

References

  1. ^ Shafiekhani, Ali; Kadam, Suhas; Fritschi, Felix B.; DeSouza, Guilherme N. (2017-01-23). "Vinobot and Vinoculer: Two Robotic Platforms for High-Throughput Field Phenotyping". Sensors. 17 (1): 214. Bibcode:2017Senso..17..214S. doi:10.3390/s17010214. PMC 5298785. PMID 28124976.
  2. ^ Anandan, Tanya M. (2013-04-02). Robotic Industries Association. http://www.robotics.org/content-detail.cfm?content_id=3992
  3. ^ Anandan, Tanya M. (2015-06-25). "Intelligent Robots: A Feast for the Senses". Robotic Industries Association. http://www.robotics.org/content-detail.cfm?content_id=5530
  4. ^ Zens, Richard (2006). "Vision-guided robot system trims parts". Vision Systems Design. PennWell Corporation (Tulsa, OK). http://www.vision-systems.com/articles/article_display.html?id=261912
  5. ^ Perks, Andrew (2004). "Vision Guided Robots". Special Handling Systems. RNA Automation Ltd (UK). http://www.rna-uk.com/products/specialisthandling/visionguidedrobots.html
  6. ^ Perks, Andrew (2006). "Advanced vision guided robotics provide 'future-proof' flexible automation". Assembly Automation. Vol. 26, No. 3. Emerald (Bradford). pp. 216–217.