Vision Guided Robotics

Publish Date: Feb 04, 2013

Overview

With LabVIEW and the ImagingLab Robotics Library for DENSO, you can program vision and robotics systems within a single application. This paper covers calibrating the vision and robot systems so that both use the same coordinate system.

Table of Contents

  1. Background
  2. Calibration
  3. Relative and Absolute Movements
  4. Parallel Processing
  5. Next Step

1. Background

This paper is part of the ImagingLab Robotics Library for DENSO Reference Guide and assumes you are familiar with the basic use of the library. For a review of these topics and an introduction to the library, follow the preceding link to the reference guide.

 


2. Calibration

The bridge between the vision system and the robot is a shared coordinate system. The vision system finds a part and reports its location, but to instruct the robot to move to that location, the system must convert the coordinates into units the robot accepts. Calibration allows the vision system to report positions in real-world units, such as millimeters, that map directly to the robot’s Cartesian coordinates. A common calibration method uses a grid of dots; for more in-depth information on image calibration, refer to the NI Vision Concepts Manual. You can use this grid of dots both to calibrate the vision system and to calibrate the robot to the vision system. When calibrating the vision system, you must select an origin to define the x-y plane. Usually one of the corner dots is selected as the origin, and then either a row or a column is defined as the x-axis.
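To illustrate the mapping that calibration produces, the following minimal Python sketch fits an affine transform from the pixel locations of detected grid dots to their known real-world positions. The function names and the plain affine model are illustrative assumptions; NI Vision’s grid calibration also corrects lens and perspective distortion, which this sketch ignores.

    import numpy as np

    def fit_pixel_to_world(pixel_pts, world_pts):
        # Least-squares affine fit mapping pixel (u, v) to real-world (x, y)
        # in mm. pixel_pts and world_pts are (N, 2) arrays of matched dots.
        n = len(pixel_pts)
        A = np.hstack([np.asarray(pixel_pts, dtype=float), np.ones((n, 1))])
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(world_pts, dtype=float),
                                     rcond=None)
        return coeffs  # shape (3, 2): world = [u, v, 1] @ coeffs

    def pixel_to_world(coeffs, u, v):
        # Convert one pixel location to calibrated real-world millimeters.
        return np.array([u, v, 1.0]) @ coeffs

    # Example: four dots of a 10 mm grid detected at these pixel locations.
    px = [[102.0, 98.0], [502.0, 101.0], [100.0, 499.0], [504.0, 503.0]]
    mm = [[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]]
    coeffs = fit_pixel_to_world(px, mm)
    print(pixel_to_world(coeffs, 303.0, 300.0))  # roughly (5.0, 5.0) mm

With a transform like this in hand, any match location the vision code reports can be expressed in millimeters relative to the chosen origin dot, which is the form the robot’s Cartesian coordinates require.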

The robot’s method for creating a coordinate system is similar, so you can use the same dot you selected as the vision system’s origin as the robot’s origin. Move the robot to that dot and store the location as a position variable on the robot controller; then move the robot along the x-axis and to a third point in the x-y plane, storing those locations as position variables as well. You can store position variables with either LabVIEW or the DENSO teaching pendant. Once you have stored all three, you can use the teaching pendant to autocalculate a coordinate system, or work area, from them. To use the autocalculate tool, navigate from the home screen to Arm»Auxiliary Functions»Work»Auto Calculate and select the position variables that correspond to the origin, x-axis, and x-y plane positions stored previously.

Once the autocalculation is complete and the newly created work is set as the current one, you can pass the calibrated vision system’s positions directly into the robot’s Cartesian move VIs. See Figure 10 for a simple example of passing the coordinates. The results passed to the robot move VIs also include the angle of the match, which is an output of the vision system’s geometric pattern matching. Alternatively, you can input the parameters of the coordinate system directly, using either the DENSO teaching pendant or LabVIEW, rather than using the built-in autocalculation. See the DENSO manual for more information on calibration methods for the robot.
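For reference, the three-point construction that the Auto Calculate tool performs can be sketched in Python as follows. This is the generic math behind building a frame from an origin, an x-axis point, and an x-y plane point; the function name is illustrative, and DENSO’s internal routine may differ in detail.

    import numpy as np

    def work_frame_from_three_points(origin, x_point, plane_point):
        # Build a coordinate frame from three taught positions: the origin,
        # a point along the desired x-axis, and any third point in the
        # desired x-y plane. Returns the origin and a 3x3 rotation whose
        # columns are the x, y, and z unit axes.
        o, px, pp = (np.asarray(p, dtype=float)
                     for p in (origin, x_point, plane_point))
        x_axis = px - o
        x_axis /= np.linalg.norm(x_axis)
        z_axis = np.cross(x_axis, pp - o)   # normal to the taught plane
        z_axis /= np.linalg.norm(z_axis)
        y_axis = np.cross(z_axis, x_axis)   # completes a right-handed frame
        return o, np.column_stack([x_axis, y_axis, z_axis])

    # Example: three positions (in mm) taught on the calibration grid.
    origin, rotation = work_frame_from_three_points(
        [350.0, 100.0, 40.0], [450.0, 100.0, 40.0], [350.0, 200.0, 40.0])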

 


3. Relative and Absolute Movements

In most cases, a camera has a fixed location relative to the robot. Once you have calibrated the robot and vision system together, you can input the calibrated position output by the vision code directly into the robot move VIs; this approach uses absolute movements. In other cases, the camera is fixed to the end effector or some other moving device. Because the camera’s view changes as it moves, you must either update the calibration with each view or use relative movements. With relative movements, the target position is given as an offset from the current position. For example, if the camera needs the target centered in the frame on each acquisition, then whenever the target is off-center, the image results command the robot to move by a relative amount until the target is recentered. You can also use a hybrid system in which a fixed camera provides absolute movements to pick up parts and move them to the assembly area, while a second, moving camera provides precise relative guidance to the final position.
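A relative-movement servo step of the kind described above reduces to a few lines. In this minimal Python sketch, the scale factor, tolerance, and pixel values are illustrative assumptions; in practice, the millimeters-per-pixel scale comes from the calibration at the working distance.

    MM_PER_PIXEL = 0.05   # assumed scale at the working distance
    TOLERANCE_MM = 0.1    # done when the target is this close to center

    def centering_offset(target_px, center_px):
        # Relative (dx, dy) move in mm that would recenter the target.
        dx = (target_px[0] - center_px[0]) * MM_PER_PIXEL
        dy = (target_px[1] - center_px[1]) * MM_PER_PIXEL
        return dx, dy

    # Each acquisition: if the match is off-center, command a relative move.
    dx, dy = centering_offset(target_px=(652, 470), center_px=(640, 480))
    if max(abs(dx), abs(dy)) > TOLERANCE_MM:
        print(f"relative move: dx={dx:+.2f} mm, dy={dy:+.2f} mm")

The step repeats on each acquisition until the offset falls within tolerance, at which point the target is centered and the final position is reached.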

 


4. Parallel Processing

The ImagingLab Robotics Library for DENSO has a sequential API in which commands execute in the order they are programmed, but in many applications, the robot control code is not the only code that needs to run. Vision acquisition and processing, human machine interfaces (HMIs), alarm management, control of feeding devices, and other communication processes are all tasks that can run in parallel with the robot control. LabVIEW offers many architectures capable of handling such tasks, such as Master-Slave and Producer-Consumer, and you can select one based on your specific application. In vision-guided robotics applications, image acquisition and processing do not need to run in line with the robot commands, so a parallel processing architecture allows these functions to execute at the same time as the robot commands and other control processes. This keeps the robot in continual motion because it does not have to wait for the vision code to complete before moving. Once the robot has left the viewing area, you can acquire a new image and run the geometric pattern matching and inspection while the previous part is assembled or moved to its place location. Hence, once the previous part has been placed, the new part’s location is ready to be sent to the robot move VIs. The following is a sample flowchart for the vision code that runs in parallel with the robot VIs.
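To make the hand-off concrete outside of LabVIEW, the following minimal Python sketch implements the same producer-consumer pattern: a vision thread locates the next part while the main loop keeps the robot moving. The acquisition, matching, and move functions are stand-ins, and the timings are invented for illustration.

    import queue
    import threading
    import time

    def acquire_image():                  # stand-in for the camera grab
        time.sleep(0.1)
        return "frame"

    def pattern_match(image):             # stand-in for geometric matching
        return 120.0, 45.0, 12.5          # x (mm), y (mm), angle (deg)

    def move_and_place(x, y, angle):      # stand-in for the robot move VIs
        time.sleep(0.3)

    part_locations = queue.Queue(maxsize=1)   # hand-off between the loops

    def vision_producer():
        # Locate the next part while the robot handles the previous one.
        while True:
            x, y, angle = pattern_match(acquire_image())
            part_locations.put((x, y, angle))  # blocks until consumed

    threading.Thread(target=vision_producer, daemon=True).start()

    for _ in range(5):                        # robot loop: the consumer
        x, y, angle = part_locations.get()    # location is already waiting
        move_and_place(x, y, angle)           # vision runs in parallel

Because the queue already holds the next part’s coordinates by the time the robot finishes placing the previous part, the robot never idles waiting on image processing, mirroring the behavior described above.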

 


5. Next Step

View the data sheet, product information, and webcasts on calibration and programming pick-and-place movements at the following link.

>> ImagingLab Robotics Library for DENSO

