Vision System for 3D Model Reconstruction Using Low-Cost Hardware Components and Structured Light Reconstruction Algorithms in LabVIEW

"Thanks to the flexibility and expandability of LabVIEW vision software, we developed a new 3D scanning system that satisfies the accuracy requirements of our customer in a short time, and left space to improve the system."

- Luca Bigi, Asterisco Tech S.r.l.

The Challenge:

Asterisco Tech needed a vision system to integrate into retail machines for accurate 3D model reconstruction of small real-world objects, and wanted to use software reconstruction algorithms to obtain the desired accuracy on low-cost hardware.

The Solution:

We chose LabVIEW software for shape identification, stereo system calibration, and 3D point cloud reconstruction from the disparity map, and we integrated new algorithms that calculate disparity using the structured light method instead of the block matching method.

A customer asked Asterisco Tech to implement a vision system to integrate into retail machines for accurate 3D model reconstruction of small real-world objects. Equipment pricing restrictions made it impossible to use high-end scanning hardware, so we needed to rely on software reconstruction algorithms to obtain the desired accuracy.

We must scan and reconstruct objects that are uniformly colored and have no texture, so we cannot use the standard block matching algorithm, which relies on surface texture to match corresponding points between the two camera images. Because of the accuracy required, we explored alternative reconstruction methods. The objects we must scan are usually opaque, or we can spray them to make their surfaces opaque. This is an ideal situation for structured light reconstruction techniques.

We used simple and inexpensive hardware: two USB 1280 x 1024 cameras, one 800 x 600 DLP projector, and a two-axis motorized rotating table that lets us acquire images of all sides of the object. We control the rotating table with an inexpensive Arduino board that drives the stepper motors.
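
LabVIEW handles the table control in the actual system. Purely as an illustration of such an interface, the following Python sketch drives the table by sending ASCII move commands to the Arduino over a serial port with the pyserial library; the port name, baud rate, command grammar, and step counts are assumptions for illustration, not the real firmware protocol.

    import time
    import serial  # pyserial

    # Hypothetical serial link to the Arduino stepper controller;
    # port name, baud rate, and command grammar are assumptions.
    table = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)
    time.sleep(2)  # the Arduino resets when the port opens

    def move_axis(axis, steps):
        # Send one ASCII command such as "A 266\n" and wait for "OK".
        table.write(f"{axis} {steps}\n".encode("ascii"))
        reply = table.readline().decode("ascii").strip()
        if reply != "OK":
            raise RuntimeError(f"table error on axis {axis}: {reply!r}")

    # Example: 12 views around the vertical axis, 30 degrees apart
    # (3200 microsteps per revolution assumed).
    for _ in range(12):
        move_axis("A", 3200 // 12)
        # ...project the patterns and capture the stereo images here...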

We needed an accurate calibration of the stereo system because of the low precision of the hardware, the mechanical support that holds all the components, and the high accuracy required for the reconstructed model. The calibration process tells the system the relative positions of the cameras and the distortions produced by their lenses. After this calibration, the system knows the exact position of each pixel of the cameras, and we can triangulate the scanned points. The calibration consists of acquiring a known pattern (a dot matrix) with both cameras. We move the pattern to different positions so that the camera sensors are fully covered, which compensates for the optical distortions. Moreover, by acquiring the pattern with both cameras at the same time and tilting it to different angles, we can determine the relative positions of the cameras. We can easily accomplish this process using NI Vision tools.
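
We perform this step with NI Vision inside LabVIEW. For readers who want to experiment outside LabVIEW, an equivalent stereo calibration can be sketched in Python with OpenCV, whose circle-grid detector fits a dot-matrix target; the grid size, dot spacing, and file paths below are illustrative assumptions.

    import glob
    import cv2
    import numpy as np

    GRID = (7, 6)      # dots per row and column of the target (assumption)
    SPACING = 5.0      # dot spacing in mm (assumption)

    # Ideal 3D dot positions in the target's own coordinate frame.
    obj = np.array([[x, y, 0] for y in range(GRID[1]) for x in range(GRID[0])],
                   np.float32) * SPACING

    objpts, lpts, rpts = [], [], []
    pairs = zip(sorted(glob.glob("calib/left_*.png")),
                sorted(glob.glob("calib/right_*.png")))
    for lf, rf in pairs:
        L = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
        Rimg = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
        okL, cL = cv2.findCirclesGrid(L, GRID, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
        okR, cR = cv2.findCirclesGrid(Rimg, GRID, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
        if okL and okR:                # keep views where both cameras see the target
            objpts.append(obj)
            lpts.append(cL)
            rpts.append(cR)

    size = L.shape[::-1]               # image size as (width, height)
    # Per-camera intrinsics: focal lengths and lens distortion.
    _, K1, d1, _, _ = cv2.calibrateCamera(objpts, lpts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(objpts, rpts, size, None, None)
    # Stereo extrinsics: rotation R and translation T between the cameras.
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        objpts, lpts, rpts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    print(f"stereo reprojection error: {rms:.3f} px")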

Once the system is calibrated, we can start acquiring images to create a 3D model of the scanned object. The acquisition process includes three steps. First, for every position of the rotating table, we acquire several images of the object and generate a partial point cloud. Second, we merge all the partial point clouds into a single point cloud of the entire object. Third, we create a mesh from the point cloud.

The first step requires the biggest effort, because the accuracy of the acquisition system depends on it and it is where we implement the structured light reconstruction method that we had to create from scratch. In general, all structured light techniques are based on projecting a known pattern onto the surface of the object to be scanned, acquiring the illuminated scene with the two cameras of the stereo system, and identifying the known pattern points in the images produced by the two cameras.

We chose a structured light approach based on a sequence of images acquired for each scene, where the object is illuminated with a different pattern in each frame. Our illumination sequence consists of Gray code projected patterns followed by phase shifting patterns. The projector first illuminates the object with vertical stripes that become narrower in each acquired frame, following a Gray code sequence; it then illuminates the object with narrow vertical stripes that shift by a single pixel in each acquired frame. This technique makes it possible to calculate the disparity map of the scene by assigning a unique ID to each pixel of the image acquired by one camera and finding the same ID in the image acquired by the other camera with sub-pixel accuracy. Once the disparity map is calculated, we can use NI Vision to generate the partial point cloud of the scene.
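
To make the decoding idea concrete (this is a sketch of the technique, not our LabVIEW implementation), the following Python fragment recovers the per-pixel projector IDs from a Gray code image stack and matches them across the rectified stereo pair to produce a disparity map. The phase shifting frames, which refine the match to sub-pixel accuracy, are omitted for brevity; for a rectified pair with focal length f and baseline B, depth follows Z = f * B / d for disparity d.

    import numpy as np

    def decode_gray_stack(frames, white, black):
        # frames: H x W stripe images, coarsest Gray code bit first;
        # white/black: full-on and full-off frames used for thresholding.
        thresh = (white.astype(np.float32) + black.astype(np.float32)) / 2
        bits = [f > thresh for f in frames]           # one bit per frame
        # Gray -> binary: b0 = g0, b_i = b_(i-1) XOR g_i.
        acc = bits[0]
        code = acc.astype(np.uint32)
        for g in bits[1:]:
            acc = np.logical_xor(acc, g)
            code = (code << 1) | acc.astype(np.uint32)
        return code                                   # projector column IDs

    def disparity_from_ids(ids_left, ids_right):
        # On a rectified pair, pixels in the same row that decode to the
        # same projector ID correspond; their column offset is the disparity.
        h, w = ids_left.shape
        disp = np.zeros((h, w), np.float32)
        for y in range(h):
            right_col = {int(c): x for x, c in enumerate(ids_right[y])}
            for x, c in enumerate(ids_left[y]):
                xr = right_col.get(int(c))
                if xr is not None:
                    disp[y, x] = x - xr               # depth Z = f * B / d
        return disp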

We perform the second step by roughly aligning the partial point clouds based on the known movements of the rotating table and then refining the alignment with an algorithm that minimizes the error between two point clouds. In this way, we obtain a uniform point cloud of the entire scanned object, without the holes that occlusions unavoidably produce in any single scene.
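
An error-minimizing refinement of this kind is commonly implemented as iterative closest point (ICP). Assuming ICP for illustration, here is a minimal point-to-point variant in NumPy/SciPy that polishes the rough pose given by the known table movement; the iteration count and tolerance are arbitrary choices.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        # Least-squares rotation R and translation t mapping src onto dst.
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def icp(source, target, init=np.eye(4), iters=50, tol=1e-6):
        # Refine a rough alignment, e.g. the pose from the table rotation.
        src = source @ init[:3, :3].T + init[:3, 3]
        tree = cKDTree(target)
        T = init.copy()
        prev_err = np.inf
        for _ in range(iters):
            dist, idx = tree.query(src)            # closest-point matches
            R, t = best_rigid_transform(src, target[idx])
            src = src @ R.T + t
            step = np.eye(4)
            step[:3, :3], step[:3, 3] = R, t
            T = step @ T
            err = dist.mean()
            if abs(prev_err - err) < tol:          # error stopped improving
                break
            prev_err = err
        return T, err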

In the third step, we calculate a continuous mesh by applying a ball pivoting algorithm to the point cloud of the object. At the end of the reconstruction process, the scanner outputs a 3D model that we can export in standard formats, such as PLY or STL, for further processing in commercial 3D CAD software.
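
Ball pivoting is also available in open-source tools. Purely as an illustration of this step (not the LabVIEW implementation), the sketch below meshes a merged point cloud with Open3D and exports PLY and STL; the file names and pivot radii are assumptions.

    import numpy as np
    import open3d as o3d

    # Load the merged point cloud of the object (file name is an assumption).
    pcd = o3d.io.read_point_cloud("merged_scan.ply")
    pcd.estimate_normals()                 # ball pivoting needs normals

    # Derive pivot ball radii from the typical point spacing.
    r = np.mean(pcd.compute_nearest_neighbor_distance())
    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
        pcd, o3d.utility.DoubleVector([r, 2 * r, 4 * r]))

    # Export in standard formats for downstream CAD tools.
    o3d.io.write_triangle_mesh("scan.ply", mesh)
    mesh.compute_triangle_normals()        # STL export needs face normals
    o3d.io.write_triangle_mesh("scan.stl", mesh)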

The system can now scan objects within a volume of about 15 x 15 x 15 cm at a resolution finer than 50 µm. However, we can easily increase the resolution of the scan by using higher resolution cameras and a higher resolution projector. Similarly, we can scan a larger volume by modifying the mechanical structure of the system and using different optical lenses.

Results

Thanks to the flexibility and expandability of LabVIEW vision software, we developed a new 3D scanning system that satisfies the accuracy requirements of our customer in a short time, and left space to improve the system.

Author Information:

Luca Bigi
Asterisco Tech S.r.l.
Via C. Bozza 14
Corciano 06073
Italy
Tel: +39 0757825790
Fax: +39 0757823791
luca.bigi@asteriscotech.com

Figure 1. Detail of the Optical System Used
Figure 2. Screenshot of the User Interface During a Scan