A Layered Approach to Designing Robot Software

Publish Date: Apr 10, 2012

Table of Contents

  1. Driver Layer
  2. Platform Layer
  3. Algorithm Layer
  4. User Interface Layer

Robot software architectures are typically a hierarchical set of control loops, ranging from high-level mission planning on high-end computing platforms down to motion-control loops closed on field-programmable gate arrays (FPGAs). In between are other loops that handle path planning, trajectory generation, obstacle avoidance, and many other responsibilities. These control loops may run at different rates on different computing nodes, including desktop and real-time operating systems as well as custom processors with no OS at all.
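
As a rough illustration of this multirate structure, the sketch below runs two hypothetical loops at different periods: a fast motion-control loop and a slow mission-planning loop. The article's own examples are LabVIEW block diagrams, so this Python sketch is only a language-agnostic picture of the idea; the loop names, rates, and threading approach are illustrative assumptions, and in a real system the fast loop would typically run on an FPGA or real-time target rather than in host threads.

    import threading
    import time

    def motion_control_loop(stop_event, period_s=0.001):
        """Hypothetical fast loop: closes a low-level motion loop at roughly 1 kHz."""
        while not stop_event.is_set():
            # read encoders, update the controller, write motor commands (omitted)
            time.sleep(period_s)

    def mission_planning_loop(stop_event, period_s=0.5):
        """Hypothetical slow loop: re-evaluates the mission plan a few times per second."""
        while not stop_event.is_set():
            # evaluate goals, update the waypoint list (omitted)
            time.sleep(period_s)

    if __name__ == "__main__":
        stop = threading.Event()
        loops = [threading.Thread(target=motion_control_loop, args=(stop,)),
                 threading.Thread(target=mission_planning_loop, args=(stop,))]
        for t in loops:
            t.start()
        time.sleep(2.0)   # let both loops run briefly, then shut them down
        stop.set()
        for t in loops:
            t.join()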

At some point, the pieces of the system must come together. Often, this is accomplished by predetermining very simple interfaces between the software and the platform, perhaps as simple as controlling and monitoring direction and speed. Sharing richer sensor data across the layers of the software stack is valuable, but it comes with considerable integration pain. Each engineer or scientist involved in designing a robot brings a different view of the world, and an architecture that works well for the computer scientist may not work well for, say, the mechanical engineer.

The proposed software architecture for mobile robotics, shown in Figure 1, takes the form of a three- to four-layer system. Each layer depends only on the specific system, hardware platform, or end goal of the robot and remains blind to the implementation details of the layers above and below it. A typical robot's software contains components in the driver, platform, and algorithm layers; the user interface layer is needed only when the application involves some form of user interaction, so fully autonomous implementations may omit it.

Figure 1. Robotics Reference Architecture
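
One way to picture this layering in code is to give each layer a narrow interface and let it depend only on the layer directly below it. The following Python sketch is a hypothetical illustration of that idea (the class and method names are not from the article, and the implementation described here is built from LabVIEW block diagrams); it shows how the algorithm layer can be written without any knowledge of how the driver layer produces its readings.

    from abc import ABC, abstractmethod

    class DriverLayer(ABC):
        """Raw I/O boundary: sensor readings in, actuator signals out."""
        @abstractmethod
        def read_range_sensors(self) -> list[float]: ...
        @abstractmethod
        def set_wheel_velocities(self, left: float, right: float) -> None: ...

    class PlatformLayer:
        """Knows the physical configuration; builds a robot-level view of the data."""
        def __init__(self, driver: DriverLayer):
            self._driver = driver          # depends only on the driver interface
        def obstacle_distances_m(self) -> list[float]:
            return self._driver.read_range_sensors()
        def drive(self, speed: float, heading: float) -> None:
            # a real implementation would apply the steering model here
            self._driver.set_wheel_velocities(speed, speed)

    class AlgorithmLayer:
        """High-level behavior; sees only the platform layer."""
        def __init__(self, platform: PlatformLayer):
            self._platform = platform
        def step(self) -> None:
            if min(self._platform.obstacle_distances_m(), default=1e9) < 0.5:
                self._platform.drive(0.0, 0.0)   # stop near obstacles
            else:
                self._platform.drive(0.3, 0.0)   # otherwise keep driving forward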

In this specific example, the architecture represents an autonomous mobile robot with a manipulator that is designed to execute tasks including path planning, obstacle avoidance, and mapping. This type of robot might be used in several real-world applications, including agriculture, logistics, or search and rescue. The onboard sensors include encoders, an inertial measurement unit (IMU), a camera, and several sonar and infrared (IR) sensors. Sensor fusion combines the data from the encoders and IMU for localization and to build a map of the robot's environment. The camera identifies objects for the onboard manipulator to pick up, and the position of the manipulator is controlled by kinematic algorithms executing in the platform layer. The sonar and IR sensors are used for obstacle avoidance. Finally, a steering algorithm controls the mobility of the robot, which might run on wheels or treads. The robots shown in Figure 2 illustrate this kind of mobile platform.

Figure 2. Mobile Robots Designed by SuperDroid Robots
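
As a hint of what the sensor-fusion step might look like, the sketch below blends a gyro-integrated heading with an encoder-derived heading using a simple complementary filter and then dead-reckons the robot's position. This is an illustrative assumption, not the article's actual fusion algorithm, and the filter constant is arbitrary.

    import math

    def fuse_heading(enc_heading_rad, gyro_rate_rad_s, prev_heading_rad, dt, alpha=0.98):
        """Complementary filter sketch: gyro integration is smooth but drifts,
        the encoder-derived heading is noisy but drift-free, so blend the two."""
        gyro_heading = prev_heading_rad + gyro_rate_rad_s * dt
        fused = alpha * gyro_heading + (1.0 - alpha) * enc_heading_rad
        # wrap the result to [-pi, pi)
        return (fused + math.pi) % (2.0 * math.pi) - math.pi

    def dead_reckon(x, y, heading_rad, wheel_dist_m):
        """Advance the pose estimate by the distance traveled along the current heading."""
        return (x + wheel_dist_m * math.cos(heading_rad),
                y + wheel_dist_m * math.sin(heading_rad))

    # Example: one 10 ms update with a small gyro rotation and 5 cm of travel.
    h = fuse_heading(enc_heading_rad=0.10, gyro_rate_rad_s=0.2, prev_heading_rad=0.09, dt=0.01)
    print(dead_reckon(0.0, 0.0, h, wheel_dist_m=0.05))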

Developers can implement the layers of this mobile robot architecture using NI LabVIEW system design software. LabVIEW is used to design sophisticated robotic applications, from robotic arms to autonomous vehicles, and it increases the productivity of engineers and scientists by abstracting I/O and integrating with a wide variety of hardware platforms. A commonly used hardware platform for robotics is NI CompactRIO, which combines an integrated real-time processor with FPGA technology. The LabVIEW platform includes built-in functionality for communicating data between layers, sending data across a network, and displaying it on a host PC.

1. Driver Layer

As the name suggests, the driver layer handles the low-level driver functions required to operate the robot. The components in this layer depend on the sensors and actuators used in the system as well as the hardware the driver software runs on. In general, blocks in this layer take actuator setpoints in engineering units (positions, velocities, forces, and so on) and generate the low-level signals that produce the corresponding actuation, potentially including code to close the loop over those setpoints. Similarly, this layer contains blocks that take raw sensor data, convert it into meaningful engineering units, and pass the sensor values to the other layers of the architecture. The driver-level code shown in Figure 3 was developed with the LabVIEW FPGA Module and executes on the embedded FPGA of the CompactRIO platform. The sonar, IR, and voltage sensors are connected to digital I/O pins on the FPGA, and the signals are processed within continuous loop structures that execute in true parallelism on the FPGA. The data output by these functions is sent to the platform layer for additional processing.

Figure 3. The Driver Layer Interfaces to Sensors and Actuators
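
In textual form, driver-layer blocks of this kind reduce to small conversion routines. The sketch below shows two hypothetical examples: one converts a raw sonar echo time into a distance in engineering units, and one maps a velocity setpoint onto a normalized PWM duty cycle. The constants and the open-loop mapping are illustrative assumptions; the article's actual driver code runs as LabVIEW FPGA loops.

    SPEED_OF_SOUND_M_S = 343.0

    def sonar_echo_to_distance_m(echo_time_s: float) -> float:
        """Convert a raw sonar echo time to distance; the pulse travels out and back."""
        return echo_time_s * SPEED_OF_SOUND_M_S / 2.0

    def velocity_to_pwm_duty(v_setpoint_m_s: float, v_max_m_s: float = 1.5) -> float:
        """Map a velocity setpoint in engineering units to a normalized PWM duty cycle.
        A real driver would close a velocity loop here rather than use an open-loop map."""
        duty = v_setpoint_m_s / v_max_m_s
        return max(-1.0, min(1.0, duty))

    print(sonar_echo_to_distance_m(0.01))   # about 1.7 m for a 10 ms round-trip echo
    print(velocity_to_pwm_duty(0.75))       # 0.5 duty cycle at half of v_max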

The driver layer can connect to actual sensors and actuators, or it can interface to simulated I/O within an environment simulator. A developer should be able to switch between simulation and actual hardware without modifying any layer other than the driver layer. The LabVIEW Robotics Module 2011, shown in Figure 4, includes a physics-based environment simulator that lets users switch between hardware and simulation without changing any code other than the hardware I/O blocks, so developers can quickly verify their algorithms in software.

Figure 4. An environment simulator should be implemented at the driver layer if simulation is required.
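
The key to this swap is that the simulated driver exposes exactly the same interface as the hardware driver. The hypothetical Python sketch below shows the pattern, using the same method names as the earlier layering sketch; the class names are illustrative, and in the LabVIEW implementation the swap happens at the hardware I/O blocks rather than in classes.

    import random

    class HardwareDriver:
        """Talks to the real FPGA I/O (details omitted in this sketch)."""
        def read_range_sensors(self):
            raise NotImplementedError("would read the FPGA registers here")
        def set_wheel_velocities(self, left, right):
            raise NotImplementedError("would write motor commands to the FPGA here")

    class SimulatedDriver:
        """Same interface, but readings come from a simulator (here, just random values)."""
        def read_range_sensors(self):
            return [random.uniform(0.2, 5.0) for _ in range(4)]
        def set_wheel_velocities(self, left, right):
            pass  # a physics-based simulator would integrate the robot's motion here

    USE_SIMULATION = True
    driver = SimulatedDriver() if USE_SIMULATION else HardwareDriver()
    print(driver.read_range_sensors())   # the upper layers never see which one is in use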


2. Platform Layer

The platform layer contains code that corresponds to the physical hardware configuration of the robot. This layer frequently translates between the driver layer and the higher level algorithm layer, converting low-level information into a more complete picture for the upper levels of the software and vice versa. In Figure 5, raw IR sensor data is read from the FPGA using a LabVIEW FPGA Read/Write node and processed on the CompactRIO real-time controller. LabVIEW functions convert the raw sensor data into more meaningful data, in this case distance, and determine whether the reading falls outside the sensor's valid range of 4 cm to 31 cm.

Figure 5. The platform layer translates between the driver layer and algorithm layer.
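
A textual sketch of this kind of platform-layer block might look like the following, which converts a raw IR sensor voltage into a distance and flags readings outside the valid range. The inverse-proportional conversion and its constant are illustrative assumptions based on typical analog IR rangers, not the calibration used in the article's example.

    def ir_raw_to_distance_cm(raw_voltage_v: float, k: float = 27.0) -> float:
        """Convert a raw IR sensor voltage to distance. Many analog IR rangers are
        roughly inverse-proportional to distance; k is an illustrative constant that
        would come from the sensor datasheet or a calibration step."""
        if raw_voltage_v <= 0.0:
            return float("inf")      # no valid reading
        return k / raw_voltage_v

    def in_valid_range(distance_cm: float, lo_cm: float = 4.0, hi_cm: float = 31.0) -> bool:
        """Flag readings outside the sensor's usable range so the algorithm layer can ignore them."""
        return lo_cm <= distance_cm <= hi_cm

    d = ir_raw_to_distance_cm(1.2)       # 1.2 V is an illustrative raw value
    print(d, in_valid_range(d))          # 22.5 cm, True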


3. Algorithm Layer

Components at this level represent the high-level control algorithms for the robotic system. As Figure 6 shows, blocks in the algorithm layer take system information such as position, velocity, or processed video images and make control decisions based on all of that feedback, representing the tasks the robot is designed to complete. This layer might include components that map the robot's environment and perform path planning based on the obstacles around the robot. The code in Figure 6 shows an example of obstacle avoidance using a vector field histogram (VFH). In this example, the VFH block receives distance data that the platform layer derived from a distance sensor. The output of the VFH block is a path direction, which is sent down to the platform layer and fed into the steering algorithm; the steering algorithm in turn generates the low-level commands that the driver layer sends directly to the motors.

Figure 6. The algorithm layer makes control decisions based on feedback.
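
The sketch below gives a greatly simplified, VFH-flavored illustration of this decision: obstacle distances are binned into bearing sectors, sectors with enough clearance become candidates, and the candidate closest to the goal direction is chosen. It is not the full vector field histogram algorithm and its thresholds are arbitrary; it is only meant to show the kind of input and output the algorithm layer exchanges with the platform layer.

    import math

    def vfh_pick_direction(sector_distances_m, goal_angle_rad, clear_threshold_m=1.0):
        """Pick a travel bearing. Each entry in sector_distances_m is the nearest obstacle
        in an evenly spaced bearing sector spanning [-pi, pi). Sectors with enough clearance
        are candidates; the one closest to the goal direction wins (angle wrap-around is
        ignored for brevity). Returns None if no sector is clear."""
        n = len(sector_distances_m)
        sector_width = 2.0 * math.pi / n
        candidates = [i for i, d in enumerate(sector_distances_m) if d > clear_threshold_m]
        if not candidates:
            return None  # no clear path; the platform layer should stop the robot
        def bearing(i):
            return -math.pi + (i + 0.5) * sector_width
        best = min(candidates, key=lambda i: abs(bearing(i) - goal_angle_rad))
        return bearing(best)

    # Example: obstacles straight ahead, goal straight ahead -> steer into a clear sector.
    print(vfh_pick_direction([2.0, 0.4, 0.3, 2.5, 3.0, 2.0], goal_angle_rad=0.0))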

Another example at the algorithm layer is a robot tasked with searching its environment for a red spherical object that it must pick up using a manipulator. The robot has a defined way to explore the environment while avoiding obstacles: a search algorithm combined with an obstacle avoidance algorithm. While the robot searches, a block in the platform layer processes images and reports whether the object has been found. Once the sphere has been detected, an algorithm generates a motion path for the endpoint of the arm to grasp and pick up the sphere.

Each of the tasks in this example represents a high-level goal that is independent of the platform and the physical hardware. If the robot has multiple high-level goals, this layer also needs to include some arbitration to rank them.
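
A minimal sketch of such an arbitration step, with hypothetical goal names and priorities, might look like this:

    def arbitrate(goals):
        """Each goal is (name, priority, is_active_fn); the highest-priority goal whose
        activation condition currently holds wins this cycle."""
        active = [g for g in goals if g[2]()]
        if not active:
            return None
        return max(active, key=lambda g: g[1])[0]

    # Illustrative use: obstacle avoidance outranks searching when an obstacle is close.
    obstacle_close = True
    goals = [
        ("avoid_obstacle", 10, lambda: obstacle_close),
        ("search_for_sphere", 5, lambda: True),
        ("return_home", 1, lambda: False),
    ]
    print(arbitrate(goals))   # -> "avoid_obstacle"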


4. User Interface Layer

Not always required in fully autonomous applications, the user interface layer lets a human operator interact with the robot or displays relevant information on a host PC. Figure 7 shows a GUI that displays live image data from the onboard camera along with the X and Y coordinates of nearby obstacles on a map. The servo angle control lets the user rotate the onboard servo motor to which the camera is attached. This layer can also read input from a mouse or joystick, or drive a simple text display. Some components of this layer, such as a GUI, can run at very low priority; however, something like an emergency stop button must be tied into the code in a deterministic manner.

Figure 7. The user interface layer allows a user to interact with a robot or display information.
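
The sketch below illustrates why the emergency stop is treated differently from the rest of the GUI: the low-priority user interface code only sets a flag, while the deterministic control code checks that flag inline on every cycle. The names are hypothetical, and in the architecture described here the check would live on the real-time target rather than on the host PC.

    import threading

    estop = threading.Event()   # the UI layer sets this; the deterministic code only reads it

    def control_step(speed_setpoint: float) -> float:
        """One cycle of the deterministic control code: the e-stop check is inline,
        so stopping never waits on the (low-priority) GUI."""
        if estop.is_set():
            return 0.0            # force zero velocity
        return speed_setpoint

    def on_estop_button_pressed():
        """UI-side handler (low priority): the button callback just sets the flag."""
        estop.set()

    on_estop_button_pressed()
    print(control_step(0.5))      # -> 0.0 once the e-stop has been pressed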

Depending on the target hardware, the software layers can be distributed across multiple targets, although in many cases all of the layers run on one computing platform. For applications without hard timing requirements, the software can target a single PC running Windows or Linux; for systems with tighter timing constraints, it should target a processing node with a real-time OS.

Due to their small size, low power requirements, and hardware architecture, CompactRIO and NI Single-Board RIO make excellent computing platforms for mobile applications. The driver, platform, and algorithm layers can be distributed across the real-time processor and the FPGA, and, if required, the user interface layer can run on a host PC, as shown in Figure 8. High-speed components such as motor drivers or sensor filters can run deterministically in the FPGA fabric without tying up clock cycles on the processor. Mid-level control code from the platform and algorithm layers can run deterministically in prioritized loops on the real-time processor, and the built-in Ethernet hardware can stream information to a host PC that generates the user interface layer.

Figure 8. Mobile Robotics Reference Architecture Mapped to a CompactRIO or NI Single-Board RIO Embedded System
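
As a language-agnostic illustration of the link between the embedded target and the host PC, the sketch below defines a routine that pushes a small telemetry packet over a plain TCP socket. LabVIEW provides its own networking mechanisms for this, so the socket code, packet contents, host address, and port number are purely illustrative assumptions (and the call assumes a listener is running on the host); the point is only that the user interface layer consumes data streamed from the layers running on the embedded target.

    import json
    import socket

    def stream_telemetry(host: str = "127.0.0.1", port: int = 5005) -> None:
        """Push one status packet from the embedded target to the host PC, where the
        user interface layer would render it. Assumes a TCP listener on host:port."""
        with socket.create_connection((host, port)) as conn:
            packet = {"pose": [1.2, 3.4, 0.1], "obstacles": [[2.0, 0.5]]}
            conn.sendall((json.dumps(packet) + "\n").encode("utf-8"))

    # stream_telemetry() would be called periodically from a low-priority loop on the
    # real-time target, leaving the deterministic loops and FPGA code untouched.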

A brief look at the literature on software architecture for mobile robotics shows that there are many ways to approach the topic. This article presents one generalized answer to the question of how to structure a mobile robot's software; any design will require some forethought and planning to fit into an architecture. In return, a well-defined architecture helps developers work on a project in parallel by partitioning the software into layers with well-defined interfaces. Furthermore, partitioning the code into functional blocks with well-defined inputs and outputs fosters reuse of those components in future projects.

-Meghan Kerry

Meghan Kerry is the NI marketing manager for Robotics. She holds a bachelor's degree in mechanical engineering from the University of Tennessee. 

Learn More About NI Technology for Designing Autonomous Systems

Explore Robotics Architectures and Code Created by the Community
