A control law is a set of rules used to determine the commands sent to a system based on the desired state of that system. In robotics, control laws dictate how a robot moves within its environment by sending commands to its actuators. The goal is usually to follow a predefined trajectory, given as the robot's position or velocity profile as a function of time. A control law can be implemented as either open-loop control or closed-loop (feedback) control.
1. Open-Loop Control
An open-loop controller sends commands to the actuators without using data collected from the robot's sensors. Instead, the desired trajectory is divided into simple segments of straight lines and arcs that the robot follows to get from the initial to the final position. This can be represented in the following control block diagram:
Figure 1. Open-loop control diagram
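Because an open-loop controller executes its segments blind, the idea fits in a few lines of code. The Python sketch below is an illustrative stand-in for a block diagram, not part of the original article; the unicycle kinematic model and the particular segment list are assumptions chosen for the example.

```python
import math

def simulate_open_loop(commands, dt=0.01):
    """Integrate unicycle kinematics for a pre-planned command list.

    Each command is (v, omega, duration): linear speed, angular speed,
    and how long to apply it. No sensor feedback is ever consulted.
    """
    x, y, theta = 0.0, 0.0, 0.0
    for v, omega, duration in commands:
        for _ in range(int(duration / dt)):
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += omega * dt
    return x, y, theta

# Pre-planned path: 1 m straight, turn in place 90 degrees, 1 m straight.
pose = simulate_open_loop([
    (1.0, 0.0, 1.0),          # straight segment
    (0.0, math.pi / 2, 1.0),  # arc segment (turn in place)
    (1.0, 0.0, 1.0),          # straight segment
])
```

If the real robot's wheels slip or its motors differ from the model, nothing in this loop notices, which is exactly the weakness described below.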
One major disadvantage of this method of control is that it is difficult to model the robot's behavior exactly enough for it to follow the desired path. Another is that the robot cannot adapt or change its trajectory in response to changes in its environment.
2. Closed-Loop Control
A closed-loop (feedback) controller uses information gathered from the robot's sensors to determine the commands sent to the actuators. It compares the actual state of the robot with the desired state and adjusts the control commands accordingly, as illustrated by the control block diagram below. This is a more robust method of control for mobile robots because it allows the robot to adapt to changes in its environment.
Figure 2. Closed-loop control diagram
The following block diagram is an example of a control algorithm written in LabVIEW that represents the robot kinematics (or plant) with a transfer function. The controller uses simple proportional control, which means the command sent to the actuator is proportional to the error between the desired state and the measured actual state of the robot.
Figure 3. Step Response in LabVIEW using the MathScript Node
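As a rough textual analogue of the step response above, the Python sketch below closes a proportional loop around an assumed first-order plant. The plant time constant, gains, and Euler integration are illustrative choices, not taken from the LabVIEW example; the sketch mainly shows the residual steady-state error that pure proportional control leaves.

```python
def step_response_p(kp, setpoint=1.0, tau=0.5, dt=0.01, t_end=5.0):
    """Closed-loop step response of a first-order plant dy/dt = (u - y)/tau
    under proportional control u = kp * (setpoint - y)."""
    y = 0.0
    history = []
    for _ in range(int(t_end / dt)):
        error = setpoint - y
        u = kp * error               # command proportional to the error
        y += (u - y) / tau * dt      # Euler step of the plant dynamics
        history.append(y)
    return history

low = step_response_p(kp=1.0)    # settles near 0.5: large offset
high = step_response_p(kp=10.0)  # settles near 0.91: smaller offset
```

For this plant the loop settles at kp/(1 + kp) of the setpoint, so raising the gain shrinks the offset but never removes it; that motivates the integral term in the next section.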
3. PID Control
Proportional-Integral-Derivative (PID) control is the most common form of closed-loop control. It creates control commands by calculating proportional, integral and derivative responses to the error, and the final control command is the sum of these three components, each of which affects the system in a different way. The proportional term reduces the rise time, the time it takes the robot to reach the desired position and orientation. The integral term reduces the steady-state error of the robot, and the derivative term increases the stability of the robot's response. The control block diagram of a basic PID control algorithm is shown in the figure below. In this figure the dotted box is the PID controller of the system, and the PID.vi in LabVIEW can be used in this way to control a robot.
Figure 4. Control block diagram of a basic PID control algorithm
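A minimal discrete PID loop can be sketched in Python as follows. This is an illustrative sketch, not the PID.vi implementation: the gains, the first-order plant, and the sample time are all assumptions chosen so the example settles cleanly.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains, no windup guard)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                   # accumulate error
        derivative = (error - self.prev_error) / self.dt   # rate of change
        self.prev_error = error
        return (self.kp * error          # proportional: speeds the rise
                + self.ki * self.integral  # integral: removes the offset
                + self.kd * derivative)    # derivative: damps the response

# Drive the same assumed first-order plant dy/dt = (u - y)/tau to setpoint 1.
dt, tau = 0.01, 0.5
pid = PID(kp=2.0, ki=4.0, kd=0.05, dt=dt)
y = 0.0
for _ in range(int(5.0 / dt)):
    u = pid.update(1.0, y)
    y += (u - y) / tau * dt
```

Unlike the proportional-only loop, the integral term keeps pushing while any error remains, so y converges to the setpoint rather than settling short of it.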
4. Adaptive Control
In a basic PID controller the control gains remain constant; however, there are situations where the appropriate gain values depend on the state of the robot. In these cases an adaptive controller can be used, which incorporates a model of the system when determining the control commands. The output of the robot is compared with the response predicted by the model, and the model of the robot's behavior is updated during operation. Figure 5 shows a control block diagram of adaptive control, and Figure 6 shows how an adaptive control system is implemented in LabVIEW. These figures show how both the control commands and the robot's response are used to adjust the model of the robot kinematics, and those adjustments in turn modify the control commands sent to the robot.
Figure 5. Control block diagram of an adaptive control system
Figure 6. An adaptive control system implemented in LabVIEW
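The compare-predict-update cycle can be sketched in Python for the simplest possible case: a plant with one unknown gain. The plant model, the gradient-style update rule, and the adaptation rate below are assumptions for illustration, not the scheme in the LabVIEW diagram.

```python
def adaptive_track(b_true=2.0, b_hat0=0.5, dt=0.01, steps=1000, gamma=5.0):
    """Adaptive control sketch for the plant y' = b*u with unknown gain b.

    The controller inverts its current estimate b_hat to request the
    closed-loop dynamics y' = 2*(setpoint - y), then refines b_hat from
    the gap between the model's prediction and the plant's actual output.
    """
    y, b_hat, setpoint = 0.0, b_hat0, 1.0
    for _ in range(steps):
        u = 2.0 * (setpoint - y) / b_hat        # command from the model
        y_pred = y + b_hat * u * dt             # what the model predicts
        y = y + b_true * u * dt                 # what the plant actually does
        b_hat += gamma * (y - y_pred) * u       # adjust the model estimate
    return y, b_hat

y, b_hat = adaptive_track()  # y reaches the setpoint; b_hat moves toward b_true
```

Note that once the robot reaches the setpoint the command goes to zero and adaptation stalls, so the estimate improves only while the system is being excited; this is a well-known limitation of simple gradient adaptation.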
5. Fuzzy Logic Control
Most types of control use numeric terms to define the control commands, but fuzzy logic uses descriptive rules (such as: if the robot is veering left, turn the wheels right) to implement them. The idea is to mimic the way a human controls something without knowing the exact model of the system. Take, as a simple example, the way someone drives a car. If the car is moving straight ahead, the steering wheel is centered. If the car begins to veer to the left, the driver turns the wheel to the right to correct the car's course; the more the car veers to the left, the more the driver turns the wheel. Similarly, if the car veers to the right, the driver turns the wheel to the left. This can be represented by the graph below, where the car's behavior is split into three states (left, straight or right), each mapped to a truth value between 0 and 1. The car can be in a state where it veers only very slightly to one side, which is acceptable and needs no correction; this is the region where the car is in a mixture of two states. Depending on the state the car is in, the amount the driver turns the wheel changes. The same concept extends to a mobile robot, and LabVIEW provides a set of VIs to help create a fuzzy logic controller.
Figure 7. Fuzzy logic uses descriptive commands to define control commands
When developing robotics applications, fuzzy logic can be used for expert decision making, such as pattern recognition or fault diagnosis. The LabVIEW PID Control Toolkit, which is part of the LabVIEW Real-Time Module, adds sophisticated control algorithms to control applications. By combining the PID and fuzzy logic control functions with the measurement analysis functions in LabVIEW, users can quickly develop programs for automated control. In addition, by integrating these control tools with embedded platforms such as NI CompactRIO, users can create powerful, robust and deterministic control systems for robotics applications.
To learn more about robotics, refer to the Robotics Fundamentals Series homepage at ni.com/zone.