The following sections provide a basic overview of the OPC standard’s purpose, motivation, and goals, as well as a brief explanation of OLE, COM, and ActiveX.
OPC Introduction -- OLE for Process Control
OPC (OLE for Process Control) is a standard interface between numerous data sources, including devices on a factory floor, laboratory equipment, test system fixtures, and databases in a control room. To alleviate duplication of effort in developing device drivers, eliminate inconsistencies between drivers, provide support for hardware feature changes, and avoid access conflicts in industrial control systems, the OPC Foundation defined a set of standard interfaces that allow any client to access any OPC-compatible device. Most suppliers of industrial data acquisition and control devices work with the OPC Foundation standard.
OPC allows device-side server and application software -- two separate processes -- to communicate with each other through a standard Microsoft COM interface.
Note: The OPC specification only specifies COM interfaces, not the implementation of those interfaces.
OPC was designed to be a layer of abstraction between the specific device and the program that needs to get information from or control that device. The OPC standard specifies the behavior that the interfaces are expected to provide to the clients that use them, and the client receives the data from the interface using standard function calls and methods. Consequently, any computer analysis or data acquisition program can communicate with any industrial device so long as that program contains an OPC client protocol and the device driver has an OPC interface associated with it.
The underlying layer of the OPC specification is based on Microsoft's COM/DCOM technology, which is also known as Object Linking and Embedding (OLE) or more commonly as ActiveX. COM (Component Object Model) and DCOM (Distributed COM) act as interfaces between the client and other system components. In modern operating systems, processes are shielded from each other, and a client that needs to talk to a component in another process cannot do so directly. COM provides an interface communication layer that allows local and remote procedure calls to be made between processes. DCOM or Distributed COM is the natural extension of COM to support communication among objects on different computers -- on a LAN, WAN, or the Internet. COM is referred to as OLE when the application is used to embed documents of one type inside a document of another type. One example of a COM implementation is the ability to create and edit Microsoft Excel spreadsheets within a Microsoft Word document. COM is commonly known as ActiveX when referring to its Internet capabilities. An example of ActiveX is the ability to embed multimedia players within pages on the Web. OPC uses the term OLE because that was the most commonly used term to describe COM when the OPC Specification was defined.
The communication between processes in COM supports three basic types of interaction:
- Properties--individual settings for a control
- Methods--functions called on a control to perform a specific action
- Events--messages that a control creates to alert the outside world of what is happening within the process
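The three interaction types can be sketched in Python terms (a hypothetical illustration of the pattern, not the actual COM API, which defines these through binary interfaces):

```python
class MixerControl:
    """Illustrative control exposing the three COM-style interaction types.
    Hypothetical sketch; real COM components define these via interfaces."""

    def __init__(self):
        self._setpoint = 0.0   # backing field for a property
        self._listeners = []   # event subscribers

    # Property: an individual setting for the control
    @property
    def setpoint(self):
        return self._setpoint

    @setpoint.setter
    def setpoint(self, value):
        self._setpoint = value

    # Method: a function called on the control to perform a specific action
    def start(self):
        self._fire("started")

    # Event: a message the control creates to alert the outside world
    def subscribe(self, callback):
        self._listeners.append(callback)

    def _fire(self, name):
        for cb in self._listeners:
            cb(name)

mixer = MixerControl()
events = []
mixer.subscribe(events.append)  # register for events
mixer.setpoint = 42.0           # set a property
mixer.start()                   # call a method; fires the "started" event
```

After this runs, `events` holds the single `"started"` notification, showing how an outside observer learns what is happening inside the control without polling it.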
The OPC Specification defines a standard COM interface for use in industrial acquisition and control settings. The specification includes a protocol for defining objects, setting properties on those objects, and standardizing function calls and events. In doing so, OPC accommodates a wide variety of data sources. Device I/O includes data acquisition devices, valves, fieldbuses, and Programmable Logic Controllers (PLCs). The specification also includes a protocol for working with Distributed Control Systems (DCSs) and application databases, as well as online data access, alarm and event handling, and historical data access for all of these data sources.
The data access server has three divisions:
- Server--contains all of the group objects
- Group--maintains information about itself and contains and organizes the OPC items
- Item--contains a unique identifier held within the group that acts as a reference for the individual data source, as well as value, quality, and timestamp information. The value is the data from the source, the quality status gives information about the device, and the timestamp is the time that the data was retrieved.
An OPC application accesses all items through the OPC group rather than through the item itself. The group also contains a specified update rate for the group, which tells the server at what rate to make data changes available to the OPC client. A deadband specified for each group tells the server to reject values if they have changed by less than the specified deadband percentage.
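A minimal data model for this server/group/item hierarchy might be sketched as follows. The class and attribute names are hypothetical, chosen for illustration; the specification defines COM interfaces, not a concrete implementation:

```python
import time
from dataclasses import dataclass, field

@dataclass
class OPCItem:
    """One data-source reference: unique identifier, value, quality, timestamp."""
    item_id: str                 # unique identifier held within the group
    value: float = 0.0           # the data from the source
    quality: str = "good"        # status information about the device
    timestamp: float = field(default_factory=time.time)  # time data was retrieved

@dataclass
class OPCGroup:
    """Maintains information about itself and contains and organizes items."""
    name: str
    update_rate_ms: int = 1000   # rate at which the server makes changes available
    deadband_pct: float = 0.0    # reject changes smaller than this percentage
    items: dict = field(default_factory=dict)

    def add_item(self, item_id):
        self.items[item_id] = OPCItem(item_id)
        return self.items[item_id]

@dataclass
class OPCServer:
    """Contains all of the group objects."""
    groups: dict = field(default_factory=dict)

    def add_group(self, name, **kwargs):
        self.groups[name] = OPCGroup(name, **kwargs)
        return self.groups[name]

server = OPCServer()
grp = server.add_group("mixers", update_rate_ms=500, deadband_pct=2.0)
temp = grp.add_item("mixer1.temperature")
```

Note that the client reaches `temp` only by navigating through the group (`server.groups["mixers"].items[...]`), mirroring the rule that items are accessed through the OPC group rather than directly.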
Figure: An example implementation of the OPC specification
The OPC server also provides alarm and event handling to clients. Within a server, an alarm is an abnormal condition of special significance to the client -- a condition associated with the state of the server or the state of a group or item within the server. For example, if a data source value that represents the real-world temperature of a mixer drops below a certain temperature, then the application can be sent an alarm so that it will be able to properly handle the low temperature. Events are detectable occurrences that are of importance to the server and client, such as system errors, configuration changes, and operator actions.
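The mixer-temperature example can be sketched as a simple threshold check. The limit value and helper function below are hypothetical, illustrating the idea of an alarm condition rather than the OPC Alarms and Events interface itself:

```python
LOW_TEMP_LIMIT = 40.0  # assumed threshold for the mixer example

def check_alarms(readings, low_limit=LOW_TEMP_LIMIT):
    """Return an alarm record for each reading in an abnormal condition.

    readings: mapping of item identifier -> current value.
    An alarm here is the abnormal condition itself; delivering it to the
    client would use the server's event mechanism.
    """
    alarms = []
    for item_id, value in readings.items():
        if value < low_limit:
            alarms.append((item_id, "LOW_TEMP", value))
    return alarms

alarms = check_alarms({
    "mixer1.temperature": 35.2,  # below the limit: abnormal condition
    "mixer2.temperature": 55.0,  # normal
})
```

Only `mixer1` is below the limit, so a single alarm is produced, which the application can then handle, for example by heating the mixer or notifying an operator.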
OPC also incorporates historical data access standards, which are a way to access the data stored by historical engines, including raw data storage servers and complex data storage and analysis servers. This feature of OPC allows interoperability of proprietary database systems.
OPC Ideal for High-Channel Count Applications
The OPC Foundation’s stated design goals and motivation for industry standardization of control systems have enabled the implementation of high-channel count systems that are efficient and user-friendly.
Client software developers and users of these applications have greater flexibility in implementing a solution that is tailored to their needs, because data is organized into groups and the naming, or tagging, of data points is determined by the client software. Grouping is beneficial in dealing with large sets of data sources because it provides greater organization of the data as well as easy reference to similar sets of data. In an OPC application, a tag gives a unique identifier to an I/O point. The OPC Specification leaves the responsibility for naming tags up to the client software, which can either name the tags programmatically or pass that task on to the user. In large systems, meaningful names for data sources improve usability by allowing the operator to choose easy-to-remember identifiers that specify the data source by function, hardware name, or another name at the operator’s discretion. This flexibility is a significant factor in the ability of client software to provide solutions that are tailored for high-channel count applications.
Client software also specifies the rate at which the server supplies new data to the client. The server is responsible for data publication, so the client becomes event-driven and can handle large sets of data much more efficiently, because it does not have to poll the data sources to get new data. Instead, the client software becomes a reactive object that waits for new data to arrive. The program does not need to perform time-consuming data polling, which frees up more time for analysis and datalogging.
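The event-driven pattern described above can be sketched as a publish/subscribe exchange. Class and method names here are hypothetical; a real OPC client receives data-change callbacks through COM connection points rather than direct Python calls:

```python
class SubscribingClient:
    """Reactive client: the server calls on_data_change, so no polling loop.
    Hypothetical sketch of the event-driven pattern, not an OPC client API."""

    def __init__(self):
        self.received = []

    def on_data_change(self, item_id, value, timestamp):
        # Invoked by the server at the group update rate; the client's
        # remaining time is free for analysis and data logging.
        self.received.append((item_id, value, timestamp))

class PublishingServer:
    """Server side: responsible for publishing new data to subscribers."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, client):
        self._subscribers.append(client)

    def publish(self, item_id, value, timestamp):
        for client in self._subscribers:
            client.on_data_change(item_id, value, timestamp)

opc_server = PublishingServer()
client = SubscribingClient()
opc_server.subscribe(client)
opc_server.publish("mixer1.temperature", 72.5, 0.0)  # server pushes data
```

The client never asks for data; it simply reacts when `on_data_change` fires, which is what makes the approach scale to large channel counts.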
The client also specifies deadbands on the server, which allows the client to determine what data is important and disregard data that is not significant. Deadband percentages reject values that do not change more than a certain percentage from the previous value recorded. By establishing moderate deadband values, a much greater number of channels can be monitored, because the client only receives information about channels that it deems essential, and it does not get flooded with superfluous information.
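The deadband test can be sketched as a simple percentage comparison. This follows the text's description of a percent change relative to the previous recorded value, which is a simplification; the full OPC Data Access rule computes the deadband against the item's engineering-unit range:

```python
def passes_deadband(previous, current, deadband_pct):
    """True if the change from the previous value exceeds the deadband
    percentage, i.e. the new value is significant enough to report.
    Simplified sketch: percent change is taken relative to the
    previous recorded value."""
    if previous == 0:
        return current != 0
    change_pct = abs(current - previous) / abs(previous) * 100.0
    return change_pct > deadband_pct

# With a 2% deadband, a 1% change is suppressed and a 5% change passes:
small_change = passes_deadband(100.0, 101.0, 2.0)  # 1% change -> suppressed
large_change = passes_deadband(100.0, 105.0, 2.0)  # 5% change -> reported
```

Only values that clear the deadband reach the client, so moderate deadband settings let one client watch far more channels without being flooded by insignificant fluctuations.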
The OPC specification enables high-performance throughput, a necessity in high-channel count applications. The Performance and Throughput of OPC white paper on the OPC Foundation website discusses the results of testing a simulated OPC client. The paper shows that OPC technology allows for throughput higher than most client software packages can currently obtain. Throughput is highly dependent on the hardware configuration and the amount of data that can be obtained from the underlying data source. Therefore, OPC is not the bottleneck in throughput in most cases, making the technology able to handle the performance required of high-channel count supervisory systems.
Because OPC is now an industry standard, client software can connect to almost every vendor device available. Client software now is compatible with many types of devices, so the software can be designed with large numbers and varieties of devices in mind. These are a few of the many characteristics of OPC that give development software a huge advantage when OPC connectivity is leveraged to implement high-channel count application software.