1. Process Model Theory
Testing a UUT requires more than just executing a set of tests. Usually, the test system must perform a series of operations before and after it executes the sequence that performs the tests. Common operations include identifying the UUT, notifying the operator of pass/fail status, logging results, and generating a test report. These operations define the testing process, and this set of operations and their flow of execution are called a process model. A process model could be implemented directly in your execution engine or embedded in the actual tests you perform, but either approach limits your system's ability to integrate new functionality and reduces its maintainability. To preserve the system's modularity, the process model should be implemented independently of both the test code and the execution engine.
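The separation described above can be sketched in a few lines. This is an illustrative example only (the function and parameter names are assumptions, not the NI TestStand API): the process model owns the pre- and post-test operations, while the UUT-specific sequence is passed in, so either side can change without touching the other.

```python
# Illustrative sketch (not the NI TestStand API): a process model that wraps
# any UUT-specific test sequence with the system-level operations around it.

def process_model(identify_uut, test_sequence, log_results):
    """Run the standard testing process around a UUT-specific sequence."""
    serial = identify_uut()                   # pre-test: identify the UUT
    results = test_sequence(serial)           # UUT-specific tests
    status = "Passed" if all(r["passed"] for r in results) else "Failed"
    log_results(serial, status, results)      # post-test: log results
    return status

# The same model runs any product's sequence without modification:
log = []
status = process_model(
    identify_uut=lambda: "SN-001",
    test_sequence=lambda sn: [{"name": "power", "passed": True}],
    log_results=lambda sn, st, res: log.append((sn, st)),
)
```

Because the model only depends on the three injected operations, swapping in a different test sequence or logging mechanism requires no change to the process flow itself.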
Types of Process Models
Three of the most commonly used process models are the Sequential model, the Batch model, and the Parallel model. You can use the Sequential model to run a test sequence on one UUT at a time. This process model is most effective when the test system has only one test fixture and, therefore, only one unit can be tested at a time. The Batch model is designed for a test system whose fixture holds more than one unit that must be tested together. For example, if units need to be tested inside a temperature chamber, the test fixture will contain multiple units that must all start and stop their tests at the same time. The third process model is the Parallel model, which is best suited to systems that contain more than one independent test fixture. This model lets each test fixture run through as many units as it can without regard to the testing rate of the other test fixtures in the same system.
Model Callbacks
Callbacks are test sequences that are typically executed inside a process model but allow client sequences to override the default behavior, thereby increasing the modularity of your test system. When a callback is overridden, the code in the client sequence is executed rather than the code inside the process model's sequence. This allows unique behavior for a particular test sequence without impacting other parts of the process model. For example, for one particular type of UUT, you may want to use a barcode scanner rather than manual serial number entry. Another example of model callbacks is the Process Setup and Process Cleanup sequences. When testing a large batch of UUTs, it may make sense to initialize and clean up a set of instruments a single time rather than initializing and cleaning up before and after every UUT.
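The override mechanism can be sketched as a simple lookup: the model supplies defaults, and a client sequence replaces only the callbacks it cares about. The names here are hypothetical, chosen to mirror the barcode scanner example above.

```python
# Hypothetical sketch of callback overriding (names are illustrative, not
# TestStand's). The process model defines default callbacks; a client
# sequence substitutes its own implementation for just one of them.

DEFAULT_CALLBACKS = {
    # Default behavior: prompt the operator for a serial number.
    "identify_uut": lambda: input("Enter serial number: "),
}

def get_callback(name, client_overrides):
    """Return the client's override if present, else the model's default."""
    return client_overrides.get(name, DEFAULT_CALLBACKS[name])

# One UUT type swaps in a barcode scanner without touching the process
# model or affecting any other client sequence:
client = {"identify_uut": lambda: "BARCODE-12345"}
serial = get_callback("identify_uut", client)()
```

A client that provides no overrides simply falls through to the model's default behavior.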
Execution Entry Points
Execution entry points in a process model allow for different execution modes that lead to different testing procedures. The default process models in NI TestStand have two entry points: Test UUTs and Single Pass. The Test UUTs execution entry point executes in a loop that repeatedly identifies and tests UUTs. The Single Pass execution entry point tests a single UUT without identifying it. Execution entry points can be configured so that only certain users may run them. For example, Operators may be allowed to run only the Test UUTs execution entry point, while Technicians are allowed to run both Test UUTs and Single Pass. NI TestStand also lets users customize the process model to create their own entry points for other purposes, such as debugging. This modular architecture allows for great flexibility and maintainability by reducing the amount of recoding required when changes are made to the process model.
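The privilege gating described above can be sketched as a table of entry points and the roles permitted to run each one. This is an illustrative model, not TestStand's actual user management API.

```python
# Illustrative sketch: execution entry points gated by user role.
# The role and entry point names follow the example in the text.

ENTRY_POINTS = {
    "Test UUTs":   {"allowed_roles": {"Operator", "Technician"}},
    "Single Pass": {"allowed_roles": {"Technician"}},
}

def can_run(user_role, entry_point):
    """Check whether a user role is permitted to run an entry point."""
    return user_role in ENTRY_POINTS[entry_point]["allowed_roles"]
```

Adding a custom entry point, such as a debug mode restricted to engineers, would be a matter of adding one more table entry rather than recoding the execution logic.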
Configuration Entry Points
Configuration entry points allow an operator to set various configuration options for the process model, such as Configure Report Options and Configure Database Options. Like the other parts of the process model, configuration entry points are fully customizable. Existing entry points can be customized to allow additional options, and new configuration entry points can be added to expose different options. Configuration entry points allow a station to be configured in several different ways without changing code. For example, one test station may be used to diagnose problems and may not need database logging. By using a configuration entry point, this station can be configured to disable database logging without recoding the process model or operator interface.
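The idea of configuring behavior without recoding can be sketched as a configuration entry point that writes a per-station settings file, which the process model then reads at run time. The file name and option key are assumptions for illustration.

```python
# Illustrative sketch: a configuration entry point persists station settings
# that the process model consults at run time, so behavior changes without
# modifying any code.
import json
import os
import tempfile

def configure_database_options(config_path, enabled):
    """Configuration entry point: toggle database logging for this station."""
    with open(config_path, "w") as f:
        json.dump({"database_logging": enabled}, f)

def should_log_to_database(config_path):
    """Called by the process model before logging results."""
    with open(config_path) as f:
        return json.load(f)["database_logging"]

# A diagnostic station disables database logging via configuration alone:
path = os.path.join(tempfile.gettempdir(), "station_config.json")
configure_database_options(path, enabled=False)
```

The same deployed process model behaves differently on each station purely because of the stored configuration.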
For more information, view the Process Model Theory whitepaper.
2. Software Deployment
One of the tasks a test system developer must accomplish when creating a modular test system is deploying the test management software to the production floor. Whether you choose to build an installer, use a network server to distribute your test software, or leverage a source code control package, you will want to ensure the integrity of the test stations while keeping the upgrade process as seamless as possible. Therefore, an appropriate software deployment strategy is needed. Software deployment is the process of managing and automating the packaging, testing, distribution, and installation of software files and applications to test systems across an enterprise network or production floor.
Before you can consider which method you will use to deploy your test system, you must replicate the system by gathering all the required files and creating a deployable image. Once the files that make up your test system have been determined, you can deploy your system using one of three different methods. With the file copy method, you copy the files manually to each test station. You can also leverage a source code control system to determine which version is pushed to each station. Finally, you can create an easy-to-use installer that includes not only all the necessary files but also any drivers you need installed.
A test system can rely on a wide variety of files and components. When it comes time to deploy your test software, all of these components must be identified and collected in order to replicate the development system on the target computer. A deployment image is created that contains a directory of files to be installed on the target computer. The two methods for creating this deployment image are to do so manually or to use a utility that collects all the files in your development system automatically.
Manually gathering all files needed for distribution is a tedious job, but it gives the developer full control over which files are deployed to the target computer. One of the most common problems developers experience when deploying test software is missing or incorrect versions of file dependencies. A file dependency is a secondary file that a file requires to compile, load, or run correctly. Dependencies normally come in the form of DLLs, .NET assemblies, or subVIs. It is extremely important that you identify exactly which dependencies your test software requires, as well as their versions.
A second method of replicating your system is to use a utility that tracks all the files in your application and creates a deployable image. In order to track the files in your test system, you will need to define them using a workspace or project file. Ideally, the development environment will have both workspace functionality and a utility to create a deployable image. NI TestStand includes a utility called the NI TestStand Deployment Utility which, leveraging the NI TestStand Workspace, can generate a deployable image of your test system automatically. Due to NI TestStand's tight integration with LabVIEW, it can also determine any dependencies between VIs and include them in the image.
Now that the deployment image has been created, the next step is to deploy this image onto your test machines using one of three possible methods. You can deploy your test system using the File Copy method, which consists of copying all the files directly from one computer to another. The Source Code Control (SCC) method consists of using SCC software to push different versions of the software to production machines. Finally, with the Installer method, an easy-to-use installer can be generated with all the necessary files and drivers and used to install the system on the production floor.
The File Copy method consists of copying and pasting the deployment image either directly to a test station's local drive or to a network drive. Copying the deployment image to a local drive makes the test station "self-sufficient" and thus reliant only upon itself. The downside to this option is that distributing updates to all the separate test stations is time consuming. Copying the deployment image to a network drive reduces distribution time and greatly simplifies updating the test software. Because this method is solely network based, common network problems, such as network status (up or down), the speed of accessing network components, and accessing files already in use, must be factored in. Another difficulty when deploying to a network drive is determining whether the test system fails because of a new network update or an actual test failure. When test software developers push updates to the network, notices normally do not get sent out, so it can be difficult to narrow down the cause of a test failure.
Using a source code control (SCC) system to distribute your deployment image can be very beneficial. In this case, the source code control server maintains a centralized master copy of the deployment image and allows clients (test stations) to sync up and use the test software. Even though the SCC software is network based, local copies of the deployment image are downloaded to each client machine, which guards against network failure. Using SCC software does require some background knowledge and experience with SCC. As with the file copy method, this method also requires that software and hardware drivers be installed separately.
The deployment methods mentioned earlier center on a user-controlled deployment approach in which the developer performs most of the labor. However, as software applications continue to grow in complexity, a more automated approach to deployment may be preferable. Installer technology can be integrated with the deployment image to create one easy-to-use installer package that is distributed by any convenient means. Installers provide the additional benefit of bundling supporting software, such as hardware drivers, documentation, licenses, and configuration files, with your software application into a single package. Aside from including the supporting software, installers also work directly with the Windows operating system to register user-created files and use the modify/repair/uninstall features of Windows installers.
Installers provide multiple benefits but also bring challenges, such as distributing minor updates and usability. It is difficult to deploy minor changes to a target system because a whole new installation package must be rebuilt. One must also consider the usability difficulties that come with common installer-authoring packages, many of which are not user-friendly.
The NI TestStand Deployment Utility reduces the difficulties associated with typical installers. Its graphical user interface for creating installers is simple to use and provides a flexible, customizable environment for including various components. You need no previous installer knowledge to create an NI TestStand installer, and including National Instruments drivers and third-party drivers is easily done with the deployment utility.
For more information, view the Software Deployment whitepaper.
3. Parallel Testing
Parallel testing involves testing multiple products or subcomponents simultaneously. A parallel test station typically shares a set of test equipment across multiple test sockets, but, in some cases, it may have a separate set of hardware for each unit under test (UUT). The majority of nonparallel test systems test only one product or subcomponent at a time, leaving expensive test hardware idle more than 50 percent of the test time. With parallel testing, you can therefore increase the throughput of manufacturing test systems without spending a lot of money to duplicate and fan out additional test systems. Parallel testing can follow different approaches depending on the requirements of your application. However, no matter which approach you select, a simple yet powerful method of sharing instruments and synchronizing the execution of different threads is essential if your system is to realize the benefits of parallel testing. The following sections discuss the different approaches you can take to a parallel test system and how to facilitate instrument sharing and thread synchronization.
Common Parallel Process Models
When testing the proper assembly or functionality of a UUT, there are a variety of tasks to perform in any test system. These tasks include a mix of model or family-specific tests as well as many procedures that have nothing to do with actually testing the UUT. A process model separates the system-level tasks from the UUT-specific tests to significantly reduce development efforts and increase code reuse. Some of the tasks that a process model handles are tracking the UUT identification number, initializing instruments, launching test executions, collecting test results, creating test reports, and logging test results to a database. Two of the most common process models are the parallel and batch process models.
You can use a parallel process model to test multiple independent test sockets. With this model, you can start and stop testing on any UUT at any time. For example, you might have five test sockets for performing radio board tests. Using the parallel process model, you can load a new board into an open socket while the other sockets test other boards. Each test socket can test a UUT as soon as it has finished the previous unit's test sequence, reducing the idle time of the fixture and instruments.
Alternatively, you can use a batch process model to control a set of test sockets that test multiple UUTs as a group. For example, you might have a set of circuit boards attached to a common carrier. The batch model ensures you can start and finish testing all boards at the same time. The batch model also provides batch synchronization features. For instance, you can specify that the step runs only once per batch if a particular step applies to the batch as a whole. With a batch process model, you can also specify that certain steps or groups of steps cannot run on more than one UUT at a time or that certain steps must run on all UUTs simultaneously.
NI TestStand ships with both parallel and batch process models fully implemented. The NI TestStand process models implement more than the general execution flow of your system; they also include database logging and reporting features. The database logging features enable you to automatically log your test results to a number of different databases, such as Oracle, Microsoft SQL Server, and MySQL. The reporting features can generate test sequence reports in XML, HTML, ASCII text, and ATML formats. Furthermore, both process models are open and fully customizable.
Instrument Sharing and Synchronization
In trying to increase your test system performance while lowering your cost, providing each test socket with a dedicated set of instruments is not a feasible solution. Implementing a parallel test system often does not require any additional hardware investment. With parallel testing, you can share existing instrumentation in the test system among multiple test sockets. Decreasing idle time during a UUT test cycle provides substantial performance improvements without additional hardware costs. In many cases, you can add additional inexpensive instruments to further optimize overall system performance while sharing the more expensive hardware among the test sockets.
One of the key components in sharing instruments across different test sockets is switching. A switch enables you to route signals, helping you connect multiple measurement sources to the input of an instrument at different times. Your system might contain something as simple as a general-purpose relay that controls power to a device under test (DUT), all the way to complex matrix configurations that route thousands of test points to dozens of instruments. Due to the complexity of controlling the routing of all these signals, powerful software for controlling your switch platform is necessary. NI Switch Executive is an intelligent switch management and routing application. With Switch Executive, you gain increased development productivity by interactively configuring and naming switch modules, external connections, and signal routes. The switch configurations developed in Switch Executive can be used in NI TestStand by leveraging the switching properties of test steps. By separating switching code from your test code, you improve maintainability and the reuse of your switching configuration.
Prior to the availability of off-the-shelf test management software, programming the allocation of shared instrumentation among multiple test sockets running a parallel test system required that you add a large amount of low-level synchronization code to test programs. Critical sections and mutexes often were intertwined with the actual code, making it difficult to program or reuse sections in future test systems.
By implementing parallel test systems that leverage many of the built-in features in NI TestStand, you can effortlessly control the sharing of instruments and synchronize multiple devices under test. You can use synchronization step types and configurable test properties at the individual test level to manage resource sharing between tests in a sequence. The synchronization step types used in test sequences often include lock, rendezvous, queue, notification, wait, and batch synchronization step types.
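The lock step type mentioned above has a direct analogue in ordinary threading primitives. The sketch below, which is illustrative and not TestStand's implementation, shows several parallel test sockets sharing one expensive instrument: a lock serializes access so no two measurements ever overlap.

```python
# Sketch of lock-style instrument sharing: parallel test sockets share one
# instrument (a DMM here); a lock guarantees exclusive access per measurement.
import threading

dmm_lock = threading.Lock()   # guards the single shared multimeter
in_use = 0
max_concurrent = 0
measurements = []

def measure(socket_id):
    global in_use, max_concurrent
    with dmm_lock:            # only one socket may use the DMM at a time
        in_use += 1
        max_concurrent = max(max_concurrent, in_use)
        measurements.append(socket_id)   # stand-in for the real measurement
        in_use -= 1

threads = [threading.Thread(target=measure, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point of built-in synchronization step types is that this bookkeeping lives in the sequence configuration rather than being hand-coded and intertwined with the measurement code itself.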
For more information, view the Parallel Testing whitepaper.
4. Enterprise Systems Connectivity
In addition to these standard test executive abilities, you are likely to want your test software framework to include connectivity to enterprise systems. The features, functionality, and benefits of enterprise systems and solutions keep your test systems integrated into the larger network of tools and applications used throughout your business. Whether you are integrating source code control tools or data management systems, there can be inherent challenges when integrating multiple software solutions. The connectivity you include in your framework should be based on industry-standard tools and protocols whenever possible to ease the integration of enterprise systems. The enterprise systems that are commonly integrated into a test software framework include tools for configuration management, requirements management, data management, and communicating results.
For software, configuration management includes connecting your test software to source code control (SCC) tools as well as managing software deployment and upgrade processes. Software configuration management is synonymous with source code control, or version control; that is, a key aspect of managing your software configuration is controlling which version of the software is installed on the test stations. There are several providers of source, or version, control software, and Microsoft has defined a standard application programming interface (API) for source code control.
As part of managing your software configuration, you should plan your software deployment strategy. Whether you choose to build an installer, use a network server, or manually copy your files to distribute your test software, you will want to ensure the integrity of the test stations while keeping the deployment process as seamless as possible.
There tends to be a natural concern about upgrading software on deployed test stations. Whether you are upgrading the software you have developed or a commercial off-the-shelf (COTS) tool, your choice to upgrade will likely be based on leveraging new features or fixing bugs. In both cases, you can plan and prepare for these upgrades to minimize downtime during the transition: use modular components to reduce the impact of an upgrade, maintain backwards compatibility for easy transitions, and take advantage of software maintenance options when purchasing COTS tools.
The specifications, or requirements, defined in a project contain technical and procedural requirements that guide the product through each engineering phase. Many people and processes in an organization use requirements. They may be used in planning and purchasing decisions, reporting development progress to your customer, guiding software development, and ensuring ultimate product completion or project success. Requirements management, then, hinges on tracking the relationships between requirements at different levels of detail and the relationship of requirements to implementation and test. Tracking the relationship from requirements to test, measurement, and control software is crucial for validating implementation, analyzing the full impact of changing requirements, and understanding the impact of test failures. To get this level of traceability, you need to connect your test software framework to the tools you have used to document or manage your requirements. NI Requirements Gateway is a requirements traceability solution that links your development and verification documents to formal requirements stored in documents and databases. Once the connections are made, you can respond quickly to changes in your requirements or changes in your code.
Data management for test systems involves more than logging test results. While result collection is a fundamental component, access to parametric data and data analysis are important features to consider as well. If databases are being used to store test-related information, then the technology for connecting to your database should be chosen wisely. Database connectivity for your test framework gives you the means to dynamically read and write values to and from a database. However, data management does not have to be synonymous with a database management system. Results can certainly be logged to a database, but you may also want results logged to a file. There are pros and cons to using either a database or files. Using data in a database requires some experience with SQL or a DBMS; on the other hand, the speed of database logging and reading makes near real-time reports and alerts possible. File I/O has its own set of challenges, such as performance, format, and security, but an advantage of logging to a file is the inherent chronological nature of log files and their persistence.
Quality database integration does, of course, include logging results, but it should also give you a means to have test parameters, e.g. limits, loaded from select data. With the ability to dynamically load parametric data, your test sequences can easily be reused for multiple product models when the testing scenario only differs by the parameter values. Depending on your needs and implementation, you might choose to store parametric data in files instead of a database. Instead of using your database integration, you simply need to deal with file I/O.
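Dynamically loading test parameters can be sketched as follows. The limits table here is an in-memory stand-in; in practice it would be the result of a database query or a parsed limits file, as the text describes. The model names and limit values are hypothetical.

```python
# Illustrative sketch: one test sequence reused across product models by
# loading its limits as data instead of hard-coding them into the test.

LIMITS = {  # stand-in for a database query or a limits file
    "model_a": {"voltage": (4.75, 5.25)},   # hypothetical 5 V rail limits
    "model_b": {"voltage": (3.15, 3.45)},   # hypothetical 3.3 V rail limits
}

def voltage_test(model, measured):
    """The same test code serves every model; only the data differs."""
    low, high = LIMITS[model]["voltage"]
    return low <= measured <= high
```

Adding a new product model then means adding a row of data, not writing or deploying new test code.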
Although data management covers the retrieval and storage of test results, it is still important to discuss how this information is communicated to users of the test system. Communicating results can be performed in different ways. The test system can communicate directly with the user as the test is executing. The user interface for your test executive is the first line of communication with the test operator. It is a very efficient method of communication because it is responsible for running the test and, therefore, has access to the test results as they are being generated.
You may also want to communicate test results after the execution of a sequence of tests using reports. Test reports can be generated in different formats such as ASCII text, HTML, and XML. Which report format you should choose depends on the features you wish to include in the report and its target audience. An ASCII report is easy to read on any platform but does not provide much formatting functionality. On the other side of the spectrum, an XML report offers very flexible formatting but might require an XSL-compliant web browser to view the report.
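The trade-off between the two ends of that spectrum is easy to see with the same results rendered both ways. This sketch uses an invented minimal report structure, not TestStand's actual report schema.

```python
# Sketch: identical results rendered as plain ASCII text and as XML.
# ASCII is readable anywhere; XML carries structure a stylesheet can format.
from xml.sax.saxutils import escape

results = [("Power Test", "Passed"), ("RF Test", "Failed")]

def ascii_report(results):
    return "\n".join(f"{name}: {status}" for name, status in results)

def xml_report(results):
    steps = "".join(
        f'<step name="{escape(name)}" status="{status}"/>'
        for name, status in results
    )
    return f"<report>{steps}</report>"
```

The ASCII version can be read in any terminal or text editor, while the XML version can be transformed by an XSL stylesheet into whatever presentation the report's audience needs.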
For more information, view the Enterprise Systems Connectivity whitepaper.
5. Effective Test Software Development with NI TestStand
National Instruments TestStand is test executive software that directly implements a modular test system architecture. NI TestStand is a management tool that interacts directly with code modules written in almost any application development environment (ADE). An investigation of NI TestStand's relationship with ADEs reveals how a modular test system is realized.
Initially, NI TestStand plays the role of a test system automation controller. All test system operations are automated through test sequences composed of steps. Steps execute sequencing operations or invoke code modules. Each code module performs a task consisting of measurement acquisition or non-test-related I/O. Non-test-related I/O includes tasks that do not directly relate to the integrity of a particular UUT, such as reading a bar code scanner for a UUT serial number. NI TestStand is capable of invoking code modules developed in several ADEs. Each step invokes code modules directly through NI TestStand's set of module adapters. The adapters handle communication with specific module types such as LabVIEW VIs, C/C++ DLLs, ActiveX automation servers, and .NET assemblies.
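The adapter idea can be sketched as a simple dispatch: a step names its module type, and the matching adapter handles how that kind of module is invoked. The class and method names below are hypothetical illustrations of the pattern, not TestStand's adapter API.

```python
# Hypothetical sketch of the module adapter pattern: each adapter knows how
# to invoke one kind of code module; the step only names the module type.

class DllAdapter:
    def invoke(self, module, params):
        # Stand-in for loading a DLL and calling an exported C function.
        return f"called C function {module} with {params}"

class DotNetAdapter:
    def invoke(self, module, params):
        # Stand-in for invoking a method in a .NET assembly.
        return f"invoked .NET method {module} with {params}"

ADAPTERS = {"dll": DllAdapter(), "dotnet": DotNetAdapter()}

def run_step(module_type, module, params):
    """Dispatch a step's code module through the appropriate adapter."""
    return ADAPTERS[module_type].invoke(module, params)
```

Because the sequence engine only talks to the adapter interface, support for a new module type is added by registering a new adapter, without changing the engine or any existing sequences.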
Additionally, NI TestStand separates UUT-specific test operations from system-level operations by distinguishing between UUT test sequences and process model sequences. Process model sequences perform all non-UUT-specific actions, including result tracking, report generation, database logging, and parallel execution. A process model also controls the execution of UUT-specific test operations, allowing looped UUT testing and concurrent UUT test executions.
Application Development Environments are primarily used to develop code modules executed from NI TestStand. Code modules implement tasks such as measurement I/O, advanced data processing, and test specific dialogs. The ultimate goal of a modular test system is to develop and maintain as little code as possible. In that respect, significant emphasis should be placed on developing highly modular and reusable code modules. The guidelines enumerated in this article provide a rubric for achieving a modular test system.
An additional task of ADEs is to develop thin client operator interfaces for a deployed NI TestStand system. Operator interfaces provide a deployable test application that will execute on test stations. The application acts as an interface for any test operator executing tests. Operator interfaces are built from the open NI TestStand API and user interface components, each exposed through a set of ActiveX automation servers. A NI TestStand operator interface can be developed in any ADE that is capable of ActiveX functionality.
For more information, view the Guide to Effective Test Software Development with NI TestStand whitepaper.
6. Relevant NI Products and Whitepapers
National Instruments, a leader in automated test, is committed to providing the hardware and software products engineers need to create these next generation test systems.
- NI TestStand Test Management Framework
- NI LabVIEW for Automating Test and Validation Systems
- Signal Express Interactive Measurement Software
- Modular Instruments (Oscilloscopes, Multimeters, RF, Switching, and more)
- Multi-function Data Acquisition
- PXI System Components (Chassis and Controllers)
- Instrument Control (GPIB, USB, and LAN)
Test System Development Resource Library
National Instruments has developed an extensive collection of technical guides to assist you with all elements of your test system design. The content for these guides is based on best practices shared by industry-leading test engineering teams that participate in NI customer advisory boards and the expertise of the NI test engineering and product research and development teams. Ultimately, these resources teach you test engineering best practices in a practical and reusable manner. Download guides from the Test System Development Resource Library.