Most applications require some form of visualization, but a key decision that you need to make is where this processing takes place: inline, offline, or both.
Inline visualization implies that data is displayed in the same application that acquired it. For example, acquired data may be displayed on a computer screen so that a technician can see the signal being measured and verify that all connections are properly made. If inline analysis is run with inline visualization, a filtered version of the same signal might also be displayed on the monitor. This type of architecture gives you "instant feedback" since you can visualize acquired data in near real time, but it requires that your chosen application software contain all of the visualization tools you may need.
As with inline analysis, the caveat to visualizing data inline is that it takes extra processing power to execute the required calculations and display the data. User interface updates are among the most processor-intensive actions a CPU performs, so if your acquisition application has strict timing requirements, you must make sure that visualization does not become a system bottleneck and cause you to miss data. While developing your application, benchmark how long it takes to acquire, analyze, and visualize your data to verify that you aren't missing any data points. Another option is to parallelize your code so that one thread performs the data acquisition while another handles the signal processing and visualization at a lower priority, running only as CPU resources become available. This approach takes advantage of the multiple processor cores in most machines.
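The parallel pattern described above can be sketched as a producer/consumer pair: one thread hands off acquired blocks through a queue so the acquisition loop never waits on processing or display. The acquisition and analysis functions below are stand-ins, not a real driver API.

```python
import queue
import threading

def acquire(data_q, n_blocks=5):
    """Producer: simulates a DAQ read loop. In a real system this would
    call the hardware driver's read function (hypothetical here)."""
    for i in range(n_blocks):
        block = [i * 0.1] * 100      # stand-in for one block of samples
        data_q.put(block)            # hand off without blocking acquisition
    data_q.put(None)                 # sentinel: acquisition finished

def process(data_q, results):
    """Consumer: runs analysis/visualization work off the acquisition thread."""
    while True:
        block = data_q.get()
        if block is None:
            break
        results.append(sum(block) / len(block))  # placeholder "analysis"

data_q = queue.Queue()
results = []
producer = threading.Thread(target=acquire, args=(data_q,))
consumer = threading.Thread(target=process, args=(data_q, results))
producer.start()
consumer.start()
producer.join()
consumer.join()
```

The queue decouples the two loops: if processing falls behind, blocks accumulate in memory rather than stalling the acquisition thread.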
Though common, inline visualization is not always the correct methodology when implementing your system. In fact, it isn't even necessary in some applications. You may choose to perform offline visualization when you do not need to see data as it is being acquired or when you want to ensure that your computer processor can focus entirely on acquiring and streaming data to disk. Offline visualization involves storing data for inspection at a later date and requires an appropriate storage format and the selection of a dedicated offline visualization tool. However, opting to view data offline gives you unlimited flexibility when interacting with your data since you have access to the original raw data as it was acquired. Additionally, you are not limited by the timing and memory constraints of data acquisition, and visualization is no longer a potential bottleneck during live acquisition since the CPU does not have to perform computationally intensive graphics updates.
Many applications combine both inline and offline data visualization. Normally, inline visualization is limited to the minimal processing required to confirm the correct behavior of the system (for example, by slowing down the update rate of a graph). You can use offline visualization in conjunction with inline visualization to inspect and correlate data in detail when doing so no longer affects the acquisition itself.
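One common way to keep inline visualization lightweight, as suggested above, is to throttle the graph update rate: every block is stored and analyzed, but the expensive UI redraw runs only occasionally. A minimal sketch, with `redraw` standing in for a hypothetical UI call:

```python
UPDATE_EVERY = 10      # redraw the graph once per 10 acquired blocks

redraws = 0

def redraw(block):
    """Stand-in for an expensive graph update in a real UI toolkit."""
    global redraws
    redraws += 1

for i in range(100):
    block = [i] * 50               # stand-in for one acquired block
    # every block is stored/analyzed here ...
    if i % UPDATE_EVERY == 0:
        redraw(block)              # ... but the UI is touched only occasionally
```

Here 100 acquired blocks produce only 10 redraws, so the display confirms correct behavior without dominating CPU time.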
When choosing a data visualization tool, you must consider the volume of data that you can represent and the format of incoming data. If your data is being visualized inline using the application software you have chosen for acquisition, formatting shouldn’t be an issue. However, remember that data rates influence the amount of data that needs to be represented, which ultimately affects the graphics processing power necessary to render the visualization.
If your visualization tool is offline, then format is a concern. You should make sure that the visualization tool that you choose is capable of interpreting the file or database format you intend to use to store data.
Additionally, even offline data analysis tools are limited by the amount of memory allocated by the operating system and therefore can load only a certain subset of larger data sets. Many visualization tools impose a data constraint because of this limitation and prevent engineers from both loading and graphing more than a predetermined volume of data. For example, many spreadsheet tools impose a loading limit of 1,048,576 (2²⁰) data points per column and a graphing limit of 32,000 points per chart. Selecting a visualization tool that was designed to handle engineering data sets helps you access and visualize your data appropriately and often includes data reduction techniques that simplify working with extreme data sets.
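One such data reduction technique is min/max decimation: the waveform is split into buckets and only each bucket's minimum and maximum are plotted, so peaks and the signal envelope survive even when millions of points are reduced to a few thousand. A minimal sketch (the function name and bucket sizing are illustrative, not any particular tool's API):

```python
def minmax_decimate(samples, max_points=2000):
    """Reduce a long waveform to roughly max_points while preserving
    peaks, so the plotted envelope matches the full data set."""
    if len(samples) <= max_points:
        return list(samples)
    # each bucket contributes a (min, max) pair, i.e. two output points
    bucket = max(1, (2 * len(samples)) // max_points)
    reduced = []
    for start in range(0, len(samples), bucket):
        chunk = samples[start:start + bucket]
        reduced.append(min(chunk))
        reduced.append(max(chunk))
    return reduced
```

A 10-million-point capture reduced this way renders quickly, yet a spike one sample wide still appears on the graph because its bucket's maximum retains it.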
For visualization, most engineers require, at a minimum, basic charting and graphing capabilities. Luckily, almost every data visualization tool on the market can make simple charts and graphs, and dedicated visualization tools offer a robust set of additional capabilities that you can use to learn more from your data.
If you anticipate needing to graph different curves that have drastically different y-scales on the same chart, you need to ensure that your graphing tool has the capability to distinguish between these scales. Many tools have this capability but also have a limited maximum y-axis count.
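As a sketch of what this looks like in practice, assuming matplotlib as the graphing tool, a second y-axis can be attached to the same x-axis so a millivolt-level signal and a signal spanning hundreds of units remain individually readable. The channel names and data are invented for illustration.

```python
import matplotlib
matplotlib.use("Agg")          # headless backend; no display needed
import matplotlib.pyplot as plt

fig, ax_volts = plt.subplots()
ax_temp = ax_volts.twinx()     # second y-axis sharing the same x-axis

t = [i / 100 for i in range(100)]
ax_volts.plot(t, [v * 0.005 for v in t], color="tab:blue")
ax_temp.plot(t, [20 + v * 100 for v in t], color="tab:red")
ax_volts.set_xlabel("time (s)")
ax_volts.set_ylabel("voltage (V)")
ax_temp.set_ylabel("temperature (°C)")
fig.savefig("dual_scale.png")
```

Each curve gets its own scale, so neither flattens the other, which is exactly the capability to verify before committing to a tool with a limited maximum y-axis count.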
In addition, you should consider your visualization needs that go beyond basic 2D graphing. For example, if you need to represent data using polar plots, or if your data would be best represented in the form of a 3D graph, then your visualization tool must support these capabilities.
The scalability and customizability of visualization are important to consider. Because every engineering measurement application is different – from containing different measurement types to having different goals – a visualization tool should be flexible enough to be tailored to the needs of your application. Out-of-the-box tools that incorporate data acquisition and visualization in a closed package often impose rigid limitations on the type of visualization offered. While this may be suitable initially, you may need a more extensive look into acquired data to make informed decisions; this requires that more data be graphed per curve, more curves be plotted per graph, and more graphs be viewed. Or it may simply require zooming, scrolling, and scaling tools that many visualization packages restrict. If you expect your visualization needs to grow as your application expands, make sure you choose a visualization tool that scales as well.
Advanced engineering data postprocessing tools feature synchronized visualization that extends beyond simple spreadsheets and static graphs. In addition to zooming and scrolling graph axes, cursors between graphs can be synchronized – usually using a common timebase – to correlate information viewed between one graph and another. For example, cursors on a graph may allow an engineer to specify a beginning and ending x-region subset of a curve that is used to dynamically calculate and display the fast Fourier transform (FFT) of the data in the resulting region. That band can then be panned back and forth along the superset of data, extended, or reduced to isolate the region of interest.
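The cursor-driven FFT described above can be sketched as follows, assuming NumPy; the sample rate, test signal, and cursor positions are invented for illustration. The two cursor values select a subset of the time-domain record, and the spectrum is recomputed from that region alone.

```python
import numpy as np

fs = 1000.0                            # sample rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t)    # 50 Hz test tone

# cursor positions the user would drag on the time-domain graph (seconds)
x_start, x_stop = 0.5, 1.5
region = signal[int(x_start * fs):int(x_stop * fs)]

# FFT of only the cursor-selected region
spectrum = np.abs(np.fft.rfft(region)) / len(region)
freqs = np.fft.rfftfreq(len(region), d=1 / fs)
peak_hz = freqs[np.argmax(spectrum)]   # dominant frequency in the band
```

Panning the cursors simply changes `x_start` and `x_stop` and reruns the last three lines, which is what lets the displayed spectrum track the region of interest dynamically.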
In addition to synchronizing graphs of measurement data with other graphs of data or resultant calculations, advanced tools can synchronize measurement data on graphs with data from other sources such as video, sound, 3D models, or GPS. By correlating measurement data with information from these other sources, which often provide even more engineering context than simple curves on graphs, you can discover more from your measurement investment. For example, these synchronization tools enable advanced visualization capabilities that play back measurement data linked to video so you can see what happened during a measurement, sound so you can hear what happened, or GPS so you can determine where something happened.