Become Tech-Forward with a Data-Informed Approach

Published 03.08.2022

In 1986, Tom Cruise felt the need for speed. In 2022, we all do. With today’s accelerated pace of business, faster time to market (TTM) is the key to competitive advantage. Manufacturers are pushing products to market as soon as possible, because winning requires not only delivering the best product but also delivering it first.

In dynamic market segments like electronics and medical devices, the brands that win are the ones compressing TTM and time to profit (TTP). Developing products faster, better, and cheaper means optimizing product development and new product introduction (NPI) timelines. How can you do the same? By harnessing the power of data—from subsystems, equipment, results, yield, performance, throughput, reruns, and so forth—you can acquire meaningful knowledge to make impactful adjustments across your validation and production life cycles.

But when it comes to obtaining significant speed, there’s one piece you may be missing: test data. Maximizing equipment utilization helps with cost optimization. Tracking system health ensures accurate results. Situational awareness improves root cause analysis. And proactive alerts keep operations going, limiting unplanned downtime. This combination gives you the foundation to derive quality insights.

This is what it means to be tech-forward—taking a holistic view of your operations and leveraging data across the life cycle to connect individual data elements into a clear picture of your business. 

Getting the Full Picture

According to IDC, by 2025, the world’s data volume is expected to grow to 175 zettabytes (61 percent annually). How much data did your organization produce last year? What about last month? How much since you started reading this article? If you don’t know how much data you’re producing, how can you manage or use it? This explosion of data in today’s manufacturing environment drives the need for comprehensive data management.

Missing pieces of data can have real consequences. Imagine if we prematurely passed a battery for a TV remote. A customer attempting to use that remote to change the channel would be pretty frustrated when nothing happened. Now imagine if we prematurely passed a battery for a pacemaker. The consequences could be far more severe.

When it comes to reducing product failure, data is vital. With complete, plentiful data sets that can be compared and analyzed, we can validate and guarantee performance. Incorrect or incomplete information may result in equipment malfunctions or life-threatening escapes.

But how do you use your data as a driver for real-time decision-making to get the full picture in every situation? You employ five key vectors—ontology, format, interface, integrity, and harmonization. 

Data-driven companies grow, on average, 30 percent faster than their peers, according to Forrester. So, if you want to outpace your competition, you should become proficient in the five vectors.

Raising Your Standards

The first three vectors—ontology, format, and interface—are centered around data standardization and data exchange. Even though the amount of data increases exponentially, the benefit comes from utilization, not volume. And it’s not only about accessing data in the present. In order to leverage data, you must be able to access data from a year ago and/or another department. Without a form of data standardization, it’s difficult to collaborate, creating data silos across an organization. The ability to look at material source history, test results, and operational information is only possible when data is consumable.

Ontology

Ontology is a way of organizing data so that individual data elements can be related to one another. It describes concepts, their multidimensional relationships, and information about the relationships among entities. Examples include the Unified Modeling Language (UML) to specify the ontology model and Uniform Resource Identifiers (URIs) to define the actual connections. Thus, ontology involves forming a comprehensive, cohesive structure that is extensible as new technologies and test requirements emerge. Ontology goes beyond pure taxonomy: it combines data models and the associations between them, connecting existing data silos.

Think about the creation of a product. All data related to the production process (inventory, stock, and supply chain management) must be available. In addition, critical data about the product itself should be gathered during production test (quality management). There is also valuable data along the life cycle, starting in design (simulation) and moving through verification and validation (verifying product requirements). Often, these individual pillars require specific data repositories and tools to achieve their individual goals, thus producing data silos. These silos make it difficult to get a holistic view of the process to determine the root cause for events like recalls. Ontology connects the individual data repositories, providing a 360-degree view and, with it, the answer.
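
As a rough sketch of the idea, the Python snippet below links records that would normally live in separate repositories (a test database, an MES, an ERP system) through URIs and typed relationships. The entity kinds, URIs, and relationship names are hypothetical and chosen only for illustration; a real ontology would be specified formally, for example in UML, rather than in application code.

```python
# Illustrative only: a tiny ontology-style linkage between data silos.
# Entity kinds, URIs, and predicates below are hypothetical, not an NI or ASAM model.
from dataclasses import dataclass, field

@dataclass
class Entity:
    uri: str                       # unique identifier, e.g. "urn:example:unit/SN-1042"
    kind: str                      # concept in the ontology, e.g. "Unit", "TestResult"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relation:
    subject: str                   # URI of the source entity
    predicate: str                 # relationship type, e.g. "testedBy", "builtFrom"
    obj: str                       # URI of the target entity

# Records that normally live in separate repositories
entities = {
    "urn:example:unit/SN-1042":  Entity("urn:example:unit/SN-1042", "Unit"),
    "urn:example:test/T-77":     Entity("urn:example:test/T-77", "TestResult", {"verdict": "FAIL"}),
    "urn:example:lot/L-2022-03": Entity("urn:example:lot/L-2022-03", "MaterialLot", {"supplier": "Acme"}),
}
relations = [
    Relation("urn:example:unit/SN-1042", "testedBy",  "urn:example:test/T-77"),
    Relation("urn:example:unit/SN-1042", "builtFrom", "urn:example:lot/L-2022-03"),
]

def related(uri: str, predicate: str) -> list:
    """Follow one relationship type outward from an entity."""
    return [entities[r.obj] for r in relations if r.subject == uri and r.predicate == predicate]

# Root-cause style question: which material lots fed the unit that failed its test?
failed_unit = "urn:example:unit/SN-1042"
print([e.attributes for e in related(failed_unit, "builtFrom")])   # [{'supplier': 'Acme'}]
```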

Format

Format refers to the structural organization of data within a file, stream, or database. While based on an ontology, format is also a critical part of the communication protocol that ensures data is internally consistent when being exchanged. Format assists in creating a standard practice for storing and locating data. Often when first testing a prototype and collecting data quickly, you’re more concerned with collecting data than doing it in a structured way. However, as you move to validation and have multiple people testing across stations, it becomes increasingly difficult to compare results. Format standardizes the way that you name and organize files and data so that you can find what you need when you need it.
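
As a minimal sketch, the snippet below applies one shared naming convention and one record layout across stations, so results collected during prototype bring-up can still be located and compared during validation. The field names and file-name pattern are hypothetical, not an established NI standard.

```python
# Illustrative only: a shared file-naming convention and record layout.
import json
from datetime import datetime, timezone

def result_filename(project: str, station: str, dut_serial: str) -> str:
    """Standardized, sortable name: <project>_<station>_<serial>_<UTC timestamp>.json"""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{project}_{station}_{dut_serial}_{stamp}.json"

def result_record(dut_serial: str, test_name: str, value: float, unit: str, passed: bool) -> dict:
    """Every station writes the same keys, so downstream tools parse any file the same way."""
    return {
        "schema_version": "1.0",
        "dut_serial": dut_serial,
        "test_name": test_name,
        "value": value,
        "unit": unit,
        "passed": passed,
    }

record = result_record("SN-1042", "battery_voltage", 3.71, "V", True)
with open(result_filename("remote", "station07", "SN-1042"), "w") as f:
    json.dump(record, f, indent=2)
```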

Interface

Interfaces ensure that the data can be connected. That is, there is a bridge for standardized data exchange between two entities. An interface can exist between two separate processes or between different modules of the same process. In both cases, a formal definition of the interface is important, whether in the form of an Interface Definition Language (IDL), a simple C header file, or a comparable technology. As we look to compare early test results in design to production, from the current data to what is upcoming, compatibility between versions is essential. If the data from a previous iteration cannot be compared with a current, more comprehensive test, you may be forced to redo tests and spend time unnecessarily.
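
One hedged illustration of that version compatibility: a reader that dispatches on a declared schema version so records from an earlier test iteration can still be loaded and compared with current ones. The schema versions and key names below are invented for the example.

```python
# Illustrative only: one loading interface that accepts both an old and a new record shape.
from typing import Callable

def read_v1(raw: dict) -> dict:
    # v1 records used different key names and carried no unit; assume volts
    return {"dut_serial": raw["serial"], "value": raw["voltage"], "unit": "V"}

def read_v2(raw: dict) -> dict:
    return {"dut_serial": raw["dut_serial"], "value": raw["value"], "unit": raw["unit"]}

READERS: dict = {"1.0": read_v1, "2.0": read_v2}

def load_result(raw: dict) -> dict:
    """Dispatch on the declared schema version so any supported iteration stays comparable."""
    return READERS[raw.get("schema_version", "1.0")](raw)

old_record = {"serial": "SN-0007", "voltage": 3.65}   # early prototype data, before versioning
new_record = {"schema_version": "2.0", "dut_serial": "SN-1042", "value": 3.71, "unit": "V"}
print(load_result(old_record))
print(load_result(new_record))
```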

As a solution, one might look to data formats and interfaces created by a standardization body, such as ASAM ODS and MDF4, or to de facto industry standards like Google’s Protocol Buffers, Amazon’s AWS S3, and NI’s TDMS. Their widespread adoption reduces the engineering time spent on data management: if new technology arrives or some unforeseen shift happens, others in your field will be adjusting in the same way. And if you plan to use a product from a third party, a standard format eases the connection by minimizing the cost of specialty integration services.
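
For instance, results can be written to NI’s TDMS format from Python, assuming the open-source npTDMS package is installed (LabVIEW and other NI tools write TDMS natively); the group, channel, and property names here are illustrative.

```python
# Illustrative only: writing one channel of results to a TDMS file via npTDMS.
import numpy as np
from nptdms import TdmsWriter, ChannelObject

voltages = np.array([3.70, 3.71, 3.69])
channel = ChannelObject(
    "BatteryTest",                 # group name
    "pack_voltage",                # channel name
    voltages,
    properties={"unit_string": "V", "dut_serial": "SN-1042"},
)
with TdmsWriter("battery_test.tdms") as writer:
    writer.write_segment([channel])
```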

Figure 1: Utilizing the five vectors of data for comparison. Chart A displays a blank radar chart. Chart B compares three file formats. Chart C compares mechanisms for managing, searching, and locating data.

The five vectors shown in Figure 1 can be used to evaluate data options. Chart B compares three file formats, which address the standardization vectors. TDMS and MDF4 provide the best structure; however, these formats alone aren’t enough, because they don’t address quality. For that, we look at Chart C, which compares mechanisms for managing, searching, and locating data. You need to consider all of the vectors to get a complete picture.

Quality over Quantity

While the first three vectors guarantee standardization, the integrity and harmonization vectors examine quality. When using data to inform decisions, quality is essential. This is not about the number of significant figures. What is required is a holistic view of the entire data set and being able to rely on multiple pieces of information for reference and analysis.

Integrity

Integrity refers to the accuracy and completeness of data: it is about ensuring the data set is correct and contains all of the desired information. Look to XML Schema (XSD) or the ongoing development of JSON Schema to better understand it. This is where the data itself is examined. With accessibility guaranteed by the first three vectors, integrity ensures the data set can be trusted. Preserving integrity requires additional measures, from simple checksum creation to detect manipulation up to data governance that defines access rights, visibility, and privacy (remember Europe’s GDPR). When looking to gather insight, this allows us to get a complete and compliant overview.
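
As a small sketch of the two measures named above, the snippet below validates a result record against a JSON Schema to confirm completeness and computes a checksum to detect later manipulation. It assumes the jsonschema Python package, and the schema itself is illustrative.

```python
# Illustrative only: completeness check via JSON Schema plus a checksum for tamper detection.
import hashlib
import json
from jsonschema import validate

RESULT_SCHEMA = {
    "type": "object",
    "required": ["dut_serial", "test_name", "value", "unit", "passed"],
    "properties": {
        "dut_serial": {"type": "string"},
        "test_name": {"type": "string"},
        "value": {"type": "number"},
        "unit": {"type": "string"},
        "passed": {"type": "boolean"},
    },
}

record = {"dut_serial": "SN-1042", "test_name": "battery_voltage",
          "value": 3.71, "unit": "V", "passed": True}

validate(instance=record, schema=RESULT_SCHEMA)   # raises ValidationError if a field is missing

payload = json.dumps(record, sort_keys=True).encode()
checksum = hashlib.sha256(payload).hexdigest()    # store alongside the record; recompute later to detect changes
print("sha256:", checksum)
```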

Harmonization

When looking to evaluate test results, or anything for that matter, we need an apples-to-apples comparison. Harmonization plays an important role in this task. In test design, considerations must be made to ensure the data can be combined and cross-referenced. Harmonization can be applied, for instance, by mapping incoming data to the same data model (or schema), resampling and synchronizing measurement signals, or converting to the same engineering units. If we tested a device with varying conditions during each iteration, we wouldn’t be able to efficiently analyze the results for verification. As tests vary and evolve, there must be a common thread to extract information and compare results.
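
A minimal sketch of that kind of harmonization, with illustrative signals: two stations log the same quantity at different rates and in different units, so both are resampled onto one time base and converted to volts before comparison.

```python
# Illustrative only: resample two logged signals onto a shared time base and unify units.
import numpy as np

# Station A logged pack voltage in volts at 10 Hz; station B logged millivolts at 4 Hz
t_a = np.arange(0, 5, 0.10)
v_a = 3.7 + 0.01 * np.sin(t_a)            # volts
t_b = np.arange(0, 5, 0.25)
v_b = 1000 * (3.7 + 0.01 * np.sin(t_b))   # millivolts

common_t = np.arange(0, 5, 0.5)                       # shared time base for both stations
a_resampled = np.interp(common_t, t_a, v_a)           # already in volts
b_resampled = np.interp(common_t, t_b, v_b) / 1000.0  # convert mV to V

print(np.max(np.abs(a_resampled - b_resampled)))      # apples-to-apples difference
```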

If we want data to drive timely, strategic decisions, we need good data; it is what lets us make the right decision at the right time. But what if you’re missing a piece of data? What if it’s inaccurate? What if you make a misstep, act too late, or need to perform costly retests? Integrity and harmonization allow you to see what is actually happening and to act on your next decision with confidence.

According to IBM, poor-quality data costs organizations across the United States $3.1 trillion a year. How much will it cost you?

Smarter, Faster Decisions

A cohesive long-term strategy requires expanding your view of data along these five vectors. We’re living in an increasingly connected world, and maximizing the value of technology and data relies on tracking multiple sources at once. Being tech-forward means bringing these pieces together to enable real-time response. The key to success is proactively identifying problems, ensuring all test stations are healthy, and turning individual results into understandable insights.

For post-pandemic growth, harnessing the power hidden within data gives manufacturers the agility required to thrive. When it comes to reducing TTM and ensuring profit in a new product introduction, finding hidden areas to improve quality, efficiency, and processes is possible with insights derived from data.

If you fail to prepare, you prepare to fail. Sound data management is how organizations power their future growth. Without taking steps today to prepare for tomorrow, you will fall behind, but that doesn’t have to be the case. As you work toward improving overall performance across your test workflows, NI can support you with tailored software.

Data ontology, format, interface, integrity, and harmonization all set the required foundation. How many vectors are you looking at?