Pundits tout time to market as an indicator of innovative, fast-moving companies. At NI, we are hearing from our test-manager community that aggressive time-to-market goals can put intense pressure on them and their teams. We asked our electronics production test specialist Graham Green to explain what’s going on, and what best practices can relieve a little of this pressure.
NI: Graham, before we get into the effects of time-to-market pressure, can you explain why it exists and why it’s stronger than ever?
GG: In short, it exists because we all want new stuff, and we want it now. Moore’s Law is the best-documented driver, accelerating processor performance and pushing new devices into the market, but it’s not alone. New wireless standards, built-in audio assistants, and screen and battery technology are all becoming differentiators for which being first to market offers a significant advantage. I see devices becoming more complex whilst intervals between releases shorten, putting significant pressure on engineering and manufacturing teams.
NI: If we accept that design schedules are getting shorter, why does this have a particularly adverse effect on test?
Figure 1. Schedule Compression throughout a Project
GG: Here is a graphic I have shown for years, and it’s as true today as it was when I joined the industry. It shows that the later the process stage you are responsible for, the higher the risk that your schedule is going to be unfairly squeezed. It is rare that I talk to a test engineer who is not feeling the pressure to hit aggressive development schedules. This is compounded by two important differences between the time-to-market and test-development schedules: the first test station must be operational before the first products arrive, and those stations must then be replicated quickly to meet manufacturing volume.
NI: Let’s take these one at a time. Getting that first test station operational on time seems critical—what can test engineering teams do to build confidence that they will reliably achieve this?
GG: The textbook answer is to talk about high-productivity software tools, or building team proficiency for efficient group development. If this interests you, check out these LabVIEW and Center of Excellence resources. But there’s a less-publicized factor that has a huge effect on hitting schedules: Effective test planning.
The first planning hurdle is ensuring that your test cases cover all of the requirements in the test specification. Finding that a use case is not covered, and adding it later, is significantly less efficient than catching it up front. All of us have felt the pain of last-minute changes, especially on highly distributed test lines, where balancing cycle time is critical.
You can avoid this by closely collaborating with design teams to perform coverage analysis, “linking” test cases from your test specification to product requirements. Of course, it’s easy to say “closely collaborate,” and harder to make it an organizational reality.
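The coverage analysis Graham describes can be as simple as cross-referencing which requirements each test case claims to verify. The sketch below illustrates the idea; the requirement and test-case IDs are invented for illustration, and a real workflow would pull these links from a requirements-management tool rather than hard-coded dictionaries.

```python
# Minimal sketch of requirements-to-test-case coverage analysis.
# Requirement IDs and test-case links here are hypothetical examples.

requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

# Each test case lists the requirement IDs it verifies.
test_cases = {
    "TC-PWR-01": {"REQ-001"},
    "TC-RF-02": {"REQ-002", "REQ-003"},
}

def coverage_report(requirements, test_cases):
    """Report coverage percentage, uncovered requirements, and orphaned links."""
    covered = set().union(*test_cases.values()) if test_cases else set()
    uncovered = requirements - covered
    orphaned = covered - requirements  # links to unknown requirements
    return {
        "coverage_pct": 100 * len(covered & requirements) / len(requirements),
        "uncovered": sorted(uncovered),
        "orphaned": sorted(orphaned),
    }

report = coverage_report(requirements, test_cases)
print(report)  # REQ-004 has no test case -> flag before development starts
```

Running a check like this every time the requirements or test specification changes surfaces gaps while they are still cheap to fix, rather than after the line is balanced.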
If you want to secure an equal seat at the table, you must demonstrate how test engineering involvement benefits other teams. Specification changes (especially at later stages) usually cause the most friction, so this is a great place to start. By co-defining a change-management process for both requirements and test specifications, you all succeed, which can lead to further collaboration. NI has plenty of experience with this, and our teams are happy to advise you.
NI: OK, so once you’ve agreed on test specification, how can you plan for efficient development?
GG: Before we build forward, we have to be confident in our starting position. We want to minimize the likelihood that information about existing resources or capabilities is inaccurate. Test engineers love to reinvent the wheel—we went into engineering because we love making things. And what better excuse to write new code than a lack of confidence that the test asset or code library is correct and up-to-date?
I’ve seen this again and again with engineers who refuse to use standard libraries because they are convinced that their way is better. It only takes a few failed attempts to lose trust in the new system and revert to old ways. Once this behavior is set, it’s hard to drive change in an organization, no matter how much it’s needed.
Best practice once again dictates diligent process administration. Does your organization agree on who must be informed of, or sign off on, changes to hardware on a test station or to a piece of reusable software? Do all parties know where this information is stored, and how it gets updated? While there are software products that keep track of all this, you need momentum and active stakeholder adoption to achieve success. Then, it’s critical that you maintain this library to build confidence that the assets within it are current and high-quality.
NI: Can you give us an example of this kind of strategy in action?
GG: Sure. Neil Evans did exactly this with his team while working on ultrasound products for Philips. They built a library of software modules of well-written, verified code. Their architecture was designed from the ground up to encourage reuse.
The most significant investment in standardization comes with a core team setting up the initial framework. Once this is complete, adding updates and maintaining codebases is a lighter lift because, at this stage, teams from different organizations can participate as local contributors.
-Neil Evans, Senior Manager, Philips
Evans’ team documented each module’s functionality and use case, and encouraged engineers to use them correctly. Initial success soon fueled organic adoption and collaboration, and the project gathered momentum. Overall, the team achieved an 80 percent reduction in new-product-introduction (NPI) test development effort and schedule compared with previous, similar projects (calculated from engineering hours recorded per project).
NI: So far, we’ve discussed getting that first test station out the door. How about when it comes to scaling systems to meet manufacturing volume—how can we improve time-to-market here?
GG: Traditionally, there’s a trade-off between time spent firming up a design before replication, and time spent updating replicated test stations with new revisions introduced late in the process. Unless you work in a highly regulated industry (such as medical devices), where certification forces designs to be locked down early, adopting a more agile test approach means that you don’t need to make this trade-off.
What does this agility look like? First, design a minimum viable test station around chassis-based modular instrumentation so that you can expand your I/O without changing footprint or rack layout. Next, software-connect your test stations to manage system configuration, software versioning, and data. This way, you can remotely update, reducing in-person software deployment and maintenance.
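Commercial tools such as SystemLink manage configuration and versioning at scale, but the underlying idea can be illustrated simply: compare what each station reports as deployed against a released baseline, and flag anything that lags. The station names, component names, and version numbers below are invented for illustration.

```python
# Minimal sketch of flagging test stations whose deployed software
# lags the released baseline. All names and versions are hypothetical.

baseline = {"test_sequence": "2.4.1", "driver_pack": "21.0"}

# What each station reports as currently installed.
stations = {
    "line1-st01": {"test_sequence": "2.4.1", "driver_pack": "21.0"},
    "line1-st02": {"test_sequence": "2.3.0", "driver_pack": "21.0"},
}

def stale_stations(stations, baseline):
    """Return stations with any component that differs from the baseline,
    mapping station -> {component: (deployed_version, baseline_version)}."""
    stale = {}
    for name, deployed in stations.items():
        diffs = {c: (deployed.get(c), v)
                 for c, v in baseline.items() if deployed.get(c) != v}
        if diffs:
            stale[name] = diffs
    return stale

print(stale_stations(stations, baseline))
# line1-st02 needs a test_sequence update before the next build
```

With connected stations, the same comparison drives automated remote updates instead of a technician walking the line with a USB stick.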
Bridging the gap between test-station operational technology and IT infrastructure has long fallen on test engineers’ shoulders, and is a common complaint. In most cases, test engineers are not experts in network communication, databases, or visualization technology, which puts a development and maintenance strain on teams and takes them away from valuable test engineering work. As more commercial off-the-shelf solutions—such as NI SystemLink software—enter this market space, not only can engineers confidently deploy iteration after iteration of test code, but they also receive other benefits such as system health monitoring, test asset utilization data, and more holistic test-data analysis.
NI: Agile test-station development and deployment sounds good, but can you give an example of where this is actually happening?
GG: Of course. Let’s talk about the team at a leading appliance manufacturer. Their ability to remotely deploy and manage test revisions justified their investment in enterprise-wide test-station management software. On top of this, their expanded access to data visualizations is spawning new process improvements every day, further shortening development and improving operational metrics. Their team maintains over 170 test stations, and according to their group manager:
Using SystemLink to achieve a graceful shutdown, install, and restart process, we were able to reduce deployment time from 30 minutes per system to 3 minutes for an entire production line.
-Test Engineering Manager
NI: So, what’s next for time-to-market and NPI schedule improvement?
GG: I’m excited about using machine learning to better define what and how we test, as well as to automate our workflows. Data analytics studying each tester can identify critical areas that need stringent tests, as well as overserved areas that we can optimize. We gain further value when we use analytics to automate processes across all connected testers and assets. For example, I predict that future test programs will be automatically generated through an intelligent, data-driven system. Once this is a reality, test engineers can swiftly iterate through and optimize designs, and missed NPI schedules will exist only in history.
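A first step toward the data-driven optimization Graham predicts is simply mining historical results for per-test failure rates: tests that fail often deserve stringent limits, while tests that never fail are candidates for sampling or removal. The test names, thresholds, and results below are invented for illustration.

```python
# Minimal sketch of ranking tests by historical failure rate to find
# critical vs. overserved coverage. All data here is hypothetical.

from collections import Counter

# (test_name, passed) pairs from past production runs
results = [
    ("leakage_current", False), ("leakage_current", True),
    ("leakage_current", True), ("audio_thd", True),
    ("audio_thd", True), ("audio_thd", True),
]

def failure_rates(results):
    """Map each test name to its fraction of failed runs."""
    runs, fails = Counter(), Counter()
    for name, passed in results:
        runs[name] += 1
        if not passed:
            fails[name] += 1
    return {name: fails[name] / runs[name] for name in runs}

rates = failure_rates(results)
critical = [n for n, r in rates.items() if r > 0.1]   # keep stringent
overserved = [n for n, r in rates.items() if r == 0]  # optimize or sample
print(critical, overserved)
```

Real deployments would weight this by cost of escape and cycle time per test, but even this simple ranking focuses engineering attention where the data says it matters.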
Learn more about NI's solutions for functional test or download the solutions brochure for more in-depth reading on the constituent parts and products. Do you have specific questions on your test strategy or an upcoming test station project? Talk to us today.
©2020 National Instruments. All rights reserved. National Instruments, NI, ni.com, and LabVIEW are trademarks of National Instruments Corporation. Other product and company names listed are trademarks or trade names of their respective companies. An NI Partner is a business entity independent from NI and has no agency, partnership, or joint-venture relationship with NI.