For decades, design and validation have relied upon the V diagram and, aside from a few variants, it has remained unchanged. This is largely due to the V diagram’s validity, its ability to scale, and its proven record.
As vehicles become connected, autonomous, shared, and electric (CASE), uncompromising safety is driving design and test toward the left side of the V diagram and significantly increasing required test coverage: not only because of the software running in the vehicles, but because of the use cases and unknowns that come with advancing technology and the push for continuous software updates, both during development and in the aftermarket.
Let’s look at the V diagram from an automotive-industry test perspective to discover design-optimization opportunities.
Vehicles have a mechanical past, but a software future. Despite software’s primary role in modern cars, the industry continues to invest heavily in prototype vehicle testing. While physical testing may be necessary from a safety standpoint, validating designs in real conditions alone is nearly impossible because of cost, late fault detection, and lack of repeatability.
Because of these limitations, companies strive to get that physical prototype—and the finalized design—in the best possible state from the start. Automotive OEMs are modifying the V diagram to “front-load” development and test and increasingly utilize virtual prototypes (Figure 1), significantly lowering cost and rework burden and speeding up development. This facilitates tighter collaboration between design, development, and validation groups earlier in the process.
Figure 1. Reduce rework and front-load development using software and data toolchains for fast test iterations.
Similar to the front-loading that leads to the double V in Figure 1, companies might attempt to “narrow the V” or “shift left.” No matter the design process, however, they all turn to simulation and lab techniques to increase test coverage in the safer, faster, and less costly virtual world. In 2018, researchers found that splitting automated driving function test cases between virtual and physical environments yielded substantial cost savings compared with testing solely in live situations.1 In this case, the researchers estimated a cost reduction of up to 90 percent.
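The arithmetic behind such a split is straightforward. The sketch below uses hypothetical per-test-case costs (the figures are illustrative, not taken from the cited study) to show how moving 90 percent of test cases into simulation drives down total campaign cost:

```python
# Illustrative cost model for splitting test cases between virtual and
# physical environments. All cost figures are hypothetical assumptions.

def campaign_cost(n_cases: int, virtual_share: float,
                  cost_virtual: float, cost_physical: float) -> float:
    """Total cost when `virtual_share` of the test cases run virtually."""
    n_virtual = round(n_cases * virtual_share)
    n_physical = n_cases - n_virtual
    return n_virtual * cost_virtual + n_physical * cost_physical

# Assumed figures: 10,000 test cases, $50 per virtual run, $5,000 per road run.
baseline = campaign_cost(10_000, 0.0, cost_virtual=50, cost_physical=5_000)
shifted = campaign_cost(10_000, 0.9, cost_virtual=50, cost_physical=5_000)
saving = 1 - shifted / baseline
print(f"Cost reduction: {saving:.0%}")  # prints: Cost reduction: 89%
```

Under these assumed numbers, the saving lands near the 90 percent figure reported by the researchers; the point is that even cheap virtual runs in large volume barely dent the total, while every road test avoided pays back heavily.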
Shifting towards virtual simulation is almost instinctive for emerging automotive companies, such as Waymo, that focus on autonomy and possess software-testing expertise. From experience, these companies know the benefits that come from testing more in simulation.
With all that automotive companies stand to gain from virtual prototypes, why isn’t it more common?
While there is no established process to make the shift to front-load test, there are several complex, interrelated challenges involving people, process, and technology. Within these challenges lie opportunities. But before we examine those opportunities, let’s define each challenge:
People—This relates to skills and training and how an organization supports continuous virtual and lab-based testing and integration, as well as internal alignment between groups, and external alignment to include suppliers and collaborators.
Process—Process involves what to test and when, using automated test-management techniques and methods that correlate virtual, lab, and real-world testing. These methods, which require buy-in, utilize the skills and training mentioned above to speed up development and test without sacrificing reliability, budget, or test coverage. Process also encompasses safety standards such as ISO 26262, as well as existing and upcoming regulations.
Technology—Technology equates to the tools that align with skill sets and processes. With the right technology, testers perform X-in-the-loop testing (model-in-the-loop and software-in-the-loop) and then bring the test to a hardware-in-the-loop system or a lab test. Technology spans the entire spectrum, from a single component or domain to full-vehicle real-world testing.
These three vectors and their related components are complex enough to deserve undivided attention. We recommend assessing existing circumstances and determining what variables or processes you want to optimize for your specific business goals to achieve the proper balance among the three.
Here, though, we’ll focus on technology and how it impacts confident test front-loading (acknowledging that examining a single vector is insufficient to overcome test challenges).
In Figure 2, we see the traditional representation broken out by where design and test are done (virtual vs. lab vs. physical). This breakdown helps pinpoint opportunities to run tests earlier, place test iterations at the right time and stage, and get to physical test as effectively as possible.
Figure 2. Expanding the V diagram shows where test happens to help identify opportunities for shifting left.
Clearly, the further right and up we go on the V diagram, the more complex identified defects become and the later they surface, both of which can negatively impact development; however, recall that the test environment impacts key variables (Figure 3). In short, thorough, comprehensive testing on the left side leads to real-world trials with equally high test coverage.
Figure 3. This continuum shows the trade-offs and benefits of testing at different stages.2
Now, let’s look at some areas in which technology can help with the shift and increase test on the left side of the V diagram.
Test optimization helps when you’re front-loading the left side of the V and increasing effectiveness in the lab and physical world, as well as across all domains. This is where technology’s strengths and weaknesses are revealed.
Traditionally, test suppliers have focused primarily on being the best in one area—simulation, lab, or physical test—but not all three. When improving test through a modular hardware approach that is also connected through software, it helps to optimize across all three of these areas.
While reusing components offers a cost benefit, it’s really the time savings that makes reuse so valuable, and software is what enables it. By minimizing rework between V-diagram stages, you achieve a more integrated design and test flow.
In practice, though, challenges abound, including supplier tensions; the way the organization is structured, siloed, or measured; and technology. The technology challenge should be the easiest to surmount: with the right test architecture, engineers can carry test modules across the different in-the-loop stages, from component test to system test to integration test.
Employing open test-system and test-development software architectures, we can reuse test cases, equipment, and engineering development, not only within groups working on the same products, but between serially developed products. Volvo’s example shows how the right architecture and technology future-proof test systems to meet delivery dates, quality standards, and budget requirements. The company efficiently integrated products from multiple vendors, reused existing components, and built flexibility into the system to prepare for Volvo’s future needs. Setting up the system went so smoothly that Volvo delivered world-class quality on time and at the right cost with limited resources.
When you’re shifting left, you need to make data work in favor of the shift, instead of it being another challenge to overcome. This is especially important with CASE, as test data has exploded so much that organizations absolutely must become more data-driven.
Because of technology and methodology limitations, it’s common to analyze only a portion of test-specific data, which is rarely linked back to previous test stages or pushed forward as test intelligence to future ones. However, Jaguar Land Rover automated its data management to increase analysis coverage, significantly reducing test reruns and carrying cost and test-reliability benefits all the way through to physical test.
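One way such automation avoids reruns is to key every result to a stable test-case ID and software build, so a later stage can query earlier evidence instead of repeating the test. The schema and helper below are a hypothetical illustration of the idea, not Jaguar Land Rover’s actual system:

```python
# Illustrative sketch: link test results across stages so a road rerun can
# be skipped when the same case already passed at HIL for the same build.
# The schema, stage names, and case IDs are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class TestResult:
    case_id: str   # stable ID shared across virtual, lab, and road runs
    stage: str     # "MIL", "SIL", "HIL", or "road"
    passed: bool
    build: str     # software build under test

def needs_physical_run(results: list[TestResult],
                       case_id: str, build: str) -> bool:
    """Skip a road rerun if this case already passed at HIL on this build."""
    return not any(
        r.case_id == case_id and r.build == build
        and r.stage == "HIL" and r.passed
        for r in results
    )

log = [TestResult("LaneKeep-042", "SIL", True, "1.4.0"),
       TestResult("LaneKeep-042", "HIL", True, "1.4.0")]
print(needs_physical_run(log, "LaneKeep-042", "1.4.0"))  # prints: False
```

Note that a new build invalidates the earlier evidence: querying the same case against build "1.5.0" would report that a physical run is still needed, which is exactly the traceability that turns stored data into forward-flowing test intelligence.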
Using data to test earlier brings further benefits as well.
And, perhaps most importantly, using data to bridge communication between groups working on a specific product expedites decision-making, collaboration, and course-correction.
As with any journey, the starting point matters just as much as the destination. Truly understanding how test happens now helps you discover gaps where testing could be occurring, and how covering those gaps would benefit your company.
The basic concept is simple: Move away from reworks in the red zone, as shown in Figure 4.
Figure 4. Rework at different stages can place you in the red zone, where wasting time and resources grows disproportionately.
However, this is tremendously difficult, and often harder to do singlehandedly, as it requires companies to be self-critical, multidisciplinary, and data-driven, with an appropriate understanding of industry best practices. Fortunately, bringing in a consulting service such as NI can introduce an external, diversified, multivector (people, process, and technology) view backed by broad experience, leading to valuable discoveries and, eventually, an action plan to make the shift.
To get started with your self-assessment, consider where you sit in the testing-scenario balance: How much do you test in simulation, HIL, replay, and on the road, and where and how can you optimize your investment? If this question is difficult to answer, that in itself indicates an opportunity to shift more testing toward simulation and HIL.
While it’s understandable that potential implementation challenges may feel overwhelming, consider the high cost of inaction. Without a structured approach and a clear strategy, you risk doing things as they’ve always been done—and that, of course, produces the same old results. When you lay out the existing process, define all optimization candidates, strategically plan the order in which to tackle the steps, and set key performance indicators, you increase both your immediate and long-term chances of success.
We’ll continue to explore people and process challenges and strategies, but for now, understanding how technology can shift test to the left is the first milestone in our test-evolution journey. NI has the right teams, knowledge, and technologies to help you improve your testing—helping you bring your autonomous vehicle to life.