Aerospace program officers are primarily concerned with meeting customer requirements and preventing quality escapes, not with the inner workings of their test architecture. At the enterprise level, quality testing involves better model-based design, greater test automation, the ability to share common architectures between phases of the life cycle, and requirements tracking. But these process improvements typically require modernizing the underlying test infrastructure, so they are sacrificed to ensure the program's basic elements (like having a pin to test on) are completed in time to support the schedule.
To keep attention focused on product quality, a test architecture needs to be flexible enough to evolve continuously from program to program. Paradoxically, the migration to this type of architecture must occur within a single program: capital budgets outside a program are rare, and the need for an upgrade typically arises mid-program, when you are most risk-averse. Any path forward requires a clear understanding of a program's primary cost, risk, and schedule drivers. Activities like designing the system, point-to-point wiring, and building test adapters are essential to a functioning test system, but they do not necessarily contribute to increased product quality. The percentages shown in Figure 1 are typical of many aerospace companies.
Figure 1: Architecting and deploying a new line-replaceable unit (LRU) test system involves trade-offs in up-front cost, development time, and acceptance of risk. The typical LRU tester deployed today is highly custom and has a long build time, both of which add significant risk to a program schedule.
Hardware typically accounts for less than a quarter of the total cost, whereas design and build labor has the greatest impact on budget and schedule. Based on typical data, you can estimate $800 to $1,000 per pin of I/O and an 8- to 12-month schedule, depending on the size of the system. To make an impact, you must address both cost and time.
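That rule of thumb makes a quick budgeting check straightforward. The sketch below is illustrative only: the function name and the 2,000-pin example are assumptions, and only the $800-to-$1,000-per-pin rates come from the figures above.

```python
def estimate_lru_tester_cost(pin_count, low_per_pin=800, high_per_pin=1000):
    """Rough cost range for an LRU test system, using the
    $800-$1,000 per I/O pin rule of thumb cited in the text.
    Returns (low, high) total cost in dollars."""
    return pin_count * low_per_pin, pin_count * high_per_pin

# Example: a hypothetical mid-size tester with 2,000 I/O pins
low, high = estimate_lru_tester_cost(2000)
# A 2,000-pin system lands somewhere between $1.6M and $2.0M
# before you account for schedule risk on the 8- to 12-month build.
```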
There is a large technology overlap between LRU test systems across companies. By off-loading these common functions to off-the-shelf components, you free your team to focus on the niche pieces of the test system that only you can build and that genuinely enhance your testing.
"Using the SLSC system further promotes our goal to focus the attention on building HIL test systems and rigs, not developing advanced hardware."
—Anders Tunströmer, SAAB Aeronautics