The 3 Pain Points of the Mil/Aero Test Engineer

Publish Date: Aug 03, 2017

Overview

We all have our pains and struggles within a team or organization: the junior engineer who couldn’t possibly be wrong about anything, ever; the dreaded consensus-building meetings that do anything but build consensus; the nearly impossible deadline. The life of the aerospace test engineer is no different. They may be supporting depot-level test systems built on 30-year-old technology or racing to be first to market with the latest and greatest radar technology, but inevitably they have their share of challenges to tackle.

While unloading all our pains and struggles might be therapeutic, this article focuses on overcoming the challenges of the aerospace test engineer that have the biggest impact on the organization’s success and, with it, on their own career growth.

This article was originally published in High Frequency Electronics.

Table of Contents

  1. Legacy Test Program Set Support
  2. Rapid RF Evolution
  3. Increasing Sphere-of-Influence to Reduce the Cost of Test

1. Legacy Test Program Set Support

The first, and most obvious, challenge the average test engineer faces is the need to support legacy Test Program Sets (TPSs). Commercial and military aerospace programs are extending well beyond their intended lifecycles, and support teams must carry these fleets forward into the next wave of technology lifecycles. When looking to upgrade a test system (or subsystem) for one of these programs, test engineers cannot consider only the technology insertion; they must also consider the hundreds or thousands of TPSs that have been developed for the system and the ripple effect that the insertion will inevitably have on the program as a whole.

The most motivating and technologically savvy approach is for the test engineer to develop a completely new test system with exciting new instruments, instrumentation test adapters (ITAs), and fixtures while rehosting as many legacy TPSs as possible. Unfortunately, they ultimately have to answer to a budget and usually end up refurbishing existing test systems, replacing the obsolete pieces through planned maintenance.

Let’s take the example of refurbishing an existing system by replacing an obsolete oscilloscope, with the objective of minimizing TPS migration costs. Sounds simple, right? On the surface, the test engineer’s job is relatively straightforward: find an oscilloscope that performs as well as (if not better than) the existing scope in the system. After all, most scopes in 2015 are going to pale in comparison to the dinosaurs that were designed into the system 10, 15, or 20 years ago.

The first bump in the road is form factor. The new instrument needs to take up the same or less space in the 19-inch rack so as not to force a reconfiguration of the rack layout. Because there is a significant amount of system-level documentation, changing the rack layout triggers a massive amount of documentation changes (not to mention possible signal integrity issues from changing cable lengths to the mass interconnect). This form-factor challenge is one of the many reasons that modular platforms like PXI (and formerly VXI) have dominated the aerospace/defense ATE market for the last 30 years. By following the strict guidelines of the PXI specification, a scope from vendor ‘A’ will be the same size and use the same backplane power as one from vendor ‘B’, giving test engineers an easier upgrade path for their systems.

The second hurdle is hardware abstraction layer (HAL) integration. Any test system expected to last 5-10+ years will inevitably carry planned maintenance and operational costs, and these are significantly reduced by abstracting vendor-specific hardware and drivers into a HAL or MAL (measurement abstraction layer). The test engineer is therefore also tasked with evaluating the driver stack of the new instrument to ensure it plugs into the HAL, mitigating the risk for the thousands of TPS migrations still to come. Many HALs use the IVI driver classes where possible and supplement them with Plug-and-Play drivers. Since this example is an oscilloscope, we’ll make the blanket claim that the test engineer has it ‘easy’ on the software side, because an IVI class is already specified for oscilloscopes.
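To make the abstraction concrete, here is a minimal sketch of what a HAL boundary for the oscilloscope might look like. The class, factory, and model names are hypothetical illustrations, not a real driver API; a production HAL would wrap IVI or vendor Plug-and-Play driver sessions behind this kind of interface.

```python
# Minimal HAL sketch. All names here are invented for illustration; a real
# HAL would delegate to IVI-C/IVI-COM or vendor Plug-and-Play drivers.
from abc import ABC, abstractmethod


class Oscilloscope(ABC):
    """Vendor-neutral oscilloscope interface that TPS code programs against."""

    @abstractmethod
    def configure(self, v_range: float, sample_rate: float) -> None: ...

    @abstractmethod
    def acquire(self, n_samples: int) -> list: ...


class VendorAScope(Oscilloscope):
    """One concrete driver wrapper; real versions call into vendor DLLs."""

    def configure(self, v_range, sample_rate):
        self.v_range, self.sample_rate = v_range, sample_rate

    def acquire(self, n_samples):
        return [0.0] * n_samples  # placeholder for a real fetch call


def make_scope(model: str) -> Oscilloscope:
    """Factory keyed on the model string found in the system configuration.
    Swapping hardware means adding a branch here, not editing every TPS."""
    registry = {"VENDOR_A_SCOPE": VendorAScope}
    return registry[model]()


scope = make_scope("VENDOR_A_SCOPE")
scope.configure(v_range=5.0, sample_rate=1e9)
data = scope.acquire(1000)
```

Because the TPSs only ever see the `Oscilloscope` interface, qualifying a replacement instrument reduces to validating one new driver wrapper against the abstract contract.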

 

Figure 1. Hardware Abstraction Layers (HALs) significantly mitigate the impact of hardware obsolescence, but are difficult to justify in the absence of a long-term support strategy.

A third and often hidden hurdle is the answer to the question: ‘Is better really better?’ The specifications of the new oscilloscope are multiple generations of technology ahead of the obsolete equipment, so where’s the issue? It appears when, for example, you insert the new oscilloscope into the system and the rise-time or settling-time measurements change significantly because you’re sampling at 3, 5, or 10 times the rate of the previous instrument, leaving dozens of incompatible TPSs that previously provided great system utilization. Another issue arises when legacy TPSs require trigger functionality that instrument vendors discontinued years or decades earlier. Here the test engineer is challenged with searching the entire TPS database to identify which TPSs will be broken by inserting an instrument that does not support the legacy trigger functionality; that database often doesn’t exist, and building the list manually can take weeks or months.
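The sampling-rate effect is easy to demonstrate. The short sketch below (illustrative only; the edge model and rates are invented for the example) measures the 10%-90% rise time of the same simulated edge at a legacy 100 MS/s rate and a modern 1 GS/s rate. The coarse grid quantizes the threshold crossings, so the two instruments report different numbers for the same signal.

```python
import math


def rise_time(samples, dt):
    """10%-90% rise time from uniformly sampled data (sample period dt)."""
    top = max(samples)
    t10 = next(i for i, s in enumerate(samples) if s >= 0.1 * top) * dt
    t90 = next(i for i, s in enumerate(samples) if s >= 0.9 * top) * dt
    return t90 - t10


def edge(t, tau=10e-9):
    """Exponential edge with a true 10%-90% rise time of about 22 ns."""
    return 1.0 - math.exp(-t / tau)


slow = [edge(i * 10e-9) for i in range(20)]   # 100 MS/s 'legacy' scope
fast = [edge(i * 1e-9) for i in range(200)]   # 1 GS/s replacement
# Same edge, different answers: 20 ns at 100 MS/s vs 22 ns at 1 GS/s.
```

A TPS with a pass/fail limit tuned around the legacy 20 ns reading would flag the more accurate 22 ns measurement as a failure, which is exactly the class of silent incompatibility described above.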

To minimize the unknown risks of TPS rehosting, many test engineers are taking advantage of software-designed instruments (SDIs) for more flexibility in the rehosting process. Software-designed (also known as synthetic) instruments combine core analog and digital front-end technology with powerful, user-programmable FPGAs to provide the most flexible instruments on the market. Applying the SDI approach to the oscilloscope challenges above, the test engineer (or TPS developer) can implement custom trigger functionality on the SDI’s FPGA to emulate the legacy trigger behavior. Some go further and use digital signal processing to emulate the analog performance of the legacy instrument’s analog-to-digital converter technology.
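As a simplified illustration of trigger emulation, the host-side model below implements a legacy-style pulse-width trigger in ordinary code. The function name and parameters are invented for the example; on a real SDI the same state machine would be compiled onto the FPGA so it runs against the live sample stream.

```python
def pulse_width_trigger(samples, threshold, min_width, max_width):
    """Return the start index of the first pulse whose width (in samples)
    falls within [min_width, max_width], or None if no pulse qualifies."""
    start = None
    for i, s in enumerate(samples):
        if s >= threshold and start is None:
            start = i                       # rising edge: pulse begins
        elif s < threshold and start is not None:
            if min_width <= i - start <= max_width:
                return start                # falling edge: width qualifies
            start = None                    # width out of range; keep looking
    return None


# A 1-sample glitch followed by the 3-sample pulse we actually want to catch.
wave = [0, 0, 1, 0, 0, 1, 1, 1, 0, 0]
trig = pulse_width_trigger(wave, threshold=0.5, min_width=2, max_width=4)
# trig == 5: the glitch at index 2 is correctly rejected.
```

Replaying recorded waveforms from the legacy system through the emulated trigger is one practical way to validate the behavior before committing it to the FPGA.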

Figure 2. While difficult to accomplish, emulating legacy instrument capabilities greatly reduces the risk of TPS migration issues. Software-designed, or synthetic, instruments offer a unique approach to test equipment emulation.


2. Rapid RF Evolution

On the other side of the spectrum (literally and figuratively) is the challenge of keeping pace with the rapid evolution of RF technologies engineered into radars, signal-intelligence systems, communications equipment, and other line-replaceable units (LRUs). This rapid pace of innovation keeps test engineers on their toes: they must build scalable architectures that not only test the technologies of today but also scale to support the next ‘wave’ of RF capabilities.

 

Figure 3. The evolution of NI vector signal analyzer bandwidth is one example of how aerospace ATE systems can scale to support the latest radar, communications, and signal intelligence systems.

 

Historically, most high-mix test systems in aerospace/defense haven’t included RF ATE subsystems as part of the core configuration due to the cost/benefit analysis of adding high-performance (high-price) RF test equipment to cover a small set of LRUs. The asset utilization simply couldn’t justify the expense. As the number of RF-capable LRUs increases and RF instrumentation becomes more cost effective, it’s becoming more common for RF equipment to be part of core high-mix test system configurations.

Figure 4. Traditional ATE systems commonly used the ‘bolt-on’ RF sub-system strategy due to the cost of RF equipment. As RF technology becomes more prevalent in LRUs and RF test equipment costs come down, we’ll see RF test equipment become integrated into the core system.

 

To illustrate the complexity facing the test engineer, consider a test system for a direction-finding, multi-antenna radar subsystem. In the manufacturing environment, it’s reasonable to assume each antenna will be tested serially using a high-performance signal source and a wideband vector signal analyzer, along with high-speed serial communication for controlling the UUT. Calling this easy would be a massive oversimplification, but compared to the capabilities of the maintenance test system, it sounds like a walk in the park. So whose job is it to develop that complex test system for planned maintenance and for units that fail in the field? That’s right: the test engineer’s.

When performing maintenance tests or analyzing a unit returned from the field, the test cases are far more inclusive than the ‘did we build it right’ manufacturing test case. You need to emulate the real-world environment with highly synchronized signal sources, including closed-loop control between the sources and analyzers, to stress the DSP engine and measure the phase coherency of the system. To address the synchronization and data transfer challenges, test engineers need to look beyond traditional boxed instrumentation to a platform-based approach such as PXI. To emulate the real-world environment with closed-loop control, engineers need a flexible RF instrumentation architecture that combines high-throughput data streaming, FPGA-based signal processing, and high-performance, high-instantaneous-bandwidth RF front-end technology to capture and process the incoming pulses.
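As a small taste of the signal processing involved, the sketch below estimates the phase skew between two analyzer channels with a single-bin DFT at the test-tone frequency. The tone, sample rate, and 0.25 rad skew are invented for the example, and a real system would run this math against streaming FPGA data rather than in pure Python.

```python
import cmath
import math


def phase_at(samples, freq, fs):
    """Phase (radians) of the `freq` component via a single-bin DFT."""
    acc = sum(s * cmath.exp(-2j * math.pi * freq * i / fs)
              for i, s in enumerate(samples))
    return cmath.phase(acc)


fs, f, n = 1_000_000, 10_000, 1000    # 1 MS/s, 10 kHz tone, 10 full cycles
ch0 = [math.sin(2 * math.pi * f * i / fs) for i in range(n)]
ch1 = [math.sin(2 * math.pi * f * i / fs + 0.25) for i in range(n)]

skew = phase_at(ch1, f, fs) - phase_at(ch0, f, fs)   # recovers ~0.25 rad
```

Capturing an integer number of tone cycles keeps the single-bin estimate leakage-free; in practice the channels must also share a reference clock and trigger, or the measured skew reflects the test system rather than the UUT.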

It’s also no secret that operational costs are high when sending units back to intermediate- (I-) or depot- (D-) level centers for maintenance or repair. As RF test equipment becomes easier to adopt in field test, these operational costs drop significantly. Not only does the organization benefit from the decreased operational cost, it can also better leverage IP between the depot and field testers for in-situ troubleshooting and diagnostics.

As you can imagine, the RF challenges of scalability, synchronization, and latency create complex system-level test architectures for the test engineer, quite different from replacing a legacy oscilloscope and mitigating TPS rehosting costs; still, both technology elements are great opportunities for the test engineer to provide significant value to the organization.


3. Increasing Sphere-of-Influence to Reduce the Cost of Test

A third, and perhaps more subtle, pain point for test engineers is justifying short-term spend to mitigate long-term operational costs. Market pressures are as high as they have ever been, so test engineers often opt for point solutions that neither provide the scalability for evolving technology demands nor offer an architecture that simplifies maintenance and future upgrades.

Compounding the problem, this short-term spend may not even come directly from the test engineering budget. Looking upstream, we all know how difficult it can be to get a design engineer to modify a design once it meets the design specifications, but organizations can see significant improvements to their bottom line by engaging the test engineering group early as part of a Design for Test (DFT) or Design for Manufacturability (DFM) strategy. When yields improve and asset utilization increases, those optimizations typically flow directly to the gross margin of the product.

Beyond DFM, it’s also critical that test engineers be involved early in the new product introduction (NPI) process. By actively engaging in every stage-gate of NPI, the test engineer can develop product-specific test code along the way and collaborate with validation engineers on automated code modules to simplify validation and ease the transition into production. This is a process National Instruments went through in the early 2000s as we released 200+ products per year with increasing complexity per generation. By bringing test engineering into the conversation early, we saw over 40% reductions in release-to-manufacturing (RTM) time, which directly shortened our time to market.

Figure 5. There are inefficient and costly flaws with the traditional approach of engaging test engineering late in the NPI process. Engaging earlier in the design cycle can lead to faster time-to-market, lower manufacturing cost, and improved yield.

Looking downstream, the test engineering and operations budgets are often decoupled, so the test engineering organization is not inherently incentivized to architect the system in a way that minimizes long-term operational costs. This is where siloed organizations struggle and strong communicators differentiate themselves. At the heart of these negotiations and tradeoffs is the test engineer’s inherent knowledge of the suite of UUTs supported, the stability of the test system, and the areas to optimize or improve. While it can be painful, expanding their sphere-of-influence across the entire design cycle makes test engineers truly valuable assets to the organization.

Figure 6. Many organizations have different business units for the develop/deploy and the support/maintain costs of a test system. Test engineers can greatly impact the operational costs of supporting a system, but must expand their influence beyond their own organization to understand and implement solutions to mitigate the long-term costs of supporting an ATE system.

 

While the challenges of obsolescence management, rapidly evolving RF requirements, and influencing DFM are by no means all-encompassing, they represent a tremendous opportunity for the test engineer to impact the bottom line of the organization and showcase the value the test engineering team can deliver.

 


