Traditional DMMs generally focus on resolution and precision and do not offer high-speed acquisition capability. There is, of course, an inherent trade-off between noise performance and speed, dictated by basic physics. The Johnson thermal noise of a resistor is an example of one theoretical limit, and semiconductor device technology sets some practical limitations. But you have many other options to help you achieve the highest possible measurement performance.
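To put that physical limit in numbers: the RMS Johnson noise of a resistance R over a bandwidth B is v = √(4kTRB). The Python sketch below (all values illustrative) computes this floor for a typical source resistance.

```python
import math

# Johnson (thermal) noise of a resistor: v = sqrt(4 * k * T * R * B)
K_BOLTZMANN = 1.380649e-23  # J/K

def johnson_noise_vrms(resistance_ohms: float, bandwidth_hz: float,
                       temperature_k: float = 290.0) -> float:
    """RMS thermal noise voltage of a resistor over a given bandwidth."""
    return math.sqrt(4 * K_BOLTZMANN * temperature_k
                     * resistance_ohms * bandwidth_hz)

# Example: a 10 kOhm source resistance measured over a 1 Hz bandwidth
# contributes roughly 12.7 nV RMS -- a floor no DMM front end can beat.
print(johnson_noise_vrms(10e3, 1.0))  # ~1.27e-08 V
```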
Some specialized high-resolution DMMs promise both high resolution and somewhat higher speeds, but they are very expensive – near $8,000 USD – and available only in full-rack configurations that consume significant system or bench space.
Another DMM speed limitation is driven by the traditional hardware platform – the GPIB (IEEE 488) interface bus. This interface, in use since the 1970s, is often considered the standard despite trade-offs in speed, flexibility, and cost. Most traditional “box” DMMs use this interface, although alternative interface standards, such as USB and Ethernet, are now available as options with traditional DMMs. All of these interfaces communicate with the DMM by sending messages to the instrument and waiting for a response, which is inherently slower than the register-based access used in PXI modular instruments.
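For illustration only, here is what that message-based round-trip looks like from software, using the open PyVISA library. The GPIB address and SCPI strings below are assumptions, not any particular DMM's command set; the point is that every reading costs a full send-and-wait cycle over the bus.

```python
# Hypothetical message-based measurement over GPIB using PyVISA.
# The resource address and SCPI strings are assumptions for illustration.
import pyvisa

rm = pyvisa.ResourceManager()
dmm = rm.open_resource("GPIB0::22::INSTR")  # hypothetical GPIB address
dmm.write("CONF:VOLT:DC 10")                # send a configuration message...
reading = float(dmm.query("READ?"))         # ...then block until it answers
print(reading)
```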
Even with the first attempts to move away from the GPIB interface, the basic limitation on DMM speed and precision continues to be the ADCs used in these products. To better understand the technologies used, you need to examine more closely what each offers in terms of performance.
Dual-Slope ADC Technology
From a historic perspective, one of the oldest yet most common forms of precision A/D conversion is the dual-slope ADC, a technique in wide use since the 1950s. It is essentially a two-step process. First, an input voltage (representing the signal to be measured) is converted to a current and applied to the input of an integrator through switch S1. When the integrator is connected to the input (at the beginning of the integration cycle, or aperture), it ramps up until the end of the aperture, at which point the input is disconnected from the integrator. Next, a precisely known reference current is connected to the integrator through switch S2, and the integrator ramps down until it crosses zero. During this time, a high-resolution counter measures how long the integrator takes to ramp back down to zero. This measured time, relative to the integration time and the reference, is proportional to the amplitude of the input signal. See Figure 1.
Figure 1. Dual-Slope Converter Block Diagram
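To make the arithmetic concrete, here is a minimal Python sketch of an idealized dual-slope conversion, assuming a 10 MHz ramp-down counter and a 10 V reference (both illustrative). Note that the integrator's RC constant drops out because the same integrator handles both ramps.

```python
# Idealized dual-slope conversion sketch (all values illustrative).
# Phase 1: integrate the unknown input for a fixed aperture.
# Phase 2: switch in a known reference of opposite polarity and count
# clock ticks until the integrator ramps back to zero.

CLOCK_HZ = 10e6      # assumed ramp-down counter clock
T_APERTURE = 1 / 60  # one power line cycle, for line-noise rejection
V_REF = 10.0         # assumed precision reference

def dual_slope_measure(v_in: float) -> float:
    # Charge gained in phase 1 equals charge removed in phase 2:
    # v_in * T_APERTURE = V_REF * t_down -> t_down = v_in * T_APERTURE / V_REF
    t_down = v_in * T_APERTURE / V_REF
    counts = round(t_down * CLOCK_HZ)  # what the counter actually reports
    return counts / CLOCK_HZ * V_REF / T_APERTURE

print(dual_slope_measure(3.2719))  # ~3.2719, quantized by the clock
```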
This technique is used in many high-resolution DMMs, even today. It has the advantage of simplicity and precision. With long integration times, you can increase resolution to theoretical limits. However, the following design limitations ultimately affect product performance:
- Dielectric absorption of the integrator capacitor must be compensated, even with high-quality integrator capacitors, which can require complicated calibration procedures.
- The signal must be gated on and off, as must the reference. This process can introduce charge injection into the input signal. Charge injection can cause input-dependent errors (nonlinearity), which are difficult to compensate for at very high resolutions (6½ digits or more).
- The ramp-down time seriously degrades the speed of measurement. The faster the ramp down, the greater the errors introduced by comparator delays, charge injection, and so on.
Some topologies use a transconductance stage prior to the integrator to convert the voltage to a current, and then use “current steering” networks to minimize charge injection. Unfortunately, this added stage introduces complexity and possible errors.
Despite these design limitations, dual-slope converters have been used in a myriad of DMMs, from the most common bench or field-service tools to high-precision, metrology-grade, high-resolution DMMs. As with most integrating A/D techniques, they have the advantage of providing fairly good noise rejection. Setting the integration period to an integer multiple of one power line cycle (1 PLC, i.e., 1/60 s or 1/50 s) causes the A/D to reject line-frequency noise – a desirable result, as the sketch below illustrates.
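A quick numerical check of that rejection property, assuming a 60 Hz line and a unit-amplitude interferer:

```python
import math

# Illustrative check: integrating over an integer number of power line
# cycles averages a line-frequency interferer to (nearly) zero.
LINE_HZ = 60.0

def averaged_line_pickup(aperture_s: float, steps: int = 100_000) -> float:
    """Mean of a unit-amplitude 60 Hz sine over the aperture (midpoint rule)."""
    dt = aperture_s / steps
    return sum(math.sin(2 * math.pi * LINE_HZ * (i + 0.5) * dt)
               for i in range(steps)) / steps

print(averaged_line_pickup(1 / LINE_HZ))    # ~0: a full cycle is rejected
print(averaged_line_pickup(0.5 / LINE_HZ))  # ~0.64: half a cycle is not
```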
Charge-Balance-with-Ramp-Down ADC Technology
Many manufacturers overcome the dielectric absorption and speed problems inherent in dual-slope converters by using the charge-balance-with-ramp-down A/D technique. This technique is fundamentally similar to the dual slope but applies the reference signal in quantized increments during the integration cycle. This is sometimes called “modulation.” Each increment represents a fixed number of final counts. See Figure 2.
Figure 2. Charge-Balance Converter Block Diagram
During the integration phase, represented in Figure 2 by t_aperture, S1 is turned on and Vx is applied through R1, which starts the integrator ramping. Opposing current is applied at regular intervals through switches S2 and S3, which “balances” the charge on C1. Measurement counts are generated each time S5 is connected to VR. In fact, for higher-resolution measurements (longer integration times), most of the counts are generated during this aperture phase. At the end of the charge-balance phase, a precision reference current is applied to the integrator, as in the dual-slope converter, and the integrator ramps down until it crosses zero. The measurement is calculated from the counts accumulated during integration plus the weighted counts accumulated during the ramp-down. Manufacturers use two or more ramp-down references: fast ramp-downs to optimize speed, then slower “final slopes” for precision.
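The following Python sketch models an idealized charge-balance conversion with a single ramp-down reference. The modulator rate, clocks, and reference are illustrative assumptions, and real converters add the multiple final slopes described above.

```python
# Idealized charge-balance-with-ramp-down sketch (all values illustrative;
# assumes 0 < v_in < V_REF). During the aperture, one reference increment
# of charge is subtracted whenever the integrator exceeds an increment,
# and each subtraction scores a coarse count. The small residue left on
# the integrator is then measured by a precision ramp-down.

V_REF = 10.0
F_MOD = 120e3        # assumed constant-frequency modulator rate
T_APERTURE = 1 / 60  # one power line cycle (2,000 modulator periods)
RAMP_CLOCK = 10e6    # assumed fine counter for the final ramp-down

def charge_balance_measure(v_in: float) -> float:
    dt = 1 / F_MOD
    charge = 0.0  # integrator state, in volt-seconds
    coarse = 0    # counts scored by reference increments
    for _ in range(round(T_APERTURE * F_MOD)):
        charge += v_in * dt
        if charge > V_REF * dt:   # balance the charge with the reference
            charge -= V_REF * dt
            coarse += 1
    # Ramp down the residue with the reference, counting fine clock ticks.
    fine = round(charge / V_REF * RAMP_CLOCK)
    # One coarse count removed V_REF*dt of charge = RAMP_CLOCK/F_MOD fine ticks.
    total_ticks = coarse * (RAMP_CLOCK / F_MOD) + fine
    return total_ticks / RAMP_CLOCK * V_REF / T_APERTURE

print(charge_balance_measure(3.2719))  # ~3.2719
```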
Although the charge-balance-with-ramp-down A/D greatly reduces the integrator capacitor dielectric absorption problems, it offers noise-rejection benefits similar to those of the dual-slope converter. (In fact, some dual-slope converters use multiple ramp-down slopes.) Speed is greatly improved because the number of counts generated during the charge-balance phase reduces the significance of any ramp-down error, so the ramp-down can be much faster. However, because the integrator must be disarmed and rearmed, there is still significant dead time when you make multiple measurements or digitize a signal.
This type of ADC, in commercial use since the 1970s, has evolved significantly. Early versions used a modulator similar to that of a voltage-to-frequency converter; they suffered from linearity problems brought on by frequency-dependent parasitic effects and were thus limited in conversion speed. In the mid-1980s, the technique was refined to incorporate a “constant-frequency” modulator, which is still widely used today. This refinement dramatically improved both the ultimate performance and the manufacturability of these converters.
Sigma-Delta Converter Technology
Sigma-delta converters, or noise-shaping ADCs, have historic roots in telecommunications. Today, the technique is largely used as the basis for commercially available off-the-shelf A/D building blocks produced by several manufacturers. Significant evolution has taken place in this arena over the last decade (driven by a growing need for high dynamic range conversion in audio and telecommunications), and much research is still ongoing. Some modular DMMs (PXI(e), PCI(e), and VXI) use sigma-delta ADCs at the heart of the acquisition engine today. They are also commonly used to digitize signals for:
- Dynamic signal analysis (DSA)
- Commercial and consumer audio and speech
- Physical parameters such as vibration, strain, and temperature, where moderate-bandwidth digitizing is sufficient
A basic diagram of a sigma-delta converter is shown in Figure 3.
Figure 3. Sigma-Delta Converter Block Diagram
The basic building blocks of a sigma-delta converter are the integrator or integrators, a one-bit ADC and DAC (digital-to-analog converter), and a digital filter. Noise shaping is accomplished through the combination of the integrator stages and the digital filter design. There are numerous techniques for implementing these blocks, and different philosophies exist regarding the optimum number of integrator stages, the number of digital filter stages, the number of bits in the A/D and D/A converters, and so on. However, the basic operational building blocks remain fundamentally the same: the modulator is a one-bit charge-balancing feedback loop, similar to that described above. The one-bit ADC, because of its inherent precision and monotonicity, leads the way to very good linearity. A minimal sketch of the loop follows.
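The Python model below implements a first-order loop with a plain boxcar average standing in for a real decimation filter; the oversampling ratio and reference are illustrative assumptions.

```python
# First-order sigma-delta modulator sketch (idealized, illustrative values).
# One integrator, a one-bit ADC (a comparator), and a one-bit DAC form a
# charge-balancing feedback loop; a simple boxcar average replaces the
# far more sophisticated decimation filter of a real converter.

V_REF = 1.0
OSR = 4096  # assumed oversampling ratio

def sigma_delta_convert(v_in: float) -> float:
    """Convert v_in (within +/-V_REF) by counting the density of ones."""
    integrator = 0.0
    ones = 0
    for _ in range(OSR):
        bit = 1 if integrator >= 0.0 else 0       # one-bit ADC
        feedback = V_REF if bit else -V_REF       # one-bit DAC
        integrator += v_in - feedback             # charge balancing
        ones += bit
    return (2 * ones / OSR - 1) * V_REF           # decimation (boxcar)

print(sigma_delta_convert(0.3271))  # ~0.327
```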
There are many advantages to using commercially available sigma-delta converters:
- They are fairly linear and offer good differential nonlinearity (DNL)
- You can control signal noise very effectively
- They are inherently self-sampling and tracking (no sample-and-hold circuitry required)
- They are generally low in cost
However, there are some limitations to using off-the-shelf sigma-delta ADCs in high-resolution DMMs:
- Speed limitations, especially in scanning applications, due to pipeline delays through the digital filter
- Although generally linear and low in noise, they are limited by manufacturer specifications to 5½-digit (19-bit) precision
- Modulation “tones” can alias into the passband, creating problems at high resolutions
- Limited control over speed-noise trade-offs, acquisition time, and so on