Digital Multimeter Measurement Fundamentals

Publish Date: May 28, 2010

Table of Contents

  1. Accuracy
  2. Sensitivity
  3. Resolution
  4. Noise
  5. Precision

1. Accuracy

Accuracy essentially represents the uncertainty of a given measurement because a reading from a digital multimeter (DMM) can differ from the actual input. Accuracy is often expressed as:

(% Reading) + Offset
(% Reading) + (% Range)
±(ppm of reading + ppm of range)
Note: Refer to the specifications included with your DMM to determine which method is used.

For example, assume a DMM set to the 10 V range is operating 90 days after calibration at 23 °C ±5 °C and is measuring a 7 V signal. The accuracy specifications for these conditions state ±(20 ppm of reading + 6 ppm of range). To determine the accuracy of the DMM under these conditions, use the following formula:

Accuracy = ±(ppm of reading + ppm of range)

Accuracy = ±(20 ppm of 7 V + 6 ppm of 10 V)

Accuracy = ±((7 V × (20/1,000,000)) + (10 V × (6/1,000,000)))

Accuracy = ±(140 µV + 60 µV)

Accuracy = ±200 µV

Therefore, the reading should be within 200 µV of the actual input voltage.
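This worst-case calculation is easy to script. The following is a minimal sketch (the function name is illustrative, not an NI API):

```python
def dmm_accuracy(reading, range_v, ppm_of_reading, ppm_of_range):
    """Worst-case accuracy ±(ppm of reading + ppm of range), returned in volts."""
    return (reading * ppm_of_reading + range_v * ppm_of_range) / 1e6

# 7 V signal on the 10 V range with a ±(20 ppm of reading + 6 ppm of range) spec
dmm_accuracy(7.0, 10.0, 20, 6)  # 200 µV, i.e. the reading is within ±200 µV
```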
Accuracy can also be defined in terms of the deviation from an ideal transfer function as follows:

y = mx + b
where x is the input
m is the ideal gain of a system
b is the offset

Applying this example to a DMM signal measurement, y is the reading obtained from the DMM with x as the input, and b is an offset error that you may be able to null before the measurement is performed. If m is 1, the output measurement is equal to the input. If m is 1.000001, then the error from the ideal is 1 ppm or 0.0001 percent.

ppm to Percent Conversions

ppm      Percent
1        0.0001
10       0.001
100      0.01
1,000    0.1
10,000   1

High-resolution, high-accuracy DMMs describe accuracy in units of ppm and are specified as ±(ppm of reading + ppm of range). The ppm of reading is the deviation from the ideal m; ppm of range is the deviation from the ideal b, which is zero. The b errors are most commonly referred to as offset errors.

Temperature can have a significant effect on the accuracy of a digital multimeter and is a common problem for precision measurements. Temperature coefficient, or tempco, expresses the error caused by temperature. Errors are calculated as ±(ppm of reading + ppm of range)/ºC. Therefore, the gain and offset in the DMM transfer function vary with temperature but are not worse than those specified by the tempco specification.
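The tempco arithmetic can be sketched as follows (the ±(5 ppm + 1 ppm)/°C figures and the function name are hypothetical examples, not values from any NI specification):

```python
def tempco_error(reading, range_v, ppm_reading_per_degc, ppm_range_per_degc,
                 degrees_outside_band):
    """Additional worst-case error, in volts, from operating outside the
    specified temperature band: ±(ppm of reading + ppm of range)/°C."""
    per_degree = (reading * ppm_reading_per_degc
                  + range_v * ppm_range_per_degc) / 1e6
    return degrees_outside_band * per_degree

# Hypothetical example: 7 V reading on the 10 V range, tempco of
# ±(5 ppm of reading + 1 ppm of range)/°C, operated 3 °C outside the band.
tempco_error(7.0, 10.0, 5, 1, 3)  # 135 µV of additional worst-case error
```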

>>Compare NI digital multimeters


2. Sensitivity

Sensitivity is the smallest unit of a given parameter that can be meaningfully detected with the instrument when used under reasonable conditions. For example, assume the sensitivity of a DMM in the volts function is 100 nV. With this sensitivity, the DMM can detect a 100 nV change in the input voltage.


3. Resolution

For a noise-free DMM, resolution is the smallest change in an input signal that produces, on average, a change in the output signal. Resolution can be expressed in terms of bits, digits, or absolute units, which can be related to each other.


The resolution of general-purpose digitizers is often expressed in bits. Bits refer specifically to the performance of the analog-to-digital converter (ADC). Theoretically, a 12-bit ADC can convert an analog input signal into 2^12 (4,096) distinct values; 4,096 is the number of least-significant bits (LSBs). You can translate the number of LSBs into digits of resolution:

Digits of resolution = log10 (Number of LSB)  (1)

Using the above equation, a DMM with a 12-bit ADC has a resolution of:

log10 (4,096) = 3.61 digits

Note: If you use a 12-bit ADC to digitize signals in a DMM, it is insufficient to call this DMM a 3½-digit DMM because you must also consider noise. Noise may reduce the number of LSB, therefore reducing the number of digits.
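Formula 1 can be evaluated in a couple of lines (the function name is illustrative):

```python
import math

def digits_from_bits(bits):
    """Formula 1: digits of resolution = log10(number of LSBs),
    where a b-bit ADC provides 2**b LSBs."""
    return math.log10(2 ** bits)

digits_from_bits(12)  # ≈ 3.61 digits for a 12-bit ADC, before accounting for noise
```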

Digital Multimeter Absolute Units and Digits of Resolution

Traditionally, 5½ digits refers to the number of digits displayed on the readout of a DMM. A 5½-digit DMM has five full digits that display values from 0 to 9 and one half digit that displays only 0 or 1. This DMM can show positive or negative values from 0 to 199,999.

For more sophisticated digital instruments and, particularly, virtual instruments, digits of resolution does not directly apply to the digits displayed by the readout. Therefore, you must be careful when specifying the number of digits for these measurement devices.

Absolute Units

A count for a DMM is analogous to an LSB for an ADC. A count represents a value to which a signal can be digitized and is equivalent to a step in a quantizer. The weight of a count, or the step size, is called the absolute unit of resolution.

Absolute unit of resolution = total span/counts  (2)


Digits can be defined as:

Digits of resolution = log10 (total span/absolute unit of resolution)   (3)

For example, a noise-free DMM set to the 10 V range (20 V total span) with 200,000 available counts has an absolute unit of resolution of:

Absolute unit of resolution = 20.0 V/200,000 = 100 µV

The readout of this noise-free DMM displays six digits. A change of the last digit indicates a change of 100 µV of the input signal.

To provide 200,000 counts, the ADC needs at least 18 bits (2^17 = 131,072; 2^18 = 262,144). You can now calculate the digits of resolution:

Digits of resolution = log10 (20.0 V/100 x 10^-6 V)

Digits of resolution = 5.3

This noise-free DMM can be called a 5½-digit DMM.
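Formulas 2 and 3 can be checked with a short sketch (function names are illustrative):

```python
import math

def absolute_resolution(total_span, counts):
    """Formula 2: absolute unit of resolution (step size) = total span / counts."""
    return total_span / counts

def digits_of_resolution(total_span, step):
    """Formula 3: digits = log10(total span / absolute unit of resolution)."""
    return math.log10(total_span / step)

step = absolute_resolution(20.0, 200_000)  # 100 µV on the 10 V range (20 V span)
digits = digits_of_resolution(20.0, step)  # ≈ 5.3 digits
```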

The quantization process introduces into any converted signal an irremovable error, the quantization noise. For input signals through a uniform quantizer (without overload distortion), the rms value of the quantization noise in a noise-free DMM can be expressed as:

rms of quantization noise = absolute unit of resolution/√12   (4)

In reality, a noise-free DMM does not exist, and you need to account for the noise level when calculating its absolute units of resolution. Using formula 4, you can define the effective absolute units of resolution of a noisy DMM as the step size of a noise-free DMM with a quantization noise equal to the total noise of the noisy DMM.

Effective absolute units of resolution = √12 × rms noise   (5)

From formula 3, you can define the effective number of digits (ENOD) of this noisy DMM as:

ENOD = log10(total span/effective absolute units of resolution)  (6)

For example, if a DMM set to the 10 V range (20 V total span) shows readings with an rms noise level of 70 µV, its effective absolute units of resolution and ENOD are:

Absolute units of resolution = √12 × 70 µV = 242.49 µV

ENOD = log10 (20.0 V/242.49 × 10^-6 V) = 4.92 digits

This DMM can be called a 5-digit DMM.
The minimum number of counts needed for this DMM is 20 V/242.49 × 10^-6 V = 82,478. The minimum number of bits needed is 17 (2^16 = 65,536, 2^17 = 131,072).

As another example, if the same DMM demonstrates an rms noise level of 20 µV:

Absolute units of resolution = √12 × 20 µV = 69.28 µV

ENOD = log10 (20 V/69.28 × 10^-6 V) = 5.46 digits

This DMM is considered a 5½-digit DMM.

The minimum number of counts needed for this DMM is 20 V/69.28 × 10^-6 V = 288,675. The minimum number of bits needed is 19 (2^18 = 262,144, 2^19 = 524,288).
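Both ENOD examples follow directly from formulas 5 and 6; a minimal sketch (the function name is illustrative):

```python
import math

def enod(total_span, rms_noise):
    """Effective number of digits of a noisy DMM."""
    effective_step = math.sqrt(12) * rms_noise      # formula (5)
    return math.log10(total_span / effective_step)  # formula (6)

enod(20.0, 70e-6)  # ≈ 4.92 digits (a 5-digit DMM)
enod(20.0, 20e-6)  # ≈ 5.46 digits (a 5½-digit DMM)
```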

Table 1 relates bits, counts, and ENOD to conventional digits of resolution for DMMs. As evidenced by the table, bits, counts, and ENOD are deterministically related. A direct mathematical relationship between ENOD and digits does not exist because digits is used only as an approximation.

Table 1. Relating Bits, Counts, and ENOD to Conventional Digits of Resolution for DMMs


4. Noise

Noise in a measurement can originate from the instrument taking the measurement or from an interfering signal passing through the instrument and causing measurement instability. When considering noise, you need to know the measurement bandwidth because it sets the bounds for how you can manage the noise. You can decrease the measurement bandwidth by increasing the aperture of the measurement or by averaging the measurement.

Noise in the system is a common and problematic challenge in designing measurement systems. Noise sources in the environment can be electrostatically or inductively coupled in from the power line; therefore, most DMMs specify noise rejection at line frequencies of 50 or 60 Hz. The rejection at 400 Hz is, at a minimum, as good as the rejection at 50 Hz because the aperture time for 50 Hz also eliminates 400 Hz components. For more information on configuring an NI 4070 DMM for optimum normal mode rejection ratio (NMRR), refer to DC Noise Rejection.

A commonly overlooked source of noise in precision instrumentation is the thermal noise of the source resistance, as shown in the following figure.

Present in every resistor at common laboratory temperatures, this noise is caused by random thermal motion of electrically charged carriers within the device. It is a function of temperature, the value of the resistance, and the bandwidth of the measurement. The noise is defined as:

en = √(4kTRΔf)

where
k = Boltzmann's constant (1.38 × 10^-23 J/K)
T = temperature (K)
R = resistance (Ω) being measured
Δf = noise bandwidth of the measurement (Hz)

At room temperature, you can convert this equation to:

en ≈ 0.13 × √(R × Δf) nVrms

This equation assumes ideal resistor elements exhibiting white noise that is Gaussian in distribution. Some resistors, such as certain carbon film resistors, can generate noise from other mechanisms when current is passed through them. Metal foil and wirewound resistors approach this theoretical limit.

As a point of reference to simplify the calculation, a 1 kΩ resistor has a 4 nV/√Hz rms noise density (1 Hz bandwidth). You can scale this value to the noise level of any resistance and bandwidth by multiplying it by √((R/1 kΩ) × (Δf/1 Hz)). For example, a 100 kΩ resistor in a 100 Hz bandwidth has a noise of:

en = 400 nVrms

If the DMM is digitizing at 1 kS/s, the measurement bandwidth is 1 kHz and the effective noise is:

en = 4 nV × √(100 × 1,000) = 4 nV × 316

en = 1.26 µVrms, or about 8.3 µVp-p

Thus, the source resistance limits the noise floor of the measurement over a 1 kHz bandwidth to about 8.3 µVp-p.
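The thermal-noise arithmetic can be sketched as follows. This assumes a room temperature of about 290 K, which is consistent with the 4 nV/√Hz reference value for a 1 kΩ resistor; the function name is illustrative:

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def johnson_noise_vrms(resistance_ohms, bandwidth_hz, temp_k=290.0):
    """Thermal (Johnson) noise voltage: en = sqrt(4 * k * T * R * bandwidth)."""
    return math.sqrt(4 * BOLTZMANN * temp_k * resistance_ohms * bandwidth_hz)

johnson_noise_vrms(100e3, 100)   # ≈ 400 nV rms (100 kΩ in a 100 Hz bandwidth)
johnson_noise_vrms(100e3, 1000)  # ≈ 1.26 µV rms (100 kΩ in a 1 kHz bandwidth)
```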


5. Precision

Precision is a measure of the stability of the DMM: its ability to return the same measurement over and over again for the same input signal. Precision is given by:

Precision = 1 − |Xn − Av(Xn)|/|Av(Xn)|


Xn = the value of the nth measurement

Av(Xn) = the average value of the set of n measurements

For instance, if you are monitoring a constant voltage of 1 V, and you notice that your measured value changes by 20 µV between measurements, then your measurement precision is:

Precision = (1 − 20 µV/1 V) × 100 = 99.998%

Precision is most valuable when you are using the DMM to calibrate a device or performing relative measurements.
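The precision formula above reduces to a one-line helper (the function name is illustrative):

```python
def precision_percent(xn, average):
    """Precision = 1 - |Xn - Av(Xn)| / |Av(Xn)|, expressed as a percentage."""
    return (1 - abs(xn - average) / abs(average)) * 100

precision_percent(1.0 + 20e-6, 1.0)  # 99.998 % for a 20 µV deviation on 1 V
```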

