Specifications Explained: NI DSA & SC Express DAQ


The specification manuals for NI Dynamic Signal Acquisition (DSA) and NI SC Express Data Acquisition (DAQ) devices and modules provide the technical details necessary to determine which DAQ device or module is best suited for your application, and serve as a reference to validate device or module performance during system development. This document provides definitions of the terminology used, in a glossary format, to illustrate the importance and relevance of each specification.



This guide is broken up into the same sections as most NI specifications manuals. Terms and definitions below are listed in alphabetical order and may occur in a different order in the specification manuals. This guide exclusively applies to 44xx (DSA) and 43xx (SC Express) DAQ devices and modules. Other NI product families such as Multifunction I/O (MIO) DAQ, cDAQ and cRIO Chassis and Controllers, 91xx, 92xx, 94xx C Series Modules, Multifunction RIO 78xx R Series, Digital Multimeters, Scopes/Digitizers and other instruments may use different terminology or methods to derive specifications and as such, this guide should not be used as a reference for devices and modules other than those in the DSA and SC Express DAQ families. 


Understanding Specification Terminology

First, it is important to note the categorical difference between various specifications. NI defines the capabilities and performance of its Test & Measurement instruments as Specifications, Typical Specifications, or Characteristic or Supplemental Specifications. See your device's specifications manual for more details on which specifications are warranted or typical.

  • Specifications characterize the warranted performance of the instrument within the recommended calibration interval and under the stated operating conditions.
  • Typical Specifications are specifications met by the majority of the instruments within the recommended calibration interval and under the stated operating conditions. Typical specifications are not warranted.
  • Characteristic or Supplemental Specifications describe basic functions and attributes of the instrument established by design or during development and not evaluated during Verification or Adjustment. They provide information that is relevant for the adequate use of the instrument that is not included in the previous definitions.


Analog Input Specifications

NI DSA and SC Express devices and modules typically have only analog input or analog output systems. There are specifications unique to DSA or SC Express, but also some specifications that apply to both. This section is organized in three parts: common analog input specifications, DSA-specific analog input specifications, and SC Express-specific analog input specifications.


Common Analog Input Specifications

Analog-to-Digital Converter (ADC) Type

Successive Approximation Register (SAR)

An ADC which converts an analog signal into a discrete digital representation by using a binary search to match a created voltage to the provided signal voltage.


Delta-Sigma

An ADC architecture consisting of a 1-bit ADC and filtering circuitry which oversamples the input signal and performs noise shaping to achieve a high-resolution digital output.

Oversample Rate

This terminology applies only to modules with delta-sigma ADCs. Delta-sigma ADCs use sample rates that are large multiples, such as 128 times, of the Nyquist rate for a given signal. For example, to sample a 25 kHz signal, any sample rate greater than the Nyquist rate (that is, above 50 kHz) would be sufficient. However, a delta-sigma ADC using an oversample rate of 128 times samples the signal at 6.4 MHz. This approach has several benefits, such as better anti-aliasing and higher resolution.

The oversampled data is processed by a digital filter within the ADC before the data is made available as an output. Since it takes a non-zero amount of time for the oversampled data to be processed by the digital filter, the output data rate is always lower than the oversample rate. The output data rate equals the oversample rate divided by the ADC decimation ratio.
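The arithmetic above can be sketched as follows. This is illustrative only; the 25 kHz signal and 128x oversample ratio come from the example, while the decimation ratio of 128 is an assumed value, not taken from any particular device manual.

```python
def oversample_rate(signal_bandwidth_hz, oversample_ratio):
    """Oversample rate = oversample ratio * Nyquist rate (2 * bandwidth)."""
    return oversample_ratio * (2 * signal_bandwidth_hz)

def output_data_rate(oversample_rate_hz, decimation_ratio):
    """Output data rate = oversample rate / ADC decimation ratio."""
    return oversample_rate_hz / decimation_ratio

# A 25 kHz signal oversampled at 128 times its 50 kHz Nyquist rate:
fs_over = oversample_rate(25_000, 128)   # 6,400,000 S/s
fs_out = output_data_rate(fs_over, 128)  # 50,000 S/s at the output
```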

ADC Resolution    

Resolution is the smallest amount of input signal change that a device or sensor can detect. The number of bits used to represent an analog signal determines the resolution of the ADC.


The NI PXIe-4300 is a 16-bit device, which means that the smallest amplitude change that can be detected on the ±5 V range is 0.152 mV. On the ±0.1 V range, this value is 3.05 µV.
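The resolution arithmetic in this example can be checked with a short sketch (ranges and bit depth from the PXIe-4300 example above):

```python
def code_width(v_min, v_max, bits):
    """Smallest detectable amplitude change (1 LSB) for an ideal ADC."""
    return (v_max - v_min) / 2**bits

lsb_5v = code_width(-5.0, 5.0, 16)     # ~0.152 mV on the +/-5 V range
lsb_100mv = code_width(-0.1, 0.1, 16)  # ~3.05 uV on the +/-0.1 V range
```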



Timebase Frequency

This specification states which frequencies are available for use as the timebase frequency.


Timebase Accuracy

Typically specified in ppm, this specifies how far the actual timebase frequency can deviate from the listed frequency. This specification is subject to the device's calibration interval specification.


External Timebase

This states which external routes are available for use as external timebases.

Reference Clock

Locking Frequencies

Typically shown in a table, this will show based on the reference signal provided what are valid locking frequencies for reference clock synchronization.


Sources

This lists the valid reference clock sources for the device, typically the onboard clock and the chassis backplane clock.

Filter Group Delay (ADC Filter Delay)

Analog Delay

This filter delay is the result of the signal passing through an analog filter on the device. The values of this will vary based on the range/gain being used on the device.

Digital Filter Delay

 Uncompensated Group Delay

This filter delay is the result of one of the following: the analog filter delay, the buffered-mode digital filter, or the hardware-timed single point mode filter. This delay is not compensated for when synchronizing and must be accounted for to avoid a phase shift in the data.

Compensated Group Delay

This filter delay is the result of the anti-aliasing filter working in buffered mode. When synchronized with other hardware, this delay is automatically compensated for.

Base Filter Group Delay

This delay is a result of the signal passing through the digital anti-aliasing filter on the device. It is a partial component of the Compensated Group Delay specification and is determined by the input frequency provided to the module.

Variable Filter Group Delay

This delay is a result of the signal passing through the digital anti-aliasing filter on the device. It is a partial component of the Compensated Group Delay specification and is determined by the sample rate currently being used on the module.

Bandwidth/ Alias Rejection

Passband/ Alias-Free Bandwidth

The passband/alias-free bandwidth is defined as the frequency range from DC to the point where the anti-aliasing filter reaches its −0.1 dB point. The signal being measured should fall within this range to get a proper measurement.


The PXIe-4464 lists its alias-free bandwidth (BW) (passband) as "DC to 0.454 * fs," meaning the passband for your current measurement extends from DC to 0.454 times your sample frequency.
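The passband edge for a given sample rate follows directly from that coefficient; a minimal sketch (the 51.2 kS/s rate below is an assumed example, not a quoted specification):

```python
def alias_free_bandwidth(sample_rate_hz, coefficient=0.454):
    """Upper edge of the alias-free passband: DC to coefficient * fs."""
    return coefficient * sample_rate_hz

bw = alias_free_bandwidth(51_200)  # ~23.24 kHz passband at fs = 51.2 kS/s
```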

Stopband/Alias Rejection

The stopband/alias rejection is defined by two parts: an attenuation or rejection specification and a frequency specification. The attenuation/rejection defines the minimum amount of attenuation applied to the signal, while the frequency defines the starting point at which that attenuation occurs.


The specification for the PXIe-4330 is shown below.

For the PXIe-4330, this means that signals found at a frequency of 0.55 times the sample frequency will see an attenuation of at least 100 dB from the original signal.

Minimum Frequency for Alias Hole

The delta-sigma ADCs on these devices use an oversampled architecture and sharp digital filters with cutoff frequencies that track the sampling rate. Due to how the digital filter is designed inside the ADC, a small hole exists at higher frequencies where, in theory, aliases could be seen.

Rejection at Alias Hole

To combat the alias holes left by the ADC, a fixed-frequency analog filter is applied to high-frequency components along the analog path. This specifies the rejection those high-frequency components will see.

Common Mode Rejection Ratio (CMRR)

When the same signal is seen on the positive and negative inputs of an amplifier, the CMRR specifies how much of this signal is rejected from the final output (typically measured in dB). An ideal amplifier will remove 100% of the common mode signal, but this is not achievable in implementation.


The NI PXIe-4300 has a CMRR of 100 dB for the 5 V range. This means that it will attenuate common mode voltages by 100,000x. If the signal being measured is a 5 Vpk sine wave, and the offset or common voltage between the positive and negative inputs is 5 VDC, the final output will reject or attenuate the 5 VDC input to 50 µV. CMRR is not included in accuracy derivations and should be accounted for separately if the signal measured contains common mode voltages.
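The attenuation arithmetic in this example can be sketched as follows (values taken from the example above):

```python
def cmrr_residual(v_common_mode, cmrr_db):
    """Common-mode voltage remaining at the output after CMRR attenuation."""
    return v_common_mode / 10**(cmrr_db / 20)

# 100 dB of CMRR attenuates a 5 VDC common-mode input to 50 uV:
residual = cmrr_residual(5.0, 100)
```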


Crosstalk

The measure of how much a signal on one channel can couple onto, or affect, an adjacent channel. Crosstalk exists any time an amplitude-varying signal is present on a wire or PCB trace that is physically close to another wire or PCB trace.


The NI PXIe-4464 has a crosstalk specification of -95 dB for adjacent channels and -125 dB for non-adjacent channels when using its -10 dB gain range. This means that channel ai2 will have a crosstalk specification of -95 dB between channels ai1 and ai3, and a crosstalk specification of -125 dB to all other ai channels.


Coupling (AC/DC)

Some modules, such as the NI 4461 and NI 4464, support both DC and AC coupled modes. When DC coupling mode is selected, any DC offset in the source signal is passed on to the ADC. When AC coupling mode is selected, a high pass filter is enabled at the input of the signal path, filtering out most DC content of the signal.

See Also: Basic Information about AC and DC Coupling.

Excitation Characteristics

          Excitation Type

Specifies what type of output is provided when requesting excitation from a module. For SC Express this will typically be listed as "Constant Differential Voltage (Balanced)".


          Noise

Provides the noise found on the excitation line as well as the bandwidth over which that noise was measured. Typically represented in µVrms.

          Values/ Voltage Programmability

Specifies what voltage values are available from the module's internal excitation.

          Maximum Fault Current

The maximum current that the module can provide in the case of a fault in the external circuitry.

          Minimum Current/ Maximum Current/ Current Drive

The smallest maximum current that the module is guaranteed to provide through excitation.

          LVDT Module Specific

               Voltage Programmability

  Specifies what voltage values are available from the module's internal excitation.

               Frequency Programmability

  Specifies what frequency values can be selected by the device for the provided excitation.

               Current Drive

                     The smallest maximum current that the module is guaranteed to provide through excitation.

               Gain Accuracy

                     The accuracy of the gain of the excitation voltage provided by the module.

          Adjustment Range

A range for the excitation output; the user is able to select any value inside this range that is compatible with the corresponding adjustment resolution step size.

          Load Regulation

The capability to maintain a constant voltage (or current) level on the output channel of a power supply despite changes in the supply's load.

          Current Limit Detection

A software-readable property of the excitation that changes based on comparing the current excitation current to the specified limit current.


IEPE Open Detection

A software-readable property of the module that tells the user whether the circuit is currently open.

IEPE Short Detection

A software-readable property of the module that tells the user whether the circuit currently has a short.

Compliance Voltage

Specified as either a value or a minimum value, the compliance voltage is the minimum guaranteed total of Vcommon-mode + Vbias + Vfull-scale that the input can support, where:

    • Vcommon-mode is the common mode voltage seen by the input channel
    • Vbias is the DC bias voltage of the sensor
    • Vfull-scale is the AC full-scale voltage of the sensor

Balanced Source

A balanced source uses three conductors to carry the signal. Two of the conductors carry the negative and positive (AC) signals, and the third is used for grounding. The high gain and isolated ground make for a cleaner, lower-noise signal when using a balanced source.

Unbalanced Source

In an unbalanced source, there are only two conductors. One carries the positive signal; the other carries the negative signal and is also used for ground. Unbalanced sources are better suited to short cables in low-noise environments.

 FIFO Buffer Size

 NI DAQ devices can store data in an onboard FIFO when performing analog input or analog output tasks.

    • For input tasks, this FIFO is used to buffer data prior to the NI-DAQmx driver software transferring the data to a pre-allocated location in RAM known as the PC buffer.
    • For output tasks, the data that a user requests to generate can be buffered in a combination of the FIFO and the PC buffer.

Devices that have input and output channels will have a dedicated FIFO for each subsystem. However, the FIFO is shared across all channels within that FIFO. For analog input, NI-DAQmx implements data transfer mechanisms to ensure that the data stored in the FIFO is transferred to the PC buffer fast enough so that the onboard FIFO is not overrun. For analog output, NI-DAQmx implements data transfer mechanisms to ensure that data in the PC buffer is transferred to the onboard FIFO fast enough such that the FIFO is not underrun. For analog output, there are user-selectable properties to specify whether to use the PC buffer at all, and to regenerate a single waveform from just the onboard FIFO.


Flatness

The signals within a passband have frequency-dependent gain or attenuation. The small amount of variation in gain with respect to a reference frequency is called the passband flatness.

Frequency Response (Magnitude and Phase)

As a frequency sweep is performed across the analog input of a module the magnitude and phase of the measured signal will change based on filter characteristics of the module.  This data is typically represented in either a chart or a table of frequency values.

Gain Amplitude/Accuracy

Gain error inherent to the instrumentation amplifier that is known to exist after a self-calibration.

Input Impedance

Input impedance is a measure of how the input circuitry impedes current from flowing through to analog input ground. For an ideal ADC, this value should be infinite, meaning no current flows from the input to ground, but in practice this is not possible. The implication of a finite input impedance is that the ADC will load down a circuit to some degree, particularly one with high output impedance. It is typical for sensors to have low output impedance.


The NI PXIe-4309 has an input impedance of Zin > 10 GΩ. Taking the worst-case scenario of the lowest input impedance, you can view a single-ended measurement as the following simplified circuit, assuming a sensor with output impedance Zout = 150 Ω.

The series combination of the sensor output and DAQ device input means that voltage will be divided between the two impedance values, with the larger impedance bearing most of the voltage. This means that if the sensitivity of this sensor is 20 °C / V and it is measuring 100 °C (outputting 5 V), then the voltage measured by the DAQ device will be the output voltage multiplied by the ratio of the input impedance to the sum of the DAQ input and sensor output impedances:

Vmeasured = Vout × Zin / (Zin + Zout) = 5 V × 10 GΩ / (10 GΩ + 150 Ω) ≈ 5 V − 75 nV

This 75 nV measurement difference corresponds to a near-negligible 1.5 µ°C measurement error due to impedance.
To illustrate an example where input impedance becomes an important specification, take the hypothetical case where a sensor has an extremely high output impedance, such as 5 GΩ. Connecting the DAQ device to a sensor with this extremely high output impedance causes a 5 V nominal output from the sensor to be read as 3.333 V, or a hypothetical measurement error of 33.34 °C.
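The voltage-divider arithmetic used in both cases can be sketched as follows (impedance values from the examples above):

```python
def measured_voltage(v_source, z_out_ohms, z_in_ohms):
    """Voltage seen by the DAQ input after division between the sensor
    output impedance and the device input impedance."""
    return v_source * z_in_ohms / (z_in_ohms + z_out_ohms)

# 150-ohm sensor into a 10 GOhm input: only ~75 nV is lost
error_low_z = 5.0 - measured_voltage(5.0, 150, 10e9)

# Hypothetical 5 GOhm sensor into the same input: reads ~3.333 V
v_high_z = measured_voltage(5.0, 5e9, 10e9)
```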

Input Noise

          Additional system noise generated by the analog front end, measured by grounding the input channel.


Isolation

Isolation is the means of physically and electrically separating two parts of a device. It protects computer circuitry and human operators, breaks ground loops, and improves common-mode voltage and noise rejection.


Channel-to-Channel Isolation

Each channel is isolated from every other channel and from other non-isolated components. The figure below represents channel-to-channel isolation. Va,b, Vc,d, Ve,f, and Vg,h are all on separate buses and are isolated from one another.


Channel-to-Earth Isolation

Channels of the device and the device's earth ground are electrically isolated from one another. Channel-to-earth isolation is represented in the figure below. Voltages of the isolated front end (Va-c) are on the same bus; these voltages are not isolated from one another. Ve,d are on a separate bus and are isolated from the front end.



Channel-to-Channel Matching

Interchannel Gain Mismatch

Channel-to-channel gain mismatch defines the difference in gain on any given channel relative to a reference channel. It is specified as a function of the input frequency.

Interchannel Phase Mismatch

Channel-to-channel phase mismatch defines the difference in phase on any given channel relative to a reference channel. It is specified based on the measurement range and the input signal frequency of the measurement.

The NI 4302 in the 10 V range specifies a channel-to-channel phase mismatch of fin * 0.035°/kHz maximum. For example, if the input signal frequency is 2 kHz, the maximum difference in phase on one channel relative to any other channel on the module is 0.07°.


Offset (Residual DC)

Offset error inherent to the instrumentation amplifier and is known to exist after a self-calibration.


Overvoltage Protection

The analog input circuitry has protection diodes in place that will gate a large voltage from damaging the most critical components of the device, such as the PGIA or ADC.

    • When the device is powered on, these diodes are biased at some positive and negative voltage, meaning that a voltage larger than the sum of the bias and reverse voltage must be present before these diodes are overloaded and can be damaged.
    • When the device is off, the bias voltage is removed, so the voltage needed to reverse the diodes is lower, making the device more susceptible to being damaged.

When in an overvoltage state, the maximum amount of current that a device can sink is specified by the input current during overvoltage condition.


The NI PXIe-4480 has protection up to ±30 V for two AI positive pins. If more than two AI pins experience an overvoltage larger than ±30 V, the device can be damaged. While the device is off, there is a lower level of protection at ±15 V.


Phase Linearity

Ideally in a linear phase system, the phase and the frequency of the signal have a linear relationship. This means that input signals of all frequencies have the same time delay through the system. Phase non-linearity is an expression of the extent to which the phase-frequency function deviates from the ideal.



Range

For analog input, this is the maximum positive and negative value that can be measured with guaranteed accuracy. For analog output, this is the maximum positive or negative value that can be generated. Some devices have multiple input or output ranges that can be used to provide higher resolution for lower-level signals.


               The NI PXIe-4300 has four input voltage ranges: ±1 V, ±2 V, ±5 V, and ±10 V.

          Common Mode

Typically provided as channel-to-earth ground, this is the measure of how far the differential signal can stray from earth ground while still providing correct measurements.

          Maximum Working Voltage

The maximum working voltage is determined by taking the signal voltage and then adding the common-mode voltage to it. In most instances where this is specified, the maximum working voltage varies based on the signal range currently in use.


Signal Range

The signal range specifies the maximum and minimum values that the ADC has been configured to handle. This is a user-selectable property when setting up DAQmx tasks. Providing too small a range will result in the signal clipping at the maximum and minimum values, while providing too large a range will result in reduced measurement resolution.



Sensitivity

A useful specification when measuring thermocouples, sensitivity defines, based on the timing mode, the smallest change in temperature that can be recorded by the device. This specification is currently only used on the PXIe-4353.



Stability

This specification covers how a variety of specifications can change based on either time or temperature. As a result, it typically alters other specifications based on a temperature difference or the time since last calibration.


Sample Rates


Sample rate range specifies the available values for how often an ADC converts data from analog to digital values. Some devices have only one ADC, so the sample rate is shared across channels, while other devices have a dedicated ADC per channel. Sample rate is measured in samples per second (S/s), or samples per second per channel (S/s/ch) when acquiring from multiple channels.

      • Single Channel Maximum—For a shared sample rate across channels, a single channel can acquire data at a higher rate than allowed when sharing
      • Multichannel Maximum—For a device that shares the sample rate across channels, this is the maximum rate at which all channels combined can acquire data
      • Minimum—The minimum rate at which data can be acquired


Timing Resolution

Sample and update rates for analog input and output tasks are restricted to discrete values when using an onboard timing engine. The difference in clock periods between two adjacent rates is known as the timing resolution. NI-DAQmx will coerce a selected frequency up to the next available frequency if it cannot generate the exact one specified by the user.


The NI PXIe-4300 has a specified timing resolution of 10 ns. This means that it can generate or acquire data at integer multiples of 10 ns. For example, 32,000.00 Hz and 32,010.2432... Hz are two adjacent frequencies, as their clock periods are 31.250 µs and 31.240 µs, respectively. To find the next available frequency, add the timing resolution to, or subtract it from, a known clock period.
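The neighboring-frequency calculation can be sketched as follows (10 ns resolution and 32 kHz starting rate from the example above):

```python
def adjacent_frequencies(freq_hz, resolution_s=10e-9):
    """Nearest achievable frequencies for a clock whose period is
    quantized to a fixed timing resolution."""
    period_s = 1.0 / freq_hz
    return 1.0 / (period_s + resolution_s), 1.0 / (period_s - resolution_s)

lower, upper = adjacent_frequencies(32_000.0)  # ~31,989.76 Hz and ~32,010.24 Hz
```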


Spectral Noise Density

While it can be shown in two different ways, this specification represents the noise across the spectral range of the device. It can be given either as a function of the frequency being measured or as a function of the current range of the module. This specification is typically given in volts per square root hertz (V/√Hz).
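Given a flat spectral noise density, the corresponding RMS noise over a measurement bandwidth can be sketched as follows; the density and bandwidth below are hypothetical illustration values, not quoted specifications.

```python
import math

def rms_noise(density_v_per_rthz, bandwidth_hz):
    """Integrate a flat noise density (V/sqrt(Hz)) over a bandwidth to get Vrms."""
    return density_v_per_rthz * math.sqrt(bandwidth_hz)

vrms = rms_noise(100e-9, 25_000)  # ~15.8 uVrms for 100 nV/sqrt(Hz) over 25 kHz
```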


Spurious Free Dynamic Range (SFDR)

Spurious free dynamic range is the usable dynamic range before spurious noise interferes with or distorts the fundamental signal. Analog input and analog output circuitry both have non-linearities that result in harmonic distortion. SFDR is easily observable in the frequency domain, as shown below.


The PXIe-4339 has an SFDR of around 100 dB. Taking the graph above as an example, if the fundamental signal was applied at 0 dB, the next highest spur would occur 100 dB lower, providing a usable dynamic range without spurious interference.


Total Harmonic Distortion

Due to inherent nonlinearities of ADC and DAC components, harmonic frequencies will appear in the measured or generated signals. The ratio of the sum of these harmonics' powers to the power of the fundamental is known as total harmonic distortion.


The PXIe-4480 has a specified THD of -100 dB when fs = 51.2 kS/s. This means that for a given test signal, in this case a 10 kHz sine wave at full scale, the amplitude of the signal attributed to harmonic distortion is less than 0.001%. Conversely, more than 99.999% of the amplitude that is measured can be attributed to the fundamental tone or signal of interest.
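The dB-to-percentage conversion used in this example can be sketched as:

```python
def db_to_amplitude_percent(level_db):
    """Convert a level in dB relative to the fundamental to a percentage
    of the fundamental's amplitude."""
    return 10**(level_db / 20) * 100

pct = db_to_amplitude_percent(-100)  # 0.001% of the measured amplitude
```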


DSA Analog Input Specifications

Cutoff Frequency

The cutoff frequency is the point in the frequency domain at which attenuation is seen in the signal. This can be provided as either the -3 dB point (where the signal is at 50% of its original power) or the -0.1 dB point (the point at which the signal leaves the passband).


Dynamic Range

The dynamic range of a device is the ratio of the largest and smallest signals that can be measured by the circuit, normally expressed in dB.

                                  Dynamic Range in dB = 20 * log10( Vmax / Vmin )

In most cases, the full-scale input of a device is the largest signal that can be measured, and the idle channel input noise determines the smallest signal that can be measured. National Instruments DSA devices specify Dynamic Range, Idle Channel Noise, and Spectral Noise Density, the latter two of which can also be used to calculate dynamic range. The easiest way to measure your device's dynamic range is to take an idle channel noise measurement and convert that measurement to dB full scale, as described in Measure the Dynamic Range of My Data Acquisition Device.
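The formula above can be sketched as follows; the 10 V full scale and 10 µVrms noise floor are hypothetical illustration values.

```python
import math

def dynamic_range_db(v_max, v_min):
    """Dynamic Range in dB = 20 * log10(Vmax / Vmin)."""
    return 20 * math.log10(v_max / v_min)

dr = dynamic_range_db(10.0, 10e-6)  # 120 dB
```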

Dynamic range is a very important quantity to consider when choosing a DSA device. Oftentimes, DSA applications require the use of microphones and accelerometers, sensors that have very large dynamic ranges. Choosing an appropriate measurement device allows you to take advantage of the full range of these sensors.


Idle Channel Noise

Idle channel noise is a measurement of the noise floor of a module. It can be calculated for your module with the following steps:

    1. Terminate an input channel, or provide a –60 dB tone
    2. Calculate the offset from the mean for the terminated channel
    3. Convert the offset to Vrms to determine the Noise
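Steps 2 and 3 amount to computing the RMS deviation from the mean of the terminated-channel readings; a minimal sketch, with made-up sample values:

```python
import math

def idle_channel_noise_vrms(samples_v):
    """RMS deviation from the mean of readings taken on a terminated channel."""
    mean = sum(samples_v) / len(samples_v)
    return math.sqrt(sum((s - mean) ** 2 for s in samples_v) / len(samples_v))

# Illustrative terminated-channel readings, in volts:
noise = idle_channel_noise_vrms([1e-6, -2e-6, 3e-6, -1e-6, -1e-6])  # ~1.79 uVrms
```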


Intermodulation Distortion

IMD is another measure of distortion due to non-linearity in the module. IMD is often used to measure the distortion of a module near the high-frequency limit of the module or the measurement system. If the input to the module is a multi-tone signal, the non-linearity present in the module causes the tones to mix and create new, undesirable tones in the spectrum. The levels of these new signals are defined by IMD.


Total Harmonic Distortion + Noise (THD+N)

This specification is the same as THD but includes noise.  You can think of THD+N as the total signal distortion due to harmonic signals and noise.
THD+N = √(∑Power(harmonics) + ∑Power(noise)) / √Power(fundamental)
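The formula can be sketched numerically as follows; the powers below are hypothetical illustration values in watts.

```python
import math

def thd_plus_n(harmonic_powers, noise_powers, fundamental_power):
    """THD+N = sqrt(sum of harmonic and noise powers) / sqrt(fundamental power)."""
    total = sum(harmonic_powers) + sum(noise_powers)
    return math.sqrt(total) / math.sqrt(fundamental_power)

# Two harmonics plus broadband noise against a 1 W fundamental:
ratio = thd_plus_n([1e-10, 4e-11], [6e-11], 1.0)  # ~1.41e-5, roughly -97 dB
```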


Intermodule ADC Skew

This is a measurement of the maximum difference in time between two identical signals measured on different ADCs of the same module. This specification also remains valid for other modules within the same chassis when using reference clock synchronization, and it can be determined for modules in other chassis by adding the potential skew of the PXI clock distribution.


SC Express Analog Input Specifications

AI Absolute Accuracy

Accuracy refers to how close a measurement is to the correct value. Absolute Accuracy at Full Scale is a calculated theoretical accuracy assuming the value being measured is the maximum voltage supported in a given range. The accuracy of a measurement will change as the measurement changes, so to be able to make a comparison between devices, the accuracy at full scale is used. Note that absolute accuracy at full scale makes assumptions about environment variables, such as 25 °C operating temperature, that may be different in practice.

      • Nominal Range Positive Full Scale—The ideal maximum positive value that can be measured in a particular range
      • Nominal Range Negative Full Scale—The ideal maximum negative value that can be measured in a particular range
      • Residual Gain Error—Gain error inherent to the instrumentation amplifier and is known to exist after a self-calibration
      • Gain Tempco—The temperature coefficient that describes how temperature impacts the gain of the amplifier compared to the temperature at last self-calibration
      • Residual Offset Error—Offset error inherent to the instrumentation amplifier and is known to exist after a self-calibration
      • Reference Tempco—The temperature coefficient that describes how accurate a measurement is at a specific temperature compared to the temperature at last external calibration
      • INL Error (relative accuracy resolution)—The maximum deviation from the voltage output of an ADC to the ideal output. Can be thought of as worst case DNL. See also: DNL
      • Offset Tempco—The temperature coefficient that describes how temperature affects the offset in an ADC conversion compared to the temperature at last self-calibration
      • Random/System Noise—Additional system noise generated by the analog front end, measured by grounding the input channel


The NI PXIe-4300 has a range of ± 1.0 V. The absolute accuracy at full scale is calculated with the assumption that the signal being measured is 1.0 V. The absolute accuracy at full scale for the ± 1.0 V range is 575 µV.
See Also
How Do I Calculate Absolute Accuracy Or System Accuracy?


ADC Timing Mode

          High Resolution

Found on SC Express temperature input modules, this timing mode slows down the sample rate of the device to improve performance through a process called noise shaping. This process dramatically increases the ADC conversion time of the device. This mode is enabled by default.

         High Speed

Found on SC Express temperature input modules, this timing mode speeds up the sample rate of the device to the maximum supported rate while sacrificing performance. This process dramatically reduces the ADC conversion time of the device.

          ADC Conversion Time

The ADC conversion time is how long it takes for the ADC to provide samples from the input data stream. It is used as a descriptive specification alongside the ADC timing modes.


Bridge Completion

Bridge completion refers to the options provided to the user for connecting Wheatstone bridge sensors, which are typically used to measure strain, to the module. The software-selectable bridge completion modes are full-bridge, half-bridge, and quarter-bridge (these options may not be available on all devices).


Bridge Resistance

This specifies the bridge resistance values that the module has been designed to handle. On SC Express devices these values are typically 120 Ω, 350 Ω, and 1,000 Ω.


Shunt Calibration

Shunt calibration is a method of calibration for bridge-based circuits in which a resistor is placed in parallel with one of the legs of the bridge, driving the sensor to a known value. Doing so allows you to determine the full-scale output of the transducer.


This specifies how shunt calibration is done on the selected device.


Specifies where and what resistor is being used to complete the shunt calibration.

Switch Resistance

This is the resistance measured across the switch that is activated when enabling shunt calibration.


This is a measure of how much the value of the shunt resistor can vary based on the current operating temperature of the module.


This is a measure of the acceptable variation in the resistance of the onboard shunt resistor.


This is the resistance being used to perform the shunt calibration.


This indicates where the shunt resistor is located on the device in relation to the currently active bridge.


Channel-to-Channel Matching

Channel-to-channel matching describes, when the same signal is provided to multiple channels of the device, how much the measured signal can vary in gain and phase between channels.


Fault Protection

This is the amount of voltage that the module can withstand between two pins on the device without being damaged.  This specification can be given for either the “On” state of the device or the “On or Off” state.



Filtering

On many SC Express devices, filtering has been added in order to provide a cleaner signal.  When specified, the document provides information on where the filter sits in the frequency domain and how the filter is implemented.


Cutoff Frequency

Specifies where in the frequency domain the passband ends.  On SC Express devices this is a user-selectable feature, so the selectable options are listed here.

Cutoff Frequency Tolerance

This is a measure of how accurate the selected lowpass filter cutoff frequency will be.  This value is typically ±5%.
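To make the tolerance concrete, the band that a selected cutoff can actually fall in is just nominal × (1 ± tolerance). The 1 kHz nominal value below is an illustrative choice, not a specification:

```python
# Actual cutoff band for a selected lowpass frequency with a +/-5% tolerance.
nominal_hz = 1000.0     # hypothetical user-selected cutoff
tolerance = 0.05        # typical +/-5% cutoff frequency tolerance
low_hz = nominal_hz * (1 - tolerance)    # ~950 Hz
high_hz = nominal_hz * (1 + tolerance)   # ~1050 Hz
```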

Filter Type

This specifies what filter type is used to implement the lowpass filter.  The filter implemented will typically be either a Butterworth or an elliptic filter.


Input Bias Current

A consequence of having a finite input impedance is that the device requires a small amount of current to be able to detect a signal. Theoretically, this value should be 0 A, but in practice this is not possible.


The NI PXIe-4300 has an input bias current of ±6 nA. This means that any sensor being measured by the NI PXIe-4300 must be able to source at least that much current across its entire voltage output range to be correctly digitized.
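The practical effect of input bias current is a DC offset error equal to the bias current times the source impedance. The source impedance below is a hypothetical example, not a value from the specifications:

```python
# Offset error caused by input bias current flowing through the
# source impedance: V_err = I_bias * R_source.
i_bias_a = 6e-9        # 6 nA, the PXIe-4300 input bias current
r_source_ohm = 1e3     # hypothetical 1 kOhm sensor source impedance
v_error = i_bias_a * r_source_ohm   # offset error in volts (~6 uV)
```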


Timing Accuracy

When generating a clock signal on an NI DAQ device for timing signals, the actual frequency generated will be within the timing accuracy. This specification is derived from the overall accuracy of the onboard crystal oscillator. Timing accuracy is typically measured in parts per million (ppm). To convert this accuracy value to Hz, multiply the clock frequency by the accuracy value divided by 1 million. The frequency of the clock is not likely to change drastically from cycle to cycle.
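The ppm-to-Hz conversion described above can be sketched as a one-line helper; the function name is ours, not an NI API:

```python
def ppm_to_hz(clock_hz, accuracy_ppm):
    """Worst-case frequency error, in Hz, for a clock with the given
    accuracy in parts per million."""
    return clock_hz * accuracy_ppm / 1e6

# A 1,000 Hz sample clock with 50 ppm accuracy can be off by up to 0.05 Hz.
error_hz = ppm_to_hz(1000.0, 50.0)
```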


The PXIe-4300 has a timing accuracy of 50 ppm. For an analog input task with a sample rate of 1,000 S/s, the sample clock will run at 1,000 Hz ± 50 ppm. In Hz, this comes out to 1,000 Hz × (50 / 1,000,000) = 0.05 Hz, so the actual clock frequency will be 1,000 Hz ± 0.05 Hz.


Chopping

Chopping is a feature that can be used to remove signal path offset voltages and reduce noise. The improved measurement performance is achieved by measuring the signal twice, once normally (V0) and once with the inputs inverted (V1). These measurements are then averaged by the device to create a sample. A hardware diagram is provided in the image below.
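The offset cancellation can be sketched numerically: any constant offset appears in both readings and drops out when they are combined. The values below are illustrative, not from a real device:

```python
# Sketch of the chopping technique: measure once normally and once with
# the inputs inverted, then combine so the signal-path offset cancels.
def chop(v_signal, v_offset):
    v0 = v_signal + v_offset     # normal measurement
    v1 = -v_signal + v_offset    # inverted-input measurement
    return (v0 - v1) / 2         # offset cancels, signal is recovered

sample = chop(v_signal=1.25, v_offset=0.003)  # returns 1.25, offset removed
```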


Wire Mode

          CJC Accuracy

The thermocouple itself relies on the principle that an electrical potential exists at the junction of two different metals. CJC becomes necessary because the junction between each end of the thermocouple and your measuring system (connector block, terminal block) also adds a potential difference to the thermocouple voltage. This is compensated by using an onboard CJC sensor to measure the temperature between the thermocouple and the measuring system. CJC sensor accuracy is the accuracy specification of the CJC sensor only.


Common Analog Output Specifications


Differential Nonlinearity (DNL)

DNL is the difference between the ideal step size of the DAC (see Digital-to-Analog Converter (DAC) Resolution for how to calculate step size) and the actual value that is output (typically measured in LSB). In an ideal DAC, DNL would be 0 LSB.


The NI PXIe-4322 has a DNL of ±1 LSB, which means that for any value that is output from the DAC, the actual value can be ±1 LSB away from the value programmed. For example, if the user programs the DAC to output a value of 1 V on the ±5 V range, the output (not including effects of accuracy) can range from 1 V - 1 LSB to 1 V + 1 LSB.
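Assuming the PXIe-4322's 16-bit DAC, the size of 1 LSB on the ±5 V range (a 10 V span) can be worked out as follows:

```python
# 1 LSB is the full range span divided by the number of DAC codes.
# Assumes a 16-bit DAC on the +/-5 V range (10 V span).
span_v = 10.0
bits = 16
lsb_v = span_v / 2**bits          # ~152.6 uV per LSB

programmed_v = 1.0
dnl_lsb = 1                       # +/-1 LSB DNL specification
low_v = programmed_v - dnl_lsb * lsb_v
high_v = programmed_v + dnl_lsb * lsb_v
```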



Integral Nonlinearity (INL)

INL is the compound effect of DNL, so the INL specification is often used in accuracy calculations. For the NI PXIe-4322, the INL specification in the accuracy table is 64 ppm of the range used, or about 4 LSB.
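The ppm-of-range figure converts to LSBs by scaling against the converter's code count; the calculation below assumes a 16-bit converter:

```python
# Convert an INL spec given in ppm of range to LSBs for a 16-bit converter.
codes = 2**16
inl_ppm = 64
inl_lsb = inl_ppm * 1e-6 * codes   # ~4.19 LSB, quoted as 4 LSB
```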


SC Express Analog Output Specifications

Settling Time

Settling time is how long it takes for the output signal to settle and remain within a specified band of its final value, which is specified in least significant bits (LSBs). This amount of time varies as the load changes, because it is directly related to the time constant of the circuit.

