Today’s electronic designs are characterized by converging functionality and increasingly interwoven analog and digital technology. Designing, prototyping, and testing these systems, such as latest-generation wireless handsets and set-top boxes in which video, audio, and data converge, requires tightly integrated digital and analog acquisition and generation hardware matched in baseband sampling rate, distortion, and timing characteristics. Analog and digital instrumentation can no longer be stand-alone systems with disparate timing engines and mismatched analog performance. Furthermore, with manufacturing of such devices running around the clock in locations around the world, stable and consistent performance specifications over a wide temperature range are essential for reliable, high-throughput functional test.
National Instruments designed the Synchronization and Memory Core (SMC) as the common architecture for a suite of high-speed modular instruments that answer the challenge of testing converged devices. The SMC features critical to integrated mixed-signal prototyping and test systems are:
1. Flexible input and output data transfer cores
2. High-speed deep onboard memory scalable up to 512 MB per channel
3. Precise timing and synchronization engine
Central to the SMC architecture is a field-programmable gate array (FPGA) controller, the DataStream FPGA (DSF), which is the "CPU" of the instrument. It processes all instructions, listens to triggers and locks, routes signals externally, and manages waveform traffic between the instrument and the host computer.
Two major data transfer cores are instantiated in the DSF: one for input and one for output. The input core is designed for high-speed analog waveform digitization and digital waveform input. The output core is designed for high-speed analog waveform generation and digital waveform output. The data transfer cores in the DSF handle data and instruction processing, event triggering, trigger and marker routing, waveform buffer linking and looping, and interdevice and intradevice communication buses (Figure 1).
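To make the waveform buffer linking and looping concrete, the following Python sketch models how an output core might walk a sequence of linked waveform buffers; the names and structure are illustrative only, since the actual DataStream FPGA logic is implemented in hardware.

    from dataclasses import dataclass
    from typing import Iterator, List, Sequence

    @dataclass
    class Segment:
        samples: Sequence[float]  # one waveform buffer held in onboard memory
        loops: int                # how many times this buffer repeats

    def stream(sequence: List[Segment]) -> Iterator[float]:
        """Yield samples by walking the linked segments in order."""
        for seg in sequence:
            for _ in range(seg.loops):
                yield from seg.samples

    # Example: play a four-sample burst twice, then four samples of idle level.
    burst = Segment(samples=[0.0, 1.0, 0.0, -1.0], loops=2)
    idle = Segment(samples=[0.0] * 4, loops=1)
    samples_out = list(stream([burst, idle]))  # 12 samples in total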

To read more about the SMC and how it works, refer to the National Instruments Synchronization and Memory Core -- a Modern Architecture for Mixed-Signal Test paper.
Many test and measurement applications call for the timing and synchronization of multiple instruments because a single instrument offers a limited number of stimulus/response channels and/or because mixed-signal stimulus/response channels are needed. For example, an oscilloscope may have up to four channels and a signal generator up to two. Applications ranging from mixed-signal test in the electronics industry to laser spectroscopy in the sciences require timing and synchronization (T&S) across higher channel counts and/or the correlation of digital input and output channels with analog input and output channels.
Distributing clocks and triggers to synchronize high-speed devices raises nontrivial issues. The latencies and timing uncertainties involved in orchestrating multiple measurement devices are significant challenges, especially for high-speed measurement systems. These issues, often overlooked during initial system design, limit the speed and accuracy of synchronized systems. The two main issues that arise in the distribution of clocks and triggers are skew and jitter.
National Instruments has developed a patent-pending synchronization method in which an additional clock domain is employed to align sample clocks and to distribute and receive triggers. The objectives of NI T-Clock (TClk) technology are twofold: to have synchronized devices respond to triggers within the same sample period and to keep their sample clocks tightly aligned. TClk synchronization is flexible and wide ranging, applying whether the synchronized devices are of the same or different types and whether they reside in a single chassis or span several.
The purpose of TClk synchronization is to have devices respond to triggers at the same time, where "the same time" means on the same sample period and with very tight alignment of the sample clocks. TClk synchronization is accomplished by having each device generate a trigger clock (TClk) derived from its sample clock. Triggers are synchronized to TClk pulses: a device that receives a trigger from an external source, or generates one internally, sends the signal to all devices, including itself, on a falling edge of TClk, and all devices react to the trigger on the following rising edge of TClk.
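The handshake can be modeled in a few lines of Python; the class and signal names below are hypothetical, and in hardware each device derives TClk from its own sample clock, whereas this sketch uses one shared TClk for simplicity.

    class Device:
        def __init__(self, name):
            self.name = name
            self.pending = False    # trigger latched on a falling TClk edge
            self.started_at = None  # TClk period on which the device responded

        def on_tclk_falling(self, trigger_line):
            if trigger_line:        # every device latches the distributed trigger here
                self.pending = True

        def on_tclk_rising(self, period):
            if self.pending:        # every device reacts on the same rising edge
                self.started_at = period
                self.pending = False

    devices = [Device("digitizer"), Device("generator")]
    for period in range(4):
        trigger_line = (period == 1)          # the trigger is asserted during period 1
        for d in devices:
            d.on_tclk_falling(trigger_line)   # falling edge: trigger sent to all devices
        for d in devices:
            d.on_tclk_rising(period)          # following rising edge: all devices respond

    assert devices[0].started_at == devices[1].started_at  # same sample period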
To read more about NI T-Clock and how it works, refer to the National Instruments T-Clock Technology for Timing and Synchronization of Modular Instruments paper.
NI-STC3 timing and synchronization technology delivers a new level of performance to National Instruments X Series multifunction data acquisition (DAQ) devices. This technology is the driver behind the advanced digital, timing, triggering, synchronization, counter/timer, and bus-mastering features.
A retriggerable task is a measurement task that executes a specified operation each time a specific trigger event occurs. Previous generations of timing and synchronization technology could retrigger only counter operations, which could provide retriggerable sample clocks for other tasks but required fairly complex code. NI-STC3 technology gives all acquisition and generation tasks inherent retriggering capability, enabled through a single NI-DAQmx property node.
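As a sketch of how concise this is, the following uses the NI-DAQmx Python API (the device and terminal names are placeholders) to rearm a finite analog acquisition on every external trigger with the single retriggerable property:

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        task.timing.cfg_samp_clk_timing(
            rate=100000.0,
            sample_mode=AcquisitionType.FINITE,
            samps_per_chan=1000,
        )
        task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0")
        task.triggers.start_trigger.retriggerable = True  # the single property
        task.start()
        for _ in range(3):  # each trigger on PFI0 yields a fresh 1,000-sample record
            data = task.read(number_of_samples_per_channel=1000)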
NI-STC3 technology also provides a faster 100 MHz timebase, replacing the 80 MHz timebase used by previous devices for many counter applications. The 100 MHz timebase is also used to generate analog and digital sampling and update rates, compared to the 20 MHz timebase used in prior devices. When generating arbitrary sampling rates, the generated clock rate can therefore be significantly closer to the requested rate because of this 5x speed improvement. In addition, the faster timebase and improved device front end reduce the time between a trigger and the first sample clock edge, which improves the responsiveness of the device to triggers.
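The effect of the faster timebase is easy to see with a simplified model in which the sample clock is produced by integer division of the timebase (actual clock generation on the device may be more sophisticated):

    def closest_rate(timebase_hz, requested_hz):
        # Achievable rates are the timebase divided by an integer divisor.
        divisor = max(1, round(timebase_hz / requested_hz))
        return timebase_hz / divisor

    requested = 44100.0  # an arbitrary requested rate, in Hz
    for timebase in (20e6, 100e6):
        actual = closest_rate(timebase, requested)
        print(f"{timebase / 1e6:.0f} MHz timebase -> {actual:.1f} Hz "
              f"(error {abs(actual - requested):.1f} Hz)")
    # 20 MHz timebase  -> 44052.9 Hz (error ~47 Hz)
    # 100 MHz timebase -> 44091.7 Hz (error ~8 Hz)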
Buffered counter input functionality in NI-STC3 technology improves on its predecessors’ capabilities for buffered period and frequency measurements. In addition to the implicit timing type, the user can now select a sample clock as the timing type. With sample clock timing, buffered frequency and period measurements are made by counting both an internal timebase (counted by an embedded counter) and the unknown signal of interest up until the rising edge of the sample clock; the sample clock itself is a signal that the user must specify and supply. The ideal frequency of the internal timebase is divided by its accumulated count to obtain the reciprocal of the sample interval, which is then multiplied by the count of the unknown signal to yield its effective frequency over that interval.
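Numerically, this is a reciprocal (two-counter) frequency measurement; the counts below are illustrative:

    TIMEBASE_HZ = 100e6  # internal timebase counted by the embedded counter

    def measured_frequency(timebase_count, signal_count):
        interval_s = timebase_count / TIMEBASE_HZ  # duration of the sample interval
        return signal_count / interval_s           # unknown-signal edges per second

    # 1,000,000 timebase ticks define a 10 ms interval; 102 signal edges in that
    # interval correspond to a 10.2 kHz unknown signal.
    print(measured_frequency(1_000_000, 102))  # -> 10200.0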
NI-STC3 technology also provides several features for the digital I/O and programmable function input (PFI) lines on X Series devices. These include programmable power-up states, watchdog timers, event detection, and new PFI filtering.
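As one example, digital filtering can be configured through the NI-DAQmx Python API; the line name and minimum pulse width below are placeholders, and the exact filter settings available depend on the device:

    import nidaqmx

    with nidaqmx.Task() as task:
        chan = task.di_channels.add_di_chan("Dev1/port0/line0")
        chan.di_dig_fltr_enable = True              # reject glitches on this line
        chan.di_dig_fltr_min_pulse_width = 5.12e-6  # seconds; shorter pulses are filtered
        print(task.read())                          # one filtered sample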
With NI-STC3 technology, users can now accomplish more advanced analog, digital, and counter operations than ever before. In addition, applications that previously required additional onboard resources or were difficult to program can now execute independently and with less NI-DAQmx code.
To read more about NI-STC3 and how it works, refer to the NI-STC3 Timing and Synchronization Technology paper.
NI-MCal is a software-based calibration algorithm that generates a third-order polynomial to correct for the three sources of voltage measurement error: offset, gain, and nonlinearity. Because corrections are applied in software, NI-MCal can optimize every selectable range with a unique correction polynomial, something hardware-based calibration cannot accommodate.
NI-MCal, introduced as a feature on National Instruments M Series devices, takes a unique approach to device self-calibration. In addition to a new hardware technique for compensating for measurement error, NI-MCal uses software to characterize and correct offset, gain, and nonlinearity error. At the heart of the technology is an algorithm that determines a set of third-order polynomial coefficients to accurately translate the digital output of an ADC into voltage data.
The NI-MCal algorithm executes when a self-calibration function is called from software such as LabVIEW. On a typical GHz PC, NI-MCal takes less than 10 seconds to characterize nonlinearity, gain, and offset and to save the correction polynomials to the onboard EEPROM. Subsequent measurements are scaled automatically by the device driver software before being returned to the user through application software. Unlike other self-calibration schemes, NI-MCal can return calibrated data from every channel in a scan, even when the channels are set to different input ranges, because it determines, saves, and applies correction polynomials for every input range on the device. Self-calibration mechanisms that rely on hardware components for data correction cannot load correction functions fast enough to maintain accuracy when multiple input ranges appear in a single scan. NI-MCal instead corrects data in software, which can load and apply channel-specific correction functions even while scanning at maximum device rates.
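The core computation can be sketched with NumPy; the raw codes and reference voltages below are stand-ins, since the actual characterization data and coefficients live in the driver and onboard EEPROM:

    import numpy as np

    # Raw ADC codes and the known reference voltages measured at those codes.
    raw_codes = np.array([-32768.0, -16384.0, 0.0, 16384.0, 32767.0])
    ref_volts = np.array([-10.002, -4.999, 0.0015, 5.002, 9.998])

    # A third-order fit captures offset (x^0), gain (x^1), and nonlinearity (x^2, x^3).
    coeffs = np.polyfit(raw_codes, ref_volts, deg=3)

    def scale(code):
        """Convert a raw ADC code to a calibrated voltage."""
        return float(np.polyval(coeffs, code))

    print(scale(8192.0))  # calibrated voltage for a raw reading of 8192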
NI-MCal stands apart from other self-calibration techniques by correcting for nonlinearity error in addition to applying channel-specific data correction functions for all channels in a scan sequence. By eliminating the limitations of the hardware components traditionally used for device error correction and exploiting the power and speed of software and PC processing, NI-MCal redefines device self-calibration and raises the bar for measurement accuracy.
To read more about NI-MCal and how it works, refer to the NI-MCal Calibration Methodology Improves Measurement Accuracy paper.