Consumers continuously demand better performance in smaller form factors with reduced power requirements. In a world of big data, digital communication buses have shifted dramatically from parallel to serial, starting in the early 2000s. This transition to high-speed serial buses has produced devices with much smaller footprints, much higher data throughput, and lower power requirements, enabling technologies such as SATA, USB, and PCI Express that consumers take advantage of today.
Successful communication over a high-speed serial link involves challenges at multiple layers, and understanding the concepts at each level helps when implementing and testing them. For any one layer to work, the layers below it must be functioning correctly. Many specifications exist for the physical and data link layers, and using a standard implementation spares engineers from working out the low-level details on their own. Another benefit of using a standard physical and data link layer is that IP is typically available that implements those details for you. A great example is the Xilinx Aurora Protocol, a free IP core that implements a lightweight data link layer protocol for point-to-point serial communication. It abstracts away details like clock correction, channel bonding, idle characters, and encoding/decoding, allowing engineers to focus on their upper, application-specific layers.
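As a greatly simplified illustration of this layering idea, the sketch below implements a toy data-link layer in Python that frames application payloads with control characters. The `SOF`/`EOF`/`ESC` byte values and all function names here are invented for illustration; they are not Aurora's actual control characters or API. The point is only that the application layer hands data down and gets data back without ever seeing the framing details.

```python
# Conceptual sketch of link layering (hypothetical framing scheme, not
# Aurora's actual implementation): a toy data-link layer frames payloads
# with start/end control bytes and escapes any control bytes that appear
# in the payload itself.

SOF = b"\x02"  # hypothetical start-of-frame control byte
EOF = b"\x03"  # hypothetical end-of-frame control byte
ESC = b"\x1b"  # escape byte, so control values can occur in payload data

def link_encode(payload: bytes) -> bytes:
    """Data-link layer: escape control bytes and wrap the payload in a frame."""
    escaped = bytearray()
    for b in payload:
        if bytes([b]) in (SOF, EOF, ESC):
            escaped += ESC          # prefix control values with the escape byte
        escaped.append(b)
    return SOF + bytes(escaped) + EOF

def link_decode(frame: bytes) -> bytes:
    """Data-link layer: strip framing and escapes to recover the payload."""
    assert frame[:1] == SOF and frame[-1:] == EOF, "malformed frame"
    body = frame[1:-1]
    payload = bytearray()
    i = 0
    while i < len(body):
        if bytes([body[i]]) == ESC:
            i += 1                  # skip the escape, keep the escaped byte
        payload.append(body[i])
        i += 1
    return bytes(payload)

def app_send(message: str) -> bytes:
    """Application layer: deals only in strings; framing is invisible to it."""
    return link_encode(message.encode("utf-8"))

def app_receive(frame: bytes) -> str:
    return link_decode(frame).decode("utf-8")

frame = app_send("sensor reading: 42")
print(app_receive(frame))  # prints "sensor reading: 42"
```

A real data-link layer such as Aurora does far more (8B/10B or 64B/66B encoding, clock correction, channel bonding), but the division of responsibility is the same: each layer relies on the one below it working correctly and hides its details from the one above.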
With the benefits of reduced size and power, paired with increased performance, high-speed serial links are quickly growing in popularity. Industry continues to improve the fundamentals of high-speed serial, enabling ever-faster line rates and powering the world of big data.
- The Need for High-Speed Serial
- Layers of High-Speed Serial Links
- Related Content