The proposed CROWD architecture exploits the heterogeneity of dense wireless deployments, in terms of both radio conditions and technology mix. It offers tools to orchestrate the network elements to mitigate intrasystem interference, improve the performance of channel-opportunistic transmission/reception techniques, and reduce energy consumption. An extremely dense and heterogeneous network deployment comprises two domains of physical network elements: backhaul and radio access network (RAN). The latter is expected to become increasingly heterogeneous not only in terms of technologies (for example, 3G, LTE, and WiFi) and cell ranges (macro-/pico-/femto-cells), but also in density (from single macro-cell base-station (BS) coverage in underpopulated areas to tens or hundreds of potentially reachable BSs in hot spots). Such heterogeneity also creates high traffic variability over time because of statistical multiplexing, mobility of users, and variable-rate applications. For optimal performance, network elements must be reconfigured at different time scales, from very fast (a few tens of milliseconds) to relatively long (a few hours), which affects the design of both backhaul and RAN components.

To tackle the complex problem of reconfiguration, we propose to follow an SDN-based approach for managing network elements, as shown in Figure 1. Network optimization in the proposed architecture is assigned to a set of controllers, which are virtual entities deployed dynamically over the physical devices. These controllers are technology-agnostic and vendor-independent, which allows full exploitation of the diversity of deployment/equipment characteristics. They expose a northbound interface, which is an open API to the control applications. We define control applications as the algorithms that actually perform the optimization of network elements, for example almost-blank subframe (ABSF) scheduling.
The northbound interface does not need to be concerned with either the details of data acquisition from the network or the enforcement of decisions. Instead, the southbound interface is responsible for managing the interaction between controllers and network elements.
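To make the split concrete, the following sketch (not CROWD code; all class and metric names are illustrative assumptions) shows how a control application such as an ABSF optimizer could work purely against a technology-agnostic northbound API, while a southbound driver hides data acquisition and enforcement:

```python
# Illustrative sketch: a technology-agnostic northbound API as described in
# the text. Names (Controller, AbsfApp, "per_bs_load") are assumptions.

class Controller:
    """Controller exposing a northbound API to control applications."""

    def __init__(self, southbound):
        self.southbound = southbound  # handles data acquisition + enforcement

    # --- northbound API ---
    def get_state(self, metric):
        return self.southbound.collect(metric)

    def apply(self, element_id, config):
        self.southbound.enforce(element_id, config)


class AbsfApp:
    """Control application: decides which subframes each BS should blank."""

    def optimize(self, controller):
        load = controller.get_state("per_bs_load")
        # Toy policy: the most loaded BS keeps all 10 subframes of a frame,
        # the others blank every other subframe (almost-blank subframes).
        busiest = max(load, key=load.get)
        for bs in load:
            pattern = [1] * 10 if bs == busiest else [1, 0] * 5
            controller.apply(bs, {"absf_pattern": pattern})


class FakeSouthbound:
    """Stand-in southbound driver for demonstration only."""

    def __init__(self):
        self.configs = {}

    def collect(self, metric):
        return {"bs1": 0.9, "bs2": 0.4}

    def enforce(self, element_id, config):
        self.configs[element_id] = config


sb = FakeSouthbound()
AbsfApp().optimize(Controller(sb))
print(sb.configs["bs1"]["absf_pattern"])  # busiest BS keeps all subframes
```

Because the application only sees abstract state and abstract decisions, the same optimizer runs unchanged over any vendor's equipment for which a southbound driver exists.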
We propose two types of controllers in the network (see Figure 1): the CROWD regional controller (CRC), which is a logically centralized entity that executes long-term optimizations, and the CROWD local controller (CLC), which runs short-term optimizations. The CRC requires only aggregate data from the network, and is in charge of the dynamic deployment and life cycle management of the CLCs. The CLC requires data from the network at a more granular time scale; for this reason, a CLC covers only a limited number of base stations. The CLC can be hosted by a backhaul/RAN node itself, for example a macro-cell BS, so as to keep the optimization intelligence close to the network. The CRC, on the other hand, is likely to run on dedicated hardware in the network operator's data center. Such an SDN-based architecture provides the freedom to run many control applications that fine-tune network operation under different optimization criteria, for example capacity or energy efficiency. The CROWD vision provides a common set of functions as part of the southbound interface, which the control applications can use, for example LTE access selection and LTE interference mitigation, to configure the network elements of a dense deployment.
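The two-tier hierarchy can be summarized in a few lines of code. This is a minimal sketch under stated assumptions: the control-loop periods and the CLC size limit are illustrative placeholders, not CROWD-specified values.

```python
# Illustrative sketch of the CRC/CLC hierarchy: the CRC manages the life
# cycle of CLCs, and each CLC covers only a small set of base stations so
# it can gather fine-grained state on a short time scale.

class CLC:
    """Local controller: short-term optimization over a few base stations."""
    PERIOD_S = 0.01  # tens of milliseconds (illustrative)

    def __init__(self, base_stations):
        self.base_stations = base_stations


class CRC:
    """Regional controller: long-term optimization and CLC life cycle."""
    PERIOD_S = 3600.0  # hours (illustrative)

    def __init__(self):
        self.clcs = []

    def deploy_clcs(self, base_stations, max_bs_per_clc=4):
        # Partition the deployment so each CLC stays small enough to
        # collect per-BS state at its fast control period.
        for i in range(0, len(base_stations), max_bs_per_clc):
            self.clcs.append(CLC(base_stations[i:i + max_bs_per_clc]))


crc = CRC()
crc.deploy_clcs([f"bs{i}" for i in range(10)])
print(len(crc.clcs))  # 3 CLCs covering 4 + 4 + 2 base stations
```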
Figure 2A. Hardware/Software Architecture
Figure 2B. Protocol Stack Architecture
Figure 2 shows the general overview of the testbed architecture. The functions of the MAC and higher layer protocols (including the CLC) run on a Linux computer. The protocol stack communicates with the PHY layer running on the NI PXI system over Ethernet using an L1-L2 API that is based on the Small Cell Forum API. We have implemented the computationally complex baseband signal processing for an "LTE-like" OFDM transceiver for the eNB and UE in LabVIEW FPGA, using several NI FlexRIO FPGA modules because of the high-throughput requirements. We use the NI 5791 adapter module for NI FlexRIO as the RF transceiver. This module has continuous frequency coverage from 200 MHz to 4.4 GHz and 100 MHz of instantaneous bandwidth on both TX and RX chains. It features a single-stage, direct-conversion architecture, which provides high bandwidth in the small form factor of an NI FlexRIO adapter module.
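The MAC-PHY split over Ethernet can be illustrated with a small message-passing sketch. The byte layout, message ID, and field set below are assumptions made for illustration only; the actual Small Cell Forum API defines its own message formats (for example, DL_CONFIG.request):

```python
# Illustrative sketch of a per-TTI L1-L2 exchange: the MAC (Linux host)
# packs a downlink configuration and sends it to the PHY (PXI system).
# A UDP loopback stands in for the real MAC<->PHY Ethernet link.

import socket
import struct

MSG_DL_CONFIG = 0x01  # hypothetical message id

def pack_dl_config(sfn, subframe, mcs, rb_start, rb_len):
    # header: msg id (u8), body length (u16); body: SFN, subframe, PDSCH params
    body = struct.pack("<HBBBB", sfn, subframe, mcs, rb_start, rb_len)
    return struct.pack("<BH", MSG_DL_CONFIG, len(body)) + body

def unpack_dl_config(data):
    msg_id, length = struct.unpack_from("<BH", data)
    sfn, subframe, mcs, rb_start, rb_len = struct.unpack_from("<HBBBB", data, 3)
    return {"sfn": sfn, "subframe": subframe, "mcs": mcs,
            "rb_start": rb_start, "rb_len": rb_len}

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # "PHY" side
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # "MAC" side
tx.sendto(pack_dl_config(sfn=100, subframe=3, mcs=16, rb_start=0, rb_len=25),
          rx.getsockname())
cfg = unpack_dl_config(rx.recv(1024))
print(cfg["mcs"], cfg["rb_len"])  # 16 25
```

The key design point carried over from the testbed is that the PHY is configured afresh every TTI by a compact message, so the MAC can run on a general-purpose host while the FPGA meets the real-time deadline.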
Subcarrier Spacing (Δf)               | 15 kHz
FFT Size (N)                          | 128, 256, 512, 1024, 1536, 2048
Cyclic Prefix (CP) Length (Ng)        | Normal
Sampling Frequency (Fs)               | 1.92, 3.84, 7.68, 15.36, 23.04, 30.72 MHz
Channel Bandwidth                     | 1.4, 3, 5, 10, 15, 20 MHz
Number of Used Subcarriers            | 72, 180, 300, 600, 900, 1200
Pilots/Reference Symbol (RS) Spacing  | Uniform (6 subcarriers)

Table 1. LTE-Like OFDM System Parameters
Introduction to LabVIEW-Based LTE-Like PHY
The current PHY implementation supports one antenna port per node (that is, SISO) with FDD operation. We have chosen to implement only the downlink transmitter/receiver and plan to demonstrate the performance of our algorithms in the downlink direction. However, we reuse the same PHY layer in the uplink, resulting in a symmetric OFDMA-based uplink. This greatly simplifies the PHY layer design and allows easy interconnection to the MAC layer of the protocol stack; in the future, we plan to implement an SC-FDMA-based uplink transport channel. The PHY layer implementation runs in real time and accepts a configuration from the MAC layer of the protocol stack every TTI (1 ms). We have designed the PHY modules to loosely follow the 3GPP specifications, and hence refer to the result as an "LTE-like" system, because our testbed is intended for research rather than commercial development. We describe the main LTE OFDMA downlink system parameters in Table 1. We omitted some components and procedures of a commercial LTE transceiver (for example, random access and the broadcast channel) because they fall outside the scope and requirements of our testbed; only the essential data and control channel functions are implemented.
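The parameters in Table 1 are tied together by the standard LTE numerology: with a fixed 15 kHz subcarrier spacing, choosing the FFT size per channel bandwidth fixes the sampling frequency as Fs = N x Δf. The snippet below sanity-checks that relationship using the standard LTE values (our testbed follows them only loosely):

```python
# Standard LTE downlink numerology: Fs = N * delta_f for each bandwidth.

DELTA_F = 15e3  # subcarrier spacing in Hz

# channel bandwidth (MHz) -> (FFT size N, number of used subcarriers)
LTE_NUMEROLOGY = {
    1.4: (128, 72), 3: (256, 180), 5: (512, 300),
    10: (1024, 600), 15: (1536, 900), 20: (2048, 1200),
}

for bw, (n_fft, n_used) in LTE_NUMEROLOGY.items():
    fs = n_fft * DELTA_F
    occupied = n_used * DELTA_F  # occupied bandwidth, excluding guards
    print(f"{bw:>4} MHz: N={n_fft:4d}, Fs={fs / 1e6:5.2f} MHz, "
          f"occupied={occupied / 1e6:5.2f} MHz")
```

For example, the 20 MHz configuration uses a 2048-point FFT, giving Fs = 30.72 MHz, of which 1200 subcarriers (18 MHz) carry data and reference symbols.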
Figure 3. LTE-DL Transmitter FPGA Block Diagram
We used the Xilinx CORE Generator library for the channel coding, FFT/IFFT, and filter blocks, and developed custom algorithms in LabVIEW for all the other blocks. On the transmitter side (see Figure 3), physical downlink shared channel (PDSCH) and physical downlink control channel (PDCCH) transport blocks (TB) are transferred from the MAC layer and processed by each subsystem block as they are synchronously streamed through the system. Handshaking and synchronization logic between subsystems coordinates each module's operations on the stream of data. The fields of the downlink control information (DCI), including parameters specifying the modulation and coding scheme (MCS) and resource block (RB) mapping, are generated by the MAC layer and provided to the respective blocks for controlling the data channel processing. Once the PDCCH and PDSCH data have been appropriately encoded, scrambled, and modulated, they are fed to the resource element (RE) mapper to be multiplexed with RSs and the primary and secondary synchronization sequences (PSS/SSS), which are stored in static lookup tables on the FPGA. Currently, the RB pattern is fixed and supports only one user; multiuser and dynamic resource allocation will be included in later versions. OFDM symbols are generated as shown in Figure 3 and converted to analog for transmission over the air by the NI 5791 RF transceiver.
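The final stages of the transmitter chain in Figure 3 can be sketched in a few lines. This is a deliberately tiny pure-Python model (naive IDFT, 16-point FFT, data-only RE mapping) for clarity; the testbed performs these steps with Xilinx FFT cores at full LTE sizes, and also multiplexes RS/PSS/SSS into the grid:

```python
# Minimal sketch of OFDM symbol generation: QPSK mapping, subcarrier
# (resource element) mapping, IDFT, and cyclic prefix insertion.

import cmath

def qpsk_mod(bits):
    # Gray-mapped QPSK: 2 bits -> one unit-energy complex symbol
    m = {(0, 0): 1 + 1j, (0, 1): 1 - 1j, (1, 0): -1 + 1j, (1, 1): -1 - 1j}
    return [m[(bits[i], bits[i + 1])] / (2 ** 0.5)
            for i in range(0, len(bits), 2)]

def idft(freq):
    # Naive O(N^2) inverse DFT; the FPGA uses an IFFT core instead.
    n = len(freq)
    return [sum(freq[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def ofdm_symbol(data_syms, n_fft=16, cp_len=4):
    # RE mapping: data on the first subcarriers, the rest left empty.
    grid = data_syms + [0j] * (n_fft - len(data_syms))
    time = idft(grid)
    return time[-cp_len:] + time  # cyclic prefix = copy of the tail

bits = [1, 0, 0, 1, 1, 1, 0, 0]
sym = ofdm_symbol(qpsk_mod(bits))
print(len(sym))  # 20 samples: 16-point IFFT + 4-sample CP
```

The cyclic prefix is simply the last cp_len time-domain samples repeated at the front, which is what lets the receiver treat the multipath channel as a circular convolution.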
Figure 4. LTE-DL Receiver FPGA Block Diagram
Figure 4 shows the high-level block diagram of our OFDM receiver implementation. The NI 5791 RF transceiver receives the analog signal and converts it to digital samples for processing by the FPGA. This is followed by time synchronization based on the LTE PSS/SSS. The cyclic prefix (CP) is then removed, OFDM symbols are segmented out, and the carrier frequency offset (CFO) compensation module corrects for CFO impairments. A fast Fourier transform (FFT) is then performed on the samples, and reference symbols are extracted for channel estimation and equalization. The equalized modulation symbols for the data and control channels are then fed to a separate decoder FPGA, which is connected to the receiver FPGA using a peer-to-peer (P2P) stream over the PCI Express backplane of the PXI chassis. Figure 5 shows the implementation of the downlink channel decoder. The first stage of the decoding process is to de-multiplex the symbols belonging to the PDSCH and PDCCH. The downlink control information (DCI) is then decoded from the PDCCH control channel elements (CCEs) and passed to the SCH turbo decoder module to decode the PDSCH data, which are finally sent to the MAC using the Small Cell Forum API.
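The channel estimation and equalization stage of Figure 4 can be illustrated with a small model: least-squares estimates at the pilot (RS) positions, linear interpolation across the subcarriers in between (matching the uniform 6-subcarrier RS spacing of Table 1), and one-tap equalization. This is a pure-Python sketch of the general technique, not the FPGA implementation:

```python
# Sketch of pilot-based channel estimation and one-tap equalization.

def estimate_channel(rx_grid, pilot_idx, pilot_val):
    # LS estimate at each pilot position: H = Y / X
    h_p = {i: rx_grid[i] / pilot_val for i in pilot_idx}
    h = [0j] * len(rx_grid)
    # Linear interpolation between consecutive pilots
    for a, b in zip(pilot_idx, pilot_idx[1:]):
        for k in range(a, b + 1):
            t = (k - a) / (b - a)
            h[k] = (1 - t) * h_p[a] + t * h_p[b]
    return h

def equalize(rx_grid, h):
    # Zero-forcing one-tap equalizer per subcarrier
    return [y / hk for y, hk in zip(rx_grid, h)]

# Toy example: a flat channel with gain 0.5 and a 90-degree phase rotation.
true_h = 0.5j
tx_grid = [1 + 0j] * 13                 # pilots on subcarriers 0, 6, 12
rx_grid = [true_h * x for x in tx_grid]
h_est = estimate_channel(rx_grid, [0, 6, 12], pilot_val=1 + 0j)
eq = equalize(rx_grid, h_est)
print(round(eq[3].real, 6), round(eq[3].imag, 6))  # recovers 1.0 0.0
```

In the real receiver, the equalized symbols then cross the P2P stream to the decoder FPGA for demultiplexing and turbo decoding.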
Figure 5. LTE-DL PDCCH and PDSCH Decoder FPGA Block Diagram