Why is Latency Important in SDRs?

Oct 18, 2021

When it comes to software defined radio (SDR) applications that require time-sensitive reception and processing of signals, latency is of great concern. This article discusses the components, devices, and computation that contribute to latency in Per Vices SDRs, as well as how these contributions can be mitigated.

What is latency, and what causes it?

Latency is a term used throughout electrical engineering and computer networking. By definition, latency is the time delay involved in sending data from one point to another. In the context of SDRs, however, a distinction can be made between receive/transmit latency and round trip latency. Receive/transmit latency is the time required for the unidirectional reception or transmission of data between the antenna of the radio chain and the host computer. Round trip time (RTT) latency is the time required for radio data received on one radio chain to be transmitted on another chain, or vice versa. Latency is often measured in milliseconds, and although seemingly negligible, we will see that this short amount of time is actually of great concern in a variety of applications.

Various parts of the SDR and host system contribute to latency. At a high level, latency is found in the electrical components of the radio front end (RFE), within the digital boards containing a field-programmable gate array (FPGA) and digital signal processing (DSP), as well as the network interface and host system. The major transmit (Tx) and receive (Rx) latency contributors are summed up in Equation 1 below:

τ_Tx/Rx = τ_radio + τ_converter + τ_DSP + τ_buffer + τ_net + τ_os

Equation 1: Tx/Rx latency contributions, where τ_radio is the radio group delay, τ_converter the ADC/DAC latency, τ_DSP and τ_buffer the FPGA processing and buffering latencies, and τ_net and τ_os the network and host operating system latencies

When considering round trip latency, we see that this is the sum of the Tx and Rx latencies as well as the processing latency of the user application, for instance in GNU Radio or a program written in C++, as shown in Equation 2 below:

τ_RTT = τ_Tx + τ_Rx + τ_app

Equation 2: RTT latency in a Per Vices SDR, where τ_app is the application/processing latency on the host

It is important to note that latency is not always constant, nor is it invariant to changes in the radio front end, networking, or host system parameters. We can briefly discuss some latency contributors in more detail here:

1. Radio group delay: the receive (Rx) and transmit (Tx) boards of an SDR contain components such as amplifiers, modulators/demodulators, down/up converters, filters, and more, all of which cause delay. It is fair to say that the radio group delay is approximately constant and largely invariant to changes in sample rate or frequency.

2. Converter latency: analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) cause latency due to (de)serialization. This latency is relatively deterministic.

To dive deeper, we can look at the latency of an ADC. Per Vices SDRs use a pipelined ADC architecture (similar to Figure 1), where latency depends on the number of internal stages in the pipeline within the ADC core, as well as on buffering and digital interleave correction. Data latency is inherent to pipelined ADCs, since each sample must propagate through the entire pipeline before all of its associated bits are available for combination in the digital error correction logic and output over the serial JESD204B interface to the FPGA.

With Per Vices' products, the challenge is also to synchronize the many ADCs. The JESD204B Subclass 1 interface supports data alignment down to the sample level across multiple ADCs by using a system reference event signal (SYSREF) to synchronize the internal framing clocks in the SDR’s transmitter and receiver. This provides deterministic latency for devices using the JESD204B link. The propagation delay through different blocks (buffer, multiplexer), the conversion time in the ADC core, interleaving correction, and the delay through the decimation filter all contribute to the overall latency of a pipeline ADC in Per Vices SDRs.

Figure 1: Block diagram of a simplified Pipeline ADC signal chain (from https://www.ti.com/product/ADC32RF45)

Interleaving correction is necessary because a pipeline ADC’s stages operate on interleaved clock phases (while odd stages sample, even stages evaluate), and thus outputs appear at half-clock-cycle intervals as a given input sample proceeds down the pipeline (see Figure 2). This means the digital outputs are not all available simultaneously: output data from the second stage is only available half a clock cycle after the output from the first stage is ready, and so on. Only at the final conversion output, which combines each stage’s digital results in the output latch and corrects for the interleaving errors, is the result ready. This contributes significant delay, since the number of stages often matches the number of ADC bits, resulting in a data latency of several clock cycles.

Figure 2: Data latency due to interleaving clock cycles (time interleaving) in a pipeline ADC
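To get a rough sense of scale, a hedged back-of-envelope estimate of converter latency can be made. Assuming (hypothetically) that each pipeline stage resolves its bits in one conversion clock cycle, and that a few additional cycles are spent on buffering and interleave correction, the converter latency is approximately:

τ_ADC ≈ (N_stages + N_correction) / f_clk

For example, with 12 pipeline stages, 4 cycles of correction and buffering, and a 1 GSPS conversion clock, this works out to roughly 16 ns. These values are illustrative only and are not taken from any specific converter datasheet.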

3. FPGA DSP latency: when the FPGA sends or receives data over the JESD204B serial interface link, a number of processes occur. These include digital up/down conversion, decimation, interpolation, filtering, and the framing/deframing of Ethernet packets for use over the SFP+ ports before the data is sent/received over Ethernet cables to/from the host system. All of this DSP introduces latency, as shown in Figure 3 below.

Figure 3: DSP done on the FPGA results in latency.
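One of these DSP contributions can be estimated directly. A linear-phase FIR decimation filter has a group delay of (N_taps − 1)/2 samples at its input rate. As a hedged example, with a hypothetical 128-tap filter running at a 325 MSPS input rate:

τ_filter = (N_taps − 1) / (2 × f_in) = 127 / (2 × 325 MHz) ≈ 195 ns

The tap count and rate here are placeholders chosen for illustration, not figures from a particular Per Vices FPGA image.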

4. FPGA buffering and (de)framing: the receive and transmit sample buffers within the FPGA are sources of latency that change with sample rate. This is because the sample buffers in Per Vices SDRs are located immediately before and after the 10GBASE-R (de)framing code. In the case of the receive chain, samples accumulate in the sample buffer, at a divisor of the sample rate clock, until enough samples have accumulated to make up a complete UDP packet payload. Those samples are then popped off the FIFO, at the network clock rate, assembled into a complete UDP packet, and immediately transmitted. Thus, the buffering latency can be related to the payload size and sample rate as follows:

τ_buffer = N_payload / f_s

Equation 3: Buffering and (de)framing latency, where N_payload is the number of samples in one UDP packet payload and f_s is the sample rate

When analyzing transmission latency, consideration also needs to be given to the transmit sample buffer. The Per Vices Crimson TNG / Cyan communicates over a packetized Ethernet network; that is, the minimum unit of sample data transmission is a UDP packet, whose payload (made up of a number of complex radio samples as VITA 49 IQ pair data) is generally sized according to the amount of data the application passes to the Crimson TNG / Cyan when sending it data.
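The relationship in Equation 3 can be sketched in a few lines of C++. The sample rate, sample format, and payload size below are hypothetical placeholders rather than Per Vices defaults; the point is to show how the buffering latency scales inversely with the sample rate:

    // buffer_latency.cpp: illustrative sketch of Equation 3 (hypothetical values)
    #include <cstdio>

    int main() {
        const double sample_rate = 10e6;   // user sample rate in samples/s (assumed)
        const int bytes_per_sample = 4;    // 16-bit I + 16-bit Q per complex sample
        const int payload_bytes = 5600;    // assumed UDP payload size

        // Number of complex samples needed to fill one UDP payload
        const int samples_per_packet = payload_bytes / bytes_per_sample;

        // Time to accumulate one full payload in the receive sample buffer
        const double buffer_latency = samples_per_packet / sample_rate;

        std::printf("samples per packet: %d\n", samples_per_packet);
        std::printf("buffering latency:  %.1f us\n", buffer_latency * 1e6);
        return 0;
    }

With these numbers, one packet holds 1400 samples and the buffering latency is 140 microseconds; doubling the sample rate halves it, which is exactly why this latency term varies with sample rate.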

5. Network latency & host system latency: the network latency, and in particular the transmit and operating system latencies, can be highly variable, with a strong dependency on the type of network card, the operating system, and the system load.

When interfacing with an external PC running a traditional operating system, a number of different considerations come into play. In addition to not sharing a common clock (which requires us to address crossing clock domains with a potentially large variance), there are two major sources of variance: the operating system and the 10GBASE-R NIC. When a host PC application calls send(), the UHD library needs to make a number of system calls to the operating system in order to actually send the data over the network, to the correct address, and at the correct time. The time required for these calls to be serviced by the operating system can vary quite substantially, and is thus a non-deterministic source of latency.

In addition to the operating system, different 10GBASE-R Ethernet PHYs can have substantially different latencies, which can themselves vary over time. Part of this behavior is intrinsic to the Ethernet protocol, which requires random back-off periods, and part of it is due to the design and implementation of specific network cards, which are broadly optimized for throughput rather than latency. It is also important to recognize that as the user sample rate increases, the variation in τ_os,Tx and τ_net,Tx can rapidly become greater than the temporal duration represented by the payload of a single UDP packet.
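One way to observe this host-side variability is simply to time individual send calls. The following Linux-specific sketch (the destination address and port are hypothetical) measures the wall-clock duration of repeated sendto() calls on a UDP socket and reports the median, 99th percentile, and maximum, making the operating system and NIC jitter directly visible:

    // send_jitter.cpp: measure per-call send latency variance (illustrative)
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <vector>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        // Hypothetical destination: substitute the SDR's SFP+ interface address.
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in dst{};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(42809);
        inet_pton(AF_INET, "192.168.10.2", &dst.sin_addr);

        char payload[1400] = {};          // one dummy UDP payload
        std::vector<double> times_us(1000);

        for (double &t : times_us) {
            auto t0 = std::chrono::steady_clock::now();
            sendto(fd, payload, sizeof(payload), 0,
                   reinterpret_cast<sockaddr *>(&dst), sizeof(dst));
            auto t1 = std::chrono::steady_clock::now();
            t = std::chrono::duration<double, std::micro>(t1 - t0).count();
        }

        std::sort(times_us.begin(), times_us.end());
        std::printf("median: %.2f us  p99: %.2f us  max: %.2f us\n",
                    times_us[times_us.size() / 2],
                    times_us[times_us.size() * 99 / 100],
                    times_us.back());
        close(fd);
        return 0;
    }

On a loaded, non-real-time system the gap between the median and the maximum is typically large, which is precisely the variance discussed above.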

Applications requiring low latency: Why does this all matter?

There are several reasons why latency is an important issue to consider in an RF system, some of which are a matter of life and death, others a matter of millions of dollars. Regardless of the application, latency is a key metric, as it determines the response time of an SDR at various stages. A known or deterministic latency is a key requirement for modern SDR applications. Four applications of particular interest are mentioned here:

1. High Frequency Trading: this field requires trading bid/ask orders to be executed extremely quickly; a millisecond of latency can mean the difference between profit and loss, as the faster an algorithm can act on data to make a decision, the more money that can potentially be made. Moreover, running trading applications/algorithms on a low latency FPGA can make trading even quicker.

2. Civil and Defense Communications: both of these markets require FPGAs with deterministic, low latency. With new 5G networks coming online, SDRs have been paramount for beamforming and beam steering, which rely on FPGA-based processing to adjust to moving user equipment (UE). Techniques like beamforming are being adopted to improve sensitivity and selectivity in cellular communications systems.

In the defense industry, information is becoming the new weapon on the battlefield. For drones, UAVs, and various internet of military things (IoMT) or internet of battlefield things (IoBT) devices, the ability to communicate with low latency is crucial to ensure a mission goes smoothly and according to plan.

3. Radar and Electronic Warfare (EW): modern radar systems rely on adapting to changing conditions. This includes the so-called adaptive radar countermeasures (ARC), which use processing techniques and algorithms to counter adaptive radar threats through real-time analysis of the threat’s over-the-air observable properties. FPGAs and high speed converters are particularly useful in this application due to their ability to deliver fast responses needed for EW systems.

4. Distributed Networking: distributed networks such as cloud computing offer the ability to work from anywhere, anytime, with full access to a company’s or service’s infrastructure and resources. Latency-sensitive distributed networking applications range from multimedia streaming, video transcoding, and multiplayer network gaming to telesurgery. In order for everything to operate smoothly in these networks, low latency is a must, and that’s where SDRs come in.

Figure 4: Image of equipment/robotics involved in telesurgery or remote surgery

How to Optimize an SDR for Minimal Latency

There are several ways to minimize latency in an SDR system. One of the primary considerations when interfacing with a host PC is reducing the variation in latency. Moving to a real-time operating system (such as the real-time Linux kernel), using the latest kernel drivers (to ensure optimal network card performance), and setting processor and core affinity provide the most immediate benefits. In addition, purchasing a network card optimized for low latency applications provides further benefits.
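As a minimal sketch of the core affinity and real-time scheduling changes just described (Linux-specific, requiring root or CAP_SYS_NICE; the core number and priority are arbitrary illustrative choices):

    // rt_tuning.cpp: pin the streaming thread to one core and request a
    // real-time scheduling policy (illustrative sketch, Linux only)
    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    int main() {
        // Pin the calling thread to CPU core 2 (ideally a core isolated
        // from general-purpose work); the core number is illustrative.
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(2, &mask);
        int rc = pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);
        if (rc != 0)
            std::fprintf(stderr, "setaffinity failed: %d\n", rc);

        // Request SCHED_FIFO so the thread is not preempted by ordinary
        // time-shared tasks; priority 80 is an illustrative value.
        sched_param sp{};
        sp.sched_priority = 80;
        rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
        if (rc != 0)
            std::fprintf(stderr, "setschedparam failed: %d\n", rc);

        // ... run the latency-sensitive streaming loop here ...
        return 0;
    }

Pinning removes the variance introduced by the scheduler migrating the thread between cores, while SCHED_FIFO bounds the variance introduced by preemption.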

Lower latency applications may benefit from a modified FPGA stock image that uses a low latency IP core and reduces the sample buffer size. In addition, interfacing with another FPGA or a real-time, synchronous system allows for reductions in payload size, which can also provide substantial opportunities to reduce transmission latency. In ultra-low latency applications, custom interface protocols using the SFP+ connectors can be implemented to further reduce the latency between the SDR and the host machine application.
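To see why reducing the payload size helps, consider the temporal duration represented by one packet payload, τ_packet = N_payload / f_s. Taking the hypothetical 1400-sample payload from the earlier sketch, one packet lasts 140 µs at 10 MSPS but only 1.4 µs at 1 GSPS, so at high sample rates even modest host-side jitter exceeds a packet’s duration, as noted above; shrinking the payload shortens the minimum unit of transmission and reduces the buffering latency accordingly.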

Of course, for the lowest latency applications, you can consider embedding application logic on the FPGA, which, due to the highly parallel nature of the device, can perform computational/processing tasks much faster than a traditional host system’s CPU. This is particularly valuable in very time-sensitive applications.

Conclusion

As discussed, latency is an unavoidable result of the various devices, components, and computation involved in SDRs. Even so, there are various ways to minimize latency using high performance SDRs.
