Minimizing Latency in Satellite Networks
Latency always stands at the forefront whenever satellite communications are discussed. Forget quality of service, bit error rate, availability, reliability, circuit speeds, geographic coverage or cost-effectiveness: the delay associated with transmitting information over a satellite link always rises to the top, receiving more than its fair share of attention. But latency in a satellite network can vary greatly, and engineers should be aware of the different strategies available for dealing with it.
Latency can be defined several different ways, but in general it is the time it takes a bit of information to traverse a network from its originating point to its final destination. That time needs to be doubled if the latency for a round trip is being calculated. Although we speak of latency as a single figure, it actually is an accumulation of delays that occur in different facets of a network. The term “propagation delay” is widely used but can mean different things to different people. To some, propagation delay is a catch-all, including all cumulative delays in the transmission chain, and is therefore a synonym for latency. In this article, latency in a satellite network will be broken down into two discrete components: propagation delay and processing delay.
Specifically, propagation delay is the finite time it takes a radio wave travelling at the speed of light to cover the distance from the Earth’s surface to a satellite and back to the Earth’s surface. Processing delay, on the other hand, is the cumulative delay caused by the hardware and software in every device the signal passes through.
The one constant in calculating latency is the speed of light; everything else is variable. Although we cannot change the speed of light, operators can change a satellite’s altitude. Not all satellites are in geosynchronous orbit, and it is useful to contrast the propagation delays associated with different orbital altitudes.
There are numerous military and commercial satellites operating in low-Earth orbit (LEO). Orbcomm, Iridium and Globalstar each operate a fleet of commercial satellites which orbit relatively close to Earth. Iridium’s constellation operates at an altitude of 780 kilometers; Orbcomm’s constellation is a bit higher at 825 kilometers; and Globalstar’s satellites are farther up at 1,414 kilometers. The propagation delay seen in a LEO satellite system actually varies since the satellites’ positions change, but would be 4.3, 4.5 and 7.8 milliseconds per hop, respectively, for bent-pipe applications if the satellite is directly overhead. To calculate round-trip propagation delay, these figures should be doubled. (This example is a bit of an oversimplification in the case of Iridium, since a signal can pass through multiple satellites before being sent to a ground station. The times are given to illustrate the differences in propagation delay.) O3b has announced a constellation of five medium-Earth orbit (MEO) satellites which will orbit at an altitude of 8,068 kilometers. The expected propagation delay will be a little more than 100 milliseconds per hop. Geosynchronous satellites, on the other hand, orbit the Earth at about 36,000 kilometers above the equator, and a single hop is calculated to take 270 milliseconds one way or 540 milliseconds for a round trip.
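As a rough sketch, the geometric minimum delay can be computed directly from orbital altitude and the speed of light. The snippet below assumes an idealized bent-pipe hop with the ground stations at the satellite's nadir; the operating figures quoted above differ somewhat because real links rarely enjoy this ideal geometry.

```python
# Illustrative sketch: best-case per-hop propagation delay for a bent-pipe
# link with the satellite directly overhead. Real delays are longer because
# the slant range to a satellite away from zenith exceeds its altitude.

SPEED_OF_LIGHT_KM_S = 299_792.458  # kilometers per second

def bent_pipe_delay_ms(altitude_km: float) -> float:
    """One-way delay in milliseconds for a single hop (uplink plus
    downlink through the satellite, ground stations at nadir)."""
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

# 35,786 km is the nominal geosynchronous altitude.
for name, altitude in [("Iridium (LEO)", 780),
                       ("Orbcomm (LEO)", 825),
                       ("Globalstar (LEO)", 1_414),
                       ("O3b (MEO)", 8_068),
                       ("Geosynchronous", 35_786)]:
    one_way = bent_pipe_delay_ms(altitude)
    print(f"{name:15s} {one_way:6.1f} ms per hop, {2 * one_way:6.1f} ms round trip")
```

Doubling the per-hop value gives the round-trip contribution, exactly as described in the text.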
Processing delay is a cumulative total to which every network component contributes. Each device in a network slows the overall end-to-end flow of information, adding some tangible amount to the total processing delay. Since the speed of light is fixed and the choice of satellites often is limited, satellite engineers should turn their attention to network design — and individual components and their associated configuration — in their quest to minimize network latency. As noted earlier, every device interconnected in a network has some adverse effect on end-to-end latency. Many devices are highly configurable and designed to serve a wide range of applications. Keep in mind that a different configuration may result in lower latency, and there are multiple areas to consider.
Although the world is rapidly moving toward Ethernet connectivity, there are still a number of legacy applications which utilize serial connections. Recently, an unnamed Fortune 500 company had a large number of remote terminal units (RTU) on its pipeline communicating at 1,200 bits per second (BPS). In this application, the pipeline control system would poll each RTU, with the RTU sending back 200 bytes of data over the VSAT link. The entire character string had to be delivered to the satellite modem before the transmission would start. In this case, it took 1.66 seconds just to transfer the data from the RTU into the VSAT. Increasing the speed of the serial port to 9,600 BPS reduced the serialization delay to 0.20 seconds. By increasing the speed of the serial port, the company saved almost 1.5 seconds per transaction.
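The arithmetic behind the RTU example can be sketched in a few lines. The figures match the article's only if each byte costs 10 bits on the wire, which assumes standard asynchronous framing (one start bit, eight data bits, one stop bit):

```python
# Sketch of the serialization-delay arithmetic from the RTU example.
# Assumes asynchronous serial framing: 10 bits on the wire per byte
# (start bit + 8 data bits + stop bit).

BITS_PER_BYTE_ON_WIRE = 10

def serialization_delay_s(payload_bytes: int, port_speed_bps: int) -> float:
    """Time to clock the payload out of the serial port into the modem."""
    return payload_bytes * BITS_PER_BYTE_ON_WIRE / port_speed_bps

slow = serialization_delay_s(200, 1_200)
fast = serialization_delay_s(200, 9_600)
print(f"1,200 BPS: {slow:.2f} s   9,600 BPS: {fast:.2f} s   saved: {slow - fast:.2f} s")
```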
Forward Error Correction and Modulation
Modulation techniques have improved dramatically over the years, so much so that most include some sort of forward error correction. Forward error correction involves adding extra data to the information to be sent. The redundant data allows the receiving modem to detect and correct data which has been garbled during transmission, thereby minimizing the amount of data which needs to be retransmitted. It would be incorrect to say that different modulation schemes cause latency. In reality, it is the forward error correction associated with the modulation scheme which causes delay. Since forward error correction and modulation go hand-in-hand, it is understandable that engineers sometimes link modulation schemes and latency in the same sentence.
The forward error correction schemes used in satellite communications have evolved over the last two decades. Viterbi Coding gave way to Reed-Solomon Coding, which was then topped by Turbo Codes. As the size of carriers used in broadcasting grew, the need for a powerful new forward error correction scheme was recognized. Low-density parity check (LDPC) code was adopted by the DVB (digital video broadcast) committee and is now part of the DVB-S2 standard. There is a three-dimensional trade-off when you look at different forward error correction schemes. The factors include: the processing gain the forward error correction provides; the depth of the forward error correction (how big the blocks of data the forward error correction must digest are); and the bandwidth of the link. As block sizes get bigger, forward error correction schemes work better, but at the expense of latency. The bigger the block size, the longer it takes to do the math.
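One component of that latency cost is easy to quantify: the decoder cannot start until an entire code block has arrived, so simply accumulating the block takes block-size divided by link-rate seconds, before any decoding math is done. A sketch, using the DVB-S2 LDPC frame sizes (64,800-bit normal frames and 16,200-bit short frames):

```python
# Rough sketch of why block size drives FEC latency: the decoder must
# receive a whole code block before it can begin, so accumulating the
# block alone costs block_bits / link_rate seconds (decoding time is
# additional on top of this).

def block_fill_time_ms(block_bits: int, link_rate_bps: int) -> float:
    """Time to accumulate one full FEC block at a given link rate."""
    return block_bits / link_rate_bps * 1000

# DVB-S2 LDPC defines 64,800-bit normal frames and 16,200-bit short frames.
for label, bits in [("DVB-S2 normal frame", 64_800),
                    ("DVB-S2 short frame ", 16_200)]:
    print(f"{label}: {block_fill_time_ms(bits, 2_000_000):5.1f} ms at 2 Mbps, "
          f"{block_fill_time_ms(bits, 512_000):6.1f} ms at 512 kbps")
```

Note how the penalty grows as the link rate shrinks, which is why large-block codes hurt most on narrowband links.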
Broadcast applications are affected by the additional delay caused by LDPC coding, but since their information flow generally is one way, several hundred milliseconds of added delay is not critical. Adding several hundred milliseconds of latency to a Voice over IP application or cellular backhaul, on the other hand, has negative consequences and adversely affects user satisfaction.
Comtech EF Data recently introduced a modified version of LDPC, dubbed VersaFEC, which uses smaller forward error correction block sizes. This is particularly important in lower data rate satellite links, up to 2 megabits per second. By reducing the amount of data to be processed at one time, a typical 200-millisecond forward error correction-induced delay can be reduced to 50 milliseconds. In this case, 300 milliseconds of latency can be shaved from the round-trip total simply by changing forward error correction schemes.
Forward error correction schemes are important to help balance power and bandwidth but they can add significant delay. Check all of your options before making a final decision.
TCP is a chatty protocol, one that requires a lot of back and forth between the server and the remote location, and using it over a satellite link can be a challenge. One of the problems is the protocol’s requirement to exchange acknowledgements (acks) to determine the amount of bandwidth that is available before any data is exchanged between two points. This requires three round trips, or six satellite hops of propagation delay, just to start the data flowing. If the connection is idle for a short period, the acks must be sent again. The end result is extremely slow loading of Web pages and user frustration.
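The cost of that start-up chatter scales directly with round-trip time, which is why it stings on a geosynchronous link but goes unnoticed terrestrially. A back-of-the-envelope sketch, using the three-round-trip figure from the text and illustrative RTT values:

```python
# Back-of-the-envelope sketch: start-up cost of TCP's handshake and
# bandwidth discovery over links with different round-trip times.
# The three-round-trip figure follows the description in the text;
# the RTT values are illustrative.

ROUND_TRIPS_BEFORE_DATA = 3

def startup_delay_ms(round_trip_ms: float) -> float:
    """Delay before useful data flows, given one link round-trip time."""
    return ROUND_TRIPS_BEFORE_DATA * round_trip_ms

for link, rtt in [("terrestrial fiber", 40),
                  ("LEO satellite", 60),
                  ("GEO satellite", 540)]:
    print(f"{link:17s} {startup_delay_ms(rtt):5.0f} ms before data flows")
```

Over a 540-millisecond GEO round trip, more than a second and a half elapses before the first byte of payload arrives.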
TCP Fast Start is a straightforward method used to minimize the start-up delay. Since the bandwidth of a satellite network is known, the bandwidth discovery process can be eliminated. At either end of the network, TCP is spoofed, which minimizes the time needed to download a Web page. Many major manufacturers have integrated TCP Fast Start into their products.
In addition to TCP Fast Start, caching technology has progressed rapidly over the last decade. A cache is a temporary store of data. Caching technology uses a software proxy to automatically download Web pages a user has visited and then stores them so the data can be accessed locally instead of retrieving it again over the network. In the case of a satellite network, where bandwidth is limited, the collection of data can be done during off-peak hours. Caching not only enhances the user experience because the Web pages load extremely quickly, it also reduces the demand for bandwidth during peak hours.
Caching software downloads content from Web sites that users visit with some regularity. But what about other sites they have never been to? TCP Pre-fetch is another technique which engineers can use to reduce latency. HTML pages displayed on a Web site generally are made up of several files. Hyper Text Transfer Protocol (HTTP) downloads these files sequentially. Pre-fetching involves a software proxy which begins downloading linked content before an end user requests it. When a user clicks on a link embedded in a Web page, the content already has been downloaded, thereby reducing the perceived download time.
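The first step a pre-fetch proxy performs is scanning the page for linked objects it could start downloading early. A minimal sketch of that step using Python's standard-library HTML parser (the sample page is hypothetical, and the actual fetching of each URL is omitted):

```python
# Minimal sketch of the first step of HTTP pre-fetching: scanning a page
# for the linked objects (images, scripts, stylesheets, anchors) that a
# proxy could begin downloading before the browser asks for them.
# The sample HTML is hypothetical; a real proxy would then fetch each URL.

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    # Tag -> attribute holding the URL worth pre-fetching.
    FETCHABLE = {"img": "src", "script": "src", "link": "href", "a": "href"}

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        wanted = self.FETCHABLE.get(tag)
        for name, value in attrs:
            if name == wanted and value:
                self.urls.append(value)

page = """<html><body>
<img src="/logo.png"><script src="/app.js"></script>
<a href="/news.html">News</a>
</body></html>"""

collector = LinkCollector()
collector.feed(page)
print(collector.urls)  # candidates for the proxy to pre-fetch
```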
The throughput of a TCP link is affected by window size; however, sending large windows over a satellite link increases the number of unacknowledged bits which are outstanding. The latency of a satellite link generally forces network engineers to use less than favorable window sizes, which in turn equates to poor link utilization. There is hardware available which logically splits a TCP session into multiple sessions for transmission across the network, where they are reassembled back into a single session on the other end. This approach basically increases the effective window size by increasing the number of sessions.
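The window-size ceiling can be quantified with the bandwidth-delay product: a sender can have at most one window of data in flight per round trip, so throughput can never exceed window size divided by round-trip time, no matter how fast the link is. A sketch, assuming the classic 64 KB maximum window (without TCP window scaling) and a 540-millisecond GEO round trip:

```python
# Sketch of why window size caps TCP throughput over a satellite link:
# at most one window of data can be unacknowledged ("in flight") per
# round trip, so throughput <= window_bytes * 8 / RTT regardless of the
# raw link rate.

def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput for a given window and round trip."""
    return window_bytes * 8 / rtt_s

GEO_RTT_S = 0.540        # round trip over a geosynchronous link
CLASSIC_WINDOW = 65_535  # largest window without TCP window scaling

ceiling = max_throughput_bps(CLASSIC_WINDOW, GEO_RTT_S)
print(f"Ceiling with a 64 KB window: {ceiling / 1e6:.2f} Mbps")

# Splitting one session into N parallel sessions multiplies the
# effective window, raising the ceiling N-fold.
print(f"With 4 parallel sessions:    {4 * ceiling / 1e6:.2f} Mbps")
```

Under one megabit per second from a single session, however large the transponder: this is the utilization problem the session-splitting hardware attacks.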
Two interesting quirks of TCP are how the protocol handles latency and packet loss. Whenever a packet is sent, an acknowledgement of some type will return letting the sender know that the packet was either delivered or that delivery failed. If the acknowledgement is delayed getting back to the sender, the protocol assumes that the network is congested and automatically throttles back the download speed. Satellite delay is misinterpreted as network congestion and, as a result, throughput suffers. If a packet is lost in a TCP network, the sender waits for three duplicate acknowledgments confirming that delivery failed. At that point, the sending server goes back to the missing packet and resends everything from that point forward.
In the late 1990s a suite of protocols was developed by NASA and the U.S. Department of Defense for commanding spacecraft. The result was Space Communications Protocol Standards (SCPS). A few years later, a group of engineers who had worked on the original SCPS (pronounced skips) specification realized that the protocols had applicability in commercial and military satellite networks. The team formed a company called Global Protocols, productized the software and added some new features. The resulting software, called SkipWare, is a version of TCP with extensions that have been adapted to the stresses and rigors of space communications. SkipWare elegantly solves both problems noted above. Since the software knows how much bandwidth is available and what the round-trip time is, it does not automatically neck down window size. The net result is substantially higher link utilization, which in turn reduces latency.
Global Protocols included a version of an acknowledgement scheme known as selective negative acknowledgement (SNACK) in SkipWare. Instead of going back to the missing packet and retransmitting everything from that point forward, SNACK resends only the missing packet, thereby saving retransmit time. SNACK also allows much larger window sizes to be sent, which in turn increases link utilization. SkipWare is an important link utilization tool, improving link utilization rates from 25 percent to near 90 percent, precisely because it circumvents the problems caused by the latency of a satellite link. Due to the significant performance enhancement it provides, manufacturers such as Comtech EF Data have made SkipWare the standard acceleration technology powering their line of Performance Enhancing Proxies.
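The retransmission savings are easy to illustrate. With go-back-N-style recovery, one lost packet forces the sender to resend every packet from the loss onward; with selective retransmission, as in SNACK, only the missing packet crosses the link again. A sketch with illustrative packet counts and a typical 1,460-byte TCP payload:

```python
# Sketch contrasting go-back-N-style recovery (resend everything from the
# lost packet forward) with selective retransmission as in SNACK (resend
# only the missing packet). Packet numbers and window size are illustrative.

PACKET_SIZE_BYTES = 1_460  # typical TCP payload per packet

def go_back_n_bytes(lost_packet: int, window_packets: int) -> int:
    """Bytes retransmitted when everything from the loss onward is resent."""
    return (window_packets - lost_packet) * PACKET_SIZE_BYTES

def snack_bytes() -> int:
    """Bytes retransmitted when only the missing packet is resent."""
    return PACKET_SIZE_BYTES

# Packet 3 of a 44-packet window (roughly a 64 KB window) is lost:
wasted = go_back_n_bytes(lost_packet=3, window_packets=44)
print(f"go-back-N resends {wasted:,} bytes; SNACK resends {snack_bytes():,}")
```

Every needlessly retransmitted byte also re-pays the full satellite propagation delay, so the difference compounds on long links.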
The terms WAN optimizer and WAN accelerator cover a wide assortment of vendors and products — all designed to improve the performance characteristics of a network. While most work at Layer 4, some specifically function at the application layer. One vendor which comes to mind is Riverbed Technology, which recently was recognized as an industry leader in Gartner’s Magic Quadrant report for WAN Optimization Controllers. Riverbed uses three techniques to enhance network performance: transport streamlining, data streamlining and application streamlining. The first two are similar to the Layer 4 techniques described above; application streamlining is completely different. Application streamlining modules eliminate upcoming round trips that would have been generated by the application protocol. Reducing round trips can be necessary even with a very efficient implementation of TCP, because otherwise the inefficiency of the application-level protocol can overwhelm any improvements made at the transport layer. Application streamlining modules can eliminate up to 98 percent of the round trips taken by specific applications, such as Microsoft Office, Microsoft Exchange, Lotus Notes and Oracle 11i, delivering a significant improvement in throughput in addition to what data streamlining and transport streamlining already provide.
Overall network latency is the combination of propagation delay and processing delay. If moving from a geosynchronous satellite to one in low-Earth or medium-Earth orbit is not an option, the only other way to minimize latency is to reduce processing delay. Map out a detailed network diagram and evaluate the processing delay associated with every device. Keep in mind that there are options which may shave significant time off the latency you are experiencing, thereby improving network performance.