Latency always stands at the forefront whenever satellite communications is discussed. Forget quality of service, bit error rate, availability, reliability, circuit speeds, geographic coverage or the cost-effectiveness of a network: the delay associated with transmitting information over a satellite link always rises to the top, receiving more than its fair share of attention. But latency in a satellite network can vary greatly, and engineers should be aware of the strategies available to manage it.
Latency can be defined several different ways, but in general it is the time it takes a bit of information to traverse a network from its originating point to its final destination. This time is doubled when round-trip latency is being calculated. Although we speak of latency as a single figure, it is actually an accumulation of delays occurring in different facets of a network. The term "propagation delay" is widely used but can mean different things to different people. To some, propagation delay is a catch-all, including all cumulative delays in the transmission chain, and is therefore a synonym for latency. In this article, latency in a satellite network will be broken down into two discrete components: propagation delay and processing delay.
Specifically, propagation delay is the finite time it takes a radio wave travelling at the speed of light to cover the distance from the Earth’s surface to a satellite and back to the Earth’s surface. Processing delay, on the other hand, is the cumulative delay caused by the hardware and software in every device the signal passes through.
The one constant in calculating latency is the speed of light; everything else is variable. Although we cannot change the speed of light, operators can change a satellite’s altitude. Not all satellites are in geosynchronous orbit, and it is useful to contrast the propagation delays associated with different orbital altitudes.
There are numerous military and commercial satellites operating in low-Earth orbit (LEO). Orbcomm, Iridium and Globalstar each operate a fleet of commercial satellites which orbit relatively close to Earth. Iridium’s constellation operates at an altitude of 780 kilometers; Orbcomm’s constellation is a bit higher at 825 kilometers; and Globalstar’s satellites are farther up at 1,414 kilometers. The propagation delay seen in a LEO satellite system varies as the satellites move, but for a bent-pipe application with the satellite directly overhead a single hop works out to roughly 5.2, 5.5 and 9.4 milliseconds, respectively. To calculate round-trip propagation delay, these figures should be doubled. (This example is a bit of an oversimplification in the case of Iridium, since a signal can pass through multiple satellites before being sent to a ground station. The times are given to illustrate the differences in propagation delay.) O3b has announced a constellation of five medium-Earth orbit (MEO) satellites which will orbit at an altitude of 8,068 kilometers; the expected propagation delay is roughly 54 milliseconds per hop, or a little more than 100 milliseconds for a round trip. Geosynchronous satellites, on the other hand, orbit the Earth at about 36,000 kilometers above the equator. A single hop is commonly quoted at 270 milliseconds one way, or 540 milliseconds for a round trip, reflecting realistic slant ranges to ground stations away from the sub-satellite point; with the satellite directly overhead, the minimum is closer to 240 milliseconds.
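With the satellite directly overhead, a bent-pipe hop covers twice the orbital altitude at the speed of light, so the per-hop figures above can be reproduced in a few lines of Python. The altitudes are those cited in this article; 35,786 kilometers is the standard geosynchronous altitude used here for illustration:

```python
# Per-hop propagation delay for a bent-pipe link with the satellite at zenith.
# Delay = 2 * altitude / c; real links see longer slant ranges at low elevation.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def hop_delay_ms(altitude_km: float) -> float:
    """One hop = ground -> satellite -> ground, satellite directly overhead."""
    return 2 * altitude_km / C_KM_PER_S * 1000  # milliseconds

constellations = {
    "Iridium (LEO)": 780,
    "Orbcomm (LEO)": 825,
    "Globalstar (LEO)": 1_414,
    "O3b (MEO)": 8_068,
    "Geosynchronous": 35_786,
}

for name, alt in constellations.items():
    d = hop_delay_ms(alt)
    print(f"{name:15s} {d:6.1f} ms per hop, {2 * d:6.1f} ms round trip")
```

Note that these are zenith minimums; the widely quoted 270-millisecond geosynchronous hop allows for the longer slant path to stations at lower elevation angles.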
Processing delay is the cumulative total to which every network component contributes. Each device in a network slows the overall end-to-end flow of information by some tangible amount. Since the speed of light is fixed and the choice of satellites is often limited, satellite engineers should turn their attention to network design, including individual components and their associated configuration, in their quest to minimize network latency. As noted earlier, every device interconnected in a network has some adverse effect on end-to-end latency. Many devices are highly configurable and designed to serve a wide range of applications; keep in mind that a different configuration may result in lower latency, and there are multiple areas to consider.
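As a sketch of how these delays accumulate, end-to-end latency for a single hop can be modeled as the propagation delay plus the sum of per-device processing delays. The device names and millisecond figures below are illustrative assumptions, not measurements from any particular equipment:

```python
C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_latency_ms(altitude_km: float, processing_delays_ms: list) -> float:
    """Zenith propagation for one bent-pipe hop plus per-device processing."""
    propagation = 2 * altitude_km / C_KM_PER_S * 1000  # milliseconds
    return propagation + sum(processing_delays_ms)

# Hypothetical processing delays for devices in the chain (illustrative only).
chain_ms = [
    2.0,  # router queuing and serialization at the originating site
    5.0,  # satellite modem framing and FEC encoding
    5.0,  # receive-side demodulation and decoding
    1.0,  # router at the destination site
]

geo = one_way_latency_ms(35_786, chain_ms)
print(f"GEO one-way latency: {geo:.1f} ms "
      f"({geo - sum(chain_ms):.1f} ms propagation + "
      f"{sum(chain_ms):.1f} ms processing)")
```

The point of the breakdown is that the processing term is the only one an engineer can shrink through configuration, which is why the remainder of this discussion focuses on network design.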