
Bandwidth and Latency: It’s the Latency, Stupid (Part 2)

[Last week in TidBITS-367, Stuart examined issues of latency and delay in typical modem-based Internet communications. This week, Stuart offers general observations on how bandwidth can be used more efficiently and how it affects the overall latency of a connection.]

Last week, I asked readers to imagine a world where the only network connection you can get to your house is a modem running over a telephone line at 33 Kbps. Now, imagine that this is not enough bandwidth for your needs. You have a problem.

Making Bandwidth is Easy — Technically, the solution is simple. You can install two telephone lines and use them in parallel, giving you a total of 66 Kbps. If you need more capacity you can install ten telephone lines, for a total of 330 Kbps. Sure, it’s expensive, having ten modems in a pile is inconvenient, and you may have to write networking software to share the data evenly between the ten lines. But if it was sufficiently important, it could be done. People with ISDN lines already do this using a process called BONDING (which is short for "Bandwidth ON Demand INteroperability Group"), which enables them to use two 64 Kbps ISDN channels in parallel for a combined throughput of 128 Kbps.
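To illustrate the idea, here’s a rough sketch in Python (mine, not how BONDING hardware actually works) of dealing a byte stream out round-robin across two parallel lines. The aggregate throughput doubles, but note that the latency of each individual line is untouched:

  # Rough sketch: inverse multiplexing ("bonding") parallel lines.
  # Splitting traffic across channels multiplies aggregate throughput,
  # but the latency of each individual channel is unchanged.

  def split_round_robin(data, num_channels):
      """Deal a byte stream out across num_channels parallel links."""
      channels = [bytearray() for _ in range(num_channels)]
      for i in range(len(data)):
          channels[i % num_channels].append(data[i])
      return [bytes(c) for c in channels]

  payload = bytes(1000)                  # 1,000 bytes to send
  parts = split_round_robin(payload, 2)  # two 33 Kbps lines in parallel

  per_line_bps = 33_000
  bonded_ms = max(len(p) * 8 for p in parts) / per_line_bps * 1000
  single_ms = len(payload) * 8 / per_line_bps * 1000
  print(f"Two bonded lines: {bonded_ms:.0f} ms")   # about 121 ms
  print(f"One line:         {single_ms:.0f} ms")   # about 242 ms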

Getting additional bandwidth is possible, even if it’s not always economical. Equally important, it’s also easy to make limited bandwidth go further.

Compression — Compression is an easy way to increase bandwidth. You can apply general purpose compression (such as StuffIt) to the data. Even better, you can apply data-specific compression (such as JPEG for still images and MPEG for video), which can provide much higher compression ratios.

These compression techniques trade CPU power for lower bandwidth requirements. However, there’s no equivalent way to spend extra CPU power to make up for poor latency.
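Here’s a small sketch of what that trade-off looks like in practice, using Python’s zlib as a stand-in for a general-purpose compressor like StuffIt (the figures are illustrative, not benchmarks):

  # Minimal sketch of the bandwidth/CPU trade-off: compressing costs some
  # CPU time but cuts the number of bytes on the wire. No equivalent
  # expenditure of CPU time can cut the link's latency.
  import time
  import zlib

  text = b"Latency matters at least as much as bandwidth. " * 500
  link_bps = 33_000                      # 33 Kbps modem link

  start = time.perf_counter()
  compressed = zlib.compress(text, 9)
  cpu_ms = (time.perf_counter() - start) * 1000

  print(f"Uncompressed: {len(text)} bytes, "
        f"{len(text) * 8 / link_bps:.1f} s on the wire")
  print(f"Compressed:   {len(compressed)} bytes, "
        f"{len(compressed) * 8 / link_bps:.1f} s on the wire "
        f"(plus {cpu_ms:.1f} ms of CPU time)")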

All modern modems utilize internal compression algorithms. Unfortunately, having your modem do compression is nowhere near as good as having your computer do it. Your computer has a powerful, expensive, fast CPU, whereas your modem has a feeble, cheap, slow processor. In addition, as we noted last week, a modem must hold on to data until it has a block big enough to compress effectively. This requirement adds latency, and once added, latency can’t be eliminated. Also, since the modem doesn’t know what kind of data you’re sending, it can’t use superior data-specific compression algorithms. In fact, since most images and sounds on Web pages are already compressed, a modem’s attempt to compress the data a second time adds more latency without any benefit.
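To put a rough number on that buffering delay (the figures below are illustrative assumptions, not measurements of any particular modem): if a modem waits for, say, a 256-byte block before compressing and transmitting, the first byte of that block sits in a buffer while the rest trickle in.

  # Illustrative arithmetic only: latency added by waiting to fill a
  # compression block before anything goes out on the line.
  block_bytes = 256          # assumed compression block size
  incoming_bps = 28_800      # rate at which data arrives from the computer

  fill_ms = block_bytes * 8 / incoming_bps * 1000
  print(f"Added buffering delay: {fill_ms:.0f} ms")   # about 71 ms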

This is not to say that having a modem do compression never helps. When the host software at the endpoints of the connection is not smart and doesn’t compress data appropriately, then the modem’s own compression can compensate somewhat and improve throughput. The bottom line is that modem compression only helps dumb software, and it hurts smart software by adding extra delay.

Send Less Data — Another way to cope with limited bandwidth is to write programs that take care not to waste bandwidth. For example, to reduce packet size, wherever possible Bolo (my interactive network tank game) uses bytes instead of 16-bit or 32-bit words.

<http://rescomp.stanford.edu/~cheshire/Bolo.html>
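As a hypothetical illustration (this is not Bolo’s actual packet format), here’s the difference between packing three small fields as 32-bit words and packing them as single bytes:

  # Hypothetical packet fields: x position, y position, heading.
  import struct

  wide = struct.pack("!III", 200, 150, 90)    # three 32-bit words: 12 bytes
  narrow = struct.pack("!BBB", 200, 150, 90)  # three single bytes:  3 bytes

  print(len(wide), "bytes vs.", len(narrow), "bytes")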

For many kinds of interactive software like games, it’s not important to carry a lot of data. What’s important is that when the little bits of data are delivered, they are delivered quickly. Bolo was originally developed running over serial ports at 4800 bps and could support eight players that way. Over 28.8 Kbps modems it can barely support two players with acceptable response time. Why? A direct-connect serial port at 4800 bps has a latency of 2 ms. A 28.8 Kbps modem has a latency of 100 ms, 50 times worse than the 4800 bps serial connection.

Software can cope with limited bandwidth by sending less data. If a program doesn’t have enough bandwidth to send high-resolution pictures, it could use a lower resolution. If a program doesn’t have enough bandwidth to send colour images, it could send black-and-white images, or images with dramatically reduced colour detail (which is what NTSC television does). If there isn’t enough bandwidth to send 30 frames per second, the software could send 15 fps, 5 fps, or fewer.
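A minimal sketch of that last trade-off might look like this: pick the highest frame rate whose data fits in the available bandwidth. The frame sizes and rates here are made-up figures, chosen only to show the shape of the decision:

  # Pick the highest frame rate that the available bandwidth can carry.
  def choose_frame_rate(available_bps, bits_per_frame):
      for fps in (30, 15, 5, 1):
          if fps * bits_per_frame <= available_bps:
              return fps
      return 0   # not even one frame per second fits

  print(choose_frame_rate(available_bps=28_800, bits_per_frame=4_000))  # 5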

These trade-offs aren’t pleasant, but they are possible. You can pay for more bandwidth or send less data to stay within your limited available bandwidth. However, if the latency is not good enough to meet your needs, you don’t have the same options. Running multiple circuits in parallel won’t improve latency, and sending less data won’t help either.

Caching — One of the most effective techniques for improving computer and network performance is caching. If you visit a Web site, your browser can copy the text and images to your hard disk. If you visit the site again, the browser verifies that the stored copies are up-to-date, and – if so – the browser just displays the local copies.

Checking the date and time a file was last modified is a tiny request to send across the network – so small that modem throughput makes no difference. Latency is all that matters.
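In HTTP terms, that check is a conditional request. Here’s a minimal sketch of a browser-style validation using an If-Modified-Since header (the URL and date are placeholders); the interesting reply is the tiny "304 Not Modified":

  import urllib.request
  from urllib.error import HTTPError

  req = urllib.request.Request(
      "http://www.example.com/logo.gif",   # placeholder for a cached image
      headers={"If-Modified-Since": "Sat, 29 Mar 1997 00:00:00 GMT"},
  )
  try:
      with urllib.request.urlopen(req) as resp:
          print("Changed; fetch the new copy:", resp.status)
  except HTTPError as err:
      if err.code == 304:
          print("Not modified; display the local copy")
      else:
          raise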

Recently, some companies have begun providing CD-ROMs of entire Web sites to speed Web browsing. When browsing these Web sites, all the Web browser does is check the modification date of each file it accesses to verify that the CD-ROM copy is up-to-date. Only files that have changed since the CD-ROM was made need to be downloaded from the Web. Since most large files on a Web site are images, and since images on a Web site change far less frequently than the HTML text files, in most cases little data has to transfer.

Once again, because the Web browser is primarily sending small modification-date queries to the Web server, latency determines performance and throughput is virtually irrelevant.

Latency Workarounds — ISDN has a latency of about 10 ms. Its throughput may be twice that of a modem, but its latency is ten times better, and that’s the key reason why browsing the Web over an ISDN link feels faster than over a modem.

One reason standard modems have such poor latency is that they don’t know what you’re doing with your computer, or why. An external modem is usually connected through a serial port, and all it sees is an unstructured stream of bytes coming down the serial port.

Ironically, the much-maligned Apple GeoPort Telecom Adapter may solve this problem. The Apple GeoPort Telecom Adapter connects your computer to a telephone line, but it’s not a modem. Instead, all modem functions are performed by software running on the Mac. The main reason for the criticism is that this extra software takes up memory and slows down the Mac, but in theory it could offer an advantage no external modem could match. When you use the GeoPort Telecom Adapter, the modem software is running on the same CPU as your TCP/IP software and your Web browser, so it could know exactly what you are doing. When your Web browser sends a TCP packet, the GeoPort modem software doesn’t have to mimic the behaviour of current modems. It could take that packet, encode it, and send it over the telephone line immediately, with almost zero latency.

Sending 36 bytes of data, a typical game-sized packet, over an Apple GeoPort Telecom Adapter running at 28.8 Kbps could take as little as 10 ms, making it as fast as ISDN, and ten times faster than the best modem you can buy today. For less than the price of a typical modem, the GeoPort Telecom Adapter could give you Web browsing performance close to that of ISDN. Even better, people who already own Apple GeoPort Telecom Adapters would need only a software upgrade.
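The 10 ms figure is just the time needed to clock the bits onto the line; here’s the arithmetic:

  # Serialization delay for a small game packet at modem speed. A software
  # modem that sends packets immediately adds little beyond this; the
  # roughly 100 ms figure for conventional modems is the measured latency
  # discussed last week.
  packet_bytes = 36
  line_bps = 28_800

  print(f"{packet_bytes * 8 / line_bps * 1000:.0f} ms")   # 10 ms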

Bandwidth Still Matters — Having said all this, you should not conclude that I believe bandwidth is unimportant. It is very important, but not in the way most people think. Bandwidth is valuable for its own sake, but also for its effect on overall latency – the important issue is the total end-to-end transmission delay for a data packet.

Remember the example in the first part of this article comparing the capacity of a Boeing 747 to a 737? Here’s a real-world example of the same issue. Many people believe that a private 64 Kbps ISDN connection is as good as (or even better than) a 1/160 share of a 10 Mbps Ethernet connection. Telephone companies argue that ISDN is as good as new technologies like cable modems because, though cable modems have much higher bandwidth, that bandwidth is shared among many users, so the average works out the same. This reasoning is flawed, as the following example will show.

Say we have a game where the data representing the game’s overall state amounts to 40K. We have a game server, and in this simple example, the server transmits the entire game state to a player once every ten seconds. That’s 40K every 10 seconds, an average of 4K per second or 32 Kbps. That’s only half the capacity of a 64 Kbps ISDN line, and 160 users doing this on an Ethernet network will utilize only half the capacity of the Ethernet. So far so good: both links are running at 50 percent capacity, so the performance should be the same, right?

Wrong. On the Ethernet, when the server sends the 40K to a player, the player can receive that data as little as 32 ms later (40K / 10 Mbps). If the game server is not the only machine sending packets on the Ethernet, then there could be contention for the shared medium, but even in that case the average delay before the player receives the data is only 64 ms. On the ISDN line, when the server sends the 40K to a player, the player receives that data five seconds later (40K / 64 Kbps). In both cases the users have the same average bandwidth, but the actual performance is different. In the Ethernet case, the player receives the data almost instantly because of the connection’s high capacity. But in the ISDN case, the connection’s lower capacity means the information is already 5 seconds old when the player receives it.
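The arithmetic behind those two numbers is simply the time it takes a single 40K burst to finish arriving on each link:

  # Time for one 40K burst of game state to arrive, per link
  # (taking 40K as 40,000 bytes, as in the figures above).
  burst_bits = 40_000 * 8

  ethernet_bps = 10_000_000   # 10 Mbps shared Ethernet
  isdn_bps = 64_000           # 64 Kbps private ISDN line

  print(f"Ethernet: {burst_bits / ethernet_bps * 1000:.0f} ms")   # 32 ms
  print(f"ISDN:     {burst_bits / isdn_bps:.0f} s")               # 5 s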

The problem is that sending a 40K chunk every ten seconds and sending data at a uniform rate of 4K per second are not the same thing. If they were, ISDN, ATM, and other telephone company schemes would be good ideas. Telephone companies assume all communications are like the flow of fluid in a pipe. You just tell them the rate of flow you need, and they tell you how big the pipe has to be. Voice calls work like the flow of fluid in a pipe, but computer data does not. Computer data comes in lumps. A common mistake is to think that sending 60K of data once per minute is exactly the same as sending 1K per second. It’s not. A 1K per second connection may be sufficient capacity to carry the amount of data you’re sending, but that doesn’t mean it will deliver the entire 60K in a timely fashion. It won’t. By the time the lump finishes arriving, it will be one minute old.

The conclusion here is obvious: the capacity of a connection has a profound effect on its performance. If you’re given the choice between a low-bandwidth private connection and a small share of a higher-bandwidth shared connection, take the small share. Again, this is painfully obvious outside the computer world. If a government said it would build either a large shared freeway or a million tiny, separate footpaths, one reserved for each citizen, which would you vote for?

What Can You Do? — I’ve received numerous messages from people who want to know what they can do to spread the word about these latency and bandwidth problems. I’ve found that calling modem vendors directly is futile, so I recommend that you circulate these two articles to friends who might find them interesting, and most important, send letters to editors of major magazines asking them to include latency times via ping and traceroute when testing modems for review. Perhaps if we can raise awareness about the horrible latency problems that all modems suffer, modem manufacturers will start putting effort into decreasing latency instead of just increasing throughput.

[Portions of this article come from Stuart Cheshire’s white paper entitled "Latency and the Quest for Interactivity," commissioned by Volpe Welty Asset Management, L.L.C.]

<http://rescomp.stanford.edu/~cheshire/papers/LatencyQuest.html>

