Monday, 10 March 2003

Commons Sense

Ed Felten wonders why the net hasn't suffered a Tragedy of the Commons.
The key reason is a deep one: people are smarter than economists think they are, and they find solutions that resolve such tragedies.

In the case of TCP, the back-off scheme is actually a very good solution for using the available bandwidth efficiently. If you ignore the back-off requirement, you get more dropped packets and thus need more retransmissions. An endpoint also has no way to tell whether a packet was dropped because of congestion from competing traffic or because a bottleneck link was simply full, so pushing other traffic out of the way may not even be possible. And you need co-operation from both ends to do it, which constrains widespread adoption.
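The point is easier to see with the mechanism in front of you. Here is a minimal, illustrative sketch of the additive-increase/multiplicative-decrease back-off at the heart of TCP congestion control; the capacity, background loss rate, and round count are invented for the example, and real TCP (slow start, fast retransmit, timeouts) is considerably more involved.

```python
# Minimal sketch of TCP-style AIMD back-off (illustrative only; real TCP
# adds slow start, fast retransmit/recovery, timeouts, and much more).
import random

def simulate_aimd(rounds=50, capacity=40.0, seed=1):
    """Track one flow's congestion window across a link of given capacity.

    A round counts as 'lost' when the window exceeds capacity; a small
    random loss rate stands in for drops caused by other flows sharing
    the link. On loss the sender halves its window (multiplicative
    decrease); otherwise it grows by one segment (additive increase).
    """
    random.seed(seed)
    cwnd = 1.0
    history = []
    for _ in range(rounds):
        lost = cwnd > capacity or random.random() < 0.02
        if lost:
            cwnd = max(1.0, cwnd / 2)   # back off: halve the window
        else:
            cwnd += 1.0                 # probe for more bandwidth
        history.append(cwnd)
    return history

if __name__ == "__main__":
    print(simulate_aimd())
```

Halving on every loss is what lets competing flows converge towards a fair share of the link; a sender that skips that step just sees more drops and more retransmissions.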

Looked at another way, there are attempts to ignore this constraint. In the extreme they are called Distributed Denial of Service attacks, where packet floods knock out servers. Pretty quickly the commoners identify such activity and find a way to block it.

Another attempt is real-time streaming protocols, which just send out UDP packets at a source-chosen rate and assume some will be lost. They are inherently inefficient, as they don't adapt to bandwidth variation the way TCP does. Over time, these protocols have gradually added retransmission and packet thinning, so they converge on TCP-like behaviour. In practice, far more media is sent over TCP than over the real-time protocols.
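For contrast, here is a deliberately naive sketch of that streaming approach: a sender that paces UDP datagrams at a fixed, source-chosen rate and never reacts to loss. The destination address, packet rate, and payload size are made up for illustration.

```python
# Sketch of a 'free-rider' sender: UDP datagrams at a fixed rate,
# with no reaction to drops or congestion (illustrative values only).
import socket
import time

def blast_udp(dest=("127.0.0.1", 9999), rate_pps=200, duration=5.0):
    """Send sequence-numbered datagrams at a constant rate for `duration` seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"x" * 1024
    interval = 1.0 / rate_pps
    deadline = time.time() + duration
    seq = 0
    while time.time() < deadline:
        sock.sendto(seq.to_bytes(4, "big") + payload, dest)
        seq += 1
        time.sleep(interval)   # fixed pacing: nothing here ever backs off
    sock.close()

if __name__ == "__main__":
    blast_udp()
```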

Populations can tolerate a small proportion of free-riders like the streamers, but these end up being self-limiting: with too much packet loss, the received quality becomes unusable.
