Connectivity Convergence
I have to admit that, reading Dave Weinberger's live coverage of Connectivity 2002, I wished I could have been there with Stuart (as he has explained a lot of networking subtleties to me over the years).
However, the conversation continues through weblogs. Let me try to round up a few points that others have made, which have helped me clarify my thoughts.
First, let's split what seemed to be two alternatives into three. The ideal that we're all after is a commoditized 'stupid' packet-switching network, with intelligence at the ends in applications. There are two alternative paradigms fighting against this, and we need to separate them as they attack from different directions (though often in concert).
The first alternative is 'circuit switching' instead of packet switching. This is the bad solution that gets reinvented continuously by people who like thinking about wires. The notion is that there is a continuous connection between two endpoints that is guaranteed to be unbroken. This requires a much 'smarter' (and hence far more costly to implement and maintain) network that keeps data flowing between two nodes and doesn't fail.
What this really means is that when it does fail, it can't cope at all: you get hung up on. Examples of this are ATM (instead of TCP), PPPoE (instead of TCP), 3G wireless (instead of 802.11) and Bluetooth (instead of 802.11).
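To make the contrast concrete, here's a minimal sketch (my own toy Python, nothing from the conference; the function names and drop probability are invented) of how a 'stupid' packet network plus smart endpoints copes with failure: the network below is free to drop packets, and it is the application's retry loop, not the network, that provides reliability.

    import random

    def lossy_send(packet, drop_probability=0.3):
        """Simulate a 'stupid' packet network: it may silently drop the packet."""
        return random.random() >= drop_probability  # True means it got through

    def send_with_retry(packet, max_attempts=5):
        """End-to-end recovery: the endpoint retries until an attempt succeeds.
        The network itself promises nothing and keeps no per-connection state."""
        for attempt in range(1, max_attempts + 1):
            if lossy_send(packet):
                return attempt  # delivered on this attempt
        raise TimeoutError("gave up after %d attempts" % max_attempts)

    if __name__ == "__main__":
        print("delivered after", send_with_retry(b"hello"), "attempt(s)")

A circuit-switched design puts that recovery logic inside the network instead; when the circuit fails anyway, the application has no fallback, which is the 'you get hung up on' failure mode.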
Cheshire's Laws of Network Dynamics summarize it this way:
For every Network Service there's an equal and opposite Network Disservice
Nothing in networking comes for free, which is a fact many people seem to forget. Any time a network technology offers "guaranteed reliability" and other similar properties, you should ask what it is going to cost you, because it is going to cost you. It may cost you in terms of money, in terms of lower throughput (bandwidth), or in terms of higher delay, but one way or another it is going to cost you something. Nothing comes for free.
1. A guaranteed network service guarantees to be a low quality overpriced service
2. For every guarantee there's a corresponding refusal
3. For every Network Connection there's a corresponding Network Disconnection
Or, more succinctly, in The ATM Paradox.
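To put rough numbers on the first corollary (the figures here are entirely invented, just to show the shape of the argument): a network that guarantees every user their peak rate has to reserve the sum of all the peaks, while a best-effort packet network only has to carry the aggregate's typical load plus some headroom.

    # Toy illustration, with made-up numbers: what 'guaranteed' capacity costs.
    users = 100
    peak_rate = 1.0      # Mb/s each user can burst to
    average_rate = 0.05  # Mb/s each user actually averages
    headroom = 3.0       # over-provisioning factor for a best-effort network

    guaranteed_capacity = users * peak_rate                 # reserve every peak: 100 Mb/s
    best_effort_capacity = users * average_rate * headroom  # cover typical load: 15 Mb/s

    print("circuit-style guarantee needs", guaranteed_capacity, "Mb/s")
    print("best-effort packet network needs", best_effort_capacity, "Mb/s")

The guaranteed network pays for capacity that sits idle almost all of the time, which is why the guarantee ends up as a low quality, overpriced service.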
So I continue to contend (contra Roxann Googin as cited by Dave) that running this kind of network is a bad business to be in compared to a commodity connectivity one: the expensive voice calls that cover its existing fixed costs are going away, replaced by a bigger, fatter packet-switching network. Peter Cochrane explains how the Telcos messed up their opportunity, and summarizes it this way:
1. TelCos will never deliver wide-band communications. It is not in their interests to do so; it isn't in their business minds or models. They are into call minutes and billing systems. They are old, gray and don't get it, and they don't intend getting it.
2. The cable companies are slightly better, but have a broadcast mindset, where wide-band is an add-on, a kluge, and not a primary business or technology.
3. Both (1) and (2) have missed their opportunity, wasted time and money on a vast scale, and are now going bust. Five years ago they could have rolled fiber into the local loop, they had the money and the people back then - now they have neither.
So let's get on to bad paradigm 2: the broadcast mindset of networks optimised for 'content delivery'.
The 'content' industry is really several different pseudo-marketplaces joined together in odd ways through vertical integration. At one end is the VC-like fashion business of choosing which movie or pop singer to invest in.
Then there is the long chain of distribution selling the resulting works, often controlled by the Studio or Label.
Finally, there is the weird inverted marketplace of broadcasting, where the audience is sold to advertisers with the 'content' as bait, and the 'content' itself gets manipulated to promote sales through 'payola'.
Because of the tangled nature of these shenanigans, it is never clear what the business really is, but it is this group that presents the biggest threat to an open network, as they would like to impose huge restrictions to protect themselves from competition.
Doc points to an article attacking Lessig in a crude and formulaic way, assuming that all creativity comes from the centre, and advocating DRM as providing new services to consumers.
Doc then slightly mis-states Lessig's argument in defence.
Lessig isn't saying the net has 'natural' laws; he is saying that the current architecture of the net (the end-to-end, stupid architecture) has these kinds of characteristics, but that programmers, not poets, are now the unacknowledged legislators of the world.
He explains well that law, code, norms and markets influence behaviour, and appreciates that all of these can be modified, and urges those modifying them to build in and expand the values of openness we currently enjoy.
Finally, I liked David Reed's discussion of options for funding. I've been reading 'Extreme Programming Explained' this week too, and it expresses much the same idea, but applied to coding choices.
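The common idea, as I read it, is option value: keeping a decision open is worth something while uncertainty remains, whether the decision is a funding commitment or a design choice. A toy calculation (my numbers, purely illustrative) shows the shape of it.

    # Toy real-options arithmetic, with invented numbers.
    cost = 100.0                      # cost of committing now
    payoff_good, payoff_bad = 150.0, 60.0
    p_good = 0.5

    # Commit today, before the uncertainty resolves:
    commit_now = p_good * payoff_good + (1 - p_good) * payoff_bad - cost   # = 5.0

    # Keep the option open and only commit if the news turns out good:
    wait_and_see = p_good * (payoff_good - cost)                           # = 25.0

    print("value of deferring the decision:", wait_and_see - commit_now)   # = 20.0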
Saturday 25 May 2002