In the 1982 House hearing on Home Recording of Copyrighted Works, Jack Valenti said:
I know of no technological device at this time that would bar taping in the home and if it did exist, it would only be a matter of days before the Japanese manufacturers would have an override piece of equipment on their machine and you would start from ground zero again.
So why is he trying to force such a thing now?
Friday, 31 May 2002
MacOPINION : Matthew Ruben | Celine Dion Killed My iMac!
Detailed article on how the Key2Audio protection racket works:
MacOPINION : Matthew Ruben | Celine Dion Killed My iMac!
Interesting conspiracy theory too:
...we see piracy continuing more or less undisturbed, with fair use being seriously disrupted.
It would be paranoid and silly to think that Sony and other record companies would want to destroy fair use just for the heck of it. There has to be a method to their madness, yes?
[...]
Key2Audio is the first step in a dreadful double perversion of Fair Use. The first perversion is the idea that by making a copy of music for yourself, you are depriving the copyright holder of the ability to obtain revenue from selling you additional copies of the same music. The second, linked, perversion is that by destroying your ability to exercise fair use, the record company extends its copyright power beyond the content (the music) to the delivery medium (the CD).
Wednesday, 29 May 2002
Fighting Terrorism with Google?
A couple of posts on Dave Farber's 'Interesting People' list set this thought off.
First, this one:
WASHINGTON - An experimental computer program designed to analyze intelligence gave U.S. Special Forces a mission recommendation in 2000 that some say could have prevented the Sept. 11 attacks.
In truth, though, the CIA does study foreign press, but before Sept. 11 made little use of computers to collect and analyze classified and unclassified information together, which is what Special Forces began doing in 1997, enabling them to get a read-out on the terror cells.
This smells fishy to me. Computers can't analyze intelligence (unless a lot of AI breakthroughs have happened in secret). People can analyze intelligence. Computers can aggregate, and help people link and inform each other.
If the 'intelligence community' worked with the kind of hyperlinking tools that the rest of us use to help make Google the best way to find anything, maybe they'd have got somewhere.
The idea of how this works isn't hard to grasp - my 7-year-old son got it straight away - but it is hard to map to an insufficiently public space, like an intranet or (in particular) a hierarchically organised intelligence network that is more concerned with 'need to know' and secrecy classification.
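To make the link-analysis point concrete, here's a toy PageRank-style power iteration in Python. The little graph of analysts and reports is invented for the example, and real link analysis handles details (dangling pages, convergence tests) that this sketch skips:

```python
# Toy PageRank-style power iteration over a tiny, invented link graph.
# The ranking emerges purely from who links to whom - no central editor.

links = {
    'analyst_a': ['report_x'],
    'analyst_b': ['report_x', 'report_y'],
    'analyst_c': ['report_x'],
    'report_x':  ['report_y'],
    'report_y':  [],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the ranks settle
    new_rank = {}
    for p in pages:
        # sum the rank flowing in from every page that links to p
        inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * inbound
    rank = new_rank

for p, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f'{p}: {r:.3f}')
```

The interesting property is that nobody edits the ranking: it emerges from everyone's individual linking decisions, which is exactly what a 'need to know' hierarchy prevents.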
Today, David Reed suggested that we harness the public:
An open/transparent world reduces imagination of potential threats. It also increases the reliability of assessing actual threats.
Which tends to synergize with Moynihan's sound-bite: "Secrecy is for losers".
So here's a radical proposal: openly publish most (if not all) of the information collected by the CIA, NSA, ... to public inspection. Figure out how to avoid compromising sources where needed, but get all of it out, efficiently. Use the Internet, because it scales, rather than TV, print, and Radio, which don't.
This will enable all of civil society to become outsourcers of the costly mundane details of threat management, leaving the difficult and specialized functions to experts with specialized resources.
In this world, terrorists' ability to use the leverage of "unknown" threats and rampant paranoia of their targets to amplify their meager efforts would be dramatically reduced.
This would also mean that the emergent properties of Google indexing millions of individual humans' links could come into play, as discussed in Cory's article 'How I learned to stop worrying and love the Panopticon':
AltaVista for them, Google for us
But what do they do with all of that data that they collect? Filter it for keywords? Fat chance. The volume of false positives (e.g., people talking about child pornography who aren't child pornographers) far exceeds the volume of actual criminal activity. Even creaky old Lycos gave up on plain-old keyword matching a long, long time ago.
Maybe they manually check it. After all, that approach worked for Yahoo, right? Oh, right, it didn't work. Scratch that.
Then they must use some hybrid approach: human editors and AI (Artificial Intelligence or Almost Implemented, take your pick) working in concert to tweeze out the most relevant material as quickly and efficiently as possible.
Right. AltaVista.
Poor bastards.
BBspot - Copies of Spider-Man 2 Available on the Web
Those darned pirates get smarter all the time...
The EFF parodies 'The Mickey Mouse Club' to fight the CBDTPA.
Cory explains what parody is.
The phrases 'Mickey Mouse Copy Protection' and 'Mickey Mouse Computer' need to enter the language in this context - as in 'Do you want a Mickey Mouse computer that stops you making music?'
Tuesday, 28 May 2002
Dave is complaining about how uncomfortable outdoors is. His basic problem is that he lives on the wrong coast. We had a little burst of humidity here yesterday, amid the standard perfect 75°F day with a light breeze, and it reminded me of what is wrong with living in the kind of climate zone where that is expected.
Virginia Postrel developed this into a theory of why Silicon Valley beats Boston for innovation...
The US Senate Committee on the Judiciary is collecting comments on the CBDTPA.
Here are mine:
The CBDTPA is based on 3 delusions.
1. That computers can be prevented from copying.
This is wrong. The most basic definition of a computer, described by Alan Turing in 1936, is a device that reads and copies symbols, and modifies its internal state. He showed that anything capable of doing these things is a digital computer. This bill would thus outlaw any universal Turing machine. This includes the DNA copying mechanism in your cells. Stephen Wolfram has just shown that you can create a universal computer using 2 internal states and 5 symbols. Any universal computer can emulate any other one, so that the software running on it does not know that it is running on the emulation. Consequently, all copy protection can be subverted in this manner. The only way the stated goals of this bill can be achieved is by outlawing computation itself.
2. That copyright law gives a monopoly over copying.
A better reading of the law gives a monopoly over redistribution to others, and even this is mitigated by fair use. Preventing copying at source is a huge over-reach, and unconstitutional by any reading.
3. That making something uncopyable increases its value.
This is the key mistake from a business point of view. People will pay more for content in a useful form, and content that can be copied and transformed is more useful. Even setting aside the logical, moral and constitutional issues, making content uncopyable is simply bad business.
Send yours too. (A toy illustration of delusion 1, in code, follows.)
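To make delusion 1 concrete, here's a toy Turing machine in Python that does nothing but copy: it duplicates a run of 1s on its tape. The state table is a standard textbook copying machine, not anything from the bill; the point is that reading symbols, writing symbols and changing state is all a computer is, so a machine that cannot copy is not a computer.

```python
# A toy Turing machine that duplicates a run of 1s on its tape -
# copying is what the most basic model of computation does.

# state table: (state, symbol) -> (symbol to write, head move, next state)
rules = {
    ('start', '1'): ('X', +1, 'skip'),    # mark a 1 as copied
    ('skip',  '1'): ('1', +1, 'skip'),    # run right past remaining 1s
    ('skip',  '#'): ('#', +1, 'put'),     # cross the separator
    ('put',   '1'): ('1', +1, 'put'),     # run past the copy so far
    ('put',   ' '): ('1', -1, 'back'),    # write the copied 1
    ('back',  '1'): ('1', -1, 'back'),    # head back to the left...
    ('back',  '#'): ('#', -1, 'back'),
    ('back',  'X'): ('X', +1, 'start'),   # ...to the next unmarked 1
    ('start', '#'): ('#', 0, 'halt'),     # nothing left to copy
}

tape = list('111#') + [' '] * 10
head, state = 0, 'start'
while state != 'halt':
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move

print(''.join(tape).replace('X', '1').rstrip())  # -> 111#111 : input duplicated
```

And because any universal machine can emulate any other, 'protected' software can always be run inside an emulator that happily copies every symbol it sees.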
Monday, 27 May 2002
John Dvorak gets it on the DMCA:
I have to ask, does anyone in power care at all about the public and its needs? For example, when the copyright laws were first written, the idea was that certain creations would belong to the creator exclusively for a limited period of time and then pass into the public domain to benefit society as a whole. New laws see things differently, though. Now, society as a whole is meaningless. The vested interests of a few already rich individuals and corporations dominate the thinking surrounding copyrights.
Sunday, 26 May 2002
Alistair Cooke mourns Peter Bauer, notes China's claim that the US is closer to Communism than China is and that China needs to get capitalist first... and reminds us how broke California is.
A great Defence of Lessig by Ernie the Attorney leads me to 'Taking Copy out of Copyright', which expresses clearly in legal terms what I have thought for some time: it is not copying that harms content owners, but redistribution. The 'content' industry counts each copy as a foregone full-price sale, which is odd economics.
The idea that they can prevent all copying by legally enforced technical means is wrong. What they should be doing is focusing on those who are making money from parallel distribution.
A thoughtful comment by 'pyramid termite' on the Slashdot discussion thread:
The reason why organizations such as the mass media and the companies that distribute art were able to lock out live performers is that the "public" was reinvented -- instead of the "public" being anyone a performer could possibly meet, the public became anyone a mass media organization could reach by TV, movies, radio, print, etc.
Now the public is being reinvented again and is becoming anyone the artist or a fan of the art can communicate with. What we are seeing is not simply a war over copyright - it's a war over what the public will be and who will have the right to communicate with it. The mass media would prefer to have a public that remains large with easily controllable desires and means of distribution to it. The new public wants to control its own desires and means of distribution; it wants to be the artist, the publisher and the audience.
There can't be laws to enforce the old mass media copyrights without enforcing the old, outdated mass media model. This is not just a battle over who has the right to distribute a work but who has the right to distribute any work and who can create a public to communicate with. The performers would like to have their public to be anyone they communicate with - the mass media moguls are calling for laws against the technologies that would make this communication impossible.
Saturday, 25 May 2002
2 more quick points on connectivity. The FCC just lost on appeal its regulations forcing Regional Bells to open their wires to alternative DSL suppliers:
"The commission ... completely failed to consider the relevance of competition in broadband services coming from cable (and to a lesser extent satellite)," Judge Stephen Williams wrote.
So here's a thought - can the FCC instead mandate that the telcos give access to their physical poles and conduits, so that someone else can run fibre through them? (Peter Cochrane, linked below, explains why the telcos won't ever do this.) Or would this be something that needs to happen on a local basis?
Secondly, on the 'commodity' argument, Andrew Odlyzko's 'The History of Communications' is a must-read - it shows how communications become fixed-price commodities over time, covering everything from postage to the net.
He also has a detailed commentary on Roxann Googin's predictions. A couple of snips:
Is the "first mile" a natural monopoly? That is what the failure of the CLECs has led many observers to conclude. Yet there are some contrary indicators. After all, most households do have three separate communication systems, the copper-based one from their ILEC, a coax-based one from their cable TV provider, and a cell phone from a wireless carrier. Thus a much deeper look is needed to understand what is going on, far beyond the scope of this note. A key factor, though, is that change is slow but inevitable. Hence a static analysis of technology choices, without taking account how quickly consumer are likely to move, is bound to be inadequate.
[...]
Policy makers who are interested in promoting competition could help this move along by forcing those ILECs that have not yet done so to completely sever their ties with cellular carriers. This would be a much simpler move, both technically and politically, than the separation of wireline industry that is widely discussed.
Competition from cellular carriers for voice is likely to force ILECs to concentrate on exploiting their natural advantage in bandwidth, and to emphasize Internet access. (Note again the UK statistics, where internet access traffic on the voice network is fast approaching that of voice itself, especially since the latter figure includes some modem and fax traffic.) This will likely also force them to emphasize broadband, as a way to segment the market, and to create a natural progression path for their customers, towards higher and higher bandwidth.
[...]
The most promising area for [long distance carriers] is to manage networks that are largely owned by their customers. This will be a huge change, but the IBM example shows that it possible, and also that there is time to do it. The ILECs might be tempted to follow in this same direction, but are less likely to succeed, and may have to resign themselves to operating at lower levels of the networking hierarchy. However, there is likely to be enough opportunity for them even there to thrive.
"The commission ... completely failed to consider the relevance of competition in broadband services coming from cable (and to a lesser extent satellite)," Judge Stephen Williams wrote.
So here's a thought - can the FCC instead mandate that the Telcos have to give access to their physical poles and conduits , so that someone else can run fibre through them? (Peter Cochrane linked below explains why the telco's won't ever do this). Or would this be something that needs to happen on a local basis?
Secondly, on the 'commodity' argument, Andrew Odlyzko's 'The History of Communications' is a must read - it shows how communications become fixed-price commodities over time, covering everything from postage to the net.
He also has a detailed commentary on Roxann Googin's predictions A couple of snips:
Is the "first mile" a natural monopoly? That is what the failure of the CLECs has led many observers to conclude. Yet there are some contrary indicators. After all, most households do have three separate communication systems, the copper-based one from their ILEC, a coax-based one from their cable TV provider, and a cell phone from a wireless carrier. Thus a much deeper look is needed to understand what is going on, far beyond the scope of this note. A key factor, though, is that change is slow but inevitable. Hence a static analysis of technology choices, without taking account how quickly consumer are likely to move, is bound to be inadequate.
[...]
Policy makers who are interested in promoting competition could help this move along by forcing those ILECs that have not yet done so to completely sever their ties with cellular carriers. This would be a much simpler move, both technically and politically, than the separation of wireline industry that is widely discussed.
Competition from cellular carriers for voice is likely to force ILECs to concentrate on exploiting their natural advantage in bandwidth, and to emphasize Internet access. (Note again the UK statistics, where internet access traffic on the voice network is fast approaching that of voice itself, especially since the latter figure includes some modem and fax traffic.) This will likely also force them to emphasize broadband, as a way to segment the market, and to create a natural progression path for their customers, towards higher and higher bandwidth.
[...]
The most promising area for [long distance carriers] is to manage networks that are largely owned by their customers. This will be a huge change, but the IBM example shows that it possible, and also that there is time to do it. The ILECs might be tempted to follow in this same direction, but are less likely to succeed, and may have to resign themselves to operating at lower levels of the networking hierarchy. However, there is likely to be enough opportunity for them even there to thrive.
Connectivity Convergence
I have to admit that reading Dave Weinberger's live coverage of Connectivity 2002 I wished I could have been there with Stuart (as he has explained a lot of networking subtleties to me over the years).
However, the conversation continues through weblogs. Let me try and round up a few points that others have made, helping me to clarify my thoughts.
First, let's split what seemed to be two alternatives into three. The ideal that we're all after is a commoditized 'stupid' packet-switching network, with intelligence at the ends in applications. There are two alternative paradigms that are fighting against this, and we need to separate them, as they attack from different directions (though often in concert).
The first alternative is 'circuit switching' instead of packet switching. This is the bad solution that gets reinvented continuously by people who like thinking about wires. The notion is that there is a continuous connection between two endpoints that is guaranteed to be unbroken. This requires a much 'smarter' (and hence far more costly to implement and maintain) network that keeps data flowing between two nodes and doesn't fail.
What this really means is that when it does fail, it can't cope at all - you get hung up on. Examples of this are ATM (instead of TCP), PPPoE (instead of TCP), 3G wireless (instead of 802.11) and Bluetooth (instead of 802.11).
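Here's a toy illustration of why the stupid network wins: a stop-and-wait transfer in Python where the 'network' silently drops 30% of packets and all the recovery intelligence lives at the endpoints. This is a sketch of the end-to-end idea, not real TCP (no windows, no timers):

```python
import random
random.seed(2002)

LOSS = 0.3

def lossy(packet):
    """The stupid network: deliver the packet or silently drop it.
    No guarantees, no connection state."""
    return packet if random.random() > LOSS else None

def transfer(data):
    received, expected = [], 0           # receiver-side state
    seq = 0                              # sender-side state
    while seq < len(data):
        pkt = lossy((seq, data[seq]))    # sender transmits
        if pkt is not None:
            if pkt[0] == expected:       # receiver accepts in-order data,
                received.append(pkt[1])  # silently discards duplicates
                expected += 1
            ack = lossy(pkt[0])          # receiver acknowledges (ACKs drop too)
        else:
            ack = None
        if ack == seq:
            seq += 1                     # ACKed: advance to the next byte
        # otherwise the sender just retransmits on the next pass
    return ''.join(received)

print(transfer('end-to-end'))  # arrives intact despite 30% packet loss
```

The network function knows nothing about connections; when it drops something, the endpoints just retry instead of hanging up on you.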
Cheshire's laws of Network Dynamics summarize the tradeoff this way:
For every Network Service there's an equal and opposite Network Disservice
Nothing in networking comes for free, which is a fact many people seem to forget. Any time a network technology offers "guaranteed reliability" and other similar properties, you should ask what it is going to cost you, because it is going to cost you. It may cost you in terms of money, in terms of lower throughput (bandwidth), or in terms of higher delay, but one way or another it is going to cost you something. Nothing comes for free.
1. A guaranteed network service guarantees to be a low quality overpriced service
2. For every guarantee there's a corresponding refusal
3. For every Network Connection there's a corresponding Network Disconnection
Or more succinctly in The ATM Paradox
So I continue to contend (contra Roxann Googin as cited by Dave) that running this kind of network is a bad business to be in compared to a commodity connectivity one, as the expensive voice calls that support its existing fixed costs are going away, replaced by a bigger, fatter packet-switching network. Peter Cochrane explains how the telcos messed up their opportunity, and summarizes this way:
1. TelCos will never deliver wide-band communications. It is not in their interests to do so; it isn't in their business minds or models. They are into call minutes and billing systems. They are old, gray and don't get it, and they don't intend getting it.
2. The cable companies are slightly better, but have a broadcast mindset, where wide-band is an add-on, a kluge, and not a primary business or technology.
3. Both (1) and (2) have missed their opportunity, wasted time and money on a vast scale, and are now going bust. Five years ago they could have rolled fiber into the local loop, they had the money and the people back then - now they have neither.
So let's get on to bad paradigm 2 - the broadcast mindset of networks optimised for 'content delivery'.
The 'content' industry is really several different pseudo-marketplaces joined together in odd ways through vertical integration. At one end is the VC-like fashion business of choosing which movie or pop singer to invest in.
Then there is the long chain of distribution selling the resulting works, often controlled by the Studio or Label.
Finally, there is the weird inverted marketplace of broadcasting, where the audience is sold to advertisers with the 'content' as bait, but the 'content' itself gets manipulated to promote sales through 'payola'.
Because of the tangled nature of these shenanigans, it is never clear what the business really is, but it is this group that presents the biggest threat to an open network, as they would like to impose huge restrictions to protect themselves from competition.
Doc points to an article attacking Lessig in a crude and formulaic way, assuming that all creativity comes from the centre, and advocating DRM as providing a new service to consumers.
Doc then slightly mis-states Lessig's argument in defence.
Lessig isn't saying the net has 'natural' laws; he is saying that the current architecture of the net (the end-to-end, stupid architecture) has these kinds of characteristics, but that programmers, not poets, are now the unacknowledged legislators of the world.
He explains well that law, code, norms and markets influence behaviour, and appreciates that all of these can be modified, and urges those modifying them to build in and expand the values of openness we currently enjoy.
Finally, I liked David Reed's discussion of options for funding - I've been reading 'Extreme Programming Explained' this week too, and it expresses much the same idea, but applied to coding choices.
Friday, 24 May 2002
Einstein quotes
for Akma:
Things should be made as simple as possible, but not any simpler.
For Dave:
The wireless telegraph is not difficult to understand. The ordinary telegraph is like a very long cat. You pull the tail in New York, and it meows in Los Angeles. The wireless is the same, only without the cat.
Thursday, 23 May 2002
Dave is trying to explain end-to-end to a putative intelligent senator.
There is another way of looking at this that may appeal to regulators: as a separation of powers issue.
Moving bits is (or should be) a commodity business, and thus should be consistently profitable for the right kind of company (after all, Walmart makes big bucks running a commodity business). The thing that made me uneasy about Netparadox was the bit that implied you can't make money with a well-designed network. It was a good soundbite, but wrong on a deeper level.
I think that is the wrong message to be sending. You can make money delivering a commodity product well. In fact, you can make more reliable money that way than running a 'content' business, which involves betting large sums on the fickle tastes of the public, then spending larger sums trying to persuade them that they really want to watch your 'content', not someone else's.
If someone owns both businesses, they will be tempted to cheat by making the bit-moving business favour their offerings over other people's. In the long run this is foolish, as they will undermine the value proposition of the commodity 'moving bits' business, and undermine the real competitiveness of the content business.
The temptation is, as Dave says, inevitable, but it should be resisted, as it will destroy the ongoing low-risk business with a steady return in favour of supposed short-term gains from the high-risk business. (Lessig and Winer call this 'strategy tax'.)
If there is competition in provision, this will work itself out, as the companies that make this kind of mistake will be dumped in favour of smarter ones.
The problem is when there aren't available alternatives because of a regulatory monopoly (only one local phone co; only one cable co). This is where regulation to ensure separation of powers comes in.
Lessig argues convincingly that it was this kind of regulation to ensure open access to the phone network that allowed the net to grow in the first place:
But there is one part of the Internet where end-to-end is more than just a norm. Here the principle has the force of law, and the network owner cannot favor one kind of content over another or prefer one form of service over another. Instead the network owner must keep its network open for any application or use the customers might demand. Competitors must be allowed to interconnect; consumers must be allowed to try new uses. In this part of the Internet, "open access" is the rule.
This part of the Internet is--ironically enough--the telephone network, where because of increasing regulation imposed by the D.C. Circuit Court of Appeals in the 1970s--leading to a breakup of AT&T by the Justice Department in 1984 and culminating with the Telecommunications Act of 1996--the old telephone network has been replaced with a new one over which the owner has very little control. Instead, the FCC spends an extraordinary amount of effort making sure the telephone lines remain open to innovators and consumers on terms analogous to the terms required by an end-to-end principle: nondiscrimination and a right to access.
The FCC is convinced that this regulatory burden is severe and costly to maintain. And no doubt it is costly. But the question is not simply how much the regulation costs; it is also about its benefit. What is the benefit of effectively enforcing end-to-end on the telephone system?
In my view, the benefit has been the Internet. Though the Internet proper was initially a network among universities, had it not been for the ability of ordinary consumers to connect to the Internet, that network would have gone nowhere. (Universities are fun, but they aren't enough to fuel commercial revolutions.) Ordinary consumers connected to the Net across phone lines. And had it not been for the open-access rules that the government imposed upon telephones, the telephone companies would most likely have behaved just as every network owner in history has behaved--to control access and use architecture to minimize competition. If it hadn't been as cheap to dial a local bulletin-board system (BBS) as it was to dial a local friend; had the Baby Bells kept the power to force customers to a Baby Bell ISP; had the government not insisted that competitors be connected and had it not policed pricing to ensure nondiscrimination--had it not, in short, used the power of law to force a competitive neutrality onto the telephone system, the telephone system would not have inspired the extraordinary innovation that it did.
By keeping the network neutral, by keeping it open to innovation, the FCC has made possible the extraordinary innovation that the Internet has produced. Open access was the rule; a regulation produced that rule.
Lots more detail on this in Lessig's book The Future of Ideas
Tuesday, 21 May 2002
Dave is live-blogging Connectivity 2002
One thing they are talking about is email & privacy.
Here's my 3-stage plan for the elimination of spam through email user-experience improvement:
1. Integrate PGP signing and encryption so that they happen automatically, and are on by default. Adopt one of the various proposals for key exchange through email interaction too. Yes, this may be less than perfectly secure, but it's a damn sight better than spoofable headers; hard-core crypto heads can validate keys through side channels and mark them that way in the address book (see 3 below).
2. Provide subtle UI cueing for different priorities - bigger, bolder type for high priorities, smaller, lighter type for low priorities. Think how a print newspaper uses type size to convey story importance, but get a smart visual designer to come up with the actual mappings. Have indicators for verified signed and encrypted mails.
3. Create a fuzzy-logic prioritization engine that takes into account lots of info about the mail (think trust metrics). Key input values are:
-Who is this from? (verified signed mail > someone you've sent mail to > someone in address book > random user)
Ideally, have a trust hierarchy in the address book - like a fuzzy kill file that can reward as well as punish
-Who is this to? (uses my public key > addressed just to me > me as part of CC list > mailing list address)
-Does it contain money? (Paypal emails etc).
-Does it match keyword/semantic signatures I like or dislike? (e.g. I would like 'QuickTime' and dislike 'millions of email addresses')
-Has it been sitting in my inbox a while and I still haven't read it?
-etc.
The point of this is that each of these stages is a fluid extension of existing UI and features, useful in itself and capable of further refinement; acting in concert, they provide a way for email to hold back the tragedy of the commons that spam represents. A minimal sketch of how stages 2 and 3 might combine follows.
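Every weight, signal name and threshold below is invented for illustration - a sketch, not a real filter:

```python
# Sketch of the stage-3 prioritization engine: weighted fuzzy signals
# combined into a score that the stage-2 UI could map to type size.

WEIGHTS = {
    'signed_and_verified':  3.0,   # verified PGP signature from a known key
    'previously_replied':   2.0,   # someone you've sent mail to
    'in_address_book':      1.5,
    'encrypted_to_my_key':  2.0,   # uses my public key
    'addressed_just_to_me': 1.0,
    'on_cc_list':           0.2,
    'mailing_list':        -0.5,
    'contains_money':       2.5,   # e.g. a PayPal payment notice
    'liked_keywords':       1.0,   # e.g. 'QuickTime'
    'disliked_keywords':   -3.0,   # e.g. 'millions of email addresses'
}

def priority(message):
    """Sum the weighted evidence; the result is fuzzy, not a binary verdict."""
    return sum(WEIGHTS[signal] for signal, present in message.items()
               if present and signal in WEIGHTS)

def display_size(score):
    """Stage-2 UI cue: higher priority, bigger type (newspaper style)."""
    if score >= 4:  return 'headline'
    if score >= 1:  return 'body text'
    if score >= -1: return 'small print'
    return 'footnote grey'

friend = {'previously_replied': True, 'addressed_just_to_me': True,
          'liked_keywords': True}
spam = {'mailing_list': True, 'disliked_keywords': True}

for m in (friend, spam):
    s = priority(m)
    print(s, '->', display_size(s))
```

Because the output is a continuous score rather than a spam/not-spam verdict, borderline mail shrinks quietly instead of vanishing into a false-positive folder.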
Sunday, 19 May 2002
Akma, who was kind enough recently to confirm my appointment as Dean of Memetic Engineering and Reader of Thoughts at the University of Blogaria, has been thinking about authenticity, complexity and binary thought (us v. them).
Symmetrically, I attended church today, to see some friends confirmed, and also saw the priest officially installed.
I was struck by the formal and contractual-sounding nature of the installation (it was very like installing software - a licence we could reject as parishioners, but then have no priest), and also by the 'binary' (in AKMA's terms) discussion of Christian doctrine as part of the confirmation - the idea that we have a common heritage in the Book of Common Prayer, and that one might suffer for being identified as a Christian. I know persecution casts a long shadow, but the small Anglican congregation gathered in the hills above Silicon Valley seems very far from Rome.
On a related note, I just ordered Stephen Wolfram's A New Kind of Science, which claims that complexity is explicable (or at least able to be generated) from simple rules.
I'm looking forward to reading this tome, and I will let Akma know if I recommend its memes. I do agree that appreciating complexity and eschewing a zero-sum viewpoint is important, but to assert that complex outcomes require complex explanations (which Akma does not, directly) is another common logical flaw. Occam's razor needs to be kept sharp too.
If you're at all interested in the CBDTPA (Hollywood's bill to outlaw computers), read LawMeme's clear discussion of the issues. Even if you accept the bogus claims at face value, it still doesn't make sense.
Friday, 17 May 2002
"How does the computer know so much?" Andrew asked tonight.
"It doesn't - people know things, and write them on web pages." I replied.
So Andrew got his own weblog tonight. Read more there.
"It doesn't - people know things, and write them on web pages." I replied.
So Andrew got his own weblog tonight. Read more there.
Thursday, 16 May 2002
Eisnerwatch
Fortune has a shallow article on the content clone wars, full of muddy zero-sum thinking.
Michael Eisner loves his iPod. "It's one of the most fabulous things I've seen in the past couple of years," he says. Eisner has no problem with the technology itself, but he deplores the fact that people are using it to avoid paying for Disney products, in effect stealing from the company. "Nothing about technology is threatening or upsetting or negative," he insists. "This is simply about conscious behavior, about right and wrong, and I just don't understand the enormous tidal wave of rhetoric that this issue has created from the so-called technology side. Shakespeare would find it interesting."
Rhetoric from the technology side? Scroll down a bit for the choice rhetoric from the copyright hoarders...
Anyway, Dan says:
Articles like this are infuriating. They cast the debate in binary terms, industry versus industry, as if that's really the issue. It isn't.
Guess who's missing from the story, and all too often from the debate? That's right, the customers. You. Me.
Not only the customers, but also the creators.
Dave says it makes Jobs look clueless, but I disagree. Jobs is quoted as saying:
"To say this intractable technology problem is going to be solved by something in the back pockets of technology companies, and they are not sharing it, is unbelievable. This is an important issue, and it's not going to be solved by threatening rhetoric. It's going to be solved by a computer scientist who has an incredibly original idea. We just don't know who or when."
Which Dave presumably takes to mean that Jobs believes that the intractable problem of selective copy prevention can be solved. I think he means that the meta-problem of the distribution of creative works and paying their creators can be solved.
Creative Commons launched today, and Doc is blogging the launch presentation.
The basic idea is to make it easy to put works into the public domain, or disclaim copyright protection for some uses. This is useful, but it isn't really a solution to the growing chasm between the ease of creating and distributing digital creative works, and the difficulty of paying the creators for them.
It is a piece of the puzzle though, and very welcome. Well done Larry et al.
Tuesday, 14 May 2002
Felt the earth move tonight
Magnitude 5.2 in Gilroy. Shook the fireplace here, and we took the boys outside. Nothing damaged though.
Monday, 13 May 2002
Dave's latest JOHO is out.
In the email section he somehow repeats the original version of a little back and forth we had on my sadly-neglected Nonzero blog, implying I retracted it, when in fact we both forgot which blog it was on. Dave kindly corrected this in his blog at the time.
Anyway, the point I was failing to make well by exaggerating and parodying was that Dave's original 'Web as Utopia' piece makes sense for those of us who are familiar with the web and have found our place in it, but confuses those for whom it is an alien experience.
I know Dave doesn't really think that the web is 'a transcendent Platonic ideal of Socratic discourse'; I was exaggerating to make the point that we find online what we go looking for, and the web we see is a reflection of ourselves individually as well as collectively.
With 2 billion pages and counting, we can never see it all, and when we venture outside the well trodden paths of the personal web we know, we are more likely to make mistakes in our maps, and come back with 'here be dragons' written across entire continents and tales of men with no heads.
I think this effect, rather than malice or wilful misrepresentation, is what is behind such things as journalists' clueless articles on weblogs or congressmen fulminating that the net consists mostly of porn and piracy.
This is part of what I got from reading SPLJ, and I'm glad I provoked Dave into such a clearly expressed retort about connection.
And talking of connecting, try out the Amazon connection browser that (appropriately enough) defaults to starting with SPLJ.
Just to make sure I don't lose this version, I'm 'syndicating' it to the Small Pieces Loosely Joined blog and nonzero too.
John Gilmore explains the problem with Intel's appeasement:
Intel builds machines that process data. "Content" is just data. Every piece of data that an Intel processor or networking component handles is copyrighted by somebody, under the Berne Convention. It's all "content". You could talk about "protecting data" but people would realize that preventing it from being copied does not "protect" their data. Frequently you NEED to copy your data -- e.g. onto a backup tape -- to protect it. So instead you use this made-up word "content". Since nobody knows a definition for "content", you can say the most outrageous things about it and get away with it.
Intel's chips have no way to tell what permission that individual chip owner has under the copyright law, for many reasons. (The laws change, copyrights expire, individuals or companies get more rights than the general public does because they signed licenses, there are things that everyone has the legal right to do with data whether or not it is copyrighted, etc etc etc.)
If Intel really thought that there was a "need to protect content", it would have built features into its chips to make sure that none of Intel's OWN chip designs, hardware designs, software, documents, trade secrets, and other intellectual property could ever be stolen or copied using Intel equipment. That's Intel's first and foremost interest in intellectual property protection -- but it has expended zero effort toward "fixing" its chips to provide technical barriers against such unlawful copying. Therefore I discount the claim that Intel sees a "need to protect content".
What Intel is doing is a cynical scheme to buy off an oligopoly.
Also, Intel has a vested interest in people buying new hardware, as that is what they sell. They have done very well for decades by making the stuff faster and better so people want to upgrade. They should stick to this, instead of requiring an upgrade to get a machine that plays what it should. This is likely to backfire hard, as people will prefer the older, unrestricted hardware over the new.
Sunday, 12 May 2002
Book people
AKMA and Dorothea have been discussing 'Book People' - I am certainly one (why else would I be writing this at half-past one on a Saturday night?).
It is certainly hereditary. (My father extended our house when my youngest sister was born. He added a small playroom, a small bedroom for my middle sister, and a large room lined with book-cases and a desk for him to work in. Since we have now left home, the other 2 rooms are full of books, as is the more recent loft conversion, as is his office about a mile away.)
Here's another one - Book people would rather spend an hour on public transport (where they can read) than 20 minutes driving.
Anyway, I recently encountered The Folio Society - a company that does understand 'Book people' - and they have successfully tempted me to sign up...
Teaching children to program
Doc pointed me at David Scott Williams' DustyScript idea.
I sent him a brief email saying I thought it was missing the point, and suggesting existing software tools, but made a hash of describing the point, so I'll try again here.
Being a good programmer is not about the language you use, it is about the way you think, and the way you approach problems. You need to be able to keep high-level and low-level goals in mind at once, to analyse and model situations, to express the model in rules, and to adapt the rules to new situations. It's perfectly possible to write good clear code in almost any language, just as you can write bad, unclear code in any language, or even 4 at once.
Young children are fantastically good at learning languages by example, but often not good at predicate logic or deductive reasoning, which take a lot of training. (As an aside, the book Reading Reflex applies this insight to teaching reading - instead of teaching deductive rules parrot-fashion, it groups different representations of the same sound and gets the children to work through them until they derive an unconscious model.)
The best 'programming' exercise with small children is the 'I am a robot' game. You play their robot slave, and do what you are told, but very literally, and in small stages, with 'error messages' returned in a robot voice. Just getting you to walk from the sofa to the bedroom can take ages and they love it. They naturally want to be the simple-minded robot too (just make sure they don't get too attached to it, or they may end up working in telephone support).
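To make the robot game concrete for the grown-ups, here is a minimal sketch in Python of what the 'robot' is doing - the 'forward'/'turn' command set, the coordinates and the error message are all made up for illustration, but the literal-mindedness is the whole point:

def robot(commands):
    x, y = 0, 0
    # headings cycle north -> east -> south -> west as we turn right
    headings = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    facing = 0
    for line in commands:
        words = line.lower().split()
        if words[:1] == ["forward"] and len(words) == 2 and words[1].isdigit():
            dx, dy = headings[facing]
            steps = int(words[1])
            x, y = x + dx * steps, y + dy * steps
            print(f"MOVED {steps} STEPS. NOW AT ({x}, {y}).")
        elif words == ["turn", "right"]:
            facing = (facing + 1) % 4
            print("TURNED RIGHT.")
        elif words == ["turn", "left"]:
            facing = (facing - 1) % 4
            print("TURNED LEFT.")
        else:
            # the robot voice: literal-minded error messages are half the fun
            print(f"DOES NOT COMPUTE: {line!r}. PLEASE RESTATE.")

robot(["forward 3", "turn left", "forward 2", "walk to the bedroom"])

Feed it a deliberately vague instruction, as in the last line above, and you get the 'DOES NOT COMPUTE' response that children find so funny - and that quietly teaches them why precision matters.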
I've seen a huge amount of 'educational' software - I used to work in the CD-ROM business, and I buy up remaindered CDs from Marshalls for my 2 boys and watch how they use them. Most of them are dross, with the same few ideas (Pelmanism, missing words etc.) recycled with a different character or brand attached. Some have genuine insight, and I can see the boys learning to reason as they use them. Here is a selection:
Logical Journey of the Zoombinis is a wonderful introduction to deductive logic through a compelling game. It was designed with this in mind, and my boys have been playing it since they were 3; they are still enjoying it now at 5 and 7 (as do I).
The Pajama Sam series of adventures from Humongous is good at teaching the global/local focus, but one that is great fun and teaches valuable debugging skills is Pajama Sam's SockWorks, which features a long series of machines full of socks that you have to get into the right coloured baskets. As you can also build your own puzzles, the idea of solvable and unsolvable problems naturally comes up.
Zap! is another great game that teaches by stealth. You have to help 3 wisecracking cartoon characters to fix their electrical, optical and audio-visual gadgets to get their show on the road. It manages to include a complete circuit simulator, an optical workbench simulator and a sound environment simulator, and still be lots of fun for Kindergarten children.
To teach programming concepts without writing textual code, Cocoa is perfect (if you have a Mac). It is a tool that enables you to create 2D video games by drawing the characters and defining what happens when they encounter each other by example (there's a sketch of this rule-by-example idea after this list). Andrew has made about 65 games with this, some original, some homages to TV programs or his brother's films.
Finally, if you want a comprehensible textual language, use Runtime Revolution, whose language Transcript is based on the old Apple HyperCard language, and as such has completely human-readable programs. This is what I plan to get Andrew into next.
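For the curious, the rule-by-example idea behind tools like Cocoa boils down to a lookup table of encounters. Here is a toy sketch in Python - the characters and rules are invented, and this is not Cocoa's actual implementation, just the shape of the idea:

# each rule: (actor, other) -> what happens when they collide
rules = {
    ("hero", "coin"): "hero picks up the coin",
    ("hero", "monster"): "hero loses a life",
    ("monster", "wall"): "monster turns around",
}

def encounter(a, b):
    # look the pair up in the rule table; unknown pairs do nothing,
    # which is exactly when the child demonstrates a new rule
    outcome = rules.get((a, b))
    print(f"{a} meets {b}: {outcome or 'nothing happens (no rule yet)'}")

encounter("hero", "coin")
encounter("hero", "monster")
encounter("hero", "wall")

The pedagogically interesting bit is the empty case: when a pair has no rule yet, nothing happens, and that is precisely the moment the child adds one by example rather than by writing code.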
Friday, 3 May 2002
Study says Internet music sharing helps, does not hurt industry - [04/05/2002] - Hindustantimes.com
Last month, the International Federation of the Phonographic Industry said worldwide sales of albums fell last year for the first time since CDs were introduced into shops in the early 1980s, and blamed illegal piracy for the decline.
The Jupiter study found that while Internet file-swapping may deter some purchases, on balance the effect is not damaging to the industry.
"File sharing has a polarizing effect on music spending, spurring increases among some users and decreases among others," said the report by Jupiter analyst Aram Sinnreich.
"However, the boost outweighs the bust. Experienced file sharers were 75 percent more likely than the average online music fan to have increased their music spending levels."
Wilco turned up very high in the charts after making their album available online.
Now I want a survey about the effect of 'protected' CDs on sales.