Sunday, October 05, 2008
Straight, no chaser: Why journalists must be troublemakers
by Milverton Wallace
Human beings are fallible creatures. And although some may find it barely credible, politicians are also humans. Mr Audley Shaw is a politician, therefore he is fallible.
To be fallible means, among other things, that we are less than perfect, we make mistakes, we take wrong decisions, our actions do not always match our intentions.
We commit errors in thought and action. This is perfectly normal; it’s a natural consequence of our human fallibility.
Now, some errors are more consequential than others.
A housewife checking on her grocery bill who makes an error in her calculations may at worst find herself a little short of butter or salt that week, but the error only affects her family and in any event she’ll make do with a little creative substitution. If she’s profligate in her expenditure and incompetent in her accounting, it’s still a family matter and no-one else’s business.
A politician, however, has no such defence. His/her parsimony or profligacy with public monies, his/her competence or incompetence in managing the public finances, is everybody’s business. We can poke our noses into every cupboard and closet of the public exchequer to find the answers we seek. Nothing is out of bounds. Invocations of “national security”, “confidentiality”, “executive privilege” and other such pleas should be treated as the phantoms they are.
One of the weaknesses of our educational system is the absence of any instruction in basic economics. Know anyone besides business people who can read a balance sheet? Or knows the difference between GDP and GNP? Or understands the public accounts or the credit rating system?
Why, you might ask, should the man in Claverty Cottage need to know these things? So he doesn’t get sold a six for a nine.
Politicians will bend and twist the facts to serve their needs; that is the nature of their vocation. The journalists’ duty is to truthfully seek the facts; that is the purpose of their profession. And because the majority of the citizens are not accountants, economists, lawyers or finance ministers (thank God!) it falls to the journalist to enquire into these things on their behalf.
We do not need to justify ourselves by invoking the freedom of the press, the role of the fourth estate and similar shibboleths. We are what we are: busybodies, nosey parkers, muckrakers, newsmongers.
Above all, we are storytellers. We identify the story, pursue it doggedly, apply our forensic skills to get to the heart of the matter, and lay it bare before the public without tricks of light or smoke and mirrors. Straight, no chaser. And let the chips fall where they will.
As the good bacteria in the human body attack and repair faulty or malign cells, so we target injustice, corruption, greed, mismanagement, incompetence and hubris in the body politic. As the don says, it’s nothing personal, it’s just business. Troublemaking is what we do. Deal with it.
--endit--
Tuesday, June 17, 2008
The tale of a mouse and the end of TV as we know it
A Provocation for Media Futures
A one-day BBC conference. 20 June 2008. Alexandra Palace, London
Extracts from “Live TV over Live Web: Or what happens when John Logie Baird meets Tim Berners-Lee”, a talk presented by Milverton Wallace at the 4th One Day Conference on Service TV, Barcelona, 5th June 2008. (http://mobayboy.blogspot.com/2008/06/live-tv-over-live-web-or-what-happens.html).
In his speech at the 2008 Web 2.0 conference (San Francisco, April 2008), Clay Shirky tells this story:
“I was having dinner with a group of friends about a month ago, and one of them was talking about sitting with his four-year-old daughter watching a DVD. And in the middle of the movie, apropos nothing, she jumps up off the couch and runs around behind the screen. That seems like a cute moment. Maybe she's going back there to see if Dora is really back there or whatever. But that wasn't what she was doing. She started rooting around in the cables. And her dad said, "What you doing?" And she stuck her head out from behind the screen and said, ‘Looking for the mouse’ ”.
What lessons do you draw from the story? This is what Shirky concluded:
“Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for”.
So I put myself in the place of this four-year-old ten years from now and imagined what a 14-year-old would expect to see when she looks at a “TV” screen in 2018.
Here it is:
The first thing to notice:
• There's no brand. No BBC, No ITV, No TV3, No Canal Plus
• Just feeds
• Programmes must compete on quality, not marketing
Our 14-year-old doesn't care which brand supplies the programmes that make up the menu of things that interest her. She checks "nature", and the feed reader trawls the network and serves up, say, six items from various channels. She samples (previews) them, chooses the one which most closely matches her requirements, and decides whether to pay for it or endure the commercials and watch it for free.
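The selection flow described above — subscribe to a topic, let the reader trawl, preview, pick — can be sketched as a simple filter over feed items. This is an illustrative sketch only: the `FeedItem` record, the sample data and the rating-based ranking are assumptions for the example, not any real broadcaster's feed format.

```python
# Illustrative sketch of brand-blind, topic-based feed selection.
from dataclasses import dataclass

@dataclass
class FeedItem:
    channel: str      # who produced it -- irrelevant to the viewer
    category: str     # topic she subscribes to, e.g. "nature"
    title: str
    rating: float     # community rating, 0-5 (an assumed ranking signal)

def trawl(items, topic, limit=6):
    """Return up to `limit` items matching the topic, best-rated first.
    Note the channel brand plays no part in the ranking."""
    matches = [i for i in items if i.category == topic]
    return sorted(matches, key=lambda i: i.rating, reverse=True)[:limit]

feed = [
    FeedItem("BBC", "nature", "Reef Watch", 4.5),
    FeedItem("TV3", "nature", "Alpine Spring", 3.9),
    FeedItem("ITV", "sport", "Cup Final", 4.8),
    FeedItem("Canal Plus", "nature", "Deep Forest", 4.1),
]

menu = trawl(feed, "nature")
print([m.title for m in menu])  # → ['Reef Watch', 'Deep Forest', 'Alpine Spring']
```

The point the sketch makes is structural: the channel name is just another field, and nothing in the ranking consults it.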
The second thing is:
• Our kid is not a passive consumer
• She can socialize the experience
How? By subsuming it into her social networks.
She can:
• Share it with her friends
• Or blog it or bookmark it
• Review it
• Send comment to programme-makers
• Tag it and rate it
Adding a new social media application is as simple as downloading a widget.
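The social actions listed above all amount to attaching viewer-generated metadata to a programme. A minimal sketch of what such a widget might expose, with the `Programme` class and its methods invented for illustration (no real service's API is implied):

```python
# Minimal sketch of socialising a programme: tags, ratings, comments.
class Programme:
    def __init__(self, title):
        self.title = title
        self.tags = set()       # folksonomy labels added by viewers
        self.ratings = []       # star ratings, 1-5
        self.comments = []      # (author, text) pairs routed to programme-makers

    def tag(self, label):
        self.tags.add(label)

    def rate(self, stars):
        if not 1 <= stars <= 5:
            raise ValueError("rating must be 1-5 stars")
        self.ratings.append(stars)

    def comment(self, author, text):
        self.comments.append((author, text))

    def average_rating(self):
        return sum(self.ratings) / len(self.ratings) if self.ratings else None

p = Programme("Reef Watch")
p.tag("nature"); p.tag("ocean")
p.rate(5); p.rate(4)
p.comment("kid14", "More on coral bleaching, please!")
print(p.average_rating())  # → 4.5
```

Sharing, blogging and bookmarking would simply pass the same programme object (or its URL) into other services; the metadata travels with it.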
What's happening here?
Each programme is wrenched from its corporate wrapper and stands or falls on its own merits in the marketplace. It’s been individualized. Redistributed. Networked.
Question: Are the TV networks ready for this de-centralised world?
Ready or not, this is the shape of things to come.
Why?
The network favours:
• Plurality over monopoly
• Personalization over aggregation
• Sharing and networking over monopoly and control.
Because of this, consumer electronics companies will build web services into the TV screen; anything less would not command the attention of our kid.
Oh brave new world that hath such features in it (begging your pardon, Mr Huxley).
Friday, June 06, 2008
Live TV over Live Web: Or what happens when John Logie Baird meets Tim Berners-Lee
Milverton Wallace
Notes of a speech delivered at the 4th One Day Conference on Service TV
(http://www.activamultimedia.com/conference )
Barcelona, 5th June 2008
Centre de Cultura Contemporania de Barcelona
There was a British entertainer called Max Bygraves who used to begin his show by saying, “I wanna tell you a story”. Well, I want to tell you a story but I’d like you to read it yourself. It’s quite short. Keep it in mind; we’ll come back to it shortly.
Here it is:
The tale of a mouse
“I was having dinner with a group of friends about a month ago, and one of them was talking about sitting with his four-year-old daughter watching a DVD. And in the middle of the movie, apropos nothing, she jumps up off the couch and runs around behind the screen. That seems like a cute moment. Maybe she's going back there to see if Dora is really back there or whatever. But that wasn't what she was doing. She started rooting around in the cables. And her dad said, "What you doing?" And she stuck her head out from behind the screen and said, ‘Looking for the mouse’ ”.
[From an edited transcription of a speech given by Clay Shirky at the Web 2.0 conference, April 23, 2008.]
Before we come back to our story I’d like to briefly sketch some features of the social and cultural world brought into being by two Brits: John Logie Baird and Tim Berners-Lee.
Baird invented the first television system which actually worked, and although his mechanical system was quickly overtaken by all-electronic systems, he was the first to broadcast colour TV pictures. Henceforth broadcasters would bring the world to your living room seven days a week. No need to do anything; just sit back and enjoy it.
By the early 1950s, a few years after he died, the outlines of the TV business as we know it today were essentially in place.
Fifty years later, in March 1989, Tim Berners-Lee, an English scientist working at CERN, the European research centre in Switzerland, completed the first draft of his proposal for the WWW, which envisaged people a world apart producing and sharing hyperlinked data. So at birth, the web was conceived as a read-write, ie, a two-way system of communication. [For many years the “write” half of the web was nearly forgotten as researchers and institutions focused on sending unalterable documents with hyperlinks, so you could “read” them but not change them. Thankfully, the growth of wikis and blogs restored the duality].
The structure and operation of the two systems have developed in ways which reflect the motives and impetus of their invention.
Let’s compare them:
Organisation:
• The world of TV is built on Aggregation—regulation—centralisation—control.
NB: This organisation is not entirely political or driven by pure business motives. Eg, scarcity of spectrum leads to rationing and regulation. But it is convenient for reasons of state. Control of the airwaves is one of the few beliefs shared by democrats and dictators.
TV achieves scale by mass marketing and advertising.
Since users/viewers are only consumers, broadcasters have to sell their wares to them; like Sisyphus endlessly pushing the boulder up the hill, they have to persuade you to buy day after day after day.
Thus the multiplier effect of a participatory culture is squandered.
In a world where almost all the personal, social, business and recreational transactions of daily life are moving to the web, this makes no sense.
The world of the web ie the networked world, is built on:
• Participation—collaboration—discovery—sharing
• Users as producers
• An architecture of participation
It achieves scale by employing web-based network effects.
Here friends send links, IMs and URLs to each other. These cycles of reinforcement add value no amount of advertising can achieve.
Management and distribution:
In the TV broadcast world, management is by an architecture of control exercised via regulation, licensing, franchising, IP enforcement, affiliation etc.
In the network world, productions are distributed by feeds, links, bookmarking, social networks and other social media tools.
So why is TV not embracing social media when it’s pretty clear that
a) openness beats closed most, if not all of the time
b) participation beats control
Some reasons:
1) In the old media business, control is the surest way to win. Exclusive ownership of a chunk of the airwaves is a lucrative business. Lew Grade, one of the early British TV moguls, is reported to have boasted that his TV franchise in London was like owning a press to print his own money.
2) The brutal unbundling of the newspaper business by quick-footed web start-ups is not (yet) threatening the survival of the big TV conglomerates.
[NB: high-quality video production is hard to do. But the industry should not become complacent because of the (current) low quality of the fare on YouTube. Remember DTP? The early web sites? And YouTube is not the only kid on the block. There are other video networks with fairly high production values, eg, Miro, Fireant, Mefeedia, all of which came out of the videoblogging movement.]
3) There is a cultural bias that creativity and innovation are the preserve of a highly trained “producer elite”, ie, that they come from the centre.
[Note on the core architecture of the net, which is built in such a way that most of the innovation comes from the edges. This is because nobody owns it, everybody can use it, and anybody can improve it. The expression used by network architects is that “the network is stupid”]
4) Social qualities such as collaboration, participation and sharing are not valued by media executives, because they are information values and not easy to monetize like “pay-per-view” or VOD.
So what happens when these two worlds meet?
First, IMHO, there’s bewilderment. How can a video site such as YouTube, barely two years old, become such a dominant force so quickly? It is the number one site and accounts for nearly four percent of all Internet traffic.
Second, the old methods—which worked so effectively against Kazaa and Napster—of meeting the challenge of upstarts who threaten to web-roll the mainstream media into oblivion no longer work. YouTube welcomed the protective embrace of the mighty Google and is now more powerful as a result.
Now Redlasso, a web TV and radio web clipping service for bloggers, which employs the now familiar social media elements (search, discovery, sharing), is threatening to napsterize TV and radio networks. It is the latest upstart to have corporate lawyers issuing “cease and desist” orders.
Third, as Internet ad revenue is poised to overtake television ad spend in the UK, according to the latest ZenithOptimedia report (May 2008), most commercial TV companies will rush to build IPTV/broadband services. This promises to be the new front line in the battle between TV and the Web. Judging from past experience, the TV networks will lose the battle. According to ZO, of the £4.36 billion projected UK Internet advertising spend in 2010, £2.46 billion will be from search advertising. And in this arena, Google is the undisputed champion, so the outlook for the TV companies is bleak.
We return to the story we began with.
What lessons did you draw from it? Here’s what Shirky concluded:
“Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for”.
So I put myself in the place of this four-year-old ten years from now and imagined what she’d expect to see when she looks at a “TV” screen.
Here it is.
The first thing to notice is that there's no brand. No BBC, No ITV, No TV3, No Canal Plus. Just feeds. Here the strength of the channel hinges on the quality of its offerings, not the power of its marketing.
Our 14 year-old doesn't care which brand supplies the programmes that make up the menu of things that interest her. She checks "nature", and the feed reader trawls the network and serves up say, six items from various channels. She samples (previews) them and chooses the one which most closely matches her requirements and decides whether to pay for it or endure the commercials and watch it for free.
The second thing is that our kid is not compelled to be a passive consumer, but is able to socialize the experience by subsuming it into her social networks.
If it's good, she might share it with her friends, or blog it or bookmark it. If she has a critical issue with some aspect of the programme, she writes a review or sends a comment to the programme-makers.
She can tag it and rate it and send it as a gift to her classmates. Adding a new social media application is as simple as downloading a widget.
What's happening here? Each individual programme is wrenched from its corporate wrapper and stands or falls on its own merits in the marketplace. It’s been individualized. Redistributed. Networked.
Are the TV networks ready for this de-centralised world? Ready or not, this is the shape of things to come. The network favours plurality over monopoly; personalization over aggregation; sharing and networking over monopoly and control.
Oh brave new world that hath such features in it (begging your pardon, Mr Huxley).
Monday, October 16, 2006
The new Corinthians: How the Web is socialising journalism
James Cameron (1911-1985), arguably the greatest British journalist of the last 100 years, always insisted that journalism is a craft. Now "craft" implies pride in work, integrity in dealing with customers, rites of passage, and long years of training to acquire the requisite skills/knowledge.
But that was then. Today, journalism is a "profession". Many aspiring hacks now need a university degree or other accredited "qualification", and, except in the Anglo-American world, a government-issued licence to "qualify" as a journalist. In some countries you’re compelled by regulations to belong to a recognised association and to obey its code of standards in order to practise and earn a living as a journalist.
The march towards professionalism began with the rise of the mass media in the latter part of the 19th century, a development made possible by the invention of the rotary printing press, cheap papermaking from wood pulp, and mass literacy.
Cheap mass circulation newspapers gave proprietors the kind of political influence they never had before. The press was becoming an increasingly powerful social force, a counter-balance to big business and the state. However, this power was fragile. Corporations and governments resisted the press’s self-appointed role of watchdog and muckraker. But the press barons fought back.
In response to state and corporate resistance to openness and disclosure of information, they raised the banner of "the public’s right to know" as a fundamental democratic freedom. To counter charges of irresponsible reporting, journalists developed rigorous techniques for gathering, distilling and presenting information; and, to standardise these procedures and wrap them in an ethical framework, a normative model for reporting, carved in stone, was crafted: impartiality, objectivity, accuracy, transparency.
Thus was Cameron’s craft gradually "professionalised", and, in the process, turned into an exclusive club with a privileged membership.
Today, this carefully constructed edifice is crumbling as the read/write web blows away the need to be a member of any such club to be able to practise journalism. Arguments about who is or isn’t a journalist are a sideshow, a preoccupation mostly of self-styled guardians of truth. The inexorable fact is that the genie is out of the bottle and a significant number of "unqualified" people are "doing journalism" without permission from anyone.
So, let us accept that the "authorities" can no longer decide who is or isn’t a journalist. We have no choice. But we need to ask some crucial questions: Who will now enforce the rules and codes? What is to become of them? Should we care? Do we still need them? Are they "fit for purpose" in the digital age?
Digital media, and in particular its social offspring—social media such as blogs, vlogs, wikis and IM; social networks such as MySpace, Facebook, Bebo, Tagworld and Orkut; and social bookmarking services such as Furl, Del.icio.us, Digg, StumbleUpon and MyWeb—have enabled the amateurisation of the media. The barbarians have entered the gates. Is the empire on the verge of collapse?
Nowadays, the word "amateur" is deployed by media professionals to belittle the media-making efforts of bloggers and others who create media productions outside the journalism guilds. Such reporting is deemed "unreliable", "biased", "subjective"; its practitioners "unaccountable", their facts and sources "unverifiable".
All of this must be puzzling to historians of the modern mass media. Consider the first newspaper in English, a translation of a Dutch coranto, printed in Amsterdam in December 1620 and exported to England. It began with an apology, a typographical error, a number of lies and disinformation. The apology appeared in the first line of the publication: "The new tydings out of Italie are not yet com". The error (in spelling) was in the date: "The 2. of Decemember". The lies? The dates of many events were brought forward to make the news appear fresher than it was. The disinformation? Many news items in the Dutch edition which might have displeased the English government were not translated for the English edition, for fear that the authorities would seize or ban the publication [2]. Verily, a very unprofessional beginning!
And who were the "reporters" for the early periodical press? Postmasters, clergymen, sheriffs, burghers, shipping clerks, court officials, merchants, travellers. In a word, "amateurs"!
So now we’ve come full circle: from 17th/18th century amateurism, to 19th/20th century professionalism, and back to amateurism in the 21st century.
Here we use "amateur" in the noble, Corinthian sense—a person or activity motivated by love. And therein lies the problem. Amateur ethics, motivated by love, crash against professional ethics, driven by commercial gain. Can they be reconciled?
The opposing principles characterising the amateur and professional worldviews may be summarised thus:
Amateur: play for love. Professional: play for pay.
Amateur: participation primary, winning secondary. Professional: winning is everything.
Amateur: play to develop team spirit, cooperation and organisational skills. Professional: play only to win.
Amateur: fair play, the game's the thing. Professional: zero-sum game, win at all costs.
However, the differences between 17th century amateur reporters and 21st century citizen journalists go beyond these stark polarities. The former contributed to the new media of their age but had no influence or control over its operation, growth and development; their 21st century counterparts, on the other hand, are contributing to a new media which they themselves are creating. What started out as people’s desire for unfiltered, independent self-expression is threatening to overthrow the old order in the world of media. How come?
The old media model was, and is, based on assembling disparate and varied information—news reports, share prices, weather reports, crosswords, classified ads, sports scores, horoscopes and so on—and selling this ensemble to readers. Today that cornucopia is being unbundled: content is cut loose from its formal wrapper, messages from their media container. (Note the dire fate of newspaper classified ads, financial information, product reviews, real estate and job ads as they become Craigslisted and Monsterised.)
This unbundling has serious implications for the economic foundation of the media business as we’ve known it. For the journalists employed in these institutions, two critical changes, among many, stand out: their role as gatekeepers between you and the world outside your window is irrevocably undermined, and the line between themselves as producers of "tydings" and the former audience as consumers has become blurred.
There’s a big misconception among professional journalists that the new media is about news. Wrong. It’s about self-expression; it’s about participating in defining and shaping the information/communication environments in which we live. The various forms of digital media—blogging, podcasting, social bookmarking and networking—are merely the means and channels for achieving this. An entire generation—call them the digital natives or the new Corinthians—is creating an open, collaborative, networked communications infrastructure in opposition to the closed, top-down, hierarchical traditional media organisations which have dominated the media universe since the 19th century.
Demanding that these digital natives adhere to old methods of discovering and learning about the world won’t do. They’re crafting their own methods, thank you very much. Ten years ago Slashdot, Kuro5hin and others pioneered peer-to-peer coverage of technology. Stories gained credibility through the trust and reputation of peers. Digg has added collaborative filtering via powerful algorithms; Del.icio.us lets you organise the world via shared social taxonomies. Even some of the backend functions of the news business have been socialised: Wikipedia for reference, Answers.com for expert sources, Flickr for pictures.
All these new ways of understanding, making and managing media are only a specific case of the mass participatory culture made possible by digital technology. All of a sudden, unprecedented numbers of people can express themselves and connect with each other on a global scale. And here’s a salient feature of this mass participation: it’s organised activity without a central organisation. More precisely, it’s a self-organised collaborative endeavour in which people combine their ideas, knowledge, talents and skills without a hierarchy controlling and co-ordinating their activities.
Confronted by a disruptive technology, process or service, the disrupted party has only a limited number of responses: they can ignore it—not a viable choice for survival; they can try to destroy it—this is the "kill the messenger" option which may destroy the messenger (e.g. Napster) but fail to kill the message (i.e. file sharing); they can posit competitive offerings—but note the fate of newspaper "facsimile editions" versus RSS; or they can co-opt or embrace the new—note media mogul Rupert Murdoch’s "Damascene conversion" and his subsequent moves in the digital media space [3].
It is hard for a mature, long-dominant culture to make radical changes to its ideology and practice. And that’s why many newspaper groups still cling to the command and control model even as their businesses head for the butcher’s [4] and their customers "head into the cemetery" [5]. Bold and adventurous though he is, Rupert Murdoch has only chosen co-optation (buying the number one social networking service MySpace); full embrace of the new world is a revolutionary step, a rupture in the old order. Anyone doubting the difficulty of such a move need only look at the upheavals and dislocations being experienced by the UK’s Telegraph Group as it re-engineers its news gathering and reporting processes towards a networked journalism model.
The momentum of change is with the new Corinthians. The open source ethos and method of work and production, which began in the periphery with collaborative software development, is moving to centre stage by way of the blogging revolution and open standards in web services. In tagging, syndication, ranking and bookmarking we have the rudiments of a peer-to-peer trust, reputation and recommendation system well suited to self-regulating collaborative networks [6]. These could be taken as analogous, though not identical, to the "checks and balances" of traditional journalism, but we shouldn’t belabour the points of difference too much.
In mainstream media "editorial authority" is concentrated in the hands of a single, all-powerful person whereas in social media it is distributed among many voices. This could be seen as a weakness and critics point to it as the Achilles heel of Web journalism. Yet in many instances, the networked world, e.g. the blogosphere, has proven to be much better (and quicker) at correcting errors, falsity, lies and distortions than the mainstream media.
As the number of people who participate in open, collaborative, networked communications increases, the veracity of messages will improve and the need for corporate gatekeepers and standards-setters will decrease. Will we all become Corinthians then?
© 2006 Milverton Wallace
Notes
1) See http://tinyurl.com/ykdalv
2) Mitchell Stephens, A History of News. Wadsworth Publishing. 1996.
3) "Speech by Rupert Murdoch to the American Society of Newspaper Editors"
(http://www.newscorp.com/news/news_247.html)
4) Vin Crosbie, "A Date with the Butcher" (http://tinyurl.com/ljjh3)
5) "Buffett: Newspapers are ‘a business in permanent decline’ " (http://tinyurl.com/ycx4a5)
6) Tim O’Reilly, "The Architecture of Participation" (http://www.oreillynet.com/pub/wlg/3017)
Wednesday, September 21, 2005
The digital media challenge from the periphery
By Milverton Wallace
A new date has been indelibly imprinted on the global memory: 7 July 2005, the day of the first suicide bombings in London.
As with 9/11, people might in future ask each other: "Where were you when you first heard the news on 07/07?" Unlike 9/11, they might also ask: "Which blogs did you read to keep up with the news?" or "Did you see the pictures on Flickr?"
In September 2001, America and the world were informed by newsgroups, mailing lists, bulletin boards, web sites and plain old email, and by eyewitnesses shooting amateur video footage. For the first few days after the events, the mainstream media were sidelined and trumped by amateur reporters. It was a major warning to the former media monopolies about the power and reach of the Internet.
On 7 July 2005 in London, they were again caught unawares. TV news programmes and newspaper web sites were reduced to grabbing images from Flickr and appealing to bloggers to submit pictures. On the newsstands, the daily newspapers never looked more outdated and irrelevant. While the Web was buzzing with images and live reports of the events, their front pages trumpeted London’s success, the day before, in winning the bid for the 2012 Olympic Games.
Even if it was an overstatement to describe the reporting of the July events in London as a revolution in communication, as many mainstream media commentators did, it certainly was a bold announcement that the reporting of major events would never again be the exclusive preserve of the mainstream media. The amateur reporter had made an entrance and would be a permanent guest at the table where the first draft of history is written.
What was the difference between 9/11 New York and 07/07 London? Back then the army of New York citizens who supplied eyewitness reports and pics to the press, and who went online to email friends and relations with news about family members, did not see themselves as citizen reporters. They were just ordinary folk caught up in an extraordinary situation who were simply doing their civic duty.
Not so our bloggers, mobloggers and videobloggers of 07/07 in London. Four years on from 9/11, people are conscious of themselves not only as newsmakers but as news reporters and producers. Within minutes of the first blast, commuters were shooting still and video images of the carnage inside the underground trains and posting them on weblogs, complete with copyright notices, as soon as they reached the safety of the streets above ground. The copyrights in the images were asserted under the Creative Commons licensing regime. Shortly after the second London bombings on 21 July, a new agency was created specifically to market citizen reports and pics to the world’s media.
What had changed between 2001 and 2005? On that infamous September day in New York, eyewitnesses were happy to hand over their images and first-hand accounts to the traditional news outlets. They had little choice: in 2001 there were fewer than 100,000 blogs, and the vast majority were owned by computer geeks. Since then, a combination of technological innovations—easy-to-use blog creation software, cheap web storage, free web photo archives and affordable mobile camera phones—has resulted in a rapid rise in the number of blogs. By the time of the London bombings, Technorati, a leading blog search engine, was tracking over 16 million weblogs, a 160-fold increase since 2001. According to David Sifry, founder of Technorati, 30,000-40,000 blogs are being created every day and the blogosphere is doubling every five months.
(See http://www.sifry.com/alerts/archives/000298.html)
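The growth figures quoted above invite a quick back-of-the-envelope check. The sketch below (in Python, taking the article's own endpoints as assumptions, not independently verified data) confirms the 160-fold claim and shows that it implies one doubling roughly every six months over the period, in the same ballpark as Sifry's five-month estimate.

```python
from math import log2

# Figures as quoted in the text (treated as assumptions):
blogs_2001 = 100_000        # "fewer than 100,000 blogs" in 2001
blogs_2005 = 16_000_000     # "over 16 million weblogs" by July 2005

# The 160-fold claim checks out:
growth = blogs_2005 / blogs_2001
print(growth)                       # 160.0

# log2 of the growth gives the number of doublings; spread over
# roughly 46 months (late 2001 to July 2005) that implies about
# one doubling every six months.
doublings = log2(growth)
print(round(doublings, 1))          # 7.3
print(round(46 / doublings, 1))     # 6.3 months per doubling
```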
But why was the press caught so unprepared? After all, the movement for citizen participation in public newsgathering and reporting has been underway for some time. The November 2004 US presidential election was a wake-up call for the mainstream media, with the Dean campaign demonstrating in a dramatic way the power of social software to turn ordinary citizens into political activists, and conservative bloggers probably tipping the balance by mobilising the Republican vote. The second wake-up call was the Asian tsunami of December 2004, when the traditional media had no assets on the ground and the region’s bloggers and videobloggers stepped up to lead the global reporting of the disaster.
It is understandable that the old media find it hard to embrace the new realities of decentralised newsgathering: they have big investments in plant, machinery, distribution channels and monopoly advertising revenues to protect, all of which require centralised control of newsgathering and distribution. (But note that Rupert Murdoch, the biggest media mogul, has recently signalled his recognition that social media is a major threat to News Corp’s global media empire; see his now famous speech at http://www.newscorp.com/news/news_247.html.)
The independent media have no such baggage. They can and should convert their readers and viewers from mere sources into collaborators, not only because those readers generally know more about most things than professional journalists, but also because they are more likely to be at the scene of events and to possess the local knowledge most reporters cannot hope to have.
Start by deciding the level of engagement you want to establish with them. This ranges from inviting readers to comment on stories to fully integrating citizen reporting into your publication. See Steve Outing’s "The 11 layers of citizen journalism " (http://poynter.org/content/content_view.asp?id=83126) for a useful guide to the potential and pitfalls of the different levels of interaction.
Second, set up panels of citizen contributors drawn from different social strata, occupational groups and professional sectors. These will be your eyes and ears, a way to mobilise the diverse perspectives and local knowledge of your readers/viewers so you’re on top of stories before they become news. There are numerous open source tools to help to manage such networks. (see, for example, http://civicspacelabs.org/home/node/12296)
Third, institute a programme to familiarise collaborators with the professional practices and standards of the journalism profession (See www.j-learning.org). There’s nothing extraordinary about being a journalist; the principles we hold dear—trust, duty, freedom—are also cherished by the majority of the people we write for.
All of this does not mean that it’s possible, or even desirable, to turn every citizen into a journalist. Nevertheless, it is prudent to educate people about the codes of the profession, including the ethical and legal issues, so that they do a better job of truthfully seeking and reporting the facts. Some citizen journalism operations, e.g., Dan Gillmor’s Bayosphere, ask collaborators to sign a "Citizen Journalist Pledge" (see http://bayosphere.com/node/594) and many bloggers and citizen reporters have signed up to honour tags (see http://honortags.com/) in an effort to establish a framework of trustworthiness in this new frontier.
The independent media must carefully build the infrastructure and the culture of this new practice, because what we have here is not simply a new lifestyle based on the latest digital gadgets. The new form of journalism is a challenge to the prevailing monopoly of knowledge (which is also a monopoly of power) exercised by the mainstream media.
The old order is changing, and it is fitting that this challenge is coming from the periphery—from the former audience who are no longer willing to be mere consumers, and from the new digital media that is destined to replace the old. You can either embrace it or be engulfed by it.
© Milverton Wallace, August 2005
A new date has been indelibly imprinted on the global memory: 7 July 2005, the day of the first suicide bombings in London.
As with 9/11, people might in future ask each other: "Where were you when you first heard the news on 07/07?" Unlike 9/11, they might also ask: "Which blogs did you read to keep up with the news?" or "Did you see the pictures on Flickr?"
In September 2001, America and the world were informed by newsgroups, mailing lists, bulleting boards, web sites and plain old email, and eyewitnesses shooting amateur video footage. For the first few days after the events, the mainstream media were sidelined and trumped by amateur reporters. It was a major warning to the former media monopolies about the power and reach of the Internet.
On 7 July 2005 in London, they were again caught unawares. TV news programmes and newspaper web sites were reduced to grabbing the images from Flickr and appealing to bloggers to submit pictures to them. On the newsstands, the daily newspapers never looked more outdated and irrelevant. While the Web was buzzing with images and live reports of the events, their front pages trumpeted London’s success in winning the bidding for the 2012 Olympic games the day before.
Even if it was an overstatement to describe the reporting of the July events in London as a revolution in communication, as many mainstream media commentators did, it certainly was a bold announcement that the reporting of major events would never again be the exclusive preserve of the mainstream media. The amateur reporter had made an entrance and would be a permanent guest at the table where the first draft of history is written.
What was the difference between 9/1 New York and 07/07 London? Back then the army of New York citizens who supplied eyewitness reports and pics to the press and who went online to email friends and relations with news about family members, did not see themselves as citizen reporters. They were just ordinary folk caught up in an extraordinary situation who were simply doing their civic duty.
Not so our bloggers, mobloggers and videobloggers of 07/07 in London. Four years on from 9/11, people are conscious of themselves not only as newsmakers but as news reporters and producers. Within minutes of the first blast, commuters were shooting still and video images of the carnage inside the underground trains and posting them on weblogs, complete with copyright notices, as soon as they reached the safety of the streets above ground. The ccpyrights in the images were asserted under the Creative Commons licensing regime. Shortly after the second London bombings on 21 July, a new agency was created specifically to market citizen reports and pics to the world’s media.
What had changed between 2001 and 2005? On that infamous September day in New York, eyewitnesses were happy to hand over their images and first-hand accounts to the traditional news outlets. They had little choice: in 2001 there were fewer than 100,000 blogs, and the vast majority were owned by computer geeks. Since then, a combination of technological innovations--easy-to-use blog creation software, cheap web storage, free web photo archives and affordable mobile camera phones--has resulted in a rapid rise in the number of blogs. By the time of the London bombings, Technorati, a leading blog search engine, was tracking over 16 million weblogs, a 160-fold increase since 2001. According to David Sifry, founder of Technorati, 30,000-40,000 blogs are being created every day and the blogosphere is doubling every five months.
(See http://www.sifry.com/alerts/archives/000298.html)
But why was the press caught so unprepared? After all, the movement for citizen participation in public newsgathering and reporting had been underway for some time. The November 2004 US presidential election was a wake-up call for the mainstream media, with the Dean campaign demonstrating in a dramatic way the power of social software to turn ordinary citizens into political activists, and conservative bloggers probably tipping the balance by mobilising the Republican vote. The second wake-up call was the Asian tsunami of December 2004, when the traditional media had no assets on the ground and the region's bloggers and videobloggers stepped up to lead the global reporting of the disaster.
It is understandable that the old media find it hard to embrace the new realities of decentralised newsgathering: they have big investments in plant, machinery, distribution channels and monopoly advertising revenues to protect, all of which require centralised control of newsgathering and distribution. (But note that Rupert Murdoch, the biggest media mogul, has recently signalled his recognition that social media are a major threat to News Corp's global media empire; see his now-famous speech at http://www.newscorp.com/news/news_247.html.)
The independent media have no such baggage. They can and should convert their readers and viewers from mere sources into collaborators, not only because readers generally know more about most things than professional journalists, but also because they are more likely to be at the scene of events and to possess the local knowledge most reporters cannot hope to have.
Start by deciding the level of engagement you want to establish with them. This ranges from inviting readers to comment on stories to fully integrating citizen reporting into your publication. See Steve Outing's "The 11 layers of citizen journalism" (http://poynter.org/content/content_view.asp?id=83126) for a useful guide to the potential and pitfalls of the different levels of interaction.
Second, set up panels of citizen contributors drawn from different social strata, occupational groups and professional sectors. These will be your eyes and ears, a way to mobilise the diverse perspectives and local knowledge of your readers/viewers so you're on top of stories before they become news. There are numerous open source tools to help manage such networks. (See, for example, http://civicspacelabs.org/home/node/12296)
Third, institute a programme to familiarise collaborators with the professional practices and standards of the journalism profession (See www.j-learning.org). There’s nothing extraordinary about being a journalist; the principles we hold dear—trust, duty, freedom—are also cherished by the majority of the people we write for.
All of this does not mean that it's possible, or even desirable, to turn every citizen into a journalist. Nevertheless, it is prudent to educate people about the codes of the profession, including the ethical and legal issues, so that they do a better job of truthfully seeking and reporting the facts. Some citizen journalism operations, e.g., Dan Gillmor's Bayosphere, ask collaborators to sign a "Citizen Journalist Pledge" (see http://bayosphere.com/node/594), and many bloggers and citizen reporters have signed up to honour tags (see http://honortags.com/) in an effort to establish a framework of trustworthiness in this new frontier.
The independent media must carefully build the infrastructure and the culture of this new practice, because what we have here is not simply a new lifestyle based on the latest digital gadgets. The new form of journalism is a challenge to the prevailing monopoly of knowledge (which is also a monopoly of power) exercised by the mainstream media.
The old order is changing, and it is fitting that this challenge is coming from the periphery--from the former audience who are no longer willing to be mere consumers, and from the new digital media that are destined to replace the old. You can either embrace the change or be engulfed by it.
© Milverton Wallace, August 2005
Notes towards a literacy for the digital age
by Milverton Wallace
The kid enters the coffee shop and is greeted excitedly by her friends. They jostle to exchange high fives, knuckle greetings and finger snaps with her.
What is the cause of their admiration? Her Rocawear jeans? Her high tan Jimmy Choo boots? Her Armani sunglasses? Her Karl Lagerfeld jacket? Nah! It's the gleaming silver object dangling from a pair of white wires plugged into her ears.
It’s an iPod, the must-have digital gadget of today's young people. With this tiny digital audio player Apple stole Napster's thunder and replaced the CD player as the cutting-edge portable music player of choice.
But if you think this is just another device for playing pre-recorded music, think again. Within two years of the iPod's debut, developers had created software to allow anyone to produce audio content -- words and music -- for it and other portable digital players. This technology, known as podcasting, turns consumers into producers, and every wannabe DJ and talk-show host into a broadcaster. It's a distribution channel that plugs directly into the hippest, hottest communication network on the planet.
In advanced industrial countries, and increasingly in less-developed regions, social life is being digitised. Cheap camera phones and videocams allow everyday activities to be recorded and stored on personal computers or online services; more and more conversations are conducted via email, IM and SMS; private thoughts, opinions and reflections on public affairs or private passions are instantly posted on weblogs. Because they are in digital form, all these different types of record -- moving images, photographs, sounds and texts -- can be stored on computers. And the Internet makes it possible for all of this to be shared with family, friends and strangers.
Welcome to the agora of the 21st century, a space where a diverse array of digital modes of communication intersect in cyberspace -- email, instant messaging, text messaging, multimedia messaging, weblogging, audioblogging, moblogging, mobcasting, podcasting.
Like it or not, this is the new cultural landscape for learning, entertainment, and communicating with each other. And it is being constructed without consultation with, or permission from, regulatory authorities or self-appointed gatekeepers.
All well and good, but what is the point of all this digital g-soup when school-leavers can't spell or do sums, or believe Winston Churchill was an insurance salesman? Relax. This isn't the end of literacy, just a groping towards a new kind of literacy capable of fulfilling the knowledge acquisition, informational and cultural needs of the digital age.
There is nothing immutable about the mental and manual competencies that constitute literacy. What it means to be literate has constantly changed throughout the ages as economic, social and cultural necessities impose new demands on the population. So, too, have the number and classes of people who needed to possess these competencies. In ancient Egypt, the ability to read and write, and therefore to manage the state, was a monopoly of the priestly caste and court officials. On the other hand, the assembly, the council and the court, the key institutions of the first democracy in Athens, championed by the literate Pericles, were made up primarily of ordinary people [1] who were mostly educated in the oral, not the literate, culture of 5th century BC Greece. In both cases the vast majority of the people did not need to be literate; you didn't need reading, writing and arithmetic to be a farmer, an artisan or a soldier [2]. The same was true in the ancient Chinese, Persian, Babylonian and Roman empires.
The industrial age changed everything. The mass manufacturing of goods, the introduction of machine tools and the technologising of ancient craft skills required a work force which could read, write, and do sums. The ceaseless need to innovate in order to remain competitive forced workers to think critically and creatively about the industrial processes in which they were engaged. This led them to invent new goods and technologies to feed the insatiable engine of industrial capitalism. For the first time in human history, education, both literary and technical, became a job requirement.
Thus the invention of printing was a pre-requisite of the industrial age [3]. Mechanical reproduction of texts was superseded by mass production of books and newspapers to satisfy the growing need for widespread diffusion of the elements of literacy required for industrial production and social advancement.
Mass production of information and knowledge produced the mass media which, by the end of the 19th century, became a monolith that controlled access to information about everyday life. Other information monopolies arose during the period, most based on close and exclusive control of specialised knowledge: trade guilds, which regulated the transmission of craft skills; learned societies and associations, which regulated access to scientific information and entry into the professions. These and other institutions were important in codifying and regulating the competencies which powered industrial production and commerce. However, the mass media occupy a special place because of their central role in the organisation and control of social communications, and hence the structure of cultural, political and economic life [4].
The trouble with monopolies is not only that they tend to centralise power, but they also wield this power to enforce their definitions of reality on the world. So the scientific establishment decrees that a particular body of knowledge is "science", and everything else is hocus-pocus; the medical authorities declare that a favoured corpus of practices is "medicine", and all others are quackery; and the teaching profession holds that literacy is the three "Rs", and evermore shall it be.
But these edicts are losing their force and authority as people first challenge the information/knowledge monopolies and then develop their own communication media to find things out for themselves and explore truths other than received wisdom or the official version. Rather than the established media talking to them, people are talking to one another in their own self-created space, in their own time and at their own speed [5].
To participate in creating this autonomous space, you must possess not only the print literacy of the industrial age but also the competencies required to engage in online conversations and be at ease with using 21st century digital products and services.
What are the competencies that should be included in any model of literacy for the digital age?
First, you should get used to interacting with screen-based devices for sending, receiving and viewing digital information, because this is the way one interacts with the interface -- the collection of words, icons, buttons, menus and other symbols -- connecting the user to the database which stores the data and the network which transmits it. To interact with your computer, mobile phone, PDA, media player etc. requires that you have the knowledge to understand these symbols and the tactile skills to manipulate them to achieve a desired purpose, e.g., open a document, save a file, view a picture, play a song, send a message.
Second, you must be able to create a document, store it and retrieve it at a later date. By "document" is meant any information element or object in digital form -- words, pictures, sounds, still and moving images.
Third, you need to acquire some knowledge of the theory and practice of hypermedia [6], because it is in this space that information is communicated on the screens of computers and digital media devices. A paper document allows only text and two-dimensional images, while radio and television have been completely linear media. The hypermedia document, now the standard form in which information is displayed and communicated, is changing all that. By allowing interaction with non-linear, multi-dimensional documents, it has radically altered the practice of reading and writing.
Hypermedia is the electronic palette on which diverse information objects -- texts, still and moving images and sound -- combine. Cross-referencing devices called hyperlinks allow us to create a non-linear mode of information production and consumption which more closely follows the patterns of thought. Hyperlinks are gateways to other "objects" -- click on one and the desired object is retrieved and played. This is the typical organisation of a Web document.
But some features of a hypermedia document are counter-intuitive (or, at least, contrary to the processes we have learned through paper-age education) and so require new literacies in order to make sense of the message.
For example, a key feature of a hypermedia composition is that all objects have equal status. They can therefore be read -- and possibly understood -- in any order, so you can enter the hypermedia space at any point, and structure your reading of the story in any manner you choose. As a result, each individual reading experience is different, as are the connections and associations made.
We have to learn how to use this space, to make sense of it. How do we critically evaluate what we see and hear? How do we assign weight and significance to the objects? Clearly, we need to learn to use a range of tools to help us evaluate the accuracy, authority, completeness, bias and timeliness of the information.
This goes against much that we know about written communication since the invention of the codex, the form of the book that succeeded the scroll as the repository of written knowledge and culture.
The codex transformed the way texts were written -- introducing page numbers, chapters, indexing -- and therefore the way authors constructed their work. It also changed the reading process: readers could now navigate from one page to another with ease, quickly find specific items, mark passages for future reference, and write while reading. The codex introduced a linear order and sequence in which texts are to be read and understood, and a hierarchy of elements -- title page, imprint, contents page, preface, introduction, main body, references, bibliography, appendices. To be literate meant understanding these elements and what they signify.
The book is both receptacle and transmitter of knowledge. The change in its material form, from scroll to codex, engendered a revolution in writing and reading. People had to learn new skills in order to produce and consume information and knowledge in the new form. The same is the case with the change to a screen-based, hypertext form of information and knowledge creation and dissemination, with one big difference.
The move from an oral to a literary culture was a drastic change from social, collective learning to private, individual learning; from the primacy of the voice to the primacy of the text; from understanding of the world through public performances and storytelling to understanding through private reading and personal reflection. Now these two modes are united in cyberspace as hypermedia combines almost all aspects of oral and literary cultures. Every minute of every day the Internet buzzes with the sound of music and of voices in many tongues; with animations and videos in glorious technicolor; with words and pictures; with the colour of magic, to paraphrase Arthur C. Clarke [7].
Here is the genius of cyberspace: it has created a world of endless possibilities by refusing to be constrained by what went before.
In most cosmologies, the world begins with the Word. In the pre-industrial and industrial eras, two expressions of the Word, reading and writing, have been central to people's notion of literacy. Digital technology doesn't abolish literacy; what it augurs is a radical re-definition of it. This is nothing new -- we have been here before. Think of the momentous, world-changing shift from oral to print culture; think also of the changes in writing instruments (stone, stick, pen), writing materials (bark, leaf, clay tablet, parchment, paper), text production processes (from handwriting to hot-metal printing, from lithography to laser printing) and the intellectual and technical adjustments required to deal with them.
As the digitisation of economic, social and cultural life gathers pace, those who embrace and internalise the literacy of the digital age will be so much better off than those who do not.
So if you are an educator, desperate to interest our iPod kid and her friends in your remedial classes; a health information officer anxious to get the message of safe sex to her and her cohorts; a training instructor eager to recruit them on a job skills programme; get familiar with their world. You won't be able to communicate with them if you don't.
© Milverton Wallace 2005.
Notes
1) See C. L. R. James, Every Cook Can Govern: A Study of Democracy in Ancient Greece. Correspondence, 2 (12) June 1956.
2) Even if they wanted to acquire literacy, they couldn’t. Only rich individuals and families could afford to buy books. Papyrus and parchment, the materials on which most books in Europe were written until the introduction of paper from China (via Korea, Japan, India, Baghdad and Damascus) in the 12th century AD, were scarce and expensive commodities. Moreover, several ingredients—the technique of papermaking, the invention of printing, the spread of religion, public education and libraries, the development of the scientific method, the Industrial Revolution etc--had to come together before mass literacy became possible, desirable and necessary for societies. And it took more than two thousand years after the first flowering of Athenian democracy for these conditions to become a reality. (Note that the fabled ancient libraries at Nineveh, Alexandria, Pergamum and Herculaneum were for the use of clerics, scholars and rulers, not the masses).
3) See Elizabeth Eisenstein, The Printing Press as an Agent of Change (Cambridge University Press, 1982) for an excellent treatment of the way the spread of printing contributed to the Protestant Reformation, the Renaissance and the scientific revolution, and, therefore, modern liberal democracies and the industrial society.
4) See Harold Innis, Empire and Communication (University of Toronto Press, 1972) and The Bias of Communication (University of Toronto Press, 1964) for a discussion of the relationship between the dominant mode and technical properties of communication and the social, political and economic organisation of society. Innis argues that fundamental changes in social structures come about when the old, dominant form of communication is challenged and replaced by new forms.
5) Dan Gillmor, former technology columnist on the San Jose Mercury News, describes this movement in the arena of news gathering and dissemination as "citizen journalism". See his book, We the Media: Grassroots Journalism by the People, for the People (O'Reilly Media, 2004).
6) Jakob Nielsen, Multimedia and Hypertext: The Internet and Beyond (AP Professional, 1995).
7) "Any sufficiently advanced technology is indistinguishable from magic". Quoted in Profiles of the Future by Arthur C. Clarke (Victor Gollancz, 1999).
The kid enters the coffee shop and is greeted excitedly by her friends. They jostle to exchange high fives, knuckle greetings and finger snaps with her.
What is the cause of their admiration? Her Rocaway jeans? Her high tan Jimmy Choo boots? Her Armani sun-glasses? Her Karl Lagerfeld jacket? Nah! It's the gleaming silver object dangling from a pair of white wires plugged into her ears.
It’s an iPod, the must-have digital gadget of today's young people. With this tiny digital audio player Apple stole Napster's thunder and replaced the CD player as the cutting-edge portable music player of choice.
But if you think this is just another device for playing pre-recorded music, think again. Within two years of the iPod's debut, developers had created software to allow anyone to produce audio content –- words and music -- for it and other portable digital players. This technology, known as podcasting, turns consumers into producers, and every wannabe DJ and talk-show host into broadcasters. It's a distribution channel that plugs directly into the hippest, hottest communication network on the planet.
In advanced industrial countries, and increasingly in less-developed regions, social life is being digitised. Cheap camera phones and videocams allow everyday activities to be recorded and stored on personal computers or online services; more and more conversations are conducted via email, IM and SMS; private thoughts, opinions and reflections on public affairs or private passions are instantly posted on weblogs. Because they are in digital form, all these different types of record -- moving images, photographs, sounds and texts -- can be stored on computers. And the Internet makes it possible for all of this to be shared with family, friends and strangers.
Welcome to the agora of the 21st century, a space where a diverse array of digital modes of communication intersect in cyberspace -- email, instant messaging, text messaging, multimedia messaging, weblogging, audioblogging, moblogging, mobcasting, podcasting.
Like it or not, this is the new cultural landscape for learning, entertainment, and communicating with each other. And it is being constructed without consultation with, or permission from, regulatory authorities or self-appointed gatekeepers.
All well and good, but what is the point of all this digital g-soup when school-leavers can't spell and do sums, or believe Winston Churchill was an insurance salesman? Relax. This isn't the end of literacy, just a groping towards a new kind of literacy which is capable of fulfilling the knowledge acquisition, informational and cultural needs of the digital age.
There is nothing immutable about the mental and manual competencies that constitute literacy. What it means to be literate has constantly changed throughout the ages as economic, social and cultural necessities impose new demands on the population. In addition, the number and classes of people which needed to possess these competencies have changed. In ancient Egypt, the ability to read and write, and therefore to manage the state, was a monopoly of the priestly caste and court officials. On the other hand, the assembly, the council and the court, the key institutions of the first democracy in Athens, championed by the literate Pericles, were made up primarily of ordinary people1 who were mostly educated in the oral, not the literate, culture of 5th century BC Greece. In both cases the vast majority of the people did not need to be literate; you didn't need reading, writing and arithmetic to be a farmer, an artisan or a soldier2. The same was true in the ancient Chinese, Persian, Babylonian and Roman empires.
The industrial age changed everything. The mass manufacturing of goods, the introduction of machine tools and the technologising of ancient craft skills required a work force which could read, write, and do sums. The ceaseless need to innovate in order to remain competitive forced workers to think critically and creatively about the industrial processes in which they were engaged. This led them to invent new goods and technologies to feed the insatiable engine of industrial capitalism. For the first time in human history, education, both literary and technical, became a job requirement.
Thus the invention of printing was a pre-requisite of the industrial age3. Mechanical reproduction of texts was superseded by mass production of books and newspapers to satisfy the growing need for widespread diffusion of the elements of literacy required for industrial production and social advancement.
Mass production of information and knowledge produced the mass media which, by the end of the 19th century, became a monolith that controlled access to information about everyday life. Other information monopolies arose during the period, most based on close and exclusive control of specialised knowledge: trade guilds, which regulated the transmission of craft skills; learned societies and associations, which regulated access to scientific information and entry into the professions. These and other institutions were important in codifying and regulating the competencies which powered industrial production and commerce. However, the mass media occupy a special place because of their central role in the organisation and control of social communications, and hence the structure of cultural, political and economic life4.
The trouble with monopolies is not only that they tend to centralise power, but they also wield this power to enforce their definitions of reality on the world. So the scientific establishment decrees that a particular body of knowledge is "science", and everything else is hocus-pocus; the medical authorities declare that a favoured corpus of practices is "medicine", and all others are quackery; and the teaching profession holds that literacy is the three "Rs", and evermore shall it be.
But these edicts are losing their force and authority as people first challenge the information/knowledge monopolies and then develop their own communication media to find things out for themselves and explore truths other than received wisdom or the official version. Rather than the established media talking to them, people are talking to one another in their own self-created space, their own time and at their own speed5.
To participate in creating this autonomous space, you must possess not only the print literacy of the industrial age but also the competencies required to engage in online conversations and be at ease with using 21st century digital products and services.
What are the competencies that should be included in any model of literacy for the digital age?
First, you should get used to interacting with screen-based devices for sending, receiving and viewing digital information because this is the way one interacts with the interface -- the collection of words, icons, buttons, menus, and other symbols -- connecting the user to the database which stores the data and the network which transmits it. To interact with your computers, mobile phones, PDAs, media players etc requires that you have the knowledge to understand these symbols and the tactile skills to manipulate them to achieve a desired purpose e.g., open a document, save a file, view a picture, play a song, send a message.
Second, you must be able to create a document, store it and retrieve it at a later date. By "document" is meant any information element or object in digital form -- words, pictures, sounds, still and moving images.
Third, you need to acquire some knowledge of the theory and practice of hypermedia6, because it is in this space that information is communicated on the screens of computers and digital media devices. A paper document allows only text and two dimensional images, while radio and television have been completely linear media. The hypermedia document, now the standard form in which information is displayed and communicated, is changing all that. By allowing interaction with non-linear, multi-dimensional documents to take place, it has radically altered the practice of reading and writing.
Hypermedia is the electronic palette on which diverse information objects -- texts, still and moving images and sound -- combine. Cross-referencing devices called hyperlinks allow us to create a non-linear mode of information production and consumption which more closely follows the patterns of thought. Hyperlinks are gateways to other "objects" -- click on one and the desired object is retrieved and played. This is the typical organisation of a Web document.
But some features of a hypermedia document are counter-intuitive (or, at least, contrary to the processes we have learned through paper-age education) and so require new literacies in order to make sense of the message.
For example, a key feature of a hypermedia composition is that all objects have equal status. They can therefore be read -- and possibly understood -- in any order, so you can enter the hypermedia space at any point, and structure your reading of the story in any manner you choose. As a result, each individual reading experience is different, as are the connections and associations made.
We have to learn how to use this space, to make sense of it. How do we critically evaluate what we see and hear? How do we assign weight and significance to the objects? Clearly, we need to learn to use a range of tools to help us evaluate the accuracy, authority, completeness, bias and timeliness of the information.
This goes against much that we know about written communication since the invention of the codex, the form of the book that succeeded the scroll as the repository of written knowledge and culture.
The codex transformed the way texts were written -- introducing page numbers, chapters, indexing -- and therefore the way authors constructed their work. It also changed the reading process: readers could now navigate from one page to another with ease, quickly find specific items, mark passages for future reference, and write while reading. The codex introduced a linear order and sequence in which texts are to be read and understood and an hierarchy of elements -– title page, imprint, contents page, preface, introduction, main body, references, bibliography, appendices. To be literate meant understanding these elements and what they signify.
The book is both receptacle and transmitter of knowledge. The change in its material form, from scroll to codex, engendered a revolution in writing and reading. People had to learn new skills in order to produce and consume information and knowledge in the new form. The same is the case with the change to a screen-based, hypertext form of information and knowledge creation and dissemination, with one big difference.
The move from an oral to a literary culture was a drastic change from social, collective learning to private, individual learning; from the primacy of the voice to the primacy of the text; from understanding of the world through public performances and storytelling to understanding through private reading and personal reflection. Now these two modes are united in cyberspace as hypermedia combines almost all aspects of oral and literary cultures. Every minute of every day the Internet buzzes with the sound of music and of voices in many tongues; with animations and videos in glorious technicolor; with words and pictures; with the colour of magic, to paraphrase Arthur C. Clarke7.
Here is the genius of cyberspace: it has created a world of endless possibilities by refusing to be constrained by what went before.
In most cosmologies, the world begins with the Word. In the pre-industrial and industrial eras, two expressions of the Word, reading and writing, have been central to people's notion of literacy. Digital technology doesn't abolish literacy; what it augurs is a radical re-definition of it. This is nothing new -- we have been here before. Think of the momentous, world-changing shift from oral to print culture; think also of the changes in writing instruments (stone, stick, pen), writing materials (bark, leaf, clay tablet, parchment, paper), text production processes (from handwriting to hot-metal printing, from lithography to laser printing) and the intellectual and technical adjustments required to deal with them.
As the digitisation of economic, social and cultural life gathers pace, those who embrace and internalise the literacy of the digital age will be so much better off than those who do not.
So if you are an educator desperate to interest our iPod kid and her friends in your remedial classes, a health information officer anxious to get the message of safe sex to her and her cohorts, or a training instructor eager to recruit them on a job skills programme, get familiar with their world. You won't be able to communicate with them if you don't.
© Milverton Wallace 2005.
Notes
1) See C. L. R. James, Every Cook Can Govern: A Study of Democracy in Ancient Greece. Correspondence, 2 (12) June 1956.
2) Even if they wanted to acquire literacy, they couldn't. Only rich individuals and families could afford to buy books. Papyrus and parchment, the materials on which most books in Europe were written until the introduction of paper from China (via Korea, Japan, India, Baghdad and Damascus) in the 12th century AD, were scarce and expensive commodities. Moreover, several ingredients -- the technique of papermaking, the invention of printing, the spread of religion, public education and libraries, the development of the scientific method, the Industrial Revolution, etc. -- had to come together before mass literacy became possible, desirable and necessary for societies. And it took more than two thousand years after the first flowering of Athenian democracy for these conditions to become a reality. (Note that the fabled ancient libraries at Nineveh, Alexandria, Pergamum and Herculaneum were for the use of clerics, scholars and rulers, not the masses.)
3) See Elizabeth Eisenstein, The Printing Press as an Agent of Change (Cambridge University Press, 1982) for an excellent treatment of the way the spread of printing contributed to the Protestant Reformation, the Renaissance and the scientific revolution, and, therefore, modern liberal democracies and the industrial society.
4) See Harold Innis, Empire and Communication (University of Toronto Press, 1972) and The Bias of Communication (University of Toronto Press, 1964) for a discussion of the relationship between the dominant mode and technical properties of communication and the social, political and economic organisation of society. Innis argues that fundamental changes in social structures come about when the old, dominant form of communication is challenged and replaced by new forms.
5) Dan Gillmor, former technology columnist on the San Jose Mercury News, describes this movement in the arena of news gathering and dissemination as "citizen journalism". See his book, We the Media: Grassroots Journalism by the People, for the People (O'Reilly Media, 2004).
6) Jakob Nielsen, Multimedia and Hypertext: The Internet and Beyond (AP Professional, 1995).
7) "Any sufficiently advanced technology is indistinguishable from magic". Quoted in Profiles of the Future by Arthur C. Clarke (Victor Gollancz, 1999).