(Historical: 3/17/2002)

http://www.aclu.org/privacy/speech/15145pub20020317.html

Fahrenheit 451.2: Is Cyberspace Burning? (3/17/2002)

 

Executive Summary

 

In the landmark case Reno v. ACLU, the Supreme Court overturned the Communications Decency Act, declaring that the Internet deserves the same high level of free speech protection afforded to books and other printed matter.

But today, all that we have achieved may now be lost, if not in the bright flames of censorship then in the dense smoke of the many ratings and blocking schemes promoted by some of the very people who fought for freedom.

The ACLU and others in the cyber-liberties community were genuinely alarmed by the tenor of a recent White House summit meeting on Internet censorship at which industry leaders pledged to create a variety of schemes to regulate and block controversial online speech.

But it was not any one proposal or announcement that caused our alarm; rather, it was the failure to examine the longer-term implications for the Internet of rating and blocking schemes.

The White House meeting was clearly the first step away from the principle that protection of the electronic word is analogous to protection of the printed word. Despite the Supreme Court's strong rejection of a broadcast analogy for the Internet, government and industry leaders alike are now inching toward the dangerous and incorrect position that the Internet is like television, and should be rated and censored accordingly.

Is Cyberspace burning? Not yet, perhaps. But where there's smoke, there's fire.

 

" Any content-based regulation of the Internet, no matter how

benign the purpose, could burn the global village to roast the pig. "

 

­ U.S. Supreme Court majority decision, Reno v. ACLU (June 26,

1997)

 

Introduction

 

In his chilling (and prescient) novel about censorship, Fahrenheit 451, author Ray Bradbury describes a futuristic society where books are outlawed. "Fahrenheit 451" is, of course, the temperature at which books burn.

In Bradbury's novel - and in the physical world - people censor the printed word by burning books. But in the virtual world, one can just as easily censor controversial speech by banishing it to the farthest corners of cyberspace using rating and blocking programs. Today, will Fahrenheit, version 451.2 - a new kind of virtual censorship - be the temperature at which cyberspace goes up in smoke?

The first flames of Internet censorship appeared two years ago, with the introduction of the Federal Communications Decency Act (CDA), outlawing "indecent" online speech. But in the landmark case Reno v. ACLU, the Supreme Court overturned the CDA, declaring that the Internet is entitled to the highest level of free speech protection. In other words, the Court said that online speech deserved the protection afforded to books and other printed matter.

Today, all that we have achieved may now be lost, if not in the bright flames of censorship then in the dense smoke of the many ratings and blocking schemes promoted by some of the very people who fought for freedom. And in the end, we may find that the censors have indeed succeeded in "burning down the house to roast the pig."

Is Cyberspace Burning?

 

The ashes of the CDA were barely smoldering when the White House called a summit meeting to encourage Internet users to self-rate their speech and to urge industry leaders to develop and deploy the tools for blocking "inappropriate" speech. The meeting was "voluntary," of course: the White House claimed it wasn't holding anyone's feet to the fire.

The ACLU and others in the cyber-liberties community were genuinely alarmed by the tenor of the White House summit and the unabashed enthusiasm for technological fixes that will make it easier to block or render invisible controversial speech. (Note: see appendix for detailed explanations of the various technologies.)

Industry leaders responded to the White House call with a barrage of announcements:

 

* Netscape announced plans to join Microsoft - together the two giants have 90% or more of the web browser market - in adopting PICS (Platform for Internet Content Selection), the rating standard that establishes a consistent way to rate and block online content;

* IBM announced it was making a $100,000 grant to RSAC (Recreational Software Advisory Council) to encourage the use of its RSACi rating system. Microsoft Explorer already employs the RSACi ratings system, Compuserve encourages its use, and it is fast becoming the de facto industry standard rating system;

* Four of the major search engines - the services which allow users to conduct searches of the Internet for relevant sites - announced a plan to cooperate in the promotion of "self-regulation" of the Internet. The president of one, Lycos, was quoted in a news account as having "thrown down the gauntlet" to the other three, challenging them to agree to exclude unrated sites from search results;

* Following announcement of proposed legislation by Sen. Patty Murray (D-Wash.), which would impose civil and ultimately criminal penalties on those who mis-rate a site, the makers of the blocking program Safe Surf proposed similar legislation, the "Online Cooperative Publishing Act."

 

But it was not any one proposal or announcement that caused our alarm; rather, it was the failure to examine the longer-term implications for the Internet of rating and blocking schemes.

What may be the result? The Internet will become bland and homogenized. The major commercial sites will still be readily available; they will have the resources and inclination to self-rate, and third-party rating services will be inclined to give them acceptable ratings. People who disseminate quirky and idiosyncratic speech, create individual home pages, or post to controversial news groups, will be among the first Internet users blocked by filters and made invisible by the search engines. Controversial speech will still exist, but will only be visible to those with the tools and know-how to penetrate the dense smokescreen of industry "self-regulation."

 

As bad as this very real prospect is, it can get worse. Faced with the reality that, although harder to reach, sex, hate speech and other controversial matter is still available on the Internet, how long will it be before governments begin to make use of an Internet already configured to accommodate massive censorship?

If you look at these various proposals in a larger context, a very plausible scenario emerges. It is a scenario which in some respects has already been set in motion:

 

* First, the use of PICS becomes universal, providing a uniform method for content rating.
* Next, one or two rating systems dominate the market and become the de facto standard for the Internet.
* PICS and the dominant rating(s) system are built into Internet software as an automatic default.
* Unrated speech on the Internet is effectively blocked by these defaults.
* Search engines refuse to report on the existence of unrated or "unacceptably" rated sites.
* Governments frustrated by "indecency" still on the Internet make self-rating mandatory and mis-rating a crime.

 

The scenario is, for now, theoretical - but inevitable. It is clear that any scheme that allows access to unrated speech will fall afoul of the government-coerced push for a "family friendly" Internet. We are moving inexorably toward a system that blocks speech simply because it is unrated and makes criminals of those who mis-rate.
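To make the mechanics of that last step concrete, here is a minimal sketch of how a default-deny filter behaves once blocking unrated material is the automatic setting. The category names, thresholds, and label format below are hypothetical illustrations, not drawn from any actual product; the point is only that such a filter never has to read a page at all before silencing it.

```python
# Minimal sketch (not any vendor's actual code) of "block unrated by default."
# Hypothetical RSACi-style thresholds chosen by a parent or administrator:
# each category is scored 0 (mildest) to 4 (most severe).
THRESHOLDS = {"violence": 0, "nudity": 0, "sex": 0, "language": 2}

def is_blocked(page_labels):
    """page_labels: dict of category -> rating, or None if the page carries no label."""
    if page_labels is None:
        return True  # unrated speech is blocked purely for being unrated
    return any(page_labels.get(cat, 0) > limit for cat, limit in THRESHOLDS.items())

# A personal home page whose author never heard of rating schemes:
print(is_blocked(None))  # True - invisible by default
# A commercial site with staff to self-rate every page as innocuous:
print(is_blocked({"violence": 0, "nudity": 0, "sex": 0, "language": 0}))  # False
```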

 

The White House meeting was clearly the first step in that direction and away from the principle that protection of the electronic word is analogous to protection of the printed word. Despite the Supreme Court's strong rejection of a broadcast analogy for the Internet, government and industry leaders alike are now inching toward the dangerous and incorrect position that the Internet is like television, and should be rated and censored accordingly.

Is Cyberspace burning? Not yet, perhaps. But where there's smoke, there's fire.

Free Speech Online: A Victory Under Siege

 

On June 26, 1997, the Supreme Court held in Reno v. ACLU that the Communications Decency Act, which would have made it a crime to communicate anything "indecent" on the Internet, violated the First Amendment. It was the nature of the Internet itself, and the quality of speech on the Internet, that led the Court to declare that the Internet is entitled to the same broad free speech protections given to books, magazines, and casual conversation.

The ACLU argued, and the Supreme Court agreed, that the CDA was unconstitutional because, although aimed at protecting minors, it effectively banned speech among adults. Similarly, many of the rating and blocking proposals, though designed to limit minors' access, will inevitably restrict the ability of adults to communicate on the Internet. In addition, such proposals will restrict the rights of older minors to gain access to material that clearly has value for them.

Rethinking the Rush to Rate

 

This paper examines the free speech implications of the various proposals for Internet blocking and rating. Individually, each of the proposals poses some threat to open and robust speech on the Internet; some pose a considerably greater threat than others.

Even more ominous is the fact that the various schemes for rating and blocking, taken together, could create a black cloud of private "voluntary" censorship that is every bit as threatening as the CDA itself to what the Supreme Court called "the most participatory form of mass speech yet developed."

We call on industry leaders, Internet users, policy makers and parents groups to engage in a genuine debate about the free speech ramifications of the rating and blocking schemes being proposed.

To open the door to a meaningful discussion, we offer the following recommendations and principles:

Recommendations and Principles

 

* Internet users know best. The primary responsibility for determining what speech to access should remain with the individual Internet user; parents should take primary responsibility for determining what their children should access.
* Default setting on free speech. Industry should not develop products that require speakers to rate their own speech or be blocked by default.
* Buyers beware. The producers of user-based software programs should make their lists of blocked speech available to consumers. The industry should develop products that provide maximum user control.
* No government coercion or censorship. The First Amendment prevents the government from imposing, or from coercing industry into imposing, a mandatory Internet ratings scheme.
* Libraries are free speech zones. The First Amendment prevents the government, including public libraries, from mandating the use of user-based blocking software.

 

Six Reasons Why Self-Rating Schemes Are Wrong for the Internet

 

To begin with, the notion that citizens should "self-rate" their speech is contrary to the entire history of free speech in America. A proposal that we rate our online speech is no less offensive to the First Amendment than a proposal that publishers of books and magazines rate each and every article or story, or a proposal that everyone engaged in a street corner conversation rate his or her comments. But that is exactly what will happen to books, magazines, and any kind of speech that appears online under a self-rating scheme.

In order to illustrate the very practical consequences of these schemes, consider the following six reasons, and their accompanying examples, illustrating why the ACLU is against self-rating:

 

Reason #1: Self-Rating Schemes Will Cause Controversial Speech To Be Censored.

 

Kiyoshi Kuromiya, founder and sole operator of Critical Path AIDS Project, has a web site that includes safer sex information written in street language with explicit diagrams, in order to reach the widest possible audience. Kuromiya doesn't want to apply the rating "crude" or "explicit" to his speech, but if he doesn't, his site will be blocked as an unrated site. If he does rate, his speech will be lumped in with "pornography" and blocked from view. Under either choice, Kuromiya has been effectively blocked from reaching a large portion of his intended audience - teenage Internet users - as well as adults.

As this example shows, the consequences of rating are far from neutral. The ratings themselves are all pejorative by definition, and they result in certain speech being blocked.

 

The White House has compared Internet ratings to "food labels" - but that analogy is simply wrong. Food labels provide objective, scientifically verifiable information to help the consumer make choices about what to buy, e.g. the percentage of fat in a food product like milk. Internet ratings are subjective value judgments that result in certain speech being blocked to many viewers. Further, food labels are placed on products that are readily available to consumers - unlike Internet labels, which would place certain kinds of speech out of reach of Internet users.

 

What is most critical to this issue is that speech like Kuromiya's is entitled to the highest degree of Constitutional protection. This is why ratings requirements have never been imposed on those who speak via the printed word. Kuromiya could distribute the same material in print form on any street corner or in any bookstore without worrying about having to rate it. In fact, a number of Supreme Court cases have established that the First Amendment does not allow government to compel speakers to say something they don't want to say - and that includes pejorative ratings. There is simply no justification for treating the Internet any differently.

Reason #2: Self-Rating Is Burdensome, Unwieldy, and Costly.

 

Art on the Net is a large, non-profit web site that hosts online "studios" where hundreds of artists display their work. The vast majority of the artwork has no sexual content, although there's an occasional Rubenesque painting. The ratings systems don't make sense when applied to art. Yet Art on the Net would still have to review and apply a rating to the more than 26,000 pages on its site, which would require time and staff that they just don't have. Or, they would have to require the artists themselves to self-rate, an option they find objectionable. If they decline to rate, they will be blocked as an unrated site even though most Internet users would hardly object to the art reaching minors, let alone adults.

 

As the Supreme Court noted in Reno v. ACLU, one of the virtues of the Internet is that it provides "relatively unlimited, low-cost capacity for communication of all kinds." In striking down the CDA, the Court held that imposing age-verification costs on Internet speakers would be "prohibitively expensive for noncommercial - as well as some commercial - speakers." Similarly, the burdensome requirement of self-rating thousands of pages of information would effectively shut most noncommercial speakers out of the Internet marketplace.

 

The technology of embedding the rating is also far from trivial. In a winning ACLU case that challenged a New York state online censorship statute, ALA v. Pataki, one long-time Internet expert testified that he tried to embed an RSACi label in his online newsletter site but finally gave up after several hours.
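To give a sense of what "embedding the rating" meant in practice, the sketch below generates the kind of PICS meta tag a page author was expected to add to each HTML page. The rating-service URL, option keywords, and category letters are recalled from the historical PICS-1.1/RSACi form and should be read as an approximate illustration, not an authoritative specification.

```python
# Illustrative sketch only: the general shape of a PICS-1.1 label using the
# RSACi vocabulary (n=nudity, s=sex, v=violence, l=language, each scored 0-4).
# Service URL and syntax details are approximate, from the historical format.

def rsaci_meta_tag(nudity=0, sex=0, violence=0, language=0):
    label = (
        '(PICS-1.1 "http://www.rsac.org/ratingsv01.html" '
        f'l r (n {nudity} s {sex} v {violence} l {language}))'
    )
    return f'<META http-equiv="PICS-Label" content=\'{label}\'>'

# A speaker would have to decide, page by page, how to score their own words:
print(rsaci_meta_tag(language=2))
```

Even this simplest case assumes the author already knows what the numbers should be; the burden described in ALA v. Pataki was doing this correctly for every page of a site.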

 

In addition, the ratings systems are simply unequipped to deal with the diversity of content now available on the Internet. There is perhaps nothing as subjective as a viewer's reaction to art. As history has shown again and again, one woman's masterpiece is another woman's pornography. How can ratings such as "explicit" or "crude" be used to categorize art? Even ratings systems that try to take artistic value into account will be inherently subjective, especially when applied by artists themselves, who will naturally consider their own work to have merit.

 

The variety of news-related sites on the Web will be equally difficult to rate. Should explicit war footage be labeled "violent" and blocked from view to teenagers? If a long news article has one curse word, is the curse word rated individually, or is the entire story rated and then blocked?

 

Even those who propose that "legitimate" news organizations should not be required to rate their sites stumble over the question of who will decide what is legitimate news.

Reason #3: Conversation Can't Be Rated.

 

You are in a chat room or a discussion group - one of the thousands of conversational areas of the Net. A victim of sexual abuse has posted a plea for help, and you want to respond. You've heard about a variety of ratings systems, but you've never used one. You read the RSACi web page, but you can't figure out how to rate the discussion of sex and violence in your response. Aware of the penalties for mis-labeling, you decide not to send your message after all.

The burdens of self-rating really hit home when applied to the vibrant, conversational areas of the Internet. Most Internet users don't run web pages, but millions of people around the world send messages, short and long, every day, to chat rooms, news groups and mailing lists. A rating requirement for these areas of the Internet would be analogous to requiring all of us to rate our telephone or streetcorner or dinner party or water cooler conversations.

 

The only other way to rate these areas of cyberspace would be to rate entire chatrooms or news groups rather than individual messages. But most discussion groups aren't controlled by a specific person, so who would be responsible for rating them? In addition, discussion groups that contain some objectionable material would likely also have a wide variety of speech totally appropriate and valuable for minors - but the entire forum would be blocked from view for everyone.

Reason #4: Self-Rating Will Create "Fortress America" on the Internet.

 

You are a native of Papua New Guinea, and as an anthropologist you have published several papers about your native culture. You create a web site and post electronic versions of your papers, in order to share them with colleagues and other interested people around the world. You haven't heard about the move in America to rate Internet content. You don't know it, but since your site is unrated none of your colleagues in America will be able to access it.

 

People from all corners of the globe - people who might otherwise never connect because of their vast geographical differences - can now communicate on the Internet both easily and cheaply. One of the most dangerous aspects of ratings systems is their potential to build borders around American- and foreign-created speech. It is important to remember that today, nearly half of all Internet speech originates from outside the United States.

 

Even if powerful American industry leaders coerced other countries into adopting American ratings systems, how would these ratings make any sense to a New Guinean? Imagine that one of the anthropology papers explicitly describes a ritual in which teenage boys engage in self-mutilation as part of a rite of passage in achieving manhood. Would you look at it through the eyes of an American and rate it "torture," or would you rate it "appropriate for minors" for the New Guinea audience?

Reason #5: Self-Ratings Will Only Encourage, Not Prevent, Government Regulation.

 

The webmaster for Betty's Smut Shack, a web site that sells sexually explicit photos, learns that many people won't get to his site if he either rates his site "sexually explicit" or fails to rate at all. He rates his entire web site "okay for minors." A powerful Congressman from the Midwest learns that the site is now available to minors. He is outraged, and quickly introduces a bill imposing criminal penalties for mis-rated sites.

 

Without a penalty system for mis-rating, the entire concept of a self-ratings system breaks down. The Supreme Court that decided Reno v. ACLU would probably agree that the statute theorized above would violate the First Amendment, but as we saw with the CDA, that won't necessarily prevent lawmakers from passing it.

 

In fact, as noted earlier, a senator from Washington state - home of industry giant Microsoft, among others - has already proposed a law that creates criminal penalties for mis-rating. Not to be outdone, the filtering software company Safe Surf has proposed the introduction of a virtually identical federal law, including a provision that allows parents to sue speakers for damages if they "negligently" mis-rate their speech.

 

The example above shows that, despite all good intentions, the application of ratings systems is likely to lead to heavy-handed government censorship. Moreover, the targets of that censorship are likely to be just the sort of relatively powerless and controversial speakers, like the groups Critical Path AIDS Project, Stop Prisoner Rape, Planned Parenthood, Human Rights Watch, and the various gay and lesbian organizations we represented in Reno v. ACLU.

Reason #6: Self-Ratings Schemes Will Turn the Internet into a Homogenized Medium Dominated by Commercial Speakers.

 

Huge entertainment conglomerates, such as the Disney Corporation or Time Warner, consult their platoons of lawyers who advise that their web sites must be rated to reach the widest possible audience. They then hire and train staff to rate all of their web pages. Everybody in the world will have access to their speech.

 

There is no question that there may be some speakers on the Internet for whom the ratings systems will impose only minimal burdens: the large, powerful corporate speakers with the money to hire legal counsel and staff to apply the necessary ratings. The commercial side of the Net continues to grow, but so far the democratic nature of the Internet has put commercial speakers on equal footing with all of the other non-commercial and individual speakers.

 

Today, it is just as easy to find the Critical Path AIDS web site as it is to find the Disney site. Both speakers are able to reach a worldwide audience. But mandatory Internet self-rating could easily turn the most participatory communications medium the world has yet seen into a bland, homogenized medium dominated by powerful American corporate speakers.

 

Is Third-Party Rating the Answer?

 

Third-party ratings systems, designed to work in tandem with PICS labeling, have been held out by some as the answer to the free speech problems posed by self-rating schemes. On the plus side, some argue, ratings by an independent third party could minimize the burden of self-rating on speakers and could reduce the inaccuracy and mis-rating problems of self-rating. In fact, one of the touted strengths of the original PICS proposal was that a variety of third-party ratings systems would develop and users could pick and choose from the system that best fit their values. But third-party ratings systems still pose serious free speech concerns.
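Mechanically, the third-party model works roughly as sketched below: instead of trusting a label the page carries, the user's software asks a separate rating service (a "label bureau") what it thinks of a URL before displaying it. The bureau address, query format, and rating vocabulary here are invented for illustration; the sketch simply makes visible the two problems discussed next, that unrated and negatively rated sites look alike to the filter, and that the speaker never sees what the bureau says about them.

```python
# Rough sketch of the third-party "label bureau" model PICS was designed to
# support. Names and formats are hypothetical, not from any real service.
import urllib.parse

TRUSTED_BUREAU = "http://labels.example-ratings.org/query"  # hypothetical bureau

def bureau_query_url(page_url):
    """Build the request a browser might send to its configured label bureau."""
    return TRUSTED_BUREAU + "?" + urllib.parse.urlencode({"u": page_url})

def allow(page_url, fetch_label):
    """fetch_label(url) -> rating dict from the bureau, or None if the bureau
    has never rated the page. Note that 'never rated' and 'rated objectionable'
    end up treated the same way, and the page's author is told about neither."""
    label = fetch_label(bureau_query_url(page_url))
    if label is None:
        return False  # unrated by the bureau, so blocked by default
    return label.get("sex", 0) == 0 and label.get("violence", 0) == 0

print(bureau_query_url("http://example.org/some-personal-page"))
```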

 

First, a multiplicity of ratings systems has not yet emerged on the market, probably due to the difficulty of any one company or organization trying to rate over a million web sites, with hundreds of new sites - not to mention discussion groups and chat rooms - springing up daily.

 

Second, under third-party rating systems, unrated sites still may be blocked.

 

When choosing which sites to rate first, it is likely that third-party raters will rate the most popular web sites first, marginalizing individual and non-commercial sites. And like the self-rating systems, third-party ratings will apply subjective and value-laden ratings that could result in valuable material being blocked to adults and older minors. In addition, available third-party rating systems have no notification procedure, so speakers have no way of knowing whether their speech has received a negative rating.

 

The fewer the third-party ratings products available, the greater the potential for arbitrary censorship. Powerful industry forces may lead one product to dominate the marketplace. If, for example, virtually all households use Microsoft Internet Explorer and Netscape, and the browsers, in turn, use RSACi as their system, RSACi could become the default censorship system for the Internet. In addition, federal and state governments could pass laws mandating use of a particular ratings system in schools or libraries. Either of these scenarios could devastate the diversity of the Internet marketplace.

 

Pro-censorship groups have argued that a third-party rating system for the Internet is no different from the voluntary Motion Picture Association of America ratings for movies that we've all lived with for years. But there is an important distinction: only a finite number of movies are produced in a given year. In contrast, the amount of content on the Internet is infinite. Movies are a static, definable product created by a small number of producers; speech on the Internet is seamless, interactive, and conversational. MPAA ratings also don't come with automatic blocking mechanisms.

The Problems With User-Based Blocking Software in the Home

 

With the explosive growth of the Internet, and in the wake of the recent censorship battles, the marketplace has responded with a wide variety of user-based blocking programs. Each company touts the speed and efficiency of its staff members in blocking speech that they have determined is inappropriate for minors. The programs also often block speech based on keywords. (This can result in sites such as www.middlesex.gov or www.SuperBowlXXX.com being blocked because they contain the keywords "sex" and "XXX.")
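The keyword problem is easy to see in miniature. The sketch below uses an invented blocklist and a hypothetical filter function, not any vendor's actual list or algorithm, to show why naive substring matching catches exactly the kind of innocent addresses mentioned above.

```python
# Toy illustration of keyword-based blocking: a case-insensitive substring
# match over the URL. The blocklist is an invented example.
BLOCKED_KEYWORDS = ["sex", "xxx"]

def keyword_blocked(url):
    lowered = url.lower()
    return any(word in lowered for word in BLOCKED_KEYWORDS)

# The false positives described above:
print(keyword_blocked("http://www.middlesex.gov/"))     # True - "middleSEX"
print(keyword_blocked("http://www.SuperBowlXXX.com/"))  # True - "XXX"
# While a page that avoids the literal strings sails through:
print(keyword_blocked("http://example.org/adult-material"))  # False (hypothetical)
```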

 

In Reno v. ACLU, the ACLU successfully argued that the CDA violated the First Amendment because it was not the least restrictive means of addressing the government's asserted interest in protecting children from inappropriate material. In supporting this argument, we suggested that a less restrictive alternative was the availability of user-based blocking programs, e.g. Net Nanny, that parents could use in the home if they wished to limit their child's Internet access.

 

While user-based blocking programs
