Communications Decency Act and Section 230 (1996)

Written by Sara L. Zeigler, published on May 23, 2023, last updated on February 18, 2024


Donna Rice Hughes of the anti-pornography organization Enough is Enough meets reporters outside the U.S. Supreme Court in Washington on March 19, 1997, after the court heard arguments challenging the 1996 Communications Decency Act. The Court, taking its first look at free speech on the internet, found that the law, which made it a crime to put indecent words or pictures online where children could find them, was overly broad and infringed on speech protected by the First Amendment. (AP Photo/Susan Walsh)

Congress enacted the Communications Decency Act as part of the Telecommunications Act of 1996 in an attempt to prevent minors from gaining access to sexually explicit materials on the internet.

 

It prohibited any individual from transmitting “obscene or indecent” messages to a recipient under 18 and outlawed the knowing display of “patently offensive” materials in a manner available to those under 18.

 

To encourage internet service providers to remove harmful content, Section 230 was added to provide immunity to those that screened or removed offensive or indecent material that was posted on their sites by third parties.

 

In 1997, the U.S. Supreme Court struck down the portions of the Communications Decency Act that criminalized the transmission of obscene, indecent and patently offensive material, finding that the law was overbroad and criminalized speech protected by the First Amendment.

 

Section 230 remained in force and has been the focus of numerous challenges and debates as people have been harmed by content posted on websites and social media. Since 2020, members of Congress have introduced several bills to repeal or rewrite Section 230. In February 2023, the U.S. Supreme Court heard a case on the extent of immunity for social media companies whose algorithms recommend content.

 

Law prohibited transmitting obscenity to minors

The purpose of the Telecommunications Act of 1996 was to update the Communications Act of 1934 to encourage new technologies and reduce regulation of the relevant industries to promote competition among service providers.

 

The Communications Decency Act was added as an amendment out of concern that pornography and other sexual and indecent material on the internet was reaching minors. It included a defense for providers that made good-faith efforts, such as verifying the age of site visitors, to keep such material from minors.

 

The potential penalties for violating the law included fines, imprisonment or both.

 

Congress uses Miller test for Communications Decency Act

Congress tried to inoculate the Communications Decency Act against constitutional challenge under the First Amendment by using the Miller test in defining prohibited material.

 

The Miller test was developed by the Supreme Court in Miller v. California (1973) to define obscene speech, which is not protected by the First Amendment.

 

The three prongs of the Miller test are:

 

  • whether the average person applying contemporary community standards would find the work, taken as a whole, appeals to the prurient interest;
  • whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and
  • whether the work, taken as a whole, lacks serious literary, artistic, political or scientific value.

The Communications Decency Act borrowed this language to bar the use of computer services to display to minors “any comment, request, suggestion, proposal, image or other communication that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs.”

 

Immediately after President Bill Clinton signed the bill into law, the American Civil Liberties Union and numerous other organizations challenged its constitutionality. The American Library Association filed a separate suit. Both lawsuits targeted the provisions criminalizing “indecent” and “patently offensive” online communications.

 

Supreme Court: Law restricting indecent material on internet violates First Amendment

The U.S. Supreme Court agreed and, in Reno v. American Civil Liberties Union (1997), ruled the law was unconstitutionally overbroad because it suppressed a significant amount of protected adult speech.

 

Justice John Paul Stevens acknowledged the legitimacy of the government’s interest in protecting children from harm, but he noted that the level of suppression was unacceptable. The use of the terms “indecent” and “patently offensive,” far from narrowing the scope of the act, broadened its provisions to cover any materials concerning sexual or excretory functions, regardless of whether such materials met the other prongs of the Miller test.

 

The Court worried that health care materials, explicit discussions of techniques to prevent the transmission of AIDS and other useful protected speech could be affected.

 

After the Court’s decision, Congress drafted another online pornography law called the Child Online Protection Act (COPA) of 1998 that barred communication of materials online that were deemed harmful to individuals under 17 years of age, using the Miller test to describe such materials.

 

In 2004, the U.S. Supreme Court in Ashcroft v. American Civil Liberties Union allowed an injunction against COPA to stand because the government had not shown that less restrictive means of protecting children, such as blocking or filtering software, would be less effective than the law.

 

Section 230 comes under increased scrutiny

The origin of Section 230 as part of the Communications Decency Act can be traced to two New York court decisions in the early 1990s. The rulings alarmed lawmakers who wanted online service providers to remove as much indecent material from the internet as possible so it would be safe for children.

 

In the first decision, Cubby, Inc. v. CompuServe, a New York federal court in 1991 found that CompuServe could not be held liable for defamatory comments posted in one of the company’s special-interest forums because it did not review any of the content before it was posted on its boards. It simply hosted the content.

 

A few years later, in 1995, a New York state court found in Stratton Oakmont, Inc. v. Prodigy Services Co. that because Prodigy moderated its online message boards and deleted some messages for “offensiveness and ‘bad taste,’” it could be held liable for content published on its boards.

 

Lawmakers worried that these opinions would discourage internet service providers from developing new moderation tools and methods, and that providers trying to moderate content would either stop doing so or severely restrict what they allowed on their sites to avoid liability.

 

Over the years, several lawsuits have been filed against internet service providers to try to overcome the immunity provided by Section 230. For example, lawsuits targeted companies including a “revenge porn” operator whose business was devoted to posting people’s nude images without consent, a gossip site that urged users to send in “dirt,” a message board that knew about users’ illegal activity but refused to collect information to hold them accountable, and a purveyor of sex-trade advertisements whose policies were designed to prevent the detection of sex trafficking.

 

However, courts over the years have consistently upheld the immunity granted to internet platforms.

 

Supreme Court declines to rule on Section 230 immunity in social media case

In 2023, Section 230’s immunity clause reached the U.S. Supreme Court.

 

In that case, Gonzalez v. Google (2023), the family of a 23-year-old student killed in an ISIS attack in Paris claimed that Section 230 does not immunize Google, which owns YouTube, from being held civilly liable for aiding and abetting terrorists under the Anti-Terrorism Act.

 

The plaintiffs argued that YouTube’s algorithms recommending videos from the terrorist group ISIS constitute content created by YouTube and thus Google is not immune from liability under Section 230.

 

Though it took the case to review the extent of Section 230’s liability shield, the Court decided, based on its ruling the same day in Twitter v. Taamneh (2023), that the plaintiffs would have little, if any, relief under the Anti-Terrorism Act. In Twitter, it held that social media companies could not be shown to “aid and abet” terrorists simply by hosting their content, failing to remove it or recommending it to others. Thus, the Court said it “is sufficient to acknowledge that much (if not all) of plaintiffs’ complaint seems to fail.”

 

Meanwhile, politicians from both political parties, including President Joe Biden and Republican Senator Lindsey Graham, have called for a rewrite of Section 230. Lawmakers have filed several bills since 2020 to repeal or rewrite the law.

 

This article was most recently updated in May 2023 by Deborah Fisher, director of the John Seigenthaler Chair of Excellence in First Amendment Studies. It was originally published in 2009. Sara L. Zeigler is the Dean of the College of Letters, Arts, and Social Sciences at Eastern Kentucky University.

 
