Children's Internet Protection Act (CIPA) Ruling by the US District Court for the Eastern District of Pennsylvania

suit against the United States and others alleging that CIPA is facially unconstitutional because: (1) it induces public libraries to violate their patrons' First Amendment rights contrary to the requirements of South Dakota v. Dole, 483 U.S. 203 (1987); and (2) it requires libraries to relinquish their First Amendment rights as a condition on the receipt of federal funds and is therefore impermissible under the doctrine of unconstitutional conditions. In arguing that CIPA will induce public libraries to violate the First Amendment, the plaintiffs contend that given the limits of the filtering technology, CIPA's conditions effectively require libraries to impose content-based restrictions on their patrons' access to constitutionally protected speech. According to the plaintiffs, these content-based restrictions are subject to strict scrutiny under public forum doctrine, see Rosenberger v. Rector & Visitors of Univ. of Va., 515 U.S. 819, 837 (1995), and are therefore permissible only if they are narrowly tailored to further a compelling state interest and no less restrictive alternatives would further that interest, see Reno v. ACLU, 521 U.S. 844, 874 (1997). The government responds that CIPA will not induce public libraries to violate the First Amendment, since it is possible for at least some public libraries to constitutionally comply with CIPA's conditions. Even if some libraries' use of filters might violate the First Amendment, the government submits that CIPA can be facially invalidated only if it is impossible for any public library to comply with its conditions without violating the First Amendment.
Pursuant to CIPA, a three-judge Court was convened to try the issues. Pub. L. No. 106-554. Following an intensive period of discovery on an expedited schedule to allow public libraries to know whether they need to certify compliance with CIPA by July 1, 2002, to receive subsidies for the upcoming year, the Court conducted an eight-day trial at which we heard 20 witnesses, and received numerous depositions, stipulations and documents. The principal focus of the trial was on the capacity of currently available filtering software. The plaintiffs adduced substantial evidence not only that filtering programs bar access to a substantial amount of speech on the Internet that is clearly constitutionally protected for adults and minors, but also that these programs are intrinsically unable to block only illegal Internet content while simultaneously allowing access to all protected speech. As our extensive findings of fact reflect, the plaintiffs demonstrated that thousands of Web pages containing protected speech are wrongly blocked by the four leading filtering programs, and these pages represent only a fraction of Web pages wrongly blocked by the programs. The plaintiffs' evidence explained that the problems faced by the manufacturers and vendors of filtering software are legion. The Web is extremely dynamic, with an estimated 1.5 million new pages added every day and the contents of existing Web pages changing very rapidly. The category lists maintained by the blocking programs are considered to be proprietary information, and hence are unavailable to customers or the general public for review, so that public libraries that select categories when implementing filtering software do not really know what they are blocking.
There are many reasons why filtering software suffers from extensive over- and underblocking, which we will explain below in great detail. They center on the limitations on filtering companies' ability to: (1) accurately collect Web pages that potentially fall into a blocked category (e.g., pornography); (2) review and categorize Web pages that they have collected; and (3) engage in regular re-review of Web pages that they have previously reviewed. These failures spring from constraints on the technology of automated classification systems, and the limitations inherent in human review, including error, misjudgment, and scarce resources, which we describe in detail infra at 58-74. One failure of critical importance is that the automated systems that filtering companies use to collect Web pages for classification are able to search only text, not images. This is crippling to filtering companies' ability to collect pages containing "visual depictions" that are obscene, child pornography, or harmful to minors, as CIPA requires. As will appear, we find that it is currently impossible, given the Internet's size, rate of growth, rate of change, and architecture, and given the state of the art of automated classification systems, to develop a filter that neither underblocks nor overblocks a substantial amount of speech.
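The failure modes the Court describes can be made concrete with a minimal sketch of a purely text-based classifier of the kind the opinion attributes to filtering companies' automated collection systems. Everything below is invented for illustration: no vendor's category list or architecture is public, and the keyword set and sample pages are hypothetical.

```python
# Hypothetical sketch of a text-only page classifier, illustrating the
# over- and underblocking failure modes the Court describes. Filtering
# vendors' real category lists are proprietary; this keyword list and
# these pages are invented for illustration only.

BLOCKED_KEYWORDS = {"porn", "xxx", "nude"}  # invented stand-in category list

def classify(page_text: str) -> bool:
    """Return True if the page should be blocked.

    Because the collection system can inspect only text, a page whose
    prohibited content is carried entirely in images yields no matching
    keywords (underblocking), while an innocuous page that merely
    mentions a keyword is blocked (overblocking).
    """
    words = {w.strip(".,;:!?").lower() for w in page_text.split()}
    return bool(words & BLOCKED_KEYWORDS)

# Underblocking: explicit images, but no trigger words in the text.
image_only_page = '<img src="photo1.jpg"> <img src="photo2.jpg">'
assert classify(image_only_page) is False   # wrongly allowed

# Overblocking: protected speech that merely discusses a flagged topic.
health_page = "Breast cancer screening guidance; nude mice are a lab model."
assert classify(health_page) is True        # wrongly blocked
```

The two assertions capture the asymmetry the Court emphasizes: a text-only collector cannot see "visual depictions" at all, while any text-matching rule sweeps in protected medical, artistic, and educational speech.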
The government, while acknowledging that the filtering software is imperfect, maintains that it is nonetheless quite effective, and that it successfully blocks the vast majority of the Web pages that meet filtering companies' category definitions (e.g., pornography). The government contends that no more is required. In its view, so long as the filtering software selected by the libraries screens out the bulk of the Web pages proscribed by CIPA, the libraries have made a reasonable choice which suffices, under the applicable legal principles, to pass constitutional muster.
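The parties' dispute is, at bottom, about rates: what fraction of proscribed pages a filter misses (underblocking) and what fraction of protected pages it wrongly blocks (overblocking). The following is a minimal sketch of that calculation on an invented hand-labeled sample; the function and data are hypothetical and are not the methodology of any trial expert.

```python
# Hypothetical sketch of the rate calculation underlying the parties'
# dispute: given a hand-labeled sample of pages, compare a filter's
# decisions against the labels. All data here is invented.

def blocking_rates(sample):
    """sample: iterable of (is_proscribed, was_blocked) pairs.

    Returns (underblock_rate, overblock_rate):
      underblock_rate = share of proscribed pages the filter let through
      overblock_rate  = share of protected pages the filter blocked
    """
    proscribed = [b for p, b in sample if p]
    protected = [b for p, b in sample if not p]
    under = sum(1 for b in proscribed if not b) / len(proscribed)
    over = sum(1 for b in protected if b) / len(protected)
    return under, over

# Invented sample: (page meets the category definition, filter blocked it)
sample = [(True, True), (True, False), (False, False), (False, True),
          (False, False), (True, True), (False, False), (False, True)]
under, over = blocking_rates(sample)
print(f"underblocked {under:.0%} of proscribed pages")   # 33%
print(f"overblocked  {over:.0%} of protected pages")     # 40%
```

On this framing, the government's position is that a low underblock rate suffices, while the plaintiffs' position is that any substantial overblock rate restricts protected speech and so triggers strict scrutiny.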