On 11/18/2013 1:35 PM, Richard Pieri wrote:
> As a more serious take on the topic, hosting providers are -- or are supposed to be -- common carriers. They can't scan users' content. If they did that then they'd cease being common carriers and they'd lose their safe harbor and Good Samaritan protections.

I think you misunderstand the current state of the law. What you say was true as of a 1995 court decision, but that was superseded by Section 230 of the 1996 Communications Decency Act, the one part of that act that hasn't been struck down. Wikipedia's entry on Section 230 of the CDA <http://en.wikipedia.org/wiki/Section_230_of_the_Communications_Decency_Act> explains it as follows:

   Unlike the more controversial anti-indecency provisions which were
   later ruled unconstitutional, this portion of the Act remains in
   force, and enhances free speech by making it unnecessary for ISPs
   and other service providers to unduly restrict customers' actions
   for fear of being found legally liable for customers' conduct. The
   act was passed in part in reaction to the 1995 decision in Stratton
   Oakmont, Inc. v. Prodigy Services Co., which suggested that service
   providers who assumed an editorial role with regard to customer
   content, thus became publishers, and legally responsible for libel
   and other torts committed by customers. This act was passed to
   specifically enhance service providers' ability to delete or
   otherwise monitor content without themselves becoming publishers. In
   Zeran v. America Online, Inc., the Court notes that "Congress
   enacted Section 230 to remove the disincentives to self-regulation
   created by the Stratton Oakmont decision. Under that court's
   holding, computer service providers who regulated the dissemination
   of offensive material on their services risked subjecting themselves
   to liability, because such regulation cast the service provider in
   the role of a publisher. Fearing that the specter of liability would
   therefore deter service providers from blocking and screening
   offensive material, Congress enacted Section 230's broad immunity
   "to remove disincentives for the development and utilization of
   blocking and filtering technologies that empower parents to restrict
   their children's access to objectionable or inappropriate online
   material." In addition, Zeran notes "the amount of information
   communicated via interactive computer services is . . . staggering.
   The specter of tort liability in an area of such prolific speech
   would have an obviously chilling effect. It would be impossible for
   service providers to screen each of their millions of postings for
   possible problems. Faced with potential liability for each message
   republished by their services, interactive computer service
   providers might choose to severely restrict the number and type of
   messages posted. Congress considered the weight of the speech
   interests implicated and chose to immunize service providers to
   avoid any such restrictive effect."

So, since service providers can now remove material they consider offensive without incurring publisher liability, there's no reason they couldn't likewise scan users' content for malware.

Of course, as with many things, this is a double-edged sword. There's also nothing to stop them from censoring anything else they disagree with, say, articles arguing in favor of network neutrality.

               -- Mark Rosenthal

_______________________________________________
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss