How Libel Law Applies to Automated Journalism
Abstract
Automated journalism—the use of algorithms to translate data into narrative news content—is enabling all manner of outlets to increase efficiency while scaling up their reporting in areas as diverse as financial earnings and professional baseball. With these technological advancements, however, come serious risks. Algorithms are not good at interpreting or contextualizing complex information, and they are subject to biases and errors that ultimately could produce content that is misleading or false, even libelous. It is imperative, then, to examine how libel law might apply to automated news content that harms the reputation of a person or an organization.
This article conducts that examination from the perspective of U.S. law, chosen for its uniquely expansive constitutional protections in the area of libel. It appears that the First Amendment would cover algorithmic speech—meaning that the First Amendment’s full supply of tools, principles, and presumptions would apply to determine whether particular automated news content would be protected. In the area of libel, the most significant issues arise under the plaintiff’s burden to prove that the libelous content was published by the defendant (with a focus on whether automated journalism would qualify for the immunity available to providers of interactive computer services) and that the content was published through the defendant’s fault (with a focus on whether an algorithm could act with the actual malice or negligence usually required to satisfy this inquiry). A significant issue also arises under the opinion defense, which provides broad constitutional protection for statements of opinion (with a focus on whether an algorithm is itself capable of holding the beliefs or ideas that generally inform an opinion).
Repository Citation
Jonathan Peters, How Libel Law Applies to Automated Journalism, Oxford Research Encyclopedia of Communication (2021). Available at: https://digitalcommons.law.uga.edu/fac_artchop/1544