
Expert defends anti-AI misinformation law using chatbot-written misinformation


Facepalm: Large language models have a long, steep hill to climb before they prove trustworthy and reliable. For now, they are useful for kicking off research, but only a fool would trust them to write a legal document. A professor who specializes in the subject should know better.

A Stanford professor has egg on his face after submitting an affidavit to the court in support of a controversial Minnesota law aimed at curbing the use of deepfakes and AI to influence election outcomes. The proposed amendment to existing legislation states that candidates convicted of using deepfakes during an election campaign must forfeit the race and face up to five years in prison and a $10,000 fine, depending on the number of previous convictions.

Minnesota State Representative Mary Franson and YouTuber Christopher Kohls have challenged the law, claiming it violates the First Amendment. During the pretrial proceedings, Minnesota Attorney General Keith Ellison asked the founding director of Stanford’s Social Media Lab, Professor Jeff Hancock, to provide an affidavit declaring his support for the law.

The Minnesota Reformer notes that Hancock drew up a well-worded argument for why the legislation is essential. He cites several sources for his conviction, including a study titled “The Influence of Deepfake Videos on Political Attitudes and Behavior” in the Journal of Information Technology & Politics. He also references another academic paper called “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance.” The problem is that neither of these studies exists in the journal mentioned or in any other academic resource.

The plaintiffs filed a memorandum suggesting that the citations could be AI-generated. They argue that the dubious attributions undermine the declaration’s validity even if they did not come from an LLM, and that the judge should therefore throw it out.

“The citation bears the hallmarks of being an artificial intelligence ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT,” the memorandum reads. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question.”

If the citations are AI-generated, it is highly likely that portions of the affidavit, or even the entire document, are, too. In experiments with ChatGPT, TechSpot has found that the LLM will make up quotations that do not exist in an apparent attempt to lend validity to a story. When confronted about it, the chatbot will admit that it made the material up and then revise it with even more dubious content.

It is conceivable that Hancock, who is undoubtedly a very busy man, wrote a draft declaration and passed it to an aide to edit, who ran it through an LLM to clean it up, and the model added the references unprompted. However, that doesn't shield the document from rightful scrutiny and criticism, and the episode illustrates the main problem with LLMs today: their output cannot be taken at face value.

The irony that a self-proclaimed expert submitted a document containing AI-generated misinformation to a court in support of a law outlawing exactly that kind of misinformation is not lost on anyone involved. Ellison and Hancock have not commented on the situation and likely want the embarrassing faux pas to disappear.

The more tantalizing question is whether the court will consider this perjury, since Hancock signed under the statement, “I declare under penalty of perjury that everything I have stated in this document is true and correct.” If people are not held accountable for misusing AI, how will the situation ever improve?


