A conservative YouTuber who is suing to challenge Minnesota’s law against so-called “deepfake” videos has accused Minnesota Attorney General Keith Ellison of submitting court documents that cite fake expert studies allegedly generated by artificial intelligence.
The court documents were filed in connection with a lawsuit brought by Christopher Kohls, a content creator who uses the name “Mr Reagan,” over a Minnesota law entitled “Use of Deep Fake Technology to Influence An Election,” which was passed in 2023 and amended in August 2024. The law makes it a crime to share deepfake images with intent to injure a political candidate within 90 days of a nominating convention or an election. Violation of the law is punishable by up to 90 days’ imprisonment and a fine of up to $1,000. If a candidate for office is found guilty of violating the law, they must forfeit the nomination or office and are thereafter disqualified from holding public office.
The law does not require that the deepfake image be shared with intent to deceive, meaning that even parody, a form of speech traditionally protected by the First Amendment, could fall within its scope.
Kohls and Minnesota state Rep. Mary Franson, a Republican, challenged the law as an unreasonable restriction on protected speech. Kohls maintains a YouTube page on which he has posted multiple videos labeled as parodies. Many of these videos appear to use voice cloning to mimic Vice President Kamala Harris making statements that she actually did not make.
Ellison, in defense of the anti-deepfake law, argued in the lawsuit that the statute is necessary to protect Minnesotans against dangers posed by deepfakes. In support of his argument, Ellison, a Democrat, filed two expert declarations, one of which was by Stanford Professor Jeff Hancock, the founding director of the Stanford Social Media Lab. Hancock is known as a leading expert on technology-based deception on social media.
Hancock’s declaration supporting the Minnesota statute cited numerous academic works, but according to a motion filed by Kohls and Franson, one of the studies cited was a “hallucination” made up by ChatGPT.
According to the filing, “the Declaration of Prof. Jeff Hancock cites a study that does not exist. No article by the title exists. The publication exists, but the cited pages belong to unrelated articles. Likely, the study was a ‘hallucination’ generated by an AI large language model like ChatGPT.”
Kohls and Franson argue that Hancock’s declaration is partially fabricated and entirely unreliable, and they ask that the court exclude it entirely. According to the law’s challengers, Hancock’s citation “bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’” and Hancock’s commentary on it “contains no methodology or analytic logic whatsoever.”
They argue that Hancock attested — under penalty of perjury — that he “identified the academic, scientific, and other materials referenced in this declaration in the references attached as Exhibit C.”
However, said the plaintiffs, one reference was cited as:
Hwang, J., Zhang, X., & Wang, Y. (2023). The Influence of Deepfake Videos on Political Attitudes and Behavior. Journal of Information Technology & Politics, 20(2), 165-182. Dkt. 23-1 at 39.
“The ‘doi’ url is supposed to be a ‘Digital Object Identifier,’ which academics use to provide permanent links to studies,” explained the plaintiffs in their filing.
“Such links normally redirect users to the current location of the publication, but a DOI Foundation error page appears for this link: ‘DOI NOT FOUND.’ Bednarz Ex. 1,” the filing says. “Perhaps this was simply a copy-paste error? It’s not. The article doesn’t exist.”
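For readers curious how such a check works: the DOI Foundation’s resolver at https://doi.org redirects a valid identifier to the publisher’s current page for the work, and answers unknown identifiers with an HTTP 404 error page reading “DOI NOT FOUND.” The following is a minimal sketch in Python, using the widely available requests library, of how one might verify a citation’s DOI; the identifier shown is a hypothetical placeholder, since the filing’s exact DOI string is not reproduced in this article.

    # Minimal sketch: check whether a DOI resolves at the DOI Foundation's
    # handle service. A 404 response corresponds to the "DOI NOT FOUND"
    # error page described in the filing.
    import requests

    def doi_resolves(doi: str) -> bool:
        """Return True if https://doi.org recognizes the identifier."""
        resp = requests.get(
            f"https://doi.org/{doi}",
            allow_redirects=True,  # valid DOIs redirect to the publisher
            timeout=10,
        )
        return resp.status_code != 404

    if __name__ == "__main__":
        doi = "10.1000/placeholder-example"  # hypothetical DOI for illustration
        print("resolves" if doi_resolves(doi) else "DOI NOT FOUND")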
Kohls and Franson said that not “even a snippet” of the cited article appears “anywhere on the internet as indexed by Google and Bing, the most commonly used search engines,” and that even Google Scholar found no match. The publication, the Journal of Information Technology & Politics, exists, they said, but the cited pages do not contain the listed article.
They maintained that “Hancock (or his assistants) didn’t even bother to click” the links, and that this failure calls into question Hancock’s entire declaration.
“This is especially true given that the conclusions that Ellison most relies on have no methodology behind them and consist entirely of expert say-so,” argued the plaintiffs.
Ellison has yet to respond to the motion to exclude the expert declaration.
AI-generated content has increasingly become an issue in litigation. In 2023, two New York lawyers were fined $5,000 for submitting a brief containing “gibberish” citations of fake legal cases made up by ChatGPT.
“We will reply in court shortly,” a spokesperson for Ellison’s office told Law&Crime in an email on Monday.
Attorneys for Hancock did not immediately respond to Law&Crime’s request for comment.