Did an AI Hallucination Help Convict Fugees Rapper Pras?

Artificial intelligence is finding its way into every facet of life, including the American legal system. But as the technology becomes more ubiquitous, the issue of AI-generated lies or nonsense—aka “hallucinations”—remains.

These AI hallucinations are at the center of the claims by former Fugees member Prakazrel “Pras” Michel, who accused an AI model created by EyeLevel of torpedoing his multi-million dollar fraud case—a claim that EyeLevel co-founder and COO Neil Katz calls untrue.

In April, Michel was convicted on 10 counts in his conspiracy trial, including witness tampering, falsifying documents, and serving as an unregistered foreign agent. Michel faces up to 20 years in prison after his conviction as an agent of China; prosecutors said he funneled money to try to influence U.S. politicians.

“We were brought in by Pras Michel’s attorneys to do something unique—something that hadn’t been done before,” Katz told Decrypt in an interview.


According to a report by the Associated Press, during closing arguments, Michel’s attorney at the time, David Kenner, misquoted a lyric from the song “I’ll Be Missing You” by Sean “Diddy” Combs, incorrectly attributing it to the Fugees.

As Katz explained, EyeLevel was tasked with building an AI trained on court transcripts that would allow lawyers to ask complex natural-language questions about what had occurred during the trial. He said the model did not pull in outside information from the internet, for example.

Court proceedings notoriously generate tons of paperwork. The criminal trial of FTX founder Sam Bankman-Fried, which is still ongoing, has already generated hundreds of documents. Separately, the fallen cryptocurrency exchange’s bankruptcy has more than 3,300 documents—some of them dozens of pages long.

“This is an absolute game changer for complex litigation,” Kenner wrote in an EyeLevel blog post. “The system turned hours or days of legal work into seconds. This is a look into the future of how cases will be conducted.”

On Monday, Michel’s new defense attorney, Peter Zeidenberg, filed a motion—posted online by Reuters—for a new trial in the U.S. District Court for the District of Columbia.

“Kenner used an experimental AI program to write his closing argument, which made frivolous arguments, conflated the schemes, and failed to highlight key weaknesses in the government’s case,” Zeidenberg wrote. He added that Michel is seeking a new trial “because numerous errors—many of them precipitated by his ineffective trial counsel—undermine confidence in the verdict.”

Katz disputed the claims.

“It did not occur as they say; this team has no knowledge of artificial intelligence whatsoever nor of our particular product,” Katz told Decrypt. “Their claim is riddled with misinformation. I wish they had used our AI software; they might have been able to write the truth.”

Attorneys for Michel have not yet responded to Decrypt’s request for comment. Katz also disputed claims that Kenner has a financial interest in EyeLevel, saying the company was simply hired to help Michel’s legal team.

“The accusation in their filing that David Kenner and his associates have some kind of secret financial interest in our companies is categorically untrue,” Katz told Decrypt. “Kenner wrote a very positive review of the performance of our software because he felt that was the case. He wasn’t paid for that; he wasn’t given stock.”

Launched in 2019, Berkeley-based EyeLevel develops generative AI models for consumers (EyeLevel for CX) and legal professionals (EyeLevel for Law). Katz said EyeLevel was one of the first developers to work with ChatGPT creator OpenAI, and that the company aims to provide “truthful AI”: hallucination-free, robust tools for people and legal professionals who may not have the funds to pay for a large team.

Typically, generative AI models are trained on large datasets gathered from various sources, including the internet. What makes EyeLevel different, Katz said, is that this AI model is trained only on court documents.

“The [AI] was trained exclusively on the transcripts, exclusively on the facts as presented in court, by both sides and also what was said by the judge,” Katz said. “And so when you ask questions of this AI, it provides only factual, hallucination-free responses based on what has transpired.”
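
EyeLevel has not published how its system works under the hood, but the behavior Katz describes, with answers grounded strictly in the trial record, matches a common retrieval-based pattern: index the transcripts, fetch the passages most relevant to a question, and constrain the model to answer only from those passages. Below is a minimal Python sketch of the retrieval step under that assumption; the `PASSAGES` data and the `tokens`, `cosine`, and `retrieve` helpers are hypothetical illustrations, not EyeLevel’s actual product.

```python
import math
import re
from collections import Counter

# Hypothetical transcript excerpts; a real system would index the full trial record.
PASSAGES = [
    "THE COURT: The objection is sustained. The jury will disregard the last question.",
    "WITNESS: I wired the funds at the defendant's direction in June of 2012.",
    "MR. KENNER: Your Honor, the defense rests.",
]

def tokens(text: str) -> Counter:
    """Lowercased bag-of-words representation of a piece of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k transcript passages most similar to the question."""
    q = tokens(question)
    ranked = sorted(PASSAGES, key=lambda p: cosine(q, tokens(p)), reverse=True)
    return ranked[:k]

# Only the retrieved passages would be handed to a language model as context,
# so answers stay constrained to what was actually said in court.
for passage in retrieve("When were the funds wired?"):
    print(passage)
```

Limiting the model’s context to passages pulled from the official record is what makes a “hallucination-free” claim plausible, though the generation step layered on top of retrieval can still introduce errors.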

However an AI model is trained, experts warn that such programs remain prone to lying or hallucinating. In April, ChatGPT falsely accused U.S. criminal defense attorney Jonathan Turley of committing sexual assault. The chatbot went so far as to provide a fake link to a Washington Post article to support its claim.

OpenAI is investing heavily in combating AI hallucinations, even bringing in third-party red teams to test its suite of AI tools.

“When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate,” OpenAI says on its website. “However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools.”

Edited by Stacy Elliott and Andrew Hayward
