Pras Michél with his former lawyer David Kenner.
Photo: Tasos Katopodis/Getty Images
Ready or not, Fugees member Pras Michél is facing AI head-on. Michél is demanding a new trial following his conviction on conspiracy charges in April. His legal argument is a novel one: he claims his defense attorney used artificial intelligence in court, derailing a possible acquittal. Michél, legal name Prakazrel Michél, filed paperwork on October 16, 2023, alleging that defense attorney David Kenner botched his closing argument by relying on an “experimental” AI program that neglected important arguments — potentially leading to the rapper’s conviction. “The closing was damaging to the defense,” Michél’s new attorneys, Peter Zeidenberg, Michael F. Dearington, David M. Tafuri, and Sarah “Cissy” Jackson, wrote in court papers.
Owing to the use of AI, the new lawyers claim, Kenner effectively admitted that Michél committed a crime during his closing. According to the prosecution, Michél had allegedly moved money from a fugitive businessman, Jho Low, into Barack Obama’s 2012 campaign using bogus donors. During his closing statement, Kenner said that Low wanted a photograph with Obama and was willing to pay anything to get it — and he didn’t deny that Michél accepted money from Low. “Michél was trying to arrange to get a photograph for someone named Jho Low and he did try his very, very best to get Jho Low that photo,” Kenner said.
Michél’s allegation about AI is part of a broader claim that Kenner “was ineffective and severely prejudiced the defense.” “The AI program failed Kenner, and Kenner failed Michél,” his new lawyers said in court papers. “The closing argument was deficient, unhelpful, and a missed opportunity that prejudiced the defense.” Vulture spoke to several veteran attorneys about the alleged AI use’s potential impact on the push for a new trial.
Which courtroom arguments does Michél claim were generated by artificial intelligence?
In Kenner’s telling of the Obama story, his language to jurors “appeared to be an admission of guilt” that Michél misused campaign funds for influence peddling, his new lawyers state. They also note Kenner’s mangled use of song lyrics: he attributed a very well-known Puff Daddy song, the one with the lyrics “Every single day, every time I pray, I will be missing you,” to the Fugees, and he “misattributed Michél’s worldwide hit ‘Ghetto Supastar (That is What You Are)’ to the Fugees, when it was actually a single by Michél,” they wrote.
Michél’s new lawyers don’t just claim that Kenner relied on chatbot-generated text for legal direction. Their court papers also suggest he used a bot for financial reasons. “Kenner generated his closing argument — perhaps the single most important portion of any jury trial — using a proprietary prototype AI program in which he and Alon Israely appear to have had an undisclosed financial stake.” They claim Kenner’s alleged use of EyeLevel.AI, the tech used in the defense, bungled his closing, saying it “made frivolous arguments, misapprehended the required elements, conflated the schemes, and ignored critical weaknesses in the government’s case.”
Has EyeLevel.AI responded to the allegations?
Before Michél complained about AI use, EyeLevel.AI said in a statement on its website that the company’s “litigation assistance technology made history last week, becoming the first use of generative AI in a federal trial. The case involved Pras Michél, a former member of the hip-hop band The Fugees, who was on trial for international fraud charges.” The company claims that its “Lit Assist” tech “offers critical insights faster than human efforts and conventional technologies alone” for everything from drafting papers to appeals work.
“This is an absolute game changer for complex litigation,” Kenner is quoted as saying in a press release after the trial. “The system turned hours or days of legal work into seconds. This is a look into the future of how cases will be conducted.” Michél’s former lawyers didn’t respond to a request for comment.
However, EyeLevel.AI is insisting nothing improper happened. The company’s chief operating officer, Neil Katz, told the New York Times that Michél’s ex-lawyers didn’t rely on it. “The idea here is not that you would take what is outputted by a computer and walk it into a courtroom and read it into the record,” Katz told the newspaper. “That’s not what happened here.” He insisted that “human lawyers take this as one important input that helps them get to the ideas faster…They ultimately write the legal arguments that they present in a court.” He also denied that the lawyers had some sort of financial stake in his company.
“EyeLevel’s AI for legal is a powerful tool for human lawyers to make human decisions, but do so faster and with far greater information at their fingertips,” Katz said in a statement to Vulture. “EyeLevel is able to ingest and understand complex legal transcripts based solely on the facts of the case as presented in court. This can then be used by lawyers to prepare legal arguments, cross examine witnesses, predict arguments of opposing counsel and more.” Specifically, with Michél’s case, “the AI was trained specifically on the transcripts of the trial and produced research and analysis for the legal team in a revolutionary way, based solely on the facts of the case as presented at trial. We take issue with this idea that somehow our technology produced inaccurate or incorrect or un-useful information for the legal team.”
Has AI led to ineffective assistance of counsel claims before?
Carl Tobias, a law professor at the University of Richmond, said that this is a novel area, and that it’s very possible this type of claim hasn’t been made before. “The whole area of ineffective assistance of counsel has a lot of background and lots of jurisprudence — protection of peoples’ rights in certain situations,” Tobias said, “but whether [that] would extend to relying on new technologies is not clear, and so it sounds to me that that’s on the cutting edge.” Because this is such “uncharted territory,” Tobias said it’s hard to say what a judge would make of it.
Could Michél actually get a new trial because his lawyer used AI?
Getting a new trial is not an easy task, including on ineffective-assistance-of-counsel grounds. One aspect to consider is whether AI is actually doing the work a lawyer is supposed to do. Murray Richman, a longtime defense attorney whose roster of celebrity clients includes the late rapper DMX, said scope is incredibly important: “It depends upon how much of that argument was based on the AI or not.” Generic phrases invoked in opening statements — “ladies and gentlemen,” for example — are one thing, but if AI does the heavy lifting, that’s another thing entirely. Richman said that could be grounds for ineffective assistance of counsel, because the lawyer isn’t doing the work — it’s analogous to asking a law student to write a summation. “Is that ineffective assistance of counsel? Of course it is,” Richman said. “It’s getting somebody else to do what that person himself must have done or should have done.”
Has a lawyer ever gotten in trouble for using AI in court?
In June 2023, two New York City lawyers were fined $5,000 for filing a legal brief that was chock-full of fictional cases created by ChatGPT, according to the Times. The case, otherwise unremarkable litigation against the airline Avianca, drew international attention. The airline’s lawyers said they couldn’t find the cases cited by their opponents, and when the judge asked the lawyers to send him copies, they couldn’t — because the cases didn’t exist. “I heard about this new site, which I falsely assumed was, like, a super–search engine,” the ChatGPT-using lawyer, Steven A. Schwartz, told the judge. He also said, “I did not comprehend that ChatGPT could fabricate cases.”