Bloomberg Law
December 8, 2023, 10:02 AM UTC

AI Tool to Redact Minors’ Info in Testing for Los Angeles Court

Maia Spoto
Correspondent

Los Angeles’ trial court system is testing an artificial intelligence tool that would redact the personal information of minors from court records, a new use of the exploding technology by the nation’s largest trial court.

The system would remove personal data such as Social Security numbers, addresses, and medical information from minors’ case files, based on a list of specific names and details provided by court staff, the court said. Accenture PLC is the vendor for the AI, which will be used specifically to help read and identify characters and patterns in documents.

“I’m a strong believer in the fact that I think [AI] can be used for good,” Los Angeles Superior Court Presiding Judge Samantha Jessner told Bloomberg Law. The redaction tool “would help us be efficient, handle a lot of like tasks,” she said.

Currently, the court uses software that works much like Microsoft Word’s find-and-replace function, swapping out names and details input by court staff, a Los Angeles Superior Court spokesperson said.
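Neither the court nor Accenture has described how the tool is built. The minimal Python sketch below, in which every name, pattern, and sample string is a hypothetical stand-in, only illustrates the difference between the two approaches: exact-string replacement, akin to the current find-and-replace software, versus a pattern-matching pass that could also flag formatted data, such as Social Security numbers, missing from a staff-supplied list.

```python
import re

# Illustrative sketch only -- the court has not published implementation
# details, and every name, pattern, and sample string here is assumed.

# Stand-in for the "list of specific names and information" court staff
# would supply:
STAFF_TERMS = ["Jane Doe", "123 Main St."]

def redact_exact(text: str, terms: list[str]) -> str:
    """Exact-string replacement, akin to Word-style find-and-replace."""
    for term in terms:
        text = text.replace(term, "[REDACTED]")
    return text

# A pattern-based pass can catch formatted data a staff list misses,
# such as Social Security numbers:
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_patterns(text: str) -> str:
    """Regular-expression redaction of SSN-shaped strings."""
    return SSN_PATTERN.sub("[REDACTED]", text)

sample = "Minor Jane Doe, SSN 123-45-6789, resides at 123 Main St."
print(redact_patterns(redact_exact(sample, STAFF_TERMS)))
# Minor [REDACTED], SSN [REDACTED], resides at [REDACTED]
```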

Chinmayi Sharma, an associate professor at Fordham University School of Law, said family proceedings are often plagued by long wait times caused by backlogs, and the tool could help speed the process.

Lingering Questions

The tool is still in development, and Los Angeles Superior Court could ultimately choose not to use it, the court said.

However, the plan is for LASC’s Court Technology Services division to control access to the tool, with logged-in clerks feeding it documents, the court said. After the tool redacts a document, logged-in users would be able to view it, and Los Angeles Superior Court staff would review the redactions to ensure they are correct.

The stakes are somewhat low because a person appears to decide whether to accept or reject the tool’s suggestions, and because a party to the suit who spots an error could notify the court to have the document taken down, redacted again, and republished, Sharma said.

However, significant evidence in other contexts shows that AI models tend to be more competent with information associated with white people, who are better represented in training data, Sharma said. It’s still hard to say whether this tool would be biased, she said; bias is less likely if the tool is trained only on court documents, which reflect the demographics of people who move through the court system better than general internet data does.

It won’t be clear whether the tool is low-risk until more information is made available, University of California, Berkeley scholar David Evan Harris said.

“There needs to be public scrutiny, testing for bias, and transparency about who this vendor is,” Harris said.

Such an evaluation needs to consider “the whole ecosystem of factors around the tool,” including whether the staff responsible for operating it are overworked and whether the user interface is easy to use, Harris said. And the court needs to rigorously track whether the new system produces fewer errors than the old one, he said.

Sharma also noted the risk that employees could rely too heavily on the system and forgo checking its output carefully.

“If law firm associates are relying on AI made-up sources,” she said, “you can imagine how an overworked state court clerk may be tempted to assume the system was correct.”

To contact the reporter on this story: Maia Spoto in Los Angeles at mspoto@bloombergindustry.com

To contact the editors responsible for this story: Stephanie Gleason at sgleason@bloombergindustry.com; Andrew Childers at achilders@bloomberglaw.com
