December 18, 2025

California vs. the deepfakes: How a Haas faculty member helped write first-in-the-nation AI law  

Featured Researcher

David Evan Harris

Continuing Lecturer

By

Stella Kotik

Haas professional faculty member David Evan Harris (center) with (left to right): Ken Wang of CITED; California Assemblymember Buffy Wicks; Mariko Yoshihara of CITED; Shara Murphy of the Office of Assemblymember Wicks; and Leora Gershenzon of CITED. 

When you scroll through your social feed, can you tell the real videos and photos from AI fakes? Increasingly, many of us can’t. And as deepfakes blur the line between fact and fiction, trust in the media is evaporating—and faith in democracy is on the line.

A critical step toward fighting this problem was architected right here in Berkeley. David Evan Harris, a UC Berkeley Haas professional faculty member since 2015 and a UC Berkeley Chancellor’s Public Scholar, took a lead role in drafting California’s AI Transparency Act of 2025 (AB 853). Introduced by Berkeley Assemblymember Buffy Wicks and signed into law by Gov. Gavin Newsom on Oct. 13, this first-in-the-nation law requires online platforms to make it easier for users to discern whether content is authentic or AI-generated. The law, which takes effect in January 2027, applies to social media, search engines, and mass messaging platforms like WhatsApp and Telegram.

Harris had never drafted legislation before, but he brought deep expertise in online integrity, having worked in the tech industry and served as an advisor to a number of organizations and government agencies on tech policy. In his previous role at Meta, he worked on the Responsible AI, Civic Integrity, and Social Impact teams, and managed teams of researchers working on these topics. He is a faculty advisor at UC Berkeley’s Center for Information Technology Research in the Interest of Society (CITRIS) Tech Policy Initiative and has taught courses including AI Ethics for Leaders, Tech Policy Design, AI Law & Governance, and Scenario Planning.

We asked Harris to walk us through what these new rules will look like in practice for everyday users, as well as the broader implications for online integrity.

Haas News: What will the new law do? How might it change the way users interact with content online?

David Evan Harris: The California AI Transparency Act of 2025 (AB 853) from Assemblymember Buffy Wicks will give people the tools to tell the difference between authentic and AI-generated images, video, and audio content. The new law requires all large online platforms to give users a way to inspect any content and see if it contains reliable provenance information—including watermarks, metadata, digital signatures, or fingerprints—that can tell us whether the content originated with a camera or other capture device or an AI system. The large online platforms covered by the bill include social media platforms, search engines, and mass messaging platforms that have more than 2 million users, which covers most of the ways that AI-generated and authentic content are spread online.

The new law requires all large online platforms to give users a way to inspect any content and see if it contains reliable provenance information—including watermarks, metadata, digital signatures, or fingerprints—that can tell us whether the content originated with a camera or other capture device or an AI system.

—David Evan Harris

AB 853 builds on last year’s California AI Transparency Act of 2024 (SB 942), which will require generative AI companies to embed difficult-to-remove provenance data in all AI-generated images, audio, and video. When a user generates content with an AI tool, the tool’s provider must include a watermark or other form of invisible provenance data that conveys that it was generated with AI. The AI provider that makes the content must also create a website (a “detection tool”) that allows anyone to upload AI-generated content and find out if it was made by that company’s AI system. While this requirement was an important step forward, on its own it would place a huge burden on everyday people, who would have to upload content to numerous websites to see if it was AI-generated.

The new law, AB 853, addresses this by requiring all large online platforms to also act as detection tools, giving users information about a piece of content’s provenance without their having to navigate to another website or app.

AB 853 also requires manufacturers of capture devices, like phones, cameras, and audio recorders, to give users the option to embed this provenance data into the content they create. This enables you to let the world know that content you create was authentically captured. For example, if you upload an image to Instagram that has a digital signature that reads “captured with Google Pixel phone,” Instagram will now be required to allow viewers of that content to read the data that discloses the content’s origin.
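As a rough illustration of how embedded provenance data can be surfaced, here is a minimal sketch of a check for C2PA Content Credentials in a JPEG file. It relies on the C2PA convention of storing manifest data (JUMBF boxes) in JPEG APP11 segments; this is a heuristic for demonstration only, not an official tool, and it performs none of the cryptographic validation a real detection tool would require.

```python
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristically check whether a JPEG contains C2PA manifest data.

    Walks the JPEG's marker segments and looks for an APP11 segment
    (marker 0xFFEB) whose payload mentions "c2pa" -- the segment type
    the C2PA specification uses to embed Content Credentials in JPEGs.
    This does NOT parse the JUMBF boxes or verify any signatures.
    """
    i = 2  # skip the SOI marker (0xFFD8) at the start of the file
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost sync with the segment structure; give up
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:
            break  # start-of-scan: metadata segments are behind us
        # Segment length is big-endian and includes its own 2 bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:
            return True
        i += 2 + length
    return False
```

A real inspection tool would go much further, decoding the manifest to recover claims like “captured with Google Pixel phone” and verifying the digital signature chain, which is what platform-level detection tools under AB 853 are expected to surface to users.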

David Evan Harris testifying on the California legislation and the risks of AI at the U.S. Senate Judiciary Committee, Subcommittee on Privacy, Technology and the Law in Sept. 2024.

HN: What was your role in developing the bill?

DEH: Back in 2023, the California Initiative for Technology and Democracy (CITED), an initiative of California Common Cause, was looking for someone with experience working within a tech company and with issues around the information environment online to help draft the bill. When they first reached out to me, I told them I had never drafted legislation before but was excited by the opportunity. They told me not to worry because, at that point in time, the CITED team was made up of five lawyers, all with ample legislative drafting experience, who would be there to help me through the whole process. With that, I became the first nonlawyer on the core CITED team. What also helped immensely was the brilliant legislative staff at the office of Assemblymember Buffy Wicks, who also happens to represent the city of Berkeley! 

The draft of the bill underwent more changes than I can count—definitely in the hundreds, possibly upward of a thousand. I was involved in every single one of the edits over the course of dozens of drafts produced over the two-year period. Through this extended drafting process, I was fortunate to work closely with several highly skilled lawyers at CITED, including my close collaborator for AB 853, Ken Wang. Other great input came from the staff of numerous committees in Sacramento. The Assembly Privacy and Consumer Protection Committee, chaired by Assemblymember Rebecca Bauer-Kahan, and the Senate Judiciary Committee, chaired by Senator Thomas Umberg, both contributed significantly to the revision and eventual passage of the bill.

For me, one of the most fun parts of the bill’s development was interacting with a really diverse set of stakeholders who all wanted to contribute. This included reps from Big Tech companies, small companies, and nonprofits focused on human rights, civil rights, privacy, and social justice. Although it was challenging to incorporate all of their sometimes contradictory feedback, it was gratifying to finally land on a final version of the bill that got broad support from the legislature and was signed into law. 

HN: With technology moving so fast, 2027 seems far in the future. Why do we have to wait so long for this to take effect?

DEH: The first law, SB 942, takes effect in August 2026, timed to coincide with the EU AI Act, which has parallel requirements in Article 50. It’s exciting that we’ll get to see Europe and the U.S. implementing these laws in parallel, because it will create a de facto global standard. The law I worked on, AB 853, goes into effect for large platforms five months later, in January 2027, with the requirements for capture-device manufacturers following in January 2028. That’s because the tech industry argued they weren’t ready. I wish it had come into force sooner; I think that if these companies had faced legal pressure to deploy these technologies more quickly, they would have, given that a number of them have already started deploying it.

HN: Will it be easy for large online platforms to get around these rules? What are the potential consequences for companies that fail to comply with the law?

DEH: The new law will be very much enforceable, and violators will face significant fines if they fail to comply. The fines are $5,000 per violation per day, which could quickly add up to a very significant amount of money. Enforcement can come via a civil action filed by the attorney general, a city attorney, or a county counsel. 

The new law will be very much enforceable, and violators will face significant fines if they fail to comply.

—David Evan Harris

The extent to which companies will look for and exploit loopholes in our law remains to be seen. We’ve worked really hard to write reasonable legislation that will improve the quality of the information environment online without overburdening the platforms and capture-device manufacturers involved. More broadly, we believe that preserving content integrity online is crucial for maintaining fair and meaningful democratic integrity in our state, and we see AB 853 as an important step toward this goal.

The good news is that a lot of companies are already working to comply with the law, and some may already be compliant. Since 2019, a huge coalition of companies and organizations has been working to launch provenance tools to verify authentic content, including Adobe, The New York Times, Leica Camera Ltd, Truepic, Digimarc, Sony, Nikon, Canon, Microsoft, Google, Intel Corporation, BBC, The Washington Post, Qualcomm, WITNESS, OpenAI, Meta, Amazon, and many more, under the banner of the Content Authenticity Initiative (CAI), the Coalition for Content Provenance and Authenticity (C2PA), and Project Origin.

The good news is that a lot of companies are already working to comply with the law, and some may already be compliant.

—David Evan Harris

LinkedIn is a good place to get a preview of what a platform that offers provenance data to users looks like, via its implementation of the Content Credentials interface on posts like this one, which contain images taken with C2PA-compliant cameras, like the ones from Leica and Google that I used to take the pictures.

HN: The online platforms covered by the law span state and national boundaries. How will this California bill affect companies with nationwide operations and/or set a precedent for federal standards?

DEH: National companies that do business in California are subject to state law. For internet companies like those that are the target of this bill, doing business in California might simply mean that their platform can be accessed by people in California. Where it gets a little more complicated is when a company is from outside of the U.S., such as WeChat. It is technically possible for the California Attorney General to sue a foreign company for violating this law, but how those fines may be levied is uncertain. Interestingly though, China implemented a similar law to what we now have in California only a few months ago. Unfortunately, these questions about jurisdiction often get very complicated very fast, and it may take a while to see how they unfold in court. 

I have already had inquiries from legislators in multiple U.S. states and countries around the world sharing their intent to pass similar laws. While California laws don’t set a legal precedent for other states, they can become de facto national policies, as has been the case with laws in other domains including privacy and environmental standards. This extraterritorial impact of California lawmaking is known as the “California Effect”—a term coined by Haas’ own Professor Emeritus David Vogel—and it is particularly apparent in situations where it is easier for companies to provide a single product to the whole country (or the world) that complies with California laws, rather than multiple versions for multiple jurisdictions. 

The federal law question is a little different here because federal law can preempt state law under the supremacy clause of the Constitution. And in terms of federal preemption, traditionally, if the federal government wants to preempt a California law, it enacts a law that applies to the same subject matter—in this case, provenance information. But the most recent proposals for AI regulation preemption out of Washington are uniquely harmful, because they have the potential to preempt California state laws like AB 853 and replace them with much weaker laws, or even no laws at all. This is due to a dangerous effort from the tech industry to push an ideology of “tech exceptionalism,” essentially an argument that the tech industry is inherently special and different from all other industries and must not be regulated. This is still a very active battle in DC, unfolding as we speak.

But thankfully, federal legislators are still trending in the right direction. Notably, the bipartisan duo of Reps. Anna Eshoo and Neal Dunn introduced the “Protecting Consumers from Deceptive AI Act” in 2024, which contained similar ideas around provenance technology and the online information environment. Unfortunately, this bill never passed. It is possible that additional legislation will be introduced soon in Congress, so keep your eyes peeled.