President of the European Commission Ursula von der Leyen at a meeting of EC members in Strasbourg, France, 12 December 2023. Photograph: Ronald Wittek/EPA

Europe has made a great leap forward in regulating AI. Now the rest of the world must step up

David Evan Harris

Like the climate crisis, artificial intelligence is global. The threats it poses can be resolved if we all work together

  • David Evan Harris is a public scholar at UC Berkeley and senior fellow at the Centre for International Governance Innovation

The European Union AI laws – which leaders finally announced had been agreed at nearly midnight on Saturday – are on track to entirely overshadow Britain’s six-week-old Bletchley declaration on artificial intelligence. The text of the agreement on a suite of comprehensive laws to regulate AI is not finalised, and many devils will be found in the details, but its impending arrival signals a sea change in how democracy can steer AI towards the public interest.

The Bletchley declaration was a huge achievement, especially for bringing countries such as China, Saudi Arabia and the UAE to agree on a formal statement about AI regulation. The problem is that it was just that: a statement, with no legal power or enforcement mechanism. Now that the EU is taking action to impose firm legal requirements on the developers of artificial intelligence, it’s up to other countries to step up and complete the puzzle.

The final hurdle that negotiators cleared at the weekend was the question of which uses of AI would be banned outright. Prohibited practices include “cognitive behavioural manipulation” – a broad term for technologies that interpret behaviours and preferences with the intent of influencing our decisions. They also include the “untargeted scraping of facial images from the internet or CCTV footage”, a practice already in use by some companies that sell databases used for surveillance; “emotion recognition in the workplace and educational institutions”, which could be used by companies to discipline, rank or micromanage employees; “social scoring”, a dystopian surveillance tool used in China to rate individuals on everyday activities and allocate (or withhold) “social credit” from them; “biometric categorisation”, a practice where characteristics such as skin tone or facial structure are used to make inferences about gender, sexual orientation, or even the likelihood of committing a crime; and “some cases of predictive policing for individuals”, which has already been shown to have racially discriminatory impacts.

But don’t breathe a sigh of relief yet. In the same way that the climate crisis is a global problem that can only be solved if all countries reduce emissions, AI is global in nature, and can only be kept in check by many nations working together. Powerful “general purpose AI” (GPAI) systems, such as the one underlying ChatGPT, can churn out personalised misinformation and manipulation campaigns, non-consensual intimate imagery (NCII, sometimes known as deepfake pornography) and even designs for biological weapons.

If one part of the world regulates these but then another releases unsecured, “open-source” versions of these tools that bad actors can weaponise at will, the whole world can still suffer the consequences. These bad actors could include Russia’s military intelligence agency, the GRU, or digital mercenaries (troll farms for hire) which may not have the funds or technology to make their own world-class models, but which could get hold of powerful AI tools built without these safeguards and use them to try to manipulate elections around the world.

The planned EU AI act is, unfortunately, not perfect. While it places laudably strong regulations on GPAI, including “open-source” systems, there are still gaps. If AI tools such as “undressing” apps are used to create NCII, it appears liability could fall only on the individual user creating this content, not on the developer of the AI system used to create it, according to one European Commission official I spoke to. I would prefer that developers be prohibited from distributing tools capable of causing such potentially irreparable harm, especially when children could be both perpetrators and victims.

Another worry is that the EU AI act won’t come fully into force until at least 2026. Some parts of it will phase in sooner, and it is designed to be “future proof”, but AI tech is improving so quickly that there’s a strong possibility the technology will outrun legislation. This is an even bigger risk if the EU stands alone on legislating AI.

The Bletchley declaration, which came out of the first AI safety summit, was an important part of a series of parallel efforts taking place within the G7, G20, UN and Organisation for Economic Co-operation and Development. Follow-on AI safety summits are planned for South Korea and France in 2024.

Here are the most important binding regulations that these summits and parallel governance processes need to put in place. 1) Affirm the prohibition of the uses described above. 2) Firmly regulate high-risk AI systems, including GPAI, requiring thorough risk assessments, testing and mitigations. 3) Require companies to secure their high-risk GPAI systems and not release them under “open-source” licences unless independent experts determine them to be safe. 4) Clearly place liability on the developers of GPAI systems, as well as their deployers, for the harms those systems cause. 5) Require that AI-generated content be “watermarked” so that it can be easily detected by lay consumers as well as experts. 6) Respect the copyright of creators such as authors and artists when training AI systems. And finally, 7) tax AI companies and use the revenue to protect society from any harms caused by AI, from misinformation to job losses.

Ensuring that AI is developed in ways that serve the public interest is a gargantuan task that will require participation from citizens and governments around the world. Now is the time for everyone, everywhere, to get educated about the risks and benefits of AI, and to demand that your elected representatives take its threats seriously. The EU has made a good start; now the rest of the world needs to enact binding legislation to make AI serve you and your community.

  • David Evan Harris is chancellor’s public scholar at UC Berkeley, senior fellow at the Centre for International Governance Innovation, senior research fellow at the International Computer Science Institute and visiting fellow at the Integrity Institute
