DAVID EVAN HARRIS

How to prepare Greece for the onslaught of AI

An expert on artificial intelligence (AI) ethics for leaders at Berkeley speaks in depth about the challenges and prospects of this ‘valuable but dangerous’ technology

In an interview with Kathimerini, David Evan Harris, a CIGI senior fellow, Chancellor’s Public Scholar at the University of California (UC), Berkeley, and faculty member at the Haas School of Business, where he teaches artificial intelligence (AI) ethics for leaders, suggests identifying at-risk jobs and investing in education when asked what he would advise the Greek government to do to prevent the country from becoming a “pariah” in the field.

Harris has devoted himself to the responsible use of this powerful tool, AI, which he likens to nuclear technology: valuable but extremely dangerous. He has worked at Meta (the former Facebook company), writes a column in the Guardian, and follows European efforts to legislate in this area with some trepidation. [Editor’s note: It must be noted that this interview took place before representatives from EU member-states unanimously voted on February 2 to advance the Artificial Intelligence Act, paving the way for a paradigm-shifting set of rules that will influence how AI is governed in the region and around the world.] Our conversation begins with concerns about the class division that the spread of AI could create.

Do you think economic and social inequalities will be exacerbated by AI?

We do not yet know the answer to this question, because we do not know whether AI will serve the public interest or whether it will be controlled by a few companies. That will depend on whether there is public investment in its development, to make it accessible to people and therefore safe. If, on the contrary, all investment comes from the private sector, this mission will be undermined. In other words, the problem is not only the control and regulation of the AI landscape, but also the channeling of public resources, because it is a very expensive business. For universities and research centers to be able to develop AI for the public good, significant investment will be required, either from the EU or from national governments. So there are two factors that can ensure AI does not increase inequalities: regulation and public investment.

Do you think AI is a blessing or a curse for a small country like Greece?

The key question we are dealing with is whether AI will change the whole labor market. Initially, we thought that the first jobs to be affected would be jobs like truck drivers or other manual workers who would be replaced by robots. But we ended up seeing that the current AI systems are mostly affecting jobs like yours and mine, professors, journalists, writers. We didn’t expect these systems to become excellent writers before they became excellent truck drivers!

If you were an AI adviser to the Greek government, what would you suggest?

I would say that countries that have started to think about the future and plan for it in terms of labor market shifts will have the opportunity to benefit from AI. My first priority for Greece would be to identify the jobs that are most vulnerable to the introduction of AI and help those people transition to other jobs. The second would be to invest in education.

I have heard that Silicon Valley is recruiting people in Greece for engineering positions as part of their outsourcing. Greece is an attractive country for tech companies to open offices. If the right infrastructure is in place in the education system, your country could take advantage of the economic opportunities that are opening up. But if what I am describing is not done in time, the course could well be reversed: The jobs needed will migrate to other countries, and Greece will find itself unprepared for the challenge of re-education created by advances in AI.

‘We do not know yet whether AI will serve the public interest or whether it will be controlled by a few companies’

You have written that the legislation under discussion in the EU is the most serious attempt to regulate the AI landscape worldwide. Is there anything that concerns you?

My main concern is the prospect of this effort being torpedoed by Europe’s own tech companies. In fact, two companies, the French Mistral AI and the German Aleph Alpha, are lobbying to limit the legislation’s scope to the types of AI that existed before ChatGPT. In other words, they want to make the regulations obsolete before they are even implemented. If they succeed, the EU legislation will be outdated from the start and will allow many more abuses of AI in the future.

Is the erosion of democracy one of your fears?

It is already happening without us realizing it. The big social media companies determine what we see on their platforms, such as extreme political views, and increase the polarization of public opinion. But with the use of generative AI, which has the ability to create new content, it will now be possible to create fake national narratives with great speed. I fear that these systems will be particularly persuasive in manipulating citizens, encouraging them either to vote a certain way or not to vote at all. I also fear that they will be effective in spreading such content through encrypted applications such as WhatsApp, which I understand is also very popular in Greece. Don’t forget that the company that owns the platform, Meta, doesn’t want to be involved in controlling the content.

When Meta’s founder, Mark Zuckerberg, promises to build a powerful AI system and ensure free access to the public, it sounds very democratic. What is hidden behind this commitment?

When a private company promises this, it must also be accountable for the misuse of the tool. When Mark Zuckerberg made Facebook free for users, at first we loved it and thought it would help spread democracy: We experienced the Arab Spring and the #MeToo and Black Lives Matter movements, and we believed it was a compelling platform for democracy. But then we found out that it was an even more powerful tool in the hands of authoritarian leaders who decided to exploit these technologies. I fear the same will happen if AI falls into the hands of people who want to harm our societies.

Many people have compared AI to nuclear technology…

Yes, because nuclear technology works the same way: If you use it properly, you can run a plant and produce energy, something valuable to society. But you can also use it to make nuclear weapons. AI is just like that: You can use it for the common good or you can use it for military purposes. If you make it freely available to anyone who wants it, it can fall into the hands of bad actors. A decision this important, with such huge implications for all of humanity, should not be left to a private company.

And you say this despite having worked at Meta. Do you regret it?

I did work at Meta for five years, but in teams with a good purpose: curbing platform abuse, election interference and misinformation, and strengthening the company’s accountability. Unfortunately, the company has since decided either to eliminate these departments or to drastically reduce their staff. This is what we need to keep in mind when we expect private companies to self-regulate. When the economy is doing well, these companies are highly profitable; they have the luxury of hiring people for the departments I worked in and making sure their products are beneficial to the world. But when a crisis hits and they have to cut operating costs, they target the departments that bring the least profit to the company.

And then people like you are the first to get laid off.

Exactly! And to be fair to Mark Zuckerberg, it’s even worse at Elon Musk’s X, where he laid off over 80% of the company’s employees, including an entire team dedicated to “AI ethics” whose mission was to make Twitter a safe environment for democracy. Ironically, the company investing the most in this area is TikTok, even though in China the approach goes to the other extreme: censoring a great deal of content.

‘Greece is an attractive country for tech companies to open offices. If the right infrastructure is in place in the education system, your country could take advantage of the economic opportunities that are opening up,’ David Evan Harris tells Kathimerini.
