Jordi Calvet-Bademunt (00:00):
The impact of the deepfake wave and the impact of AI in the elections in 2024 was hugely overblown.
Eric Wilson (00:11):
Welcome to the Campaign Trend Podcast, where you are joining in on a conversation with the entrepreneurs, operatives, and experts who make professional politics happen. I'm your host, Eric Wilson. Our guest today is Jordi Calvet-Bademunt, a senior research fellow at The Future of Free Speech and a visiting scholar at Vanderbilt University. In our conversation today, we're talking about AI regulations and their potential unintended consequences for free speech. Now, Jordi, there was so much panic last year ahead of the 2024 elections that AI was going to disrupt everything, and none of those fears came to pass. Why not?
Jordi Calvet-Bademunt (00:53):
Yeah, that's right. I mean, just to give the listeners some background, there was so much anxiety all around the world about how AI would flood elections with deepfakes and with disinformation, and many, many major institutions were echoing and amplifying those fears. The World Economic Forum, for example, called disinformation the top global risk for 2024. Many media outlets did as well, and of course, we saw that translate into public concern. It fueled public concern: according to Pew, 50% of Americans were worried about AI spreading disinformation. So after the elections, after the elections in Europe, where I'm from, and after the US elections, we asked ourselves what happened. In the end, what we saw across the board is that the impact of AI on the elections was significantly overblown across the world. And we saw that in research from Princeton, in research from the British AI Institute, and from the leading European hub for fact-checkers and academics.
(02:03):
All of them found no evidence of a meaningful impact. What they said was basically, there's no meaningful impact of deepfakes on European and American elections. So yes, we did see deepfakes. I think all of us saw some deepfakes, but they didn't shift the results. And you ask, so why is that? And one big reason is that we actually saw significantly fewer deepfakes than we had thought we would see. We didn't see that huge wave that we were expecting. For example, I mentioned the British AI Institute. They looked at the deepfakes in the UK, and in total, they saw 16 viral cases of AI disinformation during the whole election period. One-six. That's right. And when they looked into Europe and France, they only saw 11. So that's for the French elections and the EU-wide elections, only 11 viral deepfakes.
Eric Wilson (03:05):
And I feel like I have to jump in here and say that this is not because the bad guys of the world were behaving themselves. It's because, at least from what I've seen, the non-deepfakes, maybe cheap fakes or even just your classic non-AI disinformation, were still really effective. So why put in the extra effort?
Jordi Calvet-Bademunt (03:23):
That's correct. That's actually one of the key findings of some of the researchers. They said, well, first, we didn't see as many cases of deepfakes as we expected. We also saw a huge wave, let's say, of positive or, let's say, non-deceptive deepfakes. So basically politicians using deepfakes of themselves or others for parody, not trying to deceive, just as a political tool, let's say. And then also we saw, as you say, a significant prevalence of cheap fakes, these low-quality fakes, which basically, they said, well, AI hasn't really changed much compared to what we had in the past when it comes to these cheap fakes. We had Photoshop, we had video editing, we had all that. So really not such a big change. And then let me add a second reason. So one is that we saw significantly fewer, but then the second reason why we didn't see a meaningful impact is because the main consumers of those viral deepfakes were people who already believed those things. What the deepfakes did was reinforce preexisting beliefs. They didn't change them that much. If you were a conservative, you were more likely to see a conservative deepfake, and similarly for progressives. So there wasn't a huge change. I will add that there were some concerns regarding access to information and transparency when it came to analyzing those effects. But I think there's no doubt, there is broad consensus, let's say everyone agrees, that the impact of the deepfake wave and the impact of AI in the elections in 2024 was hugely overblown.
Eric Wilson (05:17):
With that background, I think we have to ask this, because that has not stopped the progression of regulations and laws meant to address AI specifically as it relates to politics. And my argument has always been, as we saw in the case in New Hampshire with the fake Joe Biden call, there were laws already on the books ready to take care of that, regardless of whether it's AI. So do we really need new laws for regulating AI, or are the existing laws about voter suppression and libel sufficient?
Jordi Calvet-Bademunt (05:49):
Yeah, I think you're right to point out that we already have laws to tackle, let's say, a good amount of the harms that we feared AI might have. I think what we've seen is that AI has gone from a very niche technology that is only known to researchers to this mainstream technology used by hundreds of millions. And with that, we've seen huge concern about the power of AI. Then all of this has happened in the context of this techlash, let's say, against tech companies in general and social media particularly, this concern about the potential risks towards society. So naturally, there was this push to regulate AI, this sense that we must regulate AI in some way. Now, there are serious harms, like for example, the generation of explicit fake images of real people, often women, even minors.
(06:48):
That is absolutely horrifying. We need to see how to address that problem. But there are also what are, for now, perceived risks, and this is, let's say, I think an important one when it comes to political speech, this fear of AI in elections. And as I said, there's quite a broad consensus that the impact of AI in elections was overblown. There's no evidence of meaningful impact. Still, this week I was in a conference in Europe with major EU policymakers there, and one of the key messages of many of them was that AI-driven disinformation is the core threat, or one of the key threats, if not the key threat. Of course, we've seen these fears translate into policies. Even here in the US, we've seen states like California, like Minnesota, that have passed laws banning political deepfakes in elections.
(07:49):
In some instances, like when the AI content can influence elections. And of course, free speech advocates have warned that this type of law could chill dissent, could chill political speech. A federal judge in California even blocked the law, warning about its free speech implications. And there's also worrying legislation in Europe. So I know that all of this comes from good intentions, at least I think, right? I would say most of it comes from good intentions, if not all. But we should be very careful in determining what are the real risks, what are the perceived risks, and what tools we already have. We have tools to fight against defamation. We have tools to fight against incitement to lawless action. We already have tools. So let's see what evidence we have for the harms, what tools we have, and then what other regulation, if any, should be adopted, ensuring that we keep our liberal democratic system.
Eric Wilson (08:57):
And help us understand why AI is being treated so differently. Is it that the techlash against social media came and went, and the regulators didn't do anything until they perceived that it was too late, and now they're not going to be caught flat-footed with AI? Is that what's driving this? Why are we treating AI differently?
Jordi Calvet-Bademunt (09:16):
I think AI really feels different. It feels almost magical. We can create very quickly content of all types. It can be good content, it can also be harmful content, and we can do it very quickly. So it feels like, oh, we should definitely pay attention to this. At the same time, I will add that I think we've been here before. My colleague Jacob Mchangama has a book on the history of free speech, and there he talks about how major technologies have tended to spark panic when they come out. And there are a couple of examples that I really like. One is about the printing press in the 16th century. There's the philosopher Erasmus of Rotterdam. He was warning everyone that the printing press would fill the world with libelous and subversive books. Then in the mid-19th century, we had the New York Times warning that the telegraph was too fast for the truth.
(10:14):
And we've already talked about the internet. So we've seen this before. So even if it feels different, we should be aware that this has happened, and we should make sure that we are recommitting ourselves to our free speech principles. Technology can change, but our principles should remain, and we should apply them. And look, I'm not saying that AI doesn't come with risks. We've already talked about some of them, like the concerns regarding the generation of intimate images, and I also understand concerns about hate speech, disinformation, all of that. So I'm not denying that harms do exist, but that's true for any communication technology. That's true for books, that's true for the internet, that's true for the telegraph. And we shouldn't aim to eliminate all those harms. What we should try to do, in my opinion, is see how to manage them, but without undermining the foundations of these free and democratic societies that we have.
Eric Wilson (11:13):
So to come to the defense of Erasmus, he was correct, right? The printing press did lead to all those things, but more importantly, it led to lots of benefits. And I think that's one thing that's missing from the conversation: we're not seeing enough people discuss the potential good uses of deepfakes. So I know you've followed AI in India. In their elections, they use deepfakes as part of their day-to-day strategy, so candidates can speak in the language of whichever region they're campaigning in, even if they themselves don't speak it. And there are tons of examples like that. And I think that is missing from the conversation, the positive ways to use it. And so we can only respond to the sort of fearmongering. And in reality, we're not seeing the use cases, either positive or negative, yet, because it's still so new.
Jordi Calvet-Bademunt (12:06):
Yeah, absolutely, I agree. I mean, I'm happy to talk about it. There are plenty of positive use cases of AI when it comes to empowering challenger politicians to compete against incumbents by facilitating the generation of content. There's also the possibility, in countries that speak many languages, of allowing politicians to reach voters who don't speak their own language, in terms of civic engagement as well, and fighting against authoritarians. So for sure, there are plenty of good uses, and I think that's something, as you say, we should be paying more attention to as well.
Eric Wilson (12:51):
You're listening to the Campaign Trend Podcast. I'm speaking with Jordi Calvet-Bademunt from The Future of Free Speech in Nashville, Tennessee. Talk to us a little bit, I mean, you've alluded to this, but say more about the potential risks to free speech of overly broad AI regulation. It almost feels like we shouldn't have to defend free speech, but give some examples of where it might go wrong.
Jordi Calvet-Bademunt (13:17):
Yeah, when it comes to overly broad AI regulations, I see at least three risks, based on the models that I've looked into. Say we have China. In China, they are reportedly testing models to see if they comply with core socialist values. So that's a clear tool of control. And we see that when we use DeepSeek and we ask DeepSeek about things like Taiwan or the Uyghurs or other sensitive topics. If you're using the platform from DeepSeek, it's going to refuse or it's going to parrot some Communist Party talking points. So we've seen that as a tool of control and propaganda. Then if we look at Europe, there are these obligations to what is called assess and mitigate systemic risks. Now, the issue with systemic risks is that nobody knows exactly what they are.
(14:13):
So they are very subjective and they are very broad. So that's the risk there. It's the risk of, one, self-censorship from companies: I'm going to be very cautious, if I'm a company, not to publish any content that might be perceived negatively by authorities, because I don't want to get into trouble, because of the fines. And there's also, of course, given how vague and how subjective it is, the risk of that type of provision being misused by public authorities. Now, of course, China and Europe are very different worlds. Europe is a democracy with rule of law; China is an authoritarian regime. But we should also be paying attention to risks in democracies. And then I already talked about the US, laws limiting or banning political deepfakes. I've talked about California, I've talked about Minnesota, and free speech advocates have already been warning about the risk of chilling political speech like satire, like parody. So I think those are some of the concerns that I'm seeing.
Eric Wilson (15:19):
So what, if anything should be done by governments about AI in politics or around political speech? Is it, Hey, let's stop and wait and see, or are there some targeted interventions that you think might be appropriate at this stage?
Jordi Calvet-Bademunt (15:35):
I think for policymakers, let's say, my recommendation would be: I would not be in favor of stopping. I would be in favor of continuing the research, ensuring that we have access to data, and promoting the research that we need. Let's see what the real impact is, where the real risks are, where the real harms are. And then, based on that, and without falling into panic mode, let's adopt any policies that we need. And then second, let's focus on, let's call them, pro-speech, non-restrictive policies. I mean things like media and AI literacy. Let's make sure that all of us are equipped from a young age, including children in school, but also adults. Let's make sure that we empower people to spot disinformation, to think critically, to assess sources. Because I think, realistically, AI content is not going away, so we need to make sure that we're ready to really deal with it.
Eric Wilson (16:44):
And I think at the top of our conversation, we were pretty critical of the panic and people really overblowing the risks. It's hard to say whether that was actually effective in sort of raising awareness of the risks and putting people on guard. We obviously can't know that, but that media literacy, being able to know what the technology is and things like that, is incredibly helpful.
Jordi Calvet-Bademunt (17:04):
Yeah, I think we need to do it in a responsible way, right? Warning people about the risks, but without inflating them, without making it look like this terrible thing that we cannot avoid. And there's one interesting study that pointed out that the consumption of content warning about AI risks, but in this inflated way, especially on TV, actually led to huge concern among the public and, subsequently, distrust. So let's do it, absolutely. Let's warn people, and let's do it in a responsible way. I think that's a great way to approach it.
Eric Wilson (17:39):
So one thing that I am concerned about as a practitioner in the political space is that sort of preemptive regulation that we're seeing with OpenAI, Claude, and others coming out and saying, you're not allowed to use this for politics, and sort of self-censoring. And I think some of that is a, hey, this is still too new, we're trying to figure things out. But how are the various AI companies restricting free speech in their policies, and how should they address that?
Jordi Calvet-Bademunt (18:11):
Right, that's a very important point. At the think tank I work in, The Future of Free Speech, we focus not only on laws, but we also focus on the free speech culture. I think this is an indispensable part of the free speech culture when it comes to AI models. And what we saw initially was what you're saying, that models tended to overcorrect all the time. We did this analysis in early 2024 where we analyzed the major models, and when we looked into ChatGPT, Gemini, Claude, three major models, they were refusing to answer prompts 50% of the time.
(18:49):
And those prompts concerned controversial but legal topics. We tried to be balanced. So, I don't know, the participation of transgender athletes in women's sports, the COVID-19 lab leak theory, whether white Protestants hold too much power in the US. So controversial topics from both sides. And we saw that half of the prompts were rejected. But we've seen good news. We've seen that at least some AI companies have reacted. OpenAI publicly committed to an intellectual freedom policy. Anthropic also announced that they are reducing unnecessary refusals by 45%. And so we repeated this exercise again this year, and we did see that models have significantly improved. Now the rejection rate, the refusal rate, is not 50%, it's 25%. So we're on the way to user empowerment, hopefully.
Eric Wilson (19:45):
And it is such a broad instrument, banning certain words from responses and things like that. I was at a conference of public opinion researchers this week, and they were talking about how it can be difficult to use AI to do analysis, because you're asking people for their opinions on these controversial issues, and if you can't use the AI to query certain things... I mean, it's an axe when a scalpel is needed. For sure. Absolutely.
(20:13):
Well, my thanks to Jordi for a great conversation. Go check out his work at Tech Policy Press and The Future of Free Speech, a great organization that, as he mentioned, is doing some really good stuff to make sure that our free speech is protected in the frontier technologies of our day. And you know that if this episode made you a little bit smarter, maybe gave you something to think about, all we ask is that you share it with a friend. You'll look smarter in the process, and more people find out about the show. So it's win-win all around. Remember to subscribe to the Campaign Trend Podcast wherever you get your episodes, and be sure to visit CampaignTrend.com. With that, I'll say thanks for listening. We'll see you next time. The Campaign Trend Podcast is produced by Advocacy Content Kitchen, a media production studio.