
Rules Of The Road For AI In Political Ads – Scott Brennen (University of North Carolina)

"Public policy should promote learning about generative AI."

Scott Brennen, Head of Online Expression Policy at the University of North Carolina's Center on Technology Policy, discusses the current state of AI in political ads and the potential harms and benefits of its use. He groups AI usage in political ads into three categories: publicity, alteration, and fabrication. While concerns about AI in political ads have been overstated in some areas, risks around bias and down-ballot races have been understated. Brennen offers recommendations for public policy that promotes learning about generative AI and targets electoral harms, and he suggests ways campaigns can protect themselves from unintended consequences.

  • A policy framework to govern the use of generative AI in political ads (Brookings.edu)

Takeaways

  • AI usage in political ads can be categorized into publicity, alteration, and fabrication.
  • Concerns about AI in political ads have been overstated in some areas, but risks concerning bias and down-ballot races have been understated.
  • Public policy should promote learning about generative AI and target electoral harms rather than the technologies themselves.
  • Tech platforms and regulatory agencies are taking steps to address AI in political ads, such as requiring disclosures and prohibiting deceptive content.
  • Campaigns can protect themselves from unintended consequences by being mindful of biases in AI-generated content and working with local election officials to promote factual information.

Chapters

00:00 Introduction
00:31 Current State of AI in Political Ads
01:03 Categories of AI Usage in Political Ads
03:04 Concerns and Overblown Panic
06:17 Harms and Recommendations
13:30 Actions by Tech Platforms and Regulatory Agencies
18:01 Recommendations for AI in Political Ads
26:27 Protecting Campaigns from Harms
29:39 Conclusion
