
Why AI Disclaimers May Hurt the Ads They Were Meant to Inform

The latest research from the AAPC


In this episode of the Campaign Trend Podcast, Julie Sweet, Director of Advocacy and Industry Relations at the American Association of Political Consultants (AAPC), joins me to discuss the AAPC's March 2026 member survey on AI adoption and the AAPC Foundation's new disclaimer effect study – the first empirical research on how AI disclosures in political ads actually move voters.

The survey paints a picture of an industry moving fast but unevenly. Daily AI use among AAPC members jumped from 34% to 57% in a single year. About 11% of firms are now building AI-first services, and a large cohort is integrating AI into core workflows like RFP responses, invoicing, and client communications. But the gap is widening between large firms running agentic experiments and small shops still hesitant to subscribe to ChatGPT. Julie worries that firms on the wrong side of that line will struggle to get off the launchpad in time for 2028.

The disclaimer effect study is the most provocative finding of the year. The AAPC Foundation produced two versions of the same ad, one shot traditionally and one generated entirely with AI, then tested each with and without an "AI-generated" label. The label caused trust, believability, and credibility to drop sharply, even on the human-shot version. Dial tests showed a "speed bump effect" the moment the disclaimer appeared on screen. And the voters with the least exposure to AI and politics, the very people lawmakers most want to protect, were the most confused by the labels.

We also walk through the regulatory landscape: 30 state AI disclosure laws with no two alike, two already enjoined on First Amendment grounds, and a political industry being singled out while commercial AI content floods every other channel. The AAPC's emerging framework argues for meaningful labels like "dramatization" or "reenactment" instead of blanket "AI-generated" warnings, and for a sharper test of what is actually deceptive: are you fabricating evidence that doesn't exist?

The conversation closes with a look ahead at agentic systems, generative engine optimization, and a world in which AI persuades AI – right up to the moment campaigns go back to knocking on doors.