The Impact

Your weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies | www.MFStrategies.com

This Week’s Toplines

American AI politics is entering a new phase as the companies building the models race to write the guardrails, while states and campaigns behave as if the guardrails don’t exist.

The tension is speed versus legitimacy: labs push Washington for rapid buildout and a lighter regulatory touch, even as voters—yes, in places like Utah—want liability and child-safety backstops that feel closer to product regulation than speech policing.

On the trail, Texas-style synthetic attack ads are turning “proof” into a style choice, raising the premium on trusted messengers and the cost of a single viral fake.

Underneath it all is an economic argument breaking open: who owns the gains from AI, and how do you promise stability in a world that won’t sit still? The next fight won’t just be about safety—it’ll be about who gets to define reality first.

News of the Week

Houston Public Media
Takeaway
Texas statewide and legislative candidates from both parties are now using AI-generated images and videos in 2026 primary ads to attack and mock opponents, with no Texas law requiring labels or limits after a disclosure bill died in the Senate last year. High-profile figures including Ken Paxton, John Cornyn, and Jasmine Crockett have posted AI content ranging from obvious cartoons to more polished deepfakes, sometimes without any disclosure, making it harder for voters and platforms to tell what’s real in campaign messaging.
Why it matters
AI attack ads are evolving faster than Texas rules, normalizing blurred, synthetic “evidence” in front-line races. That deepens mistrust and sets up fights between free-speech defenders, regulators, and candidates harmed by fakes. Campaigns now operate in an environment where every image or clip can be doubted—or maliciously forged.
 
Institute for Family Studies
Takeaway
A new Institute for Family Studies poll of 6,200 voters, including 500+ in Utah, finds strong bipartisan support in Utah for state power to regulate AI and hold AI companies financially liable for harms, especially to children. This comes as the Trump White House, via a December 2025 executive order and a one-sentence memo calling the bill “unfixable,” tries to block HB 286, a Utah AI safety bill that would force frontier AI firms to publish plans to reduce risks to kids and the public.
Why it matters
Broad, bipartisan support in a deep-red state weakens the White House’s case for blocking state AI rules and reframes deregulation as politically risky. It pits federal tech allies against state-level child-safety coalitions. Expect more red states to test Washington’s limits, forcing campaigns to pick a side.
 

You didn’t get hired to spend 5 hours merging donor lists.

But when clients send ActBlue exports, random files from 2(?) years ago, and personal contact lists the day before call time, that’s exactly what happens.

Kit cleans it in minutes. Merge multiple sources, dedupe by name, segment by donation value, export for CRM. Candidates and staff are saving hours already.

Save it once, reuse it for every client.

Try it free for 14 days.
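For the curious, the merge → dedupe → segment → export workflow described above can be sketched in a few lines of pandas. This is an illustration only, not Kit’s actual implementation; the column names, normalization rule, and dollar cutoffs are all assumptions.

```python
import pandas as pd

# Hypothetical inputs: an ActBlue export and a personal contact list.
# Column names are illustrative, not any real export schema.
actblue = pd.DataFrame({
    "donor": ["Ada Lovelace", "Grace Hopper", "ada lovelace"],
    "amount": [250.0, 50.0, 100.0],
})
personal = pd.DataFrame({
    "donor": ["Grace Hopper", "Alan Turing"],
    "amount": [25.0, 500.0],
})

# 1. Merge multiple sources into one frame.
combined = pd.concat([actblue, personal], ignore_index=True)

# 2. Dedupe by name: normalize casing/whitespace, then sum each
#    donor's giving into a single row.
combined["key"] = combined["donor"].str.strip().str.lower()
totals = combined.groupby("key", as_index=False).agg(
    donor=("donor", "first"), total=("amount", "sum")
)

# 3. Segment by donation value (cutoffs are made up for the example).
totals["segment"] = pd.cut(
    totals["total"],
    bins=[0, 100, 500, float("inf")],
    labels=["grassroots", "mid", "major"],
)

# 4. Export a clean file for the CRM.
totals[["donor", "total", "segment"]].to_csv("donors_clean.csv", index=False)
```

The real work in practice is fuzzier name matching and messier source files, which is exactly why doing this by hand eats hours.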

Forbes
Takeaway
OpenAI and Anthropic have ramped up federal and state lobbying, spending over $6 million in 2025 and backing specific bills and executive orders on AI rules, data centers, and export limits. They’re pushing Washington for lighter-touch regulation and faster buildout—tax credits, easier permits, and national-security framing—while also funding a $20 million pro-regulation political group, making them central players in how U.S. AI policy and enforcement will be written.
Why it matters
AI giants are turning lobbying dollars into control over rules, contracts, and infrastructure. That will tilt federal policy toward scale and speed, not limits. Scrutiny will rise between labs, regulators, and states. Smaller campaigns and orgs may soon navigate AI policy written largely by its vendors.
 
Noahpinion | Noah Smith
Takeaway
Noah Smith argues that Democrats need a new economic playbook for the AI era because the 2010s-style agenda of big social spending, subsidies, and promised billionaire taxes no longer fits today’s higher-inflation, fast-changing economy. He says future Democratic policy should be “robust” to AI uncertainty—focusing on abundance, some government ownership in the corporate system, and policies that support human work, rather than relying on deficit-funded social programs that proved politically and fiscally fragile.
Why it matters
This reframes AI as a long-run economic shock that makes old “tax and spend” playbooks risky. Expect fights inside the left between redistribution, public ownership, and pro-growth AI agendas. Campaigns will need clearer stories on jobs, inflation, and who owns AI profits, not just safety rules.
 

Worth Thinking About This Week

"So what do you do when you can’t predict the future? You come up with ideas that will be likely to work no matter what the future ends up looking like." -Noah Smith, "Democratic economic policy in the age of AI" (Noahpinion)

Keep Reading