The Impact
A weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies
www.MFStrategies.com
First and foremost, thank you for an amazing first four months of The Impact!
When we launched in August, all we knew was that there was a gap. Political and policy professionals lacked information tailored to their work on Artificial Intelligence and some of the most consequential developments of our lives. From that simple idea, The Impact has grown in ways we could not have predicted.
Our weekly readership now reaches the thousands and represents an incredibly diverse group. We have true believers and hardened skeptics alike. That range matters to us and our goal is to speak to all of you. Whether we like it or not, the money and power behind AI today will shape far more than how we draft emails.
Many of you have shared that this newsletter brings a fresh perspective to your inbox each week and helps make the fast-moving worlds of AI and politics feel more manageable. We take that responsibility seriously, and because of that, The Impact remains intentionally experimental. We listen, we adjust, and we keep working to deliver information that is useful, thoughtful, and grounded in the realities you face.
Reading back through the news of 2025 is a reminder of just how quickly things can change, and how much more change is ahead. Whatever this year brought, 2026 will bring even more. Through late nights, long days, and constant noise, we hope to continue being a trusted resource that helps you stay ahead of the technology shaping our era.
Thank you for taking this journey with us, and happy new year.

Marty Santalucia
Chief Executive Officer
MFStrategies
The Impact Podcast: Hosts Addie and Hal break down this week's news in 10 minutes
The AI Campaign Playbook: Our roadmap for how to implement AI safely and effectively in your organization
Vendor Scorecards: Coming soon
In 2025, U.S. AI policy snapped from “cautious guardrails” to “full throttle,” with Washington using AI money, rules, and infrastructure to reward speed and punish restraint. The Trump AI Action Plan and new OMB memos cast AI as a growth engine and culture‑war weapon: fast‑tracking data centers and federal adoption, favoring open‑source and “Buy American,” and threatening to starve states that insist on tougher protections or “woke” standards. Meanwhile, the public moved the other way, more anxious than optimistic, demanding testing, audits, and deepfake bans, and bipartisan in wanting someone to be in charge but not trusting either Big Tech or government alone. Taken together, 2025 revealed a quiet struggle over whose fears count more: voters worried about jobs, fraud, and fairness, or political leaders betting that visible AI expansion, especially in health care, benefits, and security, will pay off before the harms do. For political and policy professionals, 2026 will test whether leaders can turn that gap into leverage: helping candidates pick a side on deepfakes, state authority, and public‑sector AI tools, and answering the real question hanging over next year’s fights: when AI hits home harder for voters, will they decide they were served or sold out?
|
AI / Political News of the Year
Wiley Rein LLP
Takeaway: The White House released an AI Action Plan and three executive orders to drive innovation, data center buildout, and exports—and to block “woke AI” in federal use. The plan updates NIST’s AI Risk Management Framework, creates a DHS-led AI Information Sharing and Analysis Center, speeds federal permitting for data centers, and may limit funds to states with “burdensome” AI rules. It also revokes Biden’s January 2025 AI infrastructure order and sets up a program to export full‑stack U.S. AI tech.
Why it matters: The plan shifts federal AI policy from “guardrails first” to “growth first,” giving agencies cover to roll back Biden-era investigations and state-level rules as “burdensome.” That opens space for rapid AI rollout—especially in federal services and defense—while picking a partisan fight over “woke AI” that will shape what tools governments and vendors can use.
Wiley Rein LLP
Takeaway: OMB issued two AI memos (M-25-21 and M-25-22) that replace Biden-era guidance and speed AI use across federal agencies. They set rules for “high-impact” AI, require each agency to name a Chief AI Officer, and push “buy American,” with new acquisition terms starting this fall. Expect tighter limits on vendors using agency data to train models without explicit consent.
Why it matters: These memos clear the runway for rapid AI expansion across federal agencies while loosening the Biden-era focus on civil rights language. That shift could normalize “high‑impact” AI in benefits, enforcement, and infrastructure decisions before guardrails are stress‑tested—setting up future fights over bias, transparency, and who controls government data.
AJMC
Takeaway: Amid the shutdown, Paul Ryan urged Congress to tighten Medicaid rules, shift Medicare to private‑plan choice with capped subsidies, and fund state programs that cover the sickest patients. He called AI a cost fix, warned against strict rules, and said vague laws let presidents set health policy by executive order.
Why it matters: Ryan is packaging familiar GOP goals—shrinking Medicaid, raising retirement ages, shifting costs to “choice” and private plans—as technocratic fixes and AI optimism. For campaigns, this frames future fights: who pays for aging and innovation, and whether AI justifies cutting public coverage under the banner of “sustainability.”
Tech Policy Press
Takeaway: GSA moved to make Elon Musk’s Grok AI available across federal agencies even though it has produced racist, antisemitic, and false content. Public Citizen and other groups asked OMB to stop the rollout, arguing it violates Executive Order 14319, which requires federal AI to be accurate and ideologically neutral. A White House science adviser also said Grok’s behavior breaks the administration’s own standard.
Why it matters: It signals that “AI safety” rules can be bent when they benefit powerful vendors and aligned ideologies—normalizing partisan, error-prone tools inside core government workflows. If Grok stands, it sets a precedent: future models won’t be judged by neutrality or truth, but by whose interests they serve.
GovTech
Takeaway: A University of Maryland survey finds broad bipartisan support for government rules on AI. Majorities in both parties back testing AI before use, auditing systems already in use, banning deepfake political ads, and pursuing a treaty to ban autonomous weapons. The Trump AI Action Plan could still curb stricter state rules by tying federal funds to compliance, setting up a clash.
Why it matters: Public backing for AI rules undercuts GOP-style “hands off” arguments and gives cover to states and Hill offices to push harder—especially on deepfakes and audits. The fight now is whether Washington preempts stricter state laws and who gets blamed when AI harms show up in hiring, health care, or elections.
Brookings
Takeaway: A review of 218 AI opinion studies finds people in the U.S. and U.K. are more concerned than optimistic about AI. Voters fear economy‑wide job loss more than personal job loss and want rules, but they don’t trust tech firms or government to do it alone. The authors launch AI SHARE to track attitudes and urge standardized, long‑term polling to guide policy.
Why it matters: This database shows AI is already a “live” political issue: voters are more worried than excited, fear job losses for society more than for themselves, and want strong rules but don’t trust either tech or government to write them alone. For campaigns, that’s a signal—and an opening—before narratives harden.
Consumer Finance Monitor
Takeaway: The Trump Administration released America’s AI Action Plan, prioritizing deregulation and tying AI money to states with fewer rules. The plan pushes open-source tools, speeds permits for data centers and chip plants, and tightens export rules while funding AI job training. Expect winners in low-regulation states and new risks in compliance and supply chains, especially around deepfakes and chips.
Why it matters: This plan weaponizes federal AI money to punish blue states and normalize deregulation, shifting power from safety-focused lawmakers to industry-aligned agencies. By preferring open-source and faster buildouts, it trades oversight for speed—raising the odds that harmful AI, synthetic media, and chip shortages hit voters before Congress can react.
Worth thinking about
“The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.”
— America’s AI Action Plan (July 2025)