The Impact
A weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies | www.MFStrategies.com
 
Weekly Announcements

Is your campaign doing everything it can to dominate 2025 and win big in 2026?

MFStrategies is a cutting-edge, full-service political strategy firm. We combine the reach and tools of a national consultancy with the hands-on agility of a local team. We’ve helped clients smash fundraising records, secure historic wins, and scale influence for over a decade.

You’re running for office to make a positive change in your community. Thank you!

Let’s spend 30 minutes talking about your vision, your obstacles, and how to turn your big plans into big wins.

 
Resources and Tools
The Impact Podcast
Hosts Addie and Hal break down this week's news in 10 minutes.
The AI Campaign Playbook
Our roadmap for how to implement AI safely and effectively in your organization.
Vendor Scorecards
Coming soon
More Tools
Coming soon
 
The Toplines
 
AI rules are moving from talk to teeth. California passed a safety law for the biggest models, and OpenAI says it will follow it. Congress may add $1M-per-day fines and DOE tests before release. A watchdog sued four agencies to expose any AI used to push Trump's rule rollbacks. The White House also touts $100M for AI cancer work while seeking big HHS cuts, so expect winners and losers.
 
AI / Political News of the Week
 
Yahoo News
Takeaway
California enacted a new AI safety law aimed at the largest “frontier” models. Developers must run risk tests, maintain a shutdown plan, log and report serious incidents, and face penalties if they fail. Smaller and open-source models are largely carved out, and enforcement rests with the state Attorney General.

Why it matters
California just set the toughest state rules, which vendors may follow nationwide to avoid a patchwork. Campaign tech and ad platforms could tighten deepfake and risk controls to stay compliant. Expect industry pushback and a federal preemption play from Republicans, plus lawsuits testing how the state defines “catastrophic risk.”
Read the full story
 
Biometric Update
Takeaway
A bipartisan bill would require federal testing of the most powerful AI before release. Models trained with more than 10^26 operations must be submitted to the Department of Energy for testing, along with their code, data, and weights; deploying a noncompliant model, even an open-source one, is banned and carries fines of at least $1M per day. DOE would run red-team and third-party tests and draft a permanent oversight plan.

Why it matters
Shifts AI policy from voluntary pledges to mandatory checks with real penalties. It puts DOE, not an independent agency, in charge, inviting civil-liberties pushback, and it aligns with a Republican push to preempt stronger state rules, a win Big Tech has sought for years. A simple compute trigger is easy to measure but blunt, so where the line lands will become a lobbying fight.
Read the full story
 
KCRA
Takeaway
OpenAI policy lead Chan Park said California’s new AI safety law lays out a clear way for both state and federal watchdogs to oversee big AI systems. He signaled OpenAI will work under the rules and sees them as a model for broader oversight. The law places new duties on large AI developers to assess risks and document safety steps, with enforcement by the state.

Why it matters
California rules often set the floor nationwide once big vendors comply. Campaigns and nonprofits will feel the effects through product changes, more logs and disclosures, and possible price bumps as vendors pass on compliance costs. Advocates may welcome guardrails, while industry will push to water them down or seek federal preemption; Republicans in Congress have already floated preemption to blunt state-level rules in the name of “innovation.”
Read the full story
 
USA TODAY
Takeaway
President Trump signed an executive order to double the Childhood Cancer Data Initiative to $100 million a year. The order tells the Make America Healthy Again Commission to work with the White House science office on AI for pediatric cancer data. Officials say AI will speed trials, sharpen diagnoses, and guide treatments.

Why it matters
Federal money means more AI projects in health care, new contracts, and new data rules. The same White House is also pushing a 26% cut to HHS next year, including NIH and CDC, which could blunt the impact and shift priorities. Children’s health data raises privacy and consent risks, so watch who wins the data work and how results are tested and shared.
Read the full story
 
Nextgov/FCW
Takeaway
Democracy Forward sued OPM, GSA, HUD, and OMB after the agencies did not answer FOIA requests about their use of AI to carry out Trump policies. The group seeks records on AI tools used to analyze public comments on OPM’s civil service rule and to implement new directives tied to the Department of Government Efficiency. The agencies have 30 days to respond in court.

Why it matters
This public-records fight tests how open the government is about AI in rulemaking. Using opaque AI to triage comments or shape policy can tilt outcomes and speed deregulation, which benefits well-connected industries and the White House. A court order could force clearer guardrails or reveal gaps in OMB guidance; the agencies may argue the searches take time or that no such tools were used.
Read the full story
 
 
Worth thinking about
“Pre-deployment testing is important, but it is not enough.” — Daniel Ho, Stanford HAI
 

Spend more time winning

Struggling to engage voters or maximize your campaign's impact? With over a decade of experience in Democratic fundraising and strategy, MFStrategies has accelerated countless campaigns, raised tens of millions of dollars, and smashed records.

If you’re not winning today, your opponent is. Let’s work together to craft a strategy that drives real results. Schedule your free 30-minute strategy session today and join the ranks of successful campaigns we've supported.
