The Impact
A weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies | www.MFStrategies.com
Is your campaign doing everything it can to dominate 2025 and win big in 2026?
MFStrategies is a full-service political strategy firm. We combine the reach and tools of a national consultancy with the hands-on agility of a local team. We’ve helped clients smash fundraising records, secure historic wins, and scale influence for over a decade.
You’re running for office to make a positive change in your community. Thank you!
Let’s spend 30 minutes talking about your vision, your obstacles, and how to turn your big plans into big wins.
Vendor Scorecards: Coming soon
California just required big AI makers to show their safety homework and report serious incidents, setting a bar others may copy. Meanwhile, CMS will test AI to help approve or deny Medicare care in six states through 2031, with both parties asking what “meaningful human review” really is. Meta is gearing up to sway state laws with a new super PAC as statehouses churn out AI bills. And Washington gave the green light for agencies to use tools built on Llama, boosting open-source options. Net-net: government adoption is speeding up while the guardrails and incentives get hammered out in public.
AI / Political News of the Week
Governor of California
Takeaway: California enacted SB 53, the Transparency in Frontier Artificial Intelligence Act. The law requires large AI developers to publish safety and standards frameworks and to report critical incidents, protects whistleblowers, and authorizes the Attorney General to enforce penalties. It also creates “CalCompute” to plan a public compute framework and directs the Department of Technology to update the rules each year.
Why it matters: California often sets rules others follow. Vendors you hire may need to show safety reports and public frameworks. Key details, such as what counts as “frontier” and how CalCompute gets funded, will come in rulemaking, and industry will push back.
|
KFF Health News
Takeaway: CMS will test an AI tool to help approve or deny care in traditional Medicare, targeting services like skin and tissue substitutes, nerve stimulator implants, and knee arthroscopy. The WISeR pilot starts Jan. 1 in Arizona, Ohio, Oklahoma, New Jersey, Texas, and Washington, and runs through 2031. CMS promises human review of denials and no pay tied to denial rates, but lawmakers in both parties are pushing back, and the House has moved to block funding in FY26.
Why it matters: Federal use of AI to gate care sets a new baseline for public programs. Seniors and providers could face more delays and appeals even with guardrails, and “shared savings” still reward delivering less care. Expect fights in FY26 spending bills and pressure to define real “meaningful human review.”
|
CIO Dive
Takeaway: Meta formed a new super PAC, the American Technology Excellence Project, to sway state AI policy. The group will spend unlimited funds to back candidates aligned with its views on AI and tech. Meta says the 1,100+ state AI bills introduced in 2025 risk a patchwork that could slow innovation.
Why it matters: Statehouses are setting the next AI rules while Congress stalls, so money will follow. Expect pushes for “light-touch” laws that favor large platforms over safety, disclosure, and liability. GOP ideas like Trump’s proposed 10-year freeze on state AI laws signal who benefits when oversight is delayed.
|
Lapaas Voice
Takeaway: The federal government cleared Meta’s Llama AI for agency use. Agencies can now procure Llama-based tools through GSA channels. The move validates open-source models in government but puts a spotlight on safeguards, support, and accountability.
Why it matters: Federal approval can push agencies toward lower-cost, open models and away from locked-in vendor stacks. Open models can speed pilots, but they also widen risk if data handling, bias testing, and audit logs aren’t nailed down. Expect heavy industry lobbying and fast-tracked deployments; ask for red-team reports, privacy terms, and documented impact assessments before signing anything.
Worth thinking about
“The Transparency in Frontier Artificial Intelligence Act (TFAIA) moves us towards the transparency and ‘trust but verify’ policy principles outlined in our report.” — Mariano-Florentino (Tino) Cuéllar