The Impact

Your weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies | www.MFStrategies.com

This Week’s Toplines

The federal government is making clear it wants to dictate who sets AI
policy and how.

A judge froze Colorado's AI anti-discrimination law just days before legislators were set to rewrite it. The Pentagon cut ties with Anthropic after the company wouldn't drop its safety restrictions. Google quietly gave up its veto over military use of its models. And a $125 million Silicon Valley super PAC is spending to replace state consumer protections with weaker federal standards, targeting lawmakers who vote for oversight.

None of this is happening through legislation. It's happening through contract terms, supply-chain designations, and selective enforcement. Agencies and investors are using access and funding to redraw the lines of accountability before democratic institutions can catch up.

Does any guardrail actually hold once the real pressure hits?

News of the Week

Colorado Politics
Takeaway
Colorado's 2024 AI discrimination law hit a wall last week when a federal judge blocked its enforcement. The lawsuit came from Elon Musk's xAI, and the Trump Justice Department piled on in support. With the law now on hold, state legislators have until May 13 to rework it before its scheduled June 30, 2026, effective date. As originally written, the law would have barred algorithmic discrimination in hiring, housing, and healthcare decisions. But the DOJ argued it's unconstitutional, pointing specifically to a carve-out that exempts algorithms built to promote diversity.
Why it matters
The injunction effectively yanked control away from state regulators and handed it to federal judges and the Trump DOJ. For campaigns and policy shops that rely on AI for voter targeting or hiring, the situation is messy: the rules around discrimination claims are now murky, enforcement is on ice, and nobody has written replacement language yet. At the core is a tug-of-war between state consumer protection authority and federal civil rights arguments, one that creates real compliance headaches for any organization using algorithmic tools.
 
Let's Data Science
Takeaway
A new super PAC called Leading the Future pulled in $125 million in 2025 from Silicon Valley investors, with backers that include Andreessen Horowitz and OpenAI President Greg Brockman. The group's goal: supporting candidates who favor industry-friendly federal AI rules over a patchwork of state-by-state regulation. The PAC entered 2026 sitting on $70 million in cash and has already spent against New York lawmakers, including Assembly member Alex Bores, who backed state AI safety bills. Its leaders say they plan to spend heavily to block state-level regulation and push for uniform federal policy instead.
Why it matters
Silicon Valley has been working to consolidate political power at the federal level, and the goal is transparent: replace tougher state AI safety laws with softer federal ones. Going forward, the battles over compliance won't stay in statehouses; they're moving into campaign finance. That leaves lawmakers with a real problem: tech money can be weaponized against anyone who votes for disclosure mandates, testing requirements, or incident reporting rules. It's a new dynamic, and it changes the calculus for every politician weighing in on AI regulation.
 

The call sheets don’t fix themselves.
But fixing them shouldn’t take you five hours, either.

FEC exports with columns nobody asked for. ActBlue files that don't match NGP. An event spreadsheet from someone's personal Gmail. Someone always ends up cleaning all of it before call time. Usually you.

Kit dedupes and cleans all of it in 90 seconds. Save the workflow once, run it for every file that lands in your inbox.

No formulas. No scripts. No starting from scratch.

The Verge
Takeaway
Google quietly struck a classified agreement with the Pentagon that gives the Defense Department access to its AI models for any lawful government purpose. Under the terms, Google has no veto over how the military puts the technology to use. The contract includes language prohibiting domestic mass surveillance and banning autonomous weapons that lack human oversight, but those guardrails aren't legally enforceable: once the tools are in the government's hands, Google can't control or block operational decisions. On top of that, the agreement requires Google to modify its AI safety filters whenever the government asks.
Why it matters
Google gave up its veto over how the military uses its AI, putting the company in line with competitors who were already selling to the government without restrictions. In practice, that means Pentagon operators now have more say than internal ethics teams over how these tools get deployed. For Google employees who pushed back against weapons-related projects, the company's red lines just moved out from under them. And for campaigns using Google's cloud products, there's a new wrinkle: those same tools may have military applications, which opens the door to uncomfortable questions from vendors and public backlash over defense connections.
 
Built In
Takeaway
The Pentagon cut ties with Anthropic and labeled it a "supply-chain risk" after the company wouldn't remove safeguards that keep Claude from being used for autonomous weapons and mass surveillance. One federal judge struck down the supply-chain label as a free speech violation; a separate appeals court upheld it. For now, Anthropic is locked out of new Pentagon contracts while both cases work their way through the courts, and defense contractors doing business with the Pentagon must certify that they don't use Claude.
Why it matters
The Pentagon is now using supply-chain designations as leverage to override private AI safety rules. Vendors are stuck with a lousy choice: go along with whatever the federal government wants (and those demands keep changing) or get locked out of defense contracts. In practice, this hands agencies the power to dictate what counts as acceptable AI use, taking that authority away from the companies that built the tools. And for campaigns and offices that depend on those same platforms, it creates a mess of conflicting compliance requirements and the real possibility of losing access altogether.
 
ABC News
Takeaway
On April 30, 2026, Defense Secretary Pete Hegseth appeared before the Senate Armed Services Committee and laid out the Pentagon's plans to expand AI use in targeting, surveillance, and autonomous weapons. Several Democratic lawmakers pushed back, calling for new guardrails around mass surveillance and lethal decision-making. Last Friday, the Pentagon finalized contracts with seven large tech firms, among them OpenAI, Google, and SpaceX, to integrate AI into classified systems. The Army, meanwhile, has sent close to 10,000 AI-powered drones to the Middle East since the start of the war with Iran. In March, Michigan Sen. Elissa Slotkin put forward legislation that would mandate human sign-off before any autonomous weapon launch and outlaw AI-driven mass surveillance. Her bill came after the Pentagon severed its relationship with AI company Anthropic, which had refused to drop those very restrictions from its own policies.
Why it matters
The Pentagon is rolling out AI across its operations without any enforceable rules governing how it's used in targeting decisions or surveillance programs. That gap between moving fast and maintaining oversight is generating real tension. On the Hill, Democrats are pushing for requirements that keep a human in the loop on lethal decisions, but defense officials don't want their hands tied. For campaigns, the political exposure here is straightforward: if AI use gets linked to civilian casualties abroad, or if major tech contractors start walking away from military deals over ethical terms, it becomes a messaging problem nobody wants to own.
 

Worth Thinking About This Week

"Right now the United States has an edge in military AI performance, but Russia has demonstrated a willingness to introduce AI faster because it doesn't care nearly as much about the risks of civilian casualties and friendly fire." - Gregory Allen, former director of strategy and policy for the Department of Defense's Joint Artificial Intelligence Center

Keep Reading