The Impact

Your weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies | www.MFStrategies.com

This Week’s Toplines

Washington is trying to figure out AI policy while the technology is already loose in the building.

Federal agencies have more than doubled their AI deployments in the past year, and that has flipped the internal politics: the career risk used to be adopting something untested; now it's falling behind because you didn't.

Congress, naturally, is several steps behind the agencies it's supposed to oversee. Republicans want a single national standard that would wipe out state-level AI laws, framing the whole thing as a competitiveness play against China. Democrats and state regulators aren't buying it. They've actually been passing laws on bias, privacy, and algorithmic harm, and they're not eager to hand that authority to a Congress that hasn't managed to pass a significant piece of tech legislation in years.

That tension, between federal preemption and state enforcement, is where the real argument lives. It's less a debate about artificial intelligence than a turf war over who writes the rules, and whether the people writing them have any track record of doing so.

November will go a long way toward settling it, or at least deciding which side gets to stall the other.

News of the Week

Biometric Update
Takeaway
Republicans want a single federal AI rulebook, backed by the White House, that would wipe out the growing web of state-level AI laws. Their pitch: letting fifty different regulatory regimes coexist puts the U.S. at a disadvantage against China. Democrats and state regulators aren't buying it. They say preemption is a non-starter unless it comes with real guardrails on privacy, civil rights, and biometric data. A lot hinges on November. If Republicans hold on, Congress could shut down state enforcement altogether. If Democrats pick up enough seats, they'd gain subpoena power to dig into federal AI contracts tied to immigration, policing, and identity verification.
Why it matters
There's a brewing preemption fight over state AI laws, and it could pull rule-making authority away from states like California and Illinois, places that actually got out ahead on this, and hand it to a Congress that can barely pass a budget. For vendors, compliance teams, and anyone running campaigns with biometric tools or identity platforms, that leaves a lot of open questions about what the rules will actually look like. How the midterms shake out will go a long way toward deciding whether real oversight takes shape at the federal level or whether Washington keeps sitting on its hands while deployment rolls forward unchecked.
 
Institute for Family Studies
Takeaway
Senator Marsha Blackburn introduced the TRUMP AMERICA AI Act, a sprawling piece of draft legislation that would phase out Section 230 over two years, force AI companies to disclose ideological bias, ban AI companions for minors, and establish federal product liability standards for AI systems. The bill pulls in provisions from the Kids' Online Safety Act, which cleared the Senate 91-3 in 2024, and tries to wrap protections for children, creators, conservatives, and communities into one federal framework. On the enforcement side, platforms could face liability for hosting unauthorized AI replicas of artists, and chatbots that generate sexual content for minors would carry criminal penalties.
Why it matters
There's a new AI bill gaining real traction with the public, and it's putting tech platforms in a tough spot, caught between federal regulators writing new rules and states ready to enforce their own. The big change? Companies would lose their Section 230 protections and could be held liable when their products cause harm to kids, produce biased outcomes, or push energy costs onto someone else. This is going to set off a wave of lobbying and legal battles over which level of government gets the final say. And if you're running a campaign that relies on chatbots, deepfakes, or third-party data tools, pay attention: you're now potentially on the hook for lawsuits in ways you weren't before.
 

You've got donors everywhere. Kit gets them into one clean call list in under a minute.

Gmail contacts, LinkedIn connections, your phone, that spreadsheet someone gave you from their last race: Kit cleans them all and catches the duplicates.

Kit will even help you research your prospects and find phone numbers!

No spreadsheet formulas. No experience needed. Just upload your files, clean your data, and download a ready-to-use call list.

Start calling donors, not cleaning data.

See how it works: http://www.KitWorkflows.com

The Verge
Takeaway
Most Americans say they want the government to step in and regulate AI, but it's not the kind of thing that's actually changing how people vote with the 2026 midterms coming up. Meanwhile, pushback against data centers has stalled or killed about $64 billion worth of projects so far. Big money is flowing into races on both sides; Leading the Future has pulled in $140 million, and Public First Action is sitting on $50 million. The job loss angle could get a lot louder this summer, too. Layoffs linked to AI are starting to hit beyond just tech companies, creeping into legal work and administrative positions.
Why it matters
Voters are getting angry about AI faster than either party can figure out what to do with that anger, which is unusual, and it's an opening for state-level candidates willing to pick a fight with the tech lobby. But that won't go unanswered. Industry groups are already gearing up to spend big in state races, trying to kill regulation before layoffs pile up enough for people to really start paying attention. The tricky part for candidates is timing: AI as a campaign issue is still new enough that nobody's really owned it yet, but that also means the opposition research and super PAC ads are coming. Whoever wants to run on this needs to move before someone else frames it for them.
 
Global Government Forum
Takeaway
Federal agencies are using AI far more than they were just a year ago. According to the Office of Management and Budget, the number of AI use cases across the government jumped from 1,757 in 2024 to 3,611 in 2025, with 1,818 of those already deployed or in pilot programs at 56 agencies. HHS is out front with 447 active use cases, and Veterans Affairs has racked up the most high-impact applications at 215. Under laws and executive actions dating back to 2023, every agency aside from intelligence and Pentagon national security operations is now required to report its AI use to OMB.
Why it matters
Federal agencies went from dabbling in AI to treating it as a default, and now the ones not using it are the ones who have to explain themselves. The risk has flipped: it used to land on the early adopters; now it falls on whoever's dragging their feet. Of course, that rush creates its own problems when speed runs headlong into the need for proper oversight. Meanwhile, vendors are showing up with tools they swear are ready for government use, even as regulators are tightening the screws on AI in areas like health care, criminal justice, and veterans services.
 

Worth Thinking About This Week

"The fight over AI regulation in Congress is becoming less a conventional technology policy debate than a struggle over who will control the legal architecture of a rapidly expanding surveillance and identity economy." - Biometric Update

Keep Reading