The Impact
A weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies
www.MFStrategies.com
The hardest part of running for office is not fundraising or door knocking.
No one wakes up one morning and casually decides to run for office. It is a decision that takes weeks, months, or sometimes years.
That is because the hardest part of running is that first commitment to your community: deciding to step forward and say, “I’m willing to lead.” Saying things could be better, and that you are willing to help find the way forward, requires conviction and courage.
And you do not need to do it alone.
The team at MFStrategies has spent decades helping candidates and causes turn “what if” into “what is,” working with everyday people who chose to start building a better world. Our partnerships have broken records, made history, and navigated uncertainty when the path was anything but clear.
2026 is almost here. If you are considering a run, or if you have declared and are building your team, it is time to talk about your vision with the professionals who can help you tell your story.
The Impact Podcast: Hosts Addie and Hal break down this week's news in 10 minutes.
The AI Campaign Playbook: Our roadmap for how to implement AI safely and effectively in your organization.
Vendor Scorecards: Coming soon.
Federal power over AI jolted forward: the White House moved to preempt blue‑state laws, defined “unbiased” AI on partisan terms, and embedded automation deeper across agencies. The fight is over who gets to hard‑code their politics into systems that will quietly shape benefits, speech, and which campaigns get seen. If these moves stick, Trump’s team and Big Tech hold the leverage while states and advocates play defense.
AI / Political News of the Week
Washington Examiner
Takeaway: OMB will now require AI vendors to measure and disclose political bias to win federal contracts. Agencies must collect documentation on training, evaluations, and how models are used in software; national security systems are exempt. The policy carries Trump’s “Unbiased AI Principles” of “truth‑seeking” and “ideological neutrality.”
Why it matters: This turns “AI safety” into a political litmus test. Vendors now have to prove ideological “neutrality” to win federal business, nudging models toward Republican-friendly baselines and chilling pro‑equity content. Expect pressure campaigns, uneven enforcement across agencies, and new leverage points for both litigation and future rollbacks.
Inside Global Tech
Takeaway: President Trump signed an order to override state AI laws and push one national policy. It creates a DOJ task force to sue states, tells the FCC and FTC to write rules that override state measures, and threatens broadband and other grants for states with “onerous” AI laws. California and Colorado laws are likely targets, and states plan to fight it in court.
Why it matters: This EO is an opening shot in a states-vs.-Trump fight over who gets to set the rules for AI. By trying to knock out “onerous” blue-state laws and tying federal dollars to compliance, it could freeze or roll back the most ambitious consumer and civil-rights protections long before Congress agrees on any replacement.
Holland & Knight
Takeaway: HHS released a 21-page AI strategy to make AI central to health programs and operations. It sets five pillars, launches a “OneHHS” approach with shared tools and code, and creates an AI Governance Board with annual public reporting. Divisions must meet new risk controls for high‑impact AI by April 3, 2026, as FDA rolls out an agency‑wide “agentic AI” platform.
Why it matters: HHS is moving fast from AI pilots to AI-by-default across the nation’s biggest health agencies. That could hard‑wire opaque tools into everything from Medicaid decisions to drug approvals before safeguards are fully tested, forcing campaigns and advocates to fight over bias, access, and accountability after systems are already entrenched.
CNN Business
Takeaway: The US government launched “US Tech Force” to hire 1,000 early‑career AI and tech workers for two‑year roles across agencies. They will work on defense AI, the IRS “Trump Accounts,” and State Department intelligence, with pay between $130,000 and $195,000. Big Tech will mentor and send candidates, with most placements expected by early 2026.
Why it matters: This turns federal service into a two‑year AI boot camp feeding talent back to Big Tech, not a long-term public capacity build. For campaigns, it signals more AI embedded in defense, tax, and intel systems, and that tech firms, not civil servants or voters, will keep holding the real leverage.
Worth thinking about
“Produce reliable outputs free from harmful ideological biases or social agendas.” — Trump executive order, July 2025