Your weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies | www.MFStrategies.com
This week, AI governance shifted from “talking about principles” to hard-coding AI into public systems—through school rules, federal rulemaking, and customer-service-style government assistants. Ohio’s model K–12 policy turns AI into a classroom discipline and permission problem, while the Trump DOT’s plan to use Google Gemini to draft transportation regulations turns safety policy into a speed contest where “good enough” becomes the standard. Meanwhile, the UK’s Claude pilot and California’s Google partnership show the next phase: AI that keeps context, guides decisions, and gets embedded via pilots and change-management playbooks. Taken together, these moves frame a struggle between control and convenience: leaders want faster service and faster rules, but the public inherits the errors, bias, and accountability gaps. The question heading into next week is who will set the audit trail—teachers, agencies, vendors, or courts—when AI becomes the invisible author of government decisions?
cleveland.com Takeaway Ohio released a model AI policy for K–12. It bans AI-enabled bullying, requires teacher permission before students use AI in class, and says AI should support, not replace, teaching. Districts must adopt their own AI policy by July 1, using the model as a guide.
Why it matters This move starts locking in norms on how kids learn with AI: who gets access, who gets punished, and who decides what “ethical” use means. Local fights over implementation will shape digital literacy, surveillance, and discipline, and will likely surface partisan battles over content, bias, and control in the classroom.
ProPublica Takeaway DOT will use Google Gemini to draft transportation rules. Leaders want “good enough” rules fast, with AI writing drafts in minutes and full rules in 30 days; it already helped write an FAA rule. Staff and experts warn AI makes errors, risking weaker protections and more lawsuits.
Why it matters This turns transportation rules into a speed game, shifting power from career experts to a Google model tuned for “good enough” outputs and a White House chasing deregulation headlines. Expect more, sloppier rules that are easier for industry to game and harder to defend in court, raising real safety risks. For campaigns and advocates, this is both a line of attack (outsourcing public safety to Big Tech) and a warning sign: you may need to litigate and message against AI-written rules that quietly weaken protections.
Consulting.us Takeaway Clutch, a Sacramento public sector consulting firm, formed a partnership with Google Public Sector to help California state and local agencies adopt Google Cloud AI tools like Gemini for Government. The firm will run AI readiness work and change management, and start with early‑adopter departments to test high‑value use cases. The move aims to turn pilots into real service delivery across programs.
Why it matters This pushes Google deeper into the machinery of California government, turning “exploring AI” into “buying AI services” at scale. For campaigns, that means new gatekeepers for data and workflows, more opaque systems shaping public services, and a growing gap between well‑tooled agencies and everyone else.
FedScoop Takeaway OMB will soon release the 2025 federal AI use case inventory on GitHub, the first under the Trump administration. The 2024 list had 2,133 use cases across 41 agencies; this year brings a new “high‑impact” label, slipped public deadlines during the shutdown, and still excludes Defense, intel, and non‑public uses. Expect more reported AI, but not a full picture—examples don’t call out election integrity or deepfake voice/likeness issues.
Why it matters The new inventory will quietly reset the “rules of the road” for where AI is used in federal power centers—and what’s kept off the books. Gaps around elections, deepfakes, and defense mean key political uses may grow in the shadows, making outside watchdogging and FOIA work more important than the headline numbers.
AI News Takeaway The UK selected Anthropic to pilot a government AI assistant, starting with employment services. The Claude-powered agent will guide users through tasks, keep context across sessions, and roll out under a “scan, pilot, scale” plan with testing by the AI Safety Institute. Anthropic will co-build with the Government Digital Service to transfer skills and limit vendor lock‑in, with users controlling what data the system remembers.
Why it matters This moves AI from “FAQ bot” to decision-shaping guide inside core welfare systems—shifting power over who gets help and how fast. If it works, expect pressure on other governments and vendors to copy the agentic, stateful model; if it fails, it will harden skepticism about AI in public services.
Worth Thinking About This Week
“Going fast and breaking things means people are going to get hurt.”
— Mike Horton, former acting chief AI officer at DOT
Spend more time winning with MFStrategies
Focus on what matters most in your campaign. MFStrategies helps Democratic candidates and causes turn strategy into success with expert fundraising, voter engagement, and campaign consulting rooted in decades of experience.
Whether you’re launching a first run or scaling up your current effort, start with a free 30-minute strategy session and build momentum with a team that knows how to deliver results.