The Impact
A weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies | www.MFStrategies.com

We want to hear from you! The Impact is read by thousands of political and policy professionals each week. To make sure we're continuing to provide the type of news and analysis that keeps you ahead of the curve, we'd love it if you'd take our quick one-minute reader survey so we can keep delivering for you!

Vendor Scorecards: Coming soon

GAO says the feds now face nearly 100 AI rules, which means pilots are turning into real programs with audits and deadlines. Seattle rolled out clear guardrails and metrics, and states are asking for a simple playbook to copy wins in permits, safety, and service access. In Congress, a new sandbox idea could let agencies test tools faster but with oversight. Meanwhile, data centers tied to AI are driving big power and water needs in places like Pennsylvania. And in Canada, adoption is far enough along to shift jobs, so training and human‑in‑the‑loop checks matter more than ever.

AI / Political News of the Week

Governing
Takeaway: State and local AI use is growing but still scattered. The piece shows real wins: half of states run chatbots, Austin speeds permits, California uses AI for traffic safety, and models help find lead pipes and flag flood or eviction risk. To scale, leaders need an adaptive policy plan, funding, faster and ethical procurement, and training for workers.
Why it matters: Public leaders need a playbook, not pilots. Clear definitions, inventories, and contracts with explicit rules can turn experiments into policy. Funding deadlines and long buying cycles make speed urgent, while new training programs can upskill teams now.

FedScoop
Takeaway: GAO counted nearly 100 government-wide AI requirements coming from laws, executive orders, and federal guidance. The report says 10 federal bodies share roles in oversight, and agencies must inventory AI use, update policies, and manage risk. OMB, OSTP, Commerce, GSA, and NSF were looped in on the draft; OMB did not comment.
Why it matters: Rising mandates turn AI from experiments into required, trackable programs. Agency leaders will need budget and tools for AI inventories, risk reviews, and clear usage policies to pass audits. Vendors that automate reporting and align to federal standards will have a strong advantage in upcoming buys.

Inside Global Tech
Takeaway: Senator Ted Cruz introduced a bill that sets a national AI policy framework and creates a federal regulatory sandbox. The proposal would let agencies run time‑bound pilots with guardrails so companies can test new AI tools under supervision and share results. The package leans toward a pro‑innovation approach while calling for clearer guidance and accountability from government.
Why it matters: A structured sandbox could speed how agencies try and buy AI, giving vendors a safer, faster path from demo to pilot to production. Federal direction here can shape agency playbooks, influence states, and set expectations for risk controls in contracts. Leaders should watch who qualifies for the sandbox, data protection rules, reporting duties, and whether the bill preempts conflicting state policies.

Kleinman Center for Energy Policy (University of Pennsylvania)
Takeaway: AI and data centers are set to grow fast in Pennsylvania, bringing big power and water needs along with jobs and tax revenue. The piece urges state leaders to get ahead of this surge with clear rules for siting, grid upgrades, clean power procurement, and community protections. It sketches practical tools lawmakers can use: faster permitting tied to standards, transparent reporting on energy and water use, and incentives with guardrails.
Why it matters: Pennsylvania sits in the PJM grid, where demand from data centers and AI is rising sharply; choices made in the next year will shape reliability, rates, and emissions for a decade. A proactive framework can attract investment while requiring 24/7 clean power, demand response, and local benefits so communities aren’t left with higher bills or pollution. State energy, utility, and environmental agencies will need a single playbook, backed by legislation, to move siting and transmission at the speed of load growth.

GovTech
Takeaway: Seattle released a 2025–2026 AI plan that updates rules and builds on almost 40 pilots. The policy allows AI with oversight, bans uses like emotion reading and social scoring, and sets metrics for accuracy, ROI, and user satisfaction. The city will hire an AI lead, train staff, and run community hackathons to speed work on permits, safety, and access to services.
Why it matters: Clear guardrails plus metrics move AI from pilots to real programs. Vendors and agencies get a shared checklist for buying, performance, and risk. Other cities can copy this playbook to scale AI while avoiding bias and waste.

CityNews Toronto
Takeaway: Ottawa’s chief data officer says adopting AI across federal operations will mean some public service job cuts, with retraining and role changes promised. The government signed a deal with Canadian AI company Cohere to find where AI can help and plans to launch a public registry of AI projects, adding to current uses like satellite analysis and visa sorting. Unions are calling for real consultation and limits on AI in hiring and other high‑risk decisions.
Why it matters: This signals real government adoption under tight budgets, moving AI from pilots to daily work. A public registry and impact checks for sensitive uses set a path for transparency and guardrails other agencies may copy. Leaders and vendors should plan for training, change management, and clear rules on when humans must stay in the loop to avoid service risks and backlash.

Worth thinking about: “Governments that embrace this transition will be best positioned for future challenges.”