The Impact
A weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies | www.MFStrategies.com
We've been publishing The Impact for a little over a month now, and the response has been incredible. If you're reading this, you're one of the thousands of political and advocacy professionals staying ahead of the curve with us!
Our #1 goal is to keep providing quality, relevant information for our readers. So this week, we want to hear from you! Please take just a minute to fill out our reader survey so we can keep you better informed about what's happening at the intersection of AI, politics, and policy. Your feedback is critical to that mission!
>> Take our 1-minute reader survey here!
Vendor Scorecards: Coming soon
Governments aren’t just talking about AI; they’re building it into daily work. The feds have set up a testbed for trying out tools. Cities are drafting with chatbots. Banks are rushing out fraud defenses as rules shift. Big money is going to chips, from a U.S. stake in Intel to South Korea’s budget push. But the AI cloud has real costs: new data centers in Mexico are draining scarce water. Even the U.S. and Chinese plans look alike (grow fast, set the rules, manage risk later) while tech donors try to shape the playbook.
AI / Political News of the Week
OPB Takeaway: In Washington state, records from Everett and Bellingham show city staff using ChatGPT for constituent emails, grant applications, policy drafting, and even image prompts, while state guidance on labeling, human review, and data security is unevenly followed. Everett plans to restrict staff to Microsoft’s government-grade Copilot and to recommend disclosure for any use beyond “language refinement,” while Bellingham is taking a more permissive approach with looser citation expectations. The gaps have already exposed privacy risks and eroded trust among some residents, prompting a state AI Task Force to draft recommendations.
Why it matters: Public agencies nationwide are confronting the same adoption-versus-guardrails tension, with transparency, records retention, and data protection obligations all at stake. Procurement choices (e.g., moving to government-grade tools), labor consultation, and staff training will determine whether AI adds capacity without sacrificing accountability, especially in high-impact uses like grant applications, where federal funders such as NIH are starting to penalize AI-written submissions. Campaigns and civic organizations should expect clearer disclosure norms and security requirements to become table stakes for government engagement.

JD Supra Takeaway: GSA launched USAi, a secure, cloud-based generative AI evaluation suite that lets federal agencies test chat, code-generation, and summarization tools in a shared environment. The platform includes dashboards and analytics to track performance and maturity, aligns with the White House’s America’s AI Action Plan, and aims to make AI adoption “faster, safer, and at no cost.” Early guidance highlights moderate FISMA-based trust standards, user choice of tools, and regulatory guardrails.
Why it matters: A centralized, standards-aligned testbed could quickly set de facto benchmarks for federal AI procurement, governance, and risk management, shaping which vendors and models gain traction. Expect ripple effects for state and local governments and for contractors, who will calibrate their offerings and compliance to the analytics, maturity measures, and guardrails USAi normalizes. Faster federal learning cycles may accelerate adoption timelines across the public sector.

Just Security Takeaway: Despite contrasting rhetoric, the new U.S. and Chinese AI action plans converge on a three-part strategy: accelerate domestic adoption, promote global diffusion (notably via open source), and manage risks without constraining development. The U.S. plan under Trump leans into export promotion and open models while shifting safety work to testing and evaluation led by NIST/CAISI. China’s plan pushes “AI Plus” real-economy deployment, open-source ecosystem building, and WAICO-led multilateral outreach, with ideological controls and gradually expanding attention to frontier risks.
Why it matters: Convergence redefines AI competition from an ideological showdown into a race for domestic productivity gains, standard-setting, and technological influence, especially in the Global South. Policy choices on open source, export packages, and selective alliances will shape global dependencies, governance norms, and market access for firms in regulated sectors. Safety’s secondary status in both plans signals near-term prioritization of growth, with testing regimes held in reserve if risks or geopolitical dynamics shift.

Reuters Takeaway: Seoul plans to increase government spending in next year’s budget to jump-start AI-led growth, with a focus on semiconductors, compute infrastructure, and R&D. The initiative aims to scale AI across industry and public services to lift productivity and export competitiveness as global tech rivalry intensifies.
Why it matters: South Korea sits at the heart of the AI supply chain (especially memory chips and advanced packaging), so expanded public investment could influence the pace and cost of AI worldwide. The push also foreshadows policy debates on energy for data centers, talent development, and public-sector adoption that other governments will likely mirror.

BBC News Takeaway: AI-driven demand is fueling a boom in data centers in Querétaro, Mexico, drawing U.S. tech firms seeking power and business-friendly policies but intensifying pressure on local water supplies amid historic drought. Cooling approaches vary, yet Microsoft still used 40 million liters of water in FY2025, and Google’s total water use rose 28% to 8.1 billion gallons, sparking activist concerns over transparency and prioritization. Officials say water allocation is a federal responsibility and that citizen consumption comes first, even as operators plan further expansion.
Why it matters: AI’s infrastructure build-out is increasingly shaped by scarce resources, shifting facilities to regions with limited water and fewer grid constraints and creating environmental and political trade-offs. Water-intensive cooling and diesel backup generation raise ESG risks, prompting scrutiny that can influence permitting, siting, and community relations. Policymakers and investors face pressure to mandate transparency, adopt water-smart standards, and align approvals with local resilience plans.

The Korea Times Takeaway: South Korea is preparing a no-fault liability law that would require banks to compensate voice-phishing victims even when the bank isn’t at fault. In anticipation, major lenders are accelerating AI-driven fraud detection, 24/7 monitoring, device risk checks, and cross-industry partnerships with telecoms and crypto exchanges to preempt scams. Regulators plan to set detailed requirements and limits with industry as early as this year.
Why it matters: Consumer protection rules that shift liability onto firms tend to spur investment in proactive controls: expect faster deployment of AI models, data-sharing consortia, and real-time interdiction across finance and telecom. Korea’s model could influence other jurisdictions wrestling with deepfake-enabled scams and crypto laundering, reshaping risk, compliance, and customer experience in digital finance. Banks, telcos, and exchanges will need interoperable standards and governance to manage shared accountability.

Washington Examiner Takeaway: Meta is launching a California-focused super PAC, Mobilizing Economic Transformation Across California (“Meta California”), with an initial investment in the tens of millions to back candidates who support looser AI and tech regulation. The nonpartisan PAC, led by Meta policy VPs Greg Maurer and Brian Rice, comes amid a wave of AI-related bills in Sacramento and the rise of other AI-focused political groups like Leading the Future.
Why it matters: State-level AI rules are proliferating, making California a pivotal battleground for how AI is governed in the U.S. A deep-pocketed, nonpartisan PAC from one of the world’s largest tech firms could materially shape which policies, and which candidates, advance on issues like model access, liability, and innovation incentives. The move also signals an escalating AI policy spending race across the industry.

Gizmodo Takeaway: Federal agencies are rapidly rolling out generative AI, from internal chatbots at GSA and SSA to code-writing tools at the VA and CamoGPT in the Army, amid predictions of up to 300,000 federal job cuts this year. Experts warn the technology isn’t ready for high-stakes legal and procurement work, citing 17–33% hallucination rates in legal tools and the risk of wasting scarce legal staff time. A measured Pennsylvania pilot showed strong administrative productivity gains, but researchers say the current federal pace lacks guardrails, workflow integration, and accountability.
Why it matters: Government adoption of generative AI is shifting from pilots to operations in procurement, public benefits guidance, and internal workflows, where errors carry legal, financial, and equity consequences. Agencies and advocates will need governance frameworks (disclaimers, clear ownership, human-in-the-loop review) to prevent misinformation and protect the public when guidance isn’t legally binding. The choices made now will shape vendor relationships, oversight battles, and workforce impacts for years.

AInvest Takeaway: Washington is taking an $8.9 billion, 9.9% passive equity stake in Intel, plus a five-year warrant for up to an additional 5%, to shore up domestic chipmaking and AI leadership. The move sits alongside CHIPS Act support and a recalibration of export rules that loosen some AI chip sales to China while tightening controls on advanced manufacturing tools. While the capital bolsters Intel’s foundry push, it also raises governance and dilution concerns and signals a resilience-first turn in U.S. industrial policy.
Why it matters: Semiconductors underpin AI, economic competitiveness, and defense systems, so ownership and control of advanced manufacturing are becoming a strategic imperative. Government equity in a flagship U.S. chipmaker blurs the line between market and state, potentially setting a template for future interventions in AI-critical supply chains. Investors and policy leaders will need to navigate the interplay of export controls, oversight, and corporate strategy that follows.

The New York Times Takeaway: A group of prominent Silicon Valley donors has pledged about $200 million to launch or expand pro-AI super PACs. The effort aims to support candidates in upcoming races who favor AI-friendly policies on research, innovation, and talent while resisting proposals viewed as overly restrictive. The spending underscores a coordinated push by the tech sector to shape Washington’s approach to AI as new rules are debated.
Why it matters: AI policy is fast becoming an electoral issue, and nine-figure checks can set agendas, define litmus tests, and influence which ideas get oxygen. Campaigns, advocacy groups, and regulators should expect more targeted persuasion and coalition-building tied to compute access, open-source rules, liability, national security, and immigration. Large, centralized funding also raises questions about industry influence, transparency, and alignment with broader public interests.
Worth thinking about
“USAi isn’t just another tool, it’s infrastructure for America’s AI future.”
David Shive, CIO, U.S. General Services Administration