Your weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies | www.MFStrategies.com
OpenAI is writing the superintelligence policy agenda before elected officials can, while the infrastructure that decides what voters see is already being determined by which news organizations leave their servers open to AI crawlers.
The gap between those two timelines, one aspirational, one operational, is where power is relocating.
Campaigns face a dual exposure: policy frameworks authored by the companies they might regulate, and recommendation engines that already favor whoever stays visible to algorithms over whoever builds coalitions.
Liability is unclear, equity safeguards are being pushed to states, and the loudest voice in the room is the one with the most to gain from shaping the constraints. The question is whether governance can close the distance before the incentives harden into infrastructure.
Berkman Klein Center
Takeaway: Harvard's Berkman Klein Center hosted a panel on agentic AI in cybersecurity, focusing on how autonomous AI systems change the threat landscape, create new liability questions, and challenge existing legal frameworks. Experts, including a former Deputy National Cyber Director, discussed who is responsible when AI causes or fails to prevent a breach. The panel addressed gaps in current policy that make regulating these systems difficult.
Why it matters: Agentic AI shifts liability and response authority in cybersecurity from humans to algorithms. That creates gaps in legal accountability when breaches happen or defenses fail. Campaigns face unclear rules on who's responsible if autonomous tools misfire, raising compliance risk and vendor dependency just as AI becomes essential to threat detection.
Freeman Spogli Institute for International Studies
Takeaway: A Stanford study tested five AI chatbots during Japan's February 2026 election and found that when users described left-leaning policy views, all five models overwhelmingly recommended the Japanese Communist Party. The cause was not bias in training: the JCP runs an open-access news site that AI search tools can read, while major Japanese news outlets block AI crawlers. Policy positions swung recommendations by 50 to 98 percentage points, while demographics moved them less than 7 points.
Why it matters: AI chatbots now shape voter decisions through web access, not just training data. Models steer left-leaning users toward fringe parties when mainstream outlets block crawlers but partisan sites stay open. That gives platforms with open indexing outsized influence over electoral recommendations. Campaigns face a new asymmetry: visibility to AI may now matter more than voter outreach.
AI is making the world more complicated. Let’s keep list prep fast and simple.
Campaigns face new risks every cycle. You don't need another one hiding in your donor list workflow.
Kit cleans ActBlue exports, FEC files, and event lists in 90 seconds, then saves the steps so you can reuse them for every client, every race, every file.
No formulas. No black box. Just a workflow you control.
Start Prepping Call Time in Half the Time
fortune.com
Takeaway: OpenAI released a 13-page policy paper Monday calling for major changes to tax systems, work schedules, and social safety nets to prepare for AI superintelligence. The paper offers ideas like public wealth funds and shorter workweeks, but critics say the proposals mostly repackage existing AI governance frameworks discussed since ChatGPT launched in 2022. The target audience is Washington policymakers, not the public, and the paper arrives the same day a New Yorker investigation questioned CEO Sam Altman's trustworthiness on AI safety.
Why it matters: OpenAI is framing the superintelligence debate before regulators can. That gives the company leverage to define what counts as reasonable oversight while preempting stricter rules. Critics see it as agenda-setting disguised as civic dialogue, shaping the constraints under which OpenAI itself operates. Campaigns may face public skepticism if they align with industry-authored policy without independent validation.
Joint Center
Takeaway: The Joint Center for Political and Economic Studies released a policy brief on March 31, 2026, outlining how state and local governments can support Black entrepreneurs in AI amid what it calls federal policy gaps. The brief identifies four barriers (training access, broadband infrastructure, venture capital, and representation in AI governance), notes that Black startups receive less than 0.5% of VC funding, and offers a roadmap for non-federal actors to close those gaps. It warns that current federal AI proposals risk removing explicit protections against discrimination and bias from the national agenda.
Why it matters: Federal AI policy is sidelining equity safeguards, pushing states and local leaders to fill the gap. Black entrepreneurs risk being locked out of a trillion-dollar economy through lack of training, capital, and infrastructure. Campaigns may face pressure to signal AI equity commitments or risk losing credibility with Black business communities.
Worth Thinking About This Week
"The key finding is that JCP recommendation rates rise sharply when policy positions are provided, which is the typical scenario when voters use these tools in practice." -Andrew Hall and Sho Miyazaki, Stanford researchers