The Impact: A weekly look at the intersection of AI, advocacy, and politics from the team at MFStrategies | www.MFStrategies.com

This week, MFStrategies is excited to highlight PubSent!
PubSent is a two-way AI texting platform that turns outreach into real conversations at scale. Instead of one-way blasts, PubSent’s agents engage supporters, answer questions using your approved talking points, and drive people to clear calls to action: donate, RSVP, volunteer, or vote.
Every reply is automatically tagged for sentiment, topic, and support level, so you can see what people think in real time and know exactly who needs a human follow-up.
By teaming up with MFStrategies, you get priority access to PubSent’s next-generation SMS program. Launch smarter persuasion and fundraising conversations with built-in measurement from day one.
The Impact Podcast: Hosts Addie and Hal break down this week's news in 10 minutes
The AI Campaign Playbook: Our roadmap for implementing AI safely and effectively in your organization
Vendor Scorecards: Coming soon
The ground is shifting from “AI is coming” to “AI is the default.” Federal agencies are formalizing AI under OMB mandates and GSA-approved models, pushing strict guardrails onto contractors, while states rush to regulate political speech before possible federal preemption. Meanwhile, new research shows chatbots can move voters far more than TV ads, making quiet, one-to-one persuasion the real risk heading into 2026.
AI / Political News of the Week
Ogletree Deakins
Takeaway: Federal agencies are posting AI strategy plans under OMB M-25-21 and must issue detailed AI-use and acquisition policies by December 29, 2025. The plans aim to speed adoption while adding guardrails: secure development, stronger data governance, workforce training, and minimum safeguards for “high-impact” AI under Chief AI Officer (CAIO) oversight. Contractors should expect clauses barring training on non-public data, tighter documentation and human-oversight requirements, and preferences for U.S.-developed AI tools.
Why it matters: This locks in AI as a normal part of federal operations and pushes risk and paperwork down onto anyone touching federal dollars. Expect new clauses that force you to map every AI tool, prove human oversight, and avoid foreign or “black box” systems, raising costs and making noncompliance a quiet way to sideline smaller, under-resourced vendors.

MIT Technology Review
Takeaway: Two new studies find AI chatbots can sway voters far more than traditional ads. Short conversations moved opinions by up to 10 points; optimized models hit 25 points. Persuasive AI is cheap to run at scale, and U.S. policy lags while the EU now treats it as high-risk.
Why it matters: This research shifts AI in elections from “scary deepfakes” to something more serious: cheap, measurable, one-to-one persuasion that outperforms TV ads. For campaigns, it means foreign operations and well-funded groups can quietly test and optimize messages at scale long before regulators catch up, and long before you even see them.

Utah News Dispatch
Takeaway: Utah Gov. Spencer Cox urged lawmakers to pass new AI rules in the 2026 session, including labels on political ads, penalties for deceptive deepfakes, and disclosures when chatbots are used. He warned that a coming national bill could override state laws and lock in weaker protections. Business groups want one federal standard, while civil liberties groups warn broad bans could chill speech.
Why it matters: Cox is signaling a red-state push to lock in state power over AI before Congress can sweep the field. For campaigns, that means faster, stricter rules on deepfakes and ad labels arriving state by state, with legal gray zones and messaging risks if federal preemption later rewrites the map.

DD News (Reuters)
Takeaway: The GSA approved Meta’s Llama for federal use. Agencies can test the free model under GSA security and legal rules. GSA has also cleared tools from AWS, Microsoft, Google, Anthropic, and OpenAI at discounted rates.
Why it matters: Opening the door to Llama makes it easier for agencies to quietly bake a partisan-aligned company’s model into everyday government work, normalizing AI dependence while details on bias, data handling, and long-term costs stay murky. For campaigns, this shapes which tools staffers get used to and whose defaults they start to trust.
Worth thinking about: “If we wait until we can see it happening, it will already be too late.”