Takeaway

On April 30, 2026, Defense Secretary Pete Hegseth appeared before the
Senate Armed
Services Committee and laid out the Pentagon's plans to expand AI use
in targeting, surveillance, and autonomous weapons. Several Democratic
lawmakers pushed back, calling for new guardrails around mass
surveillance and lethal decision-making. Last Friday, the Pentagon
finalized contracts with seven large tech firms, among them OpenAI,
Google, and SpaceX, to integrate AI into classified systems. The Army,
meanwhile, has sent close to 10,000 AI-powered drones to the Middle
East since the start of the war with Iran. In March, Michigan Sen.
Elissa Slotkin put forward legislation that would mandate human
sign-off before any autonomous weapon launch and outlaw AI-driven mass
surveillance. Her bill came after the Pentagon severed its
relationship with AI company Anthropic, which had refused to drop
those very restrictions from its own policies.

Why it matters

The Pentagon is rolling out AI across its operations
without any enforceable rules governing how it's used in targeting
decisions or surveillance programs. That gap between moving fast and
maintaining oversight is generating real tension. On the Hill,
Democrats are pushing for requirements that keep a human in the loop
on lethal decisions, but defense officials don't want their hands
tied. For campaigns, the political exposure here is straightforward:
if AI use gets linked to civilian casualties abroad, or if major tech
contractors start walking away from military deals over ethical terms,
it becomes a messaging problem nobody wants to own.