It's a scary time when the two top news topics are war in the Middle East and Artificial Intelligence. Even scarier is an article about the confluence of the two. But that's where we are right now.
It was of course only a matter of time before AI was enlisted into the service of war-making. In some areas of military surveillance, intelligence and reconnaissance, it has been used for years, often with alarming repercussions. Take Israel's Lavender program to identify Hamas targets, for example.
AI military use becomes even more problematic with the advent of agentic AI, where decision loops become fully automated, without any human intervention. And, of course, that is the direction military strategists are going, despite the potential for Artificial General Intelligence (AGI), where AI overtakes human intelligence and even learns to use deception to get its own way (the kind of stuff that AI pioneer Geoffrey Hinton and others have been warning us about for years).
It's no coincidence that several leading AI researchers left OpenAI over worries about the company's lack of ethical guardrails, and many of them gravitated to its competitor Anthropic, which seemed to take a more constrained and prudent approach to its research. Anthropic CEO Dario Amodei has publicly warned about the potential for AI to serve some of the worst tendencies of autocratic states.
It was perhaps a surprise, then, that the US Department of Defense (now the Department of War) signed a contract with Anthropic last July and developed Claude Gov, an extension of Anthropic's publicly available Claude chatbot, and the first large language model trained on classified material. The agreement did come with restrictions and guardrails: the tool would not be used with fully autonomous weaponry, and it would not be used for mass surveillance of American citizens.
Then, predictably, the Republican administration pushed back. Secretary of Defense/War Hegseth started talking about an "AI-first war-fighting force" and accelerating AI adoption "from campaign planning to kill chain execution" (he has the military lingo down pat, even if he is perhaps not quite sure of the ramifications). There was talk of freeing the system from "ideological bias" and "red tape" - which is code-speak for imposing ideological bias and removing all inconvenient rules - and of using AI to pursue "social engineering" and "cultural agendas", all things that Anthropic explicitly aimed to avoid. Yikes!
Anthropic tried to enforce its explicitly stated rules, and refused Pentagon officials unrestricted access to Claude Gov's capabilities. Trump too got involved - well, of course he did - calling Anthropic "a radical left, woke company" (the worst insult ever in his books) and accusing it of trying to "strong-arm" the US military. He even threatened to designate Anthropic a national security supply chain risk (a label typically reserved for Chinese and Russian companies), and threatened legal action to force the company to comply with his whims.
Despite Anthropic's concerns, there are reports that Claude Gov was involved in both the US kidnapping of Nicolás Maduro in Venezuela and the assassination of Ayatollah Ali Khamenei in Iran. It's not clear to what extent these operations involved fully automated weapons systems, and there is no way the Trump administration is going to tell us.
So, the guardrails are down and the training wheels are off. In the hands of an unbalanced and unhinged administration like the one in today's White House, all bets on weaponized AI are off. Science fiction tropes are in everyday use, and none of this is good.