It's a scary time when the two top news topics are war in the Middle East and Artificial Intelligence (AI). Even scarier is an article about the confluence of the two. But that's where we are right now.
It was of course only a matter of time before AI was enlisted into the service of war-making. In some areas of military surveillance, intelligence and reconnaissance, it has been used for years, often with alarming repercussions (take Israel's Lavender program to identify Hamas targets, for example). But if it becomes more widely employed in planning, targeting and even decision-making, then we are in deeper trouble.
AI military use has become more problematic with the advent of agentic AI, where decision loops can become fully automated, without any human intervention. And, of course, that is exactly the direction military strategists are heading, despite the risk of Artificial General Intelligence (AGI), in which AI overtakes human intelligence and even learns to use deception to get its own way (the kind of thing that AI pioneer Geoffrey Hinton and others have been warning us about for years).
It's no coincidence that several leading AI researchers left OpenAI over worries about the company's lack of ethical guardrails, and many of them gravitated to its competitor Anthropic, which seemed to take a more constrained and prudent approach to its research. For example, Anthropic CEO Dario Amodei has publicly warned about the potential for AI to serve some of the worst tendencies of autocratic states.
It was perhaps a surprise, then, that the US Department of Defense (now the Department of War) signed a contract with Anthropic last July, under which it developed Claude Gov, an extension of Anthropic's publicly available Claude chatbot, and the first large language model trained on classified material. The agreement did come with restrictions and guardrails: the tool would not be used with fully autonomous weaponry, and it would not be used for mass surveillance of American citizens.
Then, perhaps predictably, the Republican administration pushed back. Secretary of Defense/War Hegseth started talking about an "AI-first war-fighting force" and accelerating AI adoption "from campaign planning to kill chain execution" (he sure has the military lingo down pat, even if he is perhaps not quite sure of the ramifications). There was talk of freeing the system from "ideological bias" and "red tape" - which is code-speak for imposing their own ideological bias and removing all inconvenient rules - and even of using AI to pursue "social engineering" and "cultural agendas", all things that Anthropic explicitly aimed to avoid. Yikes!
Initially, Anthropic tried to enforce its own explicitly stated rules, and refused Pentagon officials unrestricted access to Claude Gov's capabilities. But then Trump too got involved - well, of course he did! - calling Anthropic "a radical left, woke company" (the worst insult ever in his book), and accusing it of trying to "strong-arm" the US military. He even threatened to designate Anthropic a national security supply chain risk (a label typically reserved for Chinese and Russian companies), and threatened legal action to force the company to comply with his whims.
Despite Anthropic's concerns, there are reports that Claude Gov was involved in both the US kidnapping of Nicolás Maduro in Venezuela, and the assassination of Ayatollah Ali Khamenei in Iran. It's not clear to what extent these operations involved fully-automated weapons systems, and there is no way the Trump administration is going to tell us.
So, it seems the guardrails are at least partially down, and the training wheels are off. In the hands of an unbalanced and unhinged administration like that in today's White House, all bets on weaponized AI are off. Science fiction tropes are in everyday use, and none of this is good.