PITTSBURGH (TNS) — In 2017, the World Economic Forum warned that autonomous weapons had already arrived. The question was: “How do we control how they are used?”
The international nonprofit warned again in 2021 that global regulations for militarized artificial intelligence were insufficient. Now, with AI models coordinating drones in Ukraine and steering missiles in Gaza, the question is no longer theoretical. The same AI efficiency driving faster C-suite consulting is making it easier to eviscerate targets on the battlefield, drowning out the voices of critics and researchers in the process.
Carnegie Mellon University was once a leading developer of AI models. I wondered: do its scientists still have a say in how those models are used?
“I hope so,” one researcher, Vincent Conitzer, recently told me.
But it feels a bit like “Oppenheimer,” he said, referencing the movie that portrays physicist Robert Oppenheimer developing the first atomic bomb.
“At some point, after the tests, the military people come in to take it all away. And it’s clear to us that it’s out of the scientists’ hands,” Conitzer said. “I think the best we can hope for is to be listened to.”
In his new book on the topic, Conitzer is quick to note that AI, even in military settings, can be used for good and bad. Prediction software can misidentify civilians, leading to needless death, he writes. But it can also save civilians by identifying incoming enemy fire.
Many Pittsburgh startups are chasing the latter — the lifesaving potential of military deals. They’re building driverless truck convoys to protect American soldiers, or using game theory to build cheaper defense systems, saving taxpayer money.
“Our AI is making the world a safer place,” Tuomas Sandholm, the founder of one of those local companies, Strategy Robot, told me last month.
Others have chosen to keep their tech out of the defense space. Lawrenceville’s Agility Robotics was among the group of robotics companies that pledged in 2022 not to militarize their capable bots. But a competitor in Philadelphia was more than happy to fill the gap, supplying a tactical dog bot that seemed to demonstrate the classic arms race rationale: if you don’t build it, someone else will.
The development pause that Conitzer and 30,000 other signatories called for last year never happened. Instead, AI has grown more competitive, and more deadly.
Recent examples of AI targeting systems in Gaza show how wars that were already fought from a distance are now increasingly automated, and potentially inhumane. The Israel Defense Forces reportedly used a targeting system called Lavender to steer missiles toward sleeping Palestinian families shortly after the Hamas attack in October.
The practice drew swift condemnation from the United Nations when it was reported in April, even as the agency struggled to understand exactly how the technology had been used.
In Ukraine, swarms of AI-enabled drones have been viewed more favorably; the United Kingdom is reportedly working with the U.S. and other Western allies to furnish a fleet. The effort comes as the U.S. military doubles down on AI and autonomy investments, supposedly to compete with China.
Militaries have been developing autonomous weapon systems for decades, and the debate over their use stretches back just as long. It is also evident in both Ukraine and Gaza that a tolerance for civilian casualties shapes how any particular technology gets used.
But the recent examples bring an immediacy to the debate, much as the proliferation of ChatGPT awakened many people to the potential harms and benefits of consumer-friendly AI. Conitzer said there's a similar arms race in the business world, where companies are pushed to experiment, implement and integrate before the competition does.
In that sphere too, it can feel too late for the skeptics to speak up. OpenAI’s co-founder Ilya Sutskever left the company last month after disagreeing with CEO Sam Altman’s approach to safety.
Two weeks later, PricewaterhouseCoopers, one of the largest accounting firms in Pittsburgh, became OpenAI's first major ChatGPT Enterprise customer, offering the higher-powered version to its 75,000 U.S. employees.
The plan, PwC has said, is to test the tech internally before rolling it out to eager customers around the globe.