What to read this week: Katrina Manson’s terrifying Project Maven


Project Maven began in 2017 as a tool to scour footage from drones

Devon Bistarkey, Defense Innovation Unit

Project Maven
Katrina Manson, W.W. Norton

Israel’s military is using artificial intelligence to identify targets in Gaza, the US is doing the same in Iran and Ukraine pushes on with smart drones. AI war is not the future of conflict, it is the present.

Unpacking global policies on the use of AI by militaries – the potential benefits, pitfalls and murky ethics – will fill books for decades to come. But that’s not what Katrina Manson is setting out to do in Project Maven. Instead, she uses interviews with over 200 people to tell the story of the US military’s journey into AI warfare – or one of them, since there are 800 AI projects hidden in the Pentagon.

In 2017, Project Maven was launched to build a tool to scour footage from drones and extract useful intelligence – the drones had collected more data than any human could interpret. Maven had a rocky start, says Manson. The military deployed it with soldiers in Somalia eight months after the project's launch, and its algorithms told analysts that clouds were school buses and that trees were people.

We follow one project leader back to his days as an intelligence officer in Afghanistan trying to plan missions and direct troops with nothing more than a dusty laptop loaded with Microsoft Office: where is the enemy, where is safe, what does success look like?

Humans in war are inefficient, get tired, make mistakes. The fog of war could be cleared by AI, believed the usually secretive Project Maven builders who spoke to Manson. But they intended it to go much further: choose targets, hunt them and then kill them. Without slow, deliberate human decision-making, killer robots could overwhelm enemies, fast.

“We kill the wrong people all the time. A machine can’t be worse than a human,” one insider says. The team developed Maven into a host of tools and tried to convince people on the front line to adopt them. Results improved, but mistakes still happened.

Since then, the US and other NATO members have rolled out Maven in conflicts. Some 32 companies are working on it, writes Manson, and 25,000 US military users log in regularly. But she also tells of it being used at border crossings and in hunting drug runners in the Caribbean. Can a state with such tools resist using them on its citizens?

Most worryingly, work is under way to cut humans out of the loop entirely, says Manson. So-called Goalkeeper flying drones and Whiplash naval drones can find their own targets and take them out. And humans have never invented a weapon and not used it.

It’s hard not to think of Stanislav Petrov, the Soviet lieutenant colonel who, in 1983, used his own judgement and decided reports of a US missile launch were a false alarm and avoided all-out nuclear war. Would AI make that call?

For all its fascinating insight into Maven, the book tells us more about Pentagon bureaucracy – and Silicon Valley's willingness to take on any project, no matter how distasteful, if the money is right – than it does about AI. Manson's access is phenomenal, yet the nature of military secrecy means we likely won't know exactly what technology the US government has produced, and how and when it is being used, for years to come.

War has always been deeply unpleasant, but modern conflicts, in which operators watch someone thousands of miles away via drone and decide whether they warrant a fatal strike, have made it impersonal. Handing this to AI risks making war too easy to wage, and the repercussions too easy to ignore.

We need to ensure the power granted by AI weapons is treated with the gravity it deserves, but Manson tells us a chilling story that suggests the reality is otherwise. One interviewee hoping to join Project Maven reportedly told the panel their motivation was to “reduce the non-American population” – and then got the job.

 

Two more great reads on AI and warfare

The Making of the Atomic Bomb by Richard Rhodes

There are a lot of lessons here about where military AI might go. Like the Manhattan Project, it threatens to permanently ratchet up global tensions and increase the stakes of war, just for starters.

 

Should We Ban Killer Robots? by Deane Baker

This is a dive into the debate from an ethics professor, looking at the thorny problems of reliability, control and accountability when governments hand over the work of soldiers to computers.

Topics:

  • war
  • artificial intelligence
  • books