- DARPA is working with trusted human decision makers to train algorithms to make difficult decisions
- The idea is that humans are biased and can disagree, slowing down decisions
- AI can be trained from the start, based on best practice, to make fast decisions
- The technology is still at an early stage, but DARPA hopes for a wide rollout
Modern military operations, whether in combat, medicine or disaster relief, require complex decisions to be made very quickly, and AI could be used to make them.
The Defense Advanced Research Projects Agency (DARPA) has launched a new program aimed at introducing artificial intelligence into the decision-making process.
This is because, in a real-world emergency that might require instant choices between who does and doesn’t get help, the answer isn’t always clear and people disagree over the correct course of action – an AI could make a quick decision instead.
The latest DARPA initiative, called ‘In the Moment’, will involve new technology that could take difficult decisions in stressful situations, using live analysis of data, such as the condition of patients in a mass-casualty event and drug availability.
It comes as the U.S. military increasingly leans on technology to reduce human error, with DARPA arguing that removing human bias from decision making will ‘save lives’.
The new AI will take two years to train and another 18 months to prepare before it is likely to be used in a real-world scenario, according to DARPA.
‘AI is great at counting things,’ Sally A. Applin, an expert in the interaction of AI and ethics, told The Washington Post, adding: ‘I think it could set a precedent by which the decision for someone’s life is put in the hands of a machine.’
According to DARPA, the technology is only part of the problem when it comes to switching to AI decision making; the rest lies in building human trust.
‘As AI systems become more advanced in teaming with humans, building appropriate human trust in the AI’s abilities to make sound decisions is vital,’ a spokesperson for the military research organization explained.
‘Capturing the key characteristics underlying expert human decision-making in dynamic settings and computationally representing that data in algorithmic decision-makers may be an essential element to ensure algorithms would make trustworthy choices under difficult circumstances.’
DARPA announced the In the Moment (ITM) program earlier this month, with the first task being to work with trusted human decision-makers to explore the best options when there is no obvious, agreed-upon right answer.
‘ITM is different from typical AI development approaches that require human agreement on the right outcomes,’ said Matt Turek, ITM program manager.
‘The lack of a right answer in difficult scenarios prevents us from using conventional AI evaluation techniques, which implicitly requires human agreement to create ground-truth data.’
For example, algorithms used by self-driving cars can be evaluated against ground truth for right and wrong driving responses, derived from traffic signs and the rules of the road.
When the rules don’t change, hard-coded risk values can be used to train the AI, but this won’t work for the Department of Defense (DoD).
‘Baking in one-size-fits-all risk values won’t work from a DoD perspective because combat situations evolve rapidly, and commander’s intent changes from scenario to scenario,’ Turek said.
‘The DoD needs rigorous, quantifiable, and scalable approaches to evaluating and building algorithmic systems for difficult decision-making where objective ground truth is unavailable.
‘Difficult decisions are those where trusted decision-makers disagree, no right answer exists, and uncertainty, time-pressure, and conflicting values create significant decision-making challenges.’
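To make the contrast concrete, here is a minimal sketch in Python – the driving labels and triage options are made up for illustration, not drawn from DARPA’s program – of why ground truth makes evaluation easy and its absence makes it hard:

```python
# Illustrative sketch only (not DARPA's code): with ground truth,
# evaluation reduces to comparing outputs against agreed-upon labels.

def accuracy(predictions, ground_truth):
    """Fraction of decisions matching the agreed-upon right answer."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Self-driving case: traffic rules give every scenario one right answer.
labels      = ['stop', 'yield', 'go', 'stop']
predictions = ['stop', 'go',    'go', 'stop']
print(accuracy(predictions, labels))  # 0.75

# Triage case (hypothetical): trusted experts disagree, so there is no
# single label to score against and accuracy cannot even be defined.
expert_answers = [{'treat_A', 'treat_B'}, {'treat_B', 'treat_C'}]
```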
To solve the problem, DARPA is taking inspiration from the medical imaging analysis field.
In this area, techniques have been developed for evaluating systems even when skilled experts may disagree.
‘Building on the medical imaging insight, ITM will develop a quantitative framework to evaluate decision-making by algorithms in very difficult domains,’ Turek said.
‘We will create realistic, challenging decision-making scenarios that elicit responses from trusted humans to capture a distribution of key decision-maker attributes.
‘Then we’ll subject a decision-making algorithm to the same challenging scenarios and map its responses into the reference distribution to compare it to the trusted human decision-makers.’
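In rough terms, that is a statistical comparison. The sketch below is one hedged reading of the idea: each responder’s decisions are collapsed into a single numeric attribute (a made-up ‘risk tolerance’ score – our assumption, not DARPA’s published measure), and the algorithm is flagged if it falls far outside the distribution of trusted human responses:

```python
# Hedged sketch of mapping an algorithm into a reference distribution of
# trusted human decision-makers; the attribute and scores are assumptions.
from statistics import mean, stdev

def attribute_profile(responses):
    """Average a responder's per-scenario scores into one attribute value."""
    return mean(responses)

# Hypothetical per-scenario risk scores from trusted human decision-makers.
human_profiles = [attribute_profile(r) for r in [
    [0.2, 0.5, 0.4],   # expert 1
    [0.3, 0.6, 0.5],   # expert 2
    [0.1, 0.4, 0.3],   # expert 3
]]

# The same scenarios shown to the algorithm under evaluation.
algo_profile = attribute_profile([0.8, 0.9, 0.7])

# How many standard deviations the algorithm sits from the human reference
# distribution; a large value flags decisions outside the trusted range.
z = (algo_profile - mean(human_profiles)) / stdev(human_profiles)
print(f'algorithm z-score vs. trusted humans: {z:.1f}')  # ~4.3
```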
The program has four technical areas, covering different aspects of research.
The first covers decision-maker characterization, which aims to identify the key attributes of humans tasked with making decisions in the field.
The second will be to create an alignment score between a human decision-maker and an algorithm – with the goal of producing algorithmic decisions that humans can trust (a rough sketch of such a score follows below).
The third will be to create a program, based on these scores, that can be evaluated, and the fourth will be to develop policy and practice for its use.
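As a hedged illustration of the second area only: DARPA has not published a formula, but if a trusted human and an algorithm were profiled on the same attributes, one simple alignment score would be the similarity between their attribute vectors:

```python
# Assumed, illustrative alignment score -- the attribute names and the
# cosine-similarity choice are ours, not DARPA's published method.
import math

def alignment_score(human_attrs, algo_attrs):
    """Cosine similarity between two decision-maker attribute vectors."""
    dot = sum(h * a for h, a in zip(human_attrs, algo_attrs))
    norm = (math.sqrt(sum(h * h for h in human_attrs)) *
            math.sqrt(sum(a * a for a in algo_attrs)))
    return dot / norm

# Hypothetical attributes: risk tolerance, speed, adherence to protocol.
trusted_human  = [0.4, 0.7, 0.9]
candidate_algo = [0.5, 0.6, 0.8]
print(f'alignment: {alignment_score(trusted_human, candidate_algo):.2f}')  # ~0.99
```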
It will be three and a half years before the final stage is reached, according to DARPA, with the first two years spent building a basic AI and testing it on different scenarios.
The second half, covering the final 18 months, will involve expanding the capabilities of the AI and testing it on more complex events with multiple casualties.
NATO is also working to create AI assistants that can help with decision making – in this case a triage assistant developed in collaboration with Johns Hopkins University.
Colonel Sohrab Dalal, head of the medical branch for NATO’s Supreme Allied Command Transformation, told The Washington Post that triage could do with a refresh.
This is the process by which clinicians assess how urgently wounded soldiers need care, and it hasn’t changed much in the past 200 years.
His team will use NATO injury data, alongside casualty scoring systems, predictions and input on a patient’s condition, to decide who should get care first.
‘It’s a really good use of artificial intelligence,’ Dalal, a trained doctor, said. ‘The bottom line is that it will treat patients better [and] save lives.’