Artificial intelligence and autonomous technology
Helping the International Committee of the Red Cross ask "Should a weapon be able to make its own “decision” about who to kill?"
Artificial intelligence and autonomous technology is everywhere, playing a part in all of our lives in lots of different ways. It's helping people get to work, helping us decide what to have for dinner, even saving lives and helping us live longer.
It’s no wonder we’re investing more and more into artificial intelligence, but what about when AI technology begins to take on moral, ethical, legal and public policy decisions?
What if a weapon could make its own “decision” about who to kill?
The International Committee of the Red Cross, a worldwide organisation helping people affected by conflict and war, wanted to understand how people feel about AI in autonomous weapons and human responsibility, by asking: "What if a weapon could make its own 'decision' about who to kill?"
The ICRC wanted to capture the audience's feelings towards artificial intelligence and the issue of human control: how comfortable they are with AI, how interested they are in the subject, whether AI should be allowed to make such moral choices at all, and if so, whether humans should be able to intervene at any point.
This dataset would then feed into a wider report by the ICRC and Ipsos, exploring to what extent audiences are comfortable leaving life-and-death decisions to machines, and how they think about who is accountable when things go wrong.
We knew we needed to capture users' thoughts and feelings, but our main challenge was: how do we turn what is essentially a data capture form into something exciting and engaging?
Our solution was a killer chatbot experience that immersed our audience in a military combat adventure, taking them on a journey into the future of warfare to capture their thoughts and feelings on autonomous weaponry.
We had to create a two-way conversation between the user and the bot, educating them on what AI technology and autonomous weaponry are, while at the same time capturing their feelings towards it. But first, one question: who is the bot?
Who better to educate users on autonomous weaponry than an autonomous weapon?
Meet A.I.D.A [Automated Infantry Defence Analysis], a new AI defence unit capable of killing or stopping hostile targets within a matter of seconds, based on data, statistics and past experience.
Users are dropped into a military checkpoint alongside A.I.D.A, tasked with protecting the border. Along the way they can ask questions about AI and autonomous weaponry and respond to A.I.D.A's answers, allowing us to gauge their reactions.
“We loved working with Catch on this!”
Nora Livet, Digital Officer ICRC
We also wanted to add a layer of visuals within the chat experience, creating a number of graphics to support the narrative of the story, taking inspiration from retro computer ads and 80s movie posters.
Since its launch we have had an amazing response, with users interacting with A.I.D.A to give us a dataset of over 230,000 user reactions. This interactive and engaging experience is helping the ICRC shape the way in which AI weaponry is used in the future.
“We absolutely have loved this project and it is informing a lot of our thinking on how to use this platform and this kind of experience-based engagement”
Anita Dullard, Editorial strategy lead ICRC