A group of tech executives warn military about unintended harm caused by AI in ...

Tech leaders from Google, Microsoft, and Facebook suggest ethics guidelines for using AI in the military to avoid 'unintended harm to humans'
The Defense Innovation Board made 12 recommendations for AI in the military
The Board warns against unintended adverse consequences of using AI
Members include tech execs from Google, Microsoft, Facebook, and LinkedIn

By Michael Thomsen For Dailymail.com

Published: 22:10 GMT, 1 November 2019 | Updated: 22:27 GMT, 1 November 2019


This week, the Defense Innovation Board issued a series of recommendations to the Department of Defense on how artificial intelligence should be implemented in future military conflict.

The Defense Innovation Board was first created in 2016 to establish a series of best practices on potential collaborations between the US military and Silicon Valley.

There are sixteen current board members from a broad range of disciplines, including former Google CEO Eric Schmidt, Facebook executive Marne Levine, Microsoft’s Chief Digital Officer Kurt Delbene, astrophysicist Neil deGrasse Tyson, Steve Jobs biographer Walter Isaacson, and LinkedIn co-founder Reid Hoffman.


‘Now is the time, at this early stage of the resurgence of interest in AI, to hold serious discussions about norms of AI development and use in a military context—long before there has been an incident,’ the report says.

The report says that using AI for military actions or decision-making comes with 'the duty to take feasible precautions to reduce the risk of harm to the civilian population and other protected persons and objects.' 

The report outlines five ethical principles that should be at the heart of every major decision related to using AI in the military.

AI in the military should always be: Responsible, Equitable, Traceable, Reliable, and Governable.

WHAT ARE THE RECOMMENDATIONS?

The Defense Innovation Board defined five ethical principles for using AI in the military: Responsible, Equitable, Traceable, Reliable, and Governable.

Based on these five principles it recommended the following 12 things: 

1. Formalize these principles via official DoD channels.

2. Establish a DoD-wide AI Steering Committee.

3. Cultivate and grow the field of AI engineering.

4. Enhance DoD training and workforce programs.

5. Invest in research on novel security aspects of AI.

6. Invest in research to bolster reproducibility. 

7. Define reliability benchmarks. 

8. Strengthen AI test and evaluation techniques.

9. Develop a risk management methodology. 

10. Ensure a proper implementation of AI ethics principles.

11. Expand research into understanding how to implement AI ethics principles.

12. Convene an annual conference on AI safety, security, and robustness.  

Based on these principles, the report makes 12 concrete recommendations for how to move forward with integrating AI into contemporary warfare.

The Board recommends creating a risk management strategy that would formalize a taxonomy of negative outcomes.

The purpose of this taxonomy would be to ‘encourage and incentivize the rapid adoption of mature technologies in low-risk applications, and emphasize and prioritize greater precaution and scrutiny in applications that are less mature and/or could lead to more significant adverse consequences.’ 

The report recommends the development of a risk management methodology to account for the potential negative outcomes that could come from deferring a significant amount of work and decision-making to AI.



