
Defense Innovation Unit Publishes ‘Responsible AI Guidelines’

The Defense Innovation Unit released its initial “Responsible AI Guidelines” document Nov. 15, intended to operationalize the Defense Department’s ethical principles for artificial intelligence within its commercial prototyping and acquisition efforts.

“DIU’s RAI guidelines provide a step-by-step framework for AI companies, DOD stakeholders and program managers that can help to ensure that AI programs are built with the principles of fairness, accountability and transparency at each step in the development cycle of an AI system,” said Jared Dunnmon, PhD, technical director of the artificial intelligence/machine learning portfolio at DIU.

The DIU team spent the last 18 months working with researchers at the Carnegie Mellon University Software Engineering Institute; speaking with industry partners, the Joint Artificial Intelligence Center, academia and government officials; and testing the guidelines to solicit feedback, Dunnmon said. The guidelines are intended specifically for use on DIU programs.

The aim of the guidelines, he said, is to:

• Accelerate programs from the outset by clarifying end goals, aligning expectations and acknowledging risks and trade-offs.

• Increase confidence that AI systems are developed, tested and vetted to the highest standards of fairness, accountability and transparency.

• Support changes in the way AI technologies are evaluated, selected, prototyped and adopted in order to avoid potential bad outcomes.

• Elicit questions and conversations that are crucial for AI project success.

The guidelines provide examples of how responsible AI considerations can be put into practice in real-world programs, in an effort to create a user-friendly, more easily understood document that expedites the process, Dunnmon said.
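As a purely illustrative sketch, and not an excerpt from the DIU guidelines themselves, a development team might encode one such consideration, such as a fairness check, as an automated test that runs at each step of the development cycle. The metric, threshold and data below are hypothetical.

```python
# Illustrative only: a minimal fairness check of the kind a program team might
# automate during development. The metric, threshold and data fields are
# hypothetical and are not taken from the DIU guidelines.
from typing import Sequence


def demographic_parity_difference(
    predictions: Sequence[int], groups: Sequence[str]
) -> float:
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for group in set(groups):
        members = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(members) / len(members)
    values = list(rates.values())
    return abs(values[0] - values[1])


if __name__ == "__main__":
    # Hypothetical model outputs (1 = positive decision) and group labels.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_difference(preds, grps)
    # A team might fail the build if the gap exceeds an agreed-upon threshold.
    assert gap <= 0.5, f"Fairness gap {gap:.2f} exceeds threshold"
    print(f"Demographic parity difference: {gap:.2f}")
```

A check like this covers only one narrow slice of fairness; the accountability and transparency practices the guidelines describe are organizational as much as technical.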

“Users want so they can trust and verify that their tools protect American interests without compromising our collective values,” said John Stockton, co-founder of Quantifind, a software technology company that provided DIU feedback on the guidelines during its prototype project. “These guidelines show promise for actually accelerating technology adoption, as it helps identify and get ahead of potentially show-stopping issues. We’ve found that leaning into this effort has also served us well outside of government, by strengthening internal controls and producing transparency and patterns of trust that can also be leveraged with all users, both public and private.”

To view the guidelines, visit: www.diu.mil/responsible-ai-guidelines.
