Autonomous technologies perform tasks with minimal human intervention and often employ artificial intelligence (AI) to simulate human cognition, intelligence, and creativity. Because these systems are critical to our safety, health, and well-being, as well as to the fabric of commerce, new research and engineering methodologies are needed to ensure they behave in safe, reasonable, and acceptable ways.

Assured AI is critical to ensuring autonomous systems can be trusted to perform as intended, alone and as part of a team. They must perceive, decide, act, and learn as intended, cause no external harm, and resist malicious interference. Based on feedback, some autonomous systems learn from their experiences, modify their own logic to achieve better results, and improve their capabilities. Consequently, AI software and the hardware it runs on cannot be tested explicitly for every possible configuration and circumstance, yet these systems are expected to perform reasonably in all circumstances and contexts. These algorithms are also expected to be transparent and explainable, which increases trust in them. New techniques are needed to ensure AI-enabled autonomous systems function as intended and can explain their actions and conclusions. Furthermore, because these systems can be deceived by cleverly manipulated input data, new techniques are also needed to make autonomous systems resilient and resistant to malicious deception and spoofing.
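The deception concern above can be made concrete with a minimal sketch. The model, data, and perturbation below are all hypothetical toy constructions, not any fielded system: a simple linear classifier is flipped by a small, targeted change to its input, in the spirit of gradient-based adversarial attacks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" linear model (assumed for illustration):
# score = w . x + b, classify positive if score > 0.
w = rng.normal(size=16)
b = 0.0

def classify(x):
    return int(w @ x + b > 0)

# A benign input constructed so the model classifies it as positive
# (each component is aligned with the sign of the corresponding weight).
x = np.abs(rng.normal(size=16)) * np.sign(w)
score = w @ x + b  # strictly positive by construction

# Gradient-style perturbation: for a linear model, the gradient of the
# score with respect to x is just w, so stepping against sign(w) lowers
# the score as quickly as possible per unit of max-norm perturbation.
epsilon = score / np.abs(w).sum() + 1e-6  # just enough to cross the boundary
x_adv = x - epsilon * np.sign(w)

print(classify(x), classify(x_adv))  # the tiny perturbation flips the decision
```

The same mechanism, scaled up, is why image classifiers can be spoofed by pixel-level noise invisible to humans, and why assurance must cover adversarially chosen inputs rather than only naturally occurring ones.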

Autonomous algorithms must be repeatable, verifiable, and unbiased. An algorithm may initially appear to perform well in limited contexts, yet validation and verification in broader contexts may reveal that it does not perform as intended. Ultimately, assurance is achieved when algorithms work repeatably and verifiably, without unintended biases, in all the contexts for which they were designed to operate. New techniques are needed to validate and verify AI algorithms and to ensure they are unbiased.
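As a rough sketch of what such an assurance check might look like (the decision rule, data distributions, and acceptance threshold are all assumptions for illustration), one can test repeatability by re-running the algorithm on identical inputs, and probe for context-dependent bias by comparing error rates across the operating contexts the system was designed for:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Deterministic toy decision rule standing in for a trained model.
    return (x[:, 0] + 0.5 * x[:, 1] > 0).astype(int)

# Two hypothetical operating contexts with different input distributions.
context_a = rng.normal(loc=0.5, size=(1000, 2))
context_b = rng.normal(loc=-0.2, size=(1000, 2))
labels_a = (context_a.sum(axis=1) > 0).astype(int)
labels_b = (context_b.sum(axis=1) > 0).astype(int)

# Repeatability: identical inputs must yield identical outputs across runs.
assert np.array_equal(model(context_a), model(context_a))

# Crude bias indicator: the accuracy gap between contexts.
acc_a = (model(context_a) == labels_a).mean()
acc_b = (model(context_b) == labels_b).mean()
gap = abs(acc_a - acc_b)

MAX_GAP = 0.10  # acceptance threshold, assumed for illustration
print(f"context A: {acc_a:.3f}, context B: {acc_b:.3f}, gap: {gap:.3f}")
print("PASS" if gap <= MAX_GAP else "FAIL: performance is context-dependent")
```

Checks of this kind only sample behavior; they cannot by themselves establish assurance across all contexts, which is precisely why the new validation and verification techniques called for above are needed.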