Velocity by Booz Allen

Credo AI


Credo AI is a leader in embedding Responsible AI and regulatory requirements into operations and the technical development process. The company’s CEO and founder, Navrina Singh, shared her perspective on this rapidly shifting priority for technology and mission leaders alike.

There’s so much hype today around AI. What’s one of the biggest questions you get from organizational leadership?

By far, the most frequent questions we encounter concern the pace at which organizations should move to adopt AI and AI governance: Should they pause, wait and see, or move ahead at full speed? In recent research with customers and industry, we found that organizations lacking expertise in generative AI, combined with concerns over security, privacy, and intellectual property, are by and large taking a “wait, review, and test” approach to generative AI governance. We’ve also seen very public examples of private-sector organizations banning these tools for their employees altogether. I strongly believe the emphasis should not be on halting AI development or innovation, or on halting the use of AI at one company or another, but rather on catching up as a society: investing, researching, and building governance, oversight, and alignment over the remarkable capabilities that have emerged.

What dynamic areas of research are you most excited about?

On the technical research front, the industry is still in the early innings of understanding these complex AI systems, especially foundation models. There is an active body of technical research trying to make sense of the unexpected capabilities, deployment safety, and proliferation problems that come with these frontier AI systems. I have been thinking critically about these systems from an application perspective: exploring how humans and AI systems can best work together; the environmental impact of training large models and running data centers; the safety and long-term consequences of artificial general intelligence (AGI) systems; and how AI can be used to address pressing issues facing humanity, such as poverty, healthcare, and climate change. All of these questions are still in the early research phase.

As you look to the future, what does the world of Responsible AI look like?

A year from now, my hope is that Responsible AI will emerge as the cornerstone of all AI. Every person who touches AI in some capacity will have a role in the accountability, transparency, and fairness embedded in AI systems and organizations. With that, I hope to see continued public commitments to Responsible AI practices by organizations inside and outside the tech industry: organizations publicly disclosing R&D and investments in AI safety and governance, disclosures around AI systems, and impact assessments of AI applications. As an industry, we can only be held accountable for outcomes if we know, and are transparent about, what those outcomes are.

Navigating Risk in Emerging, Uncharted Spaces

Without a doubt, focusing on the risks inherent in AI technology is necessary and urgent. Questions surrounding issues such as data privacy protection, bias, disinformation delivery, lack of model transparency, and unintended behaviors within AI must be acknowledged and addressed. But let’s ensure that risk is balanced appropriately within the context of the possibilities and power of AI as it continues to mature and systems gain access to more data. Already, some large language models are demonstrating the capacity to self-correct issues related to disinformation and bias. In short order, these risks, and others like them, may become self-contained as AI systems advance. Moreover, drawing a page from classical ethics, one could argue that society has a moral responsibility to shepherd AI to its fullest potential.

To do that, organizations and AI engineers will need to navigate uncharted territory on the path to Responsible AI development. Ongoing research is critical to address areas that include:

COMPLEX EMERGENT BEHAVIOR. How to design controls and assess risk for unanticipated AI actions or outputs when it is impossible to predict what the AI system will do.

REINFORCEMENT LEARNING. How to specify parameters and avoid AI misalignment when AI systems pursue the intended goal but employ unanticipated or unintended methods to do so.

HUMAN-MACHINE TEAMING. How to design controls that mitigate unpredictable human behavior when humans are augmented by AI, and how to maintain human trust in their own operating skills and expertise.

As these capabilities emerge and become more sophisticated, there will be more questions than answers along the AI journey. Therefore, a commitment to Responsible AI requires continuous optimization, with humans in the loop. For instance, when using an AI system to inform military pilots about intended targets, the humans involved need proper training and experience to know when to trust themselves over the machine, should a situation demand it.
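To make that human-in-the-loop commitment concrete, here is a minimal Python sketch of a confidence-gated decision flow. Everything in it is an illustrative assumption rather than real Booz Allen or Credo AI tooling: the Recommendation type, the gated_decision function, and the 0.90 escalation threshold are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch only; all names and thresholds here are hypothetical.

@dataclass
class Recommendation:
    label: str         # the model's proposed action or classification
    confidence: float  # model-reported confidence in [0, 1]

def gated_decision(
    rec: Recommendation,
    human_review: Callable[[Recommendation], str],
    threshold: float = 0.90,
) -> str:
    """Accept high-confidence model output; route the rest to a human."""
    if rec.confidence >= threshold:
        # High confidence: act on the recommendation, but log the decision
        # so post-hoc audits can compare machine and human judgments.
        print(f"AUDIT: auto-accepted '{rec.label}' at {rec.confidence:.2f}")
        return rec.label
    # Low confidence: the trained human operator makes the final call.
    print(f"AUDIT: escalated '{rec.label}' at {rec.confidence:.2f}")
    return human_review(rec)

# Usage: a reviewer policy that defers action on uncertain calls.
decision = gated_decision(
    Recommendation(label="target-candidate", confidence=0.62),
    human_review=lambda rec: "hold-for-analyst",
)
print(decision)  # -> hold-for-analyst
```

The design point worth noting is that both paths are logged: auditors can later compare machine-only and human-overridden outcomes, which is the kind of continuous optimization described above.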

Of course, there are scenarios and critical missions today that are too high-risk to employ AI responsibly, particularly with reinforcement learning, where the machine may act in limitless ways to achieve a goal. Ultimately, complex emergent behavior has the capacity for both risk and reward. However, stopping or slowing AI capability development across the board is not a shortcut to safety. Instead of admiring the problem, the best way to discover risks and solutions is to engage maximally with the technology in sandbox environments. In those contained environments, teams can surface and address unexpected behaviors, and then move the best designs forward into controlled, real-world applications, monitoring deployments and learning and refining over time. At that point, things will go wrong. But what matters is how organizations mitigate the chances of material risk and how they minimize the impact, learn, and correct for the next time.
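As a sketch of what that sandbox-first workflow might look like in practice, the Python below wraps an agent’s proposed actions in a contained evaluation loop that records anything outside an expected envelope. The SandboxMonitor class, the allowed-action set, and the toy agent are hypothetical stand-ins, not references to any real framework.

```python
import random

# Illustrative sketch only; classes and action names are hypothetical.

class SandboxMonitor:
    """Run an agent in a contained loop and log out-of-envelope behavior."""

    def __init__(self, allowed_actions: set[str]) -> None:
        self.allowed_actions = allowed_actions
        self.anomalies: list[str] = []

    def step(self, action: str) -> bool:
        # Unexpected actions are recorded for review, never executed.
        if action not in self.allowed_actions:
            self.anomalies.append(action)
            return False
        return True  # the caller may execute the vetted action

def toy_agent() -> str:
    # Stand-in for a learned policy that occasionally proposes
    # something outside its designers' expectations.
    return random.choice(["move", "scan", "report", "self-modify"])

monitor = SandboxMonitor(allowed_actions={"move", "scan", "report"})
for _ in range(100):
    monitor.step(toy_agent())

print(f"Blocked {len(monitor.anomalies)} unexpected actions for review.")
```

Reviewing the anomaly log is where the real work happens: each blocked action is a candidate risk to demonstrate, understand, and design controls for before anything moves out of the sandbox.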

Engineering a Responsible AI Future

As we look to the future of the field of AI ethics, an ideal outcome would be that the phrase Responsible AI becomes redundant. The foundational principles of a responsible approach, encompassing safety, governance, and ethical considerations in AI deployment, will simply evolve into the standard for how all AI work is done. And in this future, the work will transcend compliance or risk, becoming an integral component of machine learning operations (MLOps) platforms and advanced, transformative AI tradecraft.

While today’s AI systems can be overhyped, tomorrow’s AI is almost certainly underestimated in capability and scale. When deployed responsibly, AI stands as an enduring catalyst, opening new doors of possibility in medicine, public policy, and everyday life. AI will soon integrate data and domains in ways they have never before been integrated or correlated. New technologies will enable physical sensors and communications links in disconnected and denied environments. And as these systems, domains, and effects advance, AI may become the key to securing our nation against the predominant threats of this century.

Embracing a new calculus that weighs the risks and rewards present in AI systems puts federal agencies and organizations in a better position to harness AI’s potential for transformational good. Here’s where you, the reader, come in. Whether you are an AI engineer, a policymaker, a change management expert, or a human who engages with AI in any capacity, you are a practitioner of Responsible AI. Your journey starts now.

John Larson leads Booz Allen’s AI practice, with a focus on ensuring leaders across federal missions achieve AI understanding, purpose-built solutions, and accelerated adoption. Geoff Schaefer serves as chief AI ethics advisor at Booz Allen, working to make AI ethics and safety more practical, tangible, and measurable for clients and within the organization.

SPEED READ

Industry’s AI investments are surging, with particular emphasis on responsible development to unravel scientific mysteries and bolster national defense, infrastructure, and medicine.

Responsible AI is not about stifling innovation or imposing barriers but about enabling and empowering AI deployment for critical missions. It means weighing the risks against the possibilities, considering the maximum good AI can achieve for a program or mission while mitigating potential risks.

The intent is for Responsible AI to become a foundational norm rather than a catchphrase: safety, governance, and ethical considerations in AI will evolve into standard practice across all AI endeavors.

Tomorrow’s AI, when deployed responsibly, will drive innovation and become crucial to safeguarding against this century’s predominant threats.
