
Beyond Ethics: Defining Responsible AI

Aristotle famously focused on a principle he called eudaimonia, the fusion of human happiness and flourishing that signified a life well lived. Centuries later, utilitarianism held that morality stems from producing the greatest good for the greatest number of people. As Aristotle might say if he were around: The future of AI is bright if we make specific choices to harness its power for the benefit of civilization.

Society is already recognizing that AI is poised to become the most impactful technology of our lifetime, in ways not yet fully imagined. Glimpses of what’s possible can be found in stories such as DeepMind’s AlphaFold solution to the protein-folding problem, which had been impenetrable to scientists for more than 50 years. Using AI, AlphaFold was able to “fold” and predict the structure of nearly all proteins known to science—a landmark accomplishment set to fundamentally advance disease detection and novel drug development.

As we look to the future of AI innovation, decision-makers and developers who are asking, “Is AI right or wrong?” should instead use their resources to advance tools that reinforce Responsible AI and embrace a mindset focused on harnessing the net good that AI systems will generate over their lifespan.

The field of Responsible AI is still in its earliest, formative days, but it has been in discussion for some time, from NIST’s AI Risk Management Framework to the Department of Defense’s Ethical Principles for AI. The Federal Government is a steadfast leader in the pursuit of policies and protocols, with high-profile directives for agency implementation already established and White House initiatives to bolster investments in Responsible AI to protect the American public. Lawmakers are moving forward cautiously with new legislation—focused in the short term on understanding the complexities of the technology before proposing regulations. Meanwhile, there has been a profusion of guidance from industry and academic institutions to help evaluate the ethical dimensions of AI systems.

But beyond matters of ethics, what is Responsible AI? A comprehensive, pragmatic definition of Responsible AI includes three distinct domains, each with its own body of research and technical considerations:

SAFETY: The ability to adequately control an AI system’s behavior, use, and functionality in the field—and remain safe against nefarious actors.

GOVERNANCE: The alignment of relevant policies, laws, executive orders, or other regulatory guidance in a programmatic fashion.

ETHICS: The assessment of the risks and values of an AI system’s deployment that considers net positive gains versus potential harms—and the decision to use that system or not.

Many of the principles and frameworks for using AI responsibly are left to platitudes and lofty language, lacking tangible tools or clear-cut, actionable approaches. The above domains, taken together, provide a roadmap and a broad set of practices and operational parameters for Responsible AI, but they require that leaders start to test tangible solutions. Each domain can be framed as a question:

Safety: Can we control the AI’s use and functionality?
Governance: Does it align to policy, principles, and regulatory guidance?
Ethics: Should we use the AI in this way, and what are the possible harms?

A Practice of Solutions—Not Barriers

From transforming the provision and costs of healthcare, to securing our nation’s most critical infrastructure and augmenting our warfighters’ capabilities, the stakes are too high for missions of national priority to get by without AI. In the future, organizations that don’t effectively deploy these emerging capabilities will be vastly constrained.

THEIR WORLD WILL GET SMALLER. THEIR IMPACT WILL BE RESTRICTED. THEIR DECISIONS WILL LACK AN EDGE.

Why? Their world will get smaller because they’ll no longer be able to keep pace with the exponential growth of information expanding around them. Their impact will be restricted because modern global challenges will not be a fit for old solutions. And their decisions will lack an edge because speed, precision, and information superiority are the markers of competitive advantage.

Because of this reality, the practice of Responsible AI is not (and cannot be) about establishing a risk police as a barrier to innovation, or a mechanism to tell teams “No!” or “Start over!” Rather, it needs to be a solution-oriented practice that enables and empowers AI deployment in as many critical missions as possible. Its core purpose is to help navigate risks and challenges before AI systems are set free in the wild, so that organizations can, ultimately, extend use cases and accelerate the technology boldly and with confidence.

In short, Responsible AI is a discipline that weighs risk against possibility. It positions leaders to consider real-world opportunity costs and put decision-making tools into the hands of those who can evaluate fundamental questions:

• What is the maximal good that AI could do for a particular program and mission?
• What are the potential risks of using AI in this way?
• How can we mitigate these risks so that the maximal good can be achieved?

The goal: deploying AI systems into the field not with fingers crossed, but with high confidence that the systems are safe, transparent, ethically sound, and conform to our societal and democratic values.

Of course, efforts to evaluate the opportunity cost of an AI system must be calibrated to the mission context, specific objectives, and potential impacts—good and bad. In many cases, the situational context is straightforward, such as scenarios in which using AI can result in life-or-death outcomes (e.g., combat) versus scenarios in which AI helps drive more effective operations and management (e.g., customer service). In other cases, the risk depends on sociotechnical factors, such as bias in the AI system’s training data.

But bias itself is complicated, with multiple nuanced dimensions. For instance, imagine a physician uses an AI system to help diagnose a veteran’s medical condition. The patient is from an underrepresented demographic and has a complicated condition. In this imagined scenario, historical discrimination has degraded the quality of the data the AI was trained on, and the AI simply hasn’t seen enough cases of this particular condition and its variations to evaluate it accurately. The combined result is a “biased” diagnosis of the patient—both for classically prejudicial reasons and because of the rarity of the disease itself. Both constitute bias, but only one is typically recognized as such.

Zooming out, these issues of bias will manifest in all types of AI systems and applications. But the healthcare example above demonstrates how using AI in certain everyday situations can pose more risk and personal harm to a much bigger cross-section of society than an AI system powering satellites and drones.

The power of Responsible AI in practice is that it codifies the language through which teams discuss and weigh this kind of risk calculation. Standardizing the lens through which AI is viewed makes it possible to measure that risk within a broader AI portfolio while calibrating it to the specific mission context. It opens the door to practical actions and creative solutions that mitigate anticipated risks and let AI get to work.

While going from principles to operations is hard, federal agencies and other organizations don’t need an AI ethicist on staff or an extensive suite of capabilities in place to put a Responsible AI approach into practice. And they certainly don’t have to hold back AI development. However, they do need a well-designed framework and a purposeful process for navigating trade-offs and opportunity costs. Robust technical guardrails and engineering practices empower leaders to evaluate the use of AI across increasing lines of work and expanding mission sets—easing significant pressure and risk that would otherwise prevent mission-critical innovation.
