Velocity by Booz Allen

Recent Innovation Transforms a Long History of Research

Generative AI as we know it today is relatively new. In fact, seasoned researchers and engineers have been surprised by the fidelity of language generated by large language models (LLMs) like ChatGPT, but many of the models and approaches underlying them have been studied for years. The seminal 2017 paper “Attention Is All You Need” by researchers at Google laid the foundation for what we now know as generative AI. It introduced the transformer architecture, a way for models to direct their “attention” to specific parts of data, massively increasing their capabilities (a simplified sketch follows below). These findings were groundbreaking and now serve as the basis for nearly all state-of-the-art generative AI systems.

So, at their core, how do LLMs work? LLMs are trained to generate responses by predicting the words that will produce the most desirable response. They “learn” to do this by studying the relationships between words in an enormously large collection of text produced by humans. The training process requires specialized hardware and so much computational power that researchers have raised concerns about the energy consumption and environmental impact of training these models (for more on sustainability, see page 52).

Throughout the training process, information about the relationships of words in text is processed through deep neural networks consisting of layers of components called transformers, which can capture complex linguistic structures and subtle details about meaning and interaction. Although the structure of these networks is intricate, it is relatively simple compared to the complex and unpredictable results the algorithms produce. There is now an entire field of research devoted to “emergent behaviors,” actions or abilities that an AI system was not explicitly designed to perform.

Experts are working to understand how LLMs learn, how to determine what they know, how to better train them, and how to shape their output for specific use cases. Foundational LLMs are rapidly being deployed with advanced techniques, such as supervised fine-tuning and reinforcement learning, which reward models for their predictive abilities and success during training or when completing a task. While many technologists have long studied generative AI techniques, new research brings renewed opportunity to explore the technical approaches required to apply them to their fullest potential.
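To make the attention mechanism concrete, the following is a minimal sketch of the scaled dot-product attention operation from “Attention Is All You Need,” written in plain NumPy. The shapes, names, and toy inputs are illustrative, not drawn from any production system.

```python
# Minimal sketch of scaled dot-product attention, the core operation
# introduced in "Attention Is All You Need" (Vaswani et al., 2017).
# All shapes and names here are illustrative.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute attention weights and the weighted sum of values.

    Q, K, V: arrays of shape (seq_len, d_k) holding the query, key,
    and value vectors for each token position.
    """
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled to keep the
    # softmax well behaved as d_k grows.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns scores into a probability distribution per query;
    # this is where the model "directs its attention."
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mix of all value vectors.
    return weights @ V

# Toy example: 4 token positions, one 8-dimensional attention head.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

In a full transformer, many such attention heads are stacked in layers, and the network’s final layer produces a probability distribution over the vocabulary; training nudges that distribution toward the next word actually observed in human-written text, which is the word-prediction objective described above.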

Harnessing the Good and Anticipating the Bad

The promise of generative AI is undeniable. But enterprises have to navigate dynamic ethical implications, bias, and potential security vulnerabilities, and then implement mitigation strategies (see Figure 2). Risk management for generative AI is not intended to curb applications of the technology; rather, it empowers IT and mission leaders to deploy more use cases with confidence. A comprehensive approach includes the following steps:

1. Codify and clarify policies that outline acceptable use cases, guidelines for content generation, and measures to ensure ethical and responsible use.

2. Commit to transparency by documenting the limitations, biases, potential inaccuracies, and training data of the generative AI system.

3. Deploy traceability mechanisms to track the sources and origins of generated content, for example, by maintaining records of the training data, algorithms used, and decision-making processes (a minimal sketch of such a record follows this list).

4. Audit training data to guard against biased or offensive content generation. Training data is crucial in shaping model behavior, so diverse, representative, and unbiased data can minimize the risks associated with generating unwanted or inappropriate content.

5. Extend security measures to protect against potential vulnerabilities and attacks. This includes securing the underlying codebase, implementing secure data handling practices, and guarding against malicious uses of generative AI systems.

6. Monitor and evaluate the performance, outputs, and impacts of generative AI systems. Continuous evaluations and audits can help identify emerging risks, detect potential biases or inaccuracies, and inform necessary adjustments for responsible use.

7. Educate employees on the risks, limitations, and ethical considerations of generative AI. By investing in a culture of awareness and responsible use, the workforce can actively contribute to risk mitigation efforts.
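As a concrete illustration of step 3, here is a hedged sketch of what a traceability record might look like: each generation call is wrapped so that provenance metadata is appended to an audit log. The generate() stub, field names, and file path are hypothetical, not taken from any specific product.

```python
# Illustrative sketch of the traceability mechanism in step 3: every
# generation is logged with enough provenance metadata to trace where
# a piece of content came from. All names here are hypothetical.
import hashlib
import json
import time

AUDIT_LOG = "generation_audit.jsonl"

def generate(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"[model output for: {prompt}]"

def generate_with_provenance(prompt: str, model_id: str,
                             training_data_version: str) -> str:
    output = generate(prompt)
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        "training_data_version": training_data_version,
        # Hashes let auditors match records to content without
        # storing sensitive prompt text in the log itself.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return output

text = generate_with_provenance(
    "Summarize our Q3 results.",
    model_id="demo-llm-v1",
    training_data_version="corpus-2023-06",
)
```

The design choice worth noting is that the log is append-only and content-addressed: even if generated text is later edited or deleted, the hashes tie any retained copy back to the model version and training-data snapshot that produced it.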

Rise of Generative AI

The rapid impact of generative AI, particularly ChatGPT, has prompted significant investments and has already revolutionized businesses and industries by augmenting core processes and support functions. Organizations are increasingly expected to deploy generative AI to drive productivity, streamline processes, and disseminate information.

A Meteoric Rise: Reaching 1 million users in 5 days, OpenAI’s ChatGPT serves as an exemplar for the speed with which people and enterprises have embraced the value of generative AI.

Major Players: Key innovators in the generative AI landscape include Google, Microsoft, Amazon, and IBM. These companies are directly embedding generative AI technologies into their products, leveraging their respective strengths to advance the field.

Growing Enterprise Interest:
• In 2023, 70% of organizations are exploring generative AI, with 19% in pilot or production, according to Gartner®, and Forrester predicts that about 10% of Fortune 500 enterprises will generate content with AI tools.
• By 2024, approximately 40% of enterprise applications are expected to incorporate embedded conversational AI (Gartner®).
• By 2026, generative design AI is poised to automate 60% of the design effort for new websites and mobile apps (Gartner®).
• By 2027, nearly 15% of new applications will be automatically generated by AI without human intervention (Gartner®).

Figure 2: Mitigation strategies for seven critical risk areas when engaging with generative AI

Risk: Hallucinations or Confabulations. Generative AI may create output that seems factual but is not.
Mitigation strategies:
• Encourage critical thinking and fact-checking of generated content.
• Manually verify generated content with human reviewers.
• Implement algorithms and filters to detect and flag potential hallucinations.

Risk: Harmful Content. Models trained on data that includes internet content can generate negative, offensive, or damaging output.
Mitigation strategies:
• Implement filters and content moderation mechanisms to prevent the output of harmful content.
• Regularly monitor and review system-generated content to ensure its appropriateness.

Risk: Algorithmic Bias. Biased training data can perpetuate existing biases and create harm.
Mitigation strategies:
• Conduct bias audits and continuously monitor the training data.
• Implement techniques such as data augmentation, diverse training data, and fairness-aware training to mitigate bias.

Risk: Misinformation and Influence Operations. Generative AI can be used to create realistic yet fabricated content for disinformation campaigns.
Mitigation strategies:
• Implement systems to detect and flag potential misinformation generated by AI.
• Enhance media literacy among users to identify and challenge false information.

Risk: Intellectual Property (IP). Generated content may violate someone else’s intellectual property rights or lead to IP leakage.
Mitigation strategies:
• Conduct thorough IP clearance checks and ensure compliance with copyright and trademark laws.
• Implement data protection measures to prevent IP leakage by employees interacting with generative AI systems.

Risk: Privacy. Mishandling of personally identifiable information (PII) during input or output can lead to privacy breaches.
Mitigation strategies:
• Implement strict data privacy protocols and ensure compliance with regulations, such as the General Data Protection Regulation (GDPR).
• Conduct privacy impact assessments and implement safeguards to prevent the input and output of PII (a simplified sketch follows this figure).

Risk: Cybersecurity. Outputs of generative AI systems can introduce security vulnerabilities, and bad actors can exploit these tools for malicious purposes.
Mitigation strategies:
• Conduct security audits and penetration testing to identify and address vulnerabilities in generated content and code.
• Implement robust cybersecurity measures to protect against potential attacks leveraging generative AI.
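As one concrete example of the privacy safeguards in Figure 2, here is a minimal, hedged sketch of a pattern-based PII screen applied to prompts before they reach a model. The patterns and names are illustrative only; production systems would rely on more robust detection, such as trained named-entity recognition models.

```python
# Illustrative PII screen for the Privacy row of Figure 2: flag
# likely PII before a prompt is sent to a generative AI system.
# The patterns below are deliberately simple examples.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the names of PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

prompt = "Draft a letter to jane.doe@example.com about SSN 123-45-6789."
findings = screen_for_pii(prompt)
if findings:
    # Block or redact before the prompt ever reaches the model.
    print(f"Blocked: prompt appears to contain PII ({', '.join(findings)})")
```

The same screen can be run on model outputs, covering both directions of the PII flow the figure warns about.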


GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
