Velocity by Booz Allen

Four important issues to navigate:

1. Reconciling access and equity with AI solutions.

While the commercial sector can choose to cater only to its most profitable customer segments, the Federal Government takes pride in serving the entire population and delivering vital services equitably. AI can help increase the reach, accessibility, and impact of federal programs. An AI-enabled government of the future would be better able to serve citizens in ways that fit their situations and needs: on various devices, in nearly any language, at any time of day, without requiring people to have high-speed internet or to visit federal offices during business hours (requirements that inadvertently harm working families without paid time off and people without computers or broadband).

A well-known shortcoming of AI technology is its potential to propagate biases, but AI tools can also be used to reach populations that have been historically underserved. In healthcare, for example, race, gender, and socioeconomic biases in AI models have the potential to harm already marginalized populations and reinforce or magnify inequities in the system. AI can also easily exclude people who are not tech savvy; a chatbot powered by generative AI, for instance, should be fine-tuned to accommodate the digital literacy of the entire citizen population. Agencies that orient responsible product development around broad and diverse customer needs, experiences, and levels of digital literacy can use AI to advance, rather than impede, equity and access. Because federal agencies have a mission to support every citizen, they have a front-row seat to advances in responsible AI and can be a model for how other industries can experiment with inclusive innovation (read more about this topic on page 36). However, AI and technology still cannot replace human-to-human exchanges for the most critical citizen needs.

2. Using AI to augment rather than replace humans.

There are many areas where government should not put an AI agent between a person and their needs: seeking health benefits, natural disaster relief, or unemployment compensation is not like buying a pair of shoes. While AI cannot replace human agents, it can increase their capacity for the uniquely human tasks that are more complex and specialized. AI agents can answer routine citizen questions, provide personalized assistance, triage inquiries, and help human agents rapidly filter large data sets. These workforce efficiencies give employees more time to handle complicated needs and questions, where they can add more value. Meanwhile, given the high-stakes nature of federal services, citizens should be kept fully in the loop when AI is involved. Though it may seem counterintuitive, AI developers in government should consider which areas of an AI-powered system should actually be harder to use, in order to reduce user error and encourage human interaction. If an AI system is too easy, a user might overlook a small error or inconsistency on a tax form or housing application, leading to real-world, personal consequences.
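To make that last point concrete, here is a minimal sketch, in Python, of deliberate friction through a human-review gate: AI-extracted form fields are accepted automatically only when they are low stakes and the model is confident. The threshold, field names, and model interface are hypothetical assumptions for illustration, not any agency's actual system.

```python
from dataclasses import dataclass

# Hypothetical illustration: route low-confidence AI outputs to a human
# reviewer instead of acting on them automatically. The threshold and
# field names below are assumptions, not a real agency API.

CONFIDENCE_THRESHOLD = 0.90  # below this, a person must review

@dataclass
class ExtractionResult:
    field: str         # e.g., "adjusted_gross_income"
    value: str         # value the model read from the form
    confidence: float  # model's self-reported confidence, 0.0-1.0

def route(result: ExtractionResult) -> str:
    """Decide whether an extracted form field is auto-accepted or
    flagged for review; high-stakes fields always get a human check."""
    high_stakes = {"adjusted_gross_income", "benefit_amount", "ssn_last4"}
    if result.field in high_stakes or result.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"  # deliberate friction: a person confirms
    return "auto_accept"

if __name__ == "__main__":
    sample = ExtractionResult("adjusted_gross_income", "48,250", 0.97)
    print(route(sample))  # -> human_review (high-stakes field)
```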

Similarly, developers should be aware of how the design of automated systems can exacerbate or mitigate automation bias: the propensity of people to believe and act on suggestions from automated decision-making systems instead of relying on their own analysis and expertise. This issue applies to anything dealing with safety, from medical diagnosis to aviation, but also to lower-risk situations, such as digitizing paper forms that contain critical information for processing taxes and benefits.

3. Streamlining experiences through customer data while overcoming mistrust.

Personalized, AI-driven services hinge on whether citizens trust the government enough to provide the relevant information that powers the algorithms. However, according to 2023 data from the Partnership for Public Service, only about one-third of Americans say they trust the Federal Government, compared with 46% who do not. While the challenges of trust in digital society run deeper than customer experience (last year's Velocity publication focused extensively on this issue), agencies deploying AI in citizen applications must navigate the "value exchange" inherent in digital society: people are more willing to share data with companies if they perceive there will be immediate and tangible value in return. This same value exchange underlies customer interactions in the federal space. Providing value in the moment through an AI output creates a continuous value exchange:

• Agencies use AI to improve the immediacy and tangibility of their services.
• Citizens perceive the value from exchanging data for services and become more willing to share data on a continued basis.
• The cycle continues as citizens receive immediate impact and ultimately reduce the time and effort needed to search and apply for services.

Of course, the significant benefits to the customer from this critical data exchange need to come with increased investments in more sophisticated data protection.

Spotlight on Recreation.gov

Recreation.gov is the government's central travel planning platform and reservation system for nine federal agencies, where the public can reserve camping sites, buy park passes, find recreation information, and much more. Its customer service operation handles high call volumes covering a wide variety of issues that can take significant time and resources to address.

An AI solution, deployed across channels including web chat, voice, and text messaging, is streamlining the customer experience and improving the quality of services. The technology uses voice recognition and natural language processing to analyze callers' speech, and machine learning to handle requests for information and triage calls before they are transferred to live agents. Automated screening and routing of simple requests provide faster and more accurate responses to many questions, allowing live agents to spend the majority of their time resolving more complex customer interactions.
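As a rough illustration of that triage step, the sketch below trains a toy intent classifier with scikit-learn and routes simple intents to self-service while sending the rest to live agents. The utterances, intent labels, and routing rules are invented for this example; they are not Recreation.gov's actual model or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: utterance -> intent. A production system would use
# far more data and a stronger language model; this only shows the shape
# of the triage step.
utterances = [
    "I want to reserve a campsite for next weekend",
    "How do I buy an annual park pass",
    "My payment failed and I was charged twice",
    "I need to cancel my reservation and get a refund",
    "What are the fees for a group site",
    "The website will not let me log in to my account",
]
intents = [
    "reservation", "pass_info", "billing_issue",
    "cancellation", "pass_info", "account_issue",
]

# Intents simple enough to answer automatically vs. those routed to agents.
SELF_SERVICE = {"reservation", "pass_info"}

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

def triage(utterance: str) -> str:
    intent = model.predict([utterance])[0]
    return "self_service" if intent in SELF_SERVICE else "live_agent"

print(triage("I want to reserve a campsite in July"))  # -> self_service
print(triage("I was double charged for my booking"))   # -> live_agent
```

The design lever worth noting is the routing set: expanding or shrinking which intents count as self-service is how an operation tunes the balance between automation and live-agent time.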


4. Mobilizing for an increased attack surface and new threat actors employing AI for fraud.

Private- and public-sector organizations, even those not on the cutting edge of using AI for customer service, face growing threats from bad actors using tools like ChatGPT to commit fraud. Ever-evolving schemes include impersonating people through advances in deepfake visuals and bypassing filters and monitoring systems to steal information. Cyber criminals are growing increasingly skilled at getting through defenses such as continuous monitoring. Their successful exploits disrupt operations, generate negative publicity and political backlash, and degrade customer experience. Consider a stolen tax refund: the targeted individual suffers financially and has to spend hours correcting a mistake that was not of their making.

While AI is the source of new fraud schemes against customers, it can also be employed to counter these criminal activities as part of a multidisciplinary approach (for more on cybersecurity, see page 64). Because predictive models are largely trained on known categories of fraud schemes, they routinely fail to detect novel techniques; more sophisticated analytics and the latest AI and machine learning techniques can detect new fraud schemes that are still needles in a haystack. Oftentimes bad actors test a few new schemes to determine which ones will work, then move on to other schemes once they realize a particular avenue is monitored. AI can detect faint signals and suspicious patterns to identify these schemes so they can be stopped before they have a broad impact on customers.

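One way to catch the "needles in a haystack" that supervised models miss is unsupervised anomaly detection, which does not rely on known fraud labels. The sketch below uses scikit-learn's IsolationForest on synthetic transaction features; the features, values, and contamination rate are assumptions for illustration, not a production fraud pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic illustration: each row is a transaction described by a few
# engineered features (amount, hour of day, claims filed in past 30 days).
# Real fraud pipelines use far richer features; these are assumptions.
rng = np.random.default_rng(seed=7)

# Bulk of legitimate traffic clusters around typical behavior.
normal = rng.normal(loc=[120.0, 13.0, 1.0], scale=[40.0, 3.0, 1.0],
                    size=(2000, 3))

# A handful of novel-scheme transactions that match no known fraud label:
# large amounts, odd hours, a burst of recent claims.
novel = np.array([[980.0, 3.0, 9.0], [1150.0, 2.0, 12.0]])

X = np.vstack([normal, novel])

# Unsupervised: the model never sees fraud labels, so it can flag
# patterns that supervised models trained on known schemes would miss.
detector = IsolationForest(contamination=0.005, random_state=7)
detector.fit(X)

flags = detector.predict(novel)  # -1 means "anomalous"
print(flags)                     # typically [-1 -1]
```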
This changing digital ecosystem will require ongoing dialogue and education among federal employees and the public to create broader awareness of advances in fraud and to help citizens spot the tell-tale signs of bad actors using AI against them.

