Velocity by Booz Allen


AI’s ability to process zettabyte-scale datasets is already being seen in generative AI and large language models, and new methods and approaches are being applied to less structured data, such as imagery. Synthetaic Chief Executive Officer Corey Jaskolski says, “The future of AI lies in intuitive tools that put that power into the hands of subject-matter experts who can elucidate insights from the AI in real time and then make meaning from the insights it generates.”

As AI/ML capabilities are paired more deeply with human analysts, national security stakeholders will see costs greatly reduced and mission outcomes dramatically enhanced. “The ability for analysts to easily capture, collaborate, and automate their tradecraft frees them to perform higher level analysis instead of worrying about data representation and translation,” says Nask Incorporated Technical Director Ken Pratt. “This is a force multiplier allowing fewer analysts to perform more effective and valuable analysis against a larger corpus of data.”

Beyond automation, the decentralization of information processing heralds remarkable efficiencies. Here, raw data is exploited closer to its source, and integration occurs downstream, closer to the analyst or user. The embrace of decentralized workflows and the synergy of specialized tools significantly amplify the pace of exploitation and insight generation. This is true for nearly all data types and sources, including text-based foreign media and social media; published databases; and technical data from satellites, aircraft, and other sensors. An excellent example is the potential for decentralized Commercial Synthetic Aperture RADAR (COMSAR) collection and exploitation closer to the source, which would maximize the community’s ability to harness the rich remote sensing metadata while minimizing the costs of transferring and storing large COMSAR imagery files. “By delivering and processing that data at the edge, the analytic insight delivery time to the user is measured in seconds and minutes, not hours,” says Ursa Space Vice President of Government Programs George Flick. “This allows for quicker situational awareness and decision making for users and operators. Speed is essential.” New and emerging technologies and techniques in AI/ML are designed for large and complex datasets, and advanced algorithms shine the closer they’re hosted to the raw data.

OSINT has continued to evolve alongside the emergence of new data sources and techniques, spanning the spectrum from free or open to commercial realms. The contemporary landscape boasts an unprecedented volume and diversity of OSINT, demanding more robust analytic capabilities in AI/ML and automation to sustain an intelligence edge and enable the IC to harness the massive amounts of information proliferating in the public sphere. The key to leveraging these sources and capabilities, and to driving enhancements for national security missions, is to purposefully discover, identify, process, and integrate the right data, not to try to process every byte of unclassified data that’s out there. By understanding the value of the data, decentralizing its exploitation, and layering it through AI/ML technologies and techniques, the national security community can create an advantage through these sources and insights.

Eric Zitz is a mission technology leader for Booz Allen’s national security business. He develops new capabilities and solutions that integrate data science, modeling, and automation to enhance intelligence operations.

Gabi Rubin is a leader in Booz Allen’s OSINT capability Global4Sight®, which offers language-enabled and data-driven intelligence solutions for civilian, intelligence, and military agencies.


“In the world of startup valuations, there’s generative AI—and everything else.”

In an article whose title shares that quote, PitchBook data spotlights the striking separation between early-stage AI startups and other young companies in initial funding rounds. In 2023, generative AI companies’ pre-money valuations increased an incredible 16% from the prior year, compared to a drop of 24% for startups in all other sectors attempting series A and B funding (see Figure 1).

The enthusiasm from investors is well founded: Generative AI recently took AI from the realm of engineers and democratized the technology in a way that no other low-code/no-code platform has ever done. OpenAI’s ChatGPT launched into the public consciousness with a bang, setting the record for the fastest-growing consumer base for an application, and immediately demonstrated far-reaching capabilities as an AI assistant on everything from composing recipes to summarizing complex topics and generating computer source code. Google and Bing have incorporated generative AI into their search systems, enabling direct responses to queries and sparing users the need to sift through numerous webpages to find answers. Service providers like Midjourney, Stability AI, and OpenAI’s DALL-E have harnessed generative AI to create accurate and striking images from text-based descriptions of the desired output.

Despite the groundbreaking nature of these services, enterprises and their users should take a deliberate approach. Generative AI raises fair use and copyright considerations based on how models use training data. Models can easily mislead users, novices and experts alike, making it difficult to distinguish fabrications from factual content; this particular risk manifests as “hallucinations,” outputs from models that sound highly plausible and convincing but are simply made up or incorrect. And of course, for enterprises with sensitive data, there’s the serious risk of data spillage by employees sharing confidential information with open-source generative AI models.

The immense power of generative AI is ripe to uncover transformative opportunities across government missions. But like any tool, generative AI is only effective if it is applied with purpose. Therefore, rather than asking “How can I use generative AI?” the more nuanced, strategic question is: “Where will generative AI tackle a challenge better than the other tools in our toolbox?” This article explores the singular role of generative AI in revolutionizing government missions while agencies navigate emerging challenges; weigh the costs of application; and ensure responsible, effective use.

[Figure 1: Generative AI Startups vs. Other Startups. Comparison of median early-stage, pre-money valuations (in millions) for generative AI startups and all startups. Source: PitchBook data. Geography: U.S. *As of May 18, 2023.]

The modern landscape of national security challenges compels the U.S. intelligence community to swiftly process and disseminate information with unprecedented speed. This urgency is heightened by the exponential rise of publicly available information, intensifying the demand for agility and adaptability. The ubiquity of public data narrows the advantage of classified intelligence sources, necessitating continuous evolution and integration. Open-source intelligence (OSINT) strikes a balance between secrecy and rapid information sharing, offering new perspectives on intelligence collection processes. The potential of OSINT is not static; it evolves to encompass diverse sources and domains, from text-based to technical data, fostering a holistic intelligence picture. National security organizations leverage modern technologies to tap into the vast trove of publicly available and commercially sold data. Automation, decentralized data processing, and AI/ML capabilities expedite the exploitation of data previously requiring manual efforts. The key is to identify valuable data, decentralize its exploitation, and integrate it with AI/ML technologies.







