Velocity by Booz Allen



Can Quantum Supercharge AI?
QUANTUM COMPUTING’S ROLE IN THE EVOLUTION OF AI
Isabella Bello Martinez, Ryan Caulfield, and Brian Rost

Amplifying AI Advantages

Real-time communication and collaboration between LLMs that are continually trained on trusted data opens the way to multiple advances. For example, the practice:

• Frees up operators to focus on assessment rather than switching screens and manually evaluating and comparing data to anticipate threats.
• Enables automated fusion of classified, civil government, and commercial data to train more powerful, precise AI models.
• Provides each stakeholder with automatic access to data, improving decision making across domains.

Training, Testing—and Then Trusting

The concept of training LLMs to be networked starts with a focus on the mission and on ensuring compliance with ethical guidelines like the AI Bill of Rights and the NIST AI Risk Management Framework. AI scientists need to confirm that data sources, including other LLMs, are trained on unique datasets from verifiable sources. Once data ingestion is assured and a training pipeline and prompt templates are built, developers can quickly incorporate intelligent agents and tools that integrate easily with trusted sources. Meanwhile, strategies can streamline the process. For example, training on servers before migrating systems to the cloud saves on costly cloud computing.
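As a loose illustration of the networking idea above, the sketch below passes one task through two stub "models" in sequence, each adding its specialized analysis. The model names, message format, and relay logic are hypothetical stand-ins for illustration only, not an actual multi-LLM framework.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    content: str

@dataclass
class StubLLM:
    name: str
    specialty: str
    log: list = field(default_factory=list)

    def respond(self, msg: Message) -> Message:
        # A real LLM call would go here; the stub simply annotates the
        # incoming content with this model's specialty.
        self.log.append(msg)
        reply = f"[{self.specialty}] assessment of: {msg.content}"
        return Message(sender=self.name, content=reply)

def relay(task: str, models: list) -> str:
    # Pass a single task through each model in turn, so each one
    # layers its specialized analysis onto the running result.
    msg = Message(sender="operator", content=task)
    for model in models:
        msg = model.respond(msg)
    return msg.content

# Hypothetical specialties chosen to match the article's space context.
orbit_llm = StubLLM("orbit-model", "conjunction analysis")
threat_llm = StubLLM("threat-model", "threat characterization")
print(relay("object approaching satellite A", [orbit_llm, threat_llm]))
```

The chain pattern here is the simplest possible topology; a production system would route messages by capability rather than in a fixed order.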

Focused, nested training using trusted data, with a strategic intersection between the LLMs, is critical to ensuring rapid, accurate returns. Data scientists need to go through the system and assess different weights, inputs, and other components, testing its output against truth data and then entrusting it with small tasks as a first step toward more strategic ones. For example, a model could be asked to develop a red-team attack scenario that human experts can incorporate into a training exercise.

Linking LLMs Can Launch Adaptive Space Awareness

As General Chance Saltzman, the Space Force’s chief of space operations, emphasizes, resilience is essential and continuous awareness is critical for the Space Force’s strategy of competitive endurance. Networking LLMs to leverage their unique strengths ensures real-time advances as they collaborate on tasks. Linking these models is a practical way to deliver increasingly powerful results. It’s scalable, allowing the networking of multiple LLMs. It’s model-agnostic, so it can be used with any LLM. And it holds the promise of connecting the vast, siloed datasets that are key to avoiding celestial collisions and countering adversarial attacks.

Ron Craig is vice president of space strategy and solutions at Booz Allen. Michelle Harper leads software projects that accelerate integrated capabilities for Booz Allen clients, including the Space Force.

Although quantum machine learning is still in its early stages, the progress made so far indicates that QML will have a transformative effect on AI. The impact will be felt across diverse fields, many of which are directly aligned with government interests, such as designing better materials for assets in space, improving health diagnostics, and advancing computer vision for superior ISTAR (intelligence, surveillance, target acquisition, and reconnaissance).

An Evolution, in Partnership with Classical Algorithms

Humanity has been “computing” since we first started using numbers. While the types of computations and the technology used today are drastically more sophisticated than keeping track of bushels of wheat on an abacus, the fundamental computational model has remained the same. A computer carries out these computations using bits, which are objects that can be in one of two states—like “on” or “off”—called 0 and 1.

For the first time in history, we’re starting to compute using a completely novel computing paradigm known as quantum computing. Based on quantum mechanics, quantum computing is expected to have a wide-reaching and transformative impact. At the core of quantum computation are qubits (quantum bits), the quantum version of bits. Like a bit, a qubit can exist in the 0 state or the 1 state. Unlike a bit, a qubit can also exist in a uniquely quantum state that is analogous to being partly 0 and partly 1, or existing along a continuum between 0 and 1. This makes qubits more complex than bits, enabling quantum computing to tackle problems well beyond what would be possible with classical computing.
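The "partly 0 and partly 1" idea can be made concrete with a few lines of NumPy. The state-vector representation below is standard textbook notation, not tied to any particular quantum SDK: a qubit is a 2-component complex vector of amplitudes, and measurement probabilities are the squared magnitudes of those amplitudes.

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)  # the definite "0" state
one = np.array([0, 1], dtype=complex)   # the definite "1" state

# A superposition that is equally "partly 0 and partly 1":
# equal amplitude on both basis states, normalized to length 1.
superposition = (zero + one) / np.sqrt(2)

probs = np.abs(superposition) ** 2
print(probs)  # [0.5 0.5]: a 50/50 chance of measuring 0 or 1
```

Varying the relative amplitudes sweeps the qubit along the continuum between 0 and 1 that the text describes.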

DID YOU KNOW: Quantum mechanics is the physics that governs very small (particles, atoms), very cold (superconductors, superfluids), and exotic systems (lasers, semiconductors, stars), from which surprising behavior arises.

Researchers are experimenting with different ways of making qubits, and no clear winner has emerged yet. Implementations range from basic quantum objects such as atoms, ions, or light (photons) to more exotic quantum systems such as nanodiamonds, superconductors, and more.

At the heart of the advantage of quantum computation is a uniquely quantum phenomenon called entanglement, whereby multiple qubits become fundamentally linked and share information between themselves in ways not possible classically. The operations implemented by a quantum computer are called quantum gates. Both quantum and classical algorithms can be specified as circuits, a graphical representation of a series of gates to be applied to the qubits.
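A minimal sketch of entanglement, assuming only NumPy: applying the standard Hadamard and CNOT gate matrices to two qubits produces a Bell pair, a state whose measurement outcomes are perfectly correlated in a way no pair of classical bits can be.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # controlled-NOT gate

state = np.kron([1, 0], [1, 0]).astype(complex)  # both qubits start in |00>
state = np.kron(H, I) @ state                    # superpose the first qubit
state = CNOT @ state                             # entangle the pair

probs = np.abs(state) ** 2
print(probs.round(3))  # [0.5 0. 0. 0.5]: only 00 or 11, never 01 or 10
```

Measuring either qubit instantly determines the other's outcome; that correlation is the entanglement the article describes, written out as two gates in a circuit.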

Quantum mechanics and machine learning, two of the most transformative forces of the past two centuries, are converging to mark the start of a new era of AI. This convergence—known as quantum machine learning—has the potential to address limitations of classical (meaning “not quantum”) machine learning, particularly processing power and speed. Classical machine learning has undoubtedly made significant strides in data processing and predictive analytics. Yet it is often limited by computer speed and memory, especially when dealing with large and complex datasets. Quantum machine learning (QML) leverages some of the unique features of quantum systems to overcome these limitations.

QML is currently on the cusp of transitioning from offering a purely theoretical advantage to finding real-world, high-impact applications. In the realm of drug discovery, where the search for new drugs often involves navigating a vast space of molecular combinations, QML has shown potential for identifying promising compounds more efficiently. Recent evidence also suggests QML provides advantages for computer vision, where identifying key features in unlabeled images is becoming increasingly important. Relatively small quantum models have the power to perform well on even the largest and most complex datasets that would otherwise require impractically large classical models. These and other advantages of QML arise from the fact that quantum systems are inherently more complex and more capable of representing complicated patterns than comparable classical systems.
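One way to see the scaling behind that complexity claim: fully describing an n-qubit state classically takes 2**n complex amplitudes, so the classical description doubles with every added qubit. The arithmetic below is a back-of-envelope illustration of that growth, not a statement about any specific QML model.

```python
# 16 bytes per complex amplitude (two 64-bit floats).
BYTES_PER_AMPLITUDE = 16

for n in (10, 20, 30, 40):
    amplitudes = 2 ** n
    gib = amplitudes * BYTES_PER_AMPLITUDE / 2**30
    print(f"{n} qubits -> {amplitudes:,} amplitudes (~{gib:,.1f} GiB)")
```

By 40 qubits the classical description already needs on the order of 16 TiB of memory, which is why comparatively small quantum models can represent patterns that would require impractically large classical ones.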


There’s an urgent need for advanced AI as an expected surge in the number of active satellites by 2030 makes space increasingly “competitive, congested, and contested.” Networking large language models (LLMs) can enhance space domain awareness and address these challenges. The BRAVO hackathon showcased the transformative capabilities of networked LLMs in space operations. Once linked, these models can communicate, share data, and amplify space operators’ awareness, leading to more efficient and precise decision making. By allowing LLMs to collaborate and learn from each other, networking them can provide comprehensive insights into space behaviors and threats. This interconnected system promises enhanced space domain awareness and strategic advantage.



