Managing Edge AI Sprawl
SIMPLIFYING COMPLEXITY BEYOND THE ENTERPRISE
Brad Beaulieu, Beau Oliver, Josh Strosnider, and Rebecca Allegar
Contributions from Eric Syphard and Greg Kacprzynski

The evolution of mission operations far beyond enterprise boundaries, from remote work sites to the battlefield and outer space, continues to accelerate through edge computing technology. For the Federal Government in particular, edge technology is enabling agencies to harness powerful analytics and AI in the field and address growing mission requirements to:
• Enhance resilience and redundancy beyond traditional IT architectures
• Improve situational awareness and decision making at the operator level
• Keep data at the point of processing, due to volume and security, rather than backhauling it to the enterprise

“As agencies generate massive amounts of content and data at the edge from things like sensors, cameras, drones, industrial machines, and healthcare equipment, the data must be processed as close to the point of origin as possible for many applications to be effective,” said Ramesh Kumar, head of product and solutions for AWS Snow services. “We believe that edge native applications for IoT, data and image analytics, and AI/ML will grow in prevalence across the tactical edge with magnified growth for mission solutions.”

But as advanced edge AI capabilities multiply, so do the devices, systems, and resources needed to enable highly specialized mission sets. What should agencies consider when faced with this growing “edge sprawl,” and how should they adjust their design and management of AI models to reduce complexity from enterprise to edge and back? For many, the key lies in engineering backward from the mission and committing to open, modular architectures that support robust technical performance across enterprise-to-edge continuums.

Edge Sprawl, Dissected
Edge sprawl occurs when many mission-specific devices and systems operate independently in a fractured ecosystem that grows incrementally in size and diversity over time. Undoubtedly, organizations need to build, manage, and train new models rapidly and effectively across the edge environment. But what seem like quick wins can actually hinder the mission when a thoughtful plan isn’t in place for operating the ecosystem as a whole. Expansion of various mission applications for AI and edge technology leads to bespoke tools that agencies are unable to manage or scale for future system requirements. Within mission sets, each distinct activity may need its own set of highly tailored capabilities with unique form factors and hardware configurations. While increasingly critical, these single-use, single-mission solutions are a central cause of edge sprawl. For some organizations, the myriad tools and platforms risk spiraling toward uncontrolled complexity that prevents the sharing of data and distribution of insights between edge and enterprise and among edge systems. Where edge sprawl persists, organizations face a series of interrelated risks that can diminish mission outcomes:
• SECURITY BREACHES: Vulnerabilities can quickly move from the edge to the larger network, as with machine learning (ML) models compromised by poisoned data.
• INTEROPERABILITY CONSTRAINTS: Silos make data and model sharing inefficient or impossible, hindering decision making, scale, and affordability.
• VISIBILITY LIMITS: Barriers emerge that impair situational awareness at the edge and visibility across large-scale mission domains.
• DECREASED FLEXIBILITY AND SCALABILITY: Mission platforms may lack the capacity to evolve in line with changing objectives.
• RISING COSTS: Operations and maintenance costs increase, and agencies must cope with constant refresh cycles, neither planned nor designed, to maintain mission resilience.

Mitigating these risks and successfully managing edge sprawl may involve adjustments to how organizations plan, design, and manage their mission platforms. It’s logical to seek out different solutions to achieve different goals, but, at root, edge ecosystems represent a decentralized fabric that gives agencies the ability to process information and train models without an exchange between devices and the central enterprise. In the push for mission-specialized devices and systems, the primary goal, to provide a secure, integrated foundation that helps warfighters, civilian operators, and agencies make critical decisions faster in distributed or disconnected environments, should not be overlooked.

From Complex Sprawl to Singular Architecture
To counter the edge sprawl challenge, agencies should identify a comprehensive, scalable approach that addresses essential aspects of their edge environments (see Figure 1). The high demand for AI at the edge is promising because any time organizations can move AI closer to where data is generated, operators can act on insights faster.
Figure 1: A comprehensive, scalable edge environment, spanning security across edge systems, interoperability among devices, connection from enterprise to edge and back, and AI at the edge
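One way to picture the decentralized fabric the article describes, where information is processed and models are trained without exchanging raw data between devices and the central enterprise, is federated averaging: each edge node trains on its own data and shares only model weights. The sketch below is a minimal, illustrative version in plain Python; the node datasets, learning rate, and single-weight linear model are all hypothetical, not any agency's actual pipeline.

```python
# Minimal federated-averaging sketch: each edge node trains locally on its
# own data and shares only model weights, never raw observations.
# Illustrative only: a single-weight linear model y = w * x.

def local_train(w, data, lr=0.01, epochs=20):
    """One node refines the shared weight on its private data."""
    for _ in range(epochs):
        # Gradient of mean squared error for y = w * x
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, node_datasets):
    """Each node trains in isolation; the enterprise averages the results."""
    local_weights = [local_train(global_w, d) for d in node_datasets]
    return sum(local_weights) / len(local_weights)

# Hypothetical edge nodes: each observes y close to 3 * x with local noise.
nodes = [
    [(1.0, 3.1), (2.0, 5.9)],   # sensor node
    [(1.5, 4.6), (3.0, 9.2)],   # drone node
    [(0.5, 1.4), (2.5, 7.4)],   # camera node
]

w = 0.0
for _ in range(30):             # 30 communication rounds
    w = federated_round(w, nodes)
print(round(w, 2))              # converges near the shared slope of ~3
```

The design point is that only the scalar `w` crosses the network in each round; the raw (x, y) observations never leave their node, which is what keeps data at the point of processing.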
The Next Wave
Today’s AI models are often trained with the intent that they will run on hefty cloud-based compute infrastructure, with data feeds that are multiple, persistent, and dependable. Facing few constraints, AI engineers employ techniques to train their models for finely tuned performance. And why not? If an organization is going to invest in AI, it should perform at peak levels. But the future promise of AI at the edge will be marked by innovation and progress in a few key areas, including:

Making tradeoffs between precision and application. If the inference (the model runtime) is on fixed-compute, limited-power devices (like a mobile phone or perched camera) in disconnected environments, perfection in performance need not be the primary objective. The potential value of AI-enabled decision support at the disconnected edge is so significant that a slight tradeoff in precision is often worth it if engineers can extend AI inference capability to low-compute and low-power environments.

Accelerating the training cycle in operations. While there is undoubtedly a future where complex AI models are trained locally on edge devices, this technology is not yet fully mature. It can often take weeks to aggregate field data on cloud infrastructure and retrain and redeploy an AI model that needs refinement as conditions or parameters change. This long cycle time can diminish mission impact and erode trust in the solution. Capabilities are emerging to address this; for example, Synthetaic has developed a computer vision tool that allows users to “nudge” AI models in the right direction to improve future results for detection or classification of objects. As more capabilities like this become available, AI at the edge can be trained at a fraction of the time and cost of traditional methods.

Innovating delivery pipelines. The success of development operations (DevOps) with continuous integration/continuous delivery (CI/CD) pipelines in traditional software must eventually extend to AI at the edge to drive operational success and improved adoption. While there is a trend toward increasingly powerful AI, like large language models born from massive computing resources, there is also a trend toward efficient AI runtimes and model delivery pipelines that realize true AI CI/CD. The evolution and convergence of both will be paramount to the next wave of AI.
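The precision-for-application tradeoff described above can be made concrete with weight quantization: storing model parameters as 8-bit integers instead of 32-bit floats shrinks a model roughly fourfold while introducing a small, bounded rounding error. The snippet below is a simplified sketch of symmetric quantization with hypothetical weight values, not a production quantizer:

```python
# Sketch of symmetric 8-bit weight quantization: trade a small, bounded
# precision loss for a roughly 4x smaller footprint on edge hardware.

def quantize(weights):
    """Map float weights onto int8 values in [-127, 127] with one scale."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights for inference."""
    return [q * scale for q in q_weights]

weights = [0.42, -1.73, 0.09, 2.54, -0.61]   # hypothetical model weights
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Rounding error is bounded by about half the quantization step
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
print([round(r, 3) for r in restored])
```

The bound on `max_err` is the engineering argument in miniature: the error is known and small, while the memory and bandwidth savings can be decisive on fixed-compute, limited-power devices.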
Without a well-designed, open architecture, the applications and models operating at the edge are exposed to risks, including biased AI models, data poisoning, and a wider attack surface for the broader network. The security challenge of backhauling data to the enterprise, whether from contested or uncontested environments, should not be underestimated. These significant risks require management and mitigation beginning at the enterprise and working down to the edge.
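As one illustration of mitigation working from enterprise down to edge, incoming field data can be screened for statistical outliers before it reaches a retraining pipeline. The sketch below uses a median-based modified z-score, a deliberately simple stand-in for the layered defenses (robust aggregation, provenance checks) an agency would actually deploy; the sensor feed values are hypothetical:

```python
# Basic statistical screen: quarantine field readings that deviate sharply
# from the baseline before they are used to retrain a model. A median/MAD
# filter is a simple stand-in for fuller data-poisoning defenses.

import statistics

def screen(readings, threshold=3.5):
    """Split readings into (accepted, suspect) by modified z-score."""
    med = statistics.median(readings)
    # Median absolute deviation; assumes a non-degenerate spread
    mad = statistics.median(abs(r - med) for r in readings)
    accepted, suspect = [], []
    for r in readings:
        # 0.6745 scales MAD to be comparable to a standard deviation
        z = 0.6745 * abs(r - med) / mad
        (suspect if z > threshold else accepted).append(r)
    return accepted, suspect

# Hypothetical sensor feed with one injected, out-of-band value.
feed = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 97.0, 10.0]
ok, flagged = screen(feed)
print(flagged)   # the injected 97.0 is quarantined for review
```

A median-based statistic is used rather than a mean-based one because a single poisoned value inflates the mean and standard deviation enough to hide itself; the median is far harder to shift.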
Agencies need to harness increasingly powerful AI models and operate them at the edge, with the latest examples being large language models like ChatGPT. However, these models call for unusually intensive compute resources and distinct hardware from an array of manufacturers. A crucial level of interoperability can be lost without strategies and tools, such as runtime environments that erase processor-specific silos.
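The runtime-environment idea can be pictured as a thin abstraction layer: mission code targets a single inference interface, while processor-specific details live in interchangeable backends (hardware-agnostic runtimes such as ONNX Runtime apply this pattern at production scale). The class and method names below are hypothetical, for illustration only:

```python
# Sketch of a hardware-agnostic inference interface: mission applications
# call one API, and processor-specific details live behind backends.
# Hypothetical class and method names, for illustration only.

from typing import Protocol

class InferenceBackend(Protocol):
    def run(self, inputs: list) -> list: ...

class CpuBackend:
    """Generic fallback available on any edge device."""
    def run(self, inputs: list) -> list:
        return [x * 2.0 for x in inputs]    # stand-in for a real model

class AcceleratorBackend:
    """Vendor-specific path, e.g. a GPU or embedded NPU."""
    def run(self, inputs: list) -> list:
        return [x * 2.0 for x in inputs]    # same model, different silicon

def load_backend(available):
    """Pick the best backend present; mission code never changes."""
    for name in ("npu", "gpu", "cpu"):      # preference order
        if name in available:
            return available[name]
    raise RuntimeError("no inference backend available")

backend = load_backend({"cpu": CpuBackend()})
print(backend.run([1.0, 2.5]))              # [2.0, 5.0]
```

Because every backend satisfies the same interface, adding new silicon from a new manufacturer means writing one backend, not rewriting mission applications, which is precisely the interoperability the runtime layer preserves.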
Deficiencies in security and interoperability across edge devices sever connections between edge and enterprise, isolating insights on devices, systems, and locations where decision makers lack the capacity to access and apply them. That leaves senior leaders and operators in the dark, unable to make sense of information that is collected and processed at an edge node but never shared past that point.
Agencies should be mindful of tradeoffs between optimized technical performance and critical mission needs, such as seamless data transfers to enable full-domain awareness.
VELOCITY | © 2023 BOOZ ALLEN HAMILTON