“As agencies generate massive amounts of content and data at the edge from things like sensors, cameras, drones, industrial machines, and healthcare equipment, the data must be processed as close to the point of origin as possible for many applications to be effective. We believe that edge native applications for IoT, data and image analytics, and AI/ML will grow in prevalence across the tactical edge with magnified growth for mission solutions.” —Ramesh Kumar, head of product and solutions for AWS Snow services
the point of data generation, they can reliably create mission and technical advantages. But two stubborn challenges remain:
• First, every time a new model is needed, building and deploying it can take weeks—far too long for many missions.
• Second, in the move from the cloud to the edge, AI performance tends to degrade: accuracy, latency, and power efficiency all suffer.
Siloed commercial technologies are unable to keep up with this demand, and it is becoming more critical for organizations to combine AI infrastructure, platforms, pipelines, and models into unified solutions that adapt to the specific mission. We refer to this concept as a “mission stack,” which is a set of integrated, mission-optimized technologies
that function holistically to accelerate production-ready capabilities and reduce the total cost of ownership for end users. For AI at the edge, a mission stack approach (versus a device- or capability-specific approach) helps organizations design open, modular frameworks. These frameworks promote feature enhancement and device and application monitoring while creating flexibility to build and deploy new capabilities in a secure, streamlined fashion. This allows organizations to create, test, and train models or key applications in an enterprise environment, ensuring that security protocols are maintained. These capabilities can then transition downrange into local hubs or across distributed enterprise endpoints, making the models and features accessible to operators and devices in the field.
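As a concrete illustration of that enterprise-to-hub handoff, the sketch below shows one hypothetical way to promote a validated model artifact from an enterprise environment to an edge hub’s staging area with an integrity manifest. The promote_to_edge_hub function, directory layout, and manifest fields are assumptions for illustration, not a prescribed implementation.

```python
# Hypothetical sketch: promote a validated model artifact from the enterprise
# environment to an edge hub's staging area. The function name, directory
# layout, and manifest fields are illustrative assumptions.
import hashlib
import json
import shutil
from pathlib import Path


def promote_to_edge_hub(artifact: Path, hub_dir: Path, version: str) -> Path:
    """Copy a model artifact to an edge hub staging directory and write a
    manifest so a disconnected hub can verify exactly what it received."""
    hub_dir.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    target = hub_dir / artifact.name
    shutil.copy2(artifact, target)
    manifest = {"model": artifact.name, "version": version, "sha256": digest}
    (hub_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return target


# Example: stage version 1.2.0 of a detector model for a forward hub.
# promote_to_edge_hub(Path("detector.onnx"), Path("/hubs/forward-01"), "1.2.0")
```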
This approach brings together different hardware and software solutions that might otherwise struggle to connect, communicate, and create value in unison. For instance, to rapidly build, train, and deploy highly reliable AI to the edge in minutes, a holistic solution requires ML technology that can train models in the mission and in real time without sacrificing latency. Such a solution also requires model compression so that models don’t sacrifice performance at the edge and can be integrated across diverse edge processors and devices. An open, modular design for AI at the edge allows for compression and maintenance of the models as they move downrange (see Figure 2).
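To make the compression step concrete, here is a minimal sketch, assuming PyTorch and ONNX Runtime are available: a model trained in the enterprise is exported to the portable ONNX format and then shrunk with post-training dynamic quantization so the same artifact can target diverse edge processors. The model architecture and file names are placeholders.

```python
# Minimal sketch of compressing a trained model for edge deployment, assuming
# PyTorch and ONNX Runtime are installed. Model and file names are placeholders.
import torch
from onnxruntime.quantization import QuantType, quantize_dynamic

# Stand-in for a model trained in the enterprise environment.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
).eval()

# Export to ONNX so the same artifact can run on diverse edge runtimes.
dummy_input = torch.randn(1, 64)
torch.onnx.export(model, dummy_input, "edge_model.onnx",
                  input_names=["features"], output_names=["logits"])

# Post-training dynamic quantization: weights stored as INT8 to cut the
# artifact's size and memory footprint before it moves downrange.
quantize_dynamic("edge_model.onnx", "edge_model.int8.onnx",
                 weight_type=QuantType.QInt8)
```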
Maintaining the models at the enterprise level enhances their security, accuracy, and performance for the community. And the ability to transition data, results, and performance metrics both up- and downrange is critical to continuously improving and informing the models. This approach creates the opportunities agencies need most to expand the edge environment while centrally managing and developing capabilities that can scale.

A Future in Synch

It’s useful to think about efforts to address edge sprawl through a comparison to the evolution of personalized health devices over the past decade. While fitness enthusiasts and patients once were forced to juggle disparate devices, from special cameras to multiple apps, today’s human performance users have uninterrupted access to synchronized dashboards that update in real time. This shift to interoperability and singular dashboards is enabled by standardized information sharing protocols, security requirements, medically informed dashboard requirements, and a
user-first approach to designing the user experience. This allows users to seamlessly get real-time updates on their aggregated and analyzed personal health statistics in a single location, with analysis of performance, recommendations for improvement, and predictive insights throughout the day, rather than spending time connecting each application and device to a computer, then downloading and making sense of the data system by system. From healthcare to disaster response to the battlefield, mission sets will increasingly require secure, interoperable edge ecosystems that enable faster, more reliable analytics and data processing at distributed endpoints. As discrete edge AI capabilities continue to advance, here’s the critical opportunity for the enterprise: to embrace those emerging capabilities at scale and in synchronization within its existing cloud-to-edge infrastructure.
Brad Beaulieu is an expert in cloud-to-edge architecture and security. He focuses on emerging cloud capabilities and development in Booz Allen’s BrightLabs incubator, an experimentation organization designed to develop, test, and incubate mission-centric solutions rooted in emerging technology. Beau Oliver leads Booz Allen’s Technology Exploration unit within the CTO organization, working across the innovation ecosystem to scout, collaborate on, and invest in mission-ready technologies. Josh Strosnider leads Booz Allen’s partnership efforts as part of the CTO organization and helps drive strategies to enable, differentiate, and expand federal access to emerging technologies. Rebecca Allegar oversees corporate collaborations between Booz Allen and industry partners.
Figure 2: Singular Approach to Edge Architecture

An open architecture at the edge facilitates the aggregation of data to enhance a common mission. It enables interoperability between disparate point solutions for an integrated mission stack, pools resources and capacity for federated learning, and creates the critical connective tissue between the cloud and the tactical edge.
[Figure 2 shows the enterprise cloud linked to the tactical edge through an open edge architecture: optimized AI models are distributed downrange while data and insights are aggregated back to the enterprise. Edge endpoints, including drones, forward operators, wearables, smart devices, medical devices, extended reality technology, distributed sensors, and autonomous vehicles, federate when connected, operate when disconnected, interoperate to share data and combine resources and capacity, and are protected by zero trust security.]
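As a rough illustration of the federated learning pattern referenced in Figure 2, the sketch below, assuming NumPy and purely notional client data, averages model parameters contributed by several edge nodes in proportion to how much local data each node trained on (the FedAvg weighting). Node counts and parameter shapes are invented for the example.

```python
# Illustrative sketch of federated averaging (FedAvg) across edge nodes,
# assuming NumPy is available. Client counts and parameter shapes are notional.
import numpy as np


def federated_average(client_params, client_sample_counts):
    """Average each parameter array across clients, weighted by how many
    local samples each client trained on."""
    total = sum(client_sample_counts)
    num_layers = len(client_params[0])
    return [
        sum(params[i] * (n / total)
            for params, n in zip(client_params, client_sample_counts))
        for i in range(num_layers)
    ]


# Three edge nodes, each holding the same two-layer parameter structure.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=3)] for _ in range(3)]
global_params = federated_average(clients, client_sample_counts=[120, 45, 300])
```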
SPEED READ
As advanced edge AI capabilities multiply, so do the devices, systems, and resources needed to enable highly specialized mission sets. Expansion of various mission applications for AI and edge technology leads to bespoke tools that agencies are unable to manage or scale for future system requirements.
Edge sprawl occurs when many mission-specific devices and systems operate independently in a fractured ecosystem that incrementally grows in size and diversity over time.
It is becoming critical for organizations to combine edge AI infrastructure, platforms, pipelines, and models into unified solutions that adapt to the specific mission. The key is engineering from the mission backward and committing to open, modular architectures that support robust technical performance across enterprise-to-edge continuums.