“Edge AI will be key in helping to build sustainable development practices that reduce carbon emissions while also lowering cloud costs. The amount of data being collected by devices like drones and cameras means cloud processing and energy demands have nowhere to go but up. Moving processing to the edge with models optimized for memory, power, and compute not only lowers cloud carbon emissions and costs, but can also extend missions and their impact.”
— Jags Kandasamy, CEO and co-founder of Latent AI

Curbing the Proliferation of Energy Emissions

Cloud services make it easy to click a button and deploy a virtual server to run an application, start a cloud-native service for business analytics, check email, and, notably, provide the processor-intensive computational power needed to train AI models. What is less obvious is the carbon cost of each button-click, which invites inefficient use of cloud resources. A study of IT professionals at 100 companies, each spending about $1 million on cloud computing, found that more than half of these companies use only 20%–40% of available central processing unit (CPU) capacity because of idle servers, a pattern that can easily contribute tens if not hundreds of millions of tons of carbon dioxide equivalent (CO2e) emissions. While migrating to the cloud is 1.4–2 times more energy efficient than running on-premises data centers, the technology sector's race toward green IT has only begun, and AI offers both a starting point for understanding the problem and scalable solutions for achieving cloud and AI sustainability.

Data Centers Across Industries
• The worldwide consumption of electricity by data centers was estimated at between 220 and 320 terawatt-hours (TWh) in 2021, more than the electrical consumption of entire countries such as Sweden or Egypt.
• The telecom sector, supported by constantly humming servers, produces 5% of global carbon emissions, more than double those of the entire airline industry, and is projected to reach 14% by 2040.
• The infrastructure-as-a-service market, which drives data center growth and electrical power consumption in the technology and telecom sectors, is projected to grow from $130.9 billion in 2023 to $325.9 billion by 2028, a compound annual growth rate (CAGR) of 20% (Yahoo Finance).

Enter AI, Both the Challenge and the Solution

The dramatic proliferation of large language models is generating massive amounts of carbon emissions from the computational power required to train them, and this goes virtually unnoticed by the public. A single AI algorithm, such as the transformer (big) model (see Figure 1), can generate upward of 300 metric tons of CO2e, the equivalent of 470 people flying from New York to San Francisco. However, training the same model with algorithmic techniques that improve energy efficiency cut carbon emissions by a factor of more than 10. Training Generative Pre-trained Transformer 3 (GPT-3), with 175 billion parameters, required 1,287 megawatt-hours of electricity and produced 502 metric tons of CO2e, and one ChatGPT search consumes 100 times more energy than one Google search.

The only thing greater than AI's effect on the environment is our opportunity to design AI algorithms in ways that increase our chances of a greener future. While AI fuels demand for cloud computing to train and execute models, AI also offers solutions for curbing carbon emissions through standard machine learning (ML) techniques that improve energy efficiency. Even before designing an algorithm, AI sustainability begins with the engineer's decision to train and run AI models in renewable-energy-powered data centers, such as those listed in Google's human- and machine-readable table. Algorithmic techniques for improving energy efficiency aim to achieve the same model accuracy with less overall energy consumption and lower computational cost. They include distillation, which transfers knowledge from larger models into smaller ones; fine-tuning, which refines already-trained models for efficiency; and pruning, which removes redundant parameters that do not affect model accuracy but do increase computational needs and carbon emissions. Right-sizing ML models for a given purpose also saves considerable energy: reserve computationally intense models for consequential efforts, such as cancer research, and use lighter models for mundane tasks like summarizing meeting notes. The advent of TinyML and the BabyLM Challenge throws down the gauntlet to ML engineers and researchers to design ML algorithms for low-cost, low-power microcontroller systems, which initial testing suggests can reduce carbon emissions by 5 to 38 times.

In addition, designing TinyML "at the edge," closest to where the data exists on a network, significantly reduces the cost and energy required to transfer that data to a central location and train massive models with potentially unnecessary parameters. Government missions at the tactical edge, ranging from the warfighter on the battlefield to the cyber analyst defending satellites in space, can benefit greatly from algorithm optimization and TinyML algorithms on edge compute devices, rapidly developing data and mission insights in areas with intermittent or zero connectivity. With these methods, smaller, optimized AI algorithms can more effectively enable government missions at the tactical edge and more easily offset their own carbon emissions by reducing emissions in cloud computing and across industry. A sustainability-aware AI algorithm (and engineer) can rapidly identify idle servers, underused compute resources, and more efficient options for cloud data storage with the insights and scalability of AI, while mitigating climate change by increasing the efficiency of power grids, batteries, manufacturing, and supply chains. Doing more with less, at the edge, is the name of the AI sustainability game. Gaining awareness, visibility, and transparency into the connection between cloud and AI carbon emissions is a crucial step toward making data-driven decisions that advance sustainable cloud and AI.

While the technology sector soars to unprecedented levels of consumption, the Biden Administration's Federal Sustainability Plan can serve as a guide for agencies to curb emissions and decarbonize supply chains increasingly powered by the cloud.

Figure 1: Carbon Emissions Consumption (CO2e lbs)
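Of the efficiency techniques named above, pruning is the simplest to illustrate. The sketch below shows magnitude pruning in NumPy: the smallest-magnitude weights are zeroed so that sparse kernels can skip them at inference time. The weight matrix and the 50% sparsity target are illustrative only; production frameworks apply structured masks and retrain to recover accuracy.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    Pruned (zeroed) parameters can be skipped by sparse kernels,
    reducing compute and, with it, energy use.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Illustrative use: prune 50% of a small random weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_pruned = magnitude_prune(w, sparsity=0.5)
print(f"nonzero before: {np.count_nonzero(w)}, after: {np.count_nonzero(w_pruned)}")
```

Distillation and fine-tuning work toward the same goal from the other direction: rather than shrinking a trained model, they train a smaller model to match a larger one's behavior.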
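As a concrete illustration of how a sustainability-aware tool might flag idle and underused servers, the sketch below buckets cloud instances by average CPU utilization. The instance names, metrics, and thresholds are hypothetical; a real tool would pull utilization over a suitable time window from a cloud monitoring API.

```python
# Thresholds are illustrative, echoing the 20%-40% utilization range
# reported earlier in this article.
IDLE_THRESHOLD = 0.05       # below 5% average CPU: likely idle
UNDERUSED_THRESHOLD = 0.40  # below 40%: candidate for right-sizing

def classify_instances(avg_cpu_by_instance: dict[str, float]) -> dict[str, list[str]]:
    """Bucket instances into idle / underused / ok by average CPU utilization."""
    buckets = {"idle": [], "underused": [], "ok": []}
    for name, cpu in sorted(avg_cpu_by_instance.items()):
        if cpu < IDLE_THRESHOLD:
            buckets["idle"].append(name)
        elif cpu < UNDERUSED_THRESHOLD:
            buckets["underused"].append(name)
        else:
            buckets["ok"].append(name)
    return buckets

# Hypothetical utilization report: average CPU use per cloud instance.
metrics = {"web-1": 0.62, "web-2": 0.31, "batch-1": 0.02, "dev-1": 0.18}
report = classify_instances(metrics)
print(report)
```

Idle instances are candidates for shutdown; underused ones for consolidation onto smaller, cheaper, lower-emission instance types.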
[Figure 1: bar chart of estimated CO2e (lbs) for: Air travel, 1 passenger from NY to SF; Human life, avg., 1 year; American life, avg., 1 year; Car, avg. including fuel, 1 lifetime; Training one GPU model, transformer (big) w/ neural architecture search. Axis: 100,000–700,000 lbs.]
Estimated carbon dioxide equivalent (CO2e) from everyday activities compared with training a common natural language processing model, the transformer (big) model, with 251 million parameters trained on 8 NVIDIA P100 graphics processing units (GPUs) for 3.5 days (84 hours; 300,000 steps). Source: Energy and Policy Considerations for Deep Learning in NLP

Reporting Carbon Emissions

We can't change what we can't measure, and current cloud and AI carbon emissions reporting is still maturing to empower people to make carbon-conscious decisions. For most organizations, cloud and AI carbon emissions fall under Scope 3 reporting, which covers emissions an organization does not produce directly from its operations but indirectly through its supply chain, such as purchased goods and services. Measuring Scope 3 emissions is notoriously challenging because of imprecise data and uncertainty about what falls within an organization's supply chain. The problem is especially acute for cloud and AI carbon reporting, for which hardly any emissions data exists. Federal agencies can begin to tackle carbon emissions reporting requirements by setting goals to reduce greenhouse gas (GHG) emissions, as described in the Biden Administration's Executive Order 14057, Catalyzing Clean Energy Industries and Jobs Through Federal Sustainability.
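Estimates like those in Figure 1 follow a simple accounting identity: emissions equal energy consumed, times data-center overhead (power usage effectiveness, or PUE), times the grid's carbon intensity. The sketch below is a back-of-the-envelope illustration, not a reporting methodology; the helper function is hypothetical, and the intensity value is simply the one implied by the GPT-3 figures cited in this article (1,287 MWh and 502 metric tons of CO2e).

```python
def training_emissions_kg(energy_kwh: float,
                          grid_kg_co2e_per_kwh: float,
                          pue: float = 1.0) -> float:
    """CO2e (kg) ~= hardware energy draw x data-center overhead x grid intensity."""
    return energy_kwh * pue * grid_kg_co2e_per_kwh

# Consistency check against the GPT-3 figures cited in this article:
# 1,287 MWh and 502 metric tons CO2e imply an effective intensity of
# roughly 0.39 kg CO2e per kWh (any PUE overhead is folded into it).
implied_intensity = 502_000 / 1_287_000  # kg CO2e per kWh
estimate_tons = training_emissions_kg(1_287_000, implied_intensity) / 1000
print(f"{implied_intensity:.2f} kg/kWh -> {estimate_tons:.0f} t CO2e")
```

The same arithmetic shows why data-center choice matters: on a low-carbon grid at a tenth of that intensity, the identical training run would emit roughly a tenth of the CO2e.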
VELOCITY | © 2023 BOOZ ALLEN HAMILTON