Wildfires, bridge collapses, epidemics, intense hurricanes. Disasters seem to be getting more severe and more frequent. Yet budgets and workforces remain largely stagnant for many emergency preparedness and response (EP&R) organizations, which include public health, disaster, and other professionals who help the public prepare for and respond to emergencies. Estimates project a need for the US public health workforce to grow by 80%.1 However, the workforce has lost more than 45,000 workers in a little over a decade.2 Many organizations are strained to unprecedented levels, trying to match the enormous scale of disasters while still delivering the individualized care each survivor needs.
Fortunately, generative AI has emerged as a transformative technology that could ease the pressure on EP&R personnel. What sets generative AI apart is its ability to personalize outputs while retaining AI's vast scalability, a combination well suited to the needs of public EP&R agencies.
While generative AI brings several strengths to EP&R, it is also important to consider the challenges that come with it. Ignoring these issues can lead to problems with accuracy, trust, competing investment priorities, and overreliance on automation, all of which can degrade the quality of EP&R work. Lessons from previous generations of automation can help organizations pair generative AI's strengths with those of human workers to transform EP&R.
The intersection of AI opportunity and EP&R imperatives creates a pivotal moment for leaders. Many legacy systems simply cannot keep up with today’s vast data sets or real-time processing needs. Change is needed, and generative AI can help. If EP&R leaders adapt their processes based on the strengths and weaknesses of the technology, they can uncover new ways to deliver the personalized care survivors need at the speed and scale of the modern world.
EP&R organizations have been using automation to improve work for more than half a century.3 Even AI is not new to the sector, with small-scale successes seen nationwide. For instance, the California Department of Forestry and Fire Protection uses image recognition to identify wildfires even before humans notice them, helping to control fires before they can spread.4 AI is also being used to tackle complex issues that involve human behavior. For example, researchers have successfully used AI to identify when tuberculosis patients have missed doses, helping patients adhere to their treatment regimens and reducing the risk of drug-resistant TB.5
But the use of AI in EP&R is still limited and often confined to smaller-scale proofs of concept.6 This is not uncommon and follows a similar pattern of adoption seen in other public sector organizations.7 A challenge with AI adoption at scale is the need for different skills, technologies, and governance compared to small proofs of concept.8 Luckily, the recent emergence of powerful generative AI tools can help overcome some of those challenges.
Unlike narrow AI tools that excel at one particular task, generative AI can produce results across a broad spectrum of domains. With its multifaceted capabilities, generative AI has the potential to amplify the effectiveness of emergency responders and their existing tools.9 Generative AI is not a stand-alone solution to all problems. But when layered with other tools and appropriate human judgment, it can bring more accurate early warning systems, predictive analytics for disaster management, innovative approaches to crisis response, and more.
So, while at-scale adoption of generative AI will still likely require new capabilities, its breadth means that EP&R organizations may only need to add those capabilities once to solve several problems.
When generative AI is paired with other automation tools and human workers, it can bring new capabilities to EP&R organizations.
Imagine having an extra set of eyes that can process information at lightning speed, spotting anomalies and predicting risks. AI-driven algorithms and real-time data analysis can do just that. Whether it's monitoring the spread of a wildfire or tracking the progression of a disease outbreak, AI can empower responders with a clearer, real-time picture of what's happening. For example, the HealthMap project, a free, open-source health solution, uses AI to scan social media and other data sources to provide public health organizations worldwide with early warnings and real-time disease outbreak monitoring.10 Layering generative AI atop these algorithms could help workers quickly pull relevant insights out of the data even as data volumes grow exponentially. For instance, an AI-enabled emergency notification system can take weather data from the National Oceanic and Atmospheric Administration, feed it into forecast models, and then automatically create a series of alert messages for affected areas, giving people greater advance warning of an impending hurricane or other severe weather event.
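To make the idea concrete, here is a minimal sketch of the last step of such a pipeline: turning a structured forecast into a plain-language alert. The record fields and thresholds are hypothetical; a real system would take its criteria from meteorologists and official warning definitions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical forecast record; a real system would populate these
# fields from a weather feed such as NOAA forecast products.
@dataclass
class Forecast:
    area: str
    wind_mph: float
    rainfall_in: float

def build_alert(forecast: Forecast) -> Optional[str]:
    """Turn a structured forecast into a plain-language alert, or None."""
    # Illustrative thresholds only; real criteria would come from
    # meteorologists and official warning products.
    if forecast.wind_mph >= 74:
        hazard = "hurricane-force winds"
    elif forecast.rainfall_in >= 6:
        hazard = "flash flooding"
    else:
        return None  # no alert needed for this area
    return (
        f"WEATHER ALERT for {forecast.area}: {hazard} expected. "
        f"Winds up to {forecast.wind_mph:.0f} mph and "
        f"{forecast.rainfall_in:.1f} in of rain forecast. "
        "Follow guidance from local emergency officials."
    )

for fc in [
    Forecast("Gulf Coast County", wind_mph=95, rainfall_in=8.0),
    Forecast("Inland County", wind_mph=20, rainfall_in=0.5),
]:
    message = build_alert(fc)
    if message:
        print(message)
```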
In the chaos of an emergency, decision-makers often face the daunting task of allocating limited resources efficiently. AI's data processing abilities can compare thousands of scenarios to help select the optimal course of action. From dispatching emergency personnel to distributing relief supplies, AI can help streamline decision-making, deploying resources where and when they are needed most. For instance, during the COVID-19 pandemic, AI helped hospitals navigate shortages of personal protective equipment.11
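A stylized sketch of the underlying optimization might look like the following: a small linear program that ships relief kits from depots to shelters at minimum cost while meeting demand. The depots, demands, and costs are invented for illustration.

```python
from scipy.optimize import linprog

# Hypothetical example: ship relief kits from 2 depots to 3 shelters,
# minimizing total transport cost subject to depot inventory and
# shelter demand. Decision variables x[i][j] = kits from depot i to
# shelter j, flattened as [x00, x01, x02, x10, x11, x12].
cost = [4, 6, 9, 5, 3, 7]    # cost per kit on each route (illustrative)
supply = [500, 400]          # kits available at each depot
demand = [300, 350, 200]     # kits needed at each shelter

# Depot capacity: total shipments out of each depot <= its supply.
A_ub = [
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
]
b_ub = supply

# Shelter demand: shipments into each shelter must equal its demand.
A_eq = [
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1],
]
b_eq = demand

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=[(0, None)] * 6)
print("Optimal total cost:", result.fun)
print("Shipments per route:", result.x.round())
```

Real deployments would fold in far more constraints (travel times, perishability, staffing), but the pattern of encoding scarce resources and needs as an optimization problem stays the same.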
Generative AI can improve training simulations by creating realistic scenarios and dynamic challenges so that responders are ready for a wide range of emergencies, from natural disasters to public health crises. The National Institute of Standards and Technology is developing an AI-powered simulator that models fire behavior to provide realistic training scenarios for firefighters.12
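As a simplified stand-in for a generative scenario builder (a production system would likely use a large language model), the sketch below shows how dynamic training injects can be assembled programmatically; all scenario content is hypothetical.

```python
import random
from typing import Optional

# Toy scenario generator: composes hazard, scale, and complication into
# a training inject. A generative model would produce richer narratives,
# but the structure of varying the scenario is the same.
HAZARDS = ["wildfire", "flash flood", "disease outbreak", "chemical spill"]
COMPLICATIONS = [
    "communications tower outage",
    "road closure blocking the main evacuation route",
    "surge of unverified social media reports",
]
SCALES = ["localized", "county-wide", "multi-state"]

def generate_inject(seed: Optional[int] = None) -> str:
    rng = random.Random(seed)  # seed for reproducible exercises
    return (
        f"Scenario: a {rng.choice(SCALES)} {rng.choice(HAZARDS)} "
        f"complicated by a {rng.choice(COMPLICATIONS)}. "
        "Task: update the incident action plan within 15 minutes."
    )

for i in range(3):
    print(generate_inject(seed=i))
```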
When a disaster strikes and people need assistance, they often have to navigate a maze of bureaucracy to get the help they need. Here's where AI could help both the government and survivors. For the government, AI can rapidly assess needs, and generative AI could help quickly recode tools to link data from different government agencies without requiring specialized skills. For disaster survivors, generative AI could act as a single front door to various programs, helping people navigate disaster assistance, financial support, and temporary housing with ease and efficiency so they receive timely, crucial support when they need it most.
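The sketch below illustrates the routing idea behind such a front door in miniature. A real system would likely use a language model to interpret free-text requests; simple keyword matching stands in here, and the program names and keywords are hypothetical.

```python
# Simplified "single front door" triage. A production system would use
# a language model to classify free-text requests; keyword matching
# here only illustrates the routing pattern. Programs are hypothetical.
PROGRAMS = {
    "temporary housing": ["shelter", "housing", "homeless", "displaced"],
    "financial assistance": ["money", "loan", "grant", "income", "funds"],
    "medical support": ["injury", "medicine", "prescription", "doctor"],
}

def route_request(message: str) -> str:
    """Point a survivor's request at the most relevant program."""
    text = message.lower()
    for program, keywords in PROGRAMS.items():
        if any(kw in text for kw in keywords):
            return program
    return "general case worker"  # fall back to a human when unsure

print(route_request("We were displaced by the storm and need housing"))
# -> temporary housing
```

Note the fallback: when the request does not clearly match a program, it goes to a human case worker rather than a best guess, a design choice that matters in high-stakes services.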
While generative AI holds immense potential, it is not a panacea for all the challenges associated with emergencies and disasters. One of the primary risks is expecting generative AI to navigate complex, unpredictable situations with the same finesse as human responders. Generative AI, like its narrow AI counterparts, has limitations.13 All forms of AI rely on the data they have been trained on, which may not encompass all possible scenarios. So, while humans can exercise judgment in new situations based on intuition, emotions, and local knowledge, AI will typically struggle if a situation is too different from its training data.
Effective human-machine teaming can play an important role in achieving success during emergencies. Generative and narrow AI can crunch huge volumes of data in ways that humans cannot, but human responders can assess the emotional and psychological needs of affected individuals, a dimension that AI may struggle to comprehend fully. Pairing the strengths of AI with those of human workers can be critical. In disaster scenarios, responders make nuanced decisions that consider the unique characteristics of the affected region, the needs of the population, and the cultural context. While AI can provide data-driven insights, it lacks the empathy and cultural awareness of human responders. Human oversight of AI models is vital for minimizing the potential for errors that could exacerbate a disaster's impact.
Combining the computational power of AI with the intuition and judgment of humans can make EP&R organizations both more efficient and more effective. However, effective human-machine teaming can be challenging. Based on EP&R’s past experience with AI and automation, some common pitfalls include the following.
Imagine an ensemble in which every instrument plays in a different key: the dissonance echoes the complexity of data integration in EP&R. Integrating data from diverse sources, formats, and standards across various agencies and systems can be as intricate as composing harmonious music. To harness its full potential, AI needs high-quality, consistent data that flows seamlessly. The COVID-19 pandemic exposed how hard that is at the scale of a national health care emergency: the patchwork US public health system, with its disparate, fragmented data sources, made integration a major challenge. In fact, more than a third of local health agencies were unable to access surveillance data from local emergency departments during the pandemic.14 The lack of integrated data led to delays in tracking and responding to the virus's spread, underlining the importance of efficient data integration in times of crisis.15
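At its core, the integration problem is mapping many source formats onto one common schema. The toy sketch below shows the pattern with two hypothetical feeds that report the same event using different field names and date formats.

```python
from datetime import datetime

# Two hypothetical feeds describing the same kind of event with
# different field names and date formats, a common integration problem.
hospital_record = {"pt_county": "Adams", "visit_dt": "07/14/2023", "dx": "influenza"}
lab_record = {"county_name": "Adams", "collected": "2023-07-14", "result": "influenza"}

# One mapping function per source normalizes onto a shared schema.
def from_hospital(rec: dict) -> dict:
    return {
        "county": rec["pt_county"],
        "date": datetime.strptime(rec["visit_dt"], "%m/%d/%Y").date(),
        "condition": rec["dx"],
        "source": "hospital",
    }

def from_lab(rec: dict) -> dict:
    return {
        "county": rec["county_name"],
        "date": datetime.strptime(rec["collected"], "%Y-%m-%d").date(),
        "condition": rec["result"],
        "source": "lab",
    }

unified = [from_hospital(hospital_record), from_lab(lab_record)]
for row in unified:
    print(row)
```

Multiply this by hundreds of feeds, each with its own quirks, and the scale of the "dissonant symphony" becomes clear; generative AI's promise here is helping write and maintain these mappings faster.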
Data is more than mere information; it can be sensitive and critical. During emergencies, handling data while protecting privacy and security becomes paramount. Storing the large amounts of health data needed to fuel AI tools can make an attractive target for cybercriminals. For example, the WannaCry ransomware attack of 2017 compromised patient data at numerous hospitals worldwide.16
The solution to data interoperability is not necessarily storing all data in a centralized place. New technical advances can help realize the benefits of AI without increasing privacy risk. For example, homomorphic encryption allows computations to be performed on encrypted data without ever decrypting it, so insights can be drawn without exposing the underlying specifics. Federated learning tackles the need to share sensitive data from another angle: rather than moving data between organizations, it moves AI models instead, allowing advanced models to be trained on several different data sets without organizations losing control of their data. These new technologies should be paired with new governance processes to help make the resulting insights widely available while protecting the privacy of the underlying data.
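A minimal sketch of federated averaging illustrates the federated learning idea: each site fits a model on its own (synthetic) data, and only the model weights, never the records, leave the site.

```python
import numpy as np

# Minimal sketch of one round of federated averaging. Each site trains
# a linear model locally; the coordinator averages the weights and
# never sees any site's raw data. All data here is synthetic.
rng = np.random.default_rng(0)

def local_train(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit weights locally via least squares; data stays on-site."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Three sites with private data drawn from the same true relationship.
true_w = np.array([2.0, -1.0])
site_weights = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    site_weights.append(local_train(X, y))  # only weights are shared

# Coordinator step: average the shared weights into a global model.
global_w = np.mean(site_weights, axis=0)
print("Federated estimate:", global_w.round(3))  # close to [2.0, -1.0]
```

Production federated learning involves many rounds, secure aggregation, and careful handling of sites with very different data, but the core privacy property is the same: the model travels, the data does not.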
While many government organizations experiment with AI tools as proofs of concept, they often struggle to adopt successful pilots on a larger scale, in part due to constraints in the size and timing of resources.
Adopting an enterprise-level AI tool is not just a procurement exercise. It requires continued funding to keep developing the tool, protect it from model drift, and ensure its sustained accuracy and utility. Resources are needed not just for the technology but also for the workforce. New tools can create the need for new skills and even entirely new roles, such as data scientist or algorithm auditor. Organizations that adopt AI-specific roles are 60% more likely to achieve their project goals than those that do not.17 Reskilling the workforce for AI doesn't necessarily mean hiring many data scientists or creating scores of new positions. Often, it means giving existing staff the skills they need to work effectively with AI: providing IT staff with the skills to handle new forms of data traffic, frontline workers with the skills to use AI tools to augment their work, and legal staff with an understanding of how AI works to help ensure regulatory compliance.18
AI’s use in emergencies can have far-reaching consequences, making it very important to implement AI with equity, transparency, and accountability. The distribution of COVID-19 vaccines offers just one example that highlights these concerns, sparking debates globally on fairness and equity in deciding who should receive vaccines first.19 When used properly, AI can actually improve equity by identifying hidden patterns of bias or the roots of unequal outcomes.20
However, AI governance is needed at every step of the model lifecycle to help achieve equitable outcomes. From labeling training data with appropriate uses to periodically reevaluating model outputs to check for model drift, careful AI governance is a central tool for promoting the ethical use of AI.
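One simple form of that periodic reevaluation is a statistical drift check. The sketch below compares recent model scores against a baseline window with a two-sample Kolmogorov-Smirnov test; the scores are synthetic, and real governance would track many such signals alongside human review.

```python
import numpy as np
from scipy.stats import ks_2samp

# Drift check sketch: compare the distribution of recent model scores
# against a baseline window. Synthetic data stands in for real scores.
rng = np.random.default_rng(1)
baseline_scores = rng.normal(loc=0.40, scale=0.1, size=1000)
recent_scores = rng.normal(loc=0.55, scale=0.1, size=1000)  # shifted

stat, p_value = ks_2samp(baseline_scores, recent_scores)
if p_value < 0.01:
    print(f"Possible drift (KS={stat:.3f}, p={p_value:.2g}); "
          "flag the model for human review.")
else:
    print("No significant distribution shift detected.")
```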
Good governance can help organizations use AI ethically, but broader legal challenges exist. With disasters that can span county, state, and even national boundaries, EP&R organizations may have to comply with a diverse set of potentially contradictory AI regulations. To meet not just ethical standards but also legal compliance requirements, EP&R organizations need a deep understanding of how their AI tools work, what outcomes they produce, and how they are used in day-to-day operations.
Disasters come in many forms—public health, natural, and human-made—each with a unique cadence. For example, the responses to the 2014 Ebola outbreak in West Africa differed from those to the 2011 Fukushima nuclear disaster, emphasizing the importance of tailoring strategies to the unique nature of each emergency. Today’s AI tools feature a trade-off between breadth and accuracy. Large language models are powerful general-purpose tools that can deal with very different scenarios but may struggle with the accuracy of certain details. On the other hand, more bespoke AI models can be extremely precise in solving specific problems they were trained on, but their scope is limited to only that problem.
As a result, agencies should balance the need to be prepared for any scenario against the need for accurate, trustworthy conclusions, all while the technology itself is constantly evolving.
These organizational challenges can seem daunting. However, a few key steps can help leaders move their organizations beyond pilot purgatory and help AI deliver the transformational outcomes that constituents desire.21
Organizations can easily feel pulled apart by the opposing demands of data integration and security. However, it’s important to remember that EP&R organizations aren’t facing these demands alone. There is a whole ecosystem of state and federal agencies, academia, industry, and even the general public, all invested in the success of EP&R. Collaborating with this wide ecosystem can provide access to new technologies and a broader technical skill set that can help break the trade-off between data integration and security. For example, the National Institutes of Health’s Bridge to Artificial Intelligence program is building an ecosystem of machine-readable data sets and automated tools to generate new ethically sourced data sets.22
Even beyond technical collaboration, working directly with the public can help increase buy-in from key stakeholders, fostering a sense of shared responsibility for the success of AI-powered initiatives. For example, hosting listening sessions can identify concerns with the privacy or security of a project early on and help the public feel like they are part of the process.
A key part of any strategy is understanding how activities build together to achieve an organization’s overall goal. AI is no different. Mapping how AI tools help an organization achieve its strategic goals is an important first step to justifying continued funding of those tools. In today’s budget-constrained environment, a tool without a clear line of sight to better outcomes is unlikely to survive for long.
AI works differently than a human worker, and while it may require less supervision for tasks like crunching numbers or analyzing trends, it may need significant oversight and governance for tasks like making benefit eligibility determinations. As a result, leaders should not simply bolt AI onto existing business processes. Instead, adapting strategies to align with both the strengths of AI and the workers involved in each step of the process can not only improve performance but also help avoid ethical issues arising from unsupervised AI. Workflows designed with human-machine teaming in mind from the start can help the outputs be both more effective and ethical.
Disasters come in various forms, making it challenging to predict the next one. But EP&R organizations don't need to simply guess which tool to reach for, because the right solution is rarely a single, monolithic AI tool. Instead, they should think of AI integration as assembling a toolkit, with each tool paired to the subtask it excels at. Organizations should, therefore, embrace a diverse range of AI technologies that complement one another. Whether it's machine learning for data analysis, natural language processing for text comprehension, or computer vision for image recognition, a mix of AI tools can allow EP&R organizations to be more effective in the next disaster, no matter what it is.23
This is a pivotal moment for EP&R. Increased demands are straining organizations while AI technologies are offering transformational new tools. This intersection creates a unique opportunity for EP&R organizations to adopt AI. By addressing the organizational challenges that have hampered previous automation efforts, leaders can not only improve performance but also ensure the privacy and equity of EP&R services. The result can be a future where communities are safer, more resilient, and better equipped to navigate even the darkest of hours.