Almost no one was prepared for COVID-19. Only a handful of epidemiologists and visionaries, most famously Bill Gates in his now-viral TED talk, had an inkling that the next global pandemic was a matter of when, not if.
In retrospect, our unpreparedness is embarrassing and almost inexcusable. All the more so considering everything we know about HIV, SARS, MERS, and, most recently, the Ebola and Zika epidemics.
We’ve all been talking about AI’s superpowers for the last decade. But AI hasn’t been able to save us so far. It cannot replace human decision-making yet. The technology can, however, provide us with tools that complement our own cognitive processes, and that could make us better prepared to respond to future epidemics, crises, and black swan events.
You may wonder why researchers in emerging technologies haven’t been able to harness the power of AI and machine learning (ML) to foresee what was coming. But AI, in its current state, can’t even reliably forecast sales of luxury or everyday retail products; many cleverly built forecasting models have failed drastically. That’s why machines didn’t warn us about the coming economic crisis, millions of people losing their jobs, or high levels of emotional instability across the population.
So that leaves us questioning how to prepare for future crises — for there will be others. How can we use modern technology, AI, and ML as part of crisis-management strategies and decision-making processes? The implications of this question reach far beyond the realm of academia. If we learn our lessons now, we could use emerging technology to help with other global catastrophes.
You think you’re a rational creature? Think again.
When we talk about hard-to-predict phenomena, we usually think of earthquakes, the stock market, or the weather at auntie’s tea party two Sundays from now. What we don’t usually think of is perhaps the wildest card of them all: human behavior.
We are haphazard and irrational creatures, swayed by emotions, unconscious motives, and our bodily functions. Simply compare how well you stick to your grocery shopping list on an empty stomach versus when you’ve just had a good meal. Or what your arguments with your partner look like after a long day at work versus on calm Saturday mornings. The same psychological mechanisms, at the wrong time and place, and in the wrong hands, have brought us to the brink of nuclear missile crises. During the COVID pandemic, we’ve seen scientists, politicians, and citizens so unprepared that they forgot to think for themselves, which caused delays in decision making.
The two systems
The simplest model of how our brain makes decisions was suggested by Nobel Prize-winning psychologist Daniel Kahneman, who grouped our thought processes into two systems. System 1 is our lightning-fast, intuitive, emotional reaction. System 2 is our slow, deliberate, rational thinking.
Both systems are the result of long evolutionary processes, and each has its advantages. Each also has its limitations. System 1, our gut feeling, is often right, but it is also biased and irrational. System 2, on the other hand, is slow and requires a lot of effort. How AI interventions activate or suppress these systems is still unclear and has not been researched enough.
All of that makes us ill-equipped for crises such as the current pandemic. The nature of a crisis means that we have little to no information for thorough, rational decision making. To make things worse, crises also leave us extremely short time frames for taking action.
How AI can help
AI can’t make these decisions for us yet. But it can — probably — help by making up for the flaws of our built-in decision-making models.
For one, we can use technology to store massive amounts of data and make accurate, ultra-fast calculations. This could help us develop reliable mathematical models to analyze and predict the progression of epidemics and to develop diagnoses and treatments. But the potential uses of AI don’t stop there. South Korea was among the countries with the most efficient COVID-19 responses. It relied heavily on AI to boost diagnostic efficiency, classify patients, share information instantly, and enforce quarantine and lockdown measures. One striking example is the use of AI to examine X-ray images and identify lung abnormalities in just three seconds.
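The mathematical models mentioned above are often compartmental models from epidemiology. As a minimal illustration (not the method used by any country named here, and with hypothetical parameter values rather than fitted ones), the classic SIR model tracks how people move from susceptible to infected to recovered:

```python
# Minimal discrete-time SIR (susceptible-infected-recovered) epidemic model.
# Parameters are illustrative, not fitted to real COVID-19 data:
#   beta  - transmission rate (contacts per day that spread infection)
#   gamma - recovery rate (1 / average infectious period in days)

def simulate_sir(population, initial_infected, beta, gamma, days):
    """Return a list of (S, I, R) tuples, one per day."""
    s = float(population - initial_infected)
    i = float(initial_infected)
    r = 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

if __name__ == "__main__":
    # Hypothetical values: beta=0.3, gamma=0.1 gives a basic
    # reproduction number R0 = beta / gamma = 3.
    curve = simulate_sir(population=1_000_000, initial_infected=10,
                         beta=0.3, gamma=0.1, days=180)
    peak_day = max(range(len(curve)), key=lambda d: curve[d][1])
    print(f"Peak infections on day {peak_day}: {curve[peak_day][1]:,.0f}")
```

Real epidemic forecasting adds many complications (age structure, interventions, noisy reporting), which is exactly where machine-scale computation helps, but this sketch shows the core idea of projecting an outbreak’s trajectory from a few parameters.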
Hidden biases and other pitfalls
In addition to their vastly superior memory storage and computational skills, machines do not suffer from emotions, lack of sleep, period cramps, or hangovers. However, that doesn’t mean they are infallible.
For AI to make accurate predictions and decisions, it needs to have been trained to do so, and that requires high-quality input data. When it comes to epidemics, no such pools of accurate data exist. There are no extensive records of similar historical patterns that we could use to feed AI learning.
As humans, we’ve been careless when it comes to upskilling AI on epidemics and disasters. All we have at present are limited datasets and analyses from a handful of pandemics from the last 100 years or so.
Crises are also unpredictable by definition. To paraphrase the ancient Greeks, you cannot have the same crisis twice. There is a limit to how much data and inference from past epidemics can help with what’s going on now.
Where does that all leave us?
Now, more than ever, we need data scientists and developers trained to build AI that can reliably interpret data. And just as we expect medical drugs to be developed by highly qualified researchers and used according to instructions, we should expect clear rules and principles for using AI technologies.
AI can help researchers scour the data for potential treatments and other solutions, something the non-profit CoronaWhy project is trying to accomplish.
By training more people and developing frameworks of responsible AI principles, in a mixed approach that relies on both human sense and machine intelligence, we can keep our System 2 from falling prey to analysis paralysis and feed our System 1 with reliable, fact-based data.
Valeria Sadovykh is Emerging Technology Global Delivery Lead at PwC Labs, where her focus is on decision making and the decision intelligence aspects of technologies such as social networks and AI.