
Explainable AI in Natural Hazard Monitoring

Publication date: 14-02-2024, Read time: 4 min

The only thing that separates miracles from science is explainability. For thousands of years, almost all people believed in magic and miracles, until a point in history when scientific understanding became mature and accessible enough for at least some people to grasp the science behind such phenomena. In artificial intelligence (AI), we are still in the era of magic, slowly moving towards the "science" part of AI.

In the past two decades, AI has been used in a wide variety of applications, and modelling natural hazards has become one of the most important avenues because of the high predictive capacity of AI models. However, natural hazards are physical processes with tremendous effects on people's lives and infrastructure. Modelling them without understanding why a given model made a particular prediction or estimation is unscientific and risky. Therefore, when we model processes that directly influence people's lives and livelihoods, we must consider the reliability of our models, and explainability is a prerequisite of reliability.

Computer science research into explainable models

In computer science, there has been considerable research into explainable models, and the approaches most relevant to explainable AI in natural hazards are human-on-the-loop, global explanation and local explanation.

In the human-on-the-loop approach, a human expert continuously evaluates the model's results to further train or improve the model, eventually enhancing its reliability and accuracy.
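As a rough illustration, here is a minimal sketch of such a feedback cycle. It is not a description of any specific operational system: the dataset is synthetic, the uncertainty threshold is arbitrary, and the expert_review function is a hypothetical stand-in for a domain expert labelling flagged cases.

```python
# Minimal sketch of a human-on-the-loop cycle: the model flags low-confidence
# hazard predictions for expert review, and the corrected labels are fed back
# into retraining. All data, names and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def expert_review(sample):
    """Placeholder for a domain expert labelling a flagged case."""
    return int(sample[0] > 0.5)  # hypothetical "ground truth" rule

rng = np.random.default_rng(0)
X_train = rng.random((200, 4))
y_train = (X_train[:, 0] > 0.5).astype(int)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# New, unlabelled cases arrive; flag the ones the model is unsure about.
X_new = rng.random((50, 4))
proba = model.predict_proba(X_new)[:, 1]
uncertain = np.abs(proba - 0.5) < 0.2

# The expert labels only the flagged cases; the model is then retrained.
X_reviewed = X_new[uncertain]
y_reviewed = np.array([expert_review(x) for x in X_reviewed])
model.fit(np.vstack([X_train, X_reviewed]),
          np.concatenate([y_train, y_reviewed]))
```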

Global and local explanations are approaches in which the model's output is explained in terms of overall model behaviour or on a case-by-case basis, respectively. An example is the explanation of deep learning output for landslide susceptibility using SHapley Additive exPlanations (SHAP), which quantifies how strongly each input variable influences the outcome. This approach can show which factors drive the model's output for each slope and why a particular slope receives a specific landslide susceptibility.
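The sketch below shows what such local and global SHAP attributions look like in code. It is a simplified illustration, not the workflow behind any particular study: the feature names (slope_angle, rainfall, distance_to_fault, ndvi) and the synthetic susceptibility score are invented, and a tree-based regressor stands in for a deep learning model.

```python
# Local and global SHAP attribution for a toy landslide-susceptibility model.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["slope_angle", "rainfall", "distance_to_fault", "ndvi"]
X = rng.random((500, 4))
# Synthetic susceptibility score, dominated by slope angle and rainfall.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.05, 500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact Shapley values for tree models
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Local explanation: contribution of each factor to one slope's prediction.
print(dict(zip(feature_names, shap_values[0].round(3))))

# Global explanation: mean |SHAP| ranks factors across the whole study area.
print(dict(zip(feature_names, np.abs(shap_values).mean(axis=0).round(3))))
```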

Much work has been done to explain models' global and local behaviour in the context of explainable deep-learning models for natural hazards. However, almost all of these are "attribution" methods: they assign importance or contribution to different input features but do not explain the model itself. In other words, they focus on identifying which input features or variables are most influential in driving the model's predictions or outputs.

What would a proper explanation of a model look like? 

A proper explanation of a model provides a comprehensive understanding of why the model made a particular decision or prediction. It goes beyond identifying influential features and offers insight into the underlying logic or reasoning behind the model's behaviour.

An excellent example of an explainable model is a decision tree, where you can inspect each node of the model to see the exact reasoning behind its output. Deep learning models, however, have yet to reach that level of explainability.
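To make the contrast concrete, the short sketch below prints the full rule set of a small decision tree. The data and feature names (slope_angle, rainfall, soil_moisture) are hypothetical; the point is only that every prediction can be traced through explicit, human-readable split conditions.

```python
# A decision tree's reasoning can be printed and checked node by node.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every split condition on the path to a prediction is readable by an expert.
print(export_text(tree, feature_names=["slope_angle", "rainfall",
                                       "soil_moisture"]))
```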

The importance of explainability in natural hazard modelling 

As we have seen with popular large language models, they often "hallucinate", producing statistically plausible but completely wrong answers. In natural hazard modelling for decision-making, a wrong or false prediction could itself contribute to disaster, as a misidentified hazard could significantly delay early warnings and emergency response.

Thus, in critical applications such as natural hazard modelling and forecasting, it is imperative to understand and explain the model before acting on its results. However, many deep learning-based natural hazard models either do not explain their results at all or rely heavily on feature-attribution approaches.

Our research must therefore focus on developing methods that help users, both experts and non-experts, understand the decision-making process of an AI model in a human-interpretable manner.

If you're interested in learning more about this topic, check out these journal articles: Explainable artificial intelligence in geoscience: A glimpse into the future of landslide susceptibility modeling, and Full seismic waveform analysis combined with transformer neural networks improves coseismic landslide prediction.

Tags: Artificial Intelligence, Disaster Risk
Last edited: 07-05-2024
