
How can explainable AI contribute to natural hazard prediction?

Publication date: 14-02-2024, Read time: 3 min

What is explainable AI? 

Explainable AI is a recent development in artificial intelligence. It deals with tools typically belonging to the AI family, such as deep learning, that have become so complex that they cannot be easily interpreted. Explainable AI is a post-processing tool that is put to work after a machine learning model, typically a neural network, has been generated. It aims to provide a level of interpretability that allows us to trust the predictions we make - for example, about natural hazard occurrence.
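To make the post-processing idea concrete, here is a minimal sketch in Python. It is illustrative rather than the method described in the article: the data are synthetic, the feature names (slope, rainfall) are assumptions, and it uses the open-source shap library as one possible post-hoc explainer.

```python
import numpy as np
import shap  # post-hoc explanation library (assumed installed)
from sklearn.neural_network import MLPClassifier

# Illustrative synthetic data: two landscape characteristics
# and a binary label for landslide occurrence.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))          # columns: [slope, rainfall]
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.2, 500) > 1.0).astype(int)

# Step 1: train the "black box" as usual.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

# Step 2: explainable AI is applied *after* training, as a
# post-processing step that queries the finished model.
explainer = shap.KernelExplainer(model.predict_proba, X[:50])
shap_values = explainer.shap_values(X[:5])    # per-feature contributions
```

The point of the sketch is the ordering: nothing about the network changes; the explainer is bolted on afterwards and only asks the trained model questions.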

What does explainable AI do? 

Normally, when we use solid statistics, a clear interpretation is guaranteed: we can understand why a certain prediction was made - for instance, why the probability of landslide occurrence was assigned as a function of certain parameters. The same cannot be said for machine and deep learning architectures, where all we can use is the probability that comes out at the end of the process. Explainable AI is a recent branch that tries to combine the explanatory power of statistics with the performance-oriented outcome typical of an artificial intelligence model.
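A sketch of that contrast, again with synthetic data and hypothetical feature names: a statistical model such as logistic regression exposes one readable coefficient per predictor, while a neural network only hands back the final probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 2))          # columns: [slope, rainfall]
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# Classical statistics: the fitted coefficients *are* the interpretation --
# sign and size tell us how each predictor drives landslide probability.
stat_model = LogisticRegression().fit(X, y)
print("coefficients (slope, rainfall):", stat_model.coef_[0])

# Deep learning: the internal weights are not readable in the same way;
# all we can use directly is the probability at the end of the process.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print("probability for one site:", net.predict_proba(X[:1])[0, 1])
```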

How is explainable AI different from traditional AI? 

Explainable AI can be helpful if we don't want to rely merely on what AI tells us; it's also a way for us to assess the correctness of AI output. Let's say you want to build a relatively simple model that involves only two characteristics to predict landslides: slope steepness and rainfall intensity. In such a case, you will want the predicted probability to follow your reasoning and your understanding of the underlying physics. If it turns out that the probability of landslide occurrence decreases on steeper slopes instead of increasing – or that it is high at zero steepness – you know that something is wrong with your model.
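A sanity check of this kind can be scripted directly. The sketch below assumes the same hypothetical two-feature setup; it sweeps slope steepness while holding rainfall fixed and checks that the predicted probability behaves as the physics suggests. The thresholds are illustrative, not standard values.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(500, 2))          # columns: [slope, rainfall]
y = (2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 500) > 1.5).astype(int)
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

# Sweep slope steepness at a fixed, moderate rainfall intensity.
slopes = np.linspace(0, 1, 50)
grid = np.column_stack([slopes, np.full(50, 0.5)])
p = model.predict_proba(grid)[:, 1]

# Physics says steeper slopes should not make landslides *less* likely,
# and on flat ground the probability should be low.
assert np.all(np.diff(p) > -0.05), "probability drops with steepness -- suspicious model"
assert p[0] < 0.5, "high probability at zero steepness -- suspicious model"
```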

Normally, in deep learning modelling, this level of interpretation is neglected. You can see whether slope steepness provides more information than rainfall or other characteristics of the landscape - this is called predictor importance - but you cannot really investigate the model beyond that. Explainable AI, however, gives us the ability to query our model element by element and understand how steeper slopes or more intense rainfall increase the probability of landslide occurrence. It's always reassuring to see our assumptions reflected in the outcome, and it's definitely more accurate and satisfying than blindly trusting the model.
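The difference between the two views can also be seen in code. In this sketch (same hypothetical features), permutation importance gives the global "predictor importance" ranking that ordinary deep learning workflows stop at, while a simple sweep queries the model element by element, in the spirit of partial dependence.

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(500, 2))          # columns: [slope, rainfall]
y = (2 * X[:, 0] + X[:, 1] > 1.5).astype(int)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

# Predictor importance: which feature carries more information overall?
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("importance (slope, rainfall):", imp.importances_mean)

# Element-by-element querying: how does the probability respond
# as slope alone increases, averaged over observed rainfall values?
for s in np.linspace(0, 1, 5):
    grid = np.column_stack([np.full(200, s), X[:200, 1]])
    print(f"slope={s:.2f}  mean P(landslide)={model.predict_proba(grid)[:, 1].mean():.2f}")
```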

Is explainable AI more difficult to implement than standard AI?

As mentioned earlier, explainable AI is implemented after a machine learning model has been generated. All in all, the process will take longer, but it won't be any more difficult. It's simply an additional step that requires a certain amount of machine time, including for visualization. Researchers will also need to invest time in querying and interpreting their findings, but in the end it is worth it.

Tags
Artificial Intelligence, Disaster Risk
Last edited: 07-05-2024
