Alex Carruthers

Earthquake science’s imminent ML-driven paradigm shift?

Updated: Nov 14, 2019

In layman's terms, what do you do?

Arnaud Mignan “tries to understand past catastrophes, their physics and statistics, to hopefully improve the forecast and mitigation of future, potentially damaging, events.”


“My main expertise is on earthquakes but I have projects in all sorts of risks, from asteroid impact to cyber-risk via domino effects. I also look at geo-energy and carbon capture projects and their potential risks in the wider context of climate change.”


He gave a simple example from research a few years ago: an exercise with high school natural science teachers that illustrated how an earthquake can cascade into extreme consequences. The research involved 'reasoned imagination'.

(Mignan et al. (2016), 'Using reasoned imagination to learn about cascading hazards: a pilot study', https://www.emerald.com/insight/content/doi/10.1108/DPM-06-2015-0137/full/html)

He references one particular scenario from the research.


"A large aftershock triggered a dike breach, which led to flooding. When receding, the flood in turn provoked a landslide on an unstable slope, which cut vital sections of the infrastructure networks of the area. With no water available, multiple gas leaks and roadblocks limiting access to first responders, fires quickly propagated. It led to a major industrial accident, the interruption of its business activities, and in consequence to a general slowdown of the regional economy, which is highly dependent on this industry. In this situation, riots and lootings followed.”


From an earthquake to riots. Preventing that sounds good.



I mentioned a recent Medium post of his, in which he argued ML models may not be as effective as suggested in his line of work, catastrophe risk modeling. (It was actually this post that first drew my attention to Arnaud, precisely because it was critical of machine learning.)


“I'm a huge fan of machine learning. I just meant that far simpler ML techniques may often do as well as - if not better than - complex, fancy models.”

He goes on to describe a general trend in science towards making things more complex than they need to be, with which I think many readers will agree.


“It’s not always justified. It might be easier to sell a complex model to a high-impact journal, rather than [selling] a boring logistic regression. Although we did just that in Nature [with his colleague Marco Broccardo]! It is also tempting to surf on the AI wave and oversell fancy ML models to clients.”


Illustrating his point, he references a recent NYT article that makes an argument against 'high-tech disaster response'. The basic takeaway is that the respective approaches to risk in machine learning and catastrophe risk modeling are misaligned, and that getting it wrong in risk management costs lives.




I asked if he saw this as a general problem in catastrophe risk modeling: machine learning being touted as able to solve something it can't.


He believes that, as with computer science, the main issue is “to understand how to deal with the model bias-variance trade-off”.


“I’m a proponent of Occam's Razor and First Principles but I find it ironic that complex models are easier to build - since they provide more flexibility (variance) to the detriment of rules (model bias) that one would have to define otherwise.”
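To make the simple-versus-complex point concrete, here is a minimal sketch (my own, not code from the Nature paper) comparing a plain logistic regression, effectively a single neuron, against a deeper neural network. The dataset is synthetic and the model settings are arbitrary stand-ins; the point is only that a low-variance baseline can come close to a far more flexible model.

```python
# Minimal sketch (not the authors' code): compare a single-neuron baseline
# (logistic regression) with a higher-variance MLP on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical tabular data standing in for, e.g., aftershock predictors.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
complex_ = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                         random_state=0).fit(X_train, y_train)

for name, model in [("logistic regression", simple), ("deep-ish MLP", complex_)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```

On data like this, the two scores typically land close together, which is essentially the bias-variance trade-off he describes; whether that holds on real catastrophe data is, of course, an empirical question.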


I find it interesting to note that in this particular field, itself a spin-off of intelligence, growing complexity appears to yield solid progress.


“So far, many firms in the risk business (such as the insurance industry) have tried to develop an AI-centric view on risk, but deep learning requires huge amounts of data and I'm not sure we have enough.”


“Risks evolve over time and extreme events are too rare to be correctly represented in existing databases. So, ML's potential is huge but it must be intertwined with physical and engineering discoveries.”


I ask what he thinks will have the greatest potential impact for ML in his line of work, and where our efforts should be focused.

“The area of reinforcement learning likely has the greatest potential, as it is well suited for real-life conditions. It is by definition ‘decision-making’ but optimized at its best.”

What he’s saying makes me think that machine learning fractures ‘decision making’ itself into sub-decision increments. In a sense, ‘utopia’ in machine learning is when there are no decisions left to be made. Everything is organised, handled, sent, delivered and even perhaps experienced at a sub-decision level.


“Reinforcement learning should therefore be applied at all stages of the risk process, from optimization of data acquisition to risk mitigation and urban planning.”


“Although it has a long history in real-life optimization problems, not so in catastrophe risk. Algorithmic risk governance is one direction I'm taking and my first project was on the site optimization of geothermal plants - depending on highly uncertain seismic feedback and stakeholder risk aversion. But there is no reinforcement learning in there yet.”


(Mignan et al. (2019), 'Including seismic risk mitigation measures into the Levelized Cost Of Electricity in enhanced geothermal systems for optimal siting', Applied Energy, https://www.sciencedirect.com/science/article/pii/S0306261919301230)
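As he notes, there is no reinforcement learning in the geothermal siting work yet. Purely to illustrate what casting such a problem as sequential decision-making could look like, here is a toy epsilon-greedy bandit over a handful of hypothetical candidate sites; the numbers are made up and this is not the method of the cited paper.

```python
# Hypothetical sketch only: a toy epsilon-greedy bandit for choosing among
# candidate sites under noisy feedback. Not the method of Mignan et al. (2019).
import numpy as np

rng = np.random.default_rng(0)
n_sites = 4
# Unknown "true" expected net benefit per site (made-up values); the agent
# only ever observes noisy rewards.
true_value = np.array([0.2, 0.5, 0.1, 0.35])

estimates, counts = np.zeros(n_sites), np.zeros(n_sites)
epsilon = 0.1  # exploration rate

for step in range(1000):
    if rng.random() < epsilon:
        site = int(rng.integers(n_sites))   # explore a random site
    else:
        site = int(np.argmax(estimates))    # exploit the current best estimate
    reward = true_value[site] + rng.normal(0, 0.2)  # noisy feedback
    counts[site] += 1
    estimates[site] += (reward - estimates[site]) / counts[site]  # running mean

print("estimated site values:", np.round(estimates, 2))
print("chosen site:", int(np.argmax(estimates)))
```

A real application would presumably replace the scalar reward with a model of levelized cost, uncertain seismic feedback and stakeholder risk aversion, and a much richer state and action space.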


Perhaps this field is ready for innovation?


I ask him what he thinks is the most exciting thing in machine learning.


“Without hesitation, reinforcement learning as illustrated by DeepMind in 2014 when an algorithm was able to play Atari games like no human being could.”

“Watching that presentation on YouTube a few years back was both exciting and depressing. Depressing because I felt I had missed something really big. This is the year I started learning ML, something I didn't have in my toolbox at the time.”


“There was not one ML class in Geophysics in my old days (the early 2000s). I have also been impressed by the potential of GANs but now that my AI fever has gone down a bit, I see that the most exciting prospect will be in making ML a useful tool in catastrophe risk governance, but likely a long project and not an easy one.”




I ask him to tell me the strangest or most surprising thing about his line of work.


“The strangest and maybe most surprising thing in earthquake science is that it might not even be a science yet.”

“It might be bold to say that, but for a domain to be considered a science, it needs to be able to predict its object of study.”


“We know we cannot predict earthquakes, as of now.”

“So, we are still in a cataloguing phase, akin to the naturalists collecting animals and plants in past centuries before modern biology emerged. We have plenty of physical models but virtually all are derivative, from lab experiments or from other scientific domains, such as material physics or complexity theory. It has been the same for the past few decades and a paradigm shift seems overdue.”

I am personally optimistic that machine learning will deliver this paradigm shift in earthquake science, among many other fields. But never mind my optimism.


As he'd drawn a comparison between the naturalists of the pre-modern-biology era and the current status of geophysics, referencing a paradigm shift to come, I asked him to hazard a guess at what this could look like.


“I have the feeling that a paradigm shift is due, based on a meta-analysis I did on the wave of optimism that followed the discovery, in the late 1990s, of patterns of accelerating seismicity prior to large earthquakes.”


“The number of publications on that topic grew fast in the first half of the 2000s, only to collapse by 2005-2006. Moreover, the paradigm was based on Complexity theory (think of chaos, critical processes, phase transitions), which is still the dominant theory in earthquake science today. But since the crash, other hypotheses are popping up and it's not clear which will win.”


“This is a beautiful example of a Kuhnian cycle, named after the famous philosopher of science Thomas Kuhn, who came up with the structure of scientific revolutions (depicted below).”





(Above: a screenshot from Mignan (2019), 'A preliminary text classification of the precursory accelerating seismicity corpus: inference on some theoretical trends in earthquake predictability research from 1988 to 2018', updated from Mignan (2011), 'Retrospective on the Accelerating Seismic Release (ASR) hypothesis: Controversy and new horizons', which nicely illustrates the cycle that Kuhn defined in his Structure of Scientific Revolutions.)


“We could think of the buzz for AI-based earthquake prediction as the new big thing but I think there is so far more marketing there than real physical discoveries.”


“My gut feeling is that we need to try things totally different than what has been done in the past (since nothing worked so far to predict earthquakes), maybe based on geometry/topology. This is the direction I'm personally taking but I may be wrong. Maybe complexity remains the way to go, or again, something totally different.”


“Maybe earthquakes are not predictable!”

We agree that would be a bit of a blow to earthquake science.

“Future will tell!”



From where I'm sitting, while people in fields touched by machine learning are wary of it being over-marketed in their discipline, they do seem optimistic.




Further reading

Mignan and Broccardo (2019), 'One neuron versus deep learning in aftershock prediction', Nature, https://www.nature.com/articles/s41586-019-1582-8



Bonus question.

Tell me something else interesting!

“I said a lot about my work. So, now about something else: meta-collecting. I have a passion for collecting objects from famous historic collections, which currently represents a gap in museology. I have found it to be an amazing way to learn about History, Socioeconomics, Art, Marketing, and even Psychology. But this is an entirely different story.”


The History of Collecting on Medium: https://medium.com/the-history-of-collecting

The Tricottet Collection: http://www.thetricottetcollection.com/

