12 Comments
Barry Lederman, “normie”:

Is anyone collecting detection data observed in the animal kingdom or is it only physical detection signals assigned by humans?

Kai Williams:

I'm not quite sure what you mean by this --- could you clarify? Thanks!

Kenny Easwaran:

I don’t think there’s any evidence of earthquakes animals have detected that humans haven’t. There’s some controversy about whether animals notice earthquakes sooner than humans. But I think usually people make those claims for significant earthquakes, and not the tiny ones these detection methods are helpful for.

jsb:

I have not seen this data collected - these networks take huge amounts of standardized and labeled data, so the basic earthquake catalogs are all we work with.

Seth:

I'm curious as to how the algorithmically-detected earthquakes are validated. It's not like you can travel to the time and place and say, "ah, yes, my vestibular system confirms that the earth indeed quaked." Does a seismologist just look at the seismograph data flagged by ChatEQK and say, "yes I agree that this looks like an earthquake"?

(Upon further reflection, I guess this question applies to any earthquake detected via seismograph...)

Kai Williams:

This is a really important question, and yeah, there's definitely validation to do. One thing that helps is having other seismograms nearby: even if an algorithm falsely detects an earthquake at one station, it's much less likely to also falsely detect one on all the other seismographs at times that are physically consistent with a single event. So that helps, to some extent.

But false positives still do happen. If you look at the volcano diagram, you can see earthquakes detected basically everywhere on the image. Joe Byrnes, the UT Dallas professor I talked to, pointed out that a lot of those aren't actually earthquakes, they just happened to pass through this second step of phase association between different stations. (He still calls the volcano paper "super high quality work," and notes this is a challenge with existing detection methods).
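To make the consistency idea concrete, here is a toy sketch of a phase-association-style check (this is an illustration I'm adding, not the actual pipeline from the post; the wave speed, tolerance, and station threshold are made-up assumptions):

```python
# Toy phase-association check: a candidate event is kept only if picks at
# several stations line up with plausible travel times from a common origin.
# All constants below are illustrative assumptions.

P_VELOCITY_KM_S = 6.0   # assumed crustal P-wave speed
TOLERANCE_S = 1.0       # how far a pick may stray from its predicted arrival
MIN_STATIONS = 4        # matching stations required before we trust the event

def associate(origin_time, station_distances_km, picks_by_station):
    """Count stations whose pick matches the predicted P arrival."""
    matched = 0
    for station, dist in station_distances_km.items():
        predicted = origin_time + dist / P_VELOCITY_KM_S
        for pick in picks_by_station.get(station, []):
            if abs(pick - predicted) <= TOLERANCE_S:
                matched += 1
                break
    return matched >= MIN_STATIONS

# A lone false trigger at one station can't satisfy this, but consistent
# arrivals across a network can.
dists = {"STA1": 60.0, "STA2": 70.0, "STA3": 80.0, "STA4": 90.0}
picks = {"STA1": [10.0], "STA2": [11.7], "STA3": [13.3], "STA4": [15.0]}
print(associate(0.0, dists, picks))
```

As the volcano example shows, spurious picks can still slip through this second step when enough of them happen to line up by chance.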

Seth:

That's interesting, I would have assumed that the models were trained on and took as input all available seismograph sensors simultaneously. Is it a latency issue where you want to detect the earthquake before it would show up on other sensors?

Now I kinda want to know about the psychometrics of earthquake detection, like what's the repeatability and inter-rater reliability between human seismologists; but I think that's well outside the scope of your (very interesting) article!

jsb (Oct 8, edited):

There are some multistation methods, but the single-station ones are still mostly used because, yeah, you can get weird peaks on individual seismograms - but if 10 stations all light up at the same time independently, then something happened. Multistation methods should perform better, but the independent measurements are comforting.
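The "10 stations light up at the same time" logic amounts to a coincidence trigger. A minimal sketch (my illustration; the window length and station threshold are made-up assumptions):

```python
# Coincidence trigger sketch: independent single-station triggers only
# count as an event if enough of them fall inside a short time window.
# window_s and min_stations are illustrative assumptions.

def coincident_stations(trigger_times, window_s=2.0, min_stations=10):
    """Return True if at least `min_stations` triggers fall within
    any `window_s`-second window."""
    times = sorted(trigger_times)
    for i, t0 in enumerate(times):
        count = sum(1 for t in times[i:] if t - t0 <= window_s)
        if count >= min_stations:
            return True
    return False

# Ten stations triggering within a fraction of a second: an event.
print(coincident_stations([100.0 + 0.1 * i for i in range(10)]))
# Ten triggers scattered over minutes: just noise.
print(coincident_stations([10.0 * i for i in range(10)]))
```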

John Quiggin:

What is the criterion for "AI" here? As you've described things, it seems like the AI approach is just an improved version of the classification models used in earlier work, like that for the 2008 earthquake. These in turn can be traced back to linear discriminant analysis, developed in the 1930s and done by hand for small data sets.

Having said that, I do think there is something emergent, at least in massive LLMs. As Stalin is alleged to have said, quantity has a quality all its own.

Kai Williams:

I was using AI here to refer to deep machine learning (ML) on large datasets of earthquake data (e.g., EQTransformer is 56 layers deep). The lines are definitely a bit fuzzy between machine learning and good ol' statistics, though I think it is interesting to note that the important seismology ML models often take inspiration directly from computer science research.

Nothing I talked about here dealt with LLMs specifically, though there was a fun paper from Mousavi et al. this year that used Google's Gemini to estimate felt shaking reports. It was surprisingly accurate: https://rallen.berkeley.edu/pub/2025Mousavi/MousaviEtAl-GeminiEstimatesEqShaking-GJI-2025.pdf.

Tj:

This is the content I did not know I wanted. Very interesting read.

Will O'Neil:

Having grown up in California I have a sort of visceral interest in earthquakes, although I was never tempted to study geophysics. (Beyond learning that "P waves" are pressure waves and "S waves," shear waves.) But many decades ago my career was transformed when I took a short course from John Tukey on FFT — the technique that made matched filtering a widely-practical technique. So it's fascinating to see how ML can get the same results so much more efficiently. It's not obvious to me, though, that ML models can provide the same sort of insight into underlying mechanisms that matched filtering often does.
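For readers who haven't seen why the FFT made matched filtering practical: correlating a template against a long record costs O(N·M) directly, but only O(N log N) via FFTs, since correlation becomes multiplication in the frequency domain. A hedged NumPy sketch (the signal sizes and noise level are illustrative, not from any real catalog):

```python
# Matched filtering via FFT cross-correlation: a known template is slid
# along a noisy record; the correlation peaks where the template is buried.
import numpy as np

def matched_filter(record, template):
    """Cross-correlate `template` against `record` using FFTs."""
    n = len(record) + len(template) - 1
    size = 1 << (n - 1).bit_length()  # zero-pad to a power of two
    spec = np.fft.rfft(record, size) * np.conj(np.fft.rfft(template, size))
    return np.fft.irfft(spec, size)[: len(record)]

rng = np.random.default_rng(0)
template = rng.standard_normal(100)          # the "known earthquake" waveform
record = rng.standard_normal(5000) * 0.1     # background noise
record[2000:2100] += template                # bury the template at sample 2000

# The correlation peak recovers the template's location in the record.
print(int(np.argmax(matched_filter(record, template))))
```

Whether ML models can offer the same mechanistic insight as matched filtering is exactly the kind of interpretability question the comment raises.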
