I'm curious as to how the algorithmically-detected earthquakes are validated. It's not like you can travel to the time and place and say, "ah, yes, my vestibular system confirms that the earth indeed quaked." Does a seismologist just look at the seismograph data flagged by ChatEQK and say, "yes I agree that this looks like an earthquake"?
(Upon further reflection, I guess this question applies to any earthquake detected via seismograph...)
This is a really important question, and yeah, there's definitely validation to do. One thing that helps is having other seismographs nearby: even if an algorithm falsely detects an earthquake at one station, it's much less likely to falsely detect it at all the other stations at times that are physically consistent with a single source. So this helps, to some extent.
But false positives still do happen. If you look at the volcano diagram, you can see earthquakes detected basically everywhere on the image. Joe Byrnes, the UT Dallas professor I talked to, pointed out that a lot of those aren't actually earthquakes; they just happened to make it through the second step of phase association between stations. (He still calls the volcano paper "super high quality work," and notes that this is a challenge with existing detection methods as well.)
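To make the "physically consistent times" idea concrete, here's a minimal, hypothetical sketch of the kind of travel-time check a phase associator performs: it hunts for a single origin (a place and time) that explains picks at several stations at once, under a crude homogeneous-velocity assumption. All station coordinates, pick times, and thresholds below are made up purely for illustration; real associators use proper velocity models and much smarter search.

```python
import math

# Hypothetical station coordinates (km east, km north) -- made-up numbers.
stations = {
    "STA1": (0.0, 0.0),
    "STA2": (30.0, 10.0),
    "STA3": (5.0, 45.0),
    "STA4": (60.0, 60.0),
}

# Hypothetical P-wave pick times (s). STA4's pick is a spurious detection
# that no common origin will explain along with the other three.
picks = {"STA1": 4.8, "STA2": 2.4, "STA3": 4.9, "STA4": 13.9}

P_VELOCITY_KM_S = 6.0  # rough crustal P-wave speed, assumed homogeneous
TOLERANCE_S = 1.0      # how far off a pick may be and still associate
MIN_STATIONS = 3       # consistent picks required before declaring an event

def predicted_arrival(origin_xy, origin_t, station_xy):
    """Predicted arrival time at a station for a trial origin."""
    return origin_t + math.dist(origin_xy, station_xy) / P_VELOCITY_KM_S

def consistent_picks(origin_xy, origin_t):
    """Count stations whose picks match this trial origin within tolerance."""
    return sum(
        abs(picks[name] - predicted_arrival(origin_xy, origin_t, xy)) < TOLERANCE_S
        for name, xy in stations.items()
    )

# Brute-force grid search over candidate origins: declare an event only if
# some single origin explains enough picks at once. A lone false pick
# (like STA4's) won't line up, which is why multi-station association helps.
best = max(
    ((x, y, t) for x in range(0, 70, 5)
               for y in range(0, 70, 5)
               for t in range(0, 10)),
    key=lambda o: consistent_picks((o[0], o[1]), o[2]),
)
n = consistent_picks((best[0], best[1]), best[2])
print(f"best trial origin {best}: {n} consistent picks ->",
      "event" if n >= MIN_STATIONS else "no event")
```

In this toy setup the spurious STA4 pick doesn't associate with any origin that explains the other three, so it gets dropped; but with enough noisy picks flying around, chance alignments can still slip through, which is how false positives like the ones on the volcano diagram arise.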
That's interesting; I would have assumed that the models were trained on, and took as input, all available seismograph sensors simultaneously. Is it a latency issue, where you want to detect the earthquake before it would show up on other sensors?
Now I kinda want to know about the psychometrics of earthquake detection, like, what's the repeatability and inter-rater reliability between human seismologists? But I think that's well outside the scope of your (very interesting) article!
Is anyone collecting detection data observed in the animal kingdom or is it only physical detection signals assigned by humans?
I'm not quite sure what you mean by this --- could you clarify? Thanks!