The unique seismicity of Mount St Helens

Most people know Mount St. Helens from the explosive eruptions it produced during 1980–1986. To this day, that eruptive period remains the deadliest volcanic event in American history. But this volcano has also produced some very interesting seismicity: extremely repetitive earthquakes that resemble a drumbeat pattern, which is why I am featuring it in this post.

But first, some background. Mount St. Helens is a volcano in the Cascade Range, located in southwestern Washington State, USA. The Cascade Range extends all the way from British Columbia to California and houses many volcanoes alongside its mountains. The reason for this is that it is part of the Ring of Fire: the belt encircling the Pacific Ocean where the majority of the world’s earthquakes and volcanic eruptions occur.

Mount St Helens (Lyn Topinka – CVO Photo Archive)

Mount St. Helens is still an active volcano to this day, with several major explosive eruptions and many smaller eruptions in its recorded history. 1980–1986 was one of these eruptive periods, during which the volcano experienced increased seismicity and explosive activity, resulting in 57 deaths.

Between 1989 and 2001, Mount St. Helens again had periods of increased seismicity, associated with hydrothermal gas explosions. After this, it returned to a state of rest until 2004, when it reawakened.

From 2004 to 2008, Mount St Helens exhibited increased seismicity again. This was unlike the previous active periods, as it didn’t actually have that many explosive events (only two! The 1980–1986 period had 17 lava dome-building episodes and hundreds of small gas and steam explosions). The other interesting quality of this reawakened period was the type of seismicity that occurred: small, regularly spaced earthquakes repeated throughout the eruption. They are nicknamed “drumbeats” due to their resemblance to the sound pattern produced by the beating of a drum.

A day’s worth of seismicity during this period can be seen below, where each horizontal line represents 90 minutes.

Mount St Helens day seismicity – one horizontal line is 90 minutes long.

We can even zoom in and look at a 4-hour block (each horizontal line this time is only 30 minutes).

Mount St Helens 4 hour seismicity – one horizontal line is 30 minutes long.

The repetitiveness of these small earthquakes is very clear to see in these images, which led scientists to wonder: what is causing these drumbeats?

Theory one (Iverson et al., 2006; Iverson, 2008; Anderson et al., 2010)
The drumbeats are caused by stick-slip motion of a piece of hardened magma (a conduit plug) being forced upwards by ascending magma through the conduit, the pathway that carries magma from the magma chamber to the surface. As the plug is pushed up, it repeatedly sticks and slips against the conduit walls, and this repeated slipping could be what causes the drumbeats.

Theory two (Waite et al., 2008)
The volcano is essentially acting like a steam engine. This would be due to there being a complicated crack system (think of those plumber games where you connect up all the pipes so the water can flow) and a steady supply of heat and fluid from the magma chamber. Repeated pressurisation and release of fluid through these cracks would then cause the drumbeats, a bit like a train chuffing along.

Some great analysis has been done on the similarity of these seismic signals (see References): if the drumbeats are similar to one another, it means that they have effectively come from the same source. This is where methods such as my correlation matrix become handy, as it measures how well correlated the events are with one another. With this analysis, we can then see which events are true repeating events.

Mount St Helens is a great case study for building up any algorithm that focuses on finding patterns in seismic data, which is why I have been looking into it. This can then feed into our analysis of repeating earthquakes elsewhere, although I doubt we will ever get as clean a signal as these drumbeats!

–Roseanne

References
Anderson, K., Lisowski, M., and Segall, P. (2010). Cyclic ground tilt associated with the 2004-2008 eruption of Mount St. Helens. Journal of Geophysical Research: Solid Earth, 115(11):1–29.

Iverson, R. M. (2008). Dynamics of seismogenic volcanic extrusion resisted by a solid surface plug, Mount St. Helens, 2004–2005. In Sherrod, D., Scott, W., and Stauffer, P., editors, A Volcano Rekindled: The Renewed Eruption of Mount St. Helens 2004–2006, U.S. Geological Survey Professional Paper 1750, chapter 21, pages 425–460. USGS.

Iverson, R. M., Dzurisin, D., Gardner, C. A., Gerlach, T. M., LaHusen, R. G., Lisowski, M., Major, J. J., Malone, S. D., Messerich, J. A., Moran, S. C., Pallister, J. S., Qamar, A. I., Schilling, S. P., and Vallance, J. W. (2006). Dynamics of seismogenic volcanic extrusion at Mount St Helens in 2004–05. Nature, 444(7118):439–443.

Waite, G. P., Chouet, B. A., and Dawson, P. B. (2008). Eruption dynamics at Mount St. Helens imaged from broadband seismic waveforms: Interaction of the shallow magmatic and hydrothermal systems. Journal of Geophysical Research: Solid Earth, 113(2):1–22.

Correlation does not imply causation... but it does give you a hint

This is just a short note on plotting a correlation matrix using the seaborn package in Python. I’ve found that this is the best way of showing the similarity between arrays to people who are unfamiliar with correlations. It also allows you to add some colour to your plots, which is always a nice thing! It can be used for a multitude of purposes, so I have left the variable names in my code (at the bottom) as general as possible, so that it can be copied and pasted by other users.

For those who have not seen these matrices before, they show the similarity between different arrays. If two arrays have a correlation value of 1.0, they have a perfect correlation (i.e. they are exactly the same), and a correlation value of 0.0 means that there is no similarity between the two at all. This can be used to compare datasets with one another if you are looking for a similar pattern.
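
As a quick sanity check of what those two extremes look like (this snippet is mine, not from the original post; numpy’s corrcoef gives the correlation coefficient directly):

import numpy as np

a = np.sin(np.linspace(0, 10, 500))
b = a.copy()                                    # identical array
c = np.random.default_rng(0).normal(size=500)   # unrelated array

print(np.corrcoef(a, b)[0, 1])   # ~1.0: perfect correlation
print(np.corrcoef(a, c)[0, 1])   # ~0.0: no meaningful correlation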

Also, it is worth noting that one of the principal statements made in statistics is that,

“Correlation does not imply causation”

So you should also have some further information to back up any correlation between arrays.

An example of one of these correlation matrices can be seen below, which shows the comparison of 54 arrays with each other (i.e. I have taken each array and cross-correlated it with the other 53 arrays). The squares with a darker tone have a higher correlation than those with a lighter tone.

Correlation matrix for 54 arrays

Your first step is putting your correlation values into a pandas.DataFrame; you can then just use the code below to create the matrix! This table should contain the full dataset, and the code masks one half of it to give the triangle shape above (otherwise you would also end up with its mirror image across the diagonal). I have used absolute values as I didn’t want to deal with negative correlations at this stage (a value of -1 means a perfect match, but with the polarity of the signal reversed).

If you don’t have any correlation values yet, I’d recommend reading up on cross-correlation, which is the function you can use to obtain them. I might produce a blog post on this at a later date, but it is worth reading into it yourself so that you can fully understand the output.
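
For a flavour of what that looks like, here is a minimal sketch of computing a single correlation value between two equal-length traces with plain numpy. The function name and normalisation choice are placeholders of mine rather than anything from this post; the peak of the normalised cross-correlation is 1.0 when one trace is an exact (possibly time-shifted) copy of the other.

import numpy as np

def max_normalised_xcorr(x, y):
    """Peak of the normalised cross-correlation between two equal-length arrays."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    return np.max(np.correlate(x, y, mode="full"))

t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t)
print(max_normalised_xcorr(signal, signal))                # 1.0
print(max_normalised_xcorr(signal, np.random.randn(500)))  # much lower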

— Roseanne


import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=1.5)

def corr_mat_plot(correlation_mat, show = True, outfile = None):
    """
    Plots the correlation matrix in an image plot to show where the
    highest correlation between arrays is.
    """
    # Make the mask for the upper triangle so that it doesn't mirror image the values
    mask = np.zeros_like(correlation_mat, dtype=bool)
    mask[np.triu_indices_from(mask)] = True

    # Set up the figure
    fig, ax = plt.subplots(figsize=(10, 10))
    sns.set(font_scale=1.5)

    # Draw matrix
    sns.heatmap(np.abs(correlation_mat), cmap = sns.cubehelix_palette(8, as_cmap=True),
                mask=mask, vmin = 0,vmax=1, square=True, xticklabels=50, yticklabels=50,
                cbar_kws = {"shrink": .8, "label" : ("Correlation value")}, ax=ax)

    plt.title("Correlation between the arrays")

    if outfile:
        fig.savefig(outfile)

    if show:
        plt.show()

    return fig
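
A minimal usage sketch, assuming your correlation values are already in a pandas DataFrame (the random data, column names and output file name below are placeholders of mine, not from the post); pandas’ .corr() method gives the pairwise correlation matrix directly:

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(1000, 54)),
                    columns=["array_%d" % i for i in range(54)])
corr_mat_plot(data.corr(), show=False, outfile="correlation_matrix.png")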

Gutenberg-Richter and fish?

This post is for explaining the basics behind two key statistical seismology terms: Gutenberg-Richter and Poisson distributions.

Gutenberg-Richter

The Gutenberg-Richter law is a relationship that every seismologist knows. For those who are not so aware (like me just over a year ago), it is an expression which relates the number of earthquakes in any given region to their magnitude, by the following equation:

\log_{10} N = a - bM

where N is the number of earthquakes with magnitude greater than or equal to M, a is a constant describing the overall rate of seismicity in the region, and b is another constant which depends on the seismicity in the area (close to 1 in seismically active areas). With b = 1, each unit increase in magnitude corresponds to roughly ten times fewer earthquakes. This can also be seen in the plot below.

Gutenberg-Richter law

This shows the Gutenberg-Richter distribution for a b value of 1. Code for this is at the end of the post.

What this expression does is relate the frequency of earthquakes to their magnitude, i.e., there are lots of small earthquakes and very few large earthquakes – makes sense.

At the moment, I am creating synthetic seismograms (see Make some noise for how to make the seismic noise), and as I am trying to make my seismograms as realistic as possible, it is only logical to have my seismic events follow a Gutenberg-Richter distribution as well. I have also added in a term for setting a minimum magnitude, as quite often there is a ‘fall-off’ in the number of recorded earthquakes at the lower end of the magnitude range, since these small events are harder to actually pick up in real life.

 

Poisson distribution

You are probably wondering where the fish part of my title comes into play. Well, that’s because when I add my events, I am doing so with Poisson-spaced inter-event times (also below), with magnitudes that follow this distribution (i.e., lots of small and few large earthquakes). For those still not following, Poisson = fish in French... (ba dum tss)

Anyway, a Poisson process is used for the spacing of inter-event times, as earthquake occurrence is often said to follow a Poisson distribution. This is a rule which assigns probabilities to the number of occurrences in a given time window, given a known average rate. This can be seen in the mathematical formula below,

P(n \geq 1, t, \tau) = 1 - e^{-t/\tau}

where the left-hand term is the probability of at least one earthquake occurring within time t, given an average recurrence time \tau. Equivalently, writing \tau = \frac{1}{\lambda}, where \lambda is the rate, this becomes P = 1 - e^{-\lambda t}.

So, if we were to say that there were an average recurrence time of 31 days, then after 25 days there would be a 55% probability of at least one event. A Poisson distribution can be easily incorporated, as we just need to produce random numbers which scale with this \lambda term, as seen in the code at the end of this post.
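
As a quick check of the numbers above (a throwaway snippet of mine, not part of the synthetic-seismogram code):

import numpy as np

tau = 31.0   # average recurrence time in days
t = 25.0     # elapsed time in days
print(1 - np.exp(-t / tau))   # ~0.55, i.e. a 55% chance of at least one event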

In summary, I utilise both Gutenberg-Richter and Poisson statistics for my events: the magnitudes are scaled according to Gutenberg-Richter, and the events are spaced according to a Poisson distribution. I have supplied both functions (including how to do the Gutenberg-Richter plot) below.

— Roseanne


def gutenberg_richter(b=1.0, size=1, mag_min=0.0):
    """Generate sequence of earthquake magnitudes
    according to G-R law. logN = a - bM
    Includes both the G-R magnitudes, and the
    normalised version.
    """
    g = mag_min + np.log10(-np.random.rand(size) + 1.0) / (-1 * b)
    gn = g / g.max()

    return g, gn

# code for plotting the G-R distribution
g, gn = gutenberg_richter(size=10**8)
y, bine = np.histogram(g)
binc = 0.5 * (bine[1:] + bine[:-1])
plt.plot(binc, y, '.-')
plt.yscale('log', nonpositive='clip')
plt.xlabel("Magnitude")
plt.ylabel("Log Cumulative frequency")

def poisson_interevent(lamb, number_of_events, st_event_2, samp_rate):
    """ Finds the interevent times using Poisson, for the events, by choosing
    lamb and number_of_events. We can use the random.expovariate function in
    Python (this requires `import random`), as this generates exponentially
    distributed random numbers with a rate of lambda for the first x number
    of events ( [int(random.expovariate(lamb)) for i in range(number_of_events)] ).
    By taking the cumulative sum of these values, we then have the times at
    which to place the events with Poisson inter-event times.
    Here we create an array with a list of times which are spaced at a Poisson
    rate of lambda. This will then be used as the times of the noise in which
    we place the event at.
    lamb = lambda value for Poisson
    number_of_events = how many events you want
    st_event_2 = your event
    samp_rate = sampling rate
    """
    poisson_values = 0
    while poisson_values == 0:
        # draw integer inter-event times from an exponential distribution
        poisson_values = [int(random.expovariate(lamb)) for i in range(number_of_events)]
        poisson_times = np.cumsum(poisson_values)
        # start again if any two events would overlap (gap shorter than the event length)
        for i in range(len(poisson_times) - 1):
            if poisson_times[i + 1] - poisson_times[i] <= len(st_event_2) / samp_rate:
                poisson_values = 0

    return poisson_values, poisson_times
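
Here is a hypothetical usage sketch of the two functions together (the event length, sampling rate and rate parameter are placeholders of mine, not values from the post): it draws ten Gutenberg-Richter magnitudes and pairs them with Poisson-spaced occurrence times.

import numpy as np
import random

dummy_event = np.zeros(10 * 100)   # a 10 s long placeholder event at 100 Hz
mags, mags_norm = gutenberg_richter(b=1.0, size=10, mag_min=1.0)
values, times = poisson_interevent(lamb=1/600.0, number_of_events=10,
                                   st_event_2=dummy_event, samp_rate=100)
# times are in the same units as 1/lamb (seconds here)
print(list(zip(times, np.round(mags, 2))))   # (occurrence time, magnitude) pairs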

Make some noise

I spent a long time looking at how to ‘create noise’ in order to make some synthetic seismograms, so I thought that I would put up my code in case anyone ends up in the same spot as me! I take several steps to model this (a minimal sketch of the idea follows the list, and my full function is at the end of the post):

  • Load in some typical seismic noise (I have taken mine from a quiet day near Tungurahua volcano in Ecuador), which has been detrended and demeaned.
  • Take the Fast Fourier Transform (FFT) of this (this puts the data into the frequency domain).
  • Smooth the FFT data.
  • Multiply this by the FFT of white noise.
  • Take the Inverse Fast Fourier Transform (IFFT) of the product (this takes it back into the time domain).
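
Stripped of the ObsPy handling and segment averaging in the full function below, the core idea looks something like this (a numpy-only sketch; the function name and the smoothing window length are placeholders of mine):

import numpy as np

def shape_white_noise(reference_noise, n_smooth=20):
    """Colour white noise with the smoothed spectrum of a reference noise recording."""
    # FFT of the real (detrended, demeaned) seismic noise
    ref_spectrum = np.abs(np.fft.rfft(reference_noise))
    # simple moving-average smoothing of the amplitude spectrum
    kernel = np.ones(n_smooth) / n_smooth
    smooth_spectrum = np.convolve(ref_spectrum, kernel, mode="same")
    # white noise of the same length, taken into the frequency domain
    white = np.random.normal(0.0, 1.0, len(reference_noise))
    white_fft = np.fft.rfft(white)
    # shape the white noise by the reference spectrum and return to the time domain
    synthetic = np.fft.irfft(white_fft * smooth_spectrum, n=len(reference_noise))
    return synthetic / np.abs(synthetic).max()   # normalised synthetic noise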

The results of this are shown below, where the green is our white noise, the blue is our real seismic noise, and the pink is our synthetic seismic noise.

Createnoiseplot2_comp_of_white_and_T_and_created

Creating seismic noise

There are a few other intermediate steps in the full code (such as looping through so that it is built up in segments), but it is quite a simple process! A few other libraries are loaded in beforehand, such as ObsPy and NumPy, but you will probably have loaded these already if you are doing this.

Now go and make some noise!

— Roseanne


# Assumes the imports mentioned above are already in place: numpy as np,
# ObsPy's UTCDateTime and Trace, an inverse FFT `ifft`, plus the `mlab.demean`
# and `movingaverage` helpers.
def noise_segmenting(poisson_times, st_event_2, st_t, noise_level, samp_rate, delta):
    """ Creates the noise array so that it is big enough to host all of the events.
    Creating the noise by multiplying white noise by the seismic noise, in the frequency domain.
    We then inverse FFT it and scale it to whatever SNR level is defined to output the full
    noise array.
    poisson_times = array of times where we then put in the seismic events (boundary for the noise)
    st_event_2 = size of events that we are putting in later (again, this is a boundary)
    st_t = seismic noise array that you are basing your synthetic on
    samp_rate, delta = trace properties of st_t
    """
    # end time for noise to cover all events
    noise_lim = (poisson_times[-1] + len(st_event_2)) * 2  # gives some time after last event

    # load in seismic noise to base the synthetic type on
    st_noise_start_t = UTCDateTime("2015-01-22T01:00:00")
    st_noise_end_t = UTCDateTime("2015-01-22T01:02:00")
    test_trace = st_t[0].slice(st_noise_start_t, st_noise_end_t)
    test_trace_length = int(len(test_trace) / test_trace.stats.sampling_rate)

    # setting the boundary for how many loops etc
    minutes_long = noise_lim / st_event_2.stats.sampling_rate
    noise_loops = int(np.ceil(minutes_long / 2.0))  # working out how many 2 minute loops we need

    # zero array to hold each segment of synthetic noise
    noise_array = np.zeros([noise_loops, len(test_trace)])

    # loop for the amount of noise_loops needed (in segments)
    for j in range(noise_loops):
        # we average the seismic noise over twenty 2 minute demeaned samples
        tung_n_fft = np.zeros([20, int(np.ceil(len(test_trace) / 2.0))])
        for i in range(20):
            st_noise = st_t[0].slice(st_noise_start_t + (i * test_trace_length),
                                     st_noise_end_t + (i * test_trace_length))
            noise_detrended = st_noise.detrend()
            noise_demeaned = mlab.demean(noise_detrended)
            noise_averaging = Trace(noise_demeaned).normalize()
            tung_n_fft[i] = np.fft.rfft(noise_averaging.data)

        # work out the average fft
        ave = np.average(tung_n_fft, axis=0)
        # smooth the data
        aves = movingaverage(ave, 20)

        # create white noise
        whitenoise = np.random.normal(0, 1, len(noise_averaging))
        whitenoise_n = Trace(whitenoise).normalize()
        # FFT the white noise
        wn_n_fft = np.fft.rfft(whitenoise_n.data)
        # multiply the FFT of white noise and the FFT smoothed seismic noise
        newnoise_fft = wn_n_fft * aves
        # IFFT the product
        newnoise = ifft(newnoise_fft, n=len(st_noise))

        noise_array[j] = np.real(newnoise)

    # transform the noise into an Obspy trace
    full_noise_array = np.ravel(noise_array)
    full_noise_array_n = Trace(np.float32(full_noise_array)).normalize()
    full_noise_array_n_scaled = Trace(np.multiply(full_noise_array_n, noise_level))
    full_noise_array_n_scaled.stats.sampling_rate = samp_rate
    full_noise_array_n_scaled.stats.delta = delta

    return full_noise_array_n_scaled

Multi-fault ruptures : higher likelihood of larger Californian earthquakes

As I have mentioned in my About section, I am funded by NERC (that’s the Natural Environment Research Council) for my PhD through a DTP, which is like a special training program for PhD students (see here for more information). I would recommend that anyone else applying for a PhD try to get one which is part of one of these programs, as it has been great!

Through my DTP, I was awarded funding to undertake a two-week internship at a business, in order to get some real-life experience. I think we all know that two weeks is not quite long enough to do much, but it is really useful for forging connections between my own research interests and industry work. In my case, there were connections with Risk Management Solutions (RMS), a company that models catastrophe risk for insurance purposes. Their work ranges from modelling the risks associated with earthquakes (so right up my street) to terrorism risk.

My work entailed looking into multi-fault ruptures in California. Multi-fault ruptures were once thought of as a rare case of an earthquake ‘jumping’ from fault to fault. However, they are a deadly occurrence, as multi-fault ruptures produce larger earthquakes. Cases such as the M7.8 2016 Kaikoura, New Zealand earthquake show just how massive multi-fault ruptures can be. This particular earthquake is reported as having ruptured 21 different faults and caused about 180 km of surface rupture.

ShakeMap of the M7.8 2016 Kaikoura earthquake – the red areas are those which experienced the largest shaking intensity (from USGS).

This is probably one of the most complex earthquake cases, as there are just so many faults involved. It also brings up the question: what if there were a similar case in California? It could be deadly.

As our understanding of earthquakes is ever evolving, it is important that earthquake forecast models incorporate this information. The Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) is a new earthquake forecast model for the whole of California, developed by a broad group of specialists. It is a highly advanced model, as it estimates the magnitude, location, and likelihood of potential earthquakes. UCERF3 is particularly new and innovative compared with other models because it incorporates multi-fault ruptures. By including these types of ruptures in the forecast, the estimated likelihood of larger earthquakes (M > 7) increased.

The overall likelihood of a magnitude 6.7 (or higher) earthquake occurring within the next 30 years was calculated by UCERF3 and is shown below. The research done for the UCERF3 model not only shows the increased likelihood of earthquakes in California, but also how interconnected the whole fault system is.

Likelihood that each region of California will have a magnitude 6.7 (or larger) earthquake in the next 30 years.

UCERF3 produced this map of California (white lines define borders) showing the likelihood that a magnitude 6.7 (or higher) earthquake will occur within the next 30 years (from Field, E.H., and 2014 Working Group on California Earthquake Probabilities, 2015, UCERF3: A new earthquake forecast for California’s complex fault system: U.S. Geological Survey Fact Sheet 2015–3009, 6 p., https://dx.doi.org/10.3133/fs20153009.)

I would recommend checking out UCERF3 if you are interested – it’s fascinating being able to see all the different fault sections and how they are connected to one another.

It’s a shame I only had two weeks to look into this area – hopefully I will get the chance to revisit it and apply the model myself.

–Roseanne