This post explains the basics behind two key statistical seismology terms: the Gutenberg-Richter law and the Poisson distribution.
The Gutenberg-Richter law is a relationship which every seismologist knows – for those who are not so aware (like me just over a year ago), it refers to an expression which relates the total number of earthquakes in any given region to the magnitude, by the following equation:

log10 N = a - bM

where N is the total number of earthquakes with magnitude greater than or equal to M, a is a constant, b is another constant which depends on the seismicity in the area (close to 1 in seismically active areas), and M is the magnitude. This can also be seen in the plot below.
What this expression does is relate the frequency of earthquakes to their magnitude, i.e., there are lots of small earthquakes and very few large earthquakes – makes sense.
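Plugging numbers into log10 N = a - bM makes that scaling concrete. A minimal sketch (the a-value of 4 here is a made-up illustration, not from any real catalogue):

```python
# Hypothetical a-value for illustration only; b is close to 1, as above
a, b = 4.0, 1.0
counts = {M: 10 ** (a - b * M) for M in (2, 3, 4)}
print(counts)
# with b = 1, each unit increase in magnitude means
# ten times fewer earthquakes of at least that size
```

So a region producing 100 magnitude-2+ events would be expected to produce only about 10 magnitude-3+ events and a single magnitude-4+ event over the same period.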
At the moment, I am creating synthetic seismograms (see Make some noise for how to make the seismic noise), and as I am trying to make my seismograms as realistic as possible, it is only logical to want my seismic events to follow a Gutenberg-Richter distribution as well. I have also added in a term for setting a minimum magnitude, as quite often there is a ‘fall-off’ at the lower end of the magnitude range, since these small events are sometimes harder to actually pick up in real life.
You are probably wondering where the fish part of my title comes into play – well, that’s because when I add my events, I am doing so with Poisson-spaced inter-event times (also below), with magnitudes that follow this distribution (i.e., lots of small and few large earthquakes). For those still not following, poisson = fish in French (ba dum tss).
Anyways, Poisson is used for the spacing of inter-event times as earthquakes are said to follow a Poisson distribution. This is a rule which assigns probabilities to the number of occurrences, given a known average rate. This can be seen in the mathematical formula below,

P(t) = 1 - e^(-t/τ)

where the left-hand term is the probability of at least one earthquake occurring in the time t, and τ is the average recurrence time – this can also be written as 1 - e^(-λt), where λ is the rate (i.e., λ = 1/τ), which can be estimated.
So, if we were to say that there was an average recurrence time of 31 days, then after 25 days there would be a 55% probability of an event. A Poisson distribution can be easily incorporated, as we just need to produce random numbers which follow this exponential term, as seen in the code at the end of this post.
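The 31-day example above is easy to check directly from the formula. A minimal sketch:

```python
import math

tau = 31.0           # average recurrence time, in days
lam = 1.0 / tau      # equivalent rate, lambda = 1/tau
t = 25.0             # elapsed time, in days

# Probability of at least one event within t days
p = 1.0 - math.exp(-lam * t)
print(round(p, 2))   # -> 0.55
```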
In summary, I utilise both Gutenberg-Richter and Poisson statistics for my events: the magnitudes are scaled to the Gutenberg-Richter law, and the events are spaced as per a Poisson distribution. I have supplied both functions (including how to do the Gutenberg-Richter plot) below.
import random

import matplotlib.pyplot as plt
import numpy as np


def gutenberg_richter(b=1.0, size=1, mag_min=0.0):
    """Generate sequence of earthquake magnitudes according to G-R law.

    logN = a - bM

    Returns both the G-R magnitudes and the normalised version.
    """
    # Inverse-transform sampling: magnitudes above mag_min fall off
    # with slope b on a log10 scale
    g = mag_min + np.log10(-np.random.rand(size) + 1.0) / (-1 * b)
    gn = g / g.max()
    return g, gn


# code for plotting the G-R distribution
# (unpack the tuple and histogram the magnitudes only)
mags, _ = gutenberg_richter(size=10**6)
y, bine = np.histogram(mags, bins=50)
binc = 0.5 * (bine[1:] + bine[:-1])
plt.plot(binc, y, '.-')
plt.yscale('log', nonpositive='clip')  # 'nonposy' was renamed in Matplotlib 3.3
plt.xlabel("Magnitude")
plt.ylabel("Frequency")


def poisson_interevent(lamb, number_of_events, st_event_2, samp_rate):
    """Find the inter-event times using Poisson statistics, for the events,
    by choosing lamb and number_of_events.

    We can use the random.expovariate function in Python, as this generates
    exponentially distributed random numbers with a rate of lambda. By taking
    the cumulative sum of these values, we then have the times at which to
    place the events, with Poisson inter-event times. If two events would
    overlap, the whole set of times is redrawn.

    lamb = lambda value for Poisson (rate, i.e. 1/mean spacing, in 1/seconds)
    number_of_events = how many events you want
    st_event_2 = your event (array of samples)
    samp_rate = sampling rate
    """
    poisson_values = 0
    while poisson_values == 0:
        poisson_values = [int(random.expovariate(lamb))
                          for i in range(number_of_events)]
        poisson_times = np.cumsum(poisson_values)
        # redraw everything if two consecutive events would overlap
        for i in range(len(poisson_times) - 1):
            if poisson_times[i + 1] - poisson_times[i] <= len(st_event_2) / samp_rate:
                poisson_values = 0
    return poisson_values, poisson_times
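As a quick sanity check on the magnitude sampling (a sketch I have added, not part of the original functions), you can estimate b back from the sampled magnitudes with Aki's maximum-likelihood estimator, b = log10(e) / (mean(M) - Mmin), and confirm it comes out close to the value you put in:

```python
import numpy as np

rng = np.random.default_rng(42)
b, mag_min, n = 1.0, 2.0, 10**6

# same inverse-transform sampling as in gutenberg_richter above
mags = mag_min - np.log10(1.0 - rng.random(n)) / b

# Aki (1965) maximum-likelihood estimate of b
b_hat = np.log10(np.e) / (mags.mean() - mag_min)
print(round(b_hat, 3))  # should be very close to the input b of 1.0
```

With a million samples the estimate lands within a fraction of a percent of the input b, which is a good sign the generator really does follow the Gutenberg-Richter fall-off.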