The orientations of the stations in the Auckland seismic network

The stations of the seismic network that monitors the Auckland Volcanic Field are a mix of surface and borehole seismographs. Our aim here is to figure out the orientation of the three orthogonal components of each seismometer.

The vertical component

The vertical components of the seismic network are tested with earthquakes whose epicentres are only ~100 km from the AVF. These are very deep earthquakes, ensuring that high-frequency signal impinges on the network from below.

The figure below shows that seismic data from these events generally produce a first arrival with a negative maximum, except at stations KBAZ and WTAZ:


From this example and seismic data from other earthquakes, we conclude that the polarity of the vertical component is reversed at stations KBAZ and WTAZ.
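For readers who want to reproduce this check, below is a minimal sketch in Python with obspy; the file name and pick time are hypothetical placeholders, and the idea is simply to compare the sign of the first P swing across the network for one deep event:

import numpy as np
from obspy import read, UTCDateTime

# compare the sign of the first P swing across stations for one deep
# event (the file name and pick time below are placeholders):
st = read("deep_event.mseed")
p_pick = UTCDateTime("2019-01-01T00:00:10")
for tr in st.select(component="Z"):
    win = tr.copy()
    win.trim(p_pick, p_pick + 2)  # a 2 s window around the onset
    first_swing = win.data[np.argmax(np.abs(win.data))]
    print(tr.stats.station, "up" if first_swing > 0 else "down")

A station whose sign consistently disagrees with the rest of the network is a candidate for a reversed vertical component.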

The horizontal components

The majority of the seismic stations in the AVF are in boreholes between 50 and 400 m deep. Particularly for these stations, the orientation of the horizontal components of each station is unknown. Luckily, there are several techniques to estimate the orientations of the orthogonal horizontal components.

Ambient Noise Orientation

Stachnik et al. (2012) used an earthquake-sourced Rayleigh wave polarisation method to estimate the orientation of ocean-bottom seismometers. Zha et al. (2013) used ambient seismic noise, applying a polarisation technique to the Rayleigh wave signal in cross-correlation functions (CCFs) to estimate the orientation of ocean-bottom seismometers. Here we use ambient noise orientation, based on the method of Zha et al. (2013), but applied to borehole seismometers. We performed ambient noise orientation on all three-component seismometers in the network, using the surface stations as a control group. The method involves three stages:

  1. Computing the three-component cross-correlation functions for all station pairs from ambient noise.
  2. Measuring sensor orientation through an optimisation process.
  3. Selecting data and performing statistical analysis to obtain the final orientation angles.

Three-component cross-correlation functions

Cross-correlation functions (CCFs) can be computed for any pair of seismic sensors. Computing all possible cross-correlation functions for a pair of three-component seismometers would produce nine functions. For this technique, we use only the CCFs that potentially contain Rayleigh wave signal. Rayleigh wave motion occurs in the vertical plane, so we expect virtual Rayleigh wave signal to emerge in the vertical-vertical (C_zz) and radial-vertical (C_rz) CCFs. For seismometers with unknown orientation, the C_rz signal is distributed across the two horizontal-vertical CCFs (C_1z and C_2z).

Optimisation

For each pair of seismometers, the symmetric CCFs are bandpass filtered to the frequency band where Rayleigh waves with the greatest amplitude are expected to emerge (in our case 0.05–0.5 Hz). We then perform a grid search over 360 degrees, with each angle serving as an estimate of the rotation that would transform C_1z and C_2z into C_rz and C_tz. For each estimate of C_rz, the zero-lag cross-correlation and coherence between C_rz and the 90-degree phase-shifted C_zz are computed. The maximum cross-correlation marks the best estimate of C_rz, and the coherence at maximum cross-correlation is used in the data selection. Because the coordinates of both stations are known, the back azimuth, and hence the sensor orientation, can then be calculated.
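A minimal sketch of this grid search follows, using numpy and scipy only; the rotation sign convention is an assumption that depends on the channel naming, and the coherence computation is omitted for brevity:

import numpy as np
from scipy.signal import hilbert

def orientation_search(c1z, c2z, czz):
    # 90-degree phase-shifted C_zz: the imaginary part of its analytic signal
    czz_shift = np.imag(hilbert(czz))
    angles = np.arange(360)
    corr = np.zeros(360)
    for i, a in enumerate(np.deg2rad(angles)):
        # assumed rotation convention: C_rz = cos(a)*C_1z + sin(a)*C_2z
        crz = np.cos(a) * c1z + np.sin(a) * c2z
        # normalised zero-lag cross-correlation with the shifted C_zz:
        corr[i] = np.dot(crz, czz_shift) / (
            np.linalg.norm(crz) * np.linalg.norm(czz_shift))
    best = int(np.argmax(corr))
    return angles[best], corr[best]  # best angle (degrees), max correlation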


The figure below illustrates this process for the AWAZ-ABAZ pair. On the left are C_zz, C_1z, and C_2z, with the phase-shifted C_zz in dashed red. In the middle panel, the correlation and coherence are plotted against angle. On the right are C_zz, and the C_rz and C_tz based on the angle at maximum correlation (260 degrees in this case), again with the phase-shifted C_zz in dashed red.

Data Selection and Statistical Analysis

Estimates with a coherence below 0.5 at maximum correlation tend to be more scattered, so these estimates were discarded. We then take the mean of the remaining estimates as our final estimate of the orientation. In the figure below, the estimates for the AWAZ station are presented in a polar plot. The radial axis represents the number of estimates in each 1-degree bin; red marks estimates with coherences below 0.5 that were removed.
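Because the estimates are angles, a plain arithmetic mean can fail near 0/360 degrees. Below is a sketch of the selection and averaging step using a circular mean; the 0.5 coherence cut matches the text, the rest is a minimal assumption:

import numpy as np

def mean_orientation(angles_deg, coherence, min_coh=0.5):
    # discard low-coherence estimates, then average unit vectors
    # rather than raw degrees (angles are circular quantities):
    keep = np.asarray(coherence) >= min_coh
    a = np.deg2rad(np.asarray(angles_deg, dtype=float)[keep])
    return np.rad2deg(np.arctan2(np.sin(a).mean(), np.cos(a).mean())) % 360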


Stachnik, J.C., Sheehan, A.F., Zietlow, D.W., Yang, Z., Collins, J., & Ferris, A., 2012. Determination of New Zealand ocean bottom seismometer orientation via Rayleigh-wave polarization. Seismological Research Letters, 83(4).

Zha, Y., Webb, S.C., & Menke, W., 2013. Determining the orientations of ocean bottom seismometers using ambient noise correlation. Geophysical Research Letters, 40(14).

Ambient seismic noise correlations for the AVF

The key idea behind ambient noise seismology is to focus purely on the seismic noise that traditional seismology once removed. Incoming signals from 11 seismic stations across Auckland are filtered to the frequencies attributed to ocean noise. This noise is continuous, so measurements can be taken constantly. The Auckland Volcanic Field (AVF) is ideal to study with this technique as we have oceans on both sides, creating a relatively even distribution of noise in the earth beneath us; hence we say that the AVF has a pseudo-isotropic wavefield.

11 stations comprising the Auckland seismic station network
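As a sketch of the filtering step with obspy (the file name is a placeholder, and the 0.05–0.5 Hz band is borrowed from the orientation post above; the band actually used for monitoring may differ):

from obspy import read

# read one day of data from a station and isolate the ocean-noise band:
st = read("AWAZ_day.mseed")  # placeholder file name
st.detrend("demean")
st.taper(max_percentage=0.05)
st.filter("bandpass", freqmin=0.05, freqmax=0.5, corners=4, zerophase=True)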

Waveforms from two stations are filtered at the frequencies associated with ocean noise and sliced into smaller chunks (which are later stacked). One waveform is shifted relative to the other and the similarity, or correlation, between them is measured. A plot of this similarity against the time shift approximates a Green's function. This is done for each chunk, and the results are added together to create an impulse response between the two stations for each day.
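The toy example below sketches the correlation of a single chunk on synthetic noise with numpy; the sampling rate, chunk length, and 25 s delay are all assumptions for illustration:

import numpy as np

fs = 10.0                                  # sampling rate in Hz (assumption)
n = int(600 * fs)                          # a ten-minute chunk
rng = np.random.default_rng(0)
a = rng.standard_normal(n)                 # noise recorded at "station A"
b = np.roll(a, int(25 * fs)) + 0.5 * rng.standard_normal(n)  # A delayed by 25 s

ccf = np.correlate(a, b, mode="full")      # similarity at every time shift
lags = np.arange(-n + 1, n) / fs           # lag time in seconds
# the peak lag recovers the 25 s travel time; its sign indicates which
# station the noise energy reaches first:
print("peak at %.1f s" % lags[np.argmax(ccf)])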


Green’s function between Awhitu and Waiheke Island


Above is an example of a Green's function between the Awhitu station and the Waiheke Island station (shown as orange dots on the map above). The right side (positive lag time) shows the impulse response as if there were a quake at the Awhitu station, as seen from the Waiheke station; the left side shows the response at the Awhitu station as if there were a quake at the Waiheke station. Notice that the virtual quake coming from the Awhitu side is larger than that from the Waiheke side: the Awhitu station is closer to the West Coast, which is a strong source of ambient ocean noise. Dispersion can also be observed, with low-frequency waves arriving first; these waves have a greater wavelength, so they travel through deeper parts of the earth where wave speeds are higher. MSNoise.org provides a suite of tools in Python to calculate the wave fields of these “virtual earthquakes” between each combination of station pairs of the Auckland network.

The time it takes for this impulse to travel from station to station can be determined from the Green's function. In this example it takes about 25 seconds for surface seismic waves to travel from Awhitu to Waiheke, and vice versa.

To use this method as a monitoring tool, a reference is created to show what a normal Green's function looks like. Then our data for a set period of time is stacked. A short stack (i.e. 1 or 2 days) shows more small fluctuations, whereas longer stacks show larger trends more clearly but lack the resolution to see short-term impacts. These stacks are compared to the reference. Below is a plot of the change in seismic velocity over the last 6 months for different stack lengths. Shorter stack lengths appear to have more random fluctuations, whereas the 30-day-long stack at the bottom allows a smoother trend to be seen. A decrease in seismic velocity in October and an increase going into December can be observed.
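One common way to turn the comparison between a stack and the reference into a velocity change is the stretching method; the sketch below is a generic illustration, not necessarily the algorithm MSNoise applies internally:

import numpy as np

def dvv_stretching(reference, current, lags, eps_max=0.05, n_eps=201):
    # stretch the current CCF in time by a factor (1 + eps) and keep the
    # eps that best matches the reference; for a homogeneous velocity
    # change, dv/v is approximately -eps at the best fit:
    best_eps, best_cc = 0.0, -np.inf
    for eps in np.linspace(-eps_max, eps_max, n_eps):
        stretched = np.interp(lags * (1.0 + eps), lags, current)
        cc = np.corrcoef(reference, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc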

Seismic velocity variation for the Auckland Volcanic Field between August 2018 and January 2019

Link to up-to-date dvv plots for the AVF.

There are a number of possible explanations for changes in surface seismic velocities. For example, on Mount Merapi, Indonesia, changes in the waveforms correlate with changes in precipitation saturating the near-surface. However, in principle magmatic changes could be detected this way. Areas of active research include how to do the calculations so that we can exclude causes of waveform changes that are not related to changes within the Earth. For example, the distribution of noise sources may vary from day to day, affecting the calculated “virtual earthquakes.”

FMTT of the AVF

This post describes the running of FMTT on the AVF.

INSTALLATION

The installation was done by following the instructions obtained from

http://rses.anu.edu.au/~nick/teletomo.html

In fact, all the necessary files are available in an all-in-one download as a gzipped and tarred Unix file linked from that page. It includes the instruction manual, a research paper on its application, and a couple of examples.

In a nutshell, all we need to do to install is run the shell script compileall after the compressed file has been unpacked. But for things to run smoothly, we did the following prior to running the shell script:

  • Edit line 22 of compileall and line 5 of source/aktimes/Makefile to use the appropriate Fortran compiler. In my case this is gfortran.
  • Still in compileall, add ./ before remodl and setbrn in lines 82 and 83 respectively, so that these can be found during compilation.
  • Edit lines 19 and 181 of the syntht.f90 source file: delete (KIND=i10) in both lines to avoid a function mismatch error.

Compilation complete

FMTT itself consists of several individual executable programs. Each one requires an input file whose name is identical to that of the executable, apart from the .in extension. In these input files we can specify and modify the parameters so that the desired inversion routine is performed. More details on each program can be found in the instruction manual.

An executable worth noting is tomo3dt, which performs the whole tomographic inversion, as opposed to running the necessary executables one after the other. Another notable mention is gmtslicet, a program used to help plot and visualise the inversion result as depth, E-W, and N-S slices. The complete plotting procedure is based on GMT (Generic Mapping Tools) scripts.

Running the Examples

Two examples are provided in the compressed file. Prior to running either example, ak135.hed and ak135.tbl need to be copied into the respective example directory. These files can be found in source/aktimes after compilation. Once the files are copied, simply execute tomo3dt from that directory. It should return the following lines:

gugi@sc-9020-hs:~/AVF/fmtt_v1.0/example1$ tomo3dt
Program fm3dt has finished successfully!
Program fm3dt has finished successfully!
Program fm3dt has finished successfully!
Program fm3dt has finished successfully!
Program fm3dt has finished successfully!
Program fm3dt has finished successfully!
Program fm3dt has finished successfully!
Program fm3dt has finished successfully!
Program fm3dt has finished successfully!
Program fm3dt has finished successfully!
Program fm3dt has finished successfully!

The number of times the line “Program fm3dt has finished successfully!” is repeated depends on the number of iterations specified in tomo3dt.in, the input file for tomo3dt. This shows that we have managed to perform a successful tomographic inversion.


Plotting the Examples

After tomo3dt has been run successfully, the results are visualised using gmtslicet and GMT. Go to the subdirectory gmtplot and execute gmtslicet. Prior to executing gmtslicet, the files gridi.vtx and gridc.vtx might need to be adjusted: in the top lines of both files, the numbers are rounded differently, which can result in an error while running gmtslicet. Equalising these values solves the error. Nick Rawlinson kindly provided us with an updated gmtslicet.f90 source file meant to tackle the rounding problem; it works on some computers, but the fix appears to be machine specific.
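A quick way to spot the mismatch is to compare the top lines of the two files directly; a small Python sketch follows (the number of header lines to compare is an assumption based on the description above):

# compare the top lines of gridi.vtx and gridc.vtx for rounding mismatches:
with open("gridi.vtx") as fi, open("gridc.vtx") as fc:
    for lineno, (a, b) in enumerate(zip(fi, fc), start=1):
        if lineno > 3:  # assumed: only the top few lines matter
            break
        if a.split() != b.split():
            print("line %d differs: %r vs %r" % (lineno, a.strip(), b.strip()))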

The execution of gmtslicet produces, among others, three bound .gmt files for the depth, E-W, and N-S slices. These are used by the plotting scripts plotd, plotew, and plotns. Prior to running these scripts, some modifications were made: #!/bin/csh was added at the top of each script file. Then, depending on how GMT is called on your computer, gmt may need to be added before each GMT command, namely gmtset, xyz2grd, grdimage, psscale, psxy, and pscoast. The output file name can also be modified.

The plotting scripts produce PostScript (.ps) files, which can be viewed directly; the depth, E-W, and N-S slices are each depicted by their respective script.

Python script to plot an epicentral distance on a map

In GEOPHYS310, we learn Python. One of the exercises is to plot a station on a map, together with a circle representing some epicentral distance to an earthquake:

import cartopy.crs as ccrs
import matplotlib.pyplot as plt

proj = ccrs.PlateCarree(central_longitude=180)

ax1 = plt.subplot(111, projection=proj)
ax1.set_extent([-50, 50, -40, 40], crs=proj)
ax1.stock_img()

ax1.tissot(rad_km=3200, lons=180, lats=10, alpha=0.3)
ax1.plot(0,10,'rv') # note the longitude is now with respect to the central_longitude, but NOT for the tissot circle! 

plt.show()

A python script to estimate the epicentral distance and origin time of an earthquake

In GEOPHYS330, we do a ray tracing exercise. Here is some code to estimate the origin time and the epicentral distance of an earthquake:

from obspy import UTCDateTime
import numpy as np
import matplotlib.pyplot as plt
from obspy import read
from obspy.taup import TauPyModel

def main():
    # estimate the epicentral distance. This is one of your free parameters:
    Delta = 35 # in degrees
    # estimate the origin time of the earthquake; your other free parameter:
    t0=UTCDateTime("2014-04-13T12:25:19")

    maxamp = readandplotseismogram("https://pal.auckland.ac.nz/files/2014/08/quake.sac_.gz")
    computeandplottts(Delta,t0,maxamp)

    # tighten up the axes, and show:
    plt.axis('tight')
    plt.show()
    #return

def readandplotseismogram(seismogram):
    '''read and plot the seismogram'''
    # read the stream
    st = read(seismogram)
    starttime = UTCDateTime("2014-04-13T12:38:00")
    st.trim(starttime,starttime+1000)
    # the trace in the stream is
    tr = st[0]
    # Make a time array and an amps array:
    t1= tr.stats.starttime.timestamp
    t2= tr.stats.endtime.timestamp
    npts = tr.stats.npts
    times = np.linspace(t1,t2,npts)
    amps = tr.data
    maxamp = max(amps)
    #Plot the seismogram against a grid
    plt.figure(figsize=(12,3))
    plt.plot(times,amps,color='blue')
    #Converting tick marks to actual times:
    locs, labels = plt.xticks()
    new_label = []
    for loc in locs:
        new_label.append(UTCDateTime(loc).strftime("%H-%M-%S"))
    # labelling and styling belong outside the loop:
    plt.xticks(locs, new_label, rotation=10)
    plt.xlabel('Time (HH-MM-SS UTC)')
    plt.ylabel('Displacement')
    plt.grid()
    return maxamp

def computeandplottts(Delta,t0,maxamp):
    # compute arrival times based on a epicentral distance (and depth, but
    # I fixed depth to a constant value):
    model = TauPyModel(model="iasp91")      
    arrivals = model.get_travel_times(distance_in_degree=Delta,source_depth_in_km=10)
    #Construct vertical lines to show arrival times of predicted seismic waves
    for arrival in arrivals:
        # convert to a timestamp so it matches the time axis of the seismogram:
        dummy = (t0 + arrival.time).timestamp
        if arrival.name in ("P", "PP", "S", "SS"):
            plt.vlines(dummy, -maxamp/2, maxamp/2)
            plt.text(dummy, maxamp/2 + 0.05*maxamp, arrival.name)
    return

# this will actually run the code if called stand-alone:
if __name__ == '__main__':
    main()

An implementation of the ray-tracing software TauP in obspy

In one of our labs for GEOPHYS330, we use the ray tracing software TauP. Below is an example of how to use TauP from within python (using obspy), our computer language of choice for the lab, course, and department:

from obspy.taup.tau import plot_ray_paths
plot_ray_paths(source_depth=100,phase_list=['P'],npoints=25)

Python script to plot and analyse sea level data from the Auckland harbour

In GEOPHYS330, we use python to download, plot and do a regression on a hundred years of sea level measurements in the Auckland harbour:

# these are the modules that contain functions we will use in the script:
import matplotlib.pyplot as plt # plotting functions
import numpy as np # numerical tools
from scipy.stats import linregress # linear regression function
import pandas as pd

# you can find and load the sea level data for Auckland here:
url="http://www.psmsl.org/data/obtaining/rlr.monthly.data/150.rlrdata"
df = pd.read_csv(url,delimiter=';',header=None) # the file has no header row

# "bad" values are defined by a large negative number, so we only
# accept positive values:
df = df[df.iloc[:,1] >= 0]

# convert the pandas dataframe to a numpy array:
npdat = df.to_numpy()
time= npdat[:,0]
height = npdat[:,1]

slope, intercept, r_value, p_value, std_err = linregress(time,height)
print("r-squared: %f", r_value**2)

# plot the sea height data:
plt.plot(time,height,'ro')
plt.plot(time,intercept+slope*time,'k')
plt.grid()
plt.xlabel('Date (year)')
plt.ylabel('Sea height (mm)')
plt.title('Sea height at Princes Wharf, Auckland (New Zealand)')
plt.legend(['measurements','best linear fit'],loc=4)
plt.axis('tight')
#plt.savefig('sealevel_auckland.png')
plt.show()

RUS

Resonant Ultrasound Spectroscopy (RUS) uses the normal modes of an oscillating body to determine its elastic parameters. It allows the complete elastic tensor to be determined from a single measurement.

It fills an experimental gap between low-frequency stress-strain measurements and high-frequency time-of-flight experiments, and deals with frequencies at the high end of those relevant to geophysical applications. It is a nondestructive method that can be used on small, rare, or hard-to-obtain samples.

Processing flow


#!/bin/sh

## this is the flow of processing steps, for two purposes: 1) we add
## data to our database for research purposes; 2) we transform the ZIP
## files into the format IRIS wants

# First step: your zip files collected in the field on date
# $servicedate are in
# /pal/fieldcamp/2011/passive_seismic/data/$servicedate/raw (and the
# Service Forms should digitized and stored in the same directory)

. /opt/antelope/5.1-64/setup.sh # run the antelope setup for the path
# to antelope functions.

# Our network code is:
NET=XN

# Our database we call:
dbname=neal_HS

servicedate=20120405

########### setup to correct directory structure ##########################
mkdir $servicedate/mseed # location for mseed files after extracting
# with unchunky (in rt2ms)
mkdir $servicedate/logs # location for *.log, *.err, and *.run files
mkdir $servicedate/day_volumes # day-long miniseed files after running
# miniseed2days. It is these files you
# FTP to IRIS.

###########################################################################
# this next section only necessary the first time you process data:
# copy the needed files to correct directory
#/bin/cp -f /pal/fieldcamp/2011/passive_seismic/codes/batchfile.pf .

# convert batch to parameter file:
#batch2par batchfile.pf > par_file_tmp.pf

# EDIT THE refstrm COLUMN to all 1s:
#sed 's/rs250spsrs/1/' par_file_tmp.pf > parfile.pf
#rm -rf par_file_tmp.pf
##########################################################################

# make a list of all zip files in the directory:
ls $servicedate/raw/*.ZIP > list.file

# convert the zip files from the reftek (rt) to miniseed (ms):
rt2ms -F list.file -p parfile.pf -o $servicedate/mseed/ -R -L

# remove the list so the next service date can be processed cleanly:
rm -rf list.file

##########################################################################
# The next section is to convert the log files to mseed LOG files:
# log2miniseed needs some parameters changed from the default, before
# running:
/bin/cp -f $ANTELOPE/data/pf/log2miniseed.pf .

# this makes sure the LOG files get into the proper day_volumes
# directory to go with the data:
sed "/wfname/ c wfname $servicedate/day_volumes/%{sta}/%{sta}.%{net}.%{loc}.%{chan}.%Y.%j" log2miniseed_tst.pf

mv -f log2miniseed_tst.pf log2miniseed.pf

# script to convert log files to miniseed, for all log files in the
# mseed directory:
for file in `ls $servicedate/mseed/*log`
do
# first, we need to establish the serial number of each file:
srnmb=`echo $file | awk -F . '{print $6}'`
# then, map serial number to station name:
case $srnmb
in
9477) log2miniseed -a -n XN -s PS01 $file;;
9261) log2miniseed -a -n XN -s PS02 $file;;
92C3) log2miniseed -a -n XN -s PS03 $file;;
984E) log2miniseed -a -n XN -s PS04 $file;;
9559) log2miniseed -a -n XN -s PS05 $file;;
9294) log2miniseed -a -n XN -s PS06 $file;;
956E) log2miniseed -a -n XN -s PS07 $file;;
9924) log2miniseed -a -n XN -s PS08 $file;;
9144) log2miniseed -a -n XN -s PS09 $file;;
9098) log2miniseed -a -n XN -s PS10 $file;;
929B) log2miniseed -a -n XN -s PS11 $file;;
esac
done
# note that PS11 changed RT130s early in the project!
##########################################################################

# move the raw log and err files out of the mseed directory:
mv $servicedate/mseed/*log $servicedate/mseed/*err $servicedate/logs/

# to build the antelope database do the following. DON'T DO THIS, if you
# are adding data to an existing database:
#dbbuild -b $dbname ./batchfile.pf >& dbbuild.out

# use "dbe $dbname" to look at the database to make sure it is sound

# link the waveforms to day_volumes for IRIS:
miniseed2days -Du -d $dbname -w "$servicedate/day_volumes/%{sta}/%{sta}.%{net}.%{loc}.%{chan}.%Y.%j" $servicedate/mseed/ >& msd2days.out

################################################################################
# the rest is just checking the integrity of the database:

# assign calibration values from the calibration table:
dbfix_calib $dbname

# verify the correlation of your data and database:
dbversdwf -tu $dbname
dbverify -tj $dbname >& dbverify.out

# create the dataless SEED volume (ONLY ONCE!). The dataless SEED volume, often
#referred to as a "dataless", contains the meta-data describing the
#station and instrumentation of your experiment. To generate the
#dataless SEED volume, run mk_dataless_seed, which builds the dataless
#from the contents of your experiment's database. You will submit this
#file along with the waveforms to PASSCAL. (should be named with the
#*current* date):

#outputfile=$NET.`date +%y`.$dbname.`date +%Y%j%H%M`.dataless

# make the dataless SEED volume:
#mk_dataless_seed -v -o $outputfile $dbname

# check the structure of the dataless SEED with
#seed2db -v $outputfile