Friday, March 28, 2008

Marching orders (again!)

How time flies!

We have only just managed to get back into the swing of things after the Antarctica field mission, and they send us right back out again.

This time JY and I are going to the French Sub-Antarctic islands of Crozet, Kerguelen, St Paul and New Amsterdam. This will not be my first journey to this part of the world. I first went there in 2006, only a few weeks after starting my job here in Strasbourg. I have great memories of that trip!

These islands are all in the southern Indian Ocean, and are accessible only by ship. We shall be traveling on the Marion Dufresne, a multi-purpose ship: part research vessel, part cruise ship, part cargo ship, part tanker, part helicopter carrier. As far as comfort and stability go, it is way better than the Astrolabe (the ship we took to Antarctica three months ago).

We set off on April 3rd from Reunion Island, then visit each of the islands in turn, starting with Crozet. We are expected back at Reunion Island on May 2nd. You can follow the ship's progress using either this web page or this kml file for Google Earth.

By popular request, I shall be field-blogging again, so stay tuned for more exciting stuff from the world of seismology...

Thursday, March 27, 2008

I've been blogged...

It seems I have been found out...

I received an email today from one of the editors at Blogged, an outfit that rates and categorizes blogs. They had found my blog and rated it, giving me an 8 out of 10... for whatever that's worth.

Here is some blurb from their "About us" page: Blogged is all about blog discovery. It's a place for readers to discover interesting blogs and for authors to discover who their readers are. [...]
Our blogs are reviewed, rated, and categorized by our editors, so you won't have to experience the frustration of filtering through blogs that are either spam, outdated, or irrelevant. You'll be able to find quality blogs that you would have unlikely found through a traditional blog search.
And here is the pretty widget they gave me to put on my blog: Sismordia -  Seismology at Concordia at Blogged

Monday, March 24, 2008

Can better physics guarantee better tomographic models?

One of the key elements in discussing an inverse problem such as seismic tomography is the quality of the forward theory. The better the forward theory, the better synthetic data can be predicted from physical model parameters, and hence the better the solution to the inverse problem, right?

Unfortunately the issue is not so simple. Trampert & Spetzler (2006) come to the dual conclusions that better physics (in the form of a finite-frequency formulation of the sensitivity kernels of seismic wave measurements) is a necessary but not sufficient condition for improving tomographic models, and that the null space (due to uneven or insufficient data coverage) is currently too large to permit the improvements in resolution that better physics could provide.

Although finite-frequency kernels are more accurate than the approximate sensitivity formulations of ray theory, models constructed with either theory are statistically similar: one cannot construct a finite-frequency model (with a given data fit and horizontal resolution) that cannot also be obtained from ray theory by changing the regularization damping of the inversion accordingly. Regularization dominates the significant aspects of tomographic models, and affects finite-frequency and ray-theory models similarly. Data error propagation is worse for finite-frequency kernels, but given the large influence of regularization, this is a minor problem.

The authors maintain that in order to increase the resolution of tomographic inversions, we have to remove the ill-posedness of the inverse problem (an ill-posed inverse problem has more degrees of freedom than can be constrained by the available data) by increasing and/or homogenizing data coverage. I agree wholeheartedly with this statement! What can be done?

(1) The current distribution of seismic stations is inhomogeneous (see figure at bottom of post showing all FDSN seismic stations), and is limited by the accessibility of suitable installation sites. We should attempt to homogenize the distribution of seismic stations by installing more instruments in currently inaccessible locations such as the sea floor (ocean-bottom seismometers) and my personal favorite, Antarctica. This solution requires a great deal of time, effort and a level of funding that is becoming more and more difficult to obtain.

(2) So far we use only a small fraction of the information in the complete seismogram (first arrival times of a few main waves, or the dispersion characteristics of surface waves). We should use more of the available information, given that modern adjoint methods have made it possible to associate a complete sensitivity kernel with each measurable wiggle in a seismogram. This solution is technically feasible given enough computing power and the development of new tools to automate the data selection and measurement processes.
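As a cartoon of what automated data selection involves, here is a minimal sketch (entirely hypothetical, and much cruder than real selection algorithms): it flags windows of a synthetic trace whose RMS amplitude stands out above the background, the kind of "measurable wiggle" one would then attach a sensitivity kernel to.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1 sps trace: background noise plus two wave packets.
n = 3600
t = np.arange(n)
seis = 0.1 * rng.normal(size=n)
for onset in (600, 2200):
    envelope = np.exp(-((t - onset - 50) / 40.0) ** 2)
    seis += envelope * np.sin(2 * np.pi * (t - onset) / 25.0)

def select_windows(x, win=100, threshold=2.0):
    """Flag windows whose RMS amplitude exceeds `threshold` times the
    RMS of the whole trace (a crude stand-in for real selection criteria)."""
    rms_all = np.sqrt(np.mean(x**2))
    windows = []
    for start in range(0, len(x) - win + 1, win):
        seg = x[start:start + win]
        if np.sqrt(np.mean(seg**2)) > threshold * rms_all:
            windows.append((start, start + win))
    return windows

picked = select_windows(seis)
print(picked)  # the flagged windows cover the two wave packets
```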


Trampert, J., Spetzler, J. (2006). Surface wave tomography: finite-frequency effects lost in the null space. Geophysical Journal International, 164(2), 394-400. DOI: 10.1111/j.1365-246X.2006.02864.x

Keep up to date with the latest developments at

Sunday, March 23, 2008

High-performance computing: a revolution in seismic tomography?

Seismic tomography - the process of making 3D images of the interior of the Earth from the information contained in seismograms - has in recent times become central to the debate about a number of controversial issues to do with the internal workings of the Earth, from the depths reached by subducted material, to the origin of mantle plumes, to the structure of the innermost inner core.

Many of these controversies stem in no small part from the lack of similarity between different tomographic images at small spatial wavelengths. It is evident to most tomographers that more needs to be done to improve the resolution of tomographic studies. Boschi et al. (2007) set out to discuss the role that may be played by high-performance computing in such an improvement.

The paper starts out with a useful and brief discussion of the factors that currently limit tomographic solutions. These are: (i) the geographic coverage of the seismic observations being input into the tomographic inversions; (ii) the resolving power of the parameterization used for the inversions themselves; (iii) the accuracy of the theoretical formulations, i.e. the equations that relate seismic observations to the Earth parameters being inverted for.

In recent years, there has been a lot of progress on the theoretical side (point iii), and we have learned more about the sensitivity of seismic waves to heterogeneities in Earth structure. Although differences do exist between tomographic models made using the newer theories (finite-frequency models) and those made using the older approximate theories (ray-theory models), these differences do not seem to be as important as those caused by data coverage and parameterization.

Parameterization density (point ii) defines the size of the inverse problem to be solved, with denser parameterizations being required for better resolving power (provided the data can indeed constrain the greater number of degrees of freedom implied by all these parameters). Increases in parameterization density lead to the need for larger computers (or a greater number of nodes in parallel computing machines). These needs are by and large being met by advances in both desktop PC technologies and high-performance computing facilities.
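To put some purely illustrative numbers on this (my own, not from the paper), here is the parameter count for a global model divided into equal-angle blocks at increasing density; the normal-equations matrix in a classical inversion scales as the square of this count, which is what drives the demand for bigger machines.

```python
# Rough count of free parameters for a global model parameterized in
# equal-angle blocks (illustrative numbers only).
def block_count(block_deg, n_layers):
    lat_blocks = 180 // block_deg
    lon_blocks = 360 // block_deg
    return lat_blocks * lon_blocks * n_layers

for block in (5, 2, 1):
    print(f"{block} deg blocks, 20 layers: {block_count(block, 20):,} parameters")
```

Going from 5-degree to 1-degree blocks multiplies the number of unknowns by 25, and the size of the normal-equations matrix by 625.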

The authors infer that the main factor limiting tomographic resolution is data coverage (point i), which is very inhomogeneous due to the limited geographic distribution of earthquakes (they are concentrated mainly at the boundaries of tectonic plates) and of stations (located mostly on land, while two-thirds of the Earth's surface lies under water). They surmise that in the absence of uniform station coverage, the main challenge for tomographers is to establish appropriate parameterization and regularization criteria that damp the instabilities in the inversions caused by lack of data, without obscuring valuable information.

The rest of the paper describes two increasingly popular techniques (statistical information criteria and adjoint methods), and estimates the computational cost of applying them to a global tomographic study. The conclusion reached is that high-performance computing is indeed required in order to implement these techniques on a global scale.

The question I ask in the title of this post - whether the implementation of high-performance computing tomographic techniques will bring about a revolution in our ability to image the interior of the Earth - is not really answered in this paper. It is probably too early to say. I find, however, that the authors have skimmed over the data-coverage problem too quickly.

Although it is true that the geographical distribution of earthquakes is fixed and that the distribution of seismic stations is unlikely to change dramatically over the next decade, I believe more can be done with the currently available data. Most tomographic inversions use only a small fraction of the information contained in the seismograms.

A greater use of full waveform data, combined with the accurate calculation of their sensitivity, is likely to fill in many of the regions that are currently under-sampled and hence poorly resolved in tomographic models. As data-coverage seems to be the most severe limiting factor to the resolution of seismic tomography, might not relatively minor increases in the exploitation of the information contained in each seismogram lead to significant improvements to tomographic images?

Below is one of the figures from a paper I am working on, which describes a new strategy for selecting data windows on seismograms in a way that is appropriate for the latest generation of tomographic techniques. One of the considerations that went into the development of this strategy was to enable the use of as much of the information contained within the seismogram as possible. I'll divulge more about how this all works once the paper has been submitted.


Boschi, L., Ampuero, J., Peter, D., Mai, P., Soldati, G., Giardini, D. (2007). Petascale computing and resolution in global seismic tomography. Physics of the Earth and Planetary Interiors, 163(1-4), 245-250. DOI: 10.1016/j.pepi.2007.02.011


Friday, March 21, 2008

Antarctic campaign blog digest

A blog post a day keeps the doctor away...

... or rather: a blog post a day adds up to a lot of pages. I have just finished compiling a pdf version of my posts from the Antarctica field trip into an 88-page book. The text is almost identical to that here on the blog (only minor tweaks were necessary to fit the format), but the image quality is much improved.

You can download the pdf file (all 13Mb of it) here: Sismordia-book.pdf.


Monday, March 10, 2008

Launching the Concordia Seismology website

One of the things that has kept me from blogging much lately has been writing and setting up a static website about Concordia seismology, giving access to public domain information about the permanent station CCD and the CASE-IPY experiment, including data snapshots.

The website is unimaginatively called Concordia Seismology, and can be found here:

It has dedicated CCD pages, dedicated CASE-IPY pages, and a list of conference (and at some point journal) publications concerning seismology at Concordia.

The Concordia Seismology website is not intended to be a static copy of this blog, rather a place to distribute technical information about the permanent and temporary stations at Concordia. It will act as the official online source of such information.


Saturday, March 8, 2008

Antarctica photo album

Apologies to my regular readers for the recent lack of posts. I have been catching up on work after my two-month absence.

I have only just got round to organizing photos from the Concordia field trip. You can find a selection of photos in this Picasa album, also accessible through the image below. The pictures in this album were mostly taken by JY and me, though a few were taken by other Antarctic adventurers (you know who you are).

Antarctica 2007-2008


Monday, March 3, 2008

CASE-IPY : daily data update

The images and pdf files available from this post are updated regularly.

We are very fortunate in being able to receive daily data updates from our prototype CASE-IPY stations. The following pdf files contain daily snapshots of the 1sps data (Z=vertical component, N=North-South component, E=East-West component):

CAS01.Z , CAS01.N , CAS01.E
CAS02.Z , CAS02.N , CAS02.E
CAS03.Z , CAS03.N , CAS03.E

The following image shows the vertical component seismograms of the latest available data for the three stations. Click on the image for a larger version.

Here is a quick description of the steps taken to process the data into these snapshots:

  • three component analogue data is produced by a seismometer;
  • the data is digitized by a Reftek-130 acquisition system and stored locally on flash cards;
  • the Reftek turns on a radio modem once a day for 10 minutes;
  • a PC at Concordia monitors the radio link to each station continuously, and launches the retrieval process for the previous day's data when the link is active;
  • Jean-François Vanacker, who is wintering over at Concordia, checks the data have arrived correctly, compresses them and sends them to us via email once a day;
  • my colleague JJL unpacks the data, views them, processes them into a more usable format (miniseed), places the raw and processed data in a central archive and updates the SOH (state-of-health) plot available from this post;
  • I process the miniseed data to generate the daily snapshots, and update the pdf files available above.
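The last step of the chain might look something like the following sketch. This is a hypothetical stand-in using plain numpy, not our actual code (which works on miniseed files with dedicated seismological tools): take one day of single-component data, remove the DC offset, and normalize for plotting.

```python
import numpy as np

def daily_snapshot(trace, sps=1):
    """Prepare one day of single-component data for a snapshot plot:
    remove the mean, then scale to unit peak amplitude (a hypothetical
    stand-in for the actual processing chain)."""
    samples_per_day = 86400 * sps
    day = np.asarray(trace[:samples_per_day], dtype=float)
    day -= day.mean()                      # remove DC offset
    peak = np.max(np.abs(day))
    if peak > 0:
        day /= peak                        # normalize for plotting
    return day

# Fake day of 1 sps digitizer counts with a DC offset
rng = np.random.default_rng(2)
raw = 1200.0 + 5.0 * rng.normal(size=86400)
snap = daily_snapshot(raw)
print(snap.shape)
```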
