Sunday, March 23, 2008

High-performance computing: a revolution in seismic tomography?

Seismic tomography - the process of making 3D images of the interior of the Earth from the information contained in seismograms - has in recent times become central to the debate about a number of controversial issues to do with the internal workings of the Earth, from the depths reached by subducted material, to the origin of mantle plumes, to the structure of the innermost inner core.

Many of these controversies stem in no small part from the lack of similarity between different tomographic images at small spatial wavelengths. It is evident to most tomographers that more needs to be done to improve the resolution of tomographic studies. Boschi et al. (2007) set out to discuss the role that may be played by high-performance computing in such an improvement.

The paper starts with a brief but useful discussion of the factors that currently limit tomographic resolution. These are: (i) the geographic coverage of the seismic observations being input into the tomographic inversions; (ii) the resolving power of the parameterization used for the inversions themselves; (iii) the accuracy of the theoretical formulations, i.e. the equations that relate seismic observations to the Earth parameters being inverted for.
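As a reminder of where each of these factors enters, the tomographic inverse problem is usually written in the generic linearized form below (a standard textbook sketch, not an equation reproduced from Boschi et al.):

    % Generic linearized tomographic inverse problem (standard textbook form,
    % not reproduced from Boschi et al. 2007).
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % d : vector of N seismic observations (travel-time or waveform anomalies)
    % m : vector of M model parameters (e.g. velocity perturbations in blocks)
    % G : N x M matrix of sensitivities predicted by the chosen theory
    \[
      d_i \;=\; \sum_{j=1}^{M} G_{ij}\, m_j \;+\; e_i ,
      \qquad \text{or compactly} \qquad
      \mathbf{d} = \mathbf{G}\,\mathbf{m} + \mathbf{e}.
    \]
    % Factor (i), data coverage, controls which rows of G and d exist at all;
    % factor (ii), the parameterization, fixes the number of columns M;
    % factor (iii), the theory, determines how accurately the entries G_ij
    % (ray-theoretical or finite-frequency kernels) are computed.
    \end{document}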

In recent years, there has been a lot of progress on the theoretical side (point iii), and we have learned more about the sensitivity of seismic waves to heterogeneities in Earth structure. Although differences do exist between tomographic models made using the newer theories (finite-frequency models) and those made using the older approximate theories (ray-theory models), these differences do not seem to be as important as those caused by data coverage and parameterization.

Parameterization density (point ii) defines the size of the inverse problem to be solved, with denser parameterizations being required for better resolving power (provided the data can actually constrain the greater number of degrees of freedom these parameters imply). Increases in parameterization density lead to the need for larger computers (or a greater number of nodes on parallel machines). These needs are by and large being met by advances in both desktop PC technology and high-performance computing facilities.
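To get a feel for how quickly the problem size grows with parameterization density, here is a back-of-envelope calculation; the block sizes, data counts and storage scheme are illustrative assumptions of mine, not numbers from the paper:

    # Back-of-envelope size of a global block parameterization
    # (illustrative numbers only, not taken from Boschi et al. 2007).
    import math

    EARTH_RADIUS_KM = 6371.0
    MANTLE_DEPTH_KM = 2890.0

    def n_parameters(lateral_spacing_km, radial_spacing_km):
        """Approximate number of equal-size blocks covering the whole mantle."""
        n_layers = math.ceil(MANTLE_DEPTH_KM / radial_spacing_km)
        surface_area = 4.0 * math.pi * EARTH_RADIUS_KM**2
        blocks_per_layer = math.ceil(surface_area / lateral_spacing_km**2)
        return n_layers * blocks_per_layer

    for spacing in (500.0, 200.0, 100.0):
        m = n_parameters(spacing, spacing)
        # Dense storage of an N x M sensitivity matrix with, say, N = 1e6 data
        # in double precision: 8 bytes per entry.
        dense_gb = 1e6 * m * 8 / 1e9
        print(f"{spacing:6.0f} km blocks -> ~{m:>9,d} parameters, "
              f"dense G matrix ~{dense_gb:,.0f} GB")

In practice the sensitivity matrix is sparse and never stored densely, but the trend is the point: finer parameterizations rapidly outgrow a desktop machine.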

The authors infer that the main factor limiting tomographic resolution is data coverage (point i), which is very inhomogeneous owing to the limited geographic distribution of earthquakes (concentrated mainly at the boundaries of tectonic plates) and of stations (located mostly on land, while two-thirds of the Earth's surface lies under water). They surmise that, in the absence of uniform station coverage, the main challenge for tomographers is to establish appropriate parameterization/regularization criteria that damp instabilities in the inversions caused by lack of data without obscuring valuable information.
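To illustrate what such damping does in practice, here is a minimal numerical sketch using generic zeroth-order Tikhonov regularization, rather than whatever specific criteria the authors have in mind:

    # Toy damped least-squares inversion: a generic illustration of how damping
    # trades data fit against model size; not the specific regularization
    # criteria discussed by Boschi et al.
    import numpy as np

    rng = np.random.default_rng(0)

    # Under-determined toy problem: 20 "data" constraining 50 "model parameters",
    # mimicking a tomographic inversion with sparse coverage.
    n_data, n_model = 20, 50
    G = rng.normal(size=(n_data, n_model))           # toy sensitivity matrix
    m_true = np.zeros(n_model)
    m_true[10] = 1.0                                 # one localized anomaly
    d = G @ m_true + 0.01 * rng.normal(size=n_data)  # noisy synthetic data

    def damped_solution(G, d, damping):
        """Minimize ||G m - d||^2 + damping^2 ||m||^2 (zeroth-order Tikhonov)."""
        lhs = G.T @ G + damping**2 * np.eye(G.shape[1])
        return np.linalg.solve(lhs, G.T @ d)

    # Stronger damping shrinks (stabilizes) the model at the cost of data fit:
    for damping in (0.01, 0.1, 1.0, 10.0):
        m_est = damped_solution(G, d, damping)
        misfit = np.linalg.norm(G @ m_est - d)
        print(f"damping={damping:6.2f}  |m|={np.linalg.norm(m_est):7.3f}  "
              f"|Gm-d|={misfit:7.3f}")

The tomographer's job is to pick a damping level somewhere along this trade-off curve that suppresses artefacts from poorly sampled regions without smoothing away real structure.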

The rest of the paper describes two increasingly popular techniques (statistical information criteria and adjoint methods), and estimates the computational cost of applying them to a global tomographic study. The conclusion reached is that high-performance computing is indeed required to implement these techniques on a global scale.
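For a sense of the scale involved, a back-of-envelope estimate of the cost of an adjoint-style inversion looks something like the following; the event count, iteration count and per-simulation cost are illustrative guesses of mine, not figures from the paper:

    # Rough cost scaling for adjoint tomography (purely illustrative assumptions,
    # not numbers taken from Boschi et al. 2007).
    # Each iteration of an adjoint inversion needs roughly two wave-propagation
    # simulations per earthquake (one forward, one adjoint).

    n_events = 200           # assumed number of earthquakes in the data set
    n_iterations = 20        # assumed number of non-linear iterations
    cpu_hours_per_sim = 500  # assumed cost of one global 3-D simulation at the
                             # periods of interest (depends strongly on the mesh)

    total_sims = 2 * n_events * n_iterations
    total_cpu_hours = total_sims * cpu_hours_per_sim

    print(f"simulations needed : {total_sims:,}")
    print(f"CPU-hours (approx) : {total_cpu_hours:,.0f}")
    print(f"= {total_cpu_hours / 24:,.0f} days on a single core, "
          f"or ~{total_cpu_hours / (1000 * 24):,.1f} days on 1000 cores")

Even with these fairly modest assumptions, the job is only tractable on a large parallel machine, which is essentially the authors' argument.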

The question I ask in the title of this post - whether the implementation of high-performance computing tomographic techniques will bring about a revolution in our ability to image the interior of the Earth - is not really answered in this paper. It is probably too early to say. I find, however, that the authors have skimmed over the data-coverage problem too quickly.

Although it is true that the geographical distribution of earthquakes is fixed and that the distribution of seismic stations is unlikely to change dramatically over the next decade, I believe more can be done with the currently available data. Most tomographic inversions use only a small fraction of the information contained in the seismograms.

A greater use of full waveform data, combined with accurate calculation of their sensitivity, is likely to fill in many of the regions that are currently under-sampled and hence poorly resolved in tomographic models. As data coverage seems to be the factor that most severely limits the resolution of seismic tomography, might not relatively minor increases in the exploitation of the information contained in each seismogram lead to significant improvements in tomographic images?

Following is one of the figures of a paper I am working on, which describes a new strategy for selecting data windows on seismograms in a way that is appropriate for the latest generation of tomographic techniques. One of the considerations that went into the development of this strategy was to enable the use of as much of the information contained within the seismogram as possible. I'll divulge more about how this all works once the paper has been submitted.
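I won't pre-empt the strategy itself here, but purely as a generic illustration of the kind of criteria that window-selection schemes commonly apply (signal-to-noise, similarity between observed and synthetic waveforms), here is a toy sketch; the function name and thresholds are hypothetical and this is not the strategy described in the paper:

    # Generic toy window-selection criteria (hypothetical helper name and
    # thresholds; NOT the strategy described in the paper mentioned above).
    import numpy as np

    def acceptable_window(obs, syn, noise_rms,
                          min_snr=3.0, min_cc=0.7, max_dlna=1.0):
        """Decide whether a candidate window on a seismogram is usable.

        obs, syn  : observed and synthetic waveform samples inside the window
        noise_rms : RMS of the pre-event noise on the observed trace
        """
        # 1. Signal-to-noise ratio of the observed signal in the window.
        snr = np.sqrt(np.mean(obs**2)) / noise_rms
        # 2. Zero-lag normalized cross-correlation between data and synthetics.
        cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
        # 3. Log amplitude ratio between observed and synthetic RMS.
        dlna = np.log(np.sqrt(np.mean(obs**2)) / np.sqrt(np.mean(syn**2)))
        return snr >= min_snr and cc >= min_cc and abs(dlna) <= max_dlna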



References

Boschi, L., Ampuero, J., Peter, D., Mai, P., Soldati, G., & Giardini, D. (2007). Petascale computing and resolution in global seismic tomography. Physics of the Earth and Planetary Interiors, 163(1-4), 245-250. DOI: 10.1016/j.pepi.2007.02.011


-----
Keep up to date with the latest developments at http://sismordia.blogspot.com
