Monday, March 24, 2008

Can better physics guarantee better tomographic models?

One of the key elements in discussing an inverse problem such as seismic tomography is the quality of the forward theory. The better the forward theory, the more accurately synthetic data can be predicted from physical model parameters, and hence the better the solution to the inverse problem, right?
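To make the setup concrete, here is a minimal numerical sketch (my own illustration, not taken from the paper) of a linearized tomographic problem d = Gm, where G is the sensitivity matrix encoding the forward theory, m the model parameters and d the data. All sizes and matrices are made-up stand-ins.

```python
import numpy as np

# Toy linearized tomographic problem d = G m (all values are made up):
# G maps model parameters m (e.g. slowness perturbations in cells) to
# predicted data d (e.g. travel-time residuals).
rng = np.random.default_rng(0)
n_data, n_model = 50, 200                        # fewer data than unknowns
G = rng.normal(size=(n_data, n_model))           # stand-in sensitivity matrix
m_true = rng.normal(size=n_model)
d_obs = G @ m_true + 0.05 * rng.normal(size=n_data)   # noisy "observations"

# Damped least-squares solution: m_est = (G^T G + eps^2 I)^-1 G^T d_obs
eps = 1.0
m_est = np.linalg.solve(G.T @ G + eps**2 * np.eye(n_model), G.T @ d_obs)
print("data misfit:", np.linalg.norm(G @ m_est - d_obs))
```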

Unfortunately the issue is not so simple. Trampert & Spetzler (2006) come to two conclusions: better physics (in the form of a finite-frequency formulation of the sensitivity kernels of seismic wave measurements) is a necessary but not sufficient condition for improving tomographic models, and the null space (due to uneven or insufficient data coverage) is currently too large to permit the gains in resolution that better physics could provide.

Although finite-frequency kernels are more accurate than the approximate sensitivity formulations of ray theory, models constructed from either theory are statistically similar: one cannot construct a finite-frequency model (with a given data fit and horizontal resolution) that cannot also be obtained from ray theory by adjusting the regularization damping of the inversion. Regularization dominates the significant aspects of tomographic models, and it affects finite-frequency and ray-theory models in much the same way. Data errors propagate more strongly through finite-frequency kernels, but given the large influence of regularization this is a minor problem.
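The "damping knob" referred to here can be illustrated with a small sweep over the regularization parameter of a damped least-squares inversion. The matrices below are random stand-ins rather than real ray-theory or finite-frequency kernels, so this only shows the trade-off mechanism, not the paper's result.

```python
import numpy as np

# Sweep the damping (regularization) parameter of a damped least-squares
# inversion to show the trade-off it controls between data fit and model
# size; G and d are random stand-ins, not real sensitivity kernels.
rng = np.random.default_rng(1)
G = rng.normal(size=(60, 300))
d = rng.normal(size=60)

for eps in (0.1, 1.0, 10.0):
    m = np.linalg.solve(G.T @ G + eps**2 * np.eye(G.shape[1]), G.T @ d)
    misfit = np.linalg.norm(G @ m - d)
    model_norm = np.linalg.norm(m)
    print(f"damping={eps:5.1f}  misfit={misfit:7.3f}  model norm={model_norm:7.3f}")
```

Stronger damping pulls the model norm down at the cost of data fit; weaker damping does the opposite. It is this single knob that, according to the authors, can make ray-theory and finite-frequency models statistically indistinguishable.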

The authors maintain that in order to increase the resolution of tomographic inversions, we have to remove the ill-posedness of the inverse problem (an ill-posed inverse problem has more degrees of freedom than the available data can constrain) by increasing and/or homogenizing data coverage. I agree wholeheartedly with this statement! What can be done?
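The null space itself is easy to exhibit with a singular value decomposition: in an underdetermined problem, any model built from the trailing right singular vectors of G predicts exactly zero data and so can never be recovered, whatever the physics of the kernels. A hypothetical toy example (again with a random stand-in for G):

```python
import numpy as np

# With fewer data than model parameters, G has a large null space: any model
# component built from the trailing right singular vectors produces no change
# in the predicted data, so the data cannot constrain it.
rng = np.random.default_rng(2)
G = rng.normal(size=(50, 200))           # 50 data, 200 model parameters

U, s, Vt = np.linalg.svd(G, full_matrices=True)
rank = np.sum(s > 1e-10)
print("rank:", rank, " null-space dimension:", G.shape[1] - rank)   # 50, 150

m_null = Vt[rank:].T @ rng.normal(size=G.shape[1] - rank)  # any null-space model
print("data predicted by null-space model:", np.linalg.norm(G @ m_null))  # ~0
```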

(1) The current distribution of seismic stations is inhomogeneous (see the figure at the bottom of the post showing all FDSN seismic stations), limited as it is by the accessibility of suitable installation sites. We should try to homogenize the station distribution by installing more instruments in currently under-sampled locations such as the sea floor (ocean-bottom seismometers) and, my personal favourite, Antarctica. This solution requires a great deal of time, effort and a level of funding that is becoming harder and harder to obtain.

(2) So far we use only a small fraction of the information in the complete seismogram (first-arrival times of a few main phases, or the dispersion characteristics of surface waves). We should exploit more of the seismogram, given that modern adjoint methods make it possible to associate a complete sensitivity kernel with each measurable wiggle. This solution is technically feasible given enough computing power and new tools to automate the data selection and measurement processes; a toy version of one such measurement is sketched below.
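As a sketch of one such measurement, the snippet below picks the cross-correlation time shift between an "observed" and a "synthetic" wiggle, which is the kind of scalar observable an adjoint calculation can attach a full sensitivity kernel to. The Gaussian pulses and the 0.8 s delay are invented for illustration.

```python
import numpy as np

# Measure the delay of an observed wiggle relative to a synthetic one via
# cross-correlation; pulses and the 0.8 s shift are made up for illustration.
dt = 0.01
t = np.arange(0, 20, dt)
pulse = lambda t0: np.exp(-((t - t0) / 0.5) ** 2)
synthetic = pulse(10.0)
observed = pulse(10.8)                      # observed wave arrives 0.8 s later

xcorr = np.correlate(observed, synthetic, mode="full")
lag = (np.argmax(xcorr) - (len(t) - 1)) * dt
print(f"measured delay: {lag:.2f} s")       # ~0.80 s
```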



References

Trampert, J., Spetzler, J. (2006). Surface wave tomography: finite-frequency effects lost in the null space. Geophysical Journal International, 164(2), 394-400. DOI: 10.1111/j.1365-246X.2006.02864.x

-----
Keep up to date with the latest developments at http://sismordia.blogspot.com
