SCEC/USGS Workshop: California 3-D Seismic Velocity Models

Conveners: Cliff Thurber, Egill Hauksson, Peter Shearer, and Felix Waldhauser

Place: Hilton Hotel, Palm Springs, at the SCEC 2006 annual meeting

Time/Date: 9:00 am to 4:00 pm; Sunday, 10 September 2006

The goal of the workshop was to bring together interested scientists to develop a plan for the construction of California-wide 3-D seismic velocity models (Ca3D). These models will be used for a variety of purposes, such as improving earthquake locations, calculating 3-D finite-fault wave-propagation effects, modeling source mechanisms, and interpreting tectonic structures. Other valuable applications include providing a 3-D structural framework to facilitate statewide hazard estimates and supporting research using data from the USArray stations deployed in California.

About 65 scientists registered for the workshop; we estimate that on the order of 50 attended. The format of the workshop was a series of 5-minute, 3-slide presentations followed by open discussion, on the following general themes:

·  The “Vision” for a 3-D statewide model

·  Model construction, representation, and use

·  Current state of the art and utilization

·  Discussion and action items

Summary of the 26 presentations:

C. Thurber briefly reviewed the UW-Madison group's work on a northern California 3-D model as an example of the elements required to assemble a state-wide model, including a high-quality dataset of earthquake arrival times and extensive active-source data. The former is readily available for northern California, but significant effort will be required to assemble a relatively complete southern California active-source dataset. Quarries identified from satellite and other imagery (e.g., Google Earth®) can provide additional controlled-source data, albeit without origin-time information. Analysis methods for constructing a state-wide model are generally in place, including double-difference, spherical-earth, and adaptive-mesh tomography codes (although a code incorporating all three of these does not yet exist) and practical model resolution and covariance matrix calculation techniques.

R. Clayton discussed a column grid approach for representing a 3-D model - a database of latitude, longitude, depth, physical properties, and "characterization" (e.g., lithology, sediments, crust, mantle, etc.). Properties at any point are determined from interpolation among the columns. Such an approach is flexible (multiple interpolation methods and length scales are possible) and extensible (columns and properties are easily added), and allows for a simple calculation of model variance. An example map of column types and distribution for Baja California was shown.
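
What follows is a minimal sketch of what such a column database and interpolation might look like; the field names, the inverse-distance weighting over the nearest columns, and the example values are illustrative assumptions, not the implementation described in the talk.

```python
# Illustrative sketch of a column-grid model: each column stores a profile of
# properties versus depth at a (lat, lon) point, and queries interpolate among
# nearby columns. All names and values are assumptions for illustration.
import numpy as np

class Column:
    def __init__(self, lat, lon, depths_km, vp_km_s, characterization):
        self.lat = lat
        self.lon = lon
        self.depths = np.asarray(depths_km)
        self.vp = np.asarray(vp_km_s)
        self.characterization = characterization  # e.g., "sediments", "crust", "mantle"

    def vp_at(self, depth_km):
        # Linear interpolation down the column.
        return np.interp(depth_km, self.depths, self.vp)

def query_vp(columns, lat, lon, depth_km, n_nearest=3):
    """Inverse-distance-weighted average over the n nearest columns (one of
    several interpolation choices the column approach allows)."""
    d = np.array([np.hypot(c.lat - lat, c.lon - lon) for c in columns])
    idx = np.argsort(d)[:n_nearest]
    w = 1.0 / (d[idx] + 1e-6)
    vals = np.array([columns[i].vp_at(depth_km) for i in idx])
    return float(np.sum(w * vals) / np.sum(w))

# Example usage with two made-up columns:
cols = [
    Column(33.0, -116.0, [0, 5, 30], [4.0, 6.0, 7.8], "crust"),
    Column(33.5, -116.5, [0, 3, 30], [2.5, 5.5, 7.8], "sediments/crust"),
]
print(query_vp(cols, 33.2, -116.2, depth_km=10.0))
```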

D. Okaya presented an overview of the Earthworks "On Demand Synthetic Seismograms" system using the "workflow technology" approach. The user selects a geographic region and chooses an event (including mechanism) and one of several Earth models, and the system uses available computer resources and codes to generate synthetic seismograms and any additional desired derived products (e.g., PGA). A demonstration version is available now.

A. Plesch described the SCEC Community Fault Model (CFM) and its derived product, the Community Block Model (CBM). The CFM has 171 faults, and the CBM has 39 blocks and includes topography, basement interface, and the Moho. Substantial effort is going into ensuring consistency between the CBM and the Community Velocity Model (CVM).

P. Davis argued for inclusion of the mantle in the Ca3D effort. He made the point that the mantle may provide significant geodynamic driving forces that influence crustal deformation. Receiver functions to delineate the Moho and upper mantle features, SKS splitting studies to characterize mantle flow patterns, and surface- and body-wave tomography can all contribute.

P. Seuss presented an overview of the Harvard v. 4.0 Community Velocity Model (CVM). The model has Vp, Vs, and density for the consolidated sediments, based mainly on industry data, atop the 3D tomographic model of Hauksson for the basement. Boundaries are defined by triangulated surfaces. The model can be queried simply by providing tabulated lat-lon-depth values.

R. Clayton described v 4.0 of the SCEC 3D CVM. From top to bottom, it has a geotechnical layer (with limited coverage), basins with formula-based properties, a background tomographic crustal model (Hauksson's model updated with one iteration including the basins), a Moho (interpolated from Zhu and Kanamori's work, smoothed), and a tomographic upper mantle model (Kohler's).

T. Brocher described the USGS 3D velocity models for the SF Bay region and northern California as a whole. They are based on the "3D geologic map" of Jachens and coworkers. Empirical velocity-depth relations for the lithologies in the geologic model were used to construct the velocity model, including both Vp and Vs. In the process, a new regional Vp-Vs empirical relationship was developed. The model has been put to use for simulations of strong motion from the 1906 great San Francisco earthquake.
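
A hedged sketch of the two-step mapping described above follows: assign Vp to a lithologic unit from an empirical velocity-depth rule, then derive Vs from an empirical Vp-Vs relation. The velocity-depth rule and its coefficients below are made-up placeholders; the Vs(Vp) polynomial is the widely cited Brocher (2005) regression fit, not necessarily the new relationship developed for this model.

```python
# Sketch: empirical velocity-depth rule for one lithologic unit, then an
# empirical Vp-to-Vs conversion. Coefficients of vp_from_depth are placeholders.
import numpy as np

def vp_from_depth(depth_km, v0_km_s=2.5, gradient=0.6, vmax=6.5):
    """Placeholder linear velocity-depth rule, capped at vmax (km/s)."""
    return np.minimum(v0_km_s + gradient * depth_km, vmax)

def vs_from_vp(vp):
    """Brocher (2005) 'regression fit' Vs(Vp), roughly valid for Vp of 1.5-8 km/s."""
    return (0.7858 - 1.2344 * vp + 0.7949 * vp**2
            - 0.1238 * vp**3 + 0.0064 * vp**4)

depths = np.array([0.5, 2.0, 5.0, 10.0])
vp = vp_from_depth(depths)
print(np.c_[depths, vp, vs_from_vp(vp)])  # depth, Vp, Vs columns
```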

H. Zhang presented double-difference tomography models at several scales for central and northern California. A relatively fine-scale model (mainly 1 to 5 km horizontal grid spacing) has been developed for the region around Parkfield, extending from San Simeon to Coalinga. The 2004 Parkfield earthquake rupture zone appears to be associated with a high-velocity body on the northeast side of the San Andreas fault. An intermediate-scale model (mainly 5 to 10 km horizontal grid spacing) has been developed for the San Francisco Bay region. The model clearly images a number of bedrock highs and sedimentary basins, and shows strong velocity contrasts across portions of several of the major faults. A large-scale model (mainly 10 to 20 km horizontal grid spacing) has been developed for most of Northern California. In addition to imaging the down-going Gorda slab beneath the Mendocino region, a distinct high-velocity body is found beneath the northern part of the Great Valley that is interpreted to be ophiolitic in nature.

G. Lin reported on the use of "composite events" and quarry blasts to improve the tomography model for Southern California. Using an on-line waveform dataset for 450,000 earthquakes covering 1981 to 2005, waveform correlation calculations have been expanded to 76 million event pairs. The composite event approach makes the inversion of the absolute picks more efficient, the use of quarries identified in remote-sensing imagery provides absolute calibration control, and the massive set of relative arrival times provides excellent constraints on relative locations.
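
The basic measurement behind these waveform correlation calculations is the differential arrival time between two events recorded at a common station. A minimal sketch is given below; the normalization, sign convention, and synthetic demo are illustrative assumptions.

```python
# Sketch: differential time between two traces via cross-correlation (numpy only).
import numpy as np

def differential_time(trace_a, trace_b, dt):
    """Return the lag (s) at which the normalized cross-correlation peaks
    (negative means trace_b is delayed relative to trace_a) and the
    correlation coefficient at that lag."""
    a = trace_a - trace_a.mean()
    b = trace_b - trace_b.mean()
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    cc = np.correlate(a, b, mode="full")
    k = int(np.argmax(cc))
    lag_samples = k - (len(b) - 1)
    return lag_samples * dt, float(cc[k])

# Synthetic demo: an identical pulse delayed by 0.05 s, sampled at 100 Hz.
dt = 0.01
t = np.arange(0, 2, dt)
pulse = np.exp(-((t - 1.0) / 0.05) ** 2)
delayed = np.exp(-((t - 1.05) / 0.05) ** 2)
print(differential_time(pulse, delayed, dt))  # ~(-0.05 s, ~1.0)
```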

J. Hardebeck presented a new preliminary Vp and Vp/Vs model for the central coast region of California. The model covers the area from about 34.5° to 36° N and 120° to 121.75° W, along the coast from Lompoc to San Simeon. A key to the development of this model was access to catalog picks from the PG&E array of 3-component stations to augment the available CISN data.

G. Fuis recommended the extensive use of borehole and refraction data (existing and new) to provide critical constraints on the structure of the uppermost crust. He used a comparison of models along LARSE Line 1 (refraction model, earthquake tomography model, SCEC model) to illustrate the improvement possible with active-source and in-situ data. He also suggested the use of gravity data in joint inversions to help constrain the shallow crust where refraction data do not exist.

J. Murphy focused on the critical need for more S-wave data. In addition to recommending the use of empirical velocity relations, she presented an example of S picks obtained from vertical-component records of LARSE shots. Clearly, "mining" existing refraction data for S waves could be quite fruitful.

R. Clayton presented new receiver function results for Southern California using the expanded dataset available from TriNet and a stacking algorithm. The new results show evidence for abrupt changes in Moho depth, from receiver function differences at nearby stations and from azimuthal variations at a given station. The typical Southern California station spacing of 40-50 km still greatly exceeds the 10-20 km footprint of the receiver function of a single station, so the observations remain severely aliased in most places.

C. Tape and Q. Liu described the use of spectral element forward modeling and adjoint method inverse modeling to improve earthquake source mechanisms and identify deficiencies in the Southern California 3D model. The Yorba Linda earthquake was used as an example showing the improvement in forward modeling a 3D model provides compared to a 1D model. The fit is of course not perfect, requiring time shifts for arrivals as well as amplitude and complexity changes. Back-projection of time-delayed, residual-weighted waveforms "images" deficiencies in the model. The adjoint approach is an efficient method for a formal inversion. A simple checkerboard test was used to illustrate the procedure.

P. Chen presented an alternative scattering-integral approach for full waveform inversion for 3D structure and source properties. They isolate the waveform segment to be modeled by windowing the complete finite-difference synthetic seismogram, obtaining the so-called isolation filter. This isolation filter is then cross-correlated with both the observed and the complete synthetic seismograms. They window the cross-correlograms and then narrow-band filter them at many frequencies. The phase and amplitude differences between the narrow-band filtered cross-correlograms give the frequency-dependent phase-delay and amplitude measurements. The frequency-dependent kernels for the inversion are constructed by convolving the forward wavefields generated by the earthquake source with the Green tensors for impulsive point sources located at the receivers.
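
A highly simplified numerical sketch of that measurement sequence is shown below: window the synthetic into an isolation filter, cross-correlate it with the observed and with the full synthetic records, narrow-band filter the correlograms, and read off a delay. The Gaussian window and filter, the peak-offset delay estimate, and all parameter values are illustrative assumptions, not the actual procedure.

```python
# Sketch of frequency-dependent delay measurement via windowed cross-correlation.
import numpy as np

def narrow_band(x, dt, f0, bw=0.1):
    """Gaussian band-pass of x around center frequency f0 (Hz)."""
    f = np.fft.rfftfreq(len(x), dt)
    g = np.exp(-0.5 * ((f - f0) / bw) ** 2)
    return np.fft.irfft(np.fft.rfft(x) * g, n=len(x))

def delay_at_frequency(obs, syn, window, dt, f0):
    """Crude delay (s) of obs relative to syn near f0, from the peak offset of
    the narrow-band filtered cross-correlograms."""
    iso = syn * window                            # "isolation filter"
    c_obs = np.correlate(obs, iso, mode="full")
    c_syn = np.correlate(syn, iso, mode="full")
    c_obs = narrow_band(c_obs, dt, f0)
    c_syn = narrow_band(c_syn, dt, f0)
    return (np.argmax(c_obs) - np.argmax(c_syn)) * dt

# Demo: a 1 Hz wavelet delayed by 0.1 s in the "observed" record.
dt = 0.01
t = np.arange(0, 20, dt)
syn = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.5 * ((t - 10.0) / 1.0) ** 2)
obs = np.sin(2 * np.pi * 1.0 * (t - 0.1)) * np.exp(-0.5 * ((t - 10.1) / 1.0) ** 2)
win = np.exp(-0.5 * ((t - 10.0) / 1.5) ** 2)
print(delay_at_frequency(obs, syn, win, dt, f0=1.0))  # ~0.1 s expected
```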

M. Ritzwoller summarized previous work applying ambient noise tomography to Southern California, discussed current efforts to refine and extend the method and apply it to new regions, and highlighted areas of future work. Ambient noise tomography using Rayleigh wave energy is complementary to traditional surface wave data because of its significantly higher resolution at shorter periods (< 20 seconds), providing constraints on the crust. Efforts are ongoing to extend the method to larger and smaller scales, to phase-velocity measurements, and to Love waves. Joint inversions with other data types are envisioned.
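
The core operation of the method is sketched below: cross-correlate simultaneous noise records from a station pair and stack over many segments, so that the stacked correlation approximates the inter-station surface-wave Green's function, whose dispersion can then be measured. The segment length, one-bit normalization, and demo are assumptions for illustration, not the group's actual processing.

```python
# Sketch: stacked noise cross-correlation for one station pair.
import numpy as np

def noise_correlation(rec_a, rec_b, dt, seg_len_s):
    n = int(seg_len_s / dt)
    nseg = min(len(rec_a), len(rec_b)) // n
    stack = np.zeros(2 * n - 1)
    for i in range(nseg):
        a = np.sign(rec_a[i * n:(i + 1) * n])   # one-bit normalization to
        b = np.sign(rec_b[i * n:(i + 1) * n])   # suppress earthquake signals
        stack += np.correlate(a, b, mode="full")
    lags = (np.arange(2 * n - 1) - (n - 1)) * dt
    return lags, stack / max(nseg, 1)

# Demo: station B records the same noise delayed by 0.5 s; the stacked
# correlation then peaks at a lag of -0.5 s with this sign convention.
rng = np.random.default_rng(0)
rec_a = rng.standard_normal(20000)
rec_b = np.roll(rec_a, 50)                      # 0.5 s delay at dt = 0.01 s
lags, ccf = noise_correlation(rec_a, rec_b, dt=0.01, seg_len_s=20.0)
print(lags[np.argmax(ccf)])                     # ~ -0.5
```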

R. Catchings discussed the importance of refraction data for calibrating and constraining 3D velocity models. Surface geology and structure cannot reliably be extrapolated to depth, nor borehole data laterally, without potential problems. Earthquake tomography typically does not resolve shallow structure well. He showed examples of disagreement between shallow seismic refraction models and the geology-based 3D Bay Area model. Results from dozens of shallow reflection/refraction profiles in California are available.

V. Langenheim pointed out that potential-field data (gravity and magnetics) cover the entire state, including offshore, whereas active-source profiles are not uniformly distributed across the state, and passive-source tomography cannot image the entire crust from top to bottom because earthquakes do not occur everywhere. Potential-field data thus offer an opportunity to extrapolate geologic structure throughout the state and help guide development of a statewide 3D seismic velocity model. Isostatic gravity anomalies are particularly good for mapping the location and geometry of faults, detecting dense, high-velocity bodies, and defining the geometry of Cenozoic basins. Magnetic anomalies are also good for mapping the location and geometry of faults, and for detecting ophiolite bodies.

C. Thurber reviewed some of the data types and advanced techniques that may prove vital to constructing a reliable statewide 3D velocity model. Echoing the comments of G. Fuis and R. Catchings, he emphasized the value of controlled-source and borehole data for constraining velocities and interface positions, and the need for joint velocity-interface inversions. Teleseismic body-wave tomography will be important for extending the earthquake tomography model to greater depth. Joint receiver function and surface wave inversions and ambient noise tomography inversions can provide critical constraints on shear velocity structure and interface depths. Joint seismic-gravity inversions show promise for overcoming coverage issues with seismic data, and constrained inversions using other geophysical observables or an a priori geology-based 3D model need to be explored.

F. Waldhauser showed how a real-time high-precision relocation system, using differential times and double-difference location, can be run in parallel with the present NCSN real-time location system. The effectiveness of the double-difference approach is quite clear, as an example from Parkfield illustrates. Some practical current and future issues include model parameterization and implementation, 3D ray tracing or finite difference travel times versus grid search (lookup tables for travel times and partial derivatives), how to handle dynamic earthquake catalogs or velocity models, and how to test new velocity models in an efficient (and possibly automated) fashion.
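
One of the implementation options mentioned above is precomputed travel-time lookup tables combined with a grid search. A hedged sketch follows, shown here for simple absolute location rather than the full double-difference system; the grid geometry, the constant velocity used to fill the tables, and all names are assumptions.

```python
# Sketch: travel-time lookup tables on a coarse grid plus grid-search location.
import numpy as np

# Trial hypocenter grid (km, local Cartesian frame) and station coordinates.
xg, yg, zg = np.meshgrid(np.arange(0, 50, 2.0),
                         np.arange(0, 50, 2.0),
                         np.arange(0, 20, 2.0), indexing="ij")
stations = np.array([[5.0, 5.0, 0.0], [45.0, 10.0, 0.0], [25.0, 45.0, 0.0]])
vp = 6.0  # constant km/s, used only to fill the illustrative tables

# One lookup table per station: travel time from every grid node to the station.
tables = [np.sqrt((xg - s[0])**2 + (yg - s[1])**2 + (zg - s[2])**2) / vp
          for s in stations]

def locate(picks_s):
    """Grid-search hypocenter and origin time from absolute P picks (s)."""
    best = None
    for idx in np.ndindex(xg.shape):
        tt = np.array([tab[idx] for tab in tables])
        t0 = np.mean(picks_s - tt)               # least-squares origin time
        rms = np.sqrt(np.mean((picks_s - (t0 + tt)) ** 2))
        if best is None or rms < best[0]:
            best = (rms, (xg[idx], yg[idx], zg[idx]), t0)
    return best

# Demo: picks from a "true" source at (20, 20, 8) km with origin time 0.
true = np.array([20.0, 20.0, 8.0])
picks = np.linalg.norm(stations - true, axis=1) / vp
print(locate(picks))  # rms ~0, location ~(20, 20, 8), t0 ~0
```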

P. Shearer (on behalf of Y. Fialko) presented two examples of the use of InSAR data to investigate strain accumulation in southern and central California, including the effect of material heterogeneity. Using 35 interferograms of the southern San Andreas around the Salton Sea spanning 1992-2000, Fialko derived line-of-sight velocities from the stacked InSAR data. In addition to expected changes across the major faults, non-uniform lateral strain gradients are visible that may reflect variations in elastic properties. Similarly, Schmalzle and coworkers analyzed InSAR data for the Carrizo Plain segment of the San Andreas, finding different strain gradients NE versus SW of the fault that they associate with a likely seismic velocity contrast between the two sides of the fault.

B. Aagaard summarized the research efforts on broadband simulation of the 1906 great earthquake and other actual or potential earthquakes. The objective is to generate synthetic seismograms that accurately capture travel times, reflections, refractions, and amplification. Desirable features in seismic velocity models to be used for the simulations include a unified structural representation, a standard interface for querying the models, and fast, efficient queries. He then summarized desirable properties and characteristics of a unified structural representation and of the model query interface; for the former, these include consistency between the velocity model and the fault surfaces, the incorporation of topography, and the integration of Vs30 values.
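
A minimal sketch of what such a standard query interface might look like is given below; the method name, units, and property set are assumptions rather than an agreed specification.

```python
# Sketch: a common query interface that each candidate velocity model implements.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class MaterialProperties:
    vp_km_s: float
    vs_km_s: float
    density_g_cc: float

class VelocityModel(Protocol):
    def query(self, lat: float, lon: float, depth_km: float) -> MaterialProperties:
        """Return material properties at a point; must be fast for bulk queries."""
        ...

class HalfspaceModel:
    """Trivial example implementation, used only to exercise the interface."""
    def query(self, lat, lon, depth_km):
        return MaterialProperties(vp_km_s=6.0, vs_km_s=3.5, density_g_cc=2.7)

model: VelocityModel = HalfspaceModel()
print(model.query(34.0, -118.0, 5.0))
```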

E. Humphreys presented an updated teleseismic tomography model for southern California. Features in the new model are suggestive of the presence of small-scale convection and delamination beneath several areas - the Transverse Ranges, Southern Sierras, and possibly NW Sonora. Short-term improvements include utilizing the finite frequency approach, incorporating 3D ray tracing, and including constraints from receiver functions. Longer-term plans include iterative, multi-method inversion.

R. Clayton briefly presented some results from the NARS-Baja/RESBAN Array in the Baja California area. The array spans the Gulf of California. Tomography images show sharp vertically oriented low-velocity features associated with the rifting in the Gulf of California.