GGOS Science Committee Meeting (SC-7)

Report Working Group on Ground Networks and Communications

December 12, 2006

San Francisco, CA

The Working Group has accumulated participants as topics have been addressed through its meetings and telecons. All of the participants have been considered members of the Working Group. They include: Zuheir Altamimi, David Arnold, Yoaz Bar Sever, Norman Beck, Dirk Behrend, Wolfgang Bosch, Rene Ferland, Rene Forsberg, Richard Gross, Werner Gurtner, Steve Kenyon, Frank Lemoine, Linling Li, Dan MacMillan, Chopo Ma, Zinovy Malkin, Jan McGarry, Angelyn Moore, Ruth Neilan, Carey Noll, Mike Pearlman, Erricos Pavlis, John Ries, Markus Rothacher, David Rowlands, David Rubincam, David Stowers, Frank Webb, Pascal Willis.

The Working Group is currently dealing with network design and satellite retroreflector array activities to support the evolution of the reference frame and other requirements. The activities are:

·  Scope (design) a network layout to define and maintain the terrestrial reference system at the level of 0.1 mm/year (or better) to support global change activities such as the study of sea level change;

·  Specify the utility and feasibility of the placement of SLR reflectors on the GPS satellites as recommended by GGOS; develop a retroreflector standard for the GNSS satellites and present array options to meet the standard.

The group is developing an SLR/VLBI simulation capability to determine network distributions that achieve 0.1 mm/yr reference frame stability. This capability is being used to scope the size and layout of a future multi-technique network, maximizing the strength of co-location to define a robust terrestrial reference frame that is stable at the level of 0.1 mm per year. The activity will support the development of the capability and the examination of single-technique and then multi-technique options. The simulation capability will include SLR, VLBI, and GNSS integrated observatories.

1. Network Scoping and Design

SLR Simulation

Progress in simulated SLR data analysis

The SLR activity uses the “geocenter” as a proxy for the study of optimal network design. At the heart of the network design analysis is a realistic representation of (1) the actual SLR data error characteristics, (2) the temporal and geographic distribution of the SLR data acquisition, and (3) the LAGEOS-1/2 dynamical and observation modeling errors. To address the first, we place limits on the magnitude of the data errors based on actual results, particularly for the core network that provides the vast majority of the quality data. The second component is based on the observed data production from the current network, with our best assumptions for extrapolation of future performance, including new sites not represented in the current network configuration. The third component is the most challenging. We have only the overall SLR residual statistics to guide us as to the level of modeling error coming from the gravity field (static, tidal, seasonal, secular, etc.), surface forces, and station displacements (tides, tidal loading, and atmospheric loading). In most cases, the best we may be able to do is to attempt to place upper bounds.

In an initial analysis to start to place such bounds on the likely modeling errors, a simulation was performed using as the network the ‘core’ set of 25 stations that dominated the data acquired during the four-year period 2000 to 2004. This included a few stations that have since been closed (Haleakala, Arequipa), but which could be expected to eventually return to operation. Thus this reference set can be considered a representation of the network ‘status quo’, i.e., its future configuration assuming no significant changes. The most important aspect of the simulation is likely to be a faithful representation of the geographic distribution of the data, in the long term and as a function of the season. As Fig. 1-1 below demonstrates, there is a serious imbalance in the available SLR tracking between the Northern and Southern hemispheres, as well as between the Eastern (loosely defined as 0-180 East longitude) and Western hemispheres. It is particularly striking how the East-West imbalance increased further even as the total number of stations steadily grew.
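The kind of bookkeeping behind Fig. 1-1 can be illustrated with a short Python sketch that tallies normal points by hemisphere; the station list and pass counts below are placeholders for illustration only, not actual network statistics.

    # Hypothetical tally of SLR normal points by hemisphere (placeholder values).
    from collections import Counter

    # (latitude_deg, east_longitude_deg_0_360, normal_points) per station -- invented counts
    stations = {
        "Yarragadee": (-29.0, 115.3, 12000),
        "Greenbelt":  ( 39.0, 283.2,  6000),
        "Zimmerwald": ( 46.9,   7.5,  7000),
        "Arequipa":   (-16.5, 288.5,  3000),
    }

    tally = Counter()
    for lat, lon, npts in stations.values():
        tally["North" if lat >= 0.0 else "South"] += npts
        # "Eastern" hemisphere loosely defined as 0-180 degrees East longitude
        tally["East" if 0.0 <= lon < 180.0 else "West"] += npts

    total = sum(npts for _, _, npts in stations.values())
    for hemi in ("North", "South", "East", "West"):
        print(f"{hemi:5s}: {100.0 * tally[hemi] / total:5.1f}% of normal points")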

Figure 1-1. Distribution of SLR tracking to LAGEOS-1 as a function of geography and season over the interval of 1993-2005.

With this station and temporal data distribution as a nominal starting point, the next stage was to refine the error modeling for the dynamical and measurement models. After some experimentation, a set of error sources was constructed that provided SLR residuals with the ‘look and feel’ of the actual residuals. In this initial simulation, the proxy for the reference frame performance was the recovery of the seasonal variation in the geocenter. This is a relatively easy quantity to compare to the actual geocenter recovery, as illustrated in Fig. 1-2. These early tests suggest that the simulation gives reasonable results. However, it is important to note that, while these error sources gave realistic residuals as well as realistic geocenter recovery, it cannot be concluded that the error power is correctly distributed across the various error sources or that all important error sources have been considered.
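As a loose illustration of how such a set of error sources can be assembled, the hypothetical Python sketch below combines white measurement noise, a per-station range bias, and an unmodeled seasonal term into simulated residuals; the variance levels are invented for illustration and are not the values used in this study.

    # Hypothetical mix of error sources tuned to give residuals with the
    # 'look and feel' of real ones; all variance levels are illustrative.
    import numpy as np

    n = 5000                                             # simulated normal points
    t = np.random.uniform(0.0, 4.0, size=n)              # observation epochs in years
    white = np.random.normal(scale=7.0, size=n)          # mm, measurement noise
    station_bias = np.random.normal(scale=3.0)           # mm, a single per-station range bias
    seasonal_model_err = 2.0 * np.sin(2.0 * np.pi * t)   # mm, unmodeled seasonal signal
    residuals = white + station_bias + seasonal_model_err
    print(f"simulated RMS of fit: {residuals.std():.1f} mm")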

In a follow-up simulation, the level of modeling error was increased a little more in order to avoid overly optimistic results (a common problem with simulations). In particular, the ocean loading and pole tide errors were significantly increased. When these new error models were used to repeat the previous analyses, the results were a little worse, as expected. However, when several new SLR sites were introduced into the simulation, expecting to see an improvement in the geocenter recovery, the results instead indicated degradation. It is noted that most of these new sites were island or near-coastal sites (Guam, Kerguelen, Easter Island and Concepcion), and the contributions of Hawaii and Tahiti were also artificially increased in the test. Consequently, it is likely that the level of modeling error (particularly the ocean loading modeling error) was increased beyond a reasonable level and that the simulation is no longer realistic. As we refine the fidelity of the simulation, we may learn as much about the nature of the SLR technique as from running the simulation itself.

Figure 1-2. Simulated and actual geocenter estimates from LAGEOS-1/2 (X-component).

We will continue to refine the error models, which is the most difficult and important component of this effort. When we have converged on a set of modeling errors that appears to give reasonable and reliable results, we can extend the simulation to include new sites and/or station quality/quantity assumptions. This will allow us to investigate the impact of various network configurations on the SLR-only reference frame. With similar progress in the VLBI simulation capability, it will then be possible to examine combinations of the two techniques, especially regarding the effect of the number and distribution of the survey ties.

Real data analyses to investigate dependence of the TRF origin on network geometry

With the emphasis of the work placed on the “geocenter” as a proxy for optimal network design, we have looked at the limitations of the current SLR network. The entire real data set of weekly arcs from 1993 to early 2006 was used. We designed 18 test solutions that were all based on subsets of the full data set. We split the data into two, three, and four subsets in two ways: either into subsets contiguous in time (i.e., first half vs. second half, first 1/3 vs. second 1/3 vs. third 1/3, etc.) or by sampling the entire data set with the appropriate frequency (e.g., every other week, generating “even” and “odd” subsets that each contain about half the data but span the entire 13+ years, etc.). The concept is illustrated by the cartoon in Fig. 1-3.
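The two subsetting strategies can be expressed compactly in code; the Python sketch below is only illustrative, with the weekly-arc list standing in for the actual weekly solutions.

    # Two ways of splitting the 13+-year set of weekly arcs into subsets
    # (any remainder weeks are simply dropped in this sketch).

    def contiguous_subsets(weeks, n_parts):
        """Split the weekly arcs into n_parts consecutive blocks."""
        size = len(weeks) // n_parts
        return [weeks[i * size:(i + 1) * size] for i in range(n_parts)]

    def interleaved_subsets(weeks, n_parts):
        """Sample every n_parts-th week, e.g. n_parts=2 gives "odd" and "even" weeks."""
        return [weeks[offset::n_parts] for offset in range(n_parts)]

    weeks = list(range(1, 683))                      # ~13 years of weekly arcs (placeholder indices)
    halves_in_time = contiguous_subsets(weeks, 2)    # first half vs. second half
    odd_even_weeks = interleaved_subsets(weeks, 2)   # both subsets span the full 13+ years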

It is sufficient to look at the last column of Table 1-1, which shows the 3D offset and associated uncertainty for each case vs. the standard 13+-year solution. The top group in Table 1-1 contains results from solutions that span the same 13+-year interval but have different sampling rates, while the bottom group contains results that span contiguous time intervals that are fractions of the total 13+ years. The top-group results give us an idea of how the amount of data over a given time interval affects the quality of the origin definition. Within each sub-group, the additional breakdown gives some idea of how similar amounts of data spanning the same total time span can reproduce the origin, given that the geometry between these groups will differ to a certain extent due to network problems, weather, satellite phasing, etc. In general the quality is similar to within one standard deviation for these cases. The bottom-group results very much mirror the evolution of the network. The numbers in the last column show very clearly that in the early years, a similar amount of data could produce a definition of the origin that was almost an order of magnitude different from what we obtain from the later years (see the ¼ case results, 84 mm vs. 8 mm!). The associated error estimates reflect that as well, with errors at least twice as large between these cases.

Figure 1-3. Schematic of solution strategy for geocenter robustness investigations.

Once these solutions were completed, we first compared the origin definition of each subset solution with that obtained from the analysis of the full data set, after a Helmert transformation between the two TRFs. We used the 3D difference in the origin and its associated error as figures of merit to rank the 18 solutions. The results are summarized in Table 1-1.
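The figure of merit can be illustrated as follows; this Python sketch assumes the translation components and their formal errors come from a Helmert fit of a subset TRF to the standard TRF, and it uses a simple uncorrelated error propagation with placeholder numbers.

    # 3D origin offset of a subset TRF relative to the standard TRF, from the
    # translation part of a Helmert transformation (placeholder numbers, in mm).
    import math

    def origin_offset_3d(t, sigma):
        """t = (Tx, Ty, Tz) translations; sigma = their formal errors."""
        d = math.sqrt(sum(ti * ti for ti in t))
        # first-order propagation assuming uncorrelated component errors
        s = math.sqrt(sum((ti * si) ** 2 for ti, si in zip(t, sigma))) / d if d > 0 else max(sigma)
        return d, s

    offset, err = origin_offset_3d((4.1, -2.7, 6.5), (1.2, 1.1, 2.0))
    print(f"3D origin offset: {offset:.1f} +/- {err:.1f} mm")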

In addition to the definition of the origin at epoch, we also looked at the stability of the secular, annual, and seasonal variations of the origin, each obtained from the same subset solutions discussed above. We give an example of the variations in the Z-component, by far the most variable of the three. Figure 1-4 shows the evolution of the origin with respect to an origin defined from the complete data set (standard solution). The secular trend, fitted along with higher-frequency terms, indicates a rate of about 1.73 mm/yr. In Fig. 1-5, we show the same variations obtained from two subset solutions spanning the same time interval but each using every other week of data (i.e., about half the data, with very similar networks in the two cases).
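The fit quoted above is a standard least-squares estimate of a trend plus annual and semiannual terms; the Python sketch below shows the idea on a synthetic weekly Z-geocenter series, not the actual LAGEOS-derived values.

    # Least-squares fit of a trend plus annual and semiannual terms to a
    # synthetic weekly Z-geocenter series (values in mm, invented for illustration).
    import numpy as np

    t = np.arange(0.0, 13.0, 7.0 / 365.25)        # time in years, weekly sampling
    z = 1.73 * t + 3.0 * np.sin(2.0 * np.pi * t + 0.4) \
        + np.random.normal(scale=5.0, size=t.size)

    # design matrix: offset, trend, annual and semiannual sine/cosine terms
    w1, w2 = 2.0 * np.pi, 4.0 * np.pi
    A = np.column_stack([np.ones_like(t), t,
                         np.sin(w1 * t), np.cos(w1 * t),
                         np.sin(w2 * t), np.cos(w2 * t)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    print(f"estimated secular rate: {coeffs[1]:.2f} mm/yr")
    print(f"annual amplitude: {np.hypot(coeffs[2], coeffs[3]):.2f} mm")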

Table 1-1. Offsets between each subset solution and the standard analysis TRF for 1993-2006.

Figure 1-4. Z-component of the geocenter variations from all LAGEOS 1 & 2 data (1993-2006).

Figure 1-5. Z-component of the geocenter variations from LAGEOS 1 & 2 (1993-2006) using every other week of data in each case (top: “odd” weeks, bottom: “even” weeks).

The general conclusions that one can draw from these tests are:

o  On average, each component estimated from 13+ years of data is determined no better than 6-8 mm.

o  The 1993-present SLR data set is significantly non-uniform, due to network variations, variable system performance (accuracy and operations), and the predominantly N-S unbalanced station distribution.

o  There is a steady improvement in accuracy over the years; however, the 3D offset of the TRF origin can differ by as much as an order of magnitude between the early and the recent years.

o  A conservative estimate for the accuracy of the TRF origin defined from the full 13-year data set is 15 mm.

o  The estimated secular trends from subset solutions that span similar time periods agree within ~7-10%, i.e., at the 1-2 sigma level.

o  Secular trends estimated from subset solutions that span different time periods suffer from the changes in the network and can differ by up to 100% or even have opposite signs; more than ~10 years of data are needed for robust results.

o  Seasonal variations show phase changes due to network variations, although their magnitudes seem stable.

An immediate corollary of the results obtained from these analyses is that, in order to make robust inferences about the optimal design of future networks, we need to work with data sets that span at minimum one decade. We have thus simulated SLR data from LAGEOS 1 & 2 for the same period used in the real data analyses, and we are now using those data to validate the inferences made so far from the real data analyses. These simulated data will later be combined with data from additional “future” stations in selected locations (preferably where other space techniques will also place future sites or already have a presence) to study the size and distribution of the optimal future space geodetic network. Figure 1-6 shows a proposed first attempt, compatible with the future VLBI network and linked with that of GNSS.

Figure 1-6. Locations of GPS and VLBI sites where future SLR sites could be added (if not already available) or upgraded to the quality expected for the future NASA systems.

VLBI Simulation

Progress in VLBI Simulations and Network Design

The procedures developed for performing the VLBI simulations include the following steps:

1) Specify network antenna locations, antenna sensitivities, slew rates, SNR requirements, and other observing mode parameters;

2) Generate an observation schedule for a 24-hour VLBI experiment with the SKED program;

3) Generate a simulation database file from the schedule that can be run with the SOLVE analysis program;

4) Perform Monte Carlo simulations by generating simulated observations and making repeated SOLVE runs with different simulation input;

5) Determine the precision of estimated parameters (e.g., station positions, baseline length, or EOP) by computing the WRMS of the estimated parameter of interest over the series of SOLVE runs (a simple sketch of this step follows below).
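As an illustration of step 5, the Python sketch below computes the weighted RMS of a parameter estimate (e.g., a baseline length) over a series of Monte Carlo runs; the estimates and formal errors are placeholder arrays, not SOLVE output.

    # Weighted RMS of a parameter estimate over a series of Monte Carlo runs
    # (weights = 1/sigma^2); the inputs here are placeholders, not SOLVE output.
    import numpy as np

    def wrms(estimates, formal_errors):
        """WRMS scatter about the weighted mean."""
        x = np.asarray(estimates)
        w = 1.0 / np.asarray(formal_errors) ** 2
        wmean = np.sum(w * x) / np.sum(w)
        return np.sqrt(np.sum(w * (x - wmean) ** 2) / np.sum(w))

    # e.g. 25 Monte Carlo estimates of one baseline length, in mm about a nominal value
    estimates = np.random.normal(loc=0.0, scale=4.0, size=25)
    formal_errors = np.full(25, 3.5)
    print(f"baseline length WRMS: {wrms(estimates, formal_errors):.2f} mm")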

In the next step, we investigated the validity of the simulation procedure by running simulated observations through SOLVE using actual observed experiment schedules. To do this we used the schedules from the recent CONT05 campaign of September 2005, consisting of 15 consecutive 24-hour experiment sessions. Since the dominant VLBI errors are tropospheric and instrumental (clock-like) errors, the simulated input observations were generated as random walk processes with typical expected atmosphere and clock variances. SOLVE runs using the simulated data produced baseline length WRMS precision that was in reasonably good agreement (within 10-15% for nearly all baselines) with the observed precision, especially given the simplifying assumption that the stations all used different realizations of the same delay error model.
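For illustration, the Python sketch below generates random-walk delay contributions of the kind described above, one troposphere-like and one clock-like, for a single 24-hour session; the variance rates are round placeholder values, not those used in the CONT05 simulations.

    # Random-walk troposphere-like and clock-like delay contributions for one
    # 24-hour session; the variance rates are round placeholder values.
    import numpy as np

    def random_walk(n_epochs, dt_sec, rate_psd):
        """Random walk whose increments have variance rate_psd * dt_sec (units^2)."""
        steps = np.random.normal(scale=np.sqrt(rate_psd * dt_sec), size=n_epochs)
        return np.cumsum(steps)

    n, dt = 24 * 60, 60.0                        # one epoch per minute for 24 hours
    atm_delay_ps = random_walk(n, dt, 0.5)       # troposphere-like process (picoseconds)
    clk_delay_ps = random_walk(n, dt, 0.1)       # clock-like process (picoseconds)
    sim_obs_ps = atm_delay_ps + clk_delay_ps + np.random.normal(scale=10.0, size=n)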