The history of moored measurements for climate studies in the equatorial Pacific dates back to the 1970s, when surface current meter moorings were first deployed along the equator as part of the EPOCS program [Halpern, 1987b] and the NORPAX Hawaii-Tahiti Shuttle [Wyrtki et al., 1981; Knox and Halpern, 1982]. Based in part on these early successes, original plans in TOGA called for a small number of moorings to be deployed near the equator and in gaps between widely spaced XBT lines [U.S. TOGA Office, 1988].
However, the 1982-1983 El Niño highlighted the inadequacy of existing ocean observational networks for climate studies, in part because of the lack of high-quality real-time data by which to monitor evolving conditions in the tropical Pacific Ocean. This realization spurred attempts to develop telemetry systems for deep ocean moorings at NOAA's Pacific Marine Environmental Laboratory. The most notable development in this regard was the Autonomous Temperature Line Acquisition System (ATLAS) mooring [Milburn and McLain, 1986; Hayes et al., 1991a], which incorporated many proven design concepts from taut-line current meter moorings used in earlier equatorial ocean studies [Halpern, 1996]. However, significant cost savings were achieved by eliminating current meters from the suite of instrumentation and by targeting temperature rather than velocity as the primary oceanographic variable. Elimination of current meters, whose moving parts (rotors, vanes, or propellers) were sensitive to mechanical wear and biofouling in the energetic and biologically productive upper layers of the equatorial Pacific, also extended the expected lifetime of the mooring from 6 months to 12 months. Equally significant, the ATLAS mooring was designed to telemeter air temperature, SST, and subsurface temperature data to shore in real time via Service Argos. In 1986, real-time winds were added to the ATLAS system, adapting earlier design concepts developed for real-time wind measurements from current meter moorings [Halpern et al., 1984]. Relative humidity sensors were added to ATLAS moorings in 1989.
ATLAS sampling and data transmission schemes have evolved with time. The current generation ATLAS telemeters all data as daily averages and, in addition, as hourly values for SST and surface meteorology coincident with three to four satellite overpasses per day. Data are also internally recorded and available upon recovery of the mooring system. A recent assessment of instrumental accuracies indicates errors of about 0.03°C for SST, 0.2°C for air temperature, < 0.1°C for subsurface temperature, 0.2 m s-1 for wind speed, and 4% for relative humidity [Mangum et al., 1994; Freitag et al., 1995]. The estimate of wind speed error (unlike the other estimates) does not take into account possible calibration drift for instruments deployed at sea for up to one year. An assessment of this drift is presently underway, and preliminary results suggest that including it may lead to an overall accuracy of about 0.5 m s-1 for wind speed.
The early technical successes of the ATLAS mooring program and the recognized value of the data for short-term climate studies led to multinational plans for a basin-scale expansion of the array during the second half of TOGA [National Research Council, 1990]. This expansion was feasible because the relatively low cost of the ATLAS mooring allowed for its deployment in large numbers and because the 1-year ATLAS design lifetime made for manageable long-term maintenance costs and ship time requirements. The array, dubbed TOGA-TAO [Hayes et al., 1991a], far exceeded in scope what had been originally anticipated as a moored buoy component of the TOGA observing system [U.S. TOGA Office, 1988].
Coordinated with the early ATLAS mooring program, but separate from it, was a parallel effort to develop a long-term array of current meter moorings for TOGA studies in the Pacific [World Climate Research Program, 1990a]. These moorings were concentrated on the equator where direct measurements would be most valuable in view of the limited applicability of the geostrophic approximation. Siting was based in part on historical precedent (i.e., where long records already existed) and the need to sample different hydrodynamic regimes (e.g., cold tongue, warm pool). It became apparent, however, as the ATLAS program expanded, that the current meter mooring and ATLAS mooring programs should be integrated more fully for a variety of technical, logistic, and scientific reasons. Thus, during the second half of TOGA, TAO was configured to include both ATLAS and current meter moorings in a single unified mooring program [McPhaden, 1993a]. Details of current meter mooring design, sampling characteristics, and instrumental accuracies can be found in work by Halpern [1987a, c], McPhaden et al. [1990b], McCarty and McPhaden, Lien et al., Freitag et al., Plimpton et al., and Weisberg and Hayes.
Design criteria for the TAO array were based on general circulation model simulations of wind-forced oceanic variability and on empirical studies of space-time correlation scales as described in section 2. The array was built up over time and maintained through a series of research cruises at roughly 6-month intervals. These cruises were necessary to deploy new mooring systems and recover old mooring systems that were close to or past their design lifetimes. The array was completed at the very end of TOGA in December 1994, with deployment of an ATLAS mooring at 8°N, 156°E [McPhaden, 1995].
A subset of the real-time TAO data stream, formatted by Service Argos into World Meteorological Organization (WMO) code, is retransmitted on the GTS. The data are then available to operational meteorological and oceanographic centers around the world. The availability of TAO data on the GTS increased significantly in November 1992 after long-standing problems with the Argos-GTS link were finally resolved; data throughput increased at that time from 10-30% to 80-90% [McPhaden, 1993b].
The rapid growth of the TAO array during the second half of TOGA has led to improvements in the quality of several important operational climate analysis and prediction products. For example, at present approximately half the wind observations used in FSU monthly Pacific wind analyses in the band 10°N-10°S are from TAO buoys (Figure B1). TAO data are also included in the weekly NCEP SST analysis (see section C1). The Comprehensive Ocean Atmosphere Data Set (COADS) [Woodruff et al., 1987], a global compilation of marine observations since 1854, also incorporates TAO data. As of this writing, only those TAO data available from the GTS have been included in COADS. A COADS reanalysis of data collected after 1980, including a more comprehensive set of delayed-mode TAO data, is planned for the near future (S. Worley, personal communication, 1996).
Figure B1: GTS ship (small solid dot) and buoy (open circles) wind reports used in the Florida State University Pacific surface wind analysis for the month of December 1994. In the latitude band 10°N-10°S a total of 6970 observations were reported; 3809 of these reports (55%) were from TAO buoys (data courtesy of D. Legler, 1997).
Development of the TAO array required an extraordinary effort from individuals and institutions in several countries, at the core of which was sustained support provided by the United States, Japan, France, Taiwan, and Korea. As one measure of effort, accumulated over the 10 years between 1985 and 1994, more than 400 TAO moorings were deployed on 83 research cruises involving 17 ships from six different countries, requiring a total of 5.7 years of ship time. At present, nearly 1 year of dedicated ship time per calendar year is required to maintain the fully implemented array of nearly 70 moorings. Overall, data return has been > 80%, with many sites providing over 90% data return. Regions of data return < 80% are found in the far eastern and western Pacific, where vandalism by fishing fleets has been an ongoing problem [e.g., Koehn et al., 1995, 1996]. Scientific use of TAO data has been encouraged by the development of sophisticated data management, display, and dissemination capabilities. These include the TAO workstation and access to the data via anonymous file transfer protocols and the World Wide Web [Soreide et al., 1996].
In the early 1970s the Argos Doppler ranging system became operational on National Oceanic and Atmospheric Administration (NOAA) polar orbiting weather satellites, and a cost-effective technique of listening to and locating radio transmitters on the global ocean surface was made available to oceanographers. This spawned the design and construction of a large number of ocean surface drifters, both for measuring ocean circulation and for use as platforms for deploying a variety of meteorological sensors. Throughout the 1970s, typical drifter configurations consisted of a 100-200 kg aluminum surface float and a World War II surplus parachute or 2- × 6-m rectangular window-shade drogue attached with rope and chain to a depth of 10-30 m. Over 150 of these drifters were released into the Gulf Stream system [Richardson, 1983], and the largest array deployment was in the Antarctic Circumpolar Current, where about 180 drifters were operational during the intensive observation phase of the Global Atmospheric Research Program (GARP) First Global GARP Experiment (FGGE) in 1979-1980 [Hofmann, 1985]. Small arrays of FGGE drifters with drogues were also deployed in the tropical Pacific as part of the EPOCS program between 1979 and 1987, with the main purpose of understanding eastern tropical circulation.
During the planning phase of TOGA it became clear that accurate global fields of SST, atmospheric pressure and ocean basin-wide patterns of surface circulation were required [World Climate Research Program, 1985] (see also Table 1). A potentially valuable tool was the Argos-tracked drifter, but several serious questions arose regarding the feasibility of designing an affordable instrument that could be deployed in global arrays. The drifters used during FGGE were too heavy to be routinely deployed from merchant ships or by air; they were very costly to build and did not retain their drogues longer than several months. No mechanical design improvements had been made to them since 1975. There were no engineering standards or field-verified hydrodynamic models by which to design a Lagrangian drifter in order that its water-following capability could be determined to the accuracy required by TOGA. Finally, the tariffs charged by Service Argos, the firm which had the exclusive right to decode Argos location data, would severely limit the extent of a global, long-term deployment. To meet TOGA objectives, a two-pronged program of drifter deployments was developed, as described below.
In 1982 a group of oceanographers and engineers met at the National Center for Atmospheric Research to consider the challenges presented by the WCRP requirements for global ocean and atmosphere monitoring and to determine how a variety of newly designed ocean Lagrangian tools could be used to meet these needs. It was decided that a low-cost, lightweight surface drifter should be developed. Funding for the new drifter development came first from the Office of Naval Research and then from NOAA and the National Science Foundation.
By 1985, competing drifter designs had emerged from the Draper Laboratory, NOAA/Atlantic Oceanographic and Meteorological Laboratory (AOML), and Scripps Institution of Oceanography. The field measurements of the water-following capability of the drifters, with vector-measuring current meters attached to the top and bottom of the drogue, were done over the period 1985-1989 [Niiler et al., 1987, 1995]. Several modeling studies of drifter behavior in steady upper layer shear and linear gravity wave fields were also done [Chabbra et al., 1987; Chereskin et al., 1989]. These studies provided a rational basis for the interpretation of the drogue slip measurements in the field. A TOGA/WOCE Surface Velocity Program (SVP) was organized to seek broad international support for drifter acquisitions and deployments.
By 1986 several SVP drifter designs had emerged and were being used in research programs in the Atlantic and Pacific. In 1988 a Pacific basin-wide TOGA process study, the Pan-Pacific Surface Current Study [World Climate Research Program, 1988], became operational. Its technical objectives were to use VOS ships to maintain an array of 120 drifters for a 3-year period and to select from the competing SVP drifter designs the most robust elements. Its scientific objectives were to obtain a tropical Pacific basin-wide field of surface currents and SST for the purpose of studying a variety of processes that determine SST evolution. Barometers were added to the SVP drifters in 1991, and in 1992, salinity sensors became an operational system on drifters in TOGA COARE. SVP drifters were deployed for WOCE in significant numbers in the Pacific by 1992, in the Atlantic as part of the Atlantic Climate Change Program in 1992, and in the Southern and Indian Oceans in 1994. By the end of TOGA, over 700 drifters were operational, and SVP emerged as the Global Drifter Program, maintained by resources from 16 countries.
The evolution of the SVP drifter to the Global Lagrangian Drifter took nearly 5 years of design and evaluation. It now consists of a spherical surface float that carries the electronics, SST, barometer, and drogue-on sensors. This float is tethered with plastic-coated wire to a holey sock drogue centered at 15-m depth [Sybrandy and Niiler, 1991; Sybrandy et al., 1995]. In the subtropical oceans the mean lifetime of a buoy (defined in terms of drogue retention) is 440 days; in the Southern Ocean the mean lifetime is 280 days. The accuracy of the water-following capability is dependent upon the winds and the "drag area ratio," the ratio of the frontal drag areas of the drogue relative to the surface float and tether [Niiler et al., 1995]. These drifters were designed to slip < 1 cm s-1 in 10 m s-1 winds and have a drag area ratio about 5 times larger than was used in FGGE drifters [Niiler and Paduan, 1995]. At the time of deployment the calibrated accuracy of the SST sensor is 0.1°C, and the accuracy of the barometer is 1 mbar. Location data provided by Service Argos have a minimum error of 300 m. To reduce Service Argos fees, these drifters transmit one third of the time in a 24- or 72-hour period.
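The drag area ratio can be made concrete with a short sketch. The dimensions below are illustrative assumptions for an SVP-style drifter, not the actual design specifications:

```python
import math

def drag_area_ratio(drogue_area_m2, float_area_m2, tether_area_m2):
    """Drag area ratio: frontal drag area of the drogue relative to the
    combined frontal area of the surface float and tether
    [after Niiler et al., 1995]."""
    return drogue_area_m2 / (float_area_m2 + tether_area_m2)

# Illustrative (not actual design) dimensions: a holey sock drogue
# ~0.9 m in diameter by ~6 m long, a ~0.35 m diameter spherical
# surface float, and a thin 15-m tether.
drogue = 0.9 * 6.0                  # drogue frontal area, m^2
float_ = math.pi * 0.35**2 / 4.0    # float frontal area, m^2
tether = 0.01 * 15.0                # tether frontal area, m^2

R = drag_area_ratio(drogue, float_, tether)
print(f"drag area ratio ~ {R:.0f}")
```

A larger ratio means the drogue's drag dominates that of the surface elements, so wind-driven slip of the float is more effectively suppressed.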
Service Argos processes these data for location, SST, and sea level pressure and places them on the GTS within 2 hours of reception. The GTS data are quality controlled and used on an operational basis by the meteorological agencies for weather and climate prediction and in a variety of data products that assess the nature of the variability of the oceans and lower atmosphere. For example, NCEP uses the raw drifter barometer data in real time off the west coast of the United States to aid in marine weather broadcasts and forecasts. Scientific quality data are processed at the Global Drifter Center at NOAA/AOML and, at 6-month intervals, are deposited at the Marine Environmental Data Service (MEDS), Canada, for release to the scientific and operational communities.
In TOGA the SVP drifters were deployed from research vessels, VOS, and airplanes. The average failure rate was 5% for ship deployments and 15% for air deployments. In the tropical Pacific most of the drifters were released near the equator. The objective was to maintain drifter arrays with enough samples in 2° latitude × 8° longitude areas to define the 15-m circulation. The sample size (Figure B2) depends more on the nature of the deformation field of the circulation than upon where drifters were released. For example, there were many more deployments in the eastern Pacific equatorial waveguide than in the North Equatorial Countercurrent, although the data density was much larger in the latter because of the nature of the surface flow and its variability.
Figure B2: Number of 5-day observations of velocity observed in 2° latitude × 8° longitude areas from surface drifters between January 1, 1979, and December 31, 1995, in the tropical Pacific. The total number of 5-day observations is 81,589. The maximum number of 5-day observations possible in any given box is 1098.
While the SVP drifter was being developed, the second part of the two-pronged program for drifter deployment was getting underway. The U.S. TOGA Scientific Steering Group in 1983 authorized a program to begin deployment of FGGE-type drifters in the Southern Oceans, managed by NOAA's National Ocean Service and National Data Buoy Center. These drifters carried barometers and SST sensors inside the metal hulls, and the data were reported routinely via Argos through the GTS as an operational system. Some drifters had a long rope or cable attached to the base, while others drifted freely in the wind and waves. In 1984, about 40 FGGE-type drifters were deployed, but escalating costs, inflation, a noncompetitive environment for industrial construction, and fixed budgets reduced the array to 20 drifters by 1994. Several operational meteorological agencies contributed drifters to this Southern Ocean FGGE drifter program through the Drifting Buoy Cooperation Panel (which later became the Data Buoy Cooperation Panel). The data reported on the GTS are stored in MEDS, Canada.
TOGA inherited a substantial Pacific tide gauge network that was largely installed during NORPAX. Recognition of the importance of the El Niño phenomenon, and many of the early diagnostic studies of it, may not have been possible without the sea level data. For example, the sea level changes associated with the El Niño events of 1972, 1976, and 1982-1983 were described prior to the beginning of TOGA in a series of papers by Wyrtki [1975, 1977, 1979, 1984, 1985b].
Efforts in the Pacific during TOGA were focused on expanding and refining this network. As the network was expanded, new gauges were generally placed at least 500 km from existing ones. In the Indian and Atlantic Oceans, however, few gauges existed or were reporting data regularly at the start of TOGA. Hence significant effort was undertaken to remedy this situation. At present the Indian Ocean network is nearly complete, in the sense that most of the available sites have been instrumented. The network in the Atlantic Ocean, on the other hand, remains limited by the scarcity of islands suitable for gauges. Early in TOGA, it was determined that the problem in the Atlantic was basically one of collecting available data, rather than attempting to place new instruments. This data collection effort was also largely successful, with 21 sites reporting data by the end of 1994 (Table 3).
The University of Hawaii Sea Level Center was responsible for coordinating, maintaining, and expanding the tide gauge network in the tropical regions for TOGA purposes. Instrumentation used in the network consists of a heterogeneous blend of instruments ranging from bubbler-type pressure gauges to state-of-the-art acoustic gauges, as described by Carter et al. The majority of the sites, however, have traditional float gauges in stilling wells. Some general information on the instrumentation available for sea level measurements is given by Pugh, and more specific information about the equipment used by the University of Hawaii Sea Level Center can be found in work by Kilonsky and Caldwell and Mitchum et al.
The heterogeneity of the instrumentation is due in part to the logistics involved in maintaining gauges over a wide geographical area. Simple bubbler gauges are favored at very remote locations, whereas sophisticated acoustic gauges have been installed at several readily accessible sites. At most sites, however, the float-type gauge is favored because of its relatively low cost, which allows the placement of redundant systems at each site. This philosophy has been proven successful, in that the University of Hawaii Sea Level Center network typically has a data return exceeding 97%, even with maintenance trips spaced at 2-3 year intervals.
The redundant nature of the University of Hawaii Sea Level Center installations allows an estimate of the instrumental accuracy of the float-type gauges. There are typically two independent stilling wells with completely separate instrumentation at each site, and these wells are within 1-2 m of one another. Differences between the time series are taken to be an estimate of the instrumental accuracy. These intercomparisons show that for timescales longer than 2 days the redundant float-type gauges agree to ~0.5 cm. Bubbler gauges are typically noisier, with differences of the order of 2 cm. Frequency spectra of the differences show that the noise is approximately white and will thus be negligible on monthly mean timescales.
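The claim that white noise becomes negligible in monthly means follows from the 1/sqrt(N) reduction of averaged independent errors. A minimal simulation sketch, assuming Gaussian white noise at the ~0.5-cm level quoted above and one sample per day:

```python
import numpy as np

rng = np.random.default_rng(0)

daily_noise_cm = 0.5   # float-gauge instrumental noise level, cm
n_days = 30            # samples entering a monthly mean
n_trials = 20000       # independent months to simulate

# Monthly means of independent (white) daily noise.
monthly_means = rng.normal(0.0, daily_noise_cm,
                           (n_trials, n_days)).mean(axis=1)

# Theory: the std of an average of N independent samples is sigma/sqrt(N).
expected = daily_noise_cm / np.sqrt(n_days)   # ~0.09 cm
print(f"std of monthly mean: {monthly_means.std():.3f} cm "
      f"(theory {expected:.3f} cm)")
```

A 0.5-cm daily noise level thus contributes only about 0.1 cm to a monthly mean, well below the oceanographic signals of interest.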
A more significant concern is possible contamination of the tide gauge time series by local island effects distorting the large-scale open ocean pressure field. This type of error is more difficult to estimate, but recent intercomparisons with sea surface heights from satellite altimeters suggest that it is not a very significant error source at low frequencies. Mitchum and Cheney et al. have shown that the sea levels from the tide gauges agree with the heights from the TOPEX altimeter to ~4 cm for timescales longer than 20 days and to 2 cm for timescales longer than 2 months. These estimates are comparable to what is expected from the error budget for the altimeter alone, which implies that the tide gauges cannot be contributing a large amount of variance to the differences.
For many stations in the portion of the TOGA tide gauge network maintained directly by the University of Hawaii Sea Level Center, data are returned in real time via the data channels on the geostationary satellites, and this delivery is backed up by monthly transmission of tapes from the stations to the Sea Level Center. For stations using only the delayed mode data delivery, data are processed and available to the community within several months of collection. Many of the TOGA tide gauges contribute to the operational data flow handled by the IGOSS Sea Level Project in the Pacific and to the near-real-time data set provided by the WOCE "Fast Delivery" Sea Level Center, both of which are also operated by the University of Hawaii Sea Level Center.
The international system of voluntary observation ships, initiated in the last century [Maury, 1859], is still a critical element of modern meteorological and oceanographic observation networks. This section treats different components of the VOS program separately. Surface marine meteorological data are reviewed first (section B4.1), followed by a discussion of the VOS XBT program, which was one of the major observational initiatives of TOGA (section B4.2). For completeness we also provide a brief discussion of the VOS sea surface salinity effort in the Pacific (section B4.3).
There are currently around 7000 VOS worldwide, operated by about 50 countries. They collect observations on sea surface pressure, wind velocity, sea state, humidity, and SST, as part of the World Weather Watch (WWW). Each month, typically, 100,000 or more surface observations are collected and transmitted in real time to national meteorological centers via satellite communication systems or via coastal radio stations. The meteorological centers are responsible for entering the data on the GTS for general use. VOS coverage is excellent in the vicinity of the well-traveled shipping routes (e.g., the North Pacific and North Atlantic) but has serious gaps in the southern oceans and in parts of the tropical oceans [Weller and Taylor, 1993].
Prior to the establishment of TAO and other dedicated TOGA observing systems, data from VOS marine reports and from island weather stations constituted the bulk of the available information on seasonal and interannual variability in tropical surface marine meteorological fields. The TOGA data requirements for surface fields and fluxes (Table 1) were based almost entirely on knowledge derived from analyses of VOS data [e.g., Taylor, 1984]. The International TOGA Project Office made several suggestions for improving the quality and density of VOS observations, but by and large, these were not implemented. Nonetheless, analyses of VOS data were extremely valuable for TOGA.
The creation of COADS [Woodruff et al., 1987] was a development based largely on VOS surface data that had a significant impact on climate research during TOGA. Prior to 1985, scientists who wished to work with the conventional surface marine data set often had to go through a laborious process of data extraction from the archives, followed by extensive quality control and analysis. COADS substantially reduced this impediment by creating a single data set of all available archived marine observations. The data were quality controlled and made widely available. This compilation was the basis for the Oberhuber and da Silva et al. climatologies and has been the basis for many recent studies of longer-term variability [e.g., Shriver and O'Brien, 1995].
Perhaps the most significant development in terms of the TOGA scientific history was the application of surface marine wind observations to produce time-varying wind fields. Wyrtki and Meyers [1975, 1976] produced the first such maps of wind and wind stress over the tropical Pacific Ocean, though on a coarse 2° latitude × 10° longitude grid. Esbensen and Kushnir, Han and Lee, and Hellerman and Rosenstein also exploited the marine data set to produce global analyses of wind stress and marine fields, but only for the seasonal and annual means. The Mesoscale Air-Sea Interaction Group at Florida State University (FSU), motivated by the need to produce a wind field data set suitable for forcing tropical ocean models, reanalyzed the Wyrtki and Meyers wind data [Goldenberg and O'Brien, 1981]. These analyses were for pseudostress (the product of the wind velocity times wind speed) and were originally restricted to the tropical Pacific Ocean (30°S-30°N) on a 2° × 2° grid. Focusing on pseudostress allowed Goldenberg and O'Brien to avoid complications due to uncertainties in specification of drag coefficients while at the same time including at least some of the nonlinearity of the wind stress formulation, which is quadratic in wind speed.
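The pseudostress definition amounts to multiplying each wind component by the wind speed; a minimal sketch (the function name is ours, not taken from the FSU analysis code):

```python
import math

def pseudostress(u, v):
    """Pseudostress components (m^2 s^-2): the wind velocity vector
    (u, v) times the wind speed |V|. Quadratic in wind speed, like
    the stress itself, but independent of the drag coefficient."""
    speed = math.hypot(u, v)
    return u * speed, v * speed

# Example: easterly trades of 6 m/s give a westward pseudostress
# of 36 m^2 s^-2.
px, py = pseudostress(-6.0, 0.0)
print(px, py)   # -36.0 0.0
```

Multiplying by a drag coefficient and air density would convert pseudostress to stress; deferring that step is exactly the complication the FSU analyses avoided.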
The FSU wind analysis evolved considerably through the period of TOGA [Legler and O'Brien, 1984; Legler, 1991; Stricherz et al., 1992]. Monthly analyses are now performed routinely in near-real time for the Pacific Ocean using data available from the GTS. The fact that these analyses have been made available in near-real time allowed the development of timely and useful prediction systems like that of Cane et al. Recently, full development of the TAO array during the second half of TOGA has approximately doubled the number of wind estimates used in the FSU analyses between 10°N and 10°S in the equatorial Pacific (Figure B1). Yearly reanalyses are performed augmenting GTS data with delayed mode data from National Climatic Data Center (NCDC) archives and COADS.
Legler et al. extended the FSU analysis system to the Indian Ocean using techniques that allow information from various platforms, including satellites, to be merged. This technique has been extended to include surface fluxes over the Indian Ocean for the period 1960-1989 [Jones et al., 1995]. Rao et al. also analyzed the COADS data for the tropical Indian Ocean region to produce a consistent set of heat flux fields. These fields have been used in various numerical models of the Indian Ocean [e.g., McCreary et al., 1993].
In the tropical Atlantic Ocean one of the first time-varying analyses of surface marine data was produced by the Institut Français de Recherche Scientifique pour le Développement en Coopération (ORSTOM) group at Brest, France, following the methodology developed by the FSU [Servain et al., 1984, 1985]. Also, Reverdin et al. [1991a] analyzed the wind stress between 20°S and 30°N in the tropical Atlantic, using merchant ship wind observations. These analyses have been used in several numerical modeling studies, including those of Blanke and Delecluse and Braconnot and Frankignoul.
The accuracy of surface analyses based on merchant ship marine data is dependent on the quality and sampling density of the input data. For example, Weare cataloged a number of different systematic errors in surface marine observations, including conversion of Beaufort wind force observations to wind speeds and spurious warming in SSTs from engine intake temperatures. Systematic errors like these are significant and cannot be removed by increased sampling density, as can random errors. As an example of the magnitude of these effects, Weare also concluded that uncertainties in latent heat flux computed from VOS data exceeded 30 W m-2 everywhere. Cardone et al. also cautioned that differing interpretations of Beaufort wind observations in the historical data set can lead to artificial trends in surface analyses, such as that of Legler and O'Brien.
Major events in the evolution of XBT sampling since the instrument was invented were discussed by Meyers et al. The XBT is a temperature profiler commonly dropped from VOS recruited from the merchant shipping, fishing, and military fleets [e.g., Baker, 1981; Sy, 1991]. The most commonly used models (T4 and T7) measure to depths of 460 and 760 m, respectively. The instrument was developed during the 1960s and over the years has perhaps been more extensively used than any other single oceanographic instrument. Among its advantages are that the measurements can be carried out quickly, while the ship is underway, without in most cases having to reduce speed, and that operation of the instrument is easily learned by a new observer.
According to the manufacturer's specifications the temperature accuracy of the XBT is ±0.15°C. Some studies have shown that probe-to-probe thermistor temperature variability can be <±0.06°C at the 95% confidence level [Sy, 1991]. The measurement of relative, vertical temperature differences is also more accurate than the specifications [Roemmich and Cornuelle, 1987] so that small inversions and finestructure are detectable in the profile. The depth is estimated from a drop-rate equation using the time elapsed after the probe enters the water. It has been demonstrated, however, that temperature profiles made using the T4, T6, and T7 probes exhibit a systematic error with depth that is associated with an inadequate drop-rate equation supplied by the manufacturer [Hanawa et al., 1995]. After correction for the systematic error the depth accuracy is within the manufacturer's specifications (±2% of depth or ±5 m) in ~82% of XBT drops. Quality control of XBT data is a major task because the instrument malfunctions before reaching 250 m in about 15% of the probe launches. The modes of instrument failure have been carefully documented [Bailey et al., 1994] and distinguished from real temperature inversions and other structure so that a data quality expert can recognize and flag most faulty data in postcruise processing.
Design of the VOS XBT array for TOGA recognized the need to map large-scale variations in thermal structure to a known accuracy on a grid that would barely detect the smallest scales of interest. A strategy of low-density sampling was devised to provide broad-scale, widely dispersed coverage in areas that have routine merchant shipping on a monthly-to-quarterly cycle. Sampling error due to unresolved small-scale variability such as eddies, tropical instability waves, and internal waves is a source of geophysical noise. Many studies since the late 1970s have shown that the noise variance is about equal to the variance of the large-scale signals [Meyers et al., 1991; White, 1995; Kessler et al., 1996]. Maps of large-scale signals are produced using optimal interpolation (OI) as a filter to remove (or reduce) the smaller-scale variability. The mapping error variance after OI is typically 0.3 to 0.5 times the signal variance, in the areas that are best sampled. In dimensional units this translates to mapping errors of about 4-6 m in the depth of isotherms [Smith and Meyers, 1996].
Using the method of OI to design a sampling strategy required a prior knowledge of the statistical structure of the subsurface temperature field. Of particular importance are the so-called "decorrelation scales of variability," which represent the typical spatial and temporal extent in latitude, longitude, and time of the most energetic features. The scales for the tropical oceans were estimated [Meyers et al., 1991; Meyers and Phillips, 1992] by fitting a Gaussian curve to the covariance function of temperature variability estimated from observations. Recognizing that the scales show considerable regional differences as well as differences in depth and time, a uniform set of e-folding scales was recommended for application throughout the tropical oceans, 2° latitude × 15° longitude × 2 months. Maps of the temperature field were found to be rather insensitive to the exact value of the decorrelation scales in regions with good data coverage; however, mapping errors changed considerably with changes in the assumed scales.
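The OI procedure described in the two paragraphs above can be sketched in a few lines. This is a minimal illustration, assuming a Gaussian signal covariance with the recommended uniform e-folding scales (2° latitude × 15° longitude × 2 months) and a noise-to-signal variance ratio of 1, consistent with the finding that geophysical noise variance roughly equals the signal variance; the observation positions and values are hypothetical.

```python
import numpy as np

# E-folding decorrelation scales recommended for the tropical oceans.
L_LON, L_LAT, L_T = 15.0, 2.0, 2.0   # deg lon, deg lat, months
NOISE_TO_SIGNAL = 1.0                # noise variance ~ signal variance

def covariance(p, q):
    """Gaussian signal covariance (unit signal variance); p, q = (lon, lat, t)."""
    dx = (p[0] - q[0]) / L_LON
    dy = (p[1] - q[1]) / L_LAT
    dt = (p[2] - q[2]) / L_T
    return np.exp(-(dx**2 + dy**2 + dt**2))

def oi_estimate(obs_pts, obs_vals, grid_pt):
    """OI analysis value and normalized mapping error variance at grid_pt."""
    n = len(obs_pts)
    # Observation-observation covariance, plus noise on the diagonal.
    C = np.array([[covariance(p, q) for q in obs_pts] for p in obs_pts])
    C += NOISE_TO_SIGNAL * np.eye(n)
    # Grid-point-to-observation covariance.
    c = np.array([covariance(grid_pt, p) for p in obs_pts])
    w = np.linalg.solve(C, c)              # OI weights
    value = w @ np.asarray(obs_vals)
    err_var = 1.0 - w @ c                  # mapping error / signal variance
    return value, err_var

# Hypothetical example: three XBT drops (lon, lat, month) mapped to one point.
obs_pts = [(180.0, 0.0, 0.0), (187.5, 1.5, 0.5), (172.5, -1.5, 1.0)]
obs_vals = [1.0, 0.5, 0.8]               # e.g. isotherm-depth anomalies
val, err = oi_estimate(obs_pts, obs_vals, (181.0, 0.5, 0.5))
```

The normalized error variance `err` plays the role of the 0.3–0.5 mapping error quoted above: it falls toward zero as observations cluster within a decorrelation scale of the grid point and rises toward one where data are sparse.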
On the basis of the above considerations the recommended low-density sampling strategy was prescribed as one XBT drop per 1.5° latitude × 7.5° longitude per month. Experience has shown that the recommended low-density sampling can be achieved in regions with good VOS coverage by dropping an XBT every 6 hours along the regular shipping tracks. A shortcoming of the VOS XBT network is that merchant shipping does not cover all areas of the global ocean, so that XBT sampling must be combined with other observations from in situ instruments or altimetric data to achieve global coverage.
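A back-of-envelope check connects the 6-hour drop interval to the prescribed grid. Assuming a typical merchant-ship speed of about 18 knots (an assumption of this sketch; the text does not state a speed), the along-track spacing works out to roughly the 1.5° cell size of the low-density design.

```python
# Along-track spacing implied by 6-hourly XBT drops, assuming a typical
# merchant-ship speed of ~18 knots (hypothetical value for illustration).
SHIP_SPEED_KT = 18.0          # assumed VOS speed, nautical miles per hour
DROP_INTERVAL_H = 6.0         # drop interval from the low-density strategy

spacing_nm = SHIP_SPEED_KT * DROP_INTERVAL_H   # 108 nautical miles
spacing_deg = spacing_nm / 60.0                # 1 deg of arc = 60 nm
print(spacing_deg)            # 1.8 deg, comparable to the 1.5 deg grid cell
```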
In addition to the description of large-scale signals and initialization of ocean models the design of VOS XBT sampling for TOGA also recognized a need to observe seasonal-to-interannual variations of major geostrophic currents in the tropical oceans. A strategy of frequently repeated sampling was devised for a few transequatorial VOS lines in each ocean, with a recommended sample rate of three observations per decorrelation scale in latitude and time [Meyers et al., 1991]. The frequently repeated sampling rate can usually be achieved with 4-hour sampling on 18 voyages per year. Some noise due to spatial aliasing may be introduced into analyses of repeat transect data, if the ship tracks are spread out in a swath, but are treated as having been exactly repeated [McPhaden et al., 1988c]. In most cases this error is much smaller than the signals of interest along frequently traversed lines in the tropical Pacific. On some routes, XCTD data are also collected [Roemmich et al., 1994].
Since the 1980s most shipboard XBT systems have recorded data on a personal (desktop or laptop) computer. Real-time delivery is achieved for most installations by sending data to the GTS via Argos or geostationary satellites. Some data still are sent to the GTS by coastal radio stations. Upper Ocean Thermal Data Assembly Centers provide expert quality control of delayed mode data which are archived at the WOCE/TOGA Subsurface Data Center in Brest, France. Based in part on VOS XBT data collected during TOGA and WOCE, the annual and seasonal mean upper ocean thermal structure of the global ocean has been documented with the most comprehensive data set available in the World Ocean Atlas 1994 [Levitus and Boyer, 1994].
Salinity data collected as part of VOS programs have provided valuable insights into near-surface water mass variability and its relation to atmospheric forcing in the tropics. Though not among the highest-priority measurements during TOGA, surface salinity nonetheless exhibits strong seasonal-to-interannual timescale variations that are important to understand in the context of coupled ocean-atmosphere interactions associated with ENSO. For this reason, and to present a complete picture of the overall VOS effort in TOGA, we briefly discuss SSS measurements in this section.
The history of SSS measurements based on VOS networks dates at least back to the early 1950s in the Gulf of Guinea [Berrit, 1961]. On the basis of this early effort, VOS SSS networks were initiated by ORSTOM in the Pacific in 1969 and in the Atlantic and Indian Oceans in 1977 [see Donguy, 1994, and references therein]. SSS measurements are obtained from water samples bottled by ship officers about every 60 nautical miles and later analyzed on shore by laboratory salinometers. As compared with CTD measurements, the accuracy of bucket measurements is estimated to be of the order of 0.1–0.2 psu.
TOGA inherited a decade-long VOS SSS network in 1985, but because of various obstacles the bucket sampling rate decreased dramatically, falling by 1994 to about 25% of the 1985 rate. During the second half of TOGA and the COARE Enhanced Monitoring Period, efforts were focused on complementary arrays of thermosalinographs installed on board merchant ships [Henin and Grelet, 1996], on TAO moorings [McPhaden et al., 1990c; Koehn et al., 1996], and on drifting buoys [Swenson et al., 1991]. The accuracy of the temperature and conductivity sensors deployed on these platforms yields salinity measurements accurate to about 0.02 psu. Owing to the disproportionate availability of surface relative to subsurface salinity measurements, most TOGA-related salinity studies concern SSS only.