

Technical descriptions of techniques used for observing system evaluation and design follow. A good overview, from the NWP literature, is given by Rabier et al. (2008).


Observing System Experiments (OSEs):

OSEs involve the systematic denial of a subset of observations, followed by evaluation of the degradation in quality of the resulting analyses and forecasts. The degradation quantifies the impact of the withheld observations. Relevant references: Balmaseda et al. (2007), Oke and Schiller (2007; GRL)
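The data-denial idea can be illustrated with a minimal toy sketch (hypothetical, not any operational system): a 1-D state is analysed twice from the same noisy observations, once with the full observing network and once with every second observation withheld, and both analyses are scored against a known truth. All names, error statistics, and the pointwise optimal-interpolation update are illustrative assumptions.

```python
# Toy OSE sketch (illustrative assumptions throughout):
# compare analysis error with the full observing network against
# analysis error when a subset of observations is withheld.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                        # state dimension
truth = np.sin(np.linspace(0, 2 * np.pi, n))  # the "real" field

background = truth + rng.normal(0, 0.5, n)    # first guess with errors
obs_idx_all = np.arange(0, n, 2)              # full observing network
obs = truth[obs_idx_all] + rng.normal(0, 0.1, obs_idx_all.size)

def analyse(obs_idx, obs_vals, bg, r=0.1**2, b=0.5**2):
    """Pointwise optimal-interpolation update at observed locations."""
    xa = bg.copy()
    gain = b / (b + r)                        # scalar Kalman gain
    xa[obs_idx] = bg[obs_idx] + gain * (obs_vals - bg[obs_idx])
    return xa

rmse = lambda x: np.sqrt(np.mean((x - truth) ** 2))

# Control analysis: assimilate all observations.
xa_control = analyse(obs_idx_all, obs, background)

# Denial analysis: withhold every second observation.
keep = slice(0, None, 2)
xa_denial = analyse(obs_idx_all[keep], obs[keep], background)

print(f"background RMSE: {rmse(background):.3f}")
print(f"control RMSE:    {rmse(xa_control):.3f}")
print(f"denial RMSE:     {rmse(xa_denial):.3f}")
```

The gap between the control and denial RMSE is the (toy) impact of the withheld observations; a real OSE applies the same logic to a full assimilation and forecast system.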


Observing System Simulation Experiments (OSSEs):

OSSEs, sometimes referred to as twin experiments, typically use two different models. One model is used to perform a "truth" run, which is treated as if it were the real ocean. The truth run is sampled in a manner that mimics either an existing or a proposed future observing system, yielding synthetic observations. The synthetic observations are assimilated into the second model, and model performance is evaluated by comparison against the truth run. Relevant references: Ballabrera-Poy et al. (2007), Schiller et al. (2004), Vecchi and Harrison (2007)
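A minimal twin-experiment sketch, under purely illustrative assumptions: a scalar "truth" model is sampled with observation error to generate synthetic observations, which a second model with a biased forcing and wrong initial condition assimilates by simple nudging. Both the models and the nudging scheme are hypothetical stand-ins for the model pair an actual OSSE would use.

```python
# Toy OSSE / twin experiment (illustrative assumptions throughout):
# model A provides the "truth" run; model B, deliberately different,
# assimilates synthetic observations sampled from model A.
import numpy as np

def step(x, forcing, dt=0.1):
    """One step of a toy damped, forced scalar 'ocean' model."""
    return x + dt * (-0.5 * x + forcing)

nsteps, obs_every = 200, 5
truth = np.empty(nsteps)
truth[0] = 1.0
for t in range(1, nsteps):
    truth[t] = step(truth[t - 1], forcing=np.sin(0.1 * t))

# Sample the truth run as an observing system would, with obs error.
rng = np.random.default_rng(1)
obs_times = np.arange(0, nsteps, obs_every)
synthetic_obs = truth[obs_times] + rng.normal(0, 0.05, obs_times.size)

def run_model_b(assimilate, gain=0.8):
    """Model B: biased forcing, wrong initial condition, optional nudging."""
    x = np.empty(nsteps)
    x[0] = 0.0                                         # wrong initial state
    obs_lookup = dict(zip(obs_times.tolist(), synthetic_obs))
    for t in range(1, nsteps):
        x[t] = step(x[t - 1], forcing=0.8 * np.sin(0.1 * t))  # biased model
        if assimilate and t in obs_lookup:
            x[t] += gain * (obs_lookup[t] - x[t])      # nudging update
    return x

free_run = run_model_b(assimilate=False)
assim_run = run_model_b(assimilate=True)
rmse = lambda x: np.sqrt(np.mean((x - truth) ** 2))
print(f"free-run RMSE vs truth:     {rmse(free_run):.3f}")
print(f"assimilating RMSE vs truth: {rmse(assim_run):.3f}")
```

Because the truth run is known exactly, the benefit of the (here synthetic) observing system can be measured directly, which is what makes OSSEs useful for evaluating observing systems that do not yet exist.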


Analysis self-sensitivities

Coming soon

Relevant references: Cardinali et al. (2004)


Singular vector analysis

Coming soon

Relevant references: Fujii et al. (2008a,b)


Forecast sensitivities

Coming soon

Relevant references: Langland and Baker (2004)


Adaptive sampling

Coming soon

Relevant references: Bishop et al. (2001), O'Kane et al. (2010)


Ensemble Transform Kalman Filter

Coming soon

Relevant references: Bishop et al. (2001)