Welcome to NUSAP net

Robust knowledge for Sustainability

Pedigree matrix to assess parameter strength of Integrated Assessment Models


Pedigree matrix to assess parameter strength (Van der Sluijs et al., 2001)

| Score | Proxy | Empirical basis | Theoretical understanding | Methodological rigour | Validation |
|---|---|---|---|---|---|
| 4 | An exact measure of the desired quantity | Controlled experiments and large-sample direct measurements | Well-established theory | Best available practice in well-established discipline | Compared with independent measurements of the same variable over a long domain |
| 3 | Good fit or measure | Historical/field data, uncontrolled experiments, small-sample direct measurements | Accepted theory with partial nature (in view of the phenomenon it describes) | Reliable method common within established discipline; best available practice in immature discipline | Compared with independent measurements of a closely related variable over a shorter period |
| 2 | Well correlated but not measuring the same thing | Modelled/derived data; indirect measurements | Accepted theory with partial nature and limited consensus on reliability | Acceptable method but limited consensus on reliability | Measurements not independent; proxy variable; limited domain |
| 1 | Weak correlation but commonalities in measure | Educated guesses, indirect approximations, rule-of-thumb estimates | Preliminary theory | Preliminary methods, unknown reliability | Weak and very indirect validation |
| 0 | Not correlated and not clearly related | Crude speculation | Crude speculation | No discernible rigour | No validation performed |


Proxy
Sometimes it is not possible to represent the quantity of interest directly by a parameter, so some form of proxy measure is used. Proxy refers to how good or close the measure of the quantity we model is to the actual quantity we represent. Think of first-order approximations, oversimplifications, idealisations, gaps in aggregation levels, differences in definitions, non-representativeness, and incompleteness issues. If the parameter is an exact measure of the quantity, it scores four on proxy; if the parameter in the model is not clearly related to the phenomenon it represents, it scores zero.
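The criterion columns of the matrix translate directly into a small lookup table, indexed by the 0-4 score. As an illustration only, here is a minimal Python sketch; the dictionary and function names are hypothetical, and just two of the five columns are shown (the rest follow the same pattern):

```python
# Pedigree wording per criterion, indexed by score 0 (weakest) .. 4 (strongest).
# Wording copied from the matrix above (Van der Sluijs et al., 2001);
# the remaining criteria would be encoded the same way.
PEDIGREE_MATRIX = {
    "proxy": [
        "Not correlated and not clearly related",            # 0
        "Weak correlation but commonalities in measure",     # 1
        "Well correlated but not measuring the same thing",  # 2
        "Good fit or measure",                               # 3
        "An exact measure of the desired quantity",          # 4
    ],
    "empirical basis": [
        "Crude speculation",
        "Educated guesses, indirect approximations, rule-of-thumb estimates",
        "Modelled/derived data; indirect measurements",
        "Historical/field data, uncontrolled experiments, small samples",
        "Controlled experiments and large-sample direct measurements",
    ],
}

def describe(criterion: str, score: int) -> str:
    """Return the matrix wording for a criterion at a given pedigree score."""
    if not 0 <= score <= 4:
        raise ValueError("pedigree scores run from 0 to 4")
    return PEDIGREE_MATRIX[criterion][score]
```

Keeping the wording next to the score in this way makes elicitation sessions reproducible: experts pick a row description, and the numeric score follows from it.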

Empirical basis
Empirical basis typically refers to the degree to which direct observations, measurements and statistics are used to estimate the parameter. When the parameter is based upon good quality observational data, the pedigree score will be high. Sometimes directly observed data are not available and the parameter is estimated based on partial measurements or calculated from other quantities. Parameters determined by such indirect methods have a weaker empirical basis and will generally score lower than those based on direct observations.

Theoretical understanding
The parameter will have some basis in theoretical understanding of the phenomenon it represents. If our theoretical understanding of some mechanism is very high, we may well be able to make reliable estimates for the parameters that represent that mechanism, even if the empirical basis is weak. On the other hand, a strong empirical basis may not be sufficient to estimate future values of a parameter if our theoretical understanding of the mechanisms involved is absent; in that case, extrapolation from past data is not warranted. This criterion aims to measure the extent and partiality of the theoretical understanding used to generate the numeral of the parameter. Parameters based on well-established theory score high on this metric, while parameters whose theoretical basis has the status of speculation score low.

Methodological rigour
Some method will be used to collect, check, and revise the data used for making parameter estimates. Methodological rigour refers to how well that process meets the norms applied by peers in the relevant disciplines. Well-established and respected methods for measuring and processing the data score high on this metric, while untested or unreliable methods tend to score lower.

Validation
This metric refers to the degree to which one has been able to cross-check the data and assumptions used to produce the numeral of the parameter against independent sources. When the data and assumptions have been compared with appropriate sets of independent data to assess their reliability, the parameter will score high on this metric. In many cases, independent data for the same parameter over the same time period are not available, and other data sets must be used for validation. This may require a compromise in the length or overlap of the data sets, use of a related but different proxy variable for indirect validation, or use of data that has been aggregated on different scales. The more indirect or incomplete the validation, the lower the score on this metric.
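Once each of the criteria above has been scored, applications of the NUSAP method often summarise a parameter's overall strength as the average pedigree score normalised by the maximum score of 4. This aggregation is a common convention in the literature rather than something this page prescribes; a minimal sketch:

```python
from statistics import mean

def parameter_strength(scores: dict) -> float:
    """Normalised parameter strength in [0, 1]: the mean pedigree
    score across criteria, divided by the maximum score (4)."""
    if not scores or not all(0 <= s <= 4 for s in scores.values()):
        raise ValueError("each pedigree score must lie in 0..4")
    return mean(scores.values()) / 4

# A hypothetical parameter: decent proxy and theory, weak data and validation.
scores = {
    "proxy": 3,
    "empirical basis": 1,
    "theoretical understanding": 2,
    "methodological rigour": 2,
    "validation": 1,
}
print(parameter_strength(scores))  # prints 0.45, i.e. (3+1+2+2+1)/5 / 4
```

Averaging hides which criterion is weak, so in practice the full score vector (or a diagnostic diagram) is usually reported alongside any single strength number.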

Jeroen van der Sluijs, James Risbey, Serafin Corral Quintana, Jerry Ravetz, José Potting, Arthur Petersen, Detlef van Vuuren, "Assessment of Parameter Strength", in: Jeroen P. van der Sluijs, Jose Potting, James Risbey, Detlef van Vuuren, Bert de Vries, Serafin Corral Quintana, Jerry Ravetz (eds.), 2001. Uncertainty assessment of the IMAGE/TIMER B1 CO2 emissions scenario, using the NUSAP method. Dutch National Research Program on Climate Change, Report no: xxxxx (2001), ISBN: xxxxx, 215 pp.

