Terms |
Definitions |
Aggregation |
Aggregation is the joining of more or less equivalent elements.
Aggregation can take place across different scale-dimensions, leading to different resolutions on
these scales. The most relevant scale dimensions in environmental assessment are: temporal scale
(e.g. diurnal; seasonal; annual; century), spatial scale (e.g. local; regional; continental; global), and
systemic scales (e.g. individual plants; ecosystems; terrestrial biosphere). |
Aggregation error |
Aggregation error arises from the scaling up or scaling down of variables to meet a required aggregation
level. In cases of non-additive variables the scaling-up or scaling-down relations
are always to a certain degree arbitrary. |
Assessment |
Assessment is a process that connects knowledge and action (in both directions) regarding a problem.
Assessment comprises the analysis and review of knowledge for the purpose of helping someone in
a position of responsibility to evaluate possible actions or think about a problem.
Assessment usually does not mean doing new research. Assessment means assembling, summarizing,
organizing, interpreting, and possibly reconciling pieces of existing knowledge, and communicating
them so that they are relevant and helpful to an intelligent but inexpert policy-maker or other actor
involved in the problem at hand. |
Behavioural variability |
|
Bias |
A constant or systematic deviation as opposed to a random error.
It appears as a persistent over- or under-estimation of the quantity measured, calculated or estimated.
See also expert bias and value loading. |
Bias: Anchoring |
Assessments are often unduly weighted toward the conventional value,
the first value given, or the
findings of previous assessments.
Thus, they are said to be 'anchored' to this value. |
Bias: Availability |
This bias refers to the tendency to give too
much weight to readily available data or recent experience
(which may not be representative of the required data) in
making assessments. |
Bias: Coherence |
Events are considered more likely when many
scenarios can be created that lead to the event, or if some
scenarios are particularly coherent. Conversely, events are
considered unlikely when no scenarios can be imagined.
Thus, probabilities tend to be assigned more on the basis of
one's ability to tell coherent stories than on the basis of
the intrinsic probability of occurrence. |
Bias: Overconfidence |
Experts tend to over-estimate their
ability to make quantitative judgements.
This can sometimes be seen when an estimate of a quantity and
its uncertainty are given, and it is retrospectively discovered that
the true value of the quantity lies outside the interval.
This is difficult for an individual to guard against;
but a general awareness of the tendency can be important. |
Bias: Representativeness |
This is the tendency to place more
confidence in a single piece of information that is
considered representative of a process than in a larger body
of more generalized information. |
Bias: Satisficing |
This refers to a common tendency to search through a limited
number of familiar solution options and to pick from among them. Comprehensiveness is sacrificed for expediency in
this case. |
Bias: Unstated assumptions |
A subject's responses are
typically conditional on various unstated assumptions. The
effect of these assumptions is often to constrain the degree
of uncertainty reflected in the resulting estimate of a
quantity. Stating assumptions explicitly can help reflect
more of a subject's total uncertainty. |
Burden of proof |
The 'burden of proof' sets the onus of responsibility
in argumentation according to whether one must prove positive or negative attributes (innocence/guilt; presence/absence, etc.) about the issue in dispute. The burden of proof therefore sets out who is responsible for making a case. For example, burden of proof in
environmental regulation may be set such that an activity will not be regulated or prohibited unless proof of harm can be made. Alternatively, the burden of proof may be set such that activities of a certain kind will be prohibited unless it can be proved that they will do no harm. |
Conflicting evidence |
|
Context validation |
Context validity refers to the probability that an estimate has
approximated the true but unknown range of causally relevant aspects and
rival hypotheses present in a particular policy context.
Context validation thus minimizes the probability that something of relevance is overlooked.
Context validation can be performed by a participatory bottom-up process to elicit from
stakeholders aspects considered relevant and rival hypotheses on causal relations underlying a problem
and rival problem definitions and problem framings. See Dunn (1998,
2000). |
Cultural theory |
Cultural theory, also known as "group-grid theory", is an explanatory scheme created by Mary Douglas and
applied by her and colleagues such as Michael Thompson. It assumes two axes for describing social formations,
"group" and "grid"; combinations of "high" and "low" on these axes yield the types described as "hierarchist", "egalitarian",
"fatalist" and "individualist". Michael Thompson has added a fifth type, residing in the middle, called the "hermit". In
recent applications the "fatalist" has been eliminated from the scheme.
Recently Ravetz (2001) proposed a modification of the scheme using as dimensions of social variation style of action (isolated/collective) and
location (insider/outsider), yielding the types "Administrator", "Businessman", "Campaigner", and "Survivor" (ABCS). |
Disciplinary bias |
Science tends to be organized into different disciplines.
Disciplines develop somewhat distinctive cultures over time.
That is, they tend to develop their own character, manner
of viewing problems, manner of drawing problem boundaries
and selecting the objects of inquiry, and so on. These
differences in perspective will translate into forms of
bias in viewing problems. |
Epistemology |
The theory of knowledge. |
Expert bias (cognitive bias) |
Experts and lay people alike are subject to a variety of sources of
cognitive bias in making assessments. Some of these sources of bias
are as follows:
overconfidence,
anchoring,
availability,
representativeness,
satisficing,
unstated assumptions,
coherence.
A fuller description of sources of cognitive bias in expert and lay
elicitation processes is available in Dawes (1988). |
Extended facts |
Knowledge from other sources than science, including local knowledge, citizens' surveys, anecdotal
information, and the results of investigative journalism. Inclusion of extended facts in environmental
assessment is one of the key principles of Post-Normal Science.
|
Extended peer communities |
Participants in the quality assurance processes of knowledge production and assessment
in Post-Normal Science, including all stakeholders engaged in the management of the problem at hand. |
Extrapolation |
The inference of unknown data from known data, for instance future data from past data, by
analyzing trends and making assumptions. |
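As a purely illustrative sketch, the snippet below fits a linear trend to invented past data and infers a future value; the data, the linear-trend assumption, and the target year are all hypothetical, and any real extrapolation additionally assumes the fitted trend persists outside the observed range.

```python
# Minimal trend-extrapolation sketch (invented data, assumed linear trend).
import numpy as np

years = np.array([2000, 2001, 2002, 2003, 2004], dtype=float)
values = np.array([10.1, 10.9, 12.2, 12.8, 14.1])  # observed (invented) data

slope, intercept = np.polyfit(years, values, deg=1)  # least-squares line fit
future_year = 2010
estimate = slope * future_year + intercept
print(f"Extrapolated value for {future_year}: {estimate:.1f}")
```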
Facilitator |
A person who has the role to facilitate a structured group process (for instance participatory
integrated assessment) in such a way that the aim of that group process will be met. |
Focus group |
Well-established research technique applied since the 1940s in the social sciences, marketing,
evaluation and decision research. Generally, a group of 5 to 12 people are interviewed by a moderator on a
specific focused subject. With the focus group technique the researcher can obtain at the same time information
from various individuals together with the interactions amongst them. To a certain extent such artificial settings
simulate real situations where people communicate among each other. |
Functional error |
Functional error arises from uncertainty about the nature of the process represented by the model.
Uncertainty about model structure frequently reflects disagreement between experts about the
underlying causal mechanisms. |
GIGO |
Literally, Garbage In, Garbage Out, typically referring to the fact
that outputs from models are only as good as the inputs. Ravetz
(following Andy Stirling) has formulated GIGO as: Do the uncertainties in the inputs need
to be suppressed lest the outputs become indeterminate? Ravetz notes
that a symptom of GIGO is that as the accuracy of quantitative inputs goes down, the precision of
numerical outputs goes up.
A variant formulation is "Garbage In, Gospel Out" referring to a tendency to put faith in computer outputs regardless of the quality of the inputs. |
Global sensitivity analysis |
Global sensitivity analysis is a combination of sensitivity and
uncertainty analysis in which "a neighbourhood of alternative
assumptions is selected and the corresponding interval of inferences
is identified. Conclusions are judged to be sturdy only if the
neighbourhood of assumptions is wide enough to be credible and the
corresponding interval of inferences is narrow enough to be useful".
Leamer (1990) quoted in Saltelli (2001). |
Hardware error |
Hardware errors in model outcomes arise from bugs in hardware. An obvious example is the bug in
the early version of the Pentium processor for personal computers, which gave rise to numerical
error in a broad range of floating-point calculations performed on that processor.
The processor had already been in widespread use for quite some time when the bug was
discovered. It cannot be ruled out that hardware used for environmental models contains undiscovered
bugs that might affect the outcomes, although it is unlikely that they will have a significant influence
on the models' performance.
To secure against hardware error, one can test critical model output for reproducibility on a
computer with a different processor before the critical output enters the policy debate. |
Hedging |
Hedging is a quantitative technique for the iterative handling of uncertainties in decision making.
It is used, for instance, to deal with risks in finance and in corporate R&D decisions.
For example, a given future scenario may be considered so probable that all decisions which are made
assume that the forecast is correct. However, if these assumptions
are wrong, there may be no flexibility to meet other outcomes. Thus, rather than solely developing a course of
action for one particular future scenario, business strategic planners prefer to tailor a hedging strategy that will
allow adaptation to a number of possible outcomes. Applied to climate change, it could for example be used by
stakeholders from industry to reduce the risks of investing in energy technology, pending governmental measures
on ecotax. Anticipating a range of measures from government to reduce greenhouse gases emissions, a branch
of industry or a company could estimate the cost-effectiveness of investing or delaying investments in more
advanced energy technology. |
Ignorance |
The deepest of the three sorts of uncertainty distinguished by Funtowicz and Ravetz (1990):
Inexactness, unreliability and border with ignorance. Our knowledge of the behavior of the
data gives us the spread, and knowledge of the process gives us the assessment, but there is still
something more. No process in the field or laboratory is completely
known. Even physical constants may vary unpredictably. This is the realm of our ignorance: it includes
all the different sorts of gaps in our knowledge not encompassed in the previous sorts of uncertainty.
This ignorance may merely be of what is significant, such as when anomalies in experiments are
discounted or neglected, or it may be deeper, as is appreciated retrospectively when revolutionary new
advances are made. Thus, space-time and matter-energy were both beyond the bounds of physical
imagination, and hence of scientific knowledge, before they were discovered. Can we say anything
useful about that of which we are ignorant? It would seem by the very definition of ignorance that
we cannot, but the boundless sea of ignorance has shores, which we can stand on and map.
The Pedigree qualifier in the NUSAP system maps this border with ignorance in knowledge production.
In this way it goes beyond what statistics has provided in its mathematical approach to the management of uncertainty. |
Indeterminacy |
Indeterminacy is a category of uncertainty which refers to the open-endedness (both
social and natural) in the processes of environmental damage caused by human intervention.
It applies to processes where the outcome cannot (or only partly) be determined from the input.
Indeterminacy introduces the idea that contingent social behavior also has to be included in the
analytical and prescriptive framework.
It acknowledges the fact that many knowledge claims are not fully determined by empirical observations
but are based on a mixture of observation and interpretation.
The latter implies that scientific knowledge depends not only on its degree of fit with nature (the observation part),
but also on its correspondence with the social world (the interpretation part) and on its success in building and
negotiating trust and credibility for the way science deals with the 'interpretive space'.
|
Inexactness |
One of the three sorts of uncertainty distinguished by Funtowicz and Ravetz (1990): Inexactness, unreliability and border with ignorance.
Quantitative (numerical) inexactness is the simplest sort of uncertainty; it is usually expressed by significant digits and
error bars. Every set of data has a spread, which may be considered in some contexts as a
tolerance or a random error in a calculated measurement. It is the kind of uncertainty that relates
most directly to the stated quantity, and is most familiar to students of physics and even
the general public. Besides quantitative inexactness one can also distinguish qualitative inexactness, which
occurs when qualitative knowledge is not exact but comprises a range. |
Institutional uncertainty |
One of the seven types of uncertainty distinguished by De Marchi
(1994) in her checklist for characterizing uncertainty in
environmental emergencies:
institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Institutional uncertainty is in some sense a subset of societal
uncertainty, and refers more specifically to the role and actions of
institutions and their members. Institutional uncertainty stems from
the "diverse cultures and traditions, divergent missions and values,
different structures, and work styles among personnel of different
agencies" (De Marchi, 1994). High institutional uncertainty can
hinder collaboration or understanding among agencies, and can make the
actions of institutions difficult to predict. |
Lack of observations/measurements |
|
Legal uncertainty |
One of the seven types of uncertainty distinguished by De Marchi et al. in their checklist for characterizing uncertainty in
environmental emergencies:
institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Legal uncertainty is relevant "wherever agents must consider future
contingencies of personal liability for their actions (or inactions)". High legal uncertainty can result in defensive
responses in regard to both decision making and release of
information. Legal uncertainty may also play a role where actions are
conditioned on the clarity or otherwise of a legal framework in
allowing one to predict the consequences of particular actions. |
Leidraad |
Leidraad is a Dutch word and has no satisfactory English
equivalent. It constitutes an offering of
guidance which can be taken up if it helps or discarded if
not. |
Limited knowledge |
|
Model-fix error |
Model-fix errors are those errors that arise from the introduction of non-existent phenomena in the
model. These phenomena are introduced in the model for a variety of reasons. They can be included
to make the model computable with today's computer technology, or to allow simplification, or to
allow modelling at a higher aggregation level, or to bridge the mismatch between model behaviour
and observation and/or expectation. An example of the latter is the flux adjustment in many coupled
Atmosphere Ocean General Circulation Models used for climate projection.
The effect of such model fixes on the reliability of the model outcome will be bigger if the simulated
state of the system is further removed from the (range of) state(s) to which the model was calibrated.
It is useful to distinguish between (A) model fixes to account for well understood limitations of a model
and (B) model fixes to account for a mismatch between model and observation that is not understood. |
Monte Carlo Simulation |
Monte Carlo Simulation is a statistical technique for stochastic model-calculations and analysis of error propagation in calculations.
Its purpose is to trace out the structure of the distributions of model output. In its simplest form this distribution is mapped by calculating the
deterministic results (realizations) for a large number of random draws from the individual distribution functions
of input data and parameters of the model. To reduce the required number of model runs needed to get
sufficient information about the distribution in the outcome (mainly to save computation time), advanced
sampling methods have been designed such as Latin Hyper Cube sampling. The latter makes use of
stratification in the sampling of individual parameters and pre-existing information
about correlations between input variables. |
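A minimal sketch of the simple form described above, assuming an invented model y = a·exp(b·t) and invented input distributions; nothing here beyond the method itself is prescribed by the text.

```python
# Monte Carlo error propagation: draw inputs from their distributions,
# evaluate the deterministic model for each draw, and inspect the
# resulting distribution of the output.
import numpy as np

rng = np.random.default_rng(seed=42)
n_runs = 10_000

a = rng.normal(loc=1.0, scale=0.1, size=n_runs)    # assumed input distribution
b = rng.uniform(low=0.01, high=0.03, size=n_runs)  # assumed input distribution

t = 50.0
y = a * np.exp(b * t)  # one deterministic model realization per draw

print(f"mean = {y.mean():.2f}, std = {y.std():.2f}")
print("5th/95th percentiles:", np.percentile(y, [5, 95]).round(2))
```

For the stratified sampling mentioned above, scipy offers scipy.stats.qmc.LatinHypercube, which can replace the plain random draws when model runs are expensive.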
Moral uncertainty |
One of the seven types of uncertainty distinguished by De Marchi et al. in their checklist for characterizing uncertainty in
environmental emergencies:
institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Moral uncertainty stems from the underlying moral issues related to
action and inaction in any given case. De Marchi notes that,
though similar to legal responsibility, moral guilt may occur absent
legal responsibility when negative consequences might have been
limited by the dissemination of prior information or more effective
management for example. "Moral uncertainty is linked to the ethical
tradition of a given country be it or not enacted in legislation
(juridical and societal norms, shared moral values, mores), as well as
the psychological characteristics of persons in charge, their social
status and professional roles" (De Marchi, 1994). Moral uncertainty
would typically be high when moral and ethical dimensions of an issue
are central and participants have a range of understandings of the
moral imperatives at stake. |
Motivational bias |
Motivational bias occurs when people have an incentive to reach a certain conclusion or see things a certain way.
It is a pitfall in expert elicitation. Reasons for occurrence of motivational bias include:
a) a person may want to influence a decision to go a certain way; b) the person may perceive that he will be evaluated based
on the outcome and might tend to be conservative in his estimates; c) the person may want to suppress uncertainty that he
actually believes is present in order to appear knowledgeable or authoritative; and d) the expert has taken a strong stand in the past
and does not want to appear to contradict himself by producing a distribution that lends credence to alternative views.
|
Multi-criteria decision analysis |
A method of formalising issues for decision, using both "hard" and "soft"
indicators, not intended to yield an optimum solution but rather to clarify positions and coalitions.
|
Natural randomness |
|
Normal science |
The term 'normal science' was coined by T.S. Kuhn (1962), and Funtowicz and Ravetz (1990) referred to it when explaining their 'post-normal science'. In their words: "By 'normality' we mean two things. One is the picture of research science as 'normally' consisting of puzzle
solving within an unquestioned and unquestionable 'paradigm', in the theory of T.S. Kuhn (Kuhn 1962). Another is the assumption that the
policy environment is still 'normal', in that such routine puzzle solving by experts provides an adequate knowledge base for policy decisions. Of course researchers and experts must do routine work on
small-scale problems; the question is how the framework is set, by whom, and with whose awareness of the process. In 'normality', either science or policy, the process is managed largely implicitly, and is
accepted unwittingly by all who wish to join in." |
Numerical error |
Numerical error arises from approximations in numerical solution, rounding of numbers and
numerical precision (number of digits) of the represented numbers.
Complex models include a large number of linkages and feedbacks which enhances the chance
that unnoticed numerical artifacts influence the model behaviour to a significant extent.
The systematic search for artifacts in model behaviour which are caused by numerical error, requires
a mathematical 'tour de force' for which no standard recipe can be given. It will depend on the model
at hand how one should set up the analysis.
To secure against error due to rounding of numbers, one can test the sensitivity of the results to the
number of digits accounted for in floating-point operations in model calculations. A pitfall here is pseudo precision. |
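A tiny illustration of precision-dependent rounding error, with invented numbers: float32 carries roughly seven significant decimal digits, so the small term is lost, while float64 keeps it.

```python
# The same sum computed in float32 and float64 gives different answers.
import numpy as np

big, small = 1e8, 1.0
print((np.float32(big) + np.float32(small)) - np.float32(big))  # 0.0 (small lost)
print((np.float64(big) + np.float64(small)) - np.float64(big))  # 1.0 (exact here)
```

Comparing results across precisions in this way is one cheap form of the sensitivity test suggested above.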
NUSAP |
Acronym for Numeral, Unit, Spread, Assessment, Pedigree.
A notational system developed by Silvio Funtowicz and Jerry Ravetz to better manage and communicate
uncertainty in science for policy. |
Parameter |
A quantity related to one or more variables in such a way that it remains constant for any specified
set of values of the variable or variables. |
Pedigree |
Pedigree conveys an evaluative account of the production process of information (e.g. a number) on a quantity or phenomenon, and indicates different aspects of the underpinning of the numbers and the scientific status of the knowledge used.
Pedigree is expressed by means of a set of pedigree criteria to assess these different aspects. Examples of such criteria are empirical basis or degree of validation. These criteria are in fact yardsticks for strength. Many of these criteria are hard to measure in an objective way; assessment of pedigree involves qualitative expert judgement.
To minimise arbitrariness and subjectivity in measuring strength, a pedigree matrix is used to code qualitative expert judgements for each criterion into a discrete numeral scale from 0 (weak) to 4 (strong), with linguistic descriptions (modes) of each level on the scale.
Note that these linguistic descriptions are mainly meant to provide guidance in attributing scores to each of the criteria. It is not possible to capture in a single phrase all aspects that an expert may consider in scoring a pedigree. Therefore a pedigree matrix should be applied with some flexibility and creativity.
Examples of pedigree matrices can be found in the Pedigree matrices section of the website www.nusap.net |
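As a purely illustrative sketch of the coding step, assuming four criteria and invented scores (the criterion names follow common NUSAP usage, but a real application takes them from the chosen pedigree matrix):

```python
# Coding expert judgements on pedigree criteria into the 0 (weak) to
# 4 (strong) scale, plus one possible normalized strength indicator.
pedigree = {
    "proxy": 3,                  # invented score
    "empirical basis": 2,        # invented score
    "methodological rigour": 3,  # invented score
    "degree of validation": 1,   # invented score
}

# Divide by the maximum attainable total to get a 0-1 indicator.
strength = sum(pedigree.values()) / (4 * len(pedigree))
print(f"pedigree strength: {strength:.2f}")  # 0 = weak, 1 = strong
```

Whether and how to aggregate the scores into a single index is itself a methodological choice; the normalization used here is just one option.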
Pitfall |
A pitfall is a characteristic error that commonly occurs in assessing
a problem. Such errors are typically associated with a lack of
knowledge or experience, and thus may be reduced by experience, by
consultation with others, or by following procedures designed to
highlight and avoid pitfalls. In particularly complex problems we
sometimes say that pitfalls are "dense", meaning that there are an
unusual variety and number of pitfalls. See Ravetz (1971). |
Post-Normal Science |
Post-Normal Science is the methodology that is appropriate when "facts are uncertain, values in dispute,
stakes high and decisions urgent". It is appropriate when either "systems uncertainties" or "decision stakes"
are high. A tutorial is available on the website www.nusap.net |
Practically immeasurable |
|
Precautionary principle |
The principle is roughly that "when an activity raises threats of harm
to human health or the environment, precautionary measures should be
taken even if some cause and effect relationships are not fully
established scientifically" (Wingspread conference, Wisconsin, 1998).
Note that this would apply to most environmental assessments since
cause-effect statements can rarely be fully established on any issue.
If the burden of proof were set such that one must demonstrate a
completely unequivocal cause-effect relationship before taking action,
then it would not be possible to take action on any meaningful
environmental issue. The precautionary principle thus relates to the
setting of burden of proof. |
PRIMA approach |
Acronym for Pluralistic fRamework of Integrated uncertainty Management
and risk Analysis (Van Asselt, 2000). The guiding principle is that uncertainty legitimates different perspectives and that as a consequence uncertainty management should consider different perspectives.
Central to the PRIMA approach is the issue of disentangling controversies on complex issues in terms of salient uncertainties. The salient uncertainties are then 'coloured' according to various perspectives.
Starting from these perspective-based interpretations, various legitimate and consistent narratives are developed to serve as a basis for integrated analysis of autonomous and policy-driven developments
in terms of risk. |
Probabilistic |
Based on the notion of probabilities. |
Probability density function (PDF) |
The probability density function of a continuous random variable describes the relative likelihood that the variable takes a value in an infinitesimally small interval around a given point.
The probability density function can be integrated to obtain the probability that the random variable takes a value
in a given interval. |
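In symbols, for a continuous random variable X with density f, the two statements above read (standard textbook notation):

```latex
P(a \le X \le b) = \int_a^b f(x)\,dx ,
\qquad
\int_{-\infty}^{\infty} f(x)\,dx = 1 .
```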
Problem structuring |
An approach to analysis and decision making which assumes that participants do not have
clarity on their ends and means, and provides appropriate conceptual structures. It is a part of "soft systems
methodology". |
Process error |
Process error arises from the fact that a model is by definition a simplification of the real system
represented by the model. Examples of such simplifications are the use of constant values for entities
that are functions in reality, or focusing on key processes that affect the modelled variables by
omitting processes that play a minor role or are considered not significant. |
Proprietary uncertainty |
One of the seven types of uncertainty distinguished by De Marchi et al. in their checklist for characterizing uncertainty in
environmental emergencies:
institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Proprietary uncertainty occurs due to the fact that information and
knowledge about an issue are not uniformly shared among all those who
could potentially use it. That is, some people or groups have
information that others don't and may assert ownership or control over
it. "Proprietary uncertainty becomes most salient when it is
necessary to reconcile the general needs for safety, health, and
environment protection with more sectorial needs pertaining, for
instance, to industrial production and process, or to licensing and
control procedures" (De Marchi, 1994). De Marchi notes that 'whistle
blowing' is another source of proprietary uncertainty in that there is
a need for protection of those who act in sharing information for the
public good. Proprietary uncertainty would typically be high when
knowledge plays a key role in assessment, but is not widely shared
among participants. An example of such would be
the case of external safety of military nuclear production
facilities. |
Proxy |
Sometimes it is not possible to represent directly the quantity or phenomenon we are interested in by
a parameter so some form of proxy measure is used. A proxy can be better or worse depending on how
closely it is related to the actual quantity we intend to represent. Think of first order approximations,
over-simplifications, idealisations, gaps in aggregation levels, differences in definitions, etc. |
Pseudo-imprecision |
Pseudo-imprecision occurs when results have been expressed
so vaguely that they are effectively immune from refutation
and criticism. |
Pseudo-precision |
Pseudo-precision is false precision that occurs when the
precision associated with the representation of a number or
finding grossly exceeds the precision that is warranted by
closer inspection of the underlying uncertainties. |
Resolution error |
Resolution error arises from the spatial and temporal resolution in measurement, datasets or models.
The possible error introduced by the chosen spatial and temporal resolutions can be assessed
by analyzing how sensitive results are to changes in the resolution. However, this is not as straightforward
as it looks, since the change in spatial and temporal scales in a model might require significant changes in
model structure or parameterizations. For instance, going from annual time steps to monthly time steps in a climate model requires the
inclusion of the seasonal cycle of insolation. Another problem can be that data are not available
at a higher resolution. |
Robust finding |
A robust finding is "one
that holds under a variety of approaches, methods, models, and
assumptions and one that is expected to be relatively unaffected by
uncertainties" (IPCC, 2001). Robust findings should be insensitive to most
known uncertainties, but may break down in the presence of
surprises. |
Robust policy |
A robust policy should be relatively insensitive to over- or
underestimates of risk. That is, should the problem turn out
to be much better or much worse than expected, the policy
would still provide a reasonable way to proceed. |
Scenario |
A plausible description of how the future may develop, based on a coherent and internally consistent
set of assumptions about key relationships and driving forces (e.g., rate of technology changes, prices). Note
that scenarios are neither predictions nor forecasts. The results of scenarios (unlike forecasts) depend on the
boundary conditions of the scenario. |
Scientific uncertainty |
One of the seven types of uncertainty distinguished by De Marchi et al. in their checklist for characterizing uncertainty in
environmental emergencies:
institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Scientific uncertainty refers to uncertainty which emanates from the
scientific and technical dimensions of a problem as opposed to the
legal, moral, societal, institutional, proprietary, and situational
dimensions outlined by De Marchi et al. Scientific uncertainty is
intrinsic to the processes of risk assessment and forecasting. |
Sensitivity analysis |
Sensitivity analysis is the study of how the uncertainty in the output
of a model (numerical or otherwise) can be apportioned to different
sources of uncertainty in the model input. From Saltelli (2001). |
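A minimal sampling-based sketch, using an invented linear model with two uncertain inputs; the squared correlation between each input and the output is a crude apportionment measure adequate for near-linear models (variance-based methods such as Sobol' indices generalize this):

```python
# Apportioning output variance to inputs via squared correlation
# coefficients (adequate for this toy linear model).
import numpy as np

rng = np.random.default_rng(seed=1)
n = 5_000
x1 = rng.normal(0.0, 1.0, n)  # uncertain input 1 (assumed distribution)
x2 = rng.normal(0.0, 3.0, n)  # uncertain input 2 (wider spread)
y = 2.0 * x1 + x2             # invented model output

for name, x in [("x1", x1), ("x2", x2)]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name}: approximate share of output variance = {r**2:.2f}")
```

Here Var(y) = 2²·1 + 3² = 13, so the expected shares are about 0.31 for x1 and 0.69 for x2.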
Situational uncertainty |
One of the seven types of uncertainty distinguished by De Marchi et al. in their checklist for characterizing uncertainty in
environmental emergencies:
institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Situational uncertainty relates to "the predicament of the person
responsible for a crisis, either in the phase of preparation and
planning, or of actual emergency. It refers to individual behaviours
or personal interventions in crisis situations" (De Marchi, 1994)
and as such represents a form of integration over the other
six types of uncertainty. That is, it tends to combine the
uncertainties one has to face in a given situation or on a
particular issue.
High situational uncertainty would be characterized by situations
where individual decisions play a substantial role and there is
uncertainty about the nature of those decisions. |
Societal randomness |
|
Societal uncertainty |
One of the seven types of uncertainty distinguished by De Marchi et al. in their checklist for characterizing uncertainty in
environmental emergencies:
institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Communities from one region to another may differ in the set of norms,
values, and manner of relating characteristic of their societies.
This in turn can result in differences in approach to decision making
and assessment. Some salient characteristics of these differences
will be different views about the role of consensus versus conflict,
on locating responsibility between individuals and larger groups, on
views about the legitimacy and role of social and private
institutions, and on attitudes to authority and expertise. From De
Marchi (1994). Societal uncertainty would typically be high when
decisions involve substantial collaboration among groups characterized
by divergent decision making styles. |
Software error |
Software error arises from bugs in software, design errors in algorithms, typing errors in model source
code, etc. Here we encounter the problem of code verification, which is defined as: examination
of the numerical technique in the computer code to ascertain that it truly represents the conceptual
model and that there are no inherent numerical problems in obtaining a solution (ASTM E 978-84, cited in Beck et al., 1996).
If one realizes that some environmental models have hundreds of thousands of lines of source code,
errors in it cannot easily be excluded and code verification is difficult to carry out in a systematic manner. |
Stakeholders |
Stakeholders are those actors who are directly or indirectly affected by an issue and who could affect
the outcome of the decision-making process regarding that issue. |
Stochastic |
In stochastic models (as opposed to deterministic models), the parameters and variables are
represented by probability distribution functions. Consequently, the model behavior, performance, or operation
is probabilistic. |
Structural uncertainty |
Uncertainty about what the appropriate equations are to correctly represent a given causal relationship. |
Structured problems |
Hoppe and Hisschemöller have defined structured problems as
those for which there is a high level of agreement on the
relevant knowledge base and a high level of consent on the
norms and values associated with the problem. Such problems
are thus typically of a more purely technical nature and
fall within the category of 'normal' science. |
Surprise |
Surprise occurs when actual outcomes differ sharply from expected
ones. However, surprise is a relative term. An event will be
surprising or not depending on the expectations and hence point of
view of the person considering the event. Surprise is also inevitable
if we accept that the world is complex and partially unpredictable,
and that individuals, society, and institutions are limited in their
cognitive capacities, and possess limited tools and information. |
Sustainable development |
"Sustainable development is development that meets the needs of the
present without compromising the ability of future generations to meet
their own needs. It contains within it two key concepts: the concept
of "needs", in particular the essential needs of the world's poor, to
which overriding priority should be given; and the idea of limitations
imposed by the state of technology and social organization on the
environment's ability to meet present and future needs." (Brundtland
Commission, 1987) |
Technological surprise |
|
Transparency |
The degree to which a model is transparent. A model is said to be transparent if all key assumptions that underlie the model are accessible and understandable to its users. |
Type I error |
also: Error of the first kind. In hypothesis testing,
this error is caused by incorrect
rejection of the hypothesis when it is true.
Any test runs the risk of being either too selective or too sensitive.
The design of the test, especially confidence limits, aims
at reducing the likelihood of one type of error at the
price of increasing the other. Thus, all such statistical
tests are value laden. |
Type II error |
also: Error of the second kind. In hypothesis testing this error is caused by not rejecting the
hypothesis when it is false.
|
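The two error probabilities are conventionally written as follows; for a fixed sample size, tightening one typically loosens the other:

```latex
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true}) ,
\qquad
\beta = P(\text{fail to reject } H_0 \mid H_0 \text{ false}) .
```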
Type III error |
also: Error of the third kind. Assessing or solving the wrong problem by incorrectly accepting the false
meta-hypothesis that there is no difference between the boundaries of a problem, as defined by
the analyst, and the actual boundaries of that problem (Raiffa, 1968; redefined by Dunn, 1997,
2000). |
Unreliability |
One of the three sorts of uncertainty distinguished by Funtowicz and Ravetz (1990):
Inexactness, unreliability and border with ignorance.
Unreliability relates to the level of confidence to be placed in a quantitative statement,
usually represented by the confidence level (at say 95 % or 99 %). In practice, such judgements
are quite diverse; thus estimates of safety and reliability may be given as "conservative
by a factor of n". In risk analyses and futures scenarios estimates are qualified as "optimistic"
or "pessimistic". In laboratory practice, the systematic error in physical quantities, as distinct
from the random error or spread, is estimated on an historic basis. Thus it provides a kind of
assessment (the A in the NUSAP acronym) to act as a qualifier on the number together with its spread (the S in the NUSAP acronym). |
Unstructured problems |
Hoppe and Hisschemöller have defined unstructured problems as
those for which there is a low level of agreement on the
relevant knowledge base and a low level of consent on norms
and values related to the problem. Compare with structured
problems. Unstructured problems have similar characteristics
to post-normal science problems. |
Validation |
Validation is the process of comparing model output with observations
of the 'real world'. Validation cannot 'validate' a model as true or
correct, but can help establish confidence in a model's utility in
cases where the samples of model output and real world samples are at
least not inconsistent. For a fuller discussion of issues in
validation, see Oreskes et al., (1994). |
Value diversity |
|
Value-ladenness |
Value-ladenness refers to the notion that value orientations and biases of an analyst, an institute,
a discipline or a culture can co-shape the way scientific questions are framed,
data are selected, interpreted, and rejected, methodologies are devised, explanations are formulated
and conclusions are drawn. Since theories are always underdetermined by
observation, the analysts' biases will fill the epistemic gap which makes any assessment
to a certain degree value-laden. |
Variability |
In one meaning of the word, variability refers to the observable variations (e.g. noise) in a quantity that result from randomness in nature (as in 'natural variability of climate') and society.
In a slightly different meaning, variability refers to heterogeneity across space, time or members of a population.
Variability can be expressed in terms of the extent to which the scores in a distribution of a quantity differ from each other.
Statistical measures for variability include the range, mean deviation from the mean, variance, and standard deviation. |
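The named measures computed for a small invented sample (chosen so the mean is 5 and the numbers come out clean):

```python
# Range, mean deviation from the mean, variance, and standard deviation
# of an invented sample.
import numpy as np

scores = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

value_range = scores.max() - scores.min()           # 7.0
mean_dev = np.mean(np.abs(scores - scores.mean()))  # 1.5
variance = scores.var()                             # 4.0 (population form)
std_dev = scores.std()                              # 2.0

print(value_range, mean_dev, variance, std_dev)
```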
|