In the previous section we identified some of the problems with both the practice and the process of integrated assessment. Many of these problems are not unique to integrated assessment; they are found in many other interdisciplinary fields. In what follows, we suggest some possible causes or explanations for the observed problems in integrated assessment. Many of the problems with integrated assessment research are, in fact, driven by a lack of quality control. Ravetz (1971) suggests that quality may be described in terms of two criteria: adequacy and value. Quality depends not only on the character of the work (adequacy) but also on the field context in which it is placed and judged (value).
In many instances, the activity of integrated assessment is akin to building a house, where the bricks constitute the substantive and methodological knowledge from different disciplines and the mortar (or glue) frequently takes the form of the practitioner's subjective judgments linking the disparate knowledge blocks. Unfortunately, while the bricks may be quite sound and well described, the subjective judgments (the glue) are rarely made explicit[FN]. As a result, it is difficult to judge the stability of the structure that has been constructed. Thus, in the case of integrated assessment, we need criteria not only for assessing the quality of the individual components of the analysis, but also criteria applicable to the glue, that is, the subjective judgments of the analyst, as well as to the analysis as a whole. While criteria of adequacy for the individual components may be obtained from the individual disciplines, no comparable source of criteria exists for the ``glue'' in the analysis.
While adequacy and value as the twin criteria of quality are as applicable in IA as in traditional science, they have potentially different implications in the former than in the latter. In traditional science, the detailed products of the craft work of scientists are intelligible and valuable mainly to other scientists. The ``users'' of IA, by contrast, are in part purportedly from outside the community of IA practitioners: they are policy analysts, decision-makers, or even the lay public. Thus, while adequacy can and ought to be judged from within the IA community, one would expect that value ought to be judged from without. This poses a problem precisely because of the range of possible users of IA and their differing motivations and world-views. Were the determination of adequacy and value contained within the community of practitioners, that determination would be easier. The IA community needs to realize that the value of its work hinges on its ability to meet judgments from without. The experience with the IPCC chapter on greenhouse damages described earlier is a case in point: an NGO, the London-based Global Commons Institute, was largely responsible for the rejection of the chapter in its then-extant form. This rejection highlights the importance of outside judgment. We now discuss some specific possible causes of problems in integrated assessment.