As has been pointed out earlier, the process of IA is akin to constructing a house from many disciplinary bricks held together by a great deal of subjective mortar and glue. Often the bricks are documented but the glue is not. One consequence is a loss of transparency: the actual model structure is opaque to other members of the community, and results and insights cannot be reproduced. Reproducibility is an important component of assuring adequacy because, unlike in many other disciplines, it is not enough to assess the adequacy of the individual components of an integrated assessment model; the adequacy of the whole must be assessed as well. Reproducibility and transparency are key to this task, which can be accomplished only if model results and insights can be traced through the model structure back to the starting assumptions and inputs of the analysis.
Transparency cannot be ensured simply by placing models in the public domain; it has to be considered at every point in the modeling process. At the outset, transparency is increased by making the software modular and readable, and by providing adequate documentation. These are common-sense programming practices that can go a long way toward making IA models usable and understandable to outsiders.
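To illustrate what such practices might look like in code, the following is a minimal sketch (not drawn from any particular IA model) of a single documented module. It uses the Kaya identity as a stand-in for a model component; the names, units, and numerical values are hypothetical. The point is that the subjective "glue" — the input assumptions — is collected in one explicitly documented structure rather than scattered as hard-coded constants, so that an outside reader can trace a result back to its inputs.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class KayaAssumptions:
    """Explicit input assumptions for one scenario.

    Each driver is documented with its units here, rather than being
    buried as an undocumented constant inside the model code, so that
    results can be traced back to the assumptions that produced them.
    All values below are hypothetical, for illustration only.
    """

    population: float        # persons
    gdp_per_capita: float    # dollars per person per year
    energy_intensity: float  # megajoules of energy per dollar of GDP
    carbon_intensity: float  # kg of CO2 emitted per megajoule


def annual_emissions(a: KayaAssumptions) -> float:
    """Annual CO2 emissions (kg/year) via the Kaya identity.

    emissions = population x GDP/capita x energy/GDP x CO2/energy
    """
    return (a.population * a.gdp_per_capita
            * a.energy_intensity * a.carbon_intensity)


# A named, documented scenario instead of anonymous magic numbers.
baseline = KayaAssumptions(
    population=1e9,
    gdp_per_capita=1e4,
    energy_intensity=8.0,
    carbon_intensity=0.06,
)

print(annual_emissions(baseline))
```

Because the assumptions object is immutable and self-describing, two analysts running the same scenario should obtain the same result, and a reviewer who disputes a conclusion can see exactly which input to challenge — a small-scale version of the traceability argued for above.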