Title: Developing reproducible and comprehensible computational models
Keywords: Computational model; Simulation; Methodology; Behavioural test; Theory development; Specification language; Documentation; Robust testing; Test; CHREST; Canonical result; Cooper; Shallice; Discrimination
Citation: Artificial Intelligence, 144, 251-263.
Abstract: Quantitative predictions for complex scientific theories are often obtained by running simulations on computational models. In order for a theory to meet with widespread acceptance, it is important that the model be reproducible and comprehensible by independent researchers. However, the complexity of computational models can make the task of replication all but impossible. Previous authors have suggested that computer models should be developed using high-level specification languages or large amounts of documentation. We argue that neither suggestion is sufficient, as each deals with the prescriptive definition of the model, and does not aid in generalising the use of the model to new contexts. Instead, we argue that a computational model should be released as three components: (a) a well-documented implementation; (b) a set of tests illustrating each of the key processes within the model; and (c) a set of canonical results, for reproducing the model's predictions in important experiments. The included tests and experiments would provide the concrete exemplars required for easier comprehension of the model, as well as a confirmation that independent implementations and later versions reproduce the theory's canonical results.
Appears in Collections: Psychology; Dept of Life Sciences Research Papers
Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.