Proceedings of the 51st Annual Meeting of the ISSS - 2007, Tokyo, Japan, Papers: 51st Annual Meeting



Gerald Midgley


Systems practitioners often make significant claims for the value of their methodologies and methods. However, when evidence is presented to support these claims, it is usually based solely on the practitioner’s own reflections on single case studies. Less often, practitioners set up post-intervention debriefings with project participants using questionnaires. While the latter is an improvement on researcher reflections alone, there have been few attempts at systematically evaluating across methods and across case studies undertaken by different practitioners. This is understandable because, in any given local intervention, contextual factors, the skills of the practitioner and the purposes being pursued by stakeholders are inevitably going to affect the perceived success or failure of a method. The use of standard metrics, and even of qualitative criteria for comparison, can therefore be made problematic by the need to consider what is unique in each intervention. So is it possible to develop a single evaluation approach that can support both locally meaningful evaluations and longer-term comparisons between methods? This paper offers a framework for the evaluation of methods that seeks to do just this. Research on the framework and its associated tools is in its infancy, but pilot studies suggest that it is promising. Comparing across methods will ultimately require the development of a longer-term international research program, and this paper serves as a first call for participants in that program.
