Failure of foresight: Learning from system failures through dynamic model

Takafumi Nakamura, Kyoichi Kijima


A dynamic model for holistically examining system failures is proposed, with the aim of preventing their recurrence. Understanding a system failure correctly is crucial to preventing further occurrences; quick fixes can even degrade organizational performance to a level worse than the original state. A well-known side effect, "normalized deviance," contributed to NASA's Challenger and Columbia space shuttle disasters, and catastrophic system failures are typically preceded by a so-called "incubation period." This implies that catastrophic system failures can be avoided if the incubation period is sensed correctly and the normalized-deviance effect is responded to properly. If we do not understand a system failure correctly, we cannot resolve it effectively. We therefore first define three failure classes to capture the dynamic aspects of system failures: Class 1 (failure of deviance), Class 2 (failure of interface), and Class 3 (failure of foresight). We then propose a dynamic model for understanding system failures dynamically, turning hindsight into foresight to prevent recurrence. An application example from IT engineering demonstrates that the proposed model proactively promotes double-loop learning from previous system failures.


Keywords: system failure; engineering safety; dynamic model; double loop learning