Cognizant Communication Corporation

FAILURE & LESSONS LEARNED IN INFORMATION TECHNOLOGY MANAGEMENT

ABSTRACTS
VOLUME 2, NUMBER 4, 1998

Failure & Lessons Learned in Information Technology Management, Vol. 2, pp. 163-171, 1998
1088-128X/98 $10.00 + .00
Copyright © 1998 Cognizant Comm. Corp.
Printed in the USA. All rights reserved.

Experiences of Fault Data in a Large Software System

Niclas Ohlsson and Claes Wohlin

Department of Computer and Information Science, Linköping University, S-581 83 Linköping, Sweden

Early identification of fault-prone modules is desirable from both developer and customer perspectives, because it supports planning and scheduling activities that facilitate cost avoidance and improved time to market. Large-scale software systems are rarely built from scratch; they usually involve modification and enhancement of existing systems. This suggests that development planning and software quality could be greatly enhanced, because knowledge about the product complexity and quality of previous releases can be taken into account when making improvements in subsequent projects. In this article we present results from empirical studies at Ericsson Telecom AB that examine the use of metrics to predict fault-prone modules in successive product releases. The results show that such prediction appears to be possible and has the potential to enhance project maintenance.
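The idea of using metrics from a previous release to flag fault-prone modules can be sketched roughly as follows. This is only an illustrative sketch: the module names and metric values are invented, and the actual metrics, models, and thresholds used in the Ericsson studies are described in the article itself.

```python
# Illustrative sketch: rank modules by a complexity metric measured on a
# previous release and flag the top fraction as candidates for extra
# verification effort. Names and values below are invented for illustration.

def rank_fault_prone(modules, top_fraction=0.2):
    """Return the module names with the highest metric values.

    modules: dict mapping module name -> complexity metric value
    top_fraction: fraction of modules to flag as fault-prone candidates
    """
    ranked = sorted(modules.items(), key=lambda kv: kv[1], reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return [name for name, _ in ranked[:cutoff]]

# Example: complexity-style scores per module from a hypothetical prior release
metrics = {"mod_a": 112, "mod_b": 35, "mod_c": 240, "mod_d": 18, "mod_e": 77}
print(rank_fault_prone(metrics))  # → ['mod_c']
```

In practice such a ranking is evaluated against the faults actually reported in the next release, e.g., by asking what share of the faults falls in the flagged modules.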

Key words: Failure; Fault data; Fault-prone modules

Correspondence and requests for reprints should be addressed to Niclas Ohlsson. E-mail: nicoh@ida.liu.se




Failure & Lessons Learned in Information Technology Management, Vol. 2, pp. 173-182, 1998

Formal Support for Development of Knowledge-Based Systems

Dieter Fensel,1 Frank van Harmelen,2 Wolfgang Reif,3 and Annette ten Teije2

1Institute AIFB, University of Karlsruhe, 76128 Karlsruhe, Germany
2Department of Mathematics and Computer Science, Vrije Universiteit, Amsterdam, 1018WB Amsterdam, The Netherlands
3Faculty of Computer Science, University of Ulm, 89069 Ulm, Germany

This article provides an approach for developing reliable knowledge-based systems. Its main contributions are as follows. Specification is done at an architectural level that abstracts from any specific implementation formalism. The CommonKADS model of expertise distinguishes different types of knowledge and describes their interaction; our architecture refines this model and adds a further level of formalization. The formal specification and verification system KIV is used to specify and verify such architectures. We chose KIV for four reasons: (1) it provides the formal means required for specifying the dynamics of knowledge-based systems (i.e., dynamic logic); (2) it supports compositional specifications; (3) it provides an interactive theorem prover; and (4) last but not least, it comes with a sophisticated tool environment developed in several realistic application projects.

Key words: Formal methods; Verification; Validation; Knowledge-based systems

Correspondence and requests for reprints should be addressed to Dieter Fensel. E-mail: fensel@aifb.uni-karlsruhe.de




Failure & Lessons Learned in Information Technology Management, Vol. 2, pp. 183-200, 1998

Formal Methods in the Development of Safety-Critical Knowledge-Based Components

Giovanna Dondossola

ENEL-SRI, Department of Electrical and Automation Research, Electronic Technologies for Automation, Via Volta 1, Cologno Monzese 20093 Milan, Italy

The work reported in this article is part of the ongoing Esprit project Safe-KBS No. 22360.* A main objective of the project is the definition of an engineering methodology for certifiable knowledge-based software components to be embedded in safety-critical systems. For about a decade, the use of formal methods in the development of traditional software for safety-critical systems has been strongly encouraged. At the same time, research in the knowledge engineering field is proposing new formal methods as a means to increase the quality of knowledge-based (KB) software products and processes. It therefore seems natural to propose a pervasive use of formal methods from the early stages of development as a vehicle to promote the acceptance of KB software in safety-critical application domains. This article concerns both the role of formal methods in the Safe-KBS engineering methodology and experimentation with their application based on a general-purpose formal method called TRIO. The specification and V&V features of TRIO are analyzed and judged with respect to the requirements arising from safety-critical KB software.
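To give a flavor of the kind of property a temporal logic such as TRIO expresses, the sketch below checks a bounded-response requirement over a discrete trace. Note the hedges: TRIO is a first-order temporal logic with a metric on time, and this is not TRIO syntax or semantics, only a loose discrete-time illustration; the alarm/shutdown trace is invented.

```python
# Illustrative discrete-time check of a bounded-response property of the
# kind a metric temporal logic (such as TRIO) can state formally:
# "whenever `trigger` holds, `response` holds within `bound` steps."

def always_within(trace, trigger, response, bound):
    """Check bounded response over a finite trace of state dictionaries."""
    for i, state in enumerate(trace):
        if trigger(state):
            window = trace[i : i + bound + 1]
            if not any(response(s) for s in window):
                return False
    return True

# Invented mini-trace of a hypothetical alarm/shutdown protocol
trace = [
    {"alarm": False, "shutdown": False},
    {"alarm": True,  "shutdown": False},
    {"alarm": False, "shutdown": True},
]
print(always_within(trace, lambda s: s["alarm"], lambda s: s["shutdown"], 2))
# → True: every alarm is followed by a shutdown within 2 steps
```

A full formal method goes far beyond such trace checking, of course: it supports specification, refinement, and proof, which is what the article evaluates TRIO against.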

Key words: Knowledge-based components; Safety-critical software; Formal methods; Temporal logic; Object-oriented concepts; Specification; Verification; Validation; Certification; Life cycle; Methodology

Correspondence and requests for reprints should be addressed to Giovanna Dondossola. E-mail: dondossola@pea.enel.it

*The Safe-KBS project is partially funded by the ESPRIT Programme of the Commission of the European Communities as project number 22360. The partners in the Safe-KBS project are Sextant-Avionique, Det Norske Veritas, Enel-Sri, Tecnatom, Computas Expert Systems, Uninfo, Qualience. This article reflects the opinions of the author and not necessarily those of the consortium.




Failure & Lessons Learned in Information Technology Management, Vol. 2, pp. 201-206, 1998

Analyzing Software Sensitivity to Human Error

Jeffrey Voas

Reliable Software Technologies Corporation, Suite 250, 21515 Ridgetop Circle, Sterling, VA 20166

Human operator errors in human-supervised computer systems are a growing concern. Software fault injection is an inexpensive way to simulate thousands of human operator error scenarios and determine what would occur if they were to happen. By trying these different scenarios before a system is deployed, greater confidence can be achieved that the software and the human will work together amicably.
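A minimal sketch of this style of fault injection follows, assuming a made-up valve-setpoint handler and an invented operator-error model (a slipped decimal point); the handler, error model, and numbers are all hypothetical, not taken from the article.

```python
import random

def fault_injection_trial(handler, nominal_input, perturb, trials=1000, seed=0):
    """Feed randomly perturbed operator inputs to a handler and report the
    fraction of corrupted inputs it silently accepts instead of rejecting."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(trials):
        corrupted = perturb(nominal_input, rng)
        try:
            handler(corrupted)
        except ValueError:
            continue  # corrupted input was rejected: safe outcome
        accepted += 1  # corrupted input was silently accepted
    return accepted / trials

# Hypothetical setpoint handler: accepts only values in a safe range
def set_valve(percent):
    if not 0 <= percent <= 100:
        raise ValueError("setpoint out of range")

# Hypothetical operator-error model: a decimal point slipped by 1-2 places
def slip(value, rng):
    return value * rng.choice([0.1, 10, 100])

rate = fault_injection_trial(set_valve, 50, slip)
print(rate)  # fraction of simulated operator errors the handler lets through
```

Here a slip to 5.0 stays inside the valid range and is accepted, while 500 and 5000 are rejected, so roughly a third of the injected errors get through; a real campaign would then trace what the downstream system does with those accepted values.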

Key words: Software sensitivity; Human error; Safety

Correspondence and requests for reprints should be addressed to Jeffrey Voas. E-mail: jmvoas@rstcorp.com




Failure & Lessons Learned in Information Technology Management, Vol. 2, pp. 207-210, 1998

Languages for Critical Systems

B. A. Wichmann

National Physical Laboratory, Teddington, Middlesex, UK, TW11 0LW

The most critical computer systems must be shown to be correct to regulatory authorities, or it must be shown that all reasonable care has been taken should a legal claim arise. The choice of programming language for such systems can aid analysis and help manage the complexity within the software. This article shows that Ada 95 has the attributes necessary to contribute to the construction of such systems.

Key words: Critical systems; Software safety

Correspondence and requests for reprints should be addressed to B. A. Wichmann. E-mail: Brian.Wichmann@npl.co.uk