It is difficult to assure the safety, security, reliability, or other nonfunctional properties of software-based systems because of their size, complexity, and continuing evolution. Traditional software and systems engineering techniques, including conventional test and evaluation approaches, cannot provide the justified confidence needed. The SEI is exploring the use of the assurance case as a means of providing such confidence, starting as early as system design and continuing through deployment. We are also creating a theory of assurance case confidence that will help acquirers, developers, and evaluators understand how much confidence they should have in the resulting system.
The concept of an assurance case has been derived from the safety case, a construct that has been used successfully in Europe for over a decade to document safety for nuclear power plants, transportation systems, automotive systems, and avionics systems.
The assurance case provides a means to structure the reasoning that engineers use implicitly to gain confidence that systems will work as expected. It also becomes a key element in the documentation of the system and provides a map to more detailed information.
The following figure shows a fragment of an assurance case for a keypad. It makes the claim (C1.1) that entry errors caused by the design of the keypad are mitigated. It bases this claim on an argument (only partially developed) showing how several possible hazards to proper data entry are mitigated (C3.1, C3.2, and C3.3). C3.2 claims that keypad markings are unambiguous; this claim is supported by evidence Ev4.1 and Ev4.2 (a design review and a log of observed errors).
Our recent work has focused on exploring how to achieve and measure confidence in an assurance case argument. This is a classic philosophical problem: determining the basis for belief (becoming confident) in a hypothesis when it is impossible to examine every possible circumstance covered by the hypothesis.
Our solution is to use an argumentation theory based on defeasible reasoning, eliminative induction, and Baconian probability to identify sources and amounts of confidence in a claim. Eliminative induction contrasts with the more common enumerative induction, and Baconian probability contrasts with the more common Pascalian probability.
One becomes confident in a hypothesis using enumerative induction by finding confirming instances of that hypothesis. An example of enumerative induction is running tests on a system. As long as the tests succeed, you have confirmation that the hypothesis (that the system behaves properly) is correct.
Consider, for example, the common light bulb that is controlled by a switch. Before you flip the switch, you have confidence that a light in the room will turn on. That confidence is based on the fact that you've flipped the switch hundreds or thousands of times before, and the light has always turned on. This is an example of building confidence by enumerative induction—counting the number of times something has worked successfully and using that experience as the basis to predict continued success.
However, there are many reasons why the light might not go on when you flip the switch. The switch might not be connected, the light might be burned out, or there may be no power to the switch. If we really want to make sure that the light will go on when we flip the switch, we check that these reasons for doubt are eliminated before we flip the switch. Having eliminated those reasons for doubt, we have a basis for being confident that the light will turn on. This is an example of building confidence by eliminative induction—eliminating reasons for doubting that something will work as expected.
Confidence, in this view, is just a function of how many reasons for doubt have been identified and removed. If n reasons for doubting a claim have been identified and i of these are eliminated by argument or evidence, then confidence in the claim is expressed as the Baconian probability, i|n. (i|n is read as "i out of n"; it is not a fraction.)
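The i|n measure is simple enough to sketch directly. The following Python fragment is a minimal illustration (not SEI tooling; the function name and data layout are ours) of computing a Baconian confidence value from a set of identified doubts, using the light-bulb doubts described above:

```python
def baconian_confidence(doubts):
    """Return the Baconian probability i|n for a claim.

    doubts: dict mapping each identified reason for doubt to True
    if it has been eliminated by argument or evidence, else False.
    The result is the string "i|n" -- "i out of n", not a fraction.
    """
    n = len(doubts)
    i = sum(1 for eliminated in doubts.values() if eliminated)
    return f"{i}|{n}"

# Doubts about "the light will turn on when the switch is flipped":
doubts = {
    "switch is not connected": True,   # wiring inspected
    "bulb is burned out": True,        # bulb tested
    "no power at the switch": False,   # not yet checked
}
print(baconian_confidence(doubts))  # prints "2|3"
```

Note that 2|3 carries more information than the fraction 2/3 would: it records both how many doubts have been surfaced and how many remain, so identifying a new doubt changes n even before anything is eliminated.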
In practice, arguments about system properties are defeasible, that is, the conclusions are subject to revision based on additional information. The ways of attacking an argument are called defeaters. There are only three types of defeaters: rebutting, undermining, and undercutting. A rebutting defeater provides a counter-example to a claim. An undermining defeater raises doubts about the validity of evidence. An undercutting defeater specifies circumstances under which a conclusion is in doubt when the premises of an inference rule are true.
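To make the taxonomy concrete, the sketch below models the three defeater types as they might attach to a claim in a confidence map. The class names and example text are illustrative assumptions, not SEI's notation, and the confidence method simply reuses the Baconian i|n idea from above:

```python
from dataclasses import dataclass, field
from enum import Enum

class DefeaterType(Enum):
    REBUTTING = "counter-example to the claim itself"
    UNDERMINING = "doubt about the validity of the evidence"
    UNDERCUTTING = "circumstances where the inference rule fails"

@dataclass
class Defeater:
    kind: DefeaterType
    description: str
    eliminated: bool = False  # True once argument/evidence removes the doubt

@dataclass
class Claim:
    text: str
    defeaters: list = field(default_factory=list)

    def confidence(self):
        """Baconian i|n over this claim's identified defeaters."""
        n = len(self.defeaters)
        i = sum(d.eliminated for d in self.defeaters)
        return f"{i}|{n}"

# Hypothetical doubts about claim C3.2 from the keypad fragment:
claim = Claim("Keypad markings are unambiguous")
claim.defeaters.append(Defeater(
    DefeaterType.UNDERMINING,
    "The design review may have missed ambiguous legends"))
claim.defeaters.append(Defeater(
    DefeaterType.REBUTTING,
    "The error log might show entry errors traced to markings"))
claim.defeaters[0].eliminated = True  # review rechecked against checklist
print(claim.confidence())  # prints "1|2"
```

Each defeater that is identified and then eliminated becomes an explicit, inspectable reason for confidence, which is exactly what a confidence map records.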
The following figure is a confidence map for the keypad assurance case fragment shown in the first figure. It explicitly shows the defeaters and inference rules associated with the case.