NEWS AT SEI
This article was originally published in News at SEI on: December 1, 2001
Many organizations are now constructing major software systems from commercial off-the-shelf (COTS) products. An essential part of such an undertaking is evaluating the commercial products that are available to determine their suitability for use in the system. Virtually all organizations evaluate COTS software products before using them, yet projects still fail. These failures are often directly traceable to the quality of the organization's evaluation process.
In response to these problems, the SEI and the National Research Council Canada (NRC) have co-developed a COTS software product evaluation process that can be tailored to the needs of a variety of organizations and helps them determine a product's fitness for use in their systems. The process is being taught in a two-day tutorial that can be delivered at the SEI or a customer organization. A half-day version of the tutorial is also being offered at the International Conference on COTS-Based Software Systems in Orlando, Florida, February 4-6, 2002.
The importance of choosing the right product for a COTS-based system cannot be overstated. The influence that the product has on the system is pervasive: the COTS product can determine the system architecture, the functional capabilities of the system, and even the maintenance processes for the system. With some COTS-based systems running into the tens and even hundreds of millions of dollars, the risk of failure is too great not to invest in evaluating the products that these systems are based on.
Once an organization chooses to use commercial components, the question becomes how to assess or evaluate these products. In addition to considering specific techniques for evaluation, there is also a need to address some of the more general process-related issues that arise when evaluating COTS products. For example, whose job is evaluation? How do traditional notions of evaluation differ from COTS evaluation? What new activities might be implied when COTS products are under evaluation?
To help answer these questions, the SEI has developed a framework for an evaluation process that organizations can tailor to their needs. In addition, a set of techniques is identified that can be applied in this process.
The evaluation process is based in part on the experience of the SEI in working with organizations that have struggled with building COTS-based systems. Ed Morris of the SEI says, "A lot of organizations were struggling in part because of inadequate evaluation of commercial products. They would select a product and they would find out later that, for example, the product didn't do as much as they expected. That's a failure of evaluation. And we saw other cases where an organization bought a product and a few months later the company that sold the product would go out of business, which is another failure of evaluation."
Based in part on ISO/IEC 14598, the high-level process is flexible and can be adapted to many specific process implementations. It consists of four basic elements: planning the evaluation, establishing evaluation criteria, collecting data, and analyzing the data.
In addition to the basic process, there is a set of techniques to help in planning, establishing criteria, and collecting and analyzing data.
The elements are summarized below. The tutorial provides an in-depth look at each, and suggests techniques that practitioners can use throughout the process.
Planning the evaluation: The organization determines the level of effort required and estimates cost and schedule. To identify the level of effort, organizations must consider the criticality of the components and candidate products in relation to strategic objectives. The greater the technical risk, and the more critical the strategic objectives, the greater the rigor required.
Although there are few specific techniques for estimating resources and schedule for COTS evaluation, several general techniques are applicable—for example, expert opinion, analogy, decomposition, and cost modeling.
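One of those general techniques, decomposition, can be sketched in a few lines. This is a minimal illustration, not part of the SEI process itself: the evaluation is broken into tasks (the task names and numbers below are invented), each task gets a three-point estimate in person-days, and the points are combined with the common PERT weighting.

```python
# Effort estimation by decomposition (illustrative tasks and numbers):
# break the evaluation into tasks, give each an optimistic / likely /
# pessimistic estimate in person-days, and combine with PERT weighting.

def pert_estimate(optimistic, likely, pessimistic):
    """Weighted three-point estimate: (O + 4M + P) / 6."""
    return (optimistic + 4 * likely + pessimistic) / 6

# Hypothetical decomposition of a COTS evaluation into tasks.
tasks = {
    "define evaluation criteria": (2, 4, 8),
    "install and configure candidates": (3, 5, 10),
    "run hands-on scenarios": (5, 8, 15),
    "analyze data and report": (2, 3, 6),
}

total = sum(pert_estimate(*t) for t in tasks.values())
print(f"Estimated evaluation effort: {total:.1f} person-days")  # 21.8
```

The same decomposition can feed analogy-based estimates: if a past evaluation of similar rigor took longer than its PERT total, the ratio becomes a calibration factor for the next estimate.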
Establishing evaluation criteria: The first step is to distill, from the full set of system requirements, those that are appropriate for the evaluation. Determining evaluation requirements involves analyzing system requirements to determine their applicability, and generating new requirements specific to the use of the COTS product. From these requirements, evaluation criteria are developed, each consisting of a capability statement (a measurable statement of ability to satisfy a need) and a quantification method (a means for assessing the product's level of compliance with the capability statement).
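The capability statement/quantification method pairing can be pictured as a simple data structure. This is a sketch under assumed names (the criteria, thresholds, and 0-10 scale below are invented for illustration, not drawn from the SEI tutorial):

```python
# A criterion pairs a measurable capability statement with a
# quantification method: here, a function mapping a raw measurement
# to a 0-10 compliance score. Names and thresholds are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    capability: str                     # measurable statement of need
    quantify: Callable[[float], float]  # maps a measurement to a score

criteria = [
    Criterion(
        capability="Imports a 10,000-record file in under 60 seconds",
        # Full marks under 60 s, linear penalty up to 120 s, then zero.
        quantify=lambda s: max(0.0, min(10.0, 10.0 * (120 - s) / 60)),
    ),
    Criterion(
        capability="Vendor support responds within one business day",
        quantify=lambda days: 10.0 if days <= 1 else 0.0,
    ),
]

print(criteria[0].quantify(45))  # well within threshold: 10.0
print(criteria[0].quantify(90))  # halfway into the penalty band: 5.0
```

Making the quantification method executable forces it to be unambiguous, which is the point of requiring one for every capability statement.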
Collecting data: The organization collects information about how the products perform against the evaluation criteria developed previously. Different criteria and situations require different data collection techniques. For example, the technique applied to determine the value of a critical criterion will be quite rigorous; techniques for non-critical criteria will be less so. The tutorial outlines a number of techniques that can be used to collect data. It emphasizes "hands-on" techniques that collect data by actually running the product in sample scenarios.
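A hands-on collection pass can be organized as a small harness that runs each scenario against the candidate product and records the raw results. The sketch below uses an invented stand-in for the product and made-up scenario probes; a genuine evaluation would drive the installed COTS product itself:

```python
# Scenario-based data collection (scenario names and probes invented):
# each sample scenario exercises the candidate and records a raw
# measurement that later feeds the quantification methods.

import time

def run_scenarios(product, scenarios):
    """Run each scenario probe against a product and record results."""
    results = {}
    for name, probe in scenarios.items():
        start = time.perf_counter()
        outcome = probe(product)
        elapsed = time.perf_counter() - start
        results[name] = {"outcome": outcome, "seconds": elapsed}
    return results

# Stand-in for the real product under evaluation.
fake_product = {"max_records": 50_000}

scenarios = {
    "bulk import capacity": lambda p: p["max_records"] >= 10_000,
    "handles empty input": lambda p: True,  # placeholder probe
}

results = run_scenarios(fake_product, scenarios)
print(results["bulk import capacity"]["outcome"])  # True
```

Keeping raw outcomes and timings separate from scores means the same collected data can be re-scored if the criteria or their weights later change.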
Analyzing data: The organization consolidates the collected data into a form that can be analyzed. Some useful techniques for data analysis are sensitivity analysis, gap analysis, and cost of fulfillment. Sensitivity analysis helps determine how the evaluation results are affected by changes in assumptions, such as a change in the weighting of criteria. Gap analysis highlights the gap between the capability provided by a COTS component and the capability required for the system. Cost of fulfillment helps determine the effort needed to narrow such a gap. For example, fulfillment could involve altering the system architecture, adding features, or modifying the requirements.
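Sensitivity analysis over weighted criteria can be sketched concretely. The products, scores, and weights below are invented for illustration: each candidate gets a weighted score, then one weighting assumption is shifted to see whether the ranking flips.

```python
# Sensitivity analysis over weighted criteria (all values illustrative):
# score each product, then perturb the weights and re-rank. A ranking
# that flips under a plausible weight change signals a fragile decision.

def weighted_score(scores, weights):
    return sum(scores[c] * w for c, w in weights.items())

def ranking(products, weights):
    return sorted(products, reverse=True,
                  key=lambda p: weighted_score(products[p], weights))

products = {
    "Product A": {"functionality": 9, "vendor viability": 3},
    "Product B": {"functionality": 6, "vendor viability": 8},
}

baseline = {"functionality": 0.7, "vendor viability": 0.3}
print(ranking(products, baseline))  # Product A leads (7.2 vs 6.6)

# Shift emphasis toward vendor viability and re-rank.
shifted = {"functionality": 0.4, "vendor viability": 0.6}
print(ranking(products, shifted))   # Product B leads (7.2 vs 5.4)
```

The same score table also supports a simple gap analysis: any criterion where the leading product scores below the required level marks a gap whose cost of fulfillment must then be estimated.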