The SEI CBS work has focused on three primary categories of practices:
Even organizations that have never developed a COTS-based system are aware of the complexity of selecting a COTS product. Not only must they consider the qualities of competing products, but they must also determine whether the technologies on which the products are based are sufficiently mature for general use, and whether these technologies are likely to remain viable over the life of the system.
Similarly, organizations implementing CBS strategies for new or legacy systems must consider not only immediate system requirements but also the unceasing evolution of computing and software technology. Failure to address either of these concerns results in a flawed system: in the first case, a system that does not meet immediate user expectations; in the second, a system that follows technological directions that ultimately dead-end.
To ensure that systems meet user requirements, product evaluation practices must be developed. Typically, products are described in terms of interfaces that provide access to functionality. Here, standards may provide a frame of reference for comparing a product to generally accepted capabilities. Various approaches have been developed for evaluating products in terms of their interfaces.
However, to determine the fitness of a product for a given use, consideration must be given to more than just the interfaces the product provides. Aspects of performance, reliability, and flexibility, as well as the implicit assumptions the product makes about its operating environment, must also be considered. For example, while examination of the published interface of a product may suggest that it can interoperate with a second product, interoperation may be limited by each product's assumption that it has primary responsibility for handling incoming events. Much of this sort of information is not addressed by standards and is unavailable from product suppliers. Thus, hands-on evaluation to identify such mismatches (called architectural mismatches by Garlan, Allen, and Ockerbloom, and interface mismatches by Wallnau, Clements, and Zaremski) must be a primary option.
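The event-handling conflict described above can be made concrete with a small sketch. Everything here (the component names, the shared event queue) is hypothetical; the point is that each component's interface looks acceptable in isolation, and the mismatch surfaces only when both are integrated over the same event source.

```python
import queue
import threading
import time

class ProductA:
    """Hypothetical COTS component: assumes it owns the event stream."""
    def __init__(self):
        self.events = queue.Queue()
        self.handled = []

    def run(self, stop):
        # Blocks the calling thread, consuming every event it sees.
        while not stop.is_set():
            try:
                self.handled.append(self.events.get(timeout=0.01))
            except queue.Empty:
                pass

class ProductB(ProductA):
    """A second hypothetical component with the same assumption."""
    pass

# Each product's published interface looks compatible, but both expect
# exclusive control of incoming events.  Wiring them to one shared
# event source makes each event land in one handler or the other,
# unpredictably -- a mismatch no interface description reveals.
a, b = ProductA(), ProductB()
shared = queue.Queue()
a.events = b.events = shared          # both "own" the same event source
stop = threading.Event()
ta = threading.Thread(target=a.run, args=(stop,))
tb = threading.Thread(target=b.run, args=(stop,))
ta.start(); tb.start()
for i in range(100):
    shared.put(i)
time.sleep(0.5)
stop.set(); ta.join(); tb.join()
# Neither product sees the full stream; the 100 events are split
# between the two handlers.
print(len(a.handled), len(b.handled))
```

Hands-on evaluation of this kind, rather than interface inspection alone, is what exposes such conflicting assumptions before system integration.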
Evaluation of a product can also extend to examination of other factors, such as the COTS product supplier or the process used to create and maintain the product. For example, many organizations now insist on ISO 9000 certification for vendors as an indication that the vendor's product has been produced using well-defined practices and procedures.
However, simply composing a system of quality COTS products will not ensure that the right technologies are selected or that a system remains viable over an extended period of operation. The ebb and flow of technologies and related products in the marketplace necessitates strict discipline in identifying, analyzing, and selecting COTS products that incorporate viable technologies. To understand the characteristics of a technology, an organization can use representative products to build demonstrators and provide proof of concept for use of the technology in specific system scenarios.
Part of our CBS activity has involved identifying sound practices for evaluating COTS technologies. These practices are presented in our Product Evaluation Tutorial and in the technical report A Process for COTS Software Product Evaluation.
In addition, we have evaluated techniques to wrap legacy and COTS products and mediate or bridge the differences and gaps between these products. We have investigated technologies (and related COTS products) such as Web browsers, CORBA, COM, and Enterprise JavaBeans (EJB) to determine the feasibility of using them to address our
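As a rough illustration of the wrapping approach mentioned above, the following sketch (all names and record formats are hypothetical) adapts a legacy component's fixed-width record interface to the dictionary-style interface a modern client might expect, mediating the gap without modifying either product.

```python
class LegacyInventory:
    """Hypothetical legacy component: exposes fixed-width text records
    (10-character item ID followed by a right-aligned quantity)."""
    def dump_records(self):
        return ["WIDGET-001  12", "GADGET-042   7"]

class InventoryWrapper:
    """Wrapper that bridges the legacy record format and the
    structured interface a newer COTS client expects.  Differences
    between the two products are absorbed here, in one place."""
    def __init__(self, legacy):
        self._legacy = legacy

    def get_items(self):
        items = []
        for rec in self._legacy.dump_records():
            # Slice the fixed-width fields and convert types.
            items.append({"id": rec[:10].strip(), "qty": int(rec[10:])})
        return items

wrapper = InventoryWrapper(LegacyInventory())
print(wrapper.get_items())
# → [{'id': 'WIDGET-001', 'qty': 12}, {'id': 'GADGET-042', 'qty': 7}]
```

The same mediation idea scales up to the technologies named above (CORBA, COM, EJB), where the wrapper becomes a bridge component between object models rather than a single class.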