NEWS AT SEI
This library item is related to the following area(s) of work: System of Systems
This article was originally published in News at SEI on: March 1, 1999
In this column I will deal with one of the most interesting and often unexpected aspects of using commercial off-the-shelf (COTS) products in complex software systems: the relationship that exists between product evaluation and system design. Numerous experiences have now shown that in systems that make extensive use of COTS products, there is a remarkable bond between evaluating the commercial components and designing the system. These two activities are not just closely related, but are in fact mutually dependent processes.
I carefully used the expression "complex software system" in the opening paragraph, and that was an important point. We hear the phrase "COTS-based system" used quite often these days, and it can refer to anything from Microsoft Office to an avionics system that happens to run on a commercial operating system. The relationship I wish to consider is most pertinent for a certain subset of this wide expanse of system types, so it is first necessary to restrict the domain of discourse.
We note the existence of a spectrum of software systems that could be called "COTS based." At one extreme is the system that comes from a single vendor and that performs widely used and well-understood business functions. Examples abound in financial management, personnel management, and so forth. Such systems need tailoring, of course, but the notion is still one of "off the rack," in much the same way that ready-made clothes are purchased. The customer chooses a vendor with products in the particular functional area; other discriminators might include cost, proximity, or past performance. The customer then describes more precisely what he or she is looking for, and the vendor responds with a solution that typically already exists. The vendor has previously solved the integration issues, has determined the system’s broad architectural principles, and has resolved any questions of infrastructure. After some necessary tailoring for specific needs (e.g., "we’ll take it in an inch here"), the vendor supplies the system as a package to the customer.
At the other extreme is the system that an organization designs and builds itself, a system that meets a specific and often unique need. The functionality of the system is the aggregate output of several--perhaps a great many--components, which have been integrated, typically with a good deal of "middleware" technology. The components themselves are obtained from a wide variety of sources, including commercial vendors and existing components, and some items are custom-made for the occasion. Any issues such as incompatible interfaces between components are problems that the integrator must solve.
There are two major things that distinguish these extremes. One is that for the former class of system, the emphasis is on tailoring (or "customizing") while for the latter class of system the emphasis is on integration. The second distinguishing aspect is that for the former system, the conceptual "owner" of the system (regardless of who buys it) is the vendor who sells it as a package. It is the vendor to whom the customer turns if the system fails, who is generally expected to provide normal maintenance support, and so forth. In the latter kind of system, the conceptual owner is the organization that determined the functional characteristics, selected the constituent components, and integrated them into a coherent whole. Any maintenance is piecemeal: as new versions of each component appear, the organization chooses whether or not to upgrade that part of the system. These extreme ends of the "COTS-based system" spectrum are shown in the figure below. I refer to the former type as a "COTS-solution system" and the latter type as a "COTS-integrated system."
For the remainder of this column, I shall concentrate on the right-hand side of this figure, on issues that arise in COTS-integrated systems, for it is in such systems that the mutual relationship between evaluation and design is most evident.
For most people who have built software over the past few decades, the notions of design and evaluation are usually quite distinct, particularly in their chronology. Design is commonly done early, soon after requirements are defined; evaluation is commonly done fairly late in the process, whether in the sense of "test and evaluation," acceptance testing, or even, for some people, as a form of quality assurance. Design is done before the system exists; evaluation is done after it exists. You can’t evaluate something until you’ve built it, and you can’t build it until you design it. So far, so good. If, that is, you are building a system from scratch.
But when we choose to let more and more of a system come from pre-existing parts--whether because of a government mandate or because of hoped-for economies--the way design and evaluation are performed changes in a subtle but very definite manner.
First, it becomes evident that the notion of "requirements" is now divided. On one hand, there is some collection of requirements that our final system must satisfy. On the other hand, each of the components that will be part of the system has some independent set of requirements that governed its creation. For commercial components, those requirements will rarely be explicit, and will almost certainly be based on marketplace imperatives. The detailed needs of our system are unknown to the COTS vendors, and even if they are known, they are rarely of interest.
Second, these components--or many of them, at least--will already exist before our system is even specified. The components’ interfaces, their architectural assumptions (e.g., choices between kernel-level threads or user-level threads and decisions about security factors), and their functional dependencies on other technologies will all characterize the products. They are not variable attributes that the system designer can change.
Third, the life cycle of individual components is in someone else’s hands. Updates, revisions, changes to a component’s internal architecture, even a decision to stop supporting a product are now all determined by the COTS vendor, and decisions are made as much for business reasons as by technical necessity.
These points force us to revise our notions of when and how we do evaluation and design. In traditional development, the principal constraint on a system may be its requirements. But a COTS-integrated system is constrained both by the system’s requirements and by the capabilities and constraints of the available components. We cannot simply follow the traditional sequence of specifying requirements and designing a system, and then hope to perform the implementation phase simply by going out to buy COTS products. To do so will almost certainly be hopeless, because it assumes that somewhere in the COTS marketplace is a collection of commercial products that just happen to fit perfectly with our needs.
Instead, we now must do a significant amount of inspection of the available products before we solidify our design. We must do testing, benchmarking, prototyping--in short, we must include product evaluation as a part of the design process. (Even more difficult is trading off requirements against existing commercial products, but I shall deal with that in a later column.)
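To make this concrete, the idea of product evaluation acting as a design-time filter can be sketched in code. This is a minimal, hypothetical illustration: the product names, measured attributes, and thresholds below are invented for the example, not drawn from any real evaluation.

```python
# Hypothetical sketch: product evaluation as a design-time filter.
# Products and constraint thresholds are illustrative assumptions only.

def screen_candidates(products, constraints):
    """Return names of products whose measured attributes meet every design constraint."""
    survivors = []
    for product in products:
        if all(product.get(attr, 0) >= minimum
               for attr, minimum in constraints.items()):
            survivors.append(product["name"])
    return survivors

# Benchmarking and prototyping would supply these numbers in practice.
candidates = [
    {"name": "MsgBusA", "throughput_tps": 900, "api_coverage": 0.80},
    {"name": "MsgBusB", "throughput_tps": 1500, "api_coverage": 0.95},
]
constraints = {"throughput_tps": 1000, "api_coverage": 0.90}

print(screen_candidates(candidates, constraints))  # ['MsgBusB']
```

The point of the sketch is only that the filtering happens while the design is still fluid: the survivors of each screening round become inputs to the next design decision, not a final purchase list.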
We find that this changes our existing notions of both design and evaluation. Our design activity has always included tradeoffs. But now the kinds of tradeoffs we make are much more various. Consider the "ilities," for example: As much as we consider the reliability of a component, we must also factor in the reliability of its vendor both to stay in business and to offer reasonable product support. As much as we assess the usability of a component, we must guess whether its vendor will still be in business three years hence.
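One way to picture these broader tradeoffs is a figure of merit that weighs vendor-related criteria alongside technical "ilities." The weights and scores below are purely illustrative assumptions, and any real evaluation would choose its own criteria.

```python
# Hypothetical sketch: folding vendor criteria into a product's figure of merit.
# All weights and 0.0-1.0 scores are illustrative assumptions.

def weighted_score(scores, weights):
    """Combine technical and vendor criteria into a single figure of merit."""
    return sum(scores[criterion] * weight
               for criterion, weight in weights.items())

weights = {
    "reliability": 0.40,       # technical attribute of the component
    "usability": 0.20,
    "vendor_viability": 0.25,  # will the vendor exist in three years?
    "support_quality": 0.15,   # reasonableness of product support
}
product = {"reliability": 0.9, "usability": 0.8,
           "vendor_viability": 0.5, "support_quality": 0.7}

print(round(weighted_score(product, weights), 3))  # 0.75
```

A technically excellent component with a shaky vendor can thus score below a merely adequate component from a stable one, which is exactly the kind of tradeoff that traditional design did not have to make.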
The evaluation activity is equally changed. The traditional notion of evaluation was rooted in requirements: A "shall" statement existed for some throughput figure, and the software either did or did not perform adequately. Even for requirements that were not easily quantified, there was still the implicit assertion that they were satisfied in some specific manner. Now, however, COTS product evaluation has an added element of "what if?" We assess the functional capabilities that we need, but we also look at what else the product might do. Serendipity is not necessarily a bad thing, and a product’s unexpected features might lead us to reconsider the system’s design. Sudden or unexpected marketplace developments, which seem to be more and more the rule, might also suggest system-design changes, an unhappy but pragmatic reality for the project manager.
And finally, these two activities are now carried out simultaneously. We start with certain design constraints, and we evaluate some products that might satisfy them. We then realize the implications that some products bring with them, and we modify the design to capitalize on a certain product’s strengths. We make market forecasts about new products, and focus our design constraints accordingly. We do another form of evaluation continuously, even apart from any particular system, by staying aware of current products and emerging technologies. What we don’t do is leave the evaluation activity to the end of the development life cycle.
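The alternation described above can be sketched as a simple loop: evaluate what the market offers, and if nothing fits, revise the design constraint and evaluate again. The constraint-revision rule and the market data here are illustrative assumptions, not a real selection process.

```python
# Hypothetical sketch of the simultaneous evaluate/design loop described above.
# The single "threads" constraint and the market data are illustrative only.

def co_refine(design_constraints, market, max_rounds=3):
    """Alternate evaluation and design revision until some product fits."""
    for round_number in range(1, max_rounds + 1):
        # Evaluation step: which products satisfy the current design?
        fits = [p for p in market
                if p["threads"] == design_constraints["threads"]]
        if fits:
            return round_number, fits[0]["name"]
        # Design step: no product fits, so revise the design to
        # capitalize on what the marketplace actually offers.
        design_constraints["threads"] = market[0]["threads"]
    return None

market = [{"name": "OSKitX", "threads": "kernel"}]
print(co_refine({"threads": "user"}, market))  # (2, 'OSKitX')
```

The design began by assuming user-level threads, found no conforming product, adopted the marketplace's kernel-level model, and converged in the second round. Real projects iterate over many constraints at once, but the interleaving of the two activities is the same.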
The effect of a COTS approach on software system development is not simply found in the system’s cost. Savings may well result, but we should not forget Newton’s rule about actions and opposite reactions. Realizing large savings from the COTS marketplace also means that we agree to be bound by the marketplace’s realities. For those of us who specify and design systems, yet are also choosing to purchase the parts of those systems as COTS components, the marketplace will never provide exactly what we want, so we must make do with what we can find. System design and product evaluation jointly contribute to how we structure our systems. As I noted in the first paragraph, this fact is often unexpected. But it cannot be ignored.
My next column will continue with this topic and will provide some detailed examples of how product evaluation and system design can interact. Stay tuned.
David Carney is a member of the technical staff in the Dynamic Systems Program at the SEI. Before coming to the SEI, he was on the staff of the Institute for Defense Analyses in Alexandria, Va., where he worked with the Software Technology for Adaptable, Reliable Systems program and with the NATO Special Working Group on Ada Programming Support Environments. Before that, he was employed at Intermetrics, Inc., where he worked on the Ada Integrated Environment project.
The views expressed in this article are the author's only and do not represent directly or imply any official position or view of the Software Engineering Institute or Carnegie Mellon University. This article is intended to stimulate further discussion about this topic.