
A Framework for Software Product Line Practice, Version 5.0


Architecture Evaluation

"Marry your architecture in haste, and you can repent in leisure." So admonished Barry Boehm in a 2000 lecture [Boehm 2000a]. The architecture of a system represents a coherent set of the earliest design decisions, which are the most difficult to change and the most critical to get right. The architecture is the first design artifact that addresses the quality goals of the system such as security, reliability, usability, modifiability, and real-time performance. The architecture describes the system structure and serves as a common communication vehicle among the system stakeholders: developers, managers, maintainers, users, customers, testers, marketers, and anyone else who has a vested interest in the development or use of the system.

With the advent of repeatable, cost-effective architecture evaluation methods, it is now feasible to make architecture evaluation a standard part of the development cycle. And because so much rides on the architecture and it is available early in the life cycle, it makes utmost sense to evaluate the architecture early, when there is still time for mid-course correction. In any nontrivial project, there are competing requirements and corresponding architectural decisions that must be made to resolve them. It is best to air and evaluate those decisions and then document the basis for making them before the decisions are cast into code. Architecture evaluation is a form of artifact validation, just as software testing is a form of code validation. In the "Testing" practice area, we discuss the validation of artifacts in general, including design models such as the architecture, but the architecture for the product line is so foundational that we give its validation its own special practice area.

The evaluation can be done at a variety of stages during the design process; for example, when the architecture is still on the drawing board and candidate structures are being weighed. The evaluation can also be done later, after preliminary architectural decisions have been made but before detailed design has begun. The evaluation can even be done after the entire system has been built (such as in the case of a reengineering or mining operation). The outputs will depend on the stage at which the evaluation is performed. Enough design decisions must have been made so that the achievement of the requirements and quality attribute goals can be analyzed. The more architectural decisions that have been made, the more precise the evaluation can be. On the other hand, the more decisions that have been made, the more difficult it is to change any one of them.

An organization's business goals for a system lead to particular behavioral requirements and quality attribute goals. The architecture is evaluated with respect to those requirements and goals. Therefore, before an evaluation can proceed, the behavioral and quality attribute goals against which an architecture is to be evaluated must be made explicit. These quality attribute goals support the business goals. For example, if a business goal is that the system should be long-lived, modifiability becomes an important quality attribute goal.

Quality attribute goals, by themselves, are not definitive enough for either design or evaluation; they must be made more concrete. Using modifiability as an example, if a product line can be adapted easily to have different user interfaces but is dependent on a particular operating system, is it modifiable? The answer is "yes" with respect to the user interface but "no" with respect to porting to a new operating system. Whether this architecture is suitably modifiable depends on what modifications to the product line are expected over its lifetime. That is, the abstract quality goal of modifiability must be made concrete: modifiable with respect to what kinds of changes, exactly? The same is true for other attributes. The evaluation method that you use must include a way to concretize the quality and behavioral goals for the architecture being evaluated.
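
To see what that concretizing step produces, consider a minimal sketch (ours, not prescribed by any particular evaluation method) that records a modifiability goal as structured change scenarios in the common stimulus/response form. The field names and example values are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class ChangeScenario:
        """A concrete modifiability scenario: modifiable with respect to what?"""
        stimulus: str          # the change being proposed
        artifact: str          # the part of the architecture it touches
        response: str          # what must be true after the change
        response_measure: str  # how success is judged

    # "Modifiable" by itself is untestable; these two scenarios are not.
    ui_swap = ChangeScenario(
        stimulus="Replace the user-interface toolkit for one product",
        artifact="Presentation layer",
        response="No changes ripple outside the presentation layer",
        response_measure="Completed by two developers in under one month",
    )
    os_port = ChangeScenario(
        stimulus="Port the product line to a new operating system",
        artifact="Platform-abstraction layer",
        response="Changes confined to the platform-abstraction layer",
        response_measure="No product-specific code is modified",
    )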

Aspects Peculiar to Product Lines

In a product line, architecture assumes a dual role. There is the architecture for the product line as a whole, and there are architectures for each of the products. The latter are produced from the former by exercising the built-in variation mechanisms according to the production plan. All the architectures should be evaluated. The product line architecture should be evaluated for its robustness and generality to make sure it can serve as the basis for products in the product line's envisioned scope. Product architectures should be evaluated to make sure they meet the specific behavioral and quality requirements of the product at hand. In practice, the extent to which product architectures require separate evaluations depends on the extent to which they differ from the product line architecture and on the degree of automation used in creating them.
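
The dual role can be made concrete with a deliberately simplified sketch. Assume a product line architecture that exposes named variation points; a product architecture is then derived by binding each variation point to an allowed variant. All names here are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class VariationPoint:
        name: str
        allowed_variants: frozenset[str]  # what the product line architecture permits

    @dataclass
    class ProductLineArchitecture:
        variation_points: dict[str, VariationPoint]

        def derive_product(self, bindings: dict[str, str]) -> dict[str, str]:
            """Exercise the built-in variation mechanisms to derive a product architecture."""
            for vp_name, variant in bindings.items():
                vp = self.variation_points[vp_name]
                if variant not in vp.allowed_variants:
                    raise ValueError(f"{variant!r} is outside the scope of {vp_name!r}")
            return dict(bindings)  # trivially, one chosen variant per variation point

    pla = ProductLineArchitecture({
        "database": VariationPoint("database", frozenset({"embedded", "client-server"})),
        "ui":       VariationPoint("ui",       frozenset({"web", "desktop"})),
    })
    home_product       = pla.derive_product({"database": "embedded",      "ui": "desktop"})
    enterprise_product = pla.derive_product({"database": "client-server", "ui": "web"})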

Often, some of the hardware and other performance-affecting factors for a product line architecture are unknown to begin with. Thus, the evaluation of the product line architecture must establish bounds on the performance that the architecture is able to achieve, assuming bounds on hardware and other variables. Evaluating the product line architecture identifies potential contention problems and helps put in place the policies and strategies to resolve them. The evaluation of a particular instance of the product line architecture can verify whether the hardware and performance decisions that have been made are compatible with the goals of that instance.

Application to Core Asset Development

Clearly, an evaluation should be applied to the core asset that is the product line architecture.

Some of the business goals (against which the product line architecture will be evaluated) will be related to the fact that the architecture is for a product line. For example, the architecture will almost certainly have built-in variation points that can be exercised to derive specific products having different attributes. The evaluation will have to focus on the variation points to make sure they are appropriate, offer sufficient flexibility to cover the product line's intended scope, can be exercised in a way that lets products be built quickly to support the product line's production constraints (see Core Asset Development), and do not impose unacceptable runtime performance costs. Also, different products in the product line may have different quality attribute requirements, and the architecture will have to be evaluated for its ability to provide all the required combinations.
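
The "sufficient flexibility to cover the intended scope" check can be made mechanical, as in the sketch below: list the products in the envisioned scope and verify that each one is expressible in terms of the architecture's variation points. The variation points and scope entries are invented examples, not an artifact that any evaluation method prescribes.

    # Allowed variants per variation point, taken from the product line architecture.
    variation_points = {
        "database": {"embedded", "client-server"},
        "ui":       {"web", "desktop"},
    }

    # The envisioned scope: one set of variation-point bindings per planned product.
    envisioned_scope = {
        "home":       {"database": "embedded",      "ui": "desktop"},
        "enterprise": {"database": "client-server", "ui": "web"},
        "mobile":     {"database": "embedded",      "ui": "handheld"},
    }

    for product, bindings in envisioned_scope.items():
        gaps = [f"{vp}={variant}"
                for vp, variant in bindings.items()
                if variant not in variation_points.get(vp, set())]
        if gaps:
            # This is exactly the finding an evaluation should surface: the
            # variation points do not offer enough flexibility for the scope.
            print(f"Scope gap for {product!r}: {', '.join(gaps)}")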

As the requirements, business goals, and architecture all evolve over time, periodic (although not frequent) reevaluations should be performed to discover whether the architecture and business goals are still well matched. Those reevaluations may be shortened by focusing on the important differences between the business goals of the product and those of the overall product line. Such reevaluations should reveal the important architectural differences to examine. Some evaluation methods produce a report that summarizes the articulated, prioritized quality attribute goals for the architecture and how the architecture satisfies them. Such a report makes an excellent rationale record, which can then accompany the architecture throughout its evolution as a core asset in its own right.

An architecture evaluation can also be performed on components that are candidates to be acquired as core assets, as well as on components developed in-house. In either case, the evaluation proceeds with the help of technical personnel from the organization that developed the software. An architecture evaluation is not possible for "black-box" architecture acquisitions where the architecture is not visible. The quality attribute goals to be used for the evaluation will include how well the potential acquisition will (1) support the quality goals for the product line and (2) evolve over time to support the intended evolution of the products in the product line.

Because product architectures are variations of the product line architecture, the product architecture evaluation is similarly a variation of the product line architecture evaluation. The artifacts produced during both product line architecture and product architecture evaluations (such as scenarios, checklists, and so on) will certainly have reuse potential and may become core assets by themselves.

Application to Product Development

An architecture evaluation should be performed on an instance or variation of the architecture that will be used to build one or more of the products in the product line. The extent to which it is a separate, dedicated evaluation depends on the extent to which the product architecture differs in quality-attribute-affecting ways from the product line architecture or on how much the instantiation process can be trusted to produce a product architecture with the required quality attributes. If the differences are minor or exercising the variation mechanisms will most likely produce the expected results, these product architecture evaluations can be abbreviated. The results of architecture evaluation for product architectures often provide useful feedback to the architect(s) of the product line architecture and fuel improvements in it.

Finally, when a new product is proposed that falls outside the scope of the original product line (for which the architecture was presumably evaluated), the product line architecture can be reevaluated to see if it will suffice for this new product. If it will, the product line's scope is expanded to include the new product. If it will not, the evaluation can be used to determine how the architecture would have to be modified to accommodate the new product.

Example Practices

Several different architecture evaluation techniques exist and can be modified to serve in a product line context. Techniques can be categorized broadly as either questioning techniques (those using questionnaires, checklists, scenarios, and the like as the basis for architectural investigation) or measuring techniques (such as simulation or experimentation with a running system) [Abowd 1996a]. Well-versed architects should have a spectrum of techniques in their evaluation kit. For full-fledged architectures, software performance engineering or a method such as the SEI Architecture Tradeoff Analysis Method (ATAM) or the SEI Software Architecture Analysis Method (SAAM) is indispensable. For less complete designs, a technique such as SEI Active Reviews for Intermediate Designs (ARID) is handy. For validating architectural (and other design) specifications, active design reviews (ADRs) are helpful. The article "The Bibliography of Software Architecture Analysis" published in Software Engineering Notes provides more alternatives [Zhao 1999a].

ATAM: The ATAM is a scenario-based architecture evaluation method that focuses on a system's quality goals. The input to the ATAM consists of an architecture, the business goals of a system or product line, and the perspectives of stakeholders involved with that system or product line. The ATAM focuses on an understanding of the architectural approach that is used to achieve particular quality goals and the implications of that approach. The method uses stakeholder perspectives to derive a collection of scenarios that give specific instances for usage, performance requirements, various types of failures, possible threats, and a set of likely modifications. The scenarios help the evaluators understand the inherent architectural risks, sensitivity points to particular quality attributes, and tradeoffs among quality attributes.
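
The bookkeeping that connects scenarios to findings can be recorded in a structure like the sketch below. The ATAM prescribes the concepts (scenarios, risks, sensitivity points, tradeoff points) but not this particular representation, which is our own illustration with invented example values.

    from dataclasses import dataclass, field

    @dataclass
    class AtamScenario:
        description: str
        quality_attribute: str
        priority: int                                  # stakeholder-assigned importance
        risks: list[str] = field(default_factory=list)
        sensitivity_points: list[str] = field(default_factory=list)
        tradeoff_points: list[str] = field(default_factory=list)

    failover = AtamScenario(
        description="Primary server fails during peak load; clients must fail over",
        quality_attribute="availability",
        priority=1,
        risks=["The shared session store is a single point of failure"],
        sensitivity_points=["Heartbeat interval controls failure-detection latency"],
        tradeoff_points=["Shorter heartbeats improve availability but add network load"],
    )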

Of particular interest to ATAM-based evaluations of product line architectures are the sensitivity points to extensibility (or variation) and the tradeoffs of extensibility with other quality attribute goals (usually real-time performance, security, and reliability).

The output of an ATAM evaluation includes the prioritized scenarios, the architectural approaches identified during the analysis, and the risks, sensitivity points, and quality attribute tradeoffs that those scenarios uncover.

The ATAM can be used to evaluate both product line and product architectures at various stages of development (conceptual, before code, during development, or after deployment). An ATAM evaluation usually requires three full days plus some preparation and preliminary investigation time. The ATAM is described in detail by Clements, Kazman, and Klein [Clements 2001a] and on the SEI's Software Architecture Technology (SAT) Web site [SEI 2007b].

SPE: Software performance engineering (SPE) is a method for making sure that a design will allow a system to meet its performance goals before it has been built. SPE involves articulating the specific performance goals, building coarse-grained models to get early ideas about whether the design is problematic, and refining those models along well-defined lines as more information becomes available. Conceptually, SPE resembles the ATAM, but the singular quality attribute of interest is performance. Smith wrote the definitive resource for SPE [Smith 1990a] and, along with Woodside, its concise method description [Gelenbe 1999a].
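
As a taste of what a coarse-grained SPE model looks like, the sketch below estimates utilization and mean response time for a single software resource using the textbook M/M/1 queueing approximation. This is a first-cut model in the SPE spirit rather than the method itself, and the workload numbers are invented.

    def mm1_response_time(arrival_rate: float, service_time: float) -> float:
        """Mean response time of a single queueing resource (M/M/1 approximation)."""
        utilization = arrival_rate * service_time
        if utilization >= 1.0:
            raise ValueError(f"Resource is saturated (utilization = {utilization:.0%})")
        return service_time / (1.0 - utilization)

    # Invented first-cut workload: 30 requests/s, 20 ms of service demand each.
    arrival_rate = 30.0   # requests per second
    service_time = 0.020  # seconds of service demand per request

    r = mm1_response_time(arrival_rate, service_time)
    print(f"Utilization: {arrival_rate * service_time:.0%}, "
          f"mean response time: {r * 1000:.0f} ms")
    # Against a 100-ms goal, this coarse model shows headroom (50 ms); refine it
    # along well-defined lines as more design information becomes available.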

ARID: ARID is a hybrid design review method that combines the active design review philosophy of ADRs with the scenario-based analysis of the ATAM and SAAM [Clements 2000a]. ARID was created to evaluate partial (for example, subsystem) designs in their early or conceptual phases, before they are fully documented. While such designs are architectural in nature, they are not complete architectures. ARID works by assembling stakeholders for the design, having them adopt a set of scenarios that express a set of meaningful ways in which they would want to use the design, and then having them write code or pseudocode that uses the design to carry out each scenario. This process wrings out any conceptual flaws early and familiarizes stakeholders with the design before it is fully documented. An ARID exercise takes one to two days.
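
For flavor, here is the kind of artifact an ARID exercise yields: a reviewer's short snippet that walks one scenario through a proposed interface. The Log interface below is a hypothetical design under review, not an existing library; the point is that writing the snippet forces the question of whether the design actually supports the scenario.

    # Reviewer exercise for the scenario: "record a failed login, then
    # retrieve the last hour of security events."

    class Log:  # stub of the proposed subsystem interface under review
        def append(self, category: str, message: str) -> None: ...
        def query(self, category: str, since_seconds: float) -> list[str]: ...

    def carry_out_scenario(log: Log) -> list[str]:
        log.append("security", "failed login for user 'admin'")
        # Writing this call exposed a question the draft design never answers:
        # is since_seconds relative to now, or an absolute timestamp?
        return log.query("security", since_seconds=3600)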

ADRs: An ADR [Parnas 2001a] is a technique that can be used to evaluate an architecture still under construction. ADRs are particularly well-suited for evaluating the designs of single components or small groups of components before the entire architecture has been solidified. The principle behind ADRs is that stakeholders are engaged to review the documentation that describes the interface facilities provided by a component and then are asked to complete exercises that compel them to actually use that documentation. For example, each reviewer may be asked to write a short code segment that performs some useful task using the component's interface facilities, or each reviewer may be asked to verify that essential information about each interface operation is present and well specified. ADRs are contrasted with unstructured reviews in which people are asked to read a document, attend a long meeting, and comment on whatever they wish. In an ADR, there is no meeting; reviewers are debriefed (or walked through their assignments) individually or in small informal groups. The key is to avoid asking questions to which a reviewer can blithely and without much thought answer "yes" or "no." An ADR for a medium-sized component usually takes a full day from each of six or so reviewers, who can work in parallel. The debriefing takes about an hour for each session.
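
A small illustration of such an assignment, using an invented SensorBuffer component: rather than being asked whether the interface is well specified, the reviewer must write a segment that uses it, and any missing information stops the exercise cold.

    # Active review assignment: "Using only the interface documentation, write
    # a segment that stores a reading and handles a full buffer."

    class SensorBuffer:  # stub of the component interface as documented
        def put(self, value: float) -> None: ...  # on a full buffer: raises? blocks? drops?
        def is_full(self) -> bool: ...

    def store_reading(buffer: SensorBuffer, value: float) -> None:
        if buffer.is_full():
            # The reviewer cannot complete this branch: the documentation never
            # says what put() does on a full buffer. That gap is precisely the
            # finding an active design review is built to force out.
            raise NotImplementedError("spec is silent on full-buffer behavior")
        buffer.put(value)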

Practice Risks

The major risk associated with this practice is failing to perform an effective architecture evaluation, one that prevents unsuitable architectures from polluting a software product line effort. Architecture evaluation is the safety valve for product line architectures, and an ineffective evaluation will lead to the same consequences as an unsuitable architecture. (For a list of these consequences, see the "Architecture Definition" practice area.)

An ineffective evaluation can result from several causes, such as failing to make the quality attribute goals concrete, leaving key stakeholders out of the process, or evaluating too late in the life cycle for mid-course correction.

Further Reading

[Clements 2001a]
Clements, Kazman, and Klein wrote a primer on software architecture evaluation that contains a detailed process model and practical guidance for applying the ATAM. Other methods, including ARID, are also covered and compared with it.

[Del Rosso 2006a]
Del Rosso describes Nokia's experience with evaluating architectures for product lines using a variety of evaluation methods.

[Parnas 2001a]
The work of Parnas and Weiss, in which they describe ADRs, remains the most comprehensive source of information on this approach.

[SEI 2007b]
The SEI's SAT Web page provides publications about the ATAM and the SAAM, as well as other software architecture topics.

[Smith 1990a]
Smith's work remains the definitive treatment of performance engineering.

[Smith 2001a]
The work of Smith and Williams is a good accompaniment to, but not a substitute for, Smith's solo work [Smith 1990a].

[Zhao 1999a]
Zhao compiled a bibliography on software architecture analysis. His Web site, where the list is kept up-to-date, is cited on the SEI's SAT Web page [SEI 2007b].
