NEWS AT SEI
This library item is related to the following area(s) of work: Software Architecture
This article was originally published in News at SEI on: September 1, 2002
In previous columns, I described initial experiences applying Quality Attribute Workshops (QAWs) to evaluate the implications of system-design decisions. This column provides an update on the development of the method and provides lessons learned from applying the QAW method in four different U.S. government acquisition programs. Most of these lessons were integrated into the method incrementally, as described in a recent SEI technical report.
QAWs provide a method for analyzing a system’s architecture against a number of critical quality attributes, such as availability, performance, security, interoperability, and modifiability, that are derived from mission or business goals. The QAW does not assume the existence of a software architecture. It was developed to complement the Architecture Tradeoff Analysis Method (ATAM) in response to customer requests for a method to identify important quality attributes and clarify system requirements before there is a software architecture to which the ATAM could be applied. The QAW analysis is conducted by applying a set of test cases to a system architecture, where the test cases include questions and concerns elicited from stakeholders associated with the system. In this column, I describe the activities in the QAW method, how it has been adapted to specific customer needs, and several lessons learned during the evolution of the process.
The QAW process, shown in Figure 1, can be organized into four distinct groups of activities: (1) scenario generation, prioritization, and refinement; (2) test case development; (3) analysis of test cases against the architecture; and (4) presentation of the results. The first and last groups of activities take place in facilitated one-day meetings. The middle activities are undertaken independently by those developing or analyzing the test cases and may involve experimentation that continues over an extended period of time.
Figure 1: The QAW Process
The process is iterative in that the test-case architecture analyses might lead to the development of additional test cases or to architectural modifications. Architectural modifications might prompt additional test-case analyses, and so forth.
The first activity in the QAW process is to generate, prioritize, and refine scenarios. In the QAW, a scenario is a statement about some anticipated or potential use or behavior of the system (see sidebar 1). Scenarios are generated in a brainstorming, round-table session and capture stakeholders’ concerns about how the system will do its job. Only a small number of scenarios can be refined during a one-day meeting, so stakeholders must prioritize the scenarios generated previously by using a voting process. Next, the stakeholders refine the top three or four scenarios to provide a better understanding of their context and detail (see sidebar 2). The result of this meeting is a prioritized list of scenarios and the refined description of the top three or four scenarios on that list.
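The voting step described above can be illustrated with a small sketch. The article does not prescribe a particular voting scheme, so the one assumed here (each stakeholder distributes a fixed number of votes across scenarios, possibly putting several votes on one) is an illustrative convention, not the QAW's defined procedure:

```python
from collections import Counter

def prioritize(scenarios, ballots, top_n=4):
    """Tally stakeholder votes and return the top-N scenarios to refine.

    ballots: one list of scenario indices per stakeholder; an index may
    appear more than once in a ballot (several votes on one scenario).
    """
    tally = Counter()
    for ballot in ballots:
        tally.update(ballot)
    # Rank scenarios by vote count, highest first (stable sort keeps
    # the original order among ties).
    ranked = sorted(range(len(scenarios)), key=lambda i: -tally[i])
    return [(scenarios[i], tally[i]) for i in ranked[:top_n]]

scenarios = [
    "A server fails during peak load; operators restore service",
    "A new sensor type is integrated without source-code changes",
    "An operator misconfigures access control on a shared resource",
]
# Three stakeholders, three votes each (hypothetical ballots).
ballots = [[0, 0, 2], [1, 0, 2], [2, 2, 1]]
print(prioritize(scenarios, ballots, top_n=2))
```

The output is the prioritized short list that the stakeholders would then refine in the remainder of the meeting.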
The next activity in the QAW process is to transform each refined scenario from a statement and list of organizations, participants, quality attributes, and questions into a well-documented test case. The test cases may add assumptions and clarifications to the context, add or rephrase questions, group the questions by topic, and so forth (see sidebar 3). Responsibility for developing the test cases depends on how the method is applied and on who carries out the task (e.g., the sponsor/acquirer or the development team).
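The shape of such a test case can be sketched as a small record type. The field names below are illustrative assumptions only; the sidebars referenced in this article define the actual format:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One QAW test case derived from a refined scenario.
    Field names are illustrative, not the QAW's defined format."""
    scenario: str                    # the refined scenario statement
    context: str                     # assumptions and clarifications
    quality_attributes: list[str]    # e.g., availability, performance
    questions: dict[str, list[str]]  # questions grouped by topic

tc = TestCase(
    scenario="A server fails during peak load; operators restore service",
    context="Assume a geographically distributed deployment.",
    quality_attributes=["availability", "performance"],
    questions={
        "availability": ["What is the availability of this capability?"],
        "performance": ["How long until service is restored?"],
    },
)
print(tc.quality_attributes)
```

Keeping the questions grouped by topic mirrors the grouping step described above and makes it straightforward to document the analysis results question by question.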
The test-case architecture analysis is intended to clarify or confirm specific quality attribute requirements and might identify concerns that would drive the development of the software architecture. Some of the test cases could later be used as “seed scenarios” in an ATAM evaluation (e.g., to check if a concern identified during the test-case analysis was addressed by the software architecture). The results of analyzing a test case should be documented with specific architectural decisions, quality attribute requirements, and rationale (see sidebar 4).
The results presentation is the final activity in the QAW process. It is a one- or two-day meeting attended by facilitators, stakeholders, and the architecture team. It provides an opportunity for the architecture team to present the results of its analysis and to demonstrate that the proposed architecture can handle the test cases correctly.
The application of the method can be tailored to the needs of a specific acquisition strategy and might include incorporating specific documents or sections of documents into the request for proposals (RFP) or contract.
In one application, the QAW method was used in a pre-competitive phase for a large system. The stakeholders included laboratories and facilities with different missions and requirements. An architecture team (with members from various facilities) was building the architecture for a shared communications system before awarding a contract to a developer, and it tailored the QAW process as follows:
Figure 2 illustrates a common acquisition strategy. Starting with an initial request for proposals, the acquisition organization evaluates proposals from multiple contractors and chooses one to develop the system.
Figure 2: Common Acquisition Strategy
In one application of the QAW method, the QAW activities took place during the competitive selection, and were customized as follows:
Figure 3 illustrates a “rolling down select,” a different acquisition strategy. Starting with an initial request for proposals, the acquisition organization awards contracts to a small number of contractors to conduct a “competitive fly-off.” In this phase, the contractors work on a part of the system, still competing for award of the complete contract. At the end of the phase, the contractors submit updated technical proposals, including additional details, and the acquirer makes a final “down select,” or selection of one of the competing contractors.
Figure 3: Acquisition Strategy Using Competitive Fly-Off
In this application, the QAW method was used during the Competitive Fly-Off phase (with three industry teams competing) of the acquisition of a large-scale Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) system. In this case, the QAW process was customized as follows:
The scenario-generation meeting is a useful communication forum to familiarize stakeholders with the activities and requirements of other stakeholders. In several cases, the developers were unaware of requirements brought up by those with responsibility for maintenance, operations, or acquisition. In one case, potential critics of the project became advocates by virtue of seeing their concerns addressed through the QAW process. We also learned that the facilitation team has to be flexible and adapt to the needs of the customer, as the following observations indicate:
Building the test cases from the refined scenarios takes time and effort.
In one case, the QAW facilitators did not extract sufficient information during the refinement session to build the test cases, and the facilitators had to organize additional meetings with domain experts to better define the context and quality-attribute questions. An unintended consequence was that the resulting test-case context was far more detailed than if it had been generated during the scenario-refinement session. As a result, only portions of the larger test-case context were relevant to the test-case questions. We learned that having an extremely detailed test-case context is not worthwhile. It takes too long to develop, may be hard to understand, and does not lead to focused questions. A test-case context should run no more than a few sentences.
Since the software architecture is not yet in place, the questions and expected responses should not force design decisions on the development team. Hence, the questions must be quite general, and the expected responses may suggest architectural representations (for example, “what is the availability of this capability?”) but not design solutions (for example, “use triple modular redundancy for high availability”).
The test-case architecture analysis might reveal flaws in the architectures and cause the architecture team to change the design. The test cases generated by the QAW process often extend the existing system requirements.
In one case, the new requirements seemed to challenge the requirements-elicitation effort and raised concerns among the architecture team. A typical comment was, “The system wasn’t meant to do that.” Some judgment must be made as to which test cases can be handled and at which phase of system deployment. While this can lead to extended arguments within the team, it is a useful exercise, since these concerns must be resolved eventually.
In another case, the stakeholders were concerned because the process only analyzed a few test cases out of a large collection of scenarios. They wanted to know what was to be done with the remaining scenarios. This issue should be resolved before the scenario-generation meeting. One approach is to analyze the architecture incrementally against an ever-expanding set of test cases and, if necessary, adjust the architecture in each increment. However, this approach is constrained by budgets, expert availability, and participants’ schedules.
As in the scenario-generation meeting, participants are given a handbook before the meeting. The handbook includes the test cases and provides a test-case analysis example so that participants know what to expect at the meeting. In some applications of the QAW, we have conducted the results presentation in two phases: first as a rehearsal and then as a full-scale presentation. The following observations are derived from conducting a number of QAWs:
The process for conducting QAWs is solidifying as we continue to hold them with additional customers, in different application domains, and at different levels of detail. The approach looks promising. The concept of checking for flaws in the requirements before committing to development should reduce rework in building the system.
 Barbacci, M. et al. Quality Attribute Workshops, 2nd Edition (CMU/SEI-2002-TR-019). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2002.
 Bergey, J. & Wood, W. Use of Quality Attribute Workshops (QAWs) in Source Selection for a DoD System Acquisition: A Case Study (CMU/SEI-2002-TN-013). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2002.
Mario Barbacci is a Senior Member of the staff at the Software Engineering Institute (SEI) at Carnegie Mellon University. He was one of the founders of the SEI, where he has served in several technical and managerial positions, including Project Leader (Distributed Systems), Program Director (Real-time Distributed Systems, Product Attribute Engineering), and Associate Director (Technology Exploration Department). Prior to joining the SEI, he was a member of the faculty in the School of Computer Science at Carnegie Mellon University.
His current research interests are in the areas of software architecture and distributed systems. He has written numerous books, articles, and technical reports and has contributed to books and encyclopedias on subjects of technical interest.
Barbacci is a Fellow of the Institute of Electrical and Electronic Engineers (IEEE) and the IEEE Computer Society, a member of the Association for Computing Machinery (ACM), and a member of Sigma Xi. He was the founding chairman of the International Federation for Information Processing (IFIP) Working Group 10.2 (Computer Descriptions and Tools) and has served as chair of the Joint IEEE Computer Society/ACM Steering Committee for the Establishment of Software Engineering as a Profession (1993-1995), President of the IEEE Computer Society (1996), and IEEE Division V Director (1998-1999).
Barbacci is the recipient of several IEEE Computer Society Outstanding Contribution Certificates, the ACM Recognition of Service Award, and the IFIP Silver Core Award. He received bachelor’s and engineer’s degrees in electrical engineering from the Universidad Nacional de Ingenieria, Lima, Peru, and a doctorate in computer science from Carnegie Mellon.
The views expressed in this article are the author's only and do not represent directly or imply any official position or view of the Software Engineering Institute or Carnegie Mellon University. This article is intended to stimulate further discussion about this topic.