NEWS AT SEI
This article was originally published in News at SEI on: June 1, 2000
In large software systems, the achievement of qualities such as performance, security, and modifiability is dependent not only on code-level practices but also on the overall software architecture. Thus, it is in developers' best interests to determine, at the time a system's software architecture is specified, whether the system will have the desired qualities.
With the sponsorship of the U.S. Coast Guard's Deepwater Acquisition Project, the SEI is testing the concept of a "Quality Attribute Workshop" in which system stakeholders focus on the discussion and evaluation of system requirements and quality attributes. The goal of the Deepwater Project is to create a system of systems, using commercial and military technologies and innovation to develop a completely integrated, multi-mission, and highly flexible system of assets (including cutters, patrol boats, and short-, medium-, and long-range aircraft) at the lowest total ownership cost. The project is the largest and most comprehensive recapitalization effort in Coast Guard history.
The purpose of a Quality Attribute Workshop is to identify scenarios from the points of view of a diverse group of stakeholders and to identify risks and possible mitigation strategies. Scenarios are used to "exercise" the architecture against both current and future situations.
The stakeholders (including architects, developers, users, maintainers, and others) generate, prioritize, and analyze the scenarios, identifying tradeoffs and risks from their own points of view, which depend on their roles in the development of the system and their expertise in specific quality attributes. Together, the scenarios, risks, and mitigation strategies serve as input to the architecture developers.
Figure 1 illustrates the Quality Attribute Roadmap, the process used during the workshops to discover and document quality attribute risks and tradeoffs in the architecture.
During the workshop, participants engage in several activities aimed at generating the workshop's outputs and products.
Various types of questions are used to collect and analyze information about current and future system drivers and architectural solutions. These questions fall into three categories:
Screening questions are used to quickly narrow or focus the scope of the evaluation. They identify what is important to the stakeholders. Screening questions are qualitative; the answers are not necessarily precise or quantifiable. The emphasis is on expediency. If the quality attribute of concern were security, an example screening question might be: "What are the trusted entities in the system, and how do they communicate?"
Elicitation questions are used to gather information to be analyzed later. They identify how a quality attribute or a service is achieved by the system. Elicitation questions collect information about decisions made; the emphasis is on extracting quantifiable data. Elicitation questions for security might be: "What sensitive information must be protected? What approach is used to protect that data?"
Analysis questions are used to conduct analysis using attribute models and information collected by elicitation questions. Analysis questions refine the information gathered by elicitation. An analysis question for security might be: "Which essential services could be significantly affected by an attack?"