Identifying Quality Attributes

NEWS AT SEI

This article was originally published in News at SEI on: June 1, 2000

In large software systems, the achievement of qualities such as performance, security, and modifiability is dependent not only on code-level practices but also on the overall software architecture. Thus, it is in developers' best interests to determine, at the time a system's software architecture is specified, whether the system will have the desired qualities.

With the sponsorship of the U.S. Coast Guard's Deepwater Acquisition Project, the SEI is testing the concept of a "Quality Attribute Workshop" in which system stakeholders focus on the discussion and evaluation of system requirements and quality attributes. The goal of the Deepwater Project is to create a system of systems, using commercial and military technologies and innovation to develop a completely integrated, multi-mission, and highly flexible system of assets (including cutters, patrol boats, and short-, medium-, and long-range aircraft) at the lowest total ownership cost. The project is the largest and most comprehensive recapitalization effort in Coast Guard history.

Workshop Overview

The purpose of a Quality Attribute Workshop is to identify scenarios from the point of view of a diverse group of stakeholders and to identify risks and possible mitigation strategies. Scenarios are used to "exercise" the architecture against current and future situations, and include the following types:

  • Use-case scenarios reflect the normal state or operation of the system. If the system has not yet been built, these scenarios describe its intended initial release.
  • Growth scenarios describe anticipated changes to the system. These can concern the execution environment (e.g., message traffic doubles) or the development environment (e.g., the message format shown on the operator console changes).
  • Exploratory scenarios describe extreme changes to the system. These changes are not necessarily anticipated or even desirable. Exploratory scenarios are used to probe the boundaries of the architecture (e.g., message traffic grows 100-fold, or the operating system is replaced).

The stakeholders, including architects, developers, users, maintainers, and others, generate, prioritize, and analyze the scenarios, and identify tradeoffs and risks from their own points of view, which depend on the role each plays in the development of the system and on their expertise in specific quality attributes. Together, the scenarios, risks, and mitigation strategies serve as input to the architecture developers.
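
To make these workshop products concrete, the following sketch (in Python) shows one way a scenario backlog and a vote-based prioritization might be recorded. It is a minimal illustration, not part of the workshop method itself; the field names, the voting scheme, and the example entries (adapted from the growth and exploratory examples above) are assumptions.

    from dataclasses import dataclass
    from enum import Enum

    class ScenarioType(Enum):
        """The three scenario types used to exercise the architecture."""
        USE_CASE = "use-case"        # normal state or operation
        GROWTH = "growth"            # anticipated change
        EXPLORATORY = "exploratory"  # extreme, boundary-probing change

    @dataclass
    class Scenario:
        """A scenario proposed by a stakeholder during brainstorming."""
        description: str
        scenario_type: ScenarioType
        quality_attribute: str   # e.g., "performance", "security", "modifiability"
        proposed_by: str         # stakeholder role, e.g., "maintainer"
        votes: int = 0           # accumulated during prioritization

    def prioritize(scenarios, top_n=5):
        """Rank scenarios by stakeholder votes and keep the high-priority subset."""
        return sorted(scenarios, key=lambda s: s.votes, reverse=True)[:top_n]

    # Two hypothetical entries adapted from the examples above.
    backlog = [
        Scenario("Message traffic doubles", ScenarioType.GROWTH,
                 "performance", "architect", votes=7),
        Scenario("Message traffic grows 100-fold", ScenarioType.EXPLORATORY,
                 "performance", "developer", votes=3),
    ]
    for s in prioritize(backlog):
        print(f"[{s.scenario_type.value}] {s.description} ({s.votes} votes)")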

Quality Attribute Roadmap

Figure 1 illustrates the Quality Attribute Roadmap, the process used during the workshops to discover and document quality attribute risks and tradeoffs in the architecture.

During the workshop, participants engage in several activities aimed at generating various outputs or products:

  • Scenario generation takes place during a facilitated brainstorming process; stakeholders propose scenarios that test the ability of a candidate or conceptual architecture to achieve specific quality attributes within a specific Deepwater mission and geographic context.
  • During scenario analysis, for each of the high-priority scenarios, the stakeholders choose an appropriate architectural fragment as an artifact for analysis, and apply the scenario to the artifact. The purpose of the analysis is to identify important architecture decisions and sensitivity points. As a result of this activity, the stakeholders might decide to conduct additional, more detailed or formal analyses of the scenarios or artifacts, but these analyses take place "off line," not during the workshop.
  • During tradeoff and risk identification, the stakeholders use the results of the analysis activity to identify and document risks, i.e., potential future problems that might affect the cost, schedule, or quality attributes of the system. Various sources serve as inputs for these activities, including architecture documentation, stakeholder points of view, and architectural styles. (One plausible record format for such risks is sketched after Figure 1.)

Figure 1: Quality Attribute Roadmap
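
The output of the tradeoff and risk identification activity is a set of documented risks handed to the architecture developers. As an illustration only, the sketch below shows one plausible record format for a single risk; the field names and example values are assumptions rather than actual Deepwater findings.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        """A risk uncovered while applying a scenario to an architectural fragment."""
        scenario: str           # the high-priority scenario that exposed the risk
        artifact: str           # the architectural fragment that was analyzed
        sensitivity_point: str  # the architecture decision the attribute hinges on
        consequence: str        # potential impact on cost, schedule, or quality
        mitigation: str         # candidate mitigation strategy for the architects

    # Hypothetical example values; not actual Deepwater findings.
    risk = Risk(
        scenario="Message traffic doubles",
        artifact="messaging subsystem",
        sensitivity_point="a single message broker shared by all assets",
        consequence="latency growth could violate mission response-time targets",
        mitigation="evaluate partitioning traffic by mission or by region",
    )
    print(f"Risk at '{risk.sensitivity_point}': {risk.consequence}")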

Questions for Collecting and Analyzing Information

Various types of questions are used to collect and analyze information about current and future system drivers and architectural solutions. The types of questions fall into the following categories:

Screening questions are used to quickly narrow or focus the scope of the evaluation. They identify what is important to the stakeholders. Screening questions are qualitative; the answers are not necessarily precise or quantifiable. The emphasis is on expediency. If the quality attribute of concern were security, an example screening question might be: "What are the trusted entities in the system and how do they communicate?"

Elicitation questions are used to gather information to be analyzed later. They identify how a quality attribute or a service is achieved by the system. Elicitation questions collect information about decisions made; the emphasis is on extracting quantifiable data. Elicitation questions for security might be: "What sensitive information must be protected? What approach is used to protect that data?"

Analysis questions are used to conduct analysis with attribute models, drawing on the information collected by elicitation questions and refining it. An analysis question for security might be: "Which essential services could be significantly affected by an attack?"
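
Taken together, the three categories form a funnel from broad scoping to focused analysis. The sketch below organizes the security questions quoted above into that funnel; the data structure and evaluation order are illustrative assumptions, while the question texts come from the article itself.

    # The question texts for security are quoted from the article; the
    # dictionary layout and evaluation order are illustrative assumptions.
    questions = {
        "security": {
            "screening": [
                "What are the trusted entities in the system and how do they communicate?",
            ],
            "elicitation": [
                "What sensitive information must be protected?",
                "What approach is used to protect that data?",
            ],
            "analysis": [
                "Which essential services could be significantly affected by an attack?",
            ],
        },
    }

    # Screen first to narrow scope, then elicit quantifiable data, then analyze.
    for category in ("screening", "elicitation", "analysis"):
        for question in questions["security"][category]:
            print(f"{category}: {question}")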
