NEWS AT SEI
This article was originally published in News at SEI on: April 1, 2005
The importance of the right software architecture to a development effort has become widely recognized. This trend is probably not surprising to most readers of this column. Consequently, this might seem an odd time and place to ask why. Why is software architecture a critical software artifact?
The simple answer is that software architecture is important by definition. That is to say, software architecture was invented to be an artifact
- defined in terms of elements whose grain is coarse enough that the overall design of a relatively large system can be represented in a human-comprehensible form, thereby aiding communication about that design; and
- whose specification is detailed enough to support reasoning about the satisfaction of critical system requirements.
Based on this simple observation, we can state a core principle of software architecture.
- Principle 1: A software architecture should be defined in terms of elements that are coarse enough to allow for human intellectual control and specific enough to allow for meaningful reasoning.
Principle 1 alone, however, is not sufficient to reap the potential benefits of software architecture. Principle 1 helps us make the software architecture right. Without also understanding how to make the right software architecture, we fall short.
At the Software Engineering Institute (SEI), we have been working on software architecture-related methods and tools for more than a decade and have considerable experience in applying architecture-based design and analysis methods to real systems from a wide variety of domains. From this experience, we have come to believe that there are two other essential principles for realizing the potential benefits of software architecture:
- Principle 2: Business (and/or mission) goals determine quality-attribute requirements.
- Principle 3: Quality-attribute requirements guide the design and analysis of software architectures.
A software architecture lies at the fulcrum between a system's business (and/or mission) goals and its implementation. Principle 2 states that business goals provide the raison d’être for the system. Business goals naturally lead to quality-attribute goals, which, as stated by Principle 3, provide analytic rationale for an architecture to exhibit one type of design versus another.
Together, these three principles allow us to understand why software architecture is important, what purpose it is intended to fulfill, and whether it fulfills that purpose. These principles have been realized in several of our architecture analysis and design methods: the SAAM [Kazman 94], the QAW [Barbacci 03], the ATAM [Kazman 99], the ADD method [Clements 02], and the CBAM [Kazman 01]. More recently, however, we have begun to explore the techniques that link these methods to the principles mentioned above. This allows us to combine these techniques in new ways and create new methods for particular contexts.
The SAAM, the ATAM, and the CBAM work by explicitly identifying the business goals or context, capturing evaluation criteria as scenarios, choosing among criteria based on active stakeholder participation, and relying on the architect to explain how the architecture satisfies the criteria. These methods are effective at identifying which portion of the architecture to examine to determine whether the criteria are satisfied.
These methods use a number of important common techniques.
- The explicit elicitation of business goals. Our methods all have a step that requires a presentation of business goals. This enables external evaluators to determine the criteria with which to evaluate a system. The output of the methods interprets architectural decisions in terms of their impact on the business goals. This provides a means for communicating to management the business impact of technical decisions.
- Active stakeholder participation and communication. We have found that stakeholder concerns are not always expressed in documents and not always well understood by development teams. We include stakeholders in our methods and ensure that they participate in setting priorities among the business goals and in setting the focus of the methods. We have developed techniques, such as the “utility tree” [Clements 02], to aid in structured scenario elicitation and prioritization.
- The explicit elicitation of architecture documentation and rationale in standardized views [Clements 03]. To evaluate an architecture, it is necessary for the architecture to be unambiguously represented and clearly understood by the evaluators. Because software architecture is, as we have identified in Principle 1, defined to be at the level of granularity that enables human comprehension, we require that there be such a representation.
- The use of quality-attribute scenarios to characterize stakeholder concerns. Business goals can be expressed at different levels of abstraction. To evaluate a design, the business goals must be expressed in terms that are operational for the software architect. As we stated in Principle 2, business/mission goals determine the quality-attribute requirements. Quality-attribute requirements must be expressed clearly and unambiguously. We use a specific representation of quality-attribute scenarios called six-part scenarios to express the realization of business goals, and we use general scenarios to aid in the elicitation of these six-part scenarios [Bass 03].
- The mapping of quality-attribute scenarios onto the architecture representation to determine the aspects of the architecture on which to focus. Even though architectures are defined to be understandable, they still may represent large systems and contain much detail. The scenarios are used to focus on particular aspects of the architecture.
- The representation of design primitives, called tactics [Bachmann 03], to make the process of design more consistent and to explicitly link design operations to desired quality-attribute goals.
- The use of templates to capture information and make the methods more consistent among different evaluators. In applying our methods, we have learned that consistency in the execution of a method can be achieved only if there are templates for recording the elicited information and the analyses generated. Templates provide consistency in the gathering and reporting of information that is useful both to the evaluator and to the consumer of the evaluation.
- The explicit elicitation of costs and benefits associated with architectural decisions [Kazman 01]. To rank and make architecture-improvement decisions, it is necessary to elicit information about costs, benefits, and schedule implications of architectural decisions, since these concerns always trade off with pure quality-attribute concerns.
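To make the scenario techniques above concrete: a six-part scenario has a fixed structure — source, stimulus, artifact, environment, response, and response measure [Bass 03] — and utility-tree prioritization rates each scenario's business importance and architectural difficulty so that high/high scenarios are analyzed first. The sketch below illustrates both ideas in Python; the scenario content and ratings are invented for illustration, not taken from any actual evaluation:

```python
from dataclasses import dataclass

@dataclass
class SixPartScenario:
    """The six parts of a quality-attribute scenario [Bass 03]."""
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition arriving at the system
    artifact: str          # the part of the system that is stimulated
    environment: str       # conditions under which the stimulus arrives
    response: str          # the activity the system undertakes
    response_measure: str  # how the response is observed and measured

# Hypothetical availability scenario, for illustration only.
failover = SixPartScenario(
    source="hardware fault",
    stimulus="primary server crashes",
    artifact="order-processing service",
    environment="normal operation, peak load",
    response="traffic rerouted to a standby replica",
    response_measure="service restored within 30 seconds",
)

# Hypothetical modifiability scenario.
new_payment = SixPartScenario(
    source="developer",
    stimulus="add a new payment type",
    artifact="billing module",
    environment="design time",
    response="change localized to one module",
    response_measure="completed in under one person-week",
)

# Utility-tree-style prioritization: stakeholders rate each scenario's
# business importance and architectural difficulty as H/M/L.
RANK = {"H": 2, "M": 1, "L": 0}
ratings = [
    (failover, "H", "H"),
    (new_payment, "M", "L"),
]

# Sort so that high-importance, high-difficulty scenarios come first;
# these are the ones an evaluation focuses on.
ordered = sorted(
    ratings,
    key=lambda r: (RANK[r[1]], RANK[r[2]]),
    reverse=True,
)
```

The ratings, not the scenario prose, are what drive the methods' focus: a scenario that is highly important to the business but architecturally easy needs little analysis, while a high/high scenario is exactly where evaluator attention should go.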
These techniques are effective at helping designers and analysts find the right place to examine in an architecture to determine whether a scenario can be achieved. They provide little support, however, for determining what to examine at that location. Most architecture-based methods, ours included, have relied heavily on the expertise of the designers and evaluators when examining the relevant portions of the architecture. Expert opinion has typically been required to determine whether an architecture is satisfactory.
To address this shortcoming, we have recently added two new techniques to the list above:
- Architectures can be analyzed through the use of quality-attribute models. Some quality attributes, such as performance, have well-known analysis models. Other quality attributes, such as variability, testability, and security, have less mature models. In each case, however, we can use the quality-attribute models that exist to help us understand the design decisions made in the architecture.
- Quality-attribute models lead to a set of quality-attribute design principles. Given a particular problem identified by an analysis, there must be some method to generate alternatives for improvement. We have identified a set of design principles based on quality-attribute models, and these principles aid in identifying alternatives.
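As a small illustration of the kind of analytic quality-attribute model the first point refers to, performance analysis often draws on elementary queueing theory. The sketch below uses the standard single-server (M/M/1) response-time formula to turn an architectural question ("is this service fast enough at peak load?") into a calculation; the function name and the workload numbers are our own hypothetical example, not part of the methods described in this column:

```python
def mm1_response_time(arrival_rate: float, service_time: float) -> float:
    """Mean response time of a single-server (M/M/1) queue.

    arrival_rate: requests per second offered to the server
    service_time: seconds of work per request
    """
    utilization = arrival_rate * service_time
    if utilization >= 1.0:
        # The queue grows without bound; the architecture cannot
        # meet any finite response-time requirement here.
        raise ValueError("server is saturated: utilization >= 1")
    # Response time = service time / (1 - utilization); it rises
    # sharply as utilization approaches 1.
    return service_time / (1.0 - utilization)

# Hypothetical peak load: 50 requests/s against a 10 ms service time
# gives utilization 0.5.
peak_response = mm1_response_time(arrival_rate=50.0, service_time=0.010)
```

Such a model also illustrates the second point: if the computed response time violates a quality-attribute scenario's response measure, the model itself exposes the design alternatives, since response time can only improve by reducing the service time per request or reducing the arrival rate at that server. This is how analytic models lead to quality-attribute design principles of the kind captured as tactics [Bachmann 03].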
We are now in the process of creating a follow-on method to the ATAM that combines these techniques in new ways. Stay tuned to subsequent columns for more information.
Bachmann, Felix; Bass, Len; Klein, Mark. Deriving Architectural Tactics: A Step Toward Methodical Architectural Design (CMU/SEI-2003-TR-004). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2003.
Bass, L.; Clements, P.; Kazman, R. Software Architecture in Practice, 2nd ed. Boston, MA: Addison-Wesley, 2003.
Barbacci, Mario R.; Ellison, Robert; Lattanze, Anthony J.; Stafford, Judith A.; Weinstock, Charles B.; Wood, William G. Quality Attribute Workshops, Third Edition (CMU/SEI-2003-TR-016). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2003.
Clements, P.; Kazman, R.; Klein, M. Evaluating Software Architectures: Methods and Case Studies. Boston, MA: Addison-Wesley, 2002.
Clements, P.; Bachmann, F.; Bass, L.; Garlan, D.; Ivers, J.; Little, R.; Nord, R.; Stafford, J. Documenting Software Architectures: Views and Beyond. Boston, MA: Addison-Wesley, 2003.
Kazman, R.; Asundi, J.; Klein, M. “Quantifying the Costs and Benefits of Architectural Decisions,” Proceedings of the 23rd International Conference on Software Engineering (ICSE 23), (Toronto, Canada), May 2001, 297-306.
Kazman, R.; Barbacci, M.; Klein, M.; Carriere, S.; Woods, S. “Experience with Performing Architecture Tradeoff Analysis,” Proceedings of the 21st International Conference on Software Engineering (ICSE 21), (Los Angeles, CA), May 1999, 54-63.
Kazman, R.; Abowd, G.; Bass, L.; Webb, M. “SAAM: A Method for Analyzing the Properties of Software Architectures,” Proceedings of the 16th International Conference on Software Engineering (ICSE 16), (Sorrento, Italy), May 1994, 81-90.
About the Authors
Rick Kazman is a senior member of the technical staff at the SEI, where he is a technical lead in the Architecture Tradeoff Analysis Initiative. He is also an adjunct professor at the Universities of Waterloo and Toronto. His primary research interests within software engineering are software architecture, design tools, and software visualization. He is the author of more than 50 papers and co-author of several books, including a book recently published by Addison-Wesley titled Software Architecture in Practice. Kazman received a BA and MMath from the University of Waterloo, an MA from York University, and a PhD from Carnegie Mellon University.
Len Bass is a senior member of the technical staff at the Software Engineering Institute (SEI) and participates in the High Dependability Computing Program. He has written two award-winning books in software architecture as well as several other books and numerous papers in a wide variety of areas of computer science and software engineering. He is currently working on techniques for the methodical design of software architectures and to understand how to support usability through software architecture. He has been involved in the development of numerous production or research software systems ranging from operating systems to database management systems to automotive systems.
Mark Klein is a senior member of the technical staff of the Software Engineering Institute. He has more than 20 years of experience in research on various facets of software engineering, dependable real-time systems and numerical methods. Klein's most recent work focuses on the analysis of software architectures, architecture tradeoff analysis, attribute-driven architectural design and scheduling theory. Klein's work in real-time systems involved the development of rate monotonic analysis (RMA), the extension of the theoretical basis for RMA, and its application to realistic systems. Klein’s earliest work involved research in high-order finite element methods for solving fluid flow equations arising in oil reservoir simulation. He is the co-author of two books: A Practitioner’s Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems and Evaluating Software Architecture: Methods and Case Studies.