Architecture Competence: What Is It? How Do We Measure It?

NEWS AT SEI

This article was originally published in News at SEI on May 1, 2008.

Many experts agree that software architecture is the single most important artifact in determining the success of a software system. That’s why so many researchers have examined its technical aspects and offered tools and methodologies for making it better. What they had not investigated were the people wielding these technologies, the architects themselves, and the human and organizational factors necessary for producing sound architecture. Five SEI researchers determined that such an investigation might help define architecture competence and enable its measurement and improvement.

Paul Clements, Len Bass, Rick Kazman, Mark Klein, and John Klein, all members of the SEI Software Architecture Technology (SAT) Initiative, undertook the study. For years this team had taught the architecture-centric practices that they had defined for creating quality systems. Now they wanted to know which characteristics of the architect and the organizational environment expedite these practices. Their goals were to

  • identify the measurable factors that contribute to architecture competence in individuals and organizations
  • develop an instrument for evaluating and an approach for improving these factors

To achieve these goals, Clements, Bass, Kazman, Klein, and Klein first examined four models of performance competence, discussed below, that they could adapt to software architecting.

The Duties, Skills, and Knowledge (DSK) Model

To establish the DSK model, the investigators set out to identify the key duties, skills, and knowledge that a competent architect must possess. They gleaned their data from about 200 information sources aimed at the practicing architect, such as books and courses, as well as from thousands of job descriptions. The duties, skills, and knowledge areas encountered most frequently are listed below.

  • duty areas: architecting, other development-cycle phases, interacting with stakeholders, management, organization- and business-related duties, leadership, and team building
  • skill areas: communication, interpersonal, work, and personal
  • body of knowledge topics: basic software engineering, people, business, architecture techniques, requirements engineering, software project management, programming, platform technology, systems engineering, architecture documentation, reuse and integration, domain knowledge, and mentoring

Organization of the data according to the DSK model would provide valuable structure and material for a future assessment instrument.  

The Human Performance Technology (HPT) Model

The concept underlying the HPT model is that competent individuals produce valuable results at a reasonable cost. This model evolved from the human performance engineering work of Thomas Gilbert, who believed that competence is usually hindered by inadequate performance support at work rather than by an individual’s lack of knowledge or skill [Gilbert 1996]. The approach expresses the worth of an individual’s performance as the ratio of the value of the performance to its cost. One challenge here is determining the value of quality architecting, some of which lies in the avoidance of costly problems over the life of the system. Isolating the various duties identified in the DSK model might be a first step toward solving this problem, the team reasoned. Another challenge is determining the infrastructure required for calculating such worth. The team views these issues as important areas for further research.
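Expressed as a formula, the worth ratio takes the following form. This is our rendering, not the report’s; the symbols are illustrative, and neither Gilbert nor the SEI report prescribes exactly how value and cost should be quantified for architecting.

```latex
% Gilbert's worth ratio: the worth (W) of a performance is the value (V)
% it yields divided by the cost (C) of achieving it. In Gilbert's framing,
% a performance is "worthy" when its value exceeds its cost, i.e., W > 1.
\[
  W = \frac{V}{C}
\]
```

For architecting, much of V consists of downstream problems avoided over the life of the system, which is precisely what makes the ratio hard to compute in practice.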

The Organizational Coordination Model

The organizational coordination model concerns the sharing of information among organizational members and teams. The researchers focused on the coordination and communication activities necessitated by particular types of architecture decisions. For example, when each of several teams is developing a different module in a software system, an architecture decision that creates dependencies among modules will require increased coordination between module developers. The SEI researchers also hoped to gauge the effectiveness of mechanisms that facilitate coordination, such as shared discussion boards and the engagement of intermediaries. These mechanisms can also be used to measure coordination activity (for example, by the number of discussion-board posts required to solve a problem), as the sketch below illustrates. How well the organization’s coordination capability meets the coordination requirements imposed by an architecture reflects the organization’s architectural competence.
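To make the measurement idea concrete, here is a minimal sketch of the kind of metric the model suggests. It is ours, not the report’s; all module names, team assignments, and post counts are hypothetical. It counts the cross-team dependencies an architecture induces and averages the discussion-board posts spent resolving each one.

```python
# Hypothetical sketch: estimating the coordination demand an architecture
# places on an organization, and the coordination effort actually spent.
# All modules, teams, and post counts below are illustrative.

# Which team owns each module (assumption: one owning team per module).
owners = {"ui": "team_a", "billing": "team_b", "auth": "team_c"}

# Module dependencies implied by architecture decisions.
dependencies = [("ui", "billing"), ("ui", "auth"), ("billing", "auth")]

# Cross-team dependencies are the ones that force coordination
# between different development teams.
cross_team = [(a, b) for a, b in dependencies if owners[a] != owners[b]]

# Discussion-board posts needed to resolve issues on each dependency,
# a rough proxy for coordination activity suggested in the article.
posts = {("ui", "billing"): 14, ("ui", "auth"): 6, ("billing", "auth"): 21}

demand = len(cross_team)
effort = sum(posts[d] for d in cross_team)
print(f"cross-team dependencies: {demand}")
print(f"avg posts per cross-team dependency: {effort / demand:.1f}")
```

In this framing, an organization whose coordination capability matches the architecture’s demands would keep such metrics manageable even as the number of cross-team dependencies grows.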

The Organizational Learning Model

Finally, the organizational learning model of competence assumes that just as individuals can learn, so can organizations. This learning is evident when change in the organization occurs as a function of experience and is observable in the organization’s knowledge, practices, or performance. Competence lies in an organization’s ability to convert experience into knowledge through “mindfulness”; conducting architectural reviews, capturing lessons learned, or analyzing completed projects exemplifies such mindfulness. A learning organization strives to understand which learning processes are best suited to different types of learning and how various types of experience affect the conversion of that experience into knowledge. An architecturally competent organization performing architecture-centric practices, for example, will recognize the learning opportunities contained in those practices and conduct postmortems or comparisons with previous projects to maximize learning. Organizational learning is measurable through questionnaires and surveys.

Exploring the models proved an effective strategy for this study. Together, the four models (1) covered a continuum of observational possibilities applicable to individuals, teams, or whole organizations and (2) offered principles that examine past performance as well as present activity. These are valuable characteristics for developing useful instruments and assessing a wide spectrum of competency factors. “We found that together the four models provide strong coverage across these important dimensions, giving us confidence that they are effective choices for informing an evaluation instrument,” the team writes in the SEI technical report Models for Evaluating and Improving Architecture Competence, which discusses the study in detail.

The team has developed survey questionnaires using both a top-down and a bottom-up approach to assessment. To quote the report:

We have generated questions from a knowledge of the place of architecture in the software development and system development life cycles. For example, we know that architectures are critically influenced by quality attribute[1] requirements, so questions in the instrument must probe the extent to which the architect elicits, captures, and analyzes such requirements. In the bottom-up approach, we examine each category in the models and generate questions that address each component. This approach leads to tremendous overlap, which helps to validate and refine the questions in the instrument.

These questions map straightforwardly to the four models and lay the foundation for future competence assessment instruments. Depending on how they are administered and used, such instruments and assessments promise to serve at least three groups:

  1. Acquisition organizations should find that architecture competence assessment can help in evaluating a contractor or in choosing among competing bids. Hiring the more architecture-competent contractor typically brings fewer future problems and less rework [Boehm 2007].
  2. Service organizations should benefit from maintaining, measuring, and advertising their architecture competence to attract and retain customers. Objective assessments of their competence levels by outside organizations would strengthen their clients’ trust.
  3. Development organizations could assess, monitor, and then increase their levels of architecture competence, thus bolstering claims about their products’ quality and improving their internal productivity and predictability.

The team’s work has opened an area that is clearly poised to expand. In June, the SEI invited interested researchers from several countries to a workshop on architecture competence that resulted in a rich exchange of ideas. These concepts are being integrated into the team’s assessment work so that it best benefits organizations seeking evaluation and improvement. For more information, contact us using the link in the For More Information box at the bottom of this page.

References

[Boehm 2007]
Boehm, B., Valerdi, R., & Honour, E. “The ROI of Systems Engineering: Some Quantitative Results.” Proceedings of the Seventeenth International Symposium of the International Council on Systems Engineering (INCOSE). San Diego, CA, June 2007. INCOSE, 2007. 

[Gilbert 1996]
Gilbert, Thomas F. Human Competence: Engineering Worthy Performance. Washington, DC: International Society for Performance Improvement, 1996.

[1] Quality attributes are qualities such as modifiability, security, and performance that must be built into the system to fulfill stakeholder requirements.

For more information

Contact Us

info@sei.cmu.edu

412-268-5800
