NEWS AT SEI
This article was originally published in News at SEI on: June 1, 2000
Because of the existence of several maturity models, organizations undergoing process improvement efforts often encounter the problem of deciding which model to choose, or how to interpret differences in terminology or guidance.
The Capability Maturity Model (CMM) Integration effort is focused on eliminating these difficulties. The goal is to improve efficiency, return on investment, and effectiveness by using a single model that integrates disciplines such as systems engineering and software engineering, which are inseparable in a systems development endeavor.
The CMMI project team is a collection of government, industry, and SEI participants. Since the inception of the CMMI project, user and stakeholder feedback has played a key role in the definition and evolution of the model. In particular, since September 1999, the CMMI team has been continuously gathering feedback and improving the model in preparation for the release of version 1.0 later this year. This process of "getting to" version 1.0 was an essential part of building a robust, useful model.
In the early development of the model, the team relied on a set of stakeholders already intimately involved in the development and use of one or both of the source models from which CMMI is derived—the CMM for Software (SW-CMM) and the Systems Engineering Capability Model (SECM). This group reviewed an initial version, v0.1. By December 1999, with the help of these stakeholders, the CMMI project team released version 0.2 for public review.
For this review, the model was made available on the Web, where reviewers could download it and submit feedback. "This was a public review," says Mike Phillips, project manager of the CMMI project, "and we wanted to make sure the right people were aware of the opportunity: we sent emails to SEI stakeholders, systems engineering groups, lead assessors, transition partners, and others."
This review provided the first opportunity for the broader community to give feedback on the model. "Sometimes it's not the people who are most familiar that give you the best input," says Phillips. "The most useful input may be from the person who's struggling just to get started. So we wanted to listen to all of them." This version was made available for public review on September 1, 1999, for a 90-day period.
In addition to the 90-day public review, two other means of gathering feedback were used: First, the team members acted as observers, accompanying the assessment teams that were piloting the CMMI model, training, and assessment method. Observations made during the pilots—for example, problems with interpreting or using the model—were written up as change requests. Observers and members of the assessed organizations provided the input for these change requests.
The other method of collecting feedback was focus groups conducted in conjunction with SEI training classes such as the High Maturity Practices Workshop. Participants in these workshops were personnel from organizations rated at high maturity levels, typically CMM levels four and five. After the workshop, CMMI team members asked volunteers to participate in a focus group. As the CMMI project team piloted the Introduction to the CMMI course, attendees were encouraged to spend an additional half day to help improve the model and the training.
About 3,000 change requests were received from the public review, pilots, and focus groups. The team has spent the first half of 2000 processing these change requests, making decisions about the input received, and updating the model based on their decisions. "It's important to point out," says Phillips, "that it's not just the SEI deciding whether to accept a change request. In fact, the majority of the CMMI project team reviewing CRs—about 60-70%—are from industry and government. We developed a voting process because we weren't able to come to consensus on everything. If anybody voted against the majority, the minority opinion was heard and then a revote was taken. In some cases, the minority made their point clearly enough that they changed the decision of the group."
As in any review, conflicting requirements often meant making difficult tradeoffs. For example, a large number of reviewers indicated that they would like the model to be smaller. "Concentrating" the model, while maintaining its technical integrity, proved to be one of the team's biggest challenges. For example, "peer reviews" was a dedicated key process area in the software CMM. But because of the need to reduce the size of the model, in addition to the need to include in the model other methods for verifying the quality of work products, the team chose to put peer reviews at the goal level rather than as a process area. "That can be difficult," says Phillips, "because some organizations have said 'this is one of the most valuable process areas that you had captured, but now you seem to be giving it less emphasis.' But a goal is still at a level where a weakness there would be noted in any kind of assessment. So I believe we've captured the essence of the request. But those are the kind of difficult decisions that, to keep the model from growing out of bounds, we've had to make."
All change requests and the decisions made about them were captured in a decision log, which will be made available for individuals interested in tracking changes to the model.
The model is scheduled for release later this year. Team members encourage the organizations that will be using CMMI to continue to provide feedback on how to further refine the model to meet their need for improving how they develop software-intensive systems.