NEWS AT SEI
This article was originally published in News at SEI on: April 1, 2007
My previous column introduced a new component of the CMMI Product Suite: Understanding and Leveraging a Supplier’s CMMI Efforts: A Guidebook for Acquirers, otherwise known as the Acquisition Guidebook. This new guidebook is designed to help government acquisition organizations use the existing CMMI Product Suite to select qualified suppliers. In this month’s column, I’ll describe the work currently under way by the CMMI Team—government, industry, and SEI—to seek ideas for further improvements in the CMMI Product Suite over the next several years.
The initial launch of the CMMI Product Suite (V1.0) was based on the need to integrate three process-improvement models and their associated appraisal methods and training into one integrated package. The second release of CMMI (V1.1) was a planned but moderate update of the product suite in response to feedback from initial users. This release consisted primarily of refinements to the Standard CMMI Appraisal Method for Process Improvement (SCAMPI). SCAMPI was moved from a discovery-based method to a verification-based method to reduce the amount of time spent on site and to enable easier evaluation by external (government) sponsors either seeking or executing a contract. The recent V1.2 release was based on four years of feedback collected from CMMI users through mechanisms such as direct change requests from individuals and groups, as well as an SEI project called Interpretive Guidance, which helped guide the kinds of change made in V1.2 to better meet the needs of the broad set of users of the CMMI Product Suite.
The co-sponsors of CMMI are the Office of the Secretary of Defense for government and the National Defense Industrial Association (NDIA) for industry. These sponsoring organizations determined that the CMMI project team should begin investigating the next release of CMMI earlier than in the past. This early investigation recognizes the need for research and experimentation on alternatives for growth as the product suite becomes more and more widely used as a process-improvement tool. A series of workshops is now under way to begin the search for both incremental improvement and major innovation. The NDIA is leading these workshops, and the SEI is providing technical support (a sponsorship arrangement similar to the one used for the annual CMMI Workshop held every November in Denver, Colo.).
As of this writing, the first two workshops have been held—one in San Francisco and one in Washington, D.C. Three international workshops are planned in the summer and fall. The next will be held in Montreal, Canada, July 10 to 13. Two others are planned: one to be held in London, U.K., October 9 to 12, and the other to be held in Sydney, Australia, November 27 to 30. Additional U.S. and international workshops may be considered in 2008.
The approach of these workshops has been to encourage innovative thinking about how the CMMI Product Suite should evolve in its second decade of use. The workshop agenda is divided into two segments: two days on the model aspects of CMMI and two days on the CMMI appraisal methods. While all workshop participants are invited to attend the first segment, some experience with SCAMPI or related process-improvement methodologies is requested for attendance at the appraisal-related segment. A set of starter questions is provided to stimulate discussion at each workshop, and new questions are added that are pertinent to the specific interests of each workshop.
The discussion in each of the first two workshops brought together a mixture of government, industry, and academic perspectives. A summary of the diversity of opinion would not properly recognize the many innovative ideas that were recorded in these workshops to assist CMMI efforts over the next few years, but the following glimpse of the ideas, issues, challenges, and opportunities can give you an idea of the input we’ve received so far.
From a model perspective, the efficacy of the constellation approach remains uncertain in the minds of workshop attendees. Since none of the new constellations is fully matured, the discussions were often about how the various constellations might be used together. Many attendees were from organizations responsible for development as well as at least one of the other areas covered by the new constellations planned for the V1.2 release. Since the intent of CMMI is to avoid a proliferation of improvement models, do the constellations help or hinder that strategy? And how can organizations avoid duplicating appraisal effort when addressing a mixture of projects, some strongly flavored with one emphasis (e.g., development) while others are equally strong in their focus on another (e.g., acquisition)?
Another area that prompted extensive discussion was process performance. In a 2004 Crosstalk article, Dr. Robert Charette, Laura Dwinnell, and John McGarry wrote that a wide-ranging set of appraisals, conducted as part of the Tri-Service Assessment Initiative, found deficiencies in the process performance of more than 90% of the DoD programs examined [Charette 04]. About half of these programs had fundamental problems of process adherence that CMMI readily addresses. An additional dimension of deficiencies, however, involved program teams, often spanning multiple companies, that had not adequately tailored their organizational standard processes to the challenges of complex software-intensive development by the full team. While CMMI already encourages senior management to review the results of process-improvement activities, senior management may need to increase its focus on process effectiveness and efficiency and on the appropriate tailoring taking place in program teams.
Another theme worthy of note is the need to relate CMMI to the many other standards and methodologies that organizations often must consider for a variety of reasons. In previous columns I have written about the compatibility of CMMI with improvement approaches such as Six Sigma and development approaches such as the SEI Team Software Process (TSP) methodology. Many workshop attendees suggested ways to relate the various approaches together to capture their synergy without demanding that any particular coupling be accomplished. A simple example was that terminology from related standards (e.g., IEEE 12207 or 15288) might be chosen for future updates to minimize unintended differences in interpretation for similar practices.
On the appraisal side, workshop attendees were interested in reducing the impact of appraisal preparation on the organization undergoing the appraisal. While the move from a discovery-based to a verification-based methodology in the SCAMPI method does reduce on-site time, it places a higher burden on the organization gathering the practice implementation indicators (PIIs). Some participants asked if there could be ways to reduce this burden without sacrificing appraisal integrity, repeatability, or consistency.
A longstanding discussion continues in the workshops about the advantages and disadvantages of having the choice of two representations for an appraisal. Attendees believed that the distinctions between the representations have been reduced over time, most notably by the single-book approach and the elimination of advanced practices and common features.
A final theme that couples both model and appraisal issues is to view the model in a new way. This view divides model practices into two fairly distinct groups. One group emphasizes doing the work; these practices are well documented in the Project Management, Engineering, and Support process area categories at maturity levels 2 and 3. The other group emphasizes the organizational elements and the higher-maturity elements in which the practices focus on process improvement. These two parts mix nicely in use and appraisals, but it might be possible to focus on best practices for doing the work in one type of appraisal and on process improvement in another.
All of the ideas gathered in these workshops will take time to investigate and pilot—so our aspiration to release a new version of the product suite every 3-5 years does not appear to need adjustment at this point. I encourage all of you to consider attending one of the upcoming workshops to contribute your ideas. If you cannot attend, you can still submit your input and ideas. To ensure that the right groups receive your comments, please complete and submit a change request form.
I will be providing updates of the progress of these workshops both in future columns and at a variety of conferences over the next year. The CMMI Product Team intends to continuously improve the CMMI Product Suite to provide greater value for a growing population of CMMI users.
In my next column, I will discuss the work we have been doing to continuously improve V1.2 appraisals. While much has been done in the current product-suite release, our new approach to the certification of SCAMPI Lead Appraisers will leverage the current capabilities of Lead Appraisers while helping them to build their professionalism.
Charette, Robert; Dwinnell, Laura; & McGarry, John. “Understanding the Roots of Process Performance Failure.” Crosstalk 17, 8 (August 2004): 18-22.
As the director of special projects at the Software Engineering Institute, Mike Phillips leads the Capability Maturity Model Integration (CMMI) project for the SEI. He was previously responsible for transition-enabling activities at the SEI. Prior to his retirement as a colonel from the Air Force, he managed the $36B development program for the B-2 in the B-2 SPO and commanded the 4950th Test Wing at Wright-Patterson AFB, Ohio. In addition to his bachelor’s degree in astronautical engineering from the U.S. Air Force Academy, Phillips has master’s degrees in nuclear engineering from Georgia Tech, in systems management from the University of Southern California, and in international affairs from Salve Regina College and the Naval War College.