NEWS AT SEI
This article was originally published in News at SEI on: June 1, 2003
When we began the CMMI project, we knew that we had to provide an appraisal method to accompany the models. Our initial choice was SCAMPI V1.0 (Standard CMMI Appraisal Method for Process Improvement). But as we upgraded toward V1.1 models, our sponsor directed that we also specifically address the use of CMMI for source selection and contract monitoring, a function previously performed by the Software Capability Evaluation (SCE) or System Development Capability Evaluation (SDCE). SCAMPI V1.1 was significantly revised to enable that use. This column addresses some of the ways that SCAMPI can be used with CMMI models in source selection and contract monitoring.
Government contractors often receive a Request for Proposal (RFP) from a government program office that requires that candidate contractors achieve a particular CMM or CMMI maturity level to be considered a viable competitor for the contract. In some cases, proposing contractors are asked to provide evidence of process discipline. Frequently these are internal appraisals performed at the contractor’s expense.
For the Capability Maturity Model for Software (SW-CMM), these appraisals are usually CMM-Based Appraisals for Internal Process Improvement (CBA IPIs). Some contractors have chosen to administer an internally (usually corporately) sponsored SCE, which provides similar information. For CMMI models, these appraisals use the Standard CMMI Appraisal Method for Process Improvement (SCAMPI). Some program offices prefer to send their own or other external teams to appraise the contractors’ current development sites. As I mentioned above, these were called evaluations when the SW-CMM was the model, but SCAMPI has unified the appraisal methods into one that covers both internally sponsored assessments and externally sponsored evaluations.
An appraisal is used to confirm the progress a contractor has made in reaching higher levels of organizational maturity or process area capability. Several of a contractor’s projects are chosen for appraisal. These projects are supposed to be representative of the contractor’s full capabilities, not just the “pick of the litter.” Typically, functional area experts from the contractor’s other projects or non-project staff also are interviewed or provide data to ensure that the results gathered represent the entire contractor organization, not just the projects examined during the appraisal. The challenge for the appraisal team is to gain confidence that the goals of each applicable process area are being met across the organization—and that a deficiency in one project does not characterize the whole organization.
In many cases, the proposed development will require that the contractor create a new product team to work on the contract. Since a fully formed team may not have been created, the appraisal conducted to evaluate the contractor typically examines active project teams.
Because the appraisal precedes the creation of a new product team for the contract, the program office cannot have full confidence that the work to be performed under the proposal will be performed at the maturity level the appraisal suggests, because product teams are typically re-formed and staffed differently, often with both internal and external partners. On the other hand, contractors that are continuing to improve their organizational maturity may well exceed the maturity levels documented in earlier appraisals.
Another way of stating this is that there is a risk that a new contract will stimulate both hiring and reorganization within the contractor organization. This may bring together organizational elements that weren’t seen in the earlier appraisals. Further, many contracts for developing software-intensive systems demand teaming arrangements across multiple contractors. These requirements lessen the usefulness of predictive appraisals conducted on individual contractors because they are less directly applicable to the actual multi-contractor project team.
Thus, a preliminary appraisal may best be considered a way to establish the playing field among the contractors being considered—to set the minimum standards. If all proposing contractors are at maturity level 3, for example, the program office would have some degree of confidence that each of the contractors is able to predict cost and schedule realistically and that these contractors will be better able to work together than contractors that have not improved their development discipline to this level. Such a situation would not, however, give the program office confidence that development risks would be predictably addressed by the contractor team.
The original purpose of SCEs was to address risk to the program office. SCE teams were created by the program office to look at specific areas of concern. These areas of concern typically included a handful of process areas that were investigated at the contractor’s site for a few days. A method similar to the SCE, the SDCE, involved asking the contractor questions that focused on specific areas of concern instead of maturity level ratings. In spite of the initial emphasis of these methods, maturity levels eventually became commonly used and familiar to the organizations using them. A maturity level focus displaced the risk focus of the two evaluation methods in many acquisition environments.
This emphasis may now be changing in some environments. For example, in the Department of Defense (DoD), the guidance from the Office of the Secretary of Defense/Acquisition, Technology, and Logistics (OSD/AT&L), which had required maturity level 3 for large DoD programs, no longer requires these programs to achieve these ratings. The new guidance still encourages these programs to pursue process improvement, but the maturity level expectation is no longer stated in policy guidance.
The program manager must still mitigate the risks of developing complex software-intensive systems. The CMMI Product Suite may be of assistance. Using a CMMI model and the SCAMPI method, the program office can conduct a baseline appraisal of the overall project and gain agreement with the contractors about the primary developmental risks to the success of the contract. Risk mitigation then becomes a measure of program progress, and award fee mechanisms can be used to encourage and reward improvements that address the weaknesses identified in the baseline appraisal.
Contract monitoring appraisals, tailored to the areas of concern, can provide the confidence to the program office that needed improvements have been made by the contractors to ensure the success of the program. In one recent example, a baseline appraisal found 47 risk areas that could have potentially significant impact on program success. A year later, 41 of those risk areas had been mitigated by the contractors. The remaining six were still in progress, but the government and industry program managers agreed that this attention to process discipline was paying dividends on both sides.
Another approach briefed by an Army representative at the recent Software Technology Conference in Salt Lake City was similar. It allowed contractors to provide varied evidence of process improvement in their proposals. Maturity level documentation was not required. Instead, the contractors were encouraged to propose how the government-industry team would collaborate on continuing process improvement to both reduce risk and raise quality.
These real-world examples require the development of an acquisition strategy, RFPs, proposals, processes, and relationships that allow and encourage this sort of teamwork between the government and industry. Another factor in overall program success is the effectiveness of the teamwork across contractor teams from multiple companies.
A frequent challenge these days occurs when multiple contractors, often with very different cultural roots, must work closely together. In these situations, separate appraisals of each contractor’s capabilities may be misleading when applied to the overall program, as the teams have typically not been working together in the way they must after contract award. CMMI and SCAMPI offer some particular advantages in these situations. An appraisal of the entire team of contractors that focuses on a set of critical processes, or process areas, can be conducted. In some cases, there may be value in establishing a single process to be shared by multiple contractors; however, the primary value of such a focused appraisal is in knowing how the contractors must modify their processes to improve the effectiveness of integrating their work. Product integration and test is typically costly and time consuming. Early attention to process integration can pay big dividends by minimizing the time and effort required for product integration and test at the end of the product development life cycle.
There is a risk when contractors on a team vary in their level of process improvement achievement. A noteworthy example was a maturity level 5 contractor that needed subsystems provided by a maturity level 1 small company. The maturity level 5 contractor determined that the best way to ensure that these two contractors could work together effectively was to include the small company as a full team member for the contract. The small company’s staff worked within the process architecture of the maturity level 5 contractor, and contributed its pieces of the system within a well-established framework. The small company gained exposure to the values of process discipline as a result of the mentoring teamwork. Consequently, all organizations involved benefited from this strategic approach. The program office was assured success, the maturity level 5 contractor was able to meet its obligations with minimized risk, and the maturity level 1 contractor gained valuable experience.
CMMI provides elements of the Software Acquisition CMM (SA-CMM) in process areas at maturity level 2 and maturity level 3. At maturity level 2, the Supplier Agreement Management (SAM) process area addresses acquisition issues. At maturity level 3, the Integrated Supplier Management (ISM) process area provides further guidance in acquisition activities. The integrated product and process development (IPPD) process areas, Integrated Project Management (IPM), Integrated Teaming (IT), and Organizational Environment for Integration (OEI), of course, provide a rich set of practices to encourage effective teamwork. There are great opportunities for shared value when implementing the Requirements Development (RD) and Requirements Management (REQM) process areas using a teaming approach.
The SEI has been asked by several program offices to help them improve the systems engineering provided within government acquisition. The SEI has supplemented the CMMI models with practices more directly applicable to an acquisition environment; these were extracted from the Software Acquisition CMM (SA-CMM) to provide the basis for investigation. Visits to program offices lasting less than a week allowed team members to determine what capabilities were present in the workforce and where improved processes and/or training were required. While these visits share the basic data-gathering techniques of the more rigorous SCAMPI Class A, no attempt is made to determine a maturity level rating, as none is seen as necessary or desirable.
As you may have gathered from the examples used in this article, much of the SEI’s experience with contract monitoring has been with government organizations. However, these concepts are equally relevant in commercial industry. The goal is the same for government and industry—to ensure that the products they procure are developed and delivered by qualified contractors and result in quality integrated products.
CMMI models, coupled with a variety of appraisal approaches—from a full SCAMPI Class A through risk-based appraisals to capability determinations—can be used in a variety of ways to meet the needs of the acquisition workforce, both in government and industry.
Mike Phillips is the Director of Special Projects at the SEI, a position created to lead the Capability Maturity Model® Integration (CMMI®) project for the SEI. He was previously responsible for transition-enabling activities at the SEI.
Prior to his retirement as a colonel from the Air Force, he managed the $36B development program for the B-2 in the B-2 SPO and commanded the 4950th Test Wing at Wright-Patterson AFB, OH. In addition to his bachelor’s degree in astronautical engineering from the Air Force Academy, Phillips has master’s degrees in nuclear engineering from Georgia Tech, in systems management from the University of Southern California, and in international affairs from Salve Regina College and the Naval War College.