Using CMMI with Suppliers – The Acquirer’s Concerns

NEWS AT SEI

Author

Mike Phillips

This library item is related to the following area(s) of work:

Process Improvement
CMMI

This article was originally published in News at SEI on: February 1, 2005

Two previous columns have focused on appraisals, and it is time for another installment. I'll begin with a brief history of CMM-based approaches applied not for internal process improvement but by external entities, usually government organizations seeking to choose or otherwise examine a potential (or existing) provider of software-intensive products.

A Short History

Many of you may know that the CMM actually began its life as a way for government program managers to gather questions so they could gain confidence in selecting contractors to provide software-intensive systems for the Air Force's Electronic Systems Center. Over time, these questions became a set of best practices to look for, then the SW-CMM model, and now CMMI. From an appraisal perspective, these began as Software Process Assessments (SPAs). The SPA was replaced by two approaches: the CMM-Based Appraisal for Internal Process Improvement (CBA IPI) and the externally focused Software Capability Evaluation (SCE). With CMMI, the SEI was asked by the DoD to ensure that one method, SCAMPI V1.1, would be applicable to both internal and external use.

The establishment of a level 3 expectation as a discriminator has its own history. The first "requirement" for level 3 was established in a memorandum from a key Air Force executive in the Pentagon. That memorandum, though, was directed at government organizations that wished to provide organic capability, and it insisted that level 3 was the expectation. Over time, level 3 became a common requirement in requests for proposal, both inside and outside of government. For a little over a year, DoD had a policy requiring CMM level 3 for its largest (ACAT I) programs, but a revision to DoD directives eliminated that policy guidance. While specific guidance about these expectations is gone, encouragement of appraisals remains. Two paragraphs in DoD 5000.1 cover this:

4.2.5.2 Capability Reviews. Capability reviews such as manufacturing capability and software capability reviews are a useful tool available during source selections to assess the offerors' capability in selected critical process areas. Capability reviews may be the appropriate means for evaluating program-specific critical processes such as systems engineering, software development, configuration management, etc. The reviews would be useful to supplement process past performance data to ascertain the risks in selecting a given offeror and to assist in establishing the level of government oversight needed to manage the process-associated risks if that offeror is awarded the contract. The trade-off in determining whether or not to do a capability review would be the criticality of the process versus the time and resources to do the review versus the availability, adequacy, and currency of an offeror's process past performance data.

4.2.5.3 Capability Appraisals. In all cases, the program manager retains the right (and is encouraged) to independently evaluate the process capabilities of the selected team prior to or immediately after contract award in order to have a better understanding of potential risks associated with the development team's process capabilities. Once the developer is selected, the program manager can conduct an evaluation to support the up-front risk assessment of the developer's capability to deliver. Periodic appraisals are encouraged as part of contract process monitoring activities. The selection of assessment or appraisal method would be dependent upon the needs of the particular project, the level of risk associated with the project, and any areas of concern the program manager may have. The program manager should understand that: 1) appraisal and assessment results are another tool (like past performance) to gauge the likelihood that the contractor will succeed and perform to the requirements of the contract; 2) assessments are most valuable when they apply across the full program team, and not just one segment of the organization; and 3) domain experience is at least as important as process maturity level when evaluating the program team's capability.

Maturity Levels and Program Success

Maturity levels are an imperfect approach to assuring program performance. They are indicators of organizational potential: they describe how the next project will most likely be conducted, based on a sampling of existing projects. In other words, maturity levels characterize the organization; they are not an indication of how an individual project is performing. The organizational scope of an appraisal is usually limited to a division or company within a larger corporation that manages similar projects or a product line (see Figure 1). This "organization" may have many projects within its scope, but for the purposes of an appraisal, the number of focus projects is usually limited to between three and six. Taken as a whole, these focus projects must represent the entire lifecycle and be representative of the implemented processes and functional areas being investigated within the organizational unit.

Figure 1: Applicability of CMMI Maturity Levels

Acquisition offices are concerned with the performance of their programs. A program is usually made up of a set of projects from multiple contractor organizations, teamed with the acquisition office to deliver capability to an end user (see Figure 2). Choosing contractors with high maturity levels is therefore necessary but not sufficient to ensure that strong systems-engineering and program-management practices are employed on any given program.

Figure 2: The Acquirer's Concern

A better way of ensuring strong systems-engineering and project-management practices is to evaluate the capability of potential bidders in selected areas of importance to an acquisition program. For example, if an acquisition program is looking to hire a lead systems integrator, the areas of concern may be the bidder's ability to manage risk, manage suppliers, plan and track the program, build an integrated team, develop an architecture, and integrate the various components. During source selection, the acquirer could then evaluate these specific capabilities using relevant CMMI process areas to determine the strengths and weaknesses of a given bidder. This approach provides a key discriminator in the selection process, measured against a publicly available model and a set of industry-accepted practices.

Recent Developments

In recent years, a government-industry team concluded that it would be helpful to establish a registry of appraisals done by industry with government participation on the appraisal teams. Such a registry was viewed as giving the government more confidence in the outcomes. This approach remains available, with the SEI maintaining the record of government membership. If queried, the SEI can provide a reference to the participants without compromising the confidentiality of the actual appraisal data.

Although external evaluation options were added with the release of SCAMPI V1.1 and an associated guide for use in source selection and contract monitoring, the SEI has not received any of these appraisals to date. Existing government evaluators have noted that a "full" (level 3) appraisal, performed on the set of proposing contractors, would take too long to be acceptable as part of a source selection.

Over the last two years, the SEI teamed with a consortium of organizations using CMMI and a set of government evaluators to collect best practices for the less robust appraisals that we have characterized as class B or class C. We determined that our SCAMPI Lead Appraisers deserved a toolkit of building-block appraisal approaches compatible with existing SCAMPI A appraisals.

The handbook that resulted from this work includes the guidance a government team needs to conduct an externally oriented appraisal that uncovers the information required for source selection (or contract monitoring). The handbook helps teams identify the strengths and weaknesses of the program-development team and the likely risks to program execution, without seeking achievement of any specific maturity level. And because we have kept the requirements for CMMI knowledge at the same level we expect of our SCAMPI A teams, we believe we can maintain the needed confidence in appraisal-team performance. This is clearly a concern for both the government and contractor sides of these external appraisals.

Many acquisition programs are currently using this handbook, including the Navy's Multi-Mission Maritime Aircraft program, the National Reconnaissance Office's Future Imagery Architecture program, and the National Security Agency's Cryptologic Mission Management program; it has also been adopted by Australia's Defence Materiel Organisation.

After contract award, the acquisition program can continue to evaluate the actual performance of the entire team, including the acquisition program office, against selected process areas of interest: those from CMMI for the developers, and those from the CMMI Acquisition Module (CMMI-AM) for the acquisition program office. This ongoing look at how the program is performing can provide early indicators of process-related risk and will help ensure that strong systems-engineering and project-management practices are employed across the entire program team.

About the Author

Mike Phillips is the Director of Special Projects at the SEI, a position created to lead the Capability Maturity Model Integration (CMMI) project. He was previously responsible for the SEI's transition-enabling activities.

Prior to his retirement as a colonel from the Air Force, he managed the $36B development program for the B-2 in the B-2 SPO and commanded the 4950th Test Wing at Wright-Patterson AFB, OH. In addition to his bachelor's degree in astronautical engineering from the Air Force Academy, Phillips has master's degrees in nuclear engineering from Georgia Tech, in systems management from the University of Southern California, and in international affairs from Salve Regina College and the Naval War College.

Please note that current and future CMMI research, training, and information has been transitioned to the CMMI Institute, a wholly-owned subsidiary of Carnegie Mellon University.

