A Practical Approach to Improving Pilots

NEWS AT SEI

This article was originally published in News at SEI on: March 1, 2000

The staff of the SEI's Software Engineering Measurement and Analysis (SEMA) group have spent the past several years developing methods for measuring software engineering innovations and collecting data about them. More recently, SEMA has begun to focus on getting ahead of the process of introducing innovative methods by looking at new ways to design pilot projects.

SEMA leader David Zubrow and SEI technical staff member Will Hayes point out that pilot projects have long been regarded as a smart way to gain experience with a new idea. "But practical limitations often erode the good intentions of the professionals who conduct pilot studies," Hayes says. "Immediate customer needs are more important than research. Political influences—pro and con—can come into play. And reporting a negative result could be unacceptable."

Also, a pilot study that is viewed only as a feasibility analysis leads to a go/no-go decision, Hayes says. "The business context is rarely that simplistic. Utility, not feasibility, is what matters. The conditions that influence utility must be identified, as well as options and mitigators. A binary outcome is too limiting."

SEMA’s goal is to develop guidance for designing and conducting effective and efficient pilots. Zubrow says pilots will be effective if they yield results that support good decisions, and efficient if they consume as few resources as possible to achieve the desired level of confidence. The guidance for such pilot projects should address the design of the project and its measurement methods, the data collection, and the analysis of that data.

"The problem lies in the fact that conducting pilot projects and evaluations is a common practice, but little guidance exists," Zubrow says. "An opportunity is missed to leverage the results and experiences of previous pilots. There is also a significant risk in applying the results of a pilot to the broader organization if the pilot was not designed properly."

The question, Zubrow says, is "how do you design the pilot in a way that gives you the best information, given that there is a set of constraints: time, disruption, cost, et cetera."

Hayes and Zubrow have proposed the use of quasi-experimental techniques and meta-analysis to provide powerful, low-cost methods for going beyond the current practice of conducting pilot studies.

Quasi-Experimental Design

The "quasi-experimental" approach is so called because it does not meet the standard of a true scientific experiment. But such an approach is often more feasible given the constraints of time, cost, and allowable levels of disruption in an organization.

A pilot that uses a quasi-experimental design strives to approximate, in a field situation, a true experimental design, which requires the random assignment of subjects, control over their exposure to experimental treatment, control over other influences, and a large number of observations. In quasi-experimental design, allowances are made for the fact that it may be impossible to fully conform to these requirements. Zubrow points to three types of quasi-experimental designs:

  • time series, which involves periodic measurement with the introduction of an innovation
  • non-equivalent control group, in which the innovation is given to one candidate and another is selected to be a control
  • multiple time series design, which is a combination of the time series and non-equivalent control group designs

In a time series design, repeated measurements of performance are made before and after the introduction of the improvement. This approach requires only one pilot project, but it does not control for history (outside events that happen to coincide with the change), and its results are difficult to generalize. It works best when observations and measurements are taken for an extended time before and after the introduction of the improvement.
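To make the idea concrete, here is a minimal sketch of such a before-and-after comparison in Python. The monthly defect-density figures are invented for illustration; a real pilot would use the organization's own measures and a longer observation window.

    # Time series design: repeated measurements before and after an improvement.
    # All data below are hypothetical.
    from statistics import mean, stdev

    # Monthly defects per KLOC: ten observations before the new practice is
    # introduced, ten after.
    before = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0, 4.4, 3.9]
    after  = [3.6, 3.4, 3.5, 3.2, 3.3, 3.1, 3.0, 3.2, 2.9, 3.0]

    shift = mean(before) - mean(after)  # estimated level change at the intervention
    print(f"Mean before: {mean(before):.2f}, mean after: {mean(after):.2f}")
    print(f"Estimated improvement: {shift:.2f} defects/KLOC")

    # A rough check: is the shift large relative to pre-intervention noise?
    print(f"Shift is {shift / stdev(before):.1f} pre-intervention standard deviations")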

In a non-equivalent control group design, the innovation is applied to naturally occurring groups rather than through random assignment, so there is no assurance that the groups are equivalent before the "experiment"; this lack of equivalence is the design's chief weakness. Measurements are made at comparable times both before and after the introduction of the innovation. Shorter observation periods suffice, but at least two groups are needed. The method is useful when there are limited opportunities to gather data over time and there are two comparable situations in which the innovation can be tried.
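One common way to analyze such a design is a difference-in-differences comparison, sketched below with hypothetical rework-hours data. Subtracting the control group's change helps separate the innovation's effect from trends that would have occurred anyway.

    # Non-equivalent control group design, analyzed as difference-in-differences.
    # All data below are hypothetical.
    from statistics import mean

    # Rework hours per task for the group that received the innovation and a
    # naturally occurring comparison group that did not.
    pilot_pre,   pilot_post   = [12.0, 11.5, 12.4, 11.8], [9.1, 8.8, 9.4, 9.0]
    control_pre, control_post = [12.2, 11.9, 12.1, 12.3], [11.6, 11.8, 11.5, 11.9]

    pilot_change   = mean(pilot_post) - mean(pilot_pre)
    control_change = mean(control_post) - mean(control_pre)

    # The control group's change estimates what would have happened anyway.
    effect = pilot_change - control_change
    print(f"Pilot change: {pilot_change:.2f}, control change: {control_change:.2f}")
    print(f"Estimated effect of the innovation: {effect:.2f} hours per task")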

The preferred approach, when possible, is to use a combination of the time series and non-equivalent control group designs—the multiple time series design. It provides improved confidence in the results, but requires increased data collection and coordination.
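The sketch below combines the two previous ideas: an interrupted time series is observed for both a pilot group and a control group, and the control series' level shift (if any) is subtracted out. Again, all figures are hypothetical.

    # Multiple time series design: interrupted time series plus a control series.
    from statistics import mean

    def level_shift(series, k):
        """Mean change from the first k observations to the remainder."""
        return mean(series[k:]) - mean(series[:k])

    # Quarterly cycle times (days); the innovation reaches the pilot group
    # after the fourth observation, and the control group never receives it.
    pilot   = [30.1, 29.8, 30.4, 30.0, 26.2, 25.8, 25.5, 25.1]
    control = [29.9, 30.2, 30.1, 30.3, 29.7, 30.0, 29.8, 30.1]

    effect = level_shift(pilot, 4) - level_shift(control, 4)
    print(f"Estimated effect on cycle time: {effect:.2f} days")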

Meta-Analysis

Meta-analysis has been defined as "the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings."

With meta-analysis, study results and testimonials can be treated as data, with no requirement to accept one result and reject others. A weighting scheme can be devised based on such objective criteria as the amount of background information provided, the similarity of the context to the project at hand, the timeframe of the study, and the subject pool involved. The data can be employed to construct prediction intervals for the pilot study, and to provide added context to interpret pilot study results.
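A minimal sketch of such a weighting scheme appears below. The studies, effect sizes, and weights are all hypothetical; in practice the weights might combine sample size with judged similarity of context, time frame, and subject pool, as described above.

    # Weighted pooling of published effect sizes, meta-analysis style.
    # All studies and numbers below are hypothetical.
    from math import sqrt

    # Each entry: (reported effect size, weight reflecting relevance and precision).
    studies = [
        (0.45, 4.0),  # large industrial study in a similar context
        (0.60, 1.5),  # small academic study with student subjects
        (0.30, 2.5),  # older study from a somewhat different domain
    ]

    total = sum(w for _, w in studies)
    pooled = sum(e * w for e, w in studies) / total

    # Weighted spread around the pooled estimate gives a rough interval for
    # what the pilot might be expected to show.
    var = sum(w * (e - pooled) ** 2 for e, w in studies) / total
    lo, hi = pooled - 2 * sqrt(var), pooled + 2 * sqrt(var)
    print(f"Pooled effect: {pooled:.2f}; rough prediction interval: [{lo:.2f}, {hi:.2f}]")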

The use of research has limitations, Hayes points out, because typically only positive results are published. Also, use of student subjects in academic settings leads to skepticism about the applicability of the results to the "real world." Extrapolation beyond the research setting is typically an act of faith, not driven by quantitative methods.

Combining published research findings with pilot results can help minimize the weaknesses of both sources of information (a brief sketch follows the list):

  • Pilot study results can be estimated using published results from experiments.
  • Multiple studies can be used even if they report conflicting results.
  • Important differences in study design or subjects used can be incorporated into the analysis.
  • Multiple pilot studies can be analyzed in combination.
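As a sketch of the last point, a pilot's own estimate can be pooled with the published evidence as one more weighted data point (all numbers hypothetical):

    # Combining a pooled published estimate with a local pilot result.
    published_effect, published_weight = 0.42, 6.0  # pooled across prior studies
    pilot_effect,     pilot_weight     = 0.55, 3.0  # weight reflects pilot scale/context

    combined = (published_effect * published_weight + pilot_effect * pilot_weight) \
               / (published_weight + pilot_weight)
    print(f"Combined estimate of the improvement effect: {combined:.2f}")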

Designing pilots with meta-analysis in mind can further add value. "We want to design pilots to make the data amenable to meta-analysis. That way the organization builds a body of knowledge about itself and its capability to improve," Zubrow says.

References

Campbell, D. T., and Stanley, J. C. 1963. Experimental and Quasi-Experimental Designs for Research. Boston: Houghton Mifflin.

Hedges, L. V., and Olkin, I. 1985. Statistical Methods for Meta-Analysis. Orlando, FL: Academic Press.

Wolf, F. M. 1988. Meta-Analysis: Quantitative Methods for Research Synthesis. Beverly Hills, CA: Sage Publications.
