NEWS AT SEI
This article was originally published in News at SEI on: March 1, 1999
Software developers must consider requirements early in the development process--and consider them carefully. Errors that occur early in the software development life cycle, but are not discovered until late in the life cycle, are the costliest to fix. This realization has encouraged software engineers to focus increased attention on requirements, as a way to find improvements that will have a large impact.
While the Software Engineering Institute does not concentrate specifically on requirements engineering, all SEI work in software engineering management and technical practices bears some relation to requirements elicitation, analysis, or management.
In this article, we survey some of the ways in which the SEI is meeting the challenges of requirements engineering.
Rick Kazman of the SEI’s Architecture Tradeoff Analysis Initiative points out that there are two kinds of requirements: functional and quality. Most customers and developers have focused on functional requirements--what the system does and how it transforms its input into its output. But while functional requirements are necessary, quality requirements are critical: they significantly influence the shape of the software architecture.
“You can satisfy a set of functional requirements with different software structures. The choice between these software structures is not constrained by the functional requirements,” Kazman says. “The choice is affected by how you meet the quality requirements. The way that I satisfy functional requirements with structure A versus structure B versus structure C may have a huge impact on how maintainable the system is, or how well it performs, or how well it scales, or how secure it is, or how reliable it is. Those qualities have nothing to do with what the system computes, but they determine how fit the system is for its purpose over time, how expensive it will be to build and maintain, and how error prone and secure it will be.”
Choices among different quality requirements shape the architecture, Kazman explains. “Each requirement suggests certain architectural structures and rules other ones out. I will choose one set of architectural structures over another because I know that it’s a good architecture for being able to predict and control end-to-end latency or throughput, or that it’s a good architecture for high availability.”
Architecture analysis provides another benefit to requirements-engineering efforts. The process, which can include structured walkthroughs of the architecture and construction of models, often clarifies a system’s requirements. “It causes us to ask questions of the requirements that often the stakeholders haven’t asked themselves,” Kazman says. For example, a requirement might call for end-to-end latency of three seconds. “It might turn out that it’s really hard to achieve that in a particular architecture. We might ask, ‘Do you really need three seconds?’ The stakeholders might say, ‘No, we just picked that number, it could be five.’ Or, they might say, ‘Yes, it’s three come hell or high water.’ Then we just have to change everything to meet three.”
Architecture analysis also uncovers missing, overlooked, and insufficiently well-understood requirements, as well as requirements that are too vague to be testable, such as: “The system shall be maintainable and robust.” Kazman says, “There’s no way you can test whether that requirement is met or not. The process forces you to say what you really mean by ‘maintainable and robust.’”
Software requirements have been cited as the source of many, if not most, of the errors that are manifested throughout a development or upgrade effort, says Dave Gluch of the SEI’s Dependable Systems Upgrade Initiative. Frequently these errors are not discovered until later--during design, code, or test, for example--and they can remain latent until they cause a failure during operations.
Peer reviews of requirements can be effective in uncovering errors, but participants in reviews often spend much of their effort checking such properties as consistent terminology and syntax. “As a result of this attention to application-independent aspects, reviewers can fail to uncover many of the incorrect facts or logic errors associated with the application,” Gluch says. “In addition, many procedurally formalized review protocols do not provide technical direction for the reviewers, often resulting in subtle logic errors going undetected. Many of the errors in requirements arise out of complex interactions that cannot be easily unraveled through manual analysis.”
An emerging approach toward improving error identification throughout a software development or upgrade effort is to judiciously incorporate formal methods, in the form of models, into verification practices. The SEI is maturing and codifying this collection of techniques into a broad practice suite, termed “model-based verification,” for identifying certain types of errors in software designs.
At the center of model-based verification is a systematic practice of building and analyzing “essential” models of a system. Essential models are simplified formal representations that capture the essence of a system, rather than provide an exhaustive, detailed description of it. “In analyzing essential models you don’t really execute the model,” Gluch says. “Rather, because it is a formal model, you explore its characteristics based on the mathematical properties of the model itself.”
The reduced complexity of essential models helps to provide the benefits of formal methods while minimizing the high cost normally associated with them. “We’re not saying do everything in formal methods. We’re saying use the formalism in a judicious and pragmatic way with essential models. Focus on what is essential to the system and model that.”
While model-based verification is applicable to the analysis of designs and code, the highest leverage and greatest benefits come from its use in analyzing requirements. “A good percentage of errors occur at the requirements stage, and if they’re not discovered until later on, the cost of fixing them is substantially higher,” Gluch says. “So the earlier you can find the error, the greater leverage you have. There can be an order of magnitude difference between finding an error in requirements versus finding it in design or in the code.”
Because it requires discipline and rigor to build a formal model, simply building the model uncovers errors in requirements. Then, once the formal model is built, it can be analyzed using automated model-checking tools, which can uncover especially difficult-to-identify errors that emerge in complex systems with multiple interacting and interdependent components. “These types of errors are almost impossible to detect during manual reviews,” Gluch says. “There have been cases where potentially catastrophic errors were uncovered in requirements specifications that had been extensively reviewed, simulated, tested, and implemented.”
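To make the idea concrete, here is a minimal, hypothetical sketch--not one of the SEI’s tools--of what a model checker automates. An “essential” model of two controllers sharing an actuator is explored exhaustively, and a requirements flaw (a non-atomic check-then-grab rule) surfaces as a counterexample trace of exactly the kind that is hard to spot in a manual review. The model, names, and rules are invented for illustration; tools such as SMV perform this sort of search symbolically and at far larger scale.

```python
from collections import deque

# Hypothetical "essential model": two controllers may each request a shared
# actuator.  The (deliberately flawed) rule lets a controller observe the
# actuator as free and grab it later, so both can end up using it -- the kind
# of interaction error that manual review tends to miss.

def initial_state():
    # (controller_0_mode, controller_1_mode): 'idle', 'checked', or 'using'
    return ('idle', 'idle')

def successors(state):
    """Enumerate every next state the model allows from this one."""
    for i in (0, 1):
        modes = list(state)
        other = state[1 - i]
        if modes[i] == 'idle' and other != 'using':
            modes[i] = 'checked'          # controller i observes the actuator as free ...
            yield tuple(modes)
        elif modes[i] == 'checked':
            modes[i] = 'using'            # ... and grabs it later, even if things changed
            yield tuple(modes)
        elif modes[i] == 'using':
            modes[i] = 'idle'             # eventually releases it
            yield tuple(modes)

def violates_invariant(state):
    # Requirement under analysis: the actuator is never driven by both controllers.
    return state == ('using', 'using')

def check():
    """Breadth-first exploration of the reachable state space."""
    start = initial_state()
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, trace = frontier.popleft()
        if violates_invariant(state):
            return trace                  # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None                           # invariant holds over all reachable states

if __name__ == '__main__':
    counterexample = check()
    if counterexample:
        print('Invariant violated; trace:', ' -> '.join(map(str, counterexample)))
    else:
        print('Invariant holds over all reachable states')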
An approach similar to model-based verification was used by the SEI for the POSIX 1003.21 working group during the early stages of developing a language-independent standard for real-time distributed systems communication. A mathematical specification language was used to create a model of the standard’s requirements. “The development and checking of the model--though without the benefit of tools--discovered gaps and inconsistencies in the requirements,” says the SEI’s Pat Place, who worked on the POSIX project. “The entire working group felt that the exercise was beneficial and was an efficient way to improve the quality of the requirements.”
A summary of model-based verification and its technical foundations can be found in the technical report, Model-Based Verification: A Technology for Dependable Upgrade.
The SEI has been conducting small-scale studies into the application of model-based verification techniques, focusing on the engineering and process issues associated with implementation. The results of one of the studies are summarized in the SEI technical report, A Study of Practice Issues in Model-Based Verification Using the Symbolic Model Verifier (SMV).
These studies are providing important data on the time, expertise, and engineering decisions required, as well as insight into the effectiveness of the approach. The results of these studies and future pilot investigations on more complex real-world systems will form the basis for developing engineering and management guidelines for implementing and using model-based verification in complex software development and upgrade projects.
In traditional software development, system requirements are defined in minute detail, and the system is then built to match those requirements. But if developers want to use commercial off-the-shelf (COTS) components, requirements often need to be much more flexible, and much less specific, says the SEI’s Pat Place. COTS components have usually been designed with the software marketplace, rather than a specific developer’s needs, in mind. “COTS products meet the requirements that are perceived to be the most likely to make a sale,” Place says. As such, if developers specify system requirements too narrowly, they may find that no COTS products exist that match those requirements.
One approach to achieving the necessary flexibility is to divide requirements into three groups: “must have,” “should have,” and “nice to have.” As the names suggest, systems that do not satisfy the “must-have” requirements are unsuitable, while the “should-have” and “nice-to-have” requirements become tradeoff points that help determine which COTS products should be used. “Partitioning the requirements into these groups involves a great deal of discipline,” Place says. The traditional view treats all requirements equally. Under the new approach, “each ‘must-have’ requirement needs to be examined and justified as to why it is non-negotiable.” Similarly, the Department of Defense’s “cost-as-independent-variable” principle states that if requirements make a system too costly, the requirements must be renegotiated.
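The sketch below--with invented product names, requirement identifiers, and weights--illustrates one way such a partition can drive COTS screening: candidates missing any must-have are ruled out, and the remaining requirements serve only as tradeoff points for ranking.

```python
# Hypothetical illustration of must/should/nice screening of COTS candidates.
# All product names, requirement IDs, and weights are invented for the example.

REQUIREMENTS = {
    'encrypts-data-at-rest': 'must',
    'runs-on-existing-os':   'must',
    'imports-legacy-format': 'should',
    'scriptable-interface':  'should',
    'customizable-reports':  'nice',
}

CANDIDATES = {
    'Product A': {'encrypts-data-at-rest', 'runs-on-existing-os',
                  'scriptable-interface'},
    'Product B': {'encrypts-data-at-rest', 'imports-legacy-format',
                  'customizable-reports'},
    'Product C': {'encrypts-data-at-rest', 'runs-on-existing-os',
                  'imports-legacy-format', 'customizable-reports'},
}

WEIGHTS = {'should': 2, 'nice': 1}   # tradeoff points, not pass/fail criteria

def screen(requirements, candidates):
    musts = {r for r, level in requirements.items() if level == 'must'}
    ranked = []
    for name, satisfied in candidates.items():
        if not musts <= satisfied:
            continue                              # unsuitable: a must-have is missing
        negotiable = (satisfied & set(requirements)) - musts
        score = sum(WEIGHTS[requirements[r]] for r in negotiable)
        ranked.append((score, name))
    return sorted(ranked, reverse=True)

if __name__ == '__main__':
    for score, name in screen(REQUIREMENTS, CANDIDATES):
        print(f'{name}: tradeoff score {score}')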
Coupled with the notion of flexibility is the notion of abstraction: How much detail should be put into the requirements? “If we specify too much detail, we may be unable to find COTS products that match our requirements, thus eliminating the acquisition of a COTS-based system,” Place says. “However, too little detail means that we may be unable to distinguish between competing products that all claim to satisfy the requirements. Indeed, it may leave us unable to reject products that we know are unsuitable for use. In a competitive contracting situation it may be difficult to eliminate such proposals without fear of a protest.”
Product suitability also depends on quality factors, not just the satisfaction of functional requirements. For example, if the look and feel of the product’s user interface differs from the interface for other components of the system or embodies concepts unacceptable to the users--such as opaque windows when the users want transparent windows--then the interface is unsuitable. If the interface cannot be tailored to eliminate such problems, then the product itself is unsuitable.
One of the advantages of COTS-based acquisition is the existence of a marketplace of COTS products as well as a secondary marketplace of consultants. “It seems to be a general rule that, whenever there are a number of competing products, there will also be consultants willing to offer an opinion and help a purchaser select between the products,” Place says. “If we can identify honest consultants, then we might use those consultants to help set the requirements for our system. They can provide us with knowledge about different products’ capabilities, and they may also bring knowledge concerning previous attempts to use one product or another within a system. We have to be wary, though, since in addition to knowledge, consultants may bring bias toward one product or another.” This contrasts with traditional software development approaches, where consultants usually have been specialists in the system rather than the marketplace.
When an organization acquires a system with COTS components, it must consider the relationship between function and technology. Function defines what a product actually does, while technology refers to the underlying concepts of the product--whether it is client/server, distributed, or Web based, for example. “Many new technologies are available to us,” Place says. “If we specify a technology in the requirements, we may lock into a particular technological approach and eliminate many products in the marketplace. On the other hand, if we don't specify a technology, we may have to evaluate products embodying technologies that we would rather not use.” Another risk of not specifying technology is that developers might end up with a system whose technology is supported by only a limited number of vendors.
Further, just because a COTS product uses the right technology and provides the right function does not mean it is the best product for the intended system. “In at least one case, we’ve seen a COTS product chosen because it provided the right technological model--distributed client/server--with less attention paid to the details of the manner in which the product operated. When placed into the intended application, it was discovered that functionality was deficient and required extensive rework,” Place says. On the other hand, to achieve a desired functionality, developers might have to accept a particular technology, which could then affect the entire system’s architecture and design.
When acquiring a system with COTS components, requirements engineers must consider the overlap between the marketplace and the requirements for the system. “The requirements, marketplace, and design each influence the other. The result is that a system developer must consider each of them simultaneously, accepting that the resultant system will be a compromise among these different concerns,” Place says.
Issues of flexibility also confront developers of software product lines and customers who want to apply product lines to new systems.
Developing software for product lines requires two separate requirements engineering processes, one for the assets that will be common across the product line and one for each individual product. Sholom Cohen, who works on the SEI’s Product Line Practices Initiative, explains that the asset-development effort, usually called domain engineering, involves establishing boundary conditions for the product line. These include the functional and quality requirements, the classes of users, and the context for use.
If the asset requirements are thoroughly developed, the process for developing product requirements can be relatively simple. “Instead of needing to generate requirements from scratch, the new product could be 80 percent spelled out already,” Cohen says. Some product line approaches use “generators.” A user specifies the capabilities and requirements for a product and the software is automatically generated.
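As a rough, hypothetical illustration of the generator idea--asset and capability names are invented--a product specification can be reduced to the list of product-line capabilities it needs, with the generator assembling components from the asset base and flagging anything the product line does not yet cover:

```python
# A deliberately tiny sketch of a product-line "generator": the asset base maps
# capabilities to reusable components, and a product is specified only by the
# capabilities it requires.  A real generator would also produce glue code,
# build files, and documentation.

ASSET_BASE = {
    'track-targets':    ['sensor_fusion', 'track_store'],
    'display-tactical': ['map_renderer', 'symbology'],
    'log-mission':      ['event_logger'],
}

def generate_product(name, capabilities):
    components, unsupported = [], []
    for cap in capabilities:
        if cap in ASSET_BASE:
            components.extend(ASSET_BASE[cap])
        else:
            unsupported.append(cap)   # needs new requirements work and new assets
    return {'product': name,
            'components': sorted(set(components)),
            'unsupported': unsupported}

if __name__ == '__main__':
    spec = generate_product('Patrol variant',
                            ['track-targets', 'log-mission', 'night-vision'])
    print(spec)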
Determining requirements for product lines, especially lines that are expected to last for several years, can be extraordinarily difficult because it often involves forecasting the future. “You’re not specifying requirements for products that are known entities,” Cohen says. “You may be able to look at legacy systems, or at competitors’ products. You may go to domain experts and technology experts.” But, he points out, such efforts would not have helped a requirements engineer working in 1993 who wanted to plan a long-lasting set of products for the World Wide Web. “Today people are talking about total desktop computer systems for the Web that could not have been envisioned five or six years ago.”
Cohen is currently working with a systems developer on a class of systems that have never been built, and which are expected to use future electronics that will go beyond anything that exists today. “The requirements problem becomes quite severe. It’s a totally new concept that requires visiting a broad range of different types of stakeholders: users, developers, marketing people, and people who understand where the technology is going.” He adds, “It’s very hard to make this kind of prediction. But if you don’t try to do it, you’ll define the product too narrowly and come up with a rather static definition. If you define it too broadly, you end up with something that is too shallow to really cover anything in any depth. Then you have to go back and refine the requirements at a later stage.”
For systems developers, using product lines offers great advantages in terms of lower costs, faster development times, and easy integration of new products. But developers have to be willing to work with the existing assets. “Our recommended approach is that the product requirements get developed in light of the existing assets, so the product line requirements, capabilities, and architectural qualities guide the new system development,” Cohen says. “If some capabilities aren’t there, maybe the developers can live without them, or they can be developed in a special way so that they are compatible with everything else in the product line.”
In some cases, a new customer presents a set of requirements that raises questions about whether a product line has become obsolete. One company with which the SEI has worked, CelsiusTech, developed a product line for shipboard command and control that was installed on 10 systems for different navies around the world. When a new customer presented a requirements document, CelsiusTech determined that it would take three years and cost $10 million to build the system to specifications, but if the customer accepted only 85 percent of the capability the project could be done in six months for one-third the cost. “If a new customer says no, then the product line organization has to ask whether it’s a request from out on the fringes, because of some unique weapons system or some geography they must deal with, or whether it represents a trend that’s evolving. In that case, the organization would need to reconsider the pool of assets in its product line, and go through a new requirements cycle.”
For more on the CelsiusTech case see the SEI technical report A Case Study in Successful Product Line Management.
The SEI and the software engineering community have recently begun to study the types of requirements that should be specified for software systems to survive adverse conditions and continue to support the organizational mission. Such requirements are especially important in large-scale, critical infrastructure systems, and in life- and mission-critical applications.
Survivability is defined as the capability of a system to fulfill its mission, in a timely manner, in the presence of attacks, failures, or accidents. “People seldom, if ever, have a requirements specification for survivability,” says Nancy Mead of the SEI’s Survivable Systems Initiative. “They’ll talk about functional requirements and performance requirements. Sometimes they have security requirements, which is all about prevention. Survivability says, ‘What if I can’t prevent the attack or a failure?’” In the area of attacks and intrusions, Mead says, software customers should consider a number of survivability questions.
The SEI’s work on the Survivable Network Analysis (SNA) method has shown a real need to develop requirements for survivability. The prototype SNA method that is being used with clients enables software engineers to analyze an architecture to understand what functions must survive an attack, how to increase the survivability of those functions, and how to improve the capability of the system to recover from the attack. “We’ve got a long way to go in terms of understanding what it is that makes a good set of survivability requirements, and we’re doing research on that topic,” Mead says. “Right now we can look at individual systems and say, after some analysis, ‘Here’s how you could strengthen the requirements in this area.’ But what we need is a general statement of what people should be doing.”
The SEI has held two IEEE-sponsored workshops on “information survivability,” which is a term often used interchangeably with “system survivability.” The most recent Information Survivability Workshop (ISW’98) was held in Orlando, Florida, this past October. Howard Lipson of the SEI served as general chair, and John Knight of the University of Virginia served as program chair. This invitation-only workshop focused on the domain-specific survivability requirements and characteristics of four different critical infrastructure and critical application areas: banking, electric power, transportation, and military information systems. The primary goal of the workshop was to foster cooperation and collaboration between domain experts and the survivability research community to improve the survivability of critical, real-world systems. ISW’98 brought together many of the leading researchers in the field of information survivability along with distinguished figures from critical infrastructure and application domains.
In its program of research and development, the Survivable Systems Initiative has also formed a Survivable Systems Working Group with Carnegie Mellon’s School of Computer Science to explore collaborative research efforts in survivability. And the SEI’s David Fisher and Howard Lipson are currently developing a survivable systems simulator.
The simulator promises to help organizations establish requirements and evaluate tradeoffs for survivable systems. Fisher and Lipson are using two assumptions to guide their work.
Lipson describes the survivability mission as “a very high-level statement of requirements” that is dependent on a particular context. For example, a financial system for trading on Wall Street might have a requirement that says it will never be down for more than five minutes. But what if a natural disaster interrupts power to New York City for 24 hours? Such a scenario encourages developers to think in terms of the context-dependent mission of the system, which in this case might be to maintain the integrity and confidentiality of data so that it can be quickly recovered after power is restored.
The ability to evaluate a system under “what-if” scenarios, such as the New York City power outage, is a central feature of the simulator. But rather than simply showing whether a system fails or succeeds, the simulator will provide information about how well the system survives. “It will show how robust the system is in the presence of the scenarios,” Lipson says. This differs from traditional ideas about computer security, which are “binary”--an attack is either successful or it is not. With survivability, “there is no overall rating number, like ‘98 percent survivable.’ It is all based on the context of survivability scenarios.”
If the requirements of a system are not adequate for a certain level of survivability, they can be altered and run through the scenario again. Or, a different attack scenario can be run against the same set of requirements. Also, the requirements will not have to be precise and comprehensive. “The simulator will allow us to simulate applications at varying levels of abstraction,” Fisher says. “Requirements tend to be abstract and not spelled out in every detail.”
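The sketch below is not the Fisher-Lipson simulator; it is only a hypothetical illustration, with invented services, scenarios, and scoring rules, of the graded, scenario-based evaluation described above: each essential service gets a survivability score per what-if scenario, rather than the system receiving a single pass/fail verdict.

```python
# Hypothetical sketch of graded, scenario-based survivability evaluation.
# Services, scenarios, and the scoring heuristic are all invented.

SERVICES = {
    'trade-execution': {'replicated': False, 'recovery_hours': 0.1},
    'data-integrity':  {'replicated': True,  'recovery_hours': 2.0},
}

SCENARIOS = {
    '24-hour power outage':            {'outage_hours': 24, 'hosts_lost': 0},
    'intrusion, 2 hosts compromised':  {'outage_hours': 0,  'hosts_lost': 2},
}

def survivability(service, scenario):
    """Crude illustrative scoring: how well does the service ride out the event?"""
    score = 1.0
    if scenario['outage_hours'] > service['recovery_hours']:
        score -= 0.5          # service is down for part of the scenario; degrade, don't zero out
    if scenario['hosts_lost'] and not service['replicated']:
        score -= 0.4          # unreplicated service exposed to host loss
    return max(score, 0.0)

if __name__ == '__main__':
    for sc_name, sc in SCENARIOS.items():
        print(sc_name)
        for svc_name, svc in SERVICES.items():
            print(f'  {svc_name}: {survivability(svc, sc):.1f}')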
No simulator currently exists that can represent a system in a highly distributed, unbounded configuration, Fisher says. Traditional security approaches assume a bounded domain, in which an administrator has control of the entire system. Lipson adds, “The class of problems we’re most interested in is associated with unbounded networks, such as the Internet. No one has complete control. Survivability is a new way of thinking that goes well beyond security.”
The simulator is in the early development stages. Fisher and Lipson plan to demonstrate it internally at the SEI later this year.
Another simulation is being tested by the SEI’s Alan Christie, in cooperation with Mary Jo Staley, an SEI affiliate from Computer Sciences Corp. (CSC). The simulation is designed to determine how long it will take a team to develop requirements and how well those requirements will meet a particular quality standard.
Christie, who works on collaborative processes at the SEI, says he and his associate at CSC first developed a formal textual description of the requirements development process, based on CSC’s principles for joint software application development. “It’s a very intensive process,” Christie says. “The team gets together for multiple-day sessions to tease out the issues associated with specific requirements for a particular application.”
Christie’s simulation assumes that the team will comprise three domain experts, a database expert, a graphical user interface expert, and a systems integration expert. The goal of the simulation is to predict--based on the amount of resources applied to the effort--how long the process will take and the quality of the requirements product that will emerge from it. The simulation explores how resource constraints at the organizational level interact with the effectiveness of members’ technical and communication skills at the detailed requirements-development level. Its output provides insight into the length of the requirements development process and the resulting requirements quality. That quality in turn affects the subsequent review cycle, with potential upper-management involvement if quality is low.
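The following is not Christie’s model--just a minimal Monte Carlo sketch, with invented numbers, of the same question: given a team’s average skill and a fixed budget of session hours, roughly how long will requirements development take and what quality will result?

```python
import random

# Hypothetical Monte Carlo sketch: estimate expected effort and quality of a
# requirements-development effort as a function of team skill and available
# hours.  Every constant here is invented for illustration.

def simulate(n_requirements=40, team_skill=0.7, hours_available=120, trials=1000):
    durations, qualities = [], []
    for _ in range(trials):
        hours = quality_sum = 0.0
        for _ in range(n_requirements):
            hours += random.uniform(1.0, 4.0) / team_skill       # hours per requirement
            quality_sum += min(1.0, team_skill * random.uniform(0.8, 1.2))
        durations.append(hours)
        # If the team runs out of hours, remaining polish is lost and quality drops.
        penalty = max(0.0, (hours - hours_available) / hours)
        qualities.append((quality_sum / n_requirements) * (1 - penalty))
    avg = lambda xs: sum(xs) / len(xs)
    return avg(durations), avg(qualities)

if __name__ == '__main__':
    hours, quality = simulate()
    print(f'expected effort: {hours:.0f} hours, expected quality: {quality:.2f}')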
Requirements engineering could also benefit from an investigation by Christie and the SEI’s Ray Williams on collaborative team risk management. This project proposes to examine the difficulties encountered when teams of suppliers and customers must work together, over great geographic distances, to develop risk-management plans. The work is aimed at exploring the use of technologies to enhance communication, and the entire collaborative experience, for teams from different organizations and remote locations who must work together. The technologies, which go well beyond videoconferencing, include tools for collaborative brainstorming, such as computer-enhanced whiteboards that members can write on and view simultaneously at multiple locations, as well as projectors that display life-sized images of team members rather than showing them “on a monitor in the corner of the room,” Christie says. Use of these technologies could foster more frequent and richer interactions, encourage a global perspective, and inhibit parochial and personal views from interfering with a collaborative effort.
Although the work would focus on team risk management, Christie says requirements development efforts “show very similar characteristics. You need the same sort of high-bandwidth communication channels to support both the computer displays that you can interact with in multiple locations and the large projected images.” Also, he points out, requirements development often involves bringing together geographically dispersed supplier and customer teams. “We won’t know what all the implications are until we gain more experience,” he says. “But clearly, if you can allow people to interact frequently without them having to fly across the country, it has a real benefit.”
The Capability Maturity Model Integration (CMMI) project, a joint effort of the SEI, government, and industry, takes a holistic view of the requirements engineering challenge. As such, it touches on elements of the other requirements efforts reviewed in this article.
The objective of CMMI, which is expected to release models for public review and piloting in summer 1999, is to develop a product suite that provides industry and government with a set of integrated models to support organizational process improvement. It establishes common ground among systems engineering, software engineering, and the collection of maturity models that have evolved since the SEI’s landmark release of the CMM for Software in 1993.
CMMI calls for organizations to be proactive and to think long term about requirements. “A lot of the work at the SEI, in areas such as architecture analysis, COTS, and product lines, has a direct benefit on the proactive character of the requirements process,” says Mike Konrad, who is helping coordinate the CMMI project. “Your understanding of what the product should do matures with time.” Konrad adds, “Other SEI initiatives play into the technical capability of the organization, and help it be more agile and more capable of understanding the implications of particular requirements.” Those implications can include the product’s competitive position, whether it faces regulatory issues, and its impact on the organization’s long-term strategy. “That positioning and that proactive view are what we are focusing more attention on in CMM Integration,” Konrad says.
CMMI also acknowledges the crucial role of communication among affected parties. “Many software and systems engineering organizations have learned that there must be a dialog for understanding what is required of the product, both now and in the future,” Konrad says.
But grounding that dialog in a language that is understandable to all parties is one of the thorniest challenges of requirements engineering, says the SEI’s Mark Paulk, a principal developer of the CMM for Software. Often requirements are not explicitly stated, but are assumed, by either the end user or the requirements engineer, as part of the context. “As a specifier of requirements, I might not even know that we don’t have a shared understanding until I actually get to the point of using the system that’s been delivered,” Paulk says. “It’s a communication issue. You need domain experts working with the customer and the end user to help surface the things that users don’t know how to say or don’t know that they need. It’s very difficult.”
Previous Capability Maturity Models touched on elements of requirements engineering, and requirements management is a key process area in the CMM for Software. But in the case of software embedded as part of a larger product that is to be produced, other requirements challenges were considered to be the domain of systems engineers--that is, another part of the developing organization--who had responsibility for system requirements, Konrad says. CMMI also recognizes that software and systems cannot be separated, and that software engineers must proactively work with systems engineers at the earliest stages of requirements capture and tradeoff analysis. “The two must work together because more and more products, which before had some kind of special-function mechanical or manufactured electronic component, now have an embedded piece of software that does that same function,” Konrad says. “The software provides its own integration of many of the product functions that used to be handled separately by different subsystems. Those can now be integrated through software. Products are not just systems engineering and not just software engineering. The reality is that there have been lessons learned in both disciplines, and both software and systems engineers should benefit from having that shared understanding of the requirements process. CMMI will also benefit organizations for which the produced product is just software, enriching the information that has been given in the past with lessons learned from both software and systems engineering.”
Ultimately, CMMI should help organizations develop “a deeper competence and understanding of what it takes to meet their customer’s needs, now and in the future,” Konrad says. “That knowledge may be a legacy of the requirements journey that all the other SEI initiatives play into.”
Bill Thomas is the editor in chief of SEI Interactive. He is a senior writer/editor and a member of the SEI's technical staff on the Technical Communication team, where his duties include primary responsibility for the documentation of the SEI's Networked Systems Survivability Program.
His previous career includes seven years as director of publications for Carnegie Mellon University's graduate business school. He has also worked as an account manager for a public relations agency, where he wrote product literature and technical documentation for Eastman Kodak's Business Imaging Systems Division. Earlier in his career he spent six years as a business writer for newspapers and magazines.
He holds a bachelor of science degree in journalism from Ohio University and a master of design degree from Carnegie Mellon University.