News, Announcements, and Press Releases
October 1, 2010—The following technical reports and technical notes were published recently by the Software Engineering Institute. For the latest SEI technical reports and papers, see http://www.sei.cmu.edu/library/reportspapers.cfm.
The SGMM Team
The Smart Grid Maturity Model (SGMM) is a business tool stewarded by the Software Engineering Institute at Carnegie Mellon University. It was originally developed by electric power utilities for use by electric power utilities. The model provides a framework for understanding the current extent of smart grid deployment and capability within an electric utility, a context for establishing strategic objectives and implementation plans in support of grid modernization, and a means to evaluate progress over time toward those objectives.
The SGMM is composed of eight domains and six maturity levels as detailed in this document, which contains the full definition and description of the model. Introductory material to aid in understanding the purpose and use of the SGMM is also provided.
The primary audiences for the SGMM, and for this document, are electric power utilities that are seeking guidance related to the modernization of their operations and practices for delivering electricity. The audience also includes any related stakeholders for such utilities. Currently, the model is better suited for utilities with transmission and distribution operations than for pure generation utilities.
Nancy R. Mead & Julia H. Allen
Researchers at the CERT Program, part of Carnegie Mellon University’s Software Engineering Institute, need a framework to organize research and practice areas focused on building assured systems. The Building Assured Systems Framework (BASF) addresses the customer and researcher challenges of selecting security methods and research approaches for building assured systems. After reviewing existing life-cycle process models, security models, and security research frameworks, the authors used the Master of Software Assurance Reference Curriculum knowledge areas as the BASF. The authors mapped all major CERT research areas to the BASF, demonstrating that the BASF is useful for organizing research on building assured systems. The authors also performed a gap analysis to identify promising CERT research areas. The BASF is a useful structure for planning and communicating about CERT research. The BASF will also be useful to CERT sponsors for tracking current research and development efforts in building assured systems.
Travis Christian & Nancy Mead
Security is often neglected during requirements elicitation, which leads to tacked-on designs, vulnerabilities, and increased costs. When security requirements are defined, they are often either too vague to be of much use or overly specific in constraining designers to use particular mechanisms. The CERT Program, part of Carnegie Mellon University’s Software Engineering Institute, has developed the Security Quality Requirements Engineering (SQUARE) methodology to correct this shortcoming by integrating security analysis into the requirements engineering process.
SQUARE can be improved upon by considering the inclusion of generalized, reusable security requirements to produce better-quality specifications at a lower cost. Because many software-intensive systems face similar security threats and address those threats in fairly standardized ways, there is potential for reuse of security goals and requirements if they are properly specified. Full integration of reuse into SQUARE requires a common understanding of security concepts and a body of well-written and generalized requirements. This study explores common security criteria as a hierarchy of concepts and relates those criteria to examples of reusable security goals and requirements for inclusion in a new variant of SQUARE focusing on reusability, R-SQUARE.
Julia H. Allen & Noopur Davis
Measurement involves transforming management decisions, such as strategic direction and policy, into action, and measuring the performance of that action. As organizations strive to improve their ability to effectively manage operational resilience, it is essential that they have an approach for determining what measures best inform the extent to which they are meeting their performance objectives. Operational resilience comprises the disciplines of security, business continuity, and aspects of IT operations.
The reference model used as the foundation for this research project is the CERT Resilience Management Model v1.0. This model provides a process-based framework of goals and practices at four increasing levels of capability and defines twenty-six process areas, each of which includes a set of candidate measures. Meaningful measurement occurs in a context, so this approach is further defined by exploring and deriving example measures within the context of selected ecosystems, which are collections of process areas required to meet a specific objective. Example measures are defined using a measurement template.
This report is the first in a series and is intended to start a dialogue on this important topic.
Grace A. Lewis
This report presents general computation trends and a particular set of emerging technologies to support the trends for software-reliant systems of systems (SoSs). Software-reliant SoSs now tend to be highly distributed software systems, formed from constituent software systems that are operated and managed by different organizations. These SoSs are moving from a directed management structure (in which constituent systems are integrated and built for a specific purpose) to a virtual one (in which there is no central authority or centrally agreed purpose). This shift is introducing a need for new technologies to deal with the lack of central authority or centrally agreed purpose.
Harrison D. Strowd & Grace A. Lewis
This technical note presents the results of applying the T-Check method in an initial investigation of cloud computing. In this report, three hypotheses are examined: (1) an organization can use its existing infrastructure simultaneously with cloud resources with relative ease; (2) cloud computing environments provide ways to continuously update the amount of resources allocated to an organization; and (3) it is possible to move an application’s resources between cloud computing providers, with varying levels of effort required. From the T-Check investigation, the first hypothesis is partially sustained and the last two hypotheses are fully sustained within the context specified for the investigation.
From an engineering perspective, cloud computing is a distributed computing paradigm that focuses on providing a wide range of users with distributed access to virtualized hardware and/or software infrastructure over the internet. From a business perspective, it is the availability of computing resources that are scalable and billed on a usage basis. While scalability is the primary tenet of cloud computing, a host of other advantages are advertised as inherent benefits of cloud computing.
This document is a guidebook for conducting a Measurement and Analysis Infrastructure Diagnostic (MAID) evaluation. The MAID method is a criterion-based evaluation method that is used to assess the quality of an organization’s data and the information generated from that data.
The method is organized into four phases: (1) Collaborative Planning, (2) Artifact Evaluation, (3) On-Site Evaluation, and (4) Report Results. Using the MAID evaluation criteria as a guide, a MAID team systematically studies and evaluates an organization’s measurement and analysis practices by examining the organization’s data and observing how the data is manipulated during its lifecycle, from the collection of base measures to the information provided to decision makers.
The outcome of a MAID evaluation is a detailed report of an organization’s strengths and weaknesses in measurement and analysis.
Nancy R. Mead, Thomas B. Hilburn, & Richard C. Linger
Modern society depends on software systems of ever-increasing scope and complexity. Virtually every sphere of human activity is affected by these systems, from social interaction in our personal lives to business, energy, transportation, education, communication, government, and defense. Because the consequences of failure can be severe, dependable functionality and security are essential. As a result, software assurance is emerging as an important discipline for the development, acquisition, and operation of software systems and services that provide requisite levels of dependability and security. This report is the second volume in the Software Assurance Curriculum Project sponsored by the Department of Homeland Security. The first volume, the Master of Software Assurance Reference Curriculum (CMU/SEI-2010-TR-005), presented a body of knowledge from which to create a Master of Software Assurance degree program, both as a standalone offering and as a track within existing software engineering and computer science master’s degree programs. This report focuses on an undergraduate curriculum specialization for software assurance. The seven courses in this specialization are intended to provide students with fundamental skills for either entering the field directly or continuing with graduate-level education.
Nancy R. Mead, Julia H. Allen, Mark Ardis, Thomas B. Hilburn, Andrew J. Kornecki, Richard Linger, & James McDonald
Modern society depends on software systems of ever-increasing scope and complexity in virtually every sphere of human activity, including business, finance, energy, transportation, education, communication, government, and defense. Because the consequences of failure can be severe, dependable functionality and security are essential. As a result, software assurance is emerging as an important discipline for the development, acquisition, and operation of software systems and services that provide requisite levels of dependability and security. This report is the first volume in the Software Assurance Curriculum Project sponsored by the Department of Homeland Security. This report presents a body of knowledge from which to create a Master of Software Assurance degree program, both as a stand-alone offering and as a track within existing software engineering and computer science master’s degree programs. The report details the process used to create the curriculum and presents the body of knowledge, curriculum architecture, student prerequisites, and expected student outcomes. It also outlines an implementation plan for faculty and other professionals who are responsible for designing, developing, and maintaining graduate software engineering programs that have a focus on software assurance knowledge and practices. The second volume, Undergraduate Course Outlines (CMU/SEI-2010-TR-019), presents seven course outlines that could be used in an undergraduate curriculum specialization for software assurance.
John K. Bergey, Gary Chastek, Sholom Cohen, Patrick Donohoe, Lawrence G. Jones, & Linda Northrop
The Carnegie Mellon Software Engineering Institute held the U.S. Army Software Product Line Workshop on February 11, 2010. The workshop was a hands-on meeting to share Army and Department of Defense product line practices, experiences, and issues and to discuss specific product line practices and operational accomplishments. Participants reported encouraging progress on Army software product lines. This report synthesizes the workshop presentations and discussions.
Paul Clements & Len Bass
The primary purpose of the architecture for a software-reliant system is to satisfy the driving behavioral and quality attribute requirements. Quality attribute requirements tend to be poorly captured and poorly represented in requirements specifications, which focus on functionality. It is often left to the architect’s own initiative to capture the actual quality attribute requirements for a system under development. Quality attributes come about because of the business goals behind the system being developed. Business goals drive the conception, creation, and evolution of software-reliant systems. This report examines business goals from the point of view of the software architect. It presents a wide survey of business goal categories from the business literature and uses that survey to produce a classification of business goals. It introduces the concepts of goal-subject (the person or entity who owns the business goal) and goal-object (the person or entity that the goal is intended to benefit). Those concepts are essential to the structure of a business goal scenario—a systematic way to elicit and express business goals. The business goal scenario is the foundation of the Pedigreed Attribute eLicitation Method (PALM), which the authors developed for eliciting architecturally significant business goals. The report illustrates how to use architecturally significant business goals to produce a set of derived quality attribute requirements that can then be vetted and elaborated with the appropriate goal-subject(s) and goal-object(s). This approach has been vetted in two workshops, and the method has been piloted in an industrial setting.
Sagar Chaki & Arie Gurfinkel
Buffer overflows continue to be the source of the vast majority of software vulnerabilities. Solutions based on runtime checks incur performance overhead and are inappropriate for safety-critical and mission-critical systems requiring static—that is, prior to deployment—guarantees. Thus, finding overflows statically and effectively remains an important challenge. This report presents COVERT, an automated framework aimed at finding buffer overflows in C programs using state-of-the-art software verification tools and techniques. Broadly, COVERT works in two phases: INSTRUMENTATION and ANALYSIS. The INSTRUMENTATION phase is the core phase of COVERT. During INSTRUMENTATION, the target C program is instrumented so that buffer overflows are transformed into assertion violations. In the ANALYSIS phase, a static software verification tool is used to check for assertion violations in the instrumented code and to generate error reports. The authors implemented COVERT and evaluated it on a set of benchmarks derived from real programs. For the ANALYSIS phase, experiments were conducted with three software verification tools—BLAST, COPPER, and PANA. Results indicate that the COVERT framework is effective at reducing the number of false warnings while remaining scalable.
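The kind of transformation the abstract describes can be sketched roughly as follows. This is a hand-written illustration of the general idea, not COVERT's actual output; the function names and the explicit dst_size parameter are assumptions made for the example.

```c
#include <assert.h>
#include <string.h>

/* Original code: a potential overflow if len exceeds dst's capacity,
   which the function has no way to check. */
void copy_original(char *dst, const char *src, size_t len) {
    memcpy(dst, src, len);
}

/* Instrumented version: the tracked buffer size is threaded through,
   and the write is guarded by an assertion. A static verifier that
   proves the assertion can never fail has thereby proved the absence
   of this particular overflow; a counterexample to the assertion is
   an error report pointing at a concrete overflowing execution. */
void copy_instrumented(char *dst, size_t dst_size,
                       const char *src, size_t len) {
    assert(len <= dst_size);  /* overflow becomes an assertion violation */
    memcpy(dst, src, len);
}
```

The payoff of this encoding is that off-the-shelf assertion checkers such as the tools named in the report can be applied to the overflow problem without being modified to understand buffer semantics themselves.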
Lisa Brownsword, Carol Woody, Christopher J. Alberts, & Andrew P. Moore
This report describes the Carnegie Mellon Software Engineering Institute (SEI) Assurance Modeling Framework. It also discusses an initial piloting of the framework to demonstrate its value, along with insights gained from that piloting for the adoption of selected assurance solutions. The SEI is developing a way to model key aspects of assurance to accelerate the adoption of assurance solutions within operational settings for the U.S. Department of Defense (DoD) and other government organizations. As part of that undertaking, SEI researchers have developed an Assurance Modeling Framework to build a profile for an assurance capability area, such as vulnerability management, within an assurance quality such as security. The profile consists of many views developed using selected methods and models. From the analysis of these views, inefficiencies and candidate improvements for assurance adoption can be identified.
John T. Foreman & Mary Ann Lapham
The SEI supported a large, multi-segment, software-intensive system program for six years. During this time, the SEI team observed technical, organizational, and management situations that affected the overall execution of the program. This paper records some of those observations and describes suggested (and, in some cases, implemented) solutions, both for historical purposes and for their potential benefit to future programs.
Christopher J. Alberts & Audrey J. Dorofee
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. To help improve existing risk management practices, SEI researchers undertook a project to define what constitutes best practice for risk management. The SEI has conducted research and development in the area of risk management since the early 1990s. Past SEI research has applied risk management methods, tools, and techniques across the life cycle (including acquisition, development, and operations) and has examined various types of risk, including software development risk, system acquisition risk, operational risk, mission risk, and information security risk, among others.
In this technical report, SEI researchers have codified this experience and expertise by specifying (1) a Risk Management Framework that documents accepted best practice for risk management and (2) an approach for evaluating a program's or organization's risk management practice in relation to the framework.
Charles (Bud) Hammons
IED (improvised explosive device): programmatically, an unintended consequence or impediment that can blow up a development program.
Large-scale systems (LSS) being acquired by the Department of Defense (DoD) frequently involve the creation of multiple prime items, each acquired under a separate contract. The multiple prime items are often controlled by different organizations, with attendant variations in timelines and funding stability. In most cases, each of the prime items is software-intensive. LSS are encountered in several domains, including space-based systems and multi-platform systems such as the Army's Future Combat System. These are often referred to as transformational systems.
The concepts of time-certain development and incremental deployment of capabilities would appear to represent a fundamental change in the programmatic environment in which LSS are acquired. Such programs need a "roadmap" for acquisition that addresses this new environment. This paper explores how continued use of the existing acquisition roadmaps opens up the potential for running into program pitfalls (programmatic IEDs) that are not acknowledged on the map at hand.
For more information