Guiding Principles for Interoperability

NEWS AT SEI

Author

Dennis B. Smith

This library item is related to the following area(s) of work:

System of Systems

This article was originally published in News at SEI on: February 1, 2004

At the SEI, we are addressing the emerging need for interoperability between software systems and systems of systems. Addressing this need is essential for the integration of large military systems as well as for other software domains, including e-business, e-government, mergers and acquisitions, and communication between the embedded devices of systems traditionally considered to be hardware, such as automobiles and aircraft.

To meet this increasing need, organizations are attempting to migrate from collections of disparate, poorly related, and sometimes conflicting individual systems to more cohesive systems that produce timely, enterprise-wide data that can then be made available to other users. Meeting this goal has often proven to be difficult.

As Fred Brooks pointed out more than 15 years ago, the factors that make building software inherently difficult are complexity, conformity, changeability, and invisibility [Brooks 87]. With apologies to Brooks, I assert that achieving and maintaining interoperability between systems is also inherently difficult because of

  • complexity of the individual systems and of the potential interactions between systems
  • lack of conformity among human institutions involved in the software process and resulting lack of consistency in the systems they produce
  • changing expectations placed on systems (particularly software) and the resulting volatility in the interactions
  • invisibility of all of the details within and between interoperating systems

In spite of considerable effort, technical innovations aimed at improving software engineering have not successfully reduced the problems represented by these essential characteristics. Today’s interoperating systems are likely more complex (because of the massive increase in the number of potential system-of-systems states) than those examined by Brooks. They exhibit less conformity (because of the increased diversity of the institutions involved in construction of the constituent parts), are more volatile (because of the need to accommodate widely diverse users), and have even less visibility (because of size, number of participating organizations, etc.).

I suggest five principles that will inform our efforts in the selection of problems to address and in the analysis of potential solutions.

1 There Is No Clear Distinction Between Systems and Systems of Systems

The distinction between a system and a system of systems is often unclear and seldom useful. By this I mean that many, perhaps a majority, of “systems” are actually systems of systems. The distinguishing factor is less where a boundary might lie and more where control lies: most systems are now created with some components over which the integrator has less than complete control. Further, most systems must cooperate with other systems over which the integrator often has no control.

It is often stated that what one person considers to be a system of systems, another considers to be a system. Thus, any given entity could be seen as a component of a larger system, as a system in itself, or as a system of systems. And, more importantly, there usually is no top level, because inevitably there will be some demand to include any system of systems in a still more encompassing system of systems.

2 Interoperability Problems Are Independent of Domain

Most complex systems are now expected to interact with other complex systems. Regardless of domain, interoperability problems persist, and the costs of failures are huge. As an example, within the U.S. auto supply chain, one estimate put the cost of imperfect interoperability at one billion U.S. dollars per year, with the largest component of that cost due to mitigating problems by repairing or reentering data manually [Brunnermeier 99].

Our expectations are for even greater degrees of interoperability in the future, a goal that may prove difficult to achieve. The current generation of interoperable systems at least tends to encourage knowledgeable participants in the interaction—that is, the systems are being designed (or modified) specifically to interact with a particular system (or limited set of systems) in a controlled manner and to achieve predetermined goals. What is new about the future generations of interoperating systems is an emphasis on dynamically reconfigurable systems. These systems—or more accurately the services they provide—are expected to interoperate in potentially unplanned ways to meet unforeseen goals or threats. 
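To make the contrast concrete, the following is a minimal sketch, in Python, of the kind of runtime binding that dynamically reconfigurable interoperation implies: a consumer discovers and uses whatever providers currently advertise a needed capability, rather than being wired at design time to one known partner. The registry, the capability name, and the providers are all hypothetical and stand in for no particular system discussed here.

    # Minimal sketch (hypothetical names throughout): a consumer binds at
    # runtime to whatever providers currently advertise a capability, rather
    # than being wired at design time to one known partner system.
    from typing import Callable, Dict, List

    class ServiceRegistry:
        """Maps capability names to the providers currently offering them."""
        def __init__(self) -> None:
            self._providers: Dict[str, List[Callable[[str], str]]] = {}

        def advertise(self, capability: str, provider: Callable[[str], str]) -> None:
            self._providers.setdefault(capability, []).append(provider)

        def lookup(self, capability: str) -> List[Callable[[str], str]]:
            return self._providers.get(capability, [])

    registry = ServiceRegistry()
    # Two independently built systems advertise the same capability.
    registry.advertise("weather-feed", lambda region: f"forecast for {region} from system A")
    registry.advertise("weather-feed", lambda region: f"forecast for {region} from system B")

    # A consumer that knew neither provider at design time binds to both at runtime.
    for provider in registry.lookup("weather-feed"):
        print(provider("sector 7"))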

I do not suggest that the solutions eventually found for the interoperability problems should be identical across domains. However, the various communities should be aware of each other and look for commonality of high-level purpose and solution strategy—if not of solution detail—within other communities.

3 Solutions Cannot Rely on Complete Information

Classic software engineering practice assumes a priori understanding of the system being built, including complete and precise comprehension of

  • assumptions or preconditions expected of the system that are required for successful use, including standards, system and environmental conditions, and data and interactions expected of other hardware, software, and users
  • functionality, services, data, and interactions to be obtained from and provided to outside agents
  • non-functional properties or quality of service required by the system and expected of the system from interacting components

For interoperable systems, the same information is required by all participants: the individual components (i.e., the individual systems), the links between them, and the composite system of systems. It would therefore seem that an organization building a component (system) needs complete knowledge of all of these expectations to complete it. Unfortunately, we seldom (if ever) have such a complete and precise specification even when a single system is expected to operate in isolation.
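To make the point concrete, the sketch below shows what even a small fragment of such a specification might look like when written down explicitly for one interface. The service, the fields, and the thresholds are hypothetical, chosen only to illustrate preconditions, provided functionality, and a quality-of-service expectation; real systems rarely document their interfaces this completely.

    # Hypothetical interface with its assumptions made explicit: preconditions on
    # callers, the functionality provided, and a quality-of-service expectation.
    from dataclasses import dataclass
    import time

    @dataclass
    class TrackReport:
        track_id: str
        latitude: float    # WGS-84 decimal degrees (an assumed standard)
        longitude: float
        timestamp: float   # seconds since the epoch, UTC assumed

    class TrackReportService:
        """Accepts track reports from external systems.

        Precondition: reports arrive no more than 30 seconds after observation.
        Quality of service: submit() is expected to return quickly; long-running
        work is deferred rather than done in the call.
        """
        MAX_REPORT_AGE_SECONDS = 30.0

        def submit(self, report: TrackReport) -> bool:
            # Check the stated preconditions instead of leaving them implicit.
            if not (-90.0 <= report.latitude <= 90.0 and -180.0 <= report.longitude <= 180.0):
                return False
            if time.time() - report.timestamp > self.MAX_REPORT_AGE_SECONDS:
                return False
            # ... queue the report for processing ...
            return True

    service = TrackReportService()
    print(service.submit(TrackReport("T-001", 40.44, -79.94, time.time())))

Even a fragment like this embodies many decisions (coordinate standard, freshness threshold, error behavior) that other organizations, building other parts of the same system of systems, may resolve differently.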

The reality is that multiple organizations responsible for integrating multiple systems into interoperating systems of systems have multiple—and rarely parallel—sets of expectations about the constituent parts as well as different expectations about the entire system of systems. The decisions that they make about the overall system of systems (e.g., assumptions, preconditions, functionality, and quality of service) are just as likely to be as incomplete and imprecise as those of organizations responsible for a single system.

Given that having complete and precise information about a system of systems (and its constituent parts) is not possible, two approaches to managing the potential chaos are evident:

  1. Reduce imprecision by enforcing common requirements, standards, and managerial control.
  2. Accept imprecision and apply engineering techniques that are intended to increase precision over time, such as prototyping and spiral models of development.

The first approach alone may significantly increase interoperability, but it is also highly static and does not address the inherent imprecision in the software engineering process or the legitimate variation in individual systems. The second approach is limited in a different way, since without agreeing on some level of commonality, we will not approach the levels of interoperability we require.
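As a small illustration of the second approach, the sketch below shows a "tolerant reader" style of processing, one common way of accepting imprecision: the receiver extracts the fields it understands and records the rest, so that the shared specification can be tightened incrementally rather than agreed in full up front. The message format and field names are invented for the example.

    # Sketch of accepting imprecision (hypothetical fields): process what is
    # understood now, and log the rest to drive later rounds of agreement.
    from typing import Any, Dict, Tuple

    EXPECTED_FIELDS = {"id", "status", "updated"}

    def tolerant_read(message: Dict[str, Any]) -> Tuple[Dict[str, Any], Dict[str, Any]]:
        """Split a message into fields we understand and fields we do not."""
        understood = {k: v for k, v in message.items() if k in EXPECTED_FIELDS}
        unrecognized = {k: v for k, v in message.items() if k not in EXPECTED_FIELDS}
        return understood, unrecognized

    # A message from a partner system that only partly matches expectations.
    incoming = {"id": "42", "status": "active", "priority": "urgent"}
    known, unknown = tolerant_read(incoming)
    print("processed now:", known)
    print("logged for later negotiation:", unknown)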

4 No One-Time Solution Is Possible

We live in a dynamic and competitive world in which the needed capabilities of systems must constantly change: to provide additional benefits, to counter the capabilities of adversaries, to exploit new technologies, or to respond to increased understanding and the evolving desires and preferences of users. Simply put, systems must evolve to remain useful.

This evolution affects both individual systems and systems of systems. Individual systems must be modified to meet unique and changing demands of their specific context and users. The expectations that systems of systems place on constituent systems will likewise change with new demands. However, the changing demands placed on a system by its immediate owners and those placed by aggregate systems of systems in which it participates are often not the same, and in some cases are incompatible.

The result is that maintaining interoperability is an ongoing problem. This was confirmed by SEI interviews with experts who had worked on interoperability. In some cases, desired system upgrades did not happen because of the impending effect on related systems. In other cases, expensive (often emergency) fixes and upgrades were forced on systems by changes to other systems.

To maintain interoperability, new approaches are needed to

  • vet proposed requirements changes at the system and system-of-systems level
  • analyze the effect of proposed requirements and structural changes to systems and systems of systems
  • structure systems and systems of systems to avoid (or at least delay) the effect of changes
  • verify interoperability expectations to avoid surprises when systems are deployed

New approaches to structuring systems that anticipate changes, that vet requirements and structural changes and analyze their consequences, and that verify that systems of systems perform as anticipated will help to maintain the interoperability of related systems.
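One concrete way to structure systems so that the effect of changes is delayed or absorbed is an adapter layer between a consumer and the systems it depends on, as sketched below. The provider, its version change, and the field names are hypothetical; the point is only that when a partner's interface changes, the adapter changes while the consumer does not.

    # Hypothetical adapter that shields a consumer from a partner's interface change.
    from typing import Dict

    class ProviderV2:
        """A partner system whose message format changed since version 1."""
        def fetch(self) -> Dict[str, str]:
            # Version 2 split the old "name" field into two parts.
            return {"given_name": "Ada", "family_name": "Lovelace"}

    class ProviderAdapter:
        """Presents the stable version-1 shape the consumer was built against."""
        def __init__(self, provider: ProviderV2) -> None:
            self._provider = provider

        def fetch(self) -> Dict[str, str]:
            raw = self._provider.fetch()
            return {"name": f"{raw['given_name']} {raw['family_name']}"}

    def consumer(source) -> None:
        # The consumer still sees the version-1 shape and needs no change.
        print("received:", source.fetch()["name"])

    consumer(ProviderAdapter(ProviderV2()))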

5 Networks of Interoperability Demonstrate Emergent Properties

Emergent properties are those properties of a whole that are different from, and not predictable from, the cumulative properties of the entities that make up the whole. In very large networks, it is not possible to predict the behavior of the whole network from the properties of individual nodes. Such networks are composed of large numbers of widely varied components (hosts, routers, links, users, etc.) that interact in complex ways with each other, and whose behavior “emerges” from the complex set of interactions that occur.

Of necessity, each participant in such real-world systems (both the actor in the network and the engineer who constructed it) acts primarily in his or her own best interest. As a result, perceptions of system-wide requirements are interpreted and implemented differently by various participants, and local needs often conflict with overall system goals. Although collective behavior is governed by control structures (e.g., in the case of networks, network protocols), central control can never be fully effective in managing complex, large-scale, distributed, or networked systems.

The net effect is that the global properties, capabilities, and services of the system as a whole emerge from the cumulative effects of the actions and interactions of the individual participants, propagated throughout the system. The collective behavior of the resulting complex network thus exhibits emergent properties that arise out of those interactions.
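A deliberately tiny simulation can make this emergence visible. In the sketch below (a toy, not a model of any real network), every node greedily chooses whichever of two routes was less loaded in the previous round; the system-wide result is a persistent oscillation of load that no individual node intends and that cannot be read off from any single node's rule.

    # Toy illustration: self-interested local choices produce a global load
    # oscillation that no single node intends or predicts. Parameters arbitrary.
    NODES = 20
    ROUNDS = 6

    choices = ["A"] * NODES  # every node starts on route A

    for rnd in range(ROUNDS):
        load = {"A": choices.count("A"), "B": choices.count("B")}
        print(f"round {rnd}: load A={load['A']}, load B={load['B']}")
        # Each node switches to whichever route was less loaded last round.
        cheaper = "A" if load["A"] < load["B"] else "B"
        choices = [cheaper] * NODES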

The effect of emergent properties can be profound. In the best cases, the properties can provide unanticipated benefits to users. In the worst cases, emergent properties can detract from overall capability. In all cases, emergent properties cast doubt on predictions about system behavior such as reliability, performance, and security. This is potentially the greatest risk to wide-scale networked systems of systems. The SEI recognizes that any long-term solution must involve better understanding and management of emergent properties.

These principles provide a basis for understanding interoperability. Future columns will outline how we are using the principles to identify solutions for basic interoperability problems.

References

[Brooks 87]
Brooks, Fred. “No Silver Bullet: Essence and Accidents of Software Engineering.” IEEE Computer 20, 4 (April 1987): 10-19.

[Brunnermeier 99]
Brunnermeier, Smita B. & Martin, Sheila A. Interoperability Cost Analysis of the U.S. Automotive Supply Chain. National Institute of Standards & Technology, March 1999.

About the Author

Dennis Smith is the leader of the SEI initiative on the integration of software-intensive systems. This initiative focuses on interoperability and integration in large-scale systems and systems of systems. Earlier, he was the technical lead in the effort for migrating legacy systems to product lines. In this role he developed the method “Options Analysis for Reengineering” (OAR) to support reuse decision making. He has also been the project leader for the computer-aided software engineering (CASE) environments project. Smith is a co-author of the book Principles of CASE Tool Integration. He has an M.A. and a Ph.D. from Princeton University and a B.A. from Columbia University.

