NEWS AT SEI
This library item is related to the following area(s) of work: System of Systems
This article was originally published in News at SEI on: March 1, 2000
One of the most positive developments in software process thinking has been "personalization" in the Personal Software Process (PSP). PSP is concerned with building competence in a fundamental skill (programming) at the most fundamental level (the individual software engineer). An investment in PSP is, in effect, an investment in personal competence. Evidence indicates that this investment pays rich dividends to the investors: the organizations that pay for the training and the developers who complete it. As useful as PSP is, those of us experienced in building systems from commercial software components understand that the role of computer programming (in the traditional sense) is becoming less pronounced as our dependence on commercial software components becomes more pronounced. In the new world of components, software engineers are more likely to be preoccupied with discovering what components do, determining how to structure a design problem to make best use of components, deciding how the components should be assembled, and diagnosing why the assembly is behaving strangely. Answers to these kinds of questions are precursors to a traditional programming activity, and getting the "wrong" answers inevitably leads to the "wrong" kind of programs, resulting in wasted time and effort and, ultimately, failure.
The software engineer must possess knowledge, and sometimes very deep knowledge, about the components being used in a system in order to answer the above kinds of questions, or indeed even to know which questions to ask. Obtaining this knowledge, which I will call "component competence," requires investment. But unlike an investment in PSP, which deals with the timeless competence of computer programming (the essentials of which have not changed in 40 years), component competence must be sustained in the face of a fast-changing technology landscape. In this article, I will explain what component competence is, why it is essential to building component-based systems, and how it can be obtained "just in time" to make good engineering decisions.
Even the most unskilled chef understands (or at least believes) that the quality of a stew depends upon the quality of the ingredients used to make the stew. The same, of course, is true for component-based systems, that is, systems composed substantially from components. We can expect that the properties of the components we use will influence the properties of the system that we build, and perhaps will also influence the development process itself. For example, if all of our components are resource hogs, the final system will likely also be a resource hog. Similarly, if all of the components are "buggy," then we can be sure that the development process will be skewed toward a lot of debugging and repair work.
Which component properties most influence the development process? When we think about properties of components, we most often think about things like performance, usability, functionality, and so forth. These and other similar properties are certainly important, but they are not the properties that define the fundamental challenge of building a component-based system. Instead, I have in mind three properties that apply to most software components: complexity, idiosyncrasy, and instability. These properties are a consequence of the way the component market works rather than the way the components work: market demand for features makes components complex, vendors' need to differentiate their products makes them idiosyncratic, and short, competitive release cycles make them unstable.
These properties, taken together, pose a significant challenge to system designers and software engineers, especially as the number of components used in systems increases. Nowadays information systems of even modest scale will make use of a dozen or more commercial components. Knowing how any one component works can be a formidable challenge. Knowing how they all work, or the best ways to combine them, is more difficult still. More important, new component releases and the emergence of whole new categories of components happen much more quickly than the time it takes to build (or sometimes design) information systems. This means that component competence must often be obtained "just in time" to make key decisions, such as which components to buy and how to integrate them.
There is no doubt about it: a "hands-on" approach is required to obtain component competence. Components are simply too complex, their documentation too sketchy, and vendor literature too glossy to be exclusively relied upon.
There are two basic approaches to obtaining component competence, but the premise for both approaches is learning by doing. The first approach, "just do it," is the more direct of the two. In this approach, critical design and implementation decisions are made on the basis of whatever component competence is already available. It is hoped that this competence is sufficient to avoid big mistakes, and that engineers will become more facile with the components as the project proceeds. Sometimes this works. Sometimes it doesn't. I can only say that a healthy proportion of the component-based project failures that we have encountered can be attributed to naïve assumptions about what components do and how they interact, assumptions that could have, and should have, been verified before key design and implementation commitments were made.
The second approach is more oblique. It begins with the building of toys. Before scoffing at this idea as "academic," it is important to reflect on the importance of toys in the learning process. Play is fundamental to the human condition. Philosophers and psychologists alike have long recognized homo ludens (human at play) as a natural state of being. Children (and even adults) play as an effective way of exploring how things work and their place in the world in an environment that is forgiving of mistakes. In our experience, engineers can most effectively learn about what components do, and how to combine them, through an analogous process of constructive play. Building toys allows engineers to explore possibilities without all of the complexities-and risks-inherent in a "live" design problem.
Within the past year or so, a new component technology has emerged in the marketplace, called Enterprise JavaBeans™ (EJB). EJB is a Java-based approach for building "scalable, secure, distributed, transactional, interoperable enterprise systems." These are just a few of the claims made by EJB vendors. Do you believe-or disbelieve-these claims? What is the basis for your beliefs? In our project at the SEI (COTS-Based Systems), we posed these questions to ourselves because some of our customers were beginning to nose around EJB. In order to gin up some competence quickly, we built a toy.
Our toy was a simple echo server: a client passes a string to the echo server and the server responds by sending back the string. To make things more interesting the server also attaches a prefix to the front of the string and a suffix to the end of the string it is presented. Our toy is illustrated in Figure 1. (Some EJB details, such as home and remote objects, are not shown.)
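The article does not show the toy's source code, but the application logic it describes is tiny. The following is a hypothetical sketch of the echo server's business logic in plain Java; the class and method names are my own, and the EJB plumbing (the SessionBean interface, home and remote interfaces, deployment descriptors) that a real enterprise bean would require is deliberately omitted, just as it is in Figure 1.

```java
// Hypothetical sketch of the echo toy's application logic.
// In a real EJB deployment this class would implement
// javax.ejb.SessionBean and be paired with home and remote
// interfaces; the prefix/suffix logic below is essentially all
// the "application code" the toy needs.
public class EchoBean {
    private final String prefix;
    private final String suffix;

    public EchoBean(String prefix, String suffix) {
        this.prefix = prefix;
        this.suffix = suffix;
    }

    // The single business method: echo the client's string back,
    // wrapped with the configured prefix and suffix.
    public String echo(String message) {
        return prefix + message + suffix;
    }

    public static void main(String[] args) {
        EchoBean bean = new EchoBean(">> ", " <<");
        System.out.println(bean.echo("hello")); // prints ">> hello <<"
    }
}
```

The point of keeping the business method this trivial is exactly the one made below: with almost no application logic, every observable behavior of the deployed toy is attributable to the EJB mechanisms rather than to the application.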
One thing that can be said for this toy is that it is simple. Its design is motivated by the desire to have as little application functionality as possible combined with the greatest possible coverage of EJB features. The clear boxes depict the components of our toy. The blue boxes depict the code we had to write, and there was precious little of that. On the other hand, because the application is so trivial, we were able to "play" with the toy to explore many different facets of EJB that would have been difficult to explore in a live project with more complex functionality. Because there is so little application logic, almost all of the play is devoted to the EJB mechanisms themselves: for example, distributed transactions and the portability of beans across different vendors' servers.
Because the purpose of this article is not to provide an exegesis on EJB, I will not bore you with the details of these and other playful excursions into the land of EJB. I will make two observations, however. First, each excursion required only a small investment in time-on the order of one to three days. Second, it is no idle boast to state that within a matter of two or three weeks we were able to have pointed and detailed discussions with the architect of one commercial EJB server on the limits of his product and the EJB specification. We were also able to intelligently discuss, and predict, enhancements that were planned for the EJB specification. Further, we were more familiar with the EJB specification and workings of EJB products than were researchers we had met who were building "formal models" of the EJB specification. That's not at all bad for a few weeks' worth of work!
Still, the skeptical reader may observe (rightly) that system development houses have as their objective building systems, not an engineer's component competence. Unless the building of toys can be seen as a clear means to this end, it might be better to resort to the more direct "just do it" approach. Fortunately, I can make this connection between toys and engineering design with model problems.
A model problem is a toy that has been "situated" into a real design problem by the addition of design context, such as the requirements the end system must satisfy and the constraints under which any solution must operate.
A model problem is really a way of posing a question: what is the best way to use a component or an ensemble of components to achieve some end objective? There may be several model solutions, although in practice we usually have one particular solution in mind. To let us focus quickly on the essence of the problem, and to explore alternative model solutions quickly, we try, to the extent possible, to maintain the parsimonious simplicity of toys. Figure 2 below depicts the structure of a model problem.
As you can see, we have added the design context to our toys. Before we build our toy we also must be sure that we are focusing on the important questions and not just playing for the sake of playing. To this end we also must define evaluation criteria: how will we know that the proposed solution is acceptable? Sometimes the criteria will focus on feasibility (can the model solution be constructed at all?). Other times the criteria will include such things as performance goals or other quality attributes.
You will also observe that the model solutions produce two kinds of output, both of which are the result of a learning process. One output is a posteriori evaluation criteria. In almost every situation where we have built model solutions, we have learned something unexpected that should have been part of the a priori evaluation criteria had we known better. The second output is what we refer to as repairs. It may be that the model solution does not quite satisfy the evaluation criteria. However, it is often the case that a small change (i.e., a "repair") to the design context or the toy itself could resolve the problem. For example, a system requirement might be relaxed, or an alternative component selected.
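The inputs and outputs just described can be summarized as a simple data structure. The sketch below is only an illustrative restatement of Figure 2 in code; the field names are my own shorthand, not part of any published SEI method or API.

```java
// Illustrative data sketch of a model problem (Figure 2).
// Inputs: a ruthlessly simple toy, situated by design context and
// judged by criteria defined before the solution is built.
// Outputs: criteria discovered only by building, plus "repairs".
import java.util.ArrayList;
import java.util.List;

public class ModelProblem {
    final String toy;
    final List<String> designContext = new ArrayList<>();
    final List<String> aPrioriCriteria = new ArrayList<>();

    // Outputs of building and evaluating a model solution.
    final List<String> aPosterioriCriteria = new ArrayList<>();
    final List<String> repairs = new ArrayList<>();

    ModelProblem(String toy) { this.toy = toy; }

    public static void main(String[] args) {
        ModelProblem p = new ModelProblem("EJB echo server");
        p.designContext.add("must run on the customer's chosen EJB server");
        p.aPrioriCriteria.add("acceptable round-trip latency");
        // Building the model solution teaches something unexpected...
        p.aPosterioriCriteria.add("bean must survive server-initiated passivation");
        // ...and suggests a repair rather than outright failure.
        p.repairs.add("relax the requirement, or select an alternative component");
        System.out.println(p.repairs.size() + " repair(s) identified");
    }
}
```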
Because toys are, by design, kept ruthlessly simple, there is still a gap between the model solution and the end system. However, the evaluation criteria and design context should provide a sufficient basis for making predictions about the utility of a model solution. In any event, model solutions provide a foundation in experience that allows us, in the words of Winston Churchill, to "pass with relief from the tossing sea of Cause and Theory to the firm ground of Result and Fact." It is this grounding in result and fact, which is a consequence of our competence-building exercises, that permits us to reduce design risk.
Returning to our EJB illustration, it is now easy to see how the simple EJB toy can be "constrained" to serve as a model problem in a design activity. In fact, in our work we generally skip the toy-building activity and head straight for model problems. Thus, each of the explorations mentioned above (distributed transactions, bean portability, etc.) was itself a model problem, and building the model solutions (over a year ago, as of this writing) taught us specific things about the limits of the EJB products we used and of the EJB specification itself.
Each of these investigations was driven by a particular set of design questions posed about an information system that our project had been designing. What we learned from these brief investigations could never have been discovered from documentation and vendor literature.
By now our EJB toy is quite dated, and the competence we obtained from building it is quite stale. However, these same toys could be built with today's versions of EJB, and could possibly be extended to explore new EJB features. We are confident that doing so would be a wise investment for any project considering using EJB if the engineers do not already have current experience with the technology.
The use of commercial components poses significant challenges to the engineering design process. Most notably, it requires the availability of rather deep competence in the components being used. Unfortunately, this competence, once obtained, decays quite rapidly in the current hyper-competitive component marketplace. The solution is to find a way to develop this competence cheaply and effectively, and in the context of a particular design problem. We do so through the development of toys and model problems, and this has proven to be extremely effective in helping us make engineering decisions based upon observable fact rather than vendor literature.
In the next issue of news@sei I will discuss how model problems fit within an iterative engineering design process. I will also describe how the "three Rs" of this process (Realize model solutions, Reflect on their utility and risk, Repair the risks) can be used to reduce design risk for component-based systems.
While I can't make you competent in EJB with a one- or two-paragraph description, I can tell you just enough about it for you to understand the examples in this article.
Developers write business logic as enterprise beans. Enterprise beans are components that are deployed into servers. EJB servers (EJB also has containers that execute in servers, but for our purposes we can lump container and server together) provide a runtime environment for enterprise beans, managing when they are created, activated, deactivated, cached, and deleted. EJB servers also provide a number of important services to beans, including transactions, naming, security, and thread management.
There are two major classes of enterprise bean: session bean, and entity bean. Session beans are used to export services to clients; each session bean can be connected to at most one client at a time. Entity beans are used to model business objects; they correspond to rows in a relational database table. The EJB server manages the flow of data between entity beans and relational databases. Many clients (most often these are session beans) can share a single entity bean.
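The division of roles between the two bean classes can be caricatured in plain Java. This standalone sketch is not EJB code: real enterprise beans implement javax.ejb.SessionBean or javax.ejb.EntityBean and run inside a server, which manages their lifecycle and their connection to the database. All class and method names here are invented for illustration.

```java
// Plain-Java caricature of the two classes of enterprise bean.
import java.util.HashMap;
import java.util.Map;

class CustomerEntity {            // entity bean: models a business object;
    final int id;                 // corresponds to a row in a database table
    String name;
    CustomerEntity(int id, String name) { this.id = id; this.name = name; }
}

class CustomerSession {           // session bean: exports a service to a client
    // The map stands in for the database the EJB server would manage.
    private final Map<Integer, CustomerEntity> table;
    CustomerSession(Map<Integer, CustomerEntity> table) { this.table = table; }

    // Business method a client would call on the session bean, which
    // in turn reads shared entity-bean state.
    String lookupName(int id) {
        CustomerEntity row = table.get(id);
        return row == null ? null : row.name;
    }
}

public class BeanSketch {
    public static void main(String[] args) {
        Map<Integer, CustomerEntity> db = new HashMap<>();
        db.put(1, new CustomerEntity(1, "Acme Corp"));
        CustomerSession session = new CustomerSession(db);
        System.out.println(session.lookupName(1)); // prints "Acme Corp"
    }
}
```

Note what the sketch leaves out, because the container provides it: transactions, security, caching, and the activation and passivation of beans, which are precisely the mechanisms the toy described above was built to explore.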
Commercial software components are software implementations, distributed in executable form, that provide one or more services. Examples of commercial software components include Microsoft Word, Netscape Communicator, Oracle relational database, SAP, and so forth.
Kurt Wallnau is a senior member of the technical staff in the Dynamic Systems Program at the SEI, where he is co-lead of the COTS-Based Systems Initiative. Before that he was a project member in the SEI Computer-Aided Software Engineering (CASE) integration project. Prior to coming to the SEI, Wallnau was the Lockheed Martin system architect for the Air Force CARDS program, a project focused on "reusing" COTS software and standard architectures.
The views expressed in this article are the author's only and do not represent directly or imply any official position or view of the Software Engineering Institute or Carnegie Mellon University. This article is intended to stimulate further discussion about this topic.