NEWS AT SEI
This article was originally published in News at SEI on: June 1, 1998
In this article, David Carney, Ed Morris, and Kurt C. Wallnau engage in a wide-ranging discussion about evaluation of COTS software. The views expressed in this article are those of the participants only and do not represent directly or imply any official position or view of the Software Engineering Institute or Carnegie Mellon University. This article is intended to stimulate further discussion about these topics.
How COTS software affects the development process Bill Pollak: In the introduction to the Features section, COTS Software Evaluation, we wrote about the Global Transportation Network (GTN) project. It’s easy to see how such a project could get out of hand.
Kurt Wallnau: What they were doing with GTN reflects about the best way they could have handled the complexities they were facing. What’s interesting is that from the outside, it did look like it was out of control; it did have certain characteristics of chaos. But when you spoke with the people on the project, you saw that they were coping with those complexities about as well as anyone could.
Dave Carney: We did see one project, though, that was out of hand despite being apparently very well in hand. They were doing everything by the book, but it was by the wrong book. They were playing by the rules, but they were the wrong rules. And in that sense, it was chaos.
KW: Wrong rules for COTS?
DC: Wrong rules for an NDI (non-developmental item) system. They had defined IPTs (integrated product teams). The company had a set of defined processes for product development, and since no one had told them not to use them, they were using these defined processes, and they were good old-fashioned waterfall life-cycle processes. But they were mandated to use a whole bunch of products from other sources. And it was just a fundamental mismatch.
Ed Morris: The only caution I have is that projects not doing COTS development often fail in the same way. Failure is not unique to COTS.
DC: Ed, this was six months late, seven months into the project! This was a pretty severe slip!
EM: The interesting thing is that most projects won’t know they’re six months late, seven months into the project.
KW: I think the rhetorical point is, here’s a project that was COTS-based, or NDI, that was playing by the rules of existing thought on how to build systems, and they failed miserably. And then there’s another project, which was building a much more massive NDI system—there were more than 50 COTS products in the fielded system—they were not playing by the usual rules, they were playing by a completely different set of rules, apparently, and they appear to be having some success; their customers are happy.
DC: I think Ed’s caution is a very valid caution. But I think it is safe to say, “Let’s leave the comparison out of it.” It is safe to say that they embarked on the project with the assumption that they were going to complete the project faster, if not better and cheaper, because they were using pre-existing components. And there was a radical mismatch between the reality of falling so far behind so quickly and the assumption that they were going to get there faster.
EM: The interesting thing is that COTS forces the life cycle forward, in fact. If this were a traditional waterfall development, they would be doing specifications. COTS forced them to implement earlier in the life cycle. Many traditional projects don’t know that they don’t know enough early on. And the interesting thing is, with this project, they could tell up front that they didn’t know enough to be doing what they were doing.
DC: You see, I don’t think it was a question of them not knowing what they were doing, it was that they didn’t know how to do it. There were smart people all through that project, but there was the sense that everyone had their hands tied because nobody knew what to do next. And they were supposed to get a delivery of NDI from someone else, and it didn’t come. And nobody knew who was in charge! And nobody picked up the phone and said, “Where’s that thing?” It was amazing!
EM: In some ways, this is analogous to a single system being built by one organization in which different pieces are built at different sites. I mean, I could go back to my previous experience, where there were multiple sites, some working on the front end of the compiler and some working on the back end. And we were always getting delayed by someone who was not delivering a piece when it was expected. The difference is that, with COTS-based systems, there is no single source that says, “You will get this at some point in time.”
DC: Well, if you take out all the good, but leave in the drawbacks to distributed development, it’s probably intuitively true that many of those drawbacks will apply themselves to a COTS system. Because a COTS-intensive development is a distributed development, by definition. So it’s prone to those kinds of drawbacks.
EM: It’s true. It’s distributed, and there’s also no control over many of the components.
KW: One of the things that Ed said that I’d like to reinforce is that when you’re dealing with COTS products, you want to try and move things forward, or up into the front end of the life cycle – whatever these things are. What I’ve seen is that the people who wanted to do the design, those who had the requirements for design at the abstract level, did not have detailed enough expertise on the individual products themselves, or what the capabilities and liabilities of the products were. Since they couldn’t test feasibility of design, they started to work in a kind of vaporous level of abstraction.
For example, they just assumed that things like browsers or CORBA would work, because these things have a really well-defined protocol, and there must be a way to map between protocols. Of course, things got a little more complicated than all of that. So they compressed the schedules, expecting to get prototypes out within six months because they were using off-the-shelf stuff. But when requirements and feasibility work are compressed together with detailed implementation, things start to behave in ways that are not expected.
We’ve had people from the SEI who were essentially like a “skunkworks” attached to the design process to test feasibility at very detailed levels, using model problems and so on, to help make design decisions. And I don’t think that the program staff people were prepared to deal with that level of implementation complexity during the design. The members of the IPTs were at a high level of seniority in the group, so they felt that they were senior enough to specify requirements and do some sort of abstract design. But they were no longer really in touch with what was going on in the commercial marketplace.
BP: On another level, this is similar to some of the things that go on in the publishing world, where you take people who are used to designing for print and then you bring in desktop publishing; and when people get comfortable with desktop publishing, you bring in Web publishing. Some of this is simply future shock.
DC: I think what Kurt’s getting at is that in the old world, you really could draw your boxes, and talk about the wings going here and the engines going there, and really not worry about the connections, and eventually down the road, all those details would get filled in. You can’t do that anymore.
KW: Because you’re in control of the mechanism, because you’re going to design the whole cloth anyway.
DC: Right. And if you need water tanks in the wings, well, you add them later; you don’t worry about that right now. But the thing is, with COTS, you don’t have that freedom.
EM: The thing is, in the traditional process, the front end of the process acted as somewhat of a learning mechanism. Or at least at the point where you actually got to building something, you knew something about it. And as Kurt was saying, with COTS, you don’t know enough at the point when you are implementing.
KW: And the other irony is, you start to compress things earlier on. You end up knowing less about it. You’re trying to make big implementation commitments, instead of having a smooth commitment slope in which you gradually commit yourself to more and more design details. If you pick a product, or a set of products, it’s kind of like a step function; you’re bringing a large body of commitments in early, before you really have the basis to make those tradeoffs. And that’s another consequence of moving things forward in the life cycle. Now, on the other hand, you could treat COTS as implementation details and mechanisms, and do the design, and then hope to find products. And I guess that’s a feasible way of doing it; you just run the risk of not being able to find products that work.
BP: You mentioned that in your experience with JEDMICS (Joint Engineering Data Management Information and Control System), you employed kind of a skunkworks function.
KW: Yes, we had people from the SEI who provided the means for obtaining just-in-time product expertise to help in critical design decisions. A lot of design decisions require good, detailed understanding of the workings of the products. This is often not available to the designers, for a number of reasons. For one thing, it’s just hard to keep track of the nuances of the products.
BP: So can you abstract from this experience the idea that it’s a good idea to have a skunkworks function in any evaluation?
DC: Whether you want them to or not, there are some intricate implementation issues that arise right at the very beginning, whereas in the previous way of designing systems, there was a high-level design that you created while not knowing too much, and then you could leave that detail for the implementation. But the reality is, because of this compression, the implementation details are going to make themselves manifest regardless of whether you have the skunkworks function or not.
KW: I guess I hesitated because I’m becoming much more conservative as I get older about being categorical about saying, “This is how I do things.” I think the issue is, how do you get that detailed knowledge at the time when you need it to make those design decisions? And what we’re finding is working for us with JEDMICS is this use of a skunkworks function for solving model problems; to essentially provide ensemble solutions to these critical design problems. A skunkworks function can say “Not only can things work this way, but here are some good ways of doing it. Here are two or three options, because we know how these products fit together.” Another approach would be to hire a consultant.
DC: There’s another instance – a radically different COTS project. They wanted to build this COTS system, and this was a company that did large stock transactions. For them, having their system down for an hour might be, say, a $20 million loss. For a day, it might be an order of magnitude higher. So when they put their system together, they simply called up someone from the relational database company – there were other products involved, but we’ll use the database as an example—and said, “We want to build a COTS system using your product, we don’t want a custom-built system, but we need someone who knows everything internal about the product, every line of the code.” And the guy from the database company said, “Well, that would be our Vice-President in Charge of Everything.” So they asked how much this person cost, and the answer was, “Oh, he’d cost about a million dollars a day,” and they said, “Thanks. We’ll pay it.” And they got this brilliant guy to put together the system, who knew everything about the internals. Now, that’s a way to do it, if you can afford a million a day.
KW: But think about what that means. All that means is they needed to have deep, profound expertise about their products to make them work.
BP: How would you identify the abstract best practice to encapsulate what you’re talking about?
KW: I think the best practice is making sure that you have that profound product knowledge available when you need to make design decisions, and being able to know when you need to have that knowledge. And there are different ways of getting that knowledge: one of them is hiring expertise, and the other is using a skunkworks to develop just-in-time knowledge.
DC: I’d be more pessimistic. I’d say that you must be willing to realize that there is this black hole of cost—not just money cost, but cost—and try to be realistic to say, “Can we build the system given what we have? We don’t have a million dollars a day for the best consultants.” And if you can’t, then what are the tradeoffs you can make? Should you not build the system, should you settle for a little bit less, and so on? It’s all really an awareness issue. I’m agreeing with what Kurt said, but I’m suggesting that you may not be able to solve all your problems.
KW: That’s a little bit darker.
DC: Well, that was what I was aiming for. Because in the ideal world, you should know what you need to do, and then go out and do it. But that doesn’t always apply. You may not be able to do it.
KW: I agree that there’s a certain degree of uncertainty in building systems from COTS products that you need to accept, uncertainty about your knowledge of what the products do, about places where you have no knowledge at all. One of the uses of evaluation is to try to reduce the implications of uncertainty on the overall risk. I mean, how can you use evaluation to reduce or buy down that risk? Which also explains why a simple requirements-specification approach to COTS evaluation doesn’t really get at the issue. Because on one hand, you may not know exactly what the requirements are. But also, you still have the fundamental problem of discovering whether the product actually meets that particular requirement. You’re still kind of discovering. And that’s just if you’re looking at the product by itself. What we’ve discovered in JEDMICS and in other case studies is that the product-selection decisions aren’t independent of each other; they’re often dependent or co-dependent. Picking one product usually depends on the products you pick in other categories, so a selection in one place usually has to do with a selection somewhere else.
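Kurt’s point about co-dependent selections can be made concrete with a small sketch. The product names and the incompatibility list below are purely hypothetical; the point is only that once some pairs of products don’t work together, you are really choosing feasible ensembles, not independent best-in-category winners.

```python
from itertools import product as cartesian

# Hypothetical candidate products in three categories.
candidates = {
    "database":   ["DB-1", "DB-2"],
    "middleware": ["MW-1", "MW-2"],
    "browser":    ["BR-1"],
}

# Hypothetical pairs known not to work together.
incompatible = {("DB-1", "MW-2"), ("DB-2", "MW-1")}

def feasible_ensembles():
    """Yield combinations (one product per category) with no bad pair."""
    for combo in cartesian(*candidates.values()):
        pairs = {(a, b) for i, a in enumerate(combo) for b in combo[i + 1:]}
        if not any(p in incompatible or p[::-1] in incompatible for p in pairs):
            yield combo
```

Of the four possible combinations here, only two survive; picking the “best” database in isolation could easily rule out the only middleware that fits with the rest of the ensemble.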
BP: In our feature article, you categorized five ways in which COTS products influence the design phase of development for COTS-intensive systems. Is that the beginning of a framework for COTS evaluation?
KW: I think the danger of going in the direction of generality is that there is a general process that describes everything but that doesn’t define anything in particular, and that’s the risk.
DC: You go toward specificity, but there’s a place at which you stop.
KW: Right, and we don’t know where that is yet. We believe that evaluation helps people make decisions, and there are different kinds of decisions, therefore there are different kinds of decision aids, different kinds of evaluation techniques. We don’t believe that any one of those techniques works for any and all situations. We don’t believe that we’re going to get so far in prescription that we’re going to categorize all of the different areas of variability in your problem area, and be able to index this original collection of techniques. We made a stab at that way back … about a year ago now. It was kind of a classification scheme; polarities and so on. The thing is, the space of variability is just a little too broad to be conveniently modeled in that way. So I think we’re going to go somewhere in the middle. I think of a framework as a collection of concepts that say, “This is what’s important, this is going to help you structure your understanding of evaluation. Here are some collections of techniques; here are some indicators of when they might be useful. Here are their strengths and limitations, and here are resource issues.” To me, that would be a framework. Sort of a structured body of knowledge about evaluation, without necessarily being prescriptive.
EM: What it comes down to is that we don’t think we can ever have the definitive COTS development process that can be applied in all instances. But we certainly want to help people put together a process and select techniques.
DC: I don’t think we’ll ever have a universal process for this or for anything. But I do think that evaluation processes are absolutely doable, and that is our goal.
BP: There seems to be something fundamentally different between, say, building a house and building a large software system. Somehow the screws that are used in houses are different from the software pieces that are used in large systems, and the key to understanding the kinds of problems that we’ve been talking about lies in understanding this difference.
KW: There have been analogies of software to hardware, such as, “Why can’t software be like chips?” and “Why can’t software components fit together like hardware components?” And these analogies are quite useful on some levels, but often dangerous and misleading on other levels. When we write software, we are conceptualizing something, and we conceptualize things differently. And there are no universal laws. There is an external reality, but this external reality doesn’t really exist with software the way it does with hardware; software is mostly conceptual.
DC: COTS software is bridges and tractors, it’s not screws and pipes. At least there has been very little of what has been called “product integrity” in software.
BP: What do you mean by “product integrity”?
DC: Well, a toaster toasts bread, for example. Generally, it doesn’t boil water. But word processors have spreadsheets, and spreadsheets have editing capabilities … They tend to include more than a little, tiny, limited piece of functionality.
KW: “Integrity” means that, for example, a cell body has integrity if it’s not leaking all over the place; I mean, is there a boundary around it so that we know what this is and what it does?
DC: In fact, there’s no boundary to what a word processing system does.
KW: But it’s really not clear what products do these days. Look at Oracle. What does Oracle do?
DC: But you’re not talking about relational databases, you’re talking about the Oracle product. A relational database is a fairly well-defined chunk of functionality. But the Oracle product is a relational database, plus this bell, that whistle, and these little toots …
BP: And you don’t buy pieces of functionality, you buy products.
DC: Yes. And that’s just the way it is. It’s true in other fields, too; I mean, you don’t buy coils and a heating element, you buy a toaster. But because of any number of factors, one of which is age and maturity in the marketplace, toaster makers by and large don’t go too far afield.
KW: And toaster makers always make toast. But databases do lots of things.
BP: So what you’re saying is that it’s the nature of software itself that makes expansion of functionality possible.
DC: I’m saying it’s the nature of software that enables this phenomenon to occur. It’s the nature of software and the way it’s produced that makes it very easy for this spillover to happen. It’s very easy. Think of all the little shell programs you’ve written that do something, and you think, “Hey, I can add this one little line, and it’ll do this too, and do that,” and then someone else looks at it and will say, from a different conceptual point of view, “Hey, wouldn’t it be neat if it could do that, too.” And it flips up on itself, and instead of doing what it was originally intended to do, all of a sudden, it’s over here. It’s very easy for that to occur in software.
KW: I mean, the market does tend to try to produce market integrity. So we kind of know what word processors do, and we kind of know what spreadsheets do, and we kind of know what relational databases do … They differ substantially when they get into the features and what they actually implement, and that’s why the products are so hard to fit together, because the market produces those differentiators as well. It doesn’t serve anybody’s interests to make something completely compatible. Because it’s expensive to create these products, and you want your product to be hard to exchange, so that people will buy into it.
DC: I think the most interesting thing in the past six months is the way that, in the latest version of the Windows NT operating system, everything is done in the browser. The whole world looks like a Web site. Your whole world – your local file system, your local folder system – looks like a Web site. This is a real change in perspective. And this kind of change happens all the time.
KW: We had some stable idea of the universe in which we lived, and then suddenly something came along and shook things up. And it’s Java and the Web, maybe, today, and in four years we don’t know what it’s going to be, but we know it will be something because the market wants to produce these things, for whatever factors drive it.
DC: I don’t think it’s just market. There were two guys I knew who lived their lives through Emacs. Emacs was their shell. And you can do that. It’s just that there’s all these different points of view, and you can force an editor to be an operating system, and vice versa.
BP: Embedded software also affects the integrity of consumer products. I have hundreds of functions on my CD player that I never use. So, it used to be that a toaster was a toaster, but as software comes in, it becomes part of toasters and part of CD players…
KW: And we’re going to find that our computers are going to be hooked into our TV sets, which are going to be hooked into our telephone systems. And before you know it, our toasters are going to be telling us when to put our coffee in.
DC: Well, you can schedule your toaster to call you at the office and come home to put something in to toast! All this stuff is not really science fiction.
KW: I don’t know where this brings us to COTS evaluation, but it kind of leads us to think about these esoteric kinds of questions, like why things don’t ever seem to fit together. We keep making progress, and we never quite get there, and once we really get close, as David suggested, something comes along and changes the end goals to something else. We were making progress with client-servers, and relational databases, and now we’ve got Intranets!
KW: Ritualistic evaluation is much more commonplace than people realize. It showed up when I was a contractor. Quite often there is a clear front-runner in place. And there’s a defensive mode that says “Oh, evaluation means that I have to have a list of things and I have to have a score.” So you mentally start coming up with weights and categories of lists of things to match your intuition, because you’ve been told that’s what evaluation is.
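The scoresheet ritual Kurt describes can be sketched in a few lines. Everything below is hypothetical (the products, criteria, weights, and raw scores are invented for illustration), but it shows why such matrices are so easy to tune toward a predetermined answer: the same raw scores produce a different winner under different weightings.

```python
# Hypothetical weighted-scoring matrix of the kind described above.
# Products, criteria, weights, and raw scores are all invented.
raw_scores = {
    "Product A": {"functionality": 8, "vendor support": 5,
                  "interoperability": 6, "cost": 9},
    "Product B": {"functionality": 6, "vendor support": 9,
                  "interoperability": 8, "cost": 5},
}

def weighted_total(scores, weights):
    """Sum of raw score times criterion weight."""
    return sum(scores[c] * w for c, w in weights.items())

def winner(weights):
    """Product with the highest weighted total under these weights."""
    return max(raw_scores, key=lambda p: weighted_total(raw_scores[p], weights))

# Two equally defensible weightings, two different "objective" winners.
weights_favoring_functionality = {"functionality": 0.5, "vendor support": 0.2,
                                  "interoperability": 0.2, "cost": 0.1}
weights_favoring_support = {"functionality": 0.2, "vendor support": 0.4,
                            "interoperability": 0.3, "cost": 0.1}
```

Under the first weighting, Product A comes out ahead; shift the weight toward vendor support and interoperability, and Product B wins, which is exactly how a score gets “jimmied” to match an intuition formed in advance.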
DC: I’ve read many of the directives, and the directives say “preference for,” “use where possible,” “where feasible,” and so on. And as that goes down the bureaucratic food chain to someone who is lower in command, this “preference for” becomes an absolute mandate. The question is, does the mandate look at what is most apt or most fit, or are decisions being made for bureaucratic reasons?
KW: This doesn’t have anything to do with evaluation, it has to do with the decision to use COTS.
DC: Or the decision to buy COTS, which we have equated with evaluation. It’s often the case that programs pick a product for nontechnical reasons, then perform an evaluation that justifies the selection. We have seen that in some cases. Or sometimes a project will make an evaluation, and even though there is a low functional fit, a product is chosen because the product is on hand.
KW: I’d say that they do the evaluation as sort of a post-hoc justification. This is far more common in Contractor Land than you’d imagine. I was a contractor, and I know this; you know what you want to bid, but you have to provide an evaluation, and you’ll jimmy the score to make it look right.
BP: It illustrates just how different “buy” versus “make” really is. Because “buy” brings in all of these other things that we’ve been talking about.
DC: “Buy” brings in more than you think you’ll be getting. When you buy, you don’t just buy what you intend to buy, you buy all this other stuff. What you think you’re buying is just the tip of the iceberg that you can see.
DC: After picking a product and committing to it, sometimes the vendor turns out to be an absolute bust; this can happen even with big, well-known vendors. The vendor might promise to send people to the program to tailor the product, then never show up; or the people that the vendor sends may not even know how to run the thing. Personally, I don’t like any solution that is mandated without investigating alternatives, and for me that includes investigating the make-buy decision itself.
DC: Yes, but sometimes you can buy something other than what you think you’re buying.
KW: Right. The world is never quite so categorical. In DoD systems, this option is more open to discussion; and really, it ought to be. But let me ask you this: Should the burden of proof be on a DoD program to demonstrate that it should or must use COTS, or should the burden of proof be on those who want to create custom-made systems?
DC: I think you’re asking the wrong question. The question should be “How do we find a point to determine when it is most appropriate to buy vs. make?” And not place a burden of proof on either, but on both.
KW: I think you’re right in an abstract and purely theoretical way. If we were dealing with a perfect world, we’d want to be completely informed before we make buy decisions. But, understandably, what we’re dealing with is the natural inertia within the government/contractor community to sustain a large base of programmers, developers, and maintainers. And I think that unless you place some mandate that requires a fairly significant burden of proof for build decisions, you'll never overcome this inertia. So I think you have to say “If you can’t show me why you can’t use COTS, you have to use COTS.”
KW: I agree that there should always be a make-buy decision, but some commercial organizations would tell you that this is a no-brainer: You always buy. Because there’s an economy of scale there. You don’t want to be in the business of building something if you can buy it. You want to be in the business of using, so you can build something else.
DC: I agree that there may be a tendency for DoD programs to exaggerate their uniqueness. I think that, if you looked at it closely, you’d find that there are very few requirements that are specific to DoD. In terms of surge requirements, the DoD may be unique. But in terms of system functionality, for most things that are thought to be unique to the DoD, I'll bet I could find something analogous elsewhere. But on the other hand, as the ground shifts and it becomes required to use COTS products in systems, you’re going to have some sustainment nightmares down the road. Program managers need to understand that there are ongoing sustainment costs in this new business model that have to be dealt with.
KW: We don’t really understand the sustainment issues, the economics and sustainment of COTS, as well as we ought to. And it certainly ought to be part of what it means to evaluate a solution, based on one or more COTS products. I mean, how much is it going to cost you to sustain this thing? The question presupposes that you can predict the future shape of the markets and where things are headed, which you can’t. All these sustainment estimates are based on market predictions. And who designs systems based on market predictions? We ought to, but we don’t. That’s something we just don’t know how to do at all. We’re disagreeing here on whether it’s a good idea to place a burden of proof on those who wish to do custom systems rather than COTS. All else being equal, you’d want to have enlightened decisions that were made on the basis of value. Still, there are good economic incentives to buy rather than make. Is the DoD’s business personnel management? No. So the DoD should be out of the business of building personnel management systems. It shouldn’t be building these systems.
DC: I’m not disagreeing that this is true for some systems. The point is, is the mandate being selectively applied, or appropriately applied?
KW: And it seems to me that in some cases, there ought to be a burden of proof. It might be in information management systems. “Show me why you should build one of these systems.” Boeing builds airplanes. Ford builds cars. Airplanes are like Air Force airplanes; cars are like tanks. There’s an assembly line, and they manage parts, and they manage things very well.
DC: Large banks clearly will write their own financial transaction systems. Or some pieces of it. What parts did they buy, and what parts did they build? I don’t know. But I’m willing to bet it’s not by mandate.
BP: How does the use of COTS products differ in commercial industry?
DC: Well, for one thing, industry organizations by nature care about a profit margin. And to the extent that future trends affect the profit margin, they care about them, so they will simply go in that direction. They are not constrained by government restriction, except in some cases. What is specific to the DoD is that the DoD has restrictions, legislation, and Congressional regulations to deal with. The idea that in buying something, it has to go out as a fair bid … industry organizations don’t have those kinds of constraints. Someone says, “We want to buy something,” and the CEO says, “Let’s buy it.”
KW: I don’t know how to categorize this, but a curious thing happened to me about three weeks ago. A big financial insurance concern— mortgages, loans, stuff like that—came to us, and they wanted to know if we could do an architectural analysis and give them some COTS-system help. And it turns out that when they were talking about COTS, they named two or three major packages that they were going to buy and customize. They called it a COTS-solution system. And I asked about the other system they were building, the one where they didn’t have COTS products, and they said that they had 150 programmers writing 2 million lines of Java. And I said, “Oh, you’re using Java; are you using browsers?” and they said, “Oh yeah, we’re using browsers” … Basically, they listed a number of COTS products. And to them these things weren’t COTS products at all. And of course, the DoD would consider that to be a major COTS decision.
DC: But let me take the other extreme of that. Normally, you wouldn’t think about writing your own operating system (OS). But for the internals of the B-2, the DoD had to make a very clear decision about whether or not they were going to write their own OS, the one that ran inside the plane, because nuclear weapons are involved. No one else would even think about building an operating system. My point is, you can’t make a blanket statement about whether you should buy versus build unless you have some sort of context for it. And I think that none of the available real-time OSs worked. I know that they came to the issue and had to decide on it.
KW: They were making a real engineering decision.
DC: Right! That was a pure engineering decision. I think that in the really life-threatening nuclear systems, it’s possible to make decisions more easily.
BP: So I might conclude from what you just said that in the commercial world, they use COTS more easily.
DC: I wouldn’t have said that. They can make decisions more easily. And many of the decisions being made today are clearly that, for a wide variety of information management systems, they use commercial products.
KW: Their incentives are clearer in industry.
BP: This is the statement that I’ve heard a number of times: “They’ve solved this problem in industry.” That organization that Kurt was talking about—the big financial insurance concern—once they made the decision and were writing the 2 million lines of code, was it any easier for them than for the DoD?
DC: I think so, but I don’t know.
BP: Is that because they have more experience?
DC: Actually, I think it’s … a lot of these places have less experience with building big systems. So in a sense they’re newer kids on the block; they’re willing to go along with the newer toy, I think. The DoD tends to be much more deliberate.
BP: Because if DoD systems screw up, worse things can happen.
DC: That’s right. They’re much more conservative. Industry organizations tend to be much more enthusiastic about things.
BP: And it looks like they’re succeeding.
DC: Some are, but you don’t hear about the failures. The stockholders are the ones who hear about the failures. A lot of companies do go belly up, and the government hasn’t. But industry organizations don’t publicize their failures. And there is no investigative board that really has to report it. The government gets, I think, worse press than it deserves, by comparison. And that applies to the DoD as well.
Bill Pollak is a senior writer/editor, member of the technical staff, and team leader of the Technical Communication team at the SEI. He is the editor and co-author of A Practitioner’s Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems (Kluwer Academic Publishers, 1993) and has written articles for the Journal of the Association for Computing Machinery (ACM) Special Interest Group for Computer Documentation (SIGDOC) and IEEE Computer.
David Carney is a member of the technical staff in the Dynamic Systems Program at the SEI. Before coming to the SEI, he was on the staff of the Institute for Defense Analyses in Alexandria, Va., where he worked with the Software Technology for Adaptable, Reliable Systems program and with the NATO Special Working Group on Ada Programming Support Environment. Before that, he was employed at Intermetrics, Inc., where he worked on the Ada Integrated Environment project.
Ed Morris is a member of the technical staff in the Dynamic Systems Program at the SEI, where he is involved in the development of practices to support COTS evaluation. He has also worked with the CASE Environments Project at the SEI. He is co-author of Principles of CASE Tool Integration (Oxford University Press, 1994) and co-editor of IEEE 1348, Recommended Practice for the Adoption of CASE Tools. Before coming to the SEI, he worked at the Software Productivity Consortium, where he developed tools to model and predict the performance of Ada multitasking systems, and at SofTech, Inc., where he served as lead engineer for the development of an Ada runtime system and application support tools.

Kurt C. Wallnau is a member of the technical staff in the Dynamic Systems Program at the SEI. Wallnau’s current work in the SEI COTS-Based Systems Initiative is on product and technology evaluation practices and the design process for COTS-intensive systems. Before coming to the SEI, Wallnau was system architect for Central Archive for Reusable Defense Software (CARDS), a DoD software reuse program centered on the use of COTS software in application-specific architectures. He has published several papers relating to COTS software evaluation.