Past Conferences

Past Conference: SATURN 2013

Thank you to all SATURN 2013 attendees!

In 2013, the Software Engineering Institute (SEI) Architecture Technology User Network (SATURN) Conference celebrated its ninth year. Each year, SATURN attracts an international audience of practicing software architects, industry thought leaders, developers, technical managers, and researchers to share ideas, insights, and experience about effective architecture-centric practices for developing and maintaining software-intensive systems.

SATURN 2013 took place at the Marriott City Center in Minneapolis, Minnesota, from April 29 to May 3, 2013. As in previous years, the conference was held in collaboration with IEEE Software magazine. Two attendee-selected awards, sponsored by IEEE Software, were conferred for noteworthy presentations, and selected presenters were invited to submit articles for publication in IEEE Software.

Download all the SATURN 2013 presentations now.


TwinSPIN Presentation

Architecture-Centric Procurement
Lawrence Jones and John Bergey, SEI

Purchaser: How can I leverage good architectural practices to get the best quality and value from my supplier?
Supplier: What are my customer's expectations from an architectural perspective?
Both: How can we use architecture-centric engineering practices to create a win–win situation?

Software plays a critical role in most modern systems and is often cited as the reason for cost overruns, schedule slippages, and quality problems. Today local, state, and national governmental organizations typically procure systems rather than develop them. A procuring organization needs more effective ways to reduce risk when acquiring software-reliant systems. Similarly, the supplying organization needs to reduce its risk by having more effective ways to understand what the customer really wants to guide the system development.

An architecture-centric procurement approach has proven to be an effective way of reducing risk and gaining added confidence that the system will achieve its intended functional and quality requirements. Such an approach involves incorporating key architecture-centric practices as an integral part of the software procurement and development agreement. This has the effect of raising the performance bar by requiring all potential developers to adopt good architectural practices that will benefit both purchasers and suppliers.

This presentation describes a set of architecture practices and covers why, when, where, and how they can be effectively applied, along with some examples. While we emphasize the U.S. Department of Defense context, the principles apply to broader commercial contexts involving the management of software-intensive projects where there is a purchaser–supplier relationship.

As part of the approach, we use the concept of architecture user stories. These user stories cover practices such as elicitation of the architecturally significant requirements, architecture documentation, architecture evaluation, architecture configuration management, and architecture conformance. These user stories are particularly applicable to any sizable software development effort.

This talk will help

  • acquisition project managers learn how to reduce risk and achieve system qualities by incorporating software architecture-centric practices into their procurements
  • suppliers learn how to increase competitive advantage and focus on what the acquiring customer really needs by understanding the use of software architecture technology in the acquisition and development processes

Attendees will leave with answers to the following questions:

  1. Why should an acquisition organization adopt an architecture-centric acquisition approach? And why does this also benefit the supplying organization?
  2. What architecture-centric practices should be considered? Why, where, when, and how should they be applied?
  3. How does such an approach fit into the procurement and development life cycles?
  4. What is involved in implementing such an approach in a request for proposal, contract, or development agreement?
  5. How can suppliers and acquirers effectively collaborate to produce a win–win situation?

While we use the DoD 5000 life cycle to illustrate many of the principles and practices, we also discuss these steps in terms that apply to a nongovernment procurement life cycle.

Visit the Twin-SPIN website.

Download this presentation now.

Sustainability and Security

Architecting Long-Lived Systems
Harald Wesenberg and Einar Landre, Statoil
Arne Wiklund, Kongsberg

Statoil has recently initiated an effort to develop a solution for integrated real-time environmental monitoring of its oil fields. The monitoring starts long before exploration and drilling begins and lasts until long after production is shut down, a period of more than 70 years. Throughout these 70 years, we can assume that every technology and component used in the system will change. Still, we need to understand and utilize data collected in an early phase of the oil field life cycle throughout the operational phase and well beyond the shutdown phase.

The solution is driven by software from the deepest ocean and into the cloud. In this presentation, we take a look at the architectural drivers and considerations that went into the overall design as well as some of the analysis methods used. Especially important are standards for data formats, data integration, data augmentation, and data exchange with other institutions working in the oceanographic domain.

Download this presentation now.

Using Architecture to Guide Cybersecurity Improvements for the Smart Grid
Elizabeth Sisley, Calm Sunrise Consulting, LLC

This presentation reports on a complex system of systems, the U.S. Smart Grid, and provides advice on how to improve the cybersecurity maturity of various organizations involved in different aspects of the U.S. energy grid. Specifically, we discuss how a reference architecture can be used as a focal point to improve the maturity of an organization's cybersecurity efforts.

The National Institute of Standards and Technology (NIST) Interagency Report (IR) 7628, Guidelines for Smart Grid Cyber Security, objectives state, "The transformation of today's electricity system into a Smart Grid is both revolutionary and evolutionary. Persistence, diligence, and, most important, sustained public and private partnerships will be required to progress from today's one-way, electromechanical power grid to a far more efficient digitized 'system of systems' that is flexible in operations, responsive to consumers, and capable of integrating diverse energy resources and emerging technologies."

The NISTIR 7628 documents both high-level security requirements and the logical reference architecture (commonly called the "spaghetti diagram"), and both are fundamental to planning for improved cybersecurity. The spaghetti diagram includes all actors from the NIST Framework and Roadmap document and identifies logical communication interfaces between actors. These logical interfaces are grouped into logical interface categories (LICs), based on their security-related characteristics, which simplifies the identification of security requirements. These LICs provide an interesting categorization of types of interfaces, such as those with requirements for high availability, compute/bandwidth constraints, and interorganizational versus control systems.

The basis of this presentation is the NISTIR 7628 User's Guide, a document currently under development by the Smart Grid Interoperability Panel (SGIP) and anticipated to be published by the end of March 2013. This user's guide provides advice to an organization on how to improve cybersecurity maturity, leveraging the NISTIR 7628.

This presentation focuses on the reference architecture and how it can be used to identify an organization's high-risk systems and system security requirements, with much of the User's Guide detail simplified just for context. The User's Guide is intended to provide a hands-on, step-by-step procedure that a utility can follow to identify its own organization's architecture and any security gaps. Key members of the User's Guide team are utility experts, so embedded in the guide is practical "here's how we do it" advice. A pointer to the full guide, available for public use when published, will be provided. The NISTIR 7628 has been publicly available for download since its publication.

While the NISTIR 7628 and the related User's Guide are specific to the Smart Grid, a similar risk-ranked process, leveraging a reference architecture and the organization's own specific enterprise architecture, would be applicable to any organization attempting to improve its cybersecurity maturity.

Many thanks to the NISTIR 7628 User's Guide team!

Download this presentation now.

Architecting Cyber-Physical Systems in the Age of the Industrial Internet
Amine Chigani, Joseph Salvo, Benjamin E. Beckmann, and Thomas Citriniti, GE Global Research

Industrial systems and the internet have been increasing in complexity since the mid-1990s and are now at a turning point where a new revolution is evolving. The convergence of the global industrial ecosystem, advanced computing and manufacturing, pervasive sensing, and ubiquitous network connectivity has set the stage for an industrial internet revolution where complex, complete cyber-physical systems (CPS) are coming online. The deployment of these systems will infiltrate a broad spectrum of domains, including energy generation and distribution, health care, transportation, manufacturing, and defense.

The industrial internet is posited to have a major impact on economic growth similar to that spurred by internet connectivity and computing investments in the second half of the 1990s. Additionally, this new revolution will change the way CPS are managed, monitored, optimized, maintained, and inevitably decommissioned. Consequently, architecting next-generation CPS must address a myriad of architecture challenges related to complexity, capability, quality, and technology. We identify a critical set of these challenges.

Abstraction: The scale of CPS and the interdependency among their elements will mandate a greater emphasis on systems-level, end-to-end thinking about solution architectures that stakeholders of different organizations, disciplines, and expertise can use. Architecting the software backbone of a network of CPS will require skills beyond those related to the software craft.

Standards: Enabling communication and collaboration among a wider community of stakeholders will require standardization beyond the software architecture community. Standardized architecture tools and nomenclature should include other engineering disciplines such as mechanical engineering, physics, natural sciences, mathematics, manufacturing, and others.

Big Data: The sheer number of machines expected to come online and the volume of data expected to be generated and transmitted through the industrial internet as a result will bring about big data challenges. A major architecture challenge will be to decide what gets thrown away, processed at the edge (i.e., point of contact of the CPS with the physical world), or transmitted and processed away from the point of generation (i.e., the cyber world).

Cloud: Cloud-based computing enables scale and elasticity—two essential elements of the expanding and evolving nature of CPS. Cloud computing offers an affordable, efficient strategy to come aboard the industrial internet early to ensure a continued competitive edge. However, privacy issues related to export control, intellectual property, corporate identity, governance, ownership, and others must be addressed.

Engineering: Stove-piped, single-discipline-focused engineering of products and services that form the components of CPS will no longer fit within an industrial internet-enabled environment. Time to market, cost, complexity, and competitiveness will require a much more robust engineering design methodology. The potential to transform how engineering is conducted by adopting a collaborative, crowdsourcing-driven approach to engineering and manufacturing is becoming a reality.

In light of these challenges, we will discuss the impact on architecture practice in the age of the industrial internet and seek discussion about the way forward from the audience.

Download this presentation now.

Modeling and Documentation

How to Build, Implement, and Use an Architecture Metamodel
Chris Armstrong, Armstrong Process Group, Inc.

ISO/IEC 42010 describes an industry-standard conceptual metamodel for architecture descriptions that refers to key elements such as stakeholders, concerns, viewpoints, and views. The speaker will discuss a proven, practical process for exploiting ISO/IEC 42010 using a formal UML profile for modeling the elements of an architecture description. Starting with identifying architecture stakeholders and the architecturally relevant scenarios they find themselves in, the session continues with how to capture related architecture concerns, use them to design an architecture metamodel, and then describe the relevant architecture viewpoints. The session concludes with how to implement the metamodel as a custom UML profile and how that relates to architecture modeling tool deployment.
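To make the metamodel concrete, the sketch below renders the core ISO/IEC 42010 concepts as plain Java classes. It is a minimal illustration, not the speaker's UML profile; the simplified associations (for example, exactly one governing viewpoint per view) are assumptions.

  import java.util.List;

  // Minimal sketch of ISO/IEC 42010's core concepts. Associations are
  // simplified; the full standard also covers rationale, correspondences, etc.
  record Concern(String description) {}

  record Stakeholder(String name, List<Concern> concerns) {}

  // A viewpoint frames concerns and prescribes the model kinds
  // (notations, conventions) used by views that conform to it.
  record Viewpoint(String name, List<Concern> framedConcerns, List<String> modelKinds) {}

  // A view is governed by one viewpoint and addresses the concerns it frames.
  record View(String name, Viewpoint governingViewpoint, List<String> models) {}

  record ArchitectureDescription(String systemOfInterest,
                                 List<Stakeholder> stakeholders,
                                 List<Viewpoint> viewpoints,
                                 List<View> views) {}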

Download this presentation now.

Modeling the Contributions of Software Architecture to the Success of an Ecosystem
John McGregor and Yates Monteith, Clemson University
Simone Amorim and Eduardo Almeida, Federal University of Bahia

The sociotechnical ecosystem surrounding an organization that produces software-intensive products is based on the interactions of suppliers, competitors, customers, and more. The glue holding these intricate relationships together is more than a shared business model. It is also an architecture. The ability of that architecture to support the goals of multiple organizations, which cooperate, collaborate, and sometimes compete within the market segment, is what contributes to the success of the ecosystem that surrounds that segment.

In an ecosystem, collaboration among organizations happens indirectly through architecture. Business goals shared among ecosystem participants drive the design and evolution of the architectures. The health of an ecosystem is measured by its robustness, niche creation ability, and productivity. The architecture contributes to robustness if its mechanisms are sufficiently flexible to accommodate emergent technologies and behaviors. The architecture contributes to niche creation if the architecture can be extended and specialized to provide unique variations that support new ideas. The architecture contributes to productivity if it is easy to combine architecture fragments into new architectures and products.

Ecosystem modeling, which captures both business and software concerns, aids the software architect in understanding the influences that will be exerted on products developed within the ecosystem. The STRategic Ecosystem Analysis Model (STREAM), an ecosystem modeling technique developed in collaboration with the Software Engineering Institute, creates a model of the ecosystem organized around the three major facets of the ecosystem: business, software, and innovation. Each facet is critical to the success of the ecosystem, but we will focus on the software facet and, more specifically, the software architecture in this presentation. The software facet of the model comprises the software architectures used for software components and products in the ecosystem and the implementations of those architectures.

The models produced by STREAM make more obvious the linkages among the business, software, and innovation facets. One particular link between the software and business facets is the supply network in the ecosystem. Most products today are an aggregation of smaller, simpler products from a number of vendors. This is often hidden behind APIs and the indirect links to suppliers. This aspect of the model allows analysis of the flow of technical debt and other software metrics through the supply network to the final product.

STREAM consists of five practices and progresses through four phases. We will discuss the planning phase in this presentation. The planning phase explores the specific questions that are to be answered by the model. As the plan is exploited, data is collected and modeled, and information is produced from the analyses of the data. Finally, the planning phase evaluates the results and evolves the modeling plan in preparation for the next iteration through the process.

STREAM has been used to understand a number of existing ecosystems, including those of commercial and governmental organizations that must remain confidential, an Army development project whose results have been approved for release, the emerging communities around two international research projects, and the well-established Eclipse and Hadoop open-source communities.

Download this presentation now.

An Architecturally Evident Coding Style
George Fairbanks, Google

Because of Eric Evans's Domain-Driven Design, software developers are already familiar with embedding domain models in their code. But the architecture and design are usually hard to see from the code. How can you improve that? This talk describes an architecturally evident coding style that lets you drop hints to code readers so that they can correctly infer the design. You will learn why some design intent (the intentional part) is always lost between your design/architecture and your code. This presentation builds on ideas like Kent Beck's Intention Revealing Method Name pattern and provides a set of lightweight coding patterns and idioms that let you express your design intent in the code.
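As a flavor of the idea (the example below is an editorial illustration, not one from the talk), compare a class whose name says nothing about the design with one whose name and signature make its architectural role explicit:

  // Before: design intent is invisible; this could be any helper class.
  class DataHandler {
      void process(Object input) { /* ... */ }
  }

  // After: the class name reveals the architectural role (an adapter on
  // the messaging boundary) and the method name reveals its intent.
  class OrderEventTranslator {
      Order translateLegacyOrderEvent(LegacyOrderEvent event) {
          // mapping logic elided
          return new Order();
      }
  }

  class Order {}
  class LegacyOrderEvent {}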

Download this presentation now.

Agile I

Introducing Agile in Large-Scale Projects
Vladimir Koncar, Drago Holub, Zoran Kokolj, Emina Filipovic-Juric, and Josko Bilic, Ericsson Nikola Tesla

When you work in an R&D company that develops software and hardware for next-generation products in radio access networks, your constant ambition is to find ways to increase the quality of your products and reduce development lead time. In this presentation, we share our experiences and lessons learned from introducing agile in our large-scale projects.

We are always searching for new ways of working that will help us achieve those goals, so at the beginning of 2011, when we were facing a new project that technically was our most challenging and difficult project so far, we decided to introduce agile methodology. We believed that in addition to raising quality and lowering lead time, agile could provide us with more efficiency and broaden the competence of our organization.

We were all very motivated to make agile work, and we believed in our developers and our teamwork, so we agreed on the agile framework and guidelines and promised to stick to it. We used independent, cross-functional teams as much as possible. We focused on frequent code reviews and software deliveries and continuous software integration. We introduced daily standups and frequent team retrospectives for continuous improvements. We wanted to do only the important functionalities, reduce unnecessary administration and handovers within a team and between teams, and remove waste as much as possible.

But how easy is it to introduce agile when you have a project that will

  • require an estimated 100,000 man-hours, include design of completely new software and hardware, and take at least 18 months to develop?
  • involve teams on four different geographical sites and about 100 people?
  • finish all software development before it can be tested on completely new hardware?
  • include different stakeholders in organizations with different priorities and ideas?

One can just imagine how many things can go wrong.

Now, after two years of working with agile methods, we have many lessons learned:

  • We learned that agile really empowers both teams and individuals to take charge. We saw how teams evolved through time and process, how self-sufficient they could be, but also what kinds of difficulties they needed to overcome and the kind of help they needed to do so.
  • We learned the importance of team space, a team board with information and tasks, team life, and team ceremonies. We've seen how they change through time in the project in some positive but also some negative directions.
  • We saw the importance of team commitment, and it was amazing to see how innovative people could be in finding new solutions and possibilities.
  • We also learned how difficult it is to manage all the interfaces and stakeholders while keeping our teams independent in large-scale projects.

But we achieved the most important result: with agile we delivered our product with very high quality (very few faults found on the market) and cut the lead time by a third!

Download this presentation now.

How to Implement Zero-Debt Continuous Inspection Architecture in an Agile Manner
Brian Chaplin, Chaplin Solutions

This presentation describes an extract, transform, and load (ETL) process that moves key commit and code-quality data from the build system to a code-quality database. It will cover how the database was used to transform a large project team through targeted technical-debt reports and email notification of incurred technical debt to developers, leads, and management.

A three-year case study of a 12,000-class, 2-million-line Java system with 25,000 commits comprising 250,000 program changes by 175 developers will be used. Unit tests increased ninefold to 36,000, raising the branch/line coverage from 27% to 82%. A total of 5,500 technical debt items in 10 debt categories were identified, tracked, and resolved. A similarly sized C# system will also be used, along with two different version-control and build systems.
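As a rough illustration of what a row in such a code-quality database and a simple report over it might look like (the field names and rule below are editorial assumptions, not the presenter's schema), consider:

  import java.time.LocalDate;
  import java.util.List;

  // One debt item extracted from a commit, plus a query suitable as input
  // for the per-developer email notifications the abstract describes.
  record DebtItem(String commitId, String developer, String category,
                  String file, LocalDate incurred, boolean resolved) {}

  class DebtReporter {
      // Debt incurred by a developer and not yet paid back.
      List<DebtItem> outstandingFor(List<DebtItem> all, String developer) {
          return all.stream()
                    .filter(d -> d.developer().equals(developer) && !d.resolved())
                    .toList();
      }
  }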

The code-quality database makes technical debt visible and actionable to managers. Architects responsible for not just business enhancements but also the long-term quality of the code base can use the database to keep debt under control, even during a sprint.

Many talk about technical debt in abstract terms. This presentation defines it in terms of precise metrics and how it can be measured and tracked in an Agile manner. Technical debt is some problem in the code that must be fixed. It's more than a degradation of a code metric. Architects need to understand details about the three groups of debt—static analysis, testing coverage, and class metrics.

How much unit test coverage is enough? Management often sets a standard of 80%, assuming the 80/20 rule is the most cost-effective. However, which 20% should go untested? This presentation asserts that 100% is an attainable goal and will detail how 100% coverage was attained as part of normal maintenance.

Debt is incurred under various pressures and is paid back later. This is especially true with coverage and metrics debt. Tracking debt is essential to making it visible and enabling its reduction later. Tracking techniques are explained, and the role of reviewers and code-quality automation is explored.

Techniques for tracking the error rate of actual commits are explained. Outstanding technical debt management reports are presented and explained along with their relationship to a trouble-ticket system.

The management of false positives is explained. About 1% of automated debt is a false positive. Under what conditions can the metrics standard be violated? For example, there are five conditions where a line or branch can't be unit tested.

The role of designing and writing testable code is explored. Architecture decisions can enable higher testability. For example, functional programming enables already tested code structures to be employed. There are good functional programming libraries of pretested logic routines that reduce the unit-testing burden.

Download this presentation now.

Implementing Contextual Design in a Corporation Without a History of Using Contextual Design
Elizabeth Correa, Verizon

Contextual design engages the people doing the work and studies their intents and problems to ensure that the software system developed is more in tune with the users' actual needs. It provides a powerful tool for software engineers to use as input into their requirements and architecture. In a corporation without a history of using contextual design, putting a contextual design culture in place can be an overwhelming task. The author will review three specific methods and tools that the audience can put into practice when they get back to the office. Each method is one that the author has used at Verizon. When the audience leaves, they will know:

  • ways to train the interviewers without overwhelming them
  • ways to target areas to use contextual design so as to use the limited resources wisely
  • ways to socialize the gathered information in a multi-country company using corporate communications

These methods save architects time and allow them to successfully roll out a contextual-design culture in their corporations.

Download this presentation now.

Cloud Computing

BestBuy.com's Cloud Architecture
Joel Crabb, Best Buy, Inc.

In 2012, BestBuy.com re-envisioned its high-scale, high-availability eCommerce platform. Instead of buying a vendor product and implementing a big-bang brand-new system, BestBuy.com took the more difficult but lower risk path of evolving out of its proprietary ATG commerce system. The architecture was built in 2012 and survived the holiday selling period with no issues. This presentation covers the cloud architecture of the largest electronics retailer on earth. This is your opportunity to see inside a true linear, elastically scaling architecture from one of the most highly trafficked commerce sites in North America.

BestBuy.com needed an overhaul for many reasons; however, the overriding factors were scale and flexibility. The current vendor-supplied product was not architected to scale. At BestBuy.com's scale, each component of the eCommerce architecture must scale independently, but the available commerce systems are monolithic black boxes that can only scale horizontally. BestBuy.com also needed business flexibility—the ability to change the look and feel of its front-end website in hours or days rather than months or years. These two competing demands were reconciled in a new, unique cloud architecture that achieves both high scale and massive flexibility.

BestBuy.com's cloud architecture consists of a lightweight front end using a home-grown development framework based solely on HTML, JavaScript, CSS, and Freemarker. The front end is deployed on Tomcat servers that elastically scale in the cloud. The front end is fed by a service-oriented architecture and a data contract between the front-end and back-end services. The architecture completely decouples the front and back ends and allows independent development and front-end coding with no Java skills.

The services architecture is designed for high scale and consists of multiple caching layers and Spring-based REST services. Since we are stringing together 30–50 services per front-end request in less than two seconds, the whole process is asynchronous and fault tolerant.
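The sketch below illustrates the kind of asynchronous, fault-tolerant fan-out such a request requires, using Java's CompletableFuture. The two-second budget matches the abstract; the service URLs, fallback values, and HTTP plumbing are placeholders, not BestBuy.com's code.

  import java.util.List;
  import java.util.concurrent.CompletableFuture;
  import java.util.concurrent.ExecutionException;
  import java.util.concurrent.TimeUnit;
  import java.util.concurrent.TimeoutException;

  // Illustrative fan-out over many back-end services within a page budget.
  class PageAssembler {

      CompletableFuture<String> callService(String serviceUrl) {
          return CompletableFuture.supplyAsync(() -> httpGet(serviceUrl))
                  .exceptionally(ex -> "{}");  // degrade instead of failing the page
      }

      String assemblePage(List<String> serviceUrls)
              throws InterruptedException, ExecutionException {
          List<CompletableFuture<String>> calls =
                  serviceUrls.stream().map(this::callService).toList();
          try {
              // Wait for all calls, but never longer than the page budget.
              CompletableFuture.allOf(calls.toArray(new CompletableFuture[0]))
                               .get(2, TimeUnit.SECONDS);
          } catch (TimeoutException e) {
              // Proceed with whatever completed; stragglers fall back below.
          }
          StringBuilder page = new StringBuilder();
          for (CompletableFuture<String> call : calls) {
              page.append(call.getNow("{}"));  // fallback if still pending
          }
          return page.toString();
      }

      private String httpGet(String url) { return "{}"; }  // HTTP client elided
  }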

The data system feeding the service layer is also cloud based. A NoSQL system is used to store BestBuy.com's product catalog and serves that data as another set of REST services. This data system replicates across multiple cloud regions, multiple cloud vendors, and owned datacenters.

Finally, for personalization BestBuy.com uses a cloud-based user grid that serves up customer information from within the cloud. This is a second NoSQL system that allows individual personalization of every page on BestBuy.com.

To operate this cloud architecture and achieve high flexibility, the cloud infrastructure uses infrastructure automation. Using an Infrastructure as Code solution allows us to quickly update and change running cloud servers to new deployments. Furthermore, it allows us to run multiple concurrent versions of our cloud systems to determine which version has higher user engagement.

Overall, every part of the browsing experience on BestBuy.com is now served out of a highly redundant and elastically scaling cloud architecture. This presentation will give you an in-depth look at the various aspects and considerations involved in scaling a massive eCommerce system.

Download this presentation now.

Automated Provisioning of Cloud and Cloudlet Applications
Jeff Boleng and Grace Lewis, SEI
Vignesh Shenoy, Varun Tibrewal, and Manoj Subramaniam, Carnegie Mellon University

Imagine a world where computing components can migrate and run securely, reliably, and automatically on any computing platform that has the required resources and can satisfy the necessary dependencies. We will present and demonstrate a proof-of-concept digital container and associated services that package computing components that can migrate automatically and be executed across a range of computing platforms with different configurations and resources available. The container will specify the computing resources required and include security (e.g., digital signing), software assurance mechanisms (pre/post-conditions and invariants), dependencies, and an execution mechanism, and it will enforce supported data input and output formats.

Digital containers are commonplace for the distribution of electronic content. The typical digital container is a metafile format whose metadata describes the different data elements included in the container. The primary use of digital containers today is for distributing multimedia content. A recently standardized digital container is used to package an executable software service consisting of one or many virtual machines to provide a computing service. This format is the Open Virtualization Format (OVF) by the Distributed Management Task Force (DMTF). The OVF format is extensible by design. This presentation outlines our initial successes extending the OVF format to automatically provision computing components. The technique includes mechanisms to characterize the computing component requirements and the cloud/cloudlet capabilities coupled with a novel technique to select the best combination of service to platform for execution. The end goal is to enable a developer of a software module or service to package the component in a well-defined and standardized digital container regardless of the implementation technology and enable widespread portability while enforcing secure and assured execution characteristics.
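A toy sketch of the selection step might look like the following; the capability fields and the "most headroom" heuristic are editorial assumptions for illustration, while the actual work extends the DMTF OVF descriptor.

  import java.util.Comparator;
  import java.util.List;
  import java.util.Optional;

  // Illustrative matching of a packaged computing component to the best
  // available platform advertised by a cloud or cloudlet.
  record Requirements(int minCpuCores, int minMemoryMb, List<String> runtimes) {}

  record Platform(String name, int cpuCores, int memoryMb, List<String> runtimes) {

      boolean satisfies(Requirements r) {
          return cpuCores >= r.minCpuCores()
                  && memoryMb >= r.minMemoryMb()
                  && runtimes.containsAll(r.runtimes());
      }
  }

  class Provisioner {
      // Pick the satisfying platform with the most headroom (toy heuristic).
      Optional<Platform> selectPlatform(Requirements r, List<Platform> advertised) {
          return advertised.stream()
                           .filter(p -> p.satisfies(r))
                           .max(Comparator.comparingInt(Platform::cpuCores));
      }
  }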

Specifically, we will present our experience to date developing a digital container to perform the same computing operation using different computing mechanisms (VM, emulation, byte-code, interpreted code, API library, recompile, etc.) and demonstrate automatic migration of the service across differently configured cloudlets. This required developing a cloudlet service to advertise a computing platform's capabilities and facilitate the automated selection of the best match of service to platform. Additionally, we will show a client application that provides fully automated communication with the cloudlet service provider to discover and negotiate the service, offload the computation, provide the input data, and present results to the user.

Download this presentation now.

Applying Architectural Patterns for the Cloud: Lessons Learned During Pattern Mining and Application
Ralph Retter, Daimler TSS

When dealing with cloud computing in large enterprises, architects are often challenged by questions such as "Which cloud infrastructure is right for our enterprise?" "Is this application suitable for the cloud?" and "Why isn't it as easy to deploy an application in our data center as it is to deploy an application in my favorite public cloud?" In large enterprises, cloud computing initiatives often begin from an infrastructure-automation point of view. And often, they remain infrastructure-centric for a long time, ultimately losing the business and application focus on the way.

In this talk, the presenter shares his and others' experiences in several large enterprises with approaching cloud computing from an application-architecture point of view, an approach that he and colleagues from academia and different enterprises formalized in a set of patterns for the cloud. He shows how these patterns were mined and used to describe the architecture of cloud-native applications that support fundamental cloud principles. From there, the presenter shows how to work your way down to the architectural requirements that cloud-native applications pose on the underlying infrastructure and platforms, using the presented pattern language.

This presentation will show how typical patterns of cloud-platform and cloud-infrastructure offerings tackle the problems of massively scalable, highly available data stores and messaging. The presenter explains his experience with such platforms and how their properties impact application design and sometimes even business decisions. The presenter also shows how he and others have used the presented patterns to evaluate different cloud offerings for their suitability for concrete applications. After this talk, you will have a clear understanding of

  • how to approach cloud computing from an application-architecture point of view, not only in large enterprises
  • essential architecture patterns for cloud-native applications
  • patterns for the vendor-neutral description of fundamental cloud-related properties of cloud offerings and their impact on application design and even business decisions
  • the pattern language that the presenter and his colleagues from academia and other enterprises have used to tackle cloud-related tasks, including its application in an anonymized case study

Download this presentation now.

Method Tailoring and Extensibility

Design and Analysis of Cyber-Physical Systems: AADL and Avionics Systems
Julien Delange and Peter Feiler, SEI

Cyber-physical systems (CPS) are reliant on software for their operation. Typically, CPSs are mission- and safety-critical systems, such as avionics systems on aircraft. This industry sector has experienced exponential growth in software size and interaction complexity, with rework cost reaching 70% of the software system cost. Current practice for such systems consists of a build-and-test approach: system engineering up front addresses safety concerns by following industry-standard recommended practices such as SAE ARP 4761 and ARP 4754, followed by software development with limited attention to nonfunctional qualities such as timing, latency, performance, reliability, safety, or security. Each of these concerns, if addressed through modeling and analysis or simulation, is captured by separate analytical models that quickly become outdated as the architecture and design evolve. The result is late discovery of system-level errors, with studies showing up to 80% leakage to this phase.

This presentation discusses the SAE International Architecture Analysis and Design Language (AADL) Standard as the basis for an analytical virtual-integration framework that solves this problem. This approach auto-generates analyzable architecture models from annotated AADL models to reflect architectural changes and avoid inconsistencies, providing the basis for validating and verifying requirements and design.

AADL was specifically designed to support modeling of the software runtime architecture based on an architecture design, the hardware platform, and the physical system that this embedded software system interacts with. It reflects the interactions within and between all three parts of a CPS. It offers well-defined component concepts, such as thread and process for software, and processor, memory, bus, and device for hardware and physical system concerns. It includes operational mode specifications, three types of interaction semantics, and deployment mappings. Standardized extensions to AADL include functional and interaction behavior specification, error behavior and propagation specification, ARINC 653 partitioning, requirements specification, and validation support. AADL and its associated tool support are a community effort of industry and academic partners in America, Europe, and Japan. A number of large-scale projects have been under way since the first release of the standard in 2004, with industry sectors ranging from aerospace and avionics to health care.

The presentation first summarizes several software-induced root-cause areas due to mismatched assumptions between the different parts of a system. The presentation then presents key elements of AADL and how they address the problem areas. This is followed by a technical overview of the architecture-centric virtual-integration approach that is currently being advanced by an aerospace industry initiative with partners ranging from Boeing, Airbus, Embraer, Rockwell Collins, BAE Systems, Honeywell, and the SEI to government agencies including the FAA, NASA, and the U.S. Army. This will be followed by avionics examples illustrating the approach's effectiveness in the early discovery of anomalous system behavior due to unexpected latency variation, unintentional fault propagation, and the impact of software deployment decisions on system reliability. The presentation closes with observations and lessons learned on the effectiveness of this virtual-integration approach.

Download this presentation now.

Tailoring a Method for System Architecture Analysis
Joakim Fröberg, Mälardalen University
Stig Larsson, Effective Change AB
Per-Åke Nordlander, BAE Systems AB

The architecture of a system involves some decisions that affect the outcome of a development effort more than others in terms of meeting system goals, system qualities, and overall project success. Engineering the system architecture of a complex system involves analyzing architectural drivers, identifying crucial design considerations, and making decisions among alternatives. Systems engineering guidelines provide models and advice on what information entities to consider when engineering the architecture of a system, such as architectural concerns, but only limited guidance on how to do it. The guides are limited both in the precision of their definitions of the information entities, such as what defines an architectural requirement, and in their process descriptions, such as how the information entities relate and in what order to proceed through the work tasks. These questions need to be addressed by any development team that faces an architectural driver analysis in an actual case.

We are currently performing system architecture analysis in a project developing a hybrid electric drive system for heavy automotive applications. Our analysis method is instantiated using the Method Framework for Engineering System Architectures (MFESA). We also used elements of other theories, including CAFCR, QAW, and ATAM. Execution of the project is ongoing, and roughly half of the method activities have been carried out so far.

The steps we performed to instantiate and tailor the method are summarized as follows: (1) define the criteria for what practitioners perceive as a practical method for analyzing system architecture, (2) instantiate a method by tailoring the MFESA tasks that apply to the case, and (3) interpret meaning and make add-ons and necessary changes.

We have instantiated a method from the MFESA framework. Based on the practitioners' criteria, we altered the method to suit the case. We point out three additions that are not directly derived from the MFESA framework and could be useful in other cases. The most significant changes were as follows:

  • We employ use cases as a means to model and identify architecturally significant requirements. We choose to start with use cases and progress by elaborating the architecturally significant ones by defining detailed scenarios.
  • We interpret and define the concepts proposed by MFESA and define their relationships.
  • We propose a stepwise procedure for carrying out the work.

To summarize, we participated in a development project and were given the task to provide a system architecture definition. We defined our method by using the MFESA framework and added some method components from other theories. Still, the resulting method is not directly applicable. To perform the method, we had to clarify the interpretation of some of the work products and define the relationship between information entities. In addition, we had to specify a stepwise working procedure. Some of the additions could be considered as case-specific tailoring, and some may be useful in general. We present lessons learned from this case and discuss a possible validation effort for an architectural analysis method.

Download this presentation now.

Architecting for User Extensibility
Russell Miller, SunView Software, Inc.

The Open/Closed Principle tells us that we should keep software open for extension but closed for modification. Not only is staying open for extension a sound principle, but it is also being demanded as a feature of enterprise software solutions. Customers expect these solutions to act more like a platform than a point solution—a platform on which they can extend and build their own tailored solutions.

To achieve the level of extensibility required to satisfy market demands, it is critical to use an architectural approach that does not bake in the "what" and the "how" of the system. That is, the problem domain's object model and behavior need to be completely externalized and extensible by the customer.

Extensibility itself is nothing new; however, in the past, extension primarily meant dropping in dynamically loaded libraries or using a few software patterns like Visitor. Now it is more commonly required that this extensibility be more systemic than applying a few simple patterns.

The customer expects to do much of the extension without touching code or files. Instead, the user largely extends the system through the UI, and those extensions are immediately available for use without a system restart. It is also expected that it be easy to migrate those extensions from a development environment, to test, and into production, all without impacting system availability and with a clean rollback capability.
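A minimal sketch of such an externalized object model follows, assuming entity types whose fields live as data rather than code, so a new field added through the UI takes effect without a restart. The names and structure are illustrative, not the speaker's platform.

  import java.util.HashMap;
  import java.util.Map;

  // Entity types are data: a customer can add a field at runtime through
  // the UI with no code change, redeployment, or restart.
  class EntityType {
      final String name;
      final Map<String, String> fields = new HashMap<>(); // field name -> type name

      EntityType(String name) { this.name = name; }

      void addField(String fieldName, String typeName) {
          fields.put(fieldName, typeName);   // takes effect immediately
      }
  }

  class EntityInstance {
      final EntityType type;
      final Map<String, Object> values = new HashMap<>();

      EntityInstance(EntityType type) { this.type = type; }

      void set(String field, Object value) {
          if (!type.fields.containsKey(field))
              throw new IllegalArgumentException("Undefined field: " + field);
          values.put(field, value);
      }
  }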

The speaker is the architect of just such an enterprise platform with significant requirements around extensibility. The speaker will quickly outline the extensibility requirements that the platform meets and then explain a solid architectural approach to meet these extensibility requirements.

The attendees will

  • gain insight into the general value of externalizing the object model and business logic
  • gain an understanding of the architectural approach utilized to implement the extensibility
  • see a few brief examples of how easily the customer can extend such a platform

Download this presentation now.

Agile II

The Conflict Between Agile and Architecture: Myth or Reality?
Simon Brown, Coding the Architecture

The words agile and architecture are often seen as mutually exclusive, but the real world is starting to tell a different story. Some software teams do see architecture as an unnecessary evil, whereas others are coming to the conclusion that they need to think about architecture once again. After all, even the most agile of software projects will have some architectural concerns, and software teams really should think about these things at the beginning of a development effort. Agile software projects therefore do need "architecture," but this seems to conflict with how agile has been evangelized for more than 10 years. This session will look at the conflict between agile and architecture in the context of the software development process and how the software architecture role fits into agile teams.

Download this presentation now.

Agile Architecture and Design
Pradyumn Sharma, Pragati Software Pvt. Ltd.

Agile software development methodologies have gained a lot of prominence in recent years. But one of the nagging questions that teams face is how to establish the architecture for a system in the agile way. After all, architectural decisions have a key impact on various qualities of a system; therefore, these decisions must be made early and carefully. How does this fit with the incremental and iterative nature of agile methodologies?

In this presentation, I'll cover the following topics:

  • creating an architecture vision, including desired architectural qualities, during Sprint Zero
  • identifying potential strategies for achieving the desired architectural qualities but not committing to them
  • prioritizing the architectural qualities and adding them to the product backlog along with the functional requirements
  • implementing and verifying architectural qualities with the help of real stories from the product backlog

Download this presentation now.

An Emerging Set of Integrated Architecture and Agile Practices That Speed Up Delivery
Stephany Bellomo, SEI

A well-documented, recurring problem on project teams delivering high-value features at a rapid pace (e.g., Scrum development teams tasked to deliver high-value features quickly) occurs when features are delivered at a consistent rate for a period of time but then a setback causes a sudden reduction in feature-delivery speed and/or team productivity. In this presentation, we summarize findings from several interviews with government and commercial project teams that gave us insight into the practices used by successful practitioners working on rapid development projects. We describe several emerging practices applied by practitioners (often informally) to minimize or prevent this disruption. As we analyzed our interview results, we found that the most interesting practices emerged on iterative/incremental projects in which practitioners described incidents that occurred under challenging circumstances, for example, when the rapid pace slowed due to unanticipated requirements or when users were unsatisfied with the results of a demonstration.

In these situations, we found practitioners would often integrate a Scrum practice with an architectural practice to address the problem quickly and get the project back on track (we called these integrated practices). Teams applied these practices to minimize the immediate disruption and to sustain the pace of delivery over the longer-term life of the project. We discovered 10 such integrated practices, including release planning with architectural considerations, prototyping with quality attribute focus, release planning with external dependency management, and test-driven development with quality attribute focus.

The presentation covers the following discussion topics: 

  • a summary of the integrated practices that we derived from the interviews 
  • an elaboration of a single practice—prototyping with quality attribute focus—to illustrate how it was applied by different teams under different circumstances
  • key elements required for teams to successfully apply these practices

Our hope is that capturing and sharing generalizable findings such as these lightweight, integrated practices will help other practitioners gain from the experiences of the practitioners we interviewed, address problems more rapidly, and avoid disruptive setbacks. We are working toward formalizing how these practices are integrated in a lightweight manner into modern software development, such as Scrum-based development projects, and we will share some early concepts on this as well.

Download this presentation now.

Birds of a Feather Session: Architectural Decisions: The State of Affairs and the Way Forward
Moderator: Olaf Zimmermann, University of Applied Sciences, Rapperswil (HSR FHO)

Architectural decisions have been on the radar of practitioners and researchers since the early days of software architecture. The inability to capture and share architectural decisions often results in wasted effort and ineffective use of development resources. In recent years, a number of decision-capturing templates have been proposed and model-driven tools have been released. The 2011 edition of ISO/IEC/IEEE 42010 advises us to provide evidence of consideration of alternatives and the rationale for the choices made. However, this is easier said than done. In practice, busy schedules and project dynamics often cause decision capturing to be sidelined; documentation artifacts become obsolete quickly. There is hope, though. Lightweight decision-capturing approaches have been presented at previous SATURN Conferences, and success with knowledge reuse has been reported.

In this birds-of-a-feather session, we would like to gather the collective expertise of the SATURN community regarding architectural decisions. Come to this session if

  • you are looking for ways to share architectural knowledge within practitioner communities efficiently and effectively
  • you would like to hear about the state of the practice in identifying, making, and enforcing decisions—and to help advance it
  • you have decision-capturing practices that work well in your organization that others can benefit from

This session is a highly interactive program element; think of it as a facilitated roundtable meeting. Please bring your questions, experience, opinions, and other discussion input.

Download notes from this session now.

Web & Cloud Architecture Design

The Design Space of Modern HTML5/JavaScript Web Applications
Marcin Nowak and Cesare Pautasso, University of Lugano

This presentation gives a tour of the architectural design-decision space for modern Web applications. Assuming that architects have decided to pick emerging HTML5/JavaScript technologies to build a medium-sized, highly interactive, and possibly collaborative application, this tour will explore the important consequences and discuss the implications of this decision. Thanks to our systematic perspective over the design-decision space of modern Web applications, attendees will learn to distinguish what is possible from what is challenging to achieve.

Download this presentation now.

Using ATAM to Select the Right NoSQL Database
Dan McCreary, Kelly-McCreary & Associates

New NoSQL databases offer more options to the database architect. Selecting the right NoSQL database for your project has become a nontrivial task. Yet selecting the right database can result in huge cost savings and increased agility. This presentation will show how the Architecture Tradeoff Analysis Method (ATAM) can be applied to objectively select the best database architecture for a project. We review the core NoSQL database architecture patterns (key-value stores, column-family stores, graph databases, and document databases) and then present examples of using quality trees to score business problems with alternative architectures. We also address creative ways to use combinations of NoSQL architectures, cloud database services, and frameworks such as Hadoop, HDFS, and MapReduce to build back-end solutions that combine low operational costs and horizontal scalability. The presentation includes real-world case studies of this process.
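The scoring mechanics can be sketched as follows; the quality attributes, weights, and candidate scores are invented to illustrate the mechanism and are not results from the presentation.

  import java.util.List;
  import java.util.Map;

  // Toy utility-tree scoring: weight each quality-attribute leaf, then
  // score each candidate database architecture against the leaves.
  record QualityLeaf(String attribute, double weight) {}

  class ArchitectureScorer {
      double score(List<QualityLeaf> tree, Map<String, Double> candidate) {
          return tree.stream()
                     .mapToDouble(leaf -> leaf.weight()
                             * candidate.getOrDefault(leaf.attribute(), 0.0))
                     .sum();
      }
  }

  // Usage idea: a document store might score higher on query flexibility,
  // a key-value store on horizontal scalability; the weighted totals make
  // the trade-off explicit and comparable across candidates.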

This process is outlined in the book Making Sense of NoSQL, published by Manning Publications.

Download this presentation now.

Next-Gen Web Architecture for the Cloud Era
Darryl Nelson, Raytheon

Recent advancements in JavaScript toolkits and engines have greatly expanded web application capabilities. At the same time, service-oriented architectures (SOA) and cloud platforms have achieved maturity. However, these achievements have not translated into corresponding advancements in the presentation tier. These three developments are the genesis of the next-generation web architecture style called SOFEA.

A new, proven architectural style has emerged to facilitate the alignment of the presentation tier with SOA and cloud-computing models. As a style, it is implementation agnostic but frequently implemented with JavaScript. Often referred to as SOFEA, the Service-Oriented Front-End Architecture relocates all presentation logic to the presentation tier. Model-View-Controller components are implemented in the browser instead of being shared with the server side. During the interaction with web services, only business data is transferred across the network. The architectural constraints of SOFEA inherently reduce latency in the system, improving the end-user experience. In addition, the concrete separation of concerns enhances scalability, permitting the service and cloud to concentrate on core responsibilities without the distraction of presentation-logic management. SOFEA also enhances interoperability. Because web clients can access services directly, multiple and disparate RESTful (or WS-*) web services can be integrated via a SOFEA web application. Such clients can benefit from the SOA and cloud revolutions and are able to integrate available services in the presentation tier at lower cost.
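On the server side, a SOFEA service endpoint returns pure business data and renders no markup. The Spring-style sketch below is one possible realization; the resource, record, and values are placeholders, and the style itself is implementation agnostic.

  import org.springframework.web.bind.annotation.GetMapping;
  import org.springframework.web.bind.annotation.PathVariable;
  import org.springframework.web.bind.annotation.RestController;

  // The server exposes only business data (serialized to JSON by the
  // framework); all Model-View-Controller logic runs in the browser.
  record Product(String sku, String name, double price) {}

  @RestController
  class ProductResource {
      @GetMapping("/products/{sku}")
      Product product(@PathVariable String sku) {
          // Lookup elided; no HTML or view template is rendered here.
          return new Product(sku, "Example product", 19.99);
      }
  }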

This presentation gives an overview of SOFEA and associated architectural, system, and software concepts. It also covers best practices and lessons learned during recent deployments to military operational production environments.

Download this presentation now.

Fusion Methods

Lean and Mean Architecting with Risk- and Cost-Driven Architecture (RCDA)
Eltjo Poort, CGI

Amid the abundance of software methodologies, there is a recent trend toward another paradigm for software development. Several groups from industry and academia are calling for a return to the essentials, or "lean and mean" models. This trend strongly resonates with the way CGI has been improving architecting practices since 2007.

CGI has developed Risk- and Cost-Driven Architecture (RCDA) to support architects in a pragmatic, lean manner. RCDA consists of a set of principles and practices that have been harvested from practitioners' experiences, supplemented by insights from literature and research, and validated by CGI's architecture community.

RCDA contains guidance for architects on a more practical, solution-oriented level than enterprise architecture approaches, while being generic enough to help architect solutions that incorporate multiple technologies and architecture layers. Architects who try to apply a fixed architecting process (like TOGAF's ADM) often have problems fitting such a process into existing sales, design, and development processes. By separating architecting practices from the process, RCDA allows for broad usage of good architecting practices without forcing teams to adopt a completely new process. RCDA's "best-fit practice" approach makes it easy for architects to apply its guidance in existing organizations. Hundreds of CGI architects have been trained in RCDA since 2010, and they report a significant positive impact on their architecting work.

Each RCDA practice set contains core practices and supporting practices:

  • In the Requirements Analysis practice set, the requirements originating from the stakeholders are prepared for shaping a solution.
  • The Solution Shaping practice set contains practices to define, document, and cost a solution's architecture based on the driving architectural concerns.
  • The Architecture Validation practice set contains practices aimed at validating the architecture developed in previous steps against the stakeholders' needs.
  • The Architecture Fulfillment practice set is about making sure that the architecture developed and validated in previous steps is now actually implemented in the solution in the most effective way.

Each RCDA practice contains a coordinated set of activities that can easily be integrated within existing design and development processes. Linked together, the RCDA core practices form a powerful CMMI-compliant architecting process. RCDA provides guidance on how to use the appropriate practices for every architecting situation and to omit practices that would just add waste in a particular context, making it a lean architecting approach.

RCDA is based on four key principles:

  1. Cost and risks drive architecture.
  2. Architecture should be minimal.
  3. Architecture is both blueprint and decisions.
  4. A solution architect is a decision maker.

These principles are applied throughout the individual RCDA practices, giving the approach conceptual integrity. The first principle—cost and risks drive architecture—gives the approach its name: the concerns that have the most impact on risk and costs have the highest architectural significance. This principle makes RCDA mean, enabling architects to focus on what really matters to their stakeholders. RCDA's workflow uses architectural concerns as a backlog prioritized by risk and cost, making RCDA agile and making it easier for architects to deal with change.
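The backlog idea can be sketched in a few lines; the simple 1-5 risk and cost ratings and their sum below are editorial assumptions, standing in for whatever scoring an RCDA practitioner would actually use.

  import java.util.Comparator;
  import java.util.List;

  // Architectural concerns form a backlog ordered by their impact on
  // risk and cost, per RCDA's first principle.
  record ArchitecturalConcern(String description, int riskImpact, int costImpact) {
      int significance() { return riskImpact + costImpact; }
  }

  class ConcernBacklog {
      List<ArchitecturalConcern> prioritize(List<ArchitecturalConcern> concerns) {
          return concerns.stream()
                         .sorted(Comparator.comparingInt(ArchitecturalConcern::significance)
                                           .reversed())
                         .toList();
      }
  }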

Download this presentation now.

Product Analysis Jump-Start Method: Consider the Big Picture Before You Sprint into Your Project
Stephen Letourneau, Sandia National Laboratories

Often projects begin without a clear or common understanding of the problem to be solved or the solution that will be delivered. Sometimes projects jump right into building with the hope that the system architecture will evolve over time. There are several workshops, methods, and events available that are often ignored because they are viewed as overkill for most projects.

This presentation will describe a hybrid method that borrows from the Quality Attribute Workshop, the Inception Phase objectives of the Unified Process, and business process development from Lean Six Sigma's Kaizen event.

This practice can be tailored and used to jump-start almost any software project, even projects that are developed using agile methodologies.

Download this presentation now.

Introducing Design Pattern-Based Abstraction Modeling Construct as a Software Architecture Compositional Technique
Sargon Hasso, Wolters Kluwer
Robert Carlson, Illinois Institute of Technology

We propose a technique that uses design patterns as abstract modeling constructs to assemble collaborative subsystems built independently in large software applications. Given a set of requirements structured as design problems, we can solve each problem individually, creating components whose solutions are based on design patterns. Much of the published literature on design patterns devotes considerable effort to describing this problem-pattern association. However, there is no systematic, practical way to integrate those individual solutions. Using patterns as integration mechanisms is different from using them, as originally conceived, as solutions to design problems.

Our compositional model abstracts design patterns' collaboration using role-modeling constructs. To describe this collaboration model, we specify design patterns as role models: for each design pattern, we examine its participants' collaboration behavior and factor out their responsibilities. A responsibility is a collection of behaviors, functions, tasks, or services. We then specify the resulting role model much like a collaboration model in UML. Our approach describes how to transform a design pattern into a role model that we can use to assemble a software architecture. The proposed approach offers complete, practical design and implementation strategies, adapted from the DCI (Data, Context, and Interaction) architecture, using a set of techniques with which most software engineers are familiar. We demonstrate our technique with a simple case study, complete with design and implementation code. To support readers in following the approach, we also present a simple process with guidelines on what to do and how to do it.

The approach presented in this research is of practical importance. The theory serves only to validate the concrete implementation and to generalize it to a variety of implementation strategies. The key concepts to take away are these: First, design patterns' principal properties are used as abstraction modeling constructs through collaboration. Second, the proposed approach allows for partial and evolutionary design. Third, role-to-object mapping is really a binding mechanism that can be exploited through a duality principle: either domain-object discovery or object-role allocation can be deferred.

The approach is scalable without adding complexity and should work with any design pattern once its collaboration model is identified. The rationale used to select a design pattern to solve design problems should also work for selecting a design pattern to solve integration problems. The design and implementation approach that we present creates a new design paradigm that appears complex at first, but once learned, it becomes another powerful tool in a designer's skill set. The compositional model requires creating abstractions out of design patterns' behavioral collaboration models; it is therefore not as straightforward as traditional techniques like aggregation or generalization. Furthermore, since the implementation strategy follows more or less in the footsteps of the DCI architecture, it also suffers from the same added overhead introduced by that architecture.
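
As a rough illustration of the role-to-object binding discussed above, the sketch below (our own, loosely in the DCI style rather than the authors' implementation; all names are hypothetical) expresses the Observer pattern's participants as roles and binds them to independently built domain objects inside a context:

    # Sketch: a design pattern's participants factored out as roles, then
    # bound to plain domain objects at runtime by a DCI-style context.
    class SubjectRole:
        """Responsibilities of the Observer pattern's 'subject' participant."""
        def attach(self, observer):
            self._observers = getattr(self, "_observers", [])
            self._observers.append(observer)
        def notify(self, event):
            for obs in getattr(self, "_observers", []):
                obs.react(event)

    class ObserverRole:
        """Responsibilities of the 'observer' participant."""
        def react(self, event):
            print(f"{self.name} saw {event}")

    def bind(obj, role):
        """Role-to-object mapping: graft a role's behavior onto an object."""
        obj.__class__ = type(obj.__class__.__name__, (obj.__class__, role), {})
        return obj

    class Sensor:                      # domain objects built independently,
        pass                           # with no knowledge of the pattern
    class Display:
        def __init__(self, name):
            self.name = name

    class MonitoringContext:           # the context wires the collaboration
        def __init__(self, sensor, display):
            self.sensor = bind(sensor, SubjectRole)
            self.sensor.attach(bind(display, ObserverRole))
        def run(self):
            self.sensor.notify("temperature spike")

    MonitoringContext(Sensor(), Display("console")).run()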

Download this presentation now.

Mobile Computing

Architecture Patterns for Mobile Systems in Resource-Constrained Environments
Grace Lewis, Jeff Boleng, Gene Cahill, Edwin Morris, Marc Novakouski, James Root, and Soumya Simanta, SEI

First responders and others operating in crisis environments at the tactical edge increasingly use handheld devices to help with tasks such as face recognition, language translation, decision making, and mission planning.

These resource-constrained environments are characterized by limited processing power and battery life of handheld devices, unreliable networks with limited and inconsistent bandwidth, uncertainty of available infrastructure and connectivity to the enterprise, and high cognitive load on end users. This presentation will cover three architecture patterns that address these challenges:

  • The Data Source Integration pattern relies on server-side standardized definitions of live or cached geo-located data feeds that can be customized and filtered on a single map-based user interface on a mobile device. This pattern addresses the limitations of unreliable networks, uncertainty of connectivity to the enterprise, and high cognitive load.
  • The Group Context Awareness pattern takes advantage of the fact that first responders and other groups at the edge typically operate in teams. It uses the context obtained from groups of handheld devices to make better decisions about how and when to disseminate and display information, taking advantage of the communication mechanisms available at the moment. This pattern addresses the challenges of limited battery life, uncertainty of available infrastructure, and high cognitive load.
  • The Cloudlet-Based Cyber-Foraging pattern relies on the use of cloudlets as code-offload elements to optimize resources and increase the computation capability of mobile devices. Cloudlets are discoverable, localized, stateless servers running one or more virtual machines to which soldiers can offload resource-intensive computations from their mobile devices. The pattern addresses the challenges of limited processing power and battery life, unreliable networks, uncertainty of available infrastructure, and uncertainty of connectivity to the enterprise. (A minimal sketch of the offload decision follows this list.)
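
The offload decision at the heart of the cyber-foraging pattern can be sketched as follows (our illustration, not the SEI prototype; the host name, port, and task are hypothetical): discover a cloudlet on the local network, offload to it if it answers, and fall back to on-device computation otherwise.

    # Sketch of cloudlet-based cyber-foraging: prefer a nearby cloudlet,
    # degrade gracefully to local computation when none is reachable.
    import socket

    def discover_cloudlet(host="cloudlet.local", port=8080, timeout=0.5):
        """Return an address if a cloudlet answers on the local network."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (host, port)
        except OSError:
            return None

    def recognize_locally(image):
        return "local:" + str(hash(image))      # stand-in for on-device work

    def recognize_on_cloudlet(addr, image):
        return "cloudlet:" + str(hash(image))   # stand-in for the remote call

    def recognize(image):
        addr = discover_cloudlet()
        if addr is not None:                    # offload saves battery and CPU
            return recognize_on_cloudlet(addr, image)
        return recognize_locally(image)         # slower, but always available

    print(recognize(b"camera-frame-bytes"))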

Prototype applications have been implemented for each of these patterns. Experimental results and participation in simulated exercises have shown the effectiveness of the patterns in addressing the challenges of resource-constrained environments.

Download this presentation now.

eMontage: An Architecture for Rapid Integration of Situational Awareness Data at the Edge
Soumya Simanta, Gene Cahill, and Edwin Morris, SEI

First responders and others operating in crisis environments at the tactical edge increasingly make use of handheld devices in their missions. In such environments, rapid data integration for effective situational awareness is an important requirement. To address this use case, we designed a system called eMontage that allows rapid integration of data from remote situational-awareness data sources. This capability gives first responders and warfighters in resource-constrained environments access to relevant data on a single mobile device with a consistent user interface.

Specific objectives for eMontage include rapid incorporation of new data sources (e.g., sources unique to or available at the site, national and international sources, corporate sources, and charitable sources); minimized information load for users (i.e., only the right information at the right time); user control of that information load to the extent possible; and ease of use that reduces the user's training time and learning curve. An architecture for accessing and filtering data from multiple sources provides benefits such as combining data from real-time and historical sources, operating in connected or disconnected modes, supporting individual selection and filtering of data, and integrating data from multiple sources.

We will present the framework, architecture, alternatives, tradeoffs, and implementation details of the prototype. Key system characteristics of eMontage include rule-based runtime filtering, a unified user interface for all data sources, an extensible set of data sources, minimized bandwidth utilization, offloading of resource-intensive tasks, low latency, and disconnected operations. Our prototype solution enables users to construct geospatial data mashups that incorporate local and remote data from Department of Defense systems and other publicly available real-time and historical data sources such as Twitter, Foursquare, Flickr, and the National Weather Service.
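
As a rough sketch of rule-based runtime filtering over heterogeneous geotagged feeds (our illustration, not eMontage code; the feeds, coordinates, and rules are hypothetical):

    # Sketch: records from several feeds normalized into one list, then
    # filtered at runtime by composable rules (plain predicates).
    from math import hypot

    records = [
        {"source": "twitter", "lat": 40.44, "lon": -79.99, "tags": ["flood"]},
        {"source": "nws",     "lat": 40.45, "lon": -80.00, "tags": ["warning"]},
        {"source": "flickr",  "lat": 41.00, "lon": -81.00, "tags": ["photo"]},
    ]

    def near(lat, lon, radius_deg):
        """Crude planar distance in degrees; adequate for a sketch."""
        return lambda r: hypot(r["lat"] - lat, r["lon"] - lon) <= radius_deg

    def tagged(tag):
        return lambda r: tag in r["tags"]

    def apply_rules(records, rules):
        return [r for r in records if all(rule(r) for rule in rules)]

    # Show only flood-related items close to the responder's position.
    for r in apply_rules(records, [near(40.44, -79.99, 0.1), tagged("flood")]):
        print(r["source"], r["tags"])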

Download this presentation now.

Adapting View Models as a Means for Sharing User Interface Code Between OS X and iOS
Dileepa Jayathilake, 99X Technology

This work describes a solution to a costly problem that surfaces in software product engineering on the Objective-C technology stack. Specifically, it emerges when designing a software product that targets both OS X and iOS and in which a significant part of the functionality is common to both platforms. This raises the question of how much code can be reused between the two implementations.

Architects will need to probe myriad concerns, and the solution can easily fall victim to combinatorial explosion. Analysis in terms of design patterns can put things in perspective. Both Cocoa and Cocoa Touch (the standard frameworks for application development on OS X and iOS, respectively) strongly encourage embracing model-view-controller (MVC) as a design pattern; using certain parts of the two frameworks even requires it. Therefore, it is wise to keep our solution inside the MVC design paradigm.

Yet strict adherence to traditional MVC hinders the architect from harnessing the power of reusing user-interface semantics between OS X and iOS. Furthermore, Cocoa and Cocoa Touch differ in the degree of support they provide for implementing the observer pattern between models and views. Thus, the solution for implementing generic user-interface logic must stay outside of the framework-provided, and hence platform-specific, view and controller base classes. However, user-interface logic does not naturally fit inside models in MVC either. A diligent analysis makes clear that what is needed is a cluster of classes, each of which stands as an abstraction of a particular view. This relates closely to the idea of view models in the model-view-viewmodel (MVVM) design pattern that is popular in Windows application development. In the MVVM paradigm, a view model is a container for the data that are shown and manipulated in a view, along with the behavioral logic that rules user interaction with the view. A view model is not supposed to know about specific view instances; rather, it is the view that hooks into the view model.

As a means of encapsulating generic user-interface logic, we experimented with implementing view models to supplement the MVC design. Our architecture comprised models, views, controllers, and view models. This scheme enables developers to implement common user-interface logic in view models that are shared between the two implementations. In addition to the key benefit of improved code reuse, it delivers other advantages, including enhanced readability, maintainability, and testability.
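
The following minimal sketch conveys the idea in Python rather than the speaker's Objective-C (the class and its fields are hypothetical): a platform-neutral view model holds the shared data and interaction logic, and each platform's view hooks into it.

    # Sketch: one view model carries the user-interface logic; a Mac view
    # and an iPad view would each bind to it, so the logic is written once.
    class LoginViewModel:
        def __init__(self):
            self.username = ""
            self.password = ""
            self._observers = []          # views register callbacks here

        def bind(self, callback):
            self._observers.append(callback)

        @property
        def can_submit(self):             # shared interaction rule
            return bool(self.username) and len(self.password) >= 8

        def update(self, **fields):
            for name, value in fields.items():
                setattr(self, name, value)
            for notify in self._observers:
                notify(self)              # views refresh from the view model

    vm = LoginViewModel()
    vm.bind(lambda m: print("submit enabled:", m.can_submit))
    vm.update(username="grace", password="correct-horse")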

View models generally sound alien to the Objective-C world. However, our experimentation with this design yielded clear evidence that view models, employed in combination with MVC in architectures for products targeting both OS X and iOS, carry enormous potential to boost code reuse, in addition to several other advantages. We tested the approach in a product developed for both Mac and iPad and believe that it will serve as a generic design pattern for similar cases.

Download this presentation now.

Architectural Evaluation

All Architecture Evaluation Is Not the Same: Lessons Learned from More Than 50 Architecture Evaluations in Industry
Matthias Naab, Jens Knodel, and Thorsten Keuler, Fraunhofer IESE

Architecture evaluation has become a mature subdiscipline in architecting with much high-quality practical and scientific literature available. The literature does a good job of describing methods for evaluating particular quality attributes. However, detailed information on characteristics and context factors in concrete industrial settings is harder to find. After performing more than 50 architecture evaluations for industrial customers in recent years, we have collected interesting facts and findings about architecture evaluations in practice. In this presentation, we share these with other practitioners and researchers.

This session should be of special interest to two groups of stakeholders: those who need insights about their systems and might want to commission an architecture evaluation, and those who are actively involved in performing architecture evaluations. We demonstrate a spectrum of diversity in architecture evaluations that might surprise even experienced practitioners.

Our main goal is to present the condensed experiences of more than 50 architecture evaluations. This will enable practitioners to classify their own architecture evaluations and to gain inspiration on the general topic of architecture evaluation. We package our lessons learned, the commonalities, and the unique factors of concrete cases, and we describe the architecture evaluation projects according to different characteristics. For each characteristic, we outline the range of experiences and show illustrative examples.

First, we describe the evaluation projects according to contextual factors:

  • What is the organizational constellation of the architecture evaluation? Is the company ordering the evaluation also the one developing the product evaluated?
  • Which stakeholder ordered the architecture evaluation?
  • In what context was the architecture evaluation ordered? Was the product in trouble, or was this a proactive measure?
  • What was the key goal? What questions were used to evaluate it?
  • What was the system under evaluation (anonymized, drawn from diverse industries)?

Second, we describe the planning and setup of the architecture evaluation project itself:

  • How much effort do architecture evaluation projects require, and how is effort distributed between the team responsible for the product under evaluation and the evaluation team?
  • Which architecture evaluation methods were used to answer the evaluation questions?

Third, we report on outcomes of the evaluation projects:

  • What were the key results and findings of the architecture evaluations?
  • What follow-up activities did customer organizations engage in after the architecture evaluation?
  • What further benefits did customers gain from the architecture evaluation?

By reporting these experiences, we give practitioners an overview of the nature and characteristics of industrial architecture evaluations, complementing the available literature on architecture evaluation methods. Practitioners should thus be better able to judge their own situations, to know when architecture evaluations might be helpful, and to understand what they can expect from an evaluation.

Download this presentation now.

Leveraging Simulation to Create Better Software Systems in an Agile World
Jason Ard and Kristine Davidsen, Raytheon Missile Systems

Software developers need to deliver reliable, complex software-centric systems for use on hardware that is of limited availability. Often this must be accomplished quickly and cost-effectively in an environment where there are inevitable unknowns as we push the technology envelope. Products may not have a fully defined set of initial requirements, and system intricacies are not well understood in the early stages of new product development. We have found that an iterative development cycle that leverages software in simulation to model and test the product affords us the opportunity to quickly stand up a system and begin to mitigate risk from Day 1.

In our presentation, we describe how simulation acts as a design and development aid throughout the life cycle of three software-centric products we have worked on. We will show the benefits of using simulation to prototype, demonstrate functionality of representative hardware, and provide early system performance feedback.

Agile software development emphasizes frequent delivery of working software that addresses the most valuable business needs. In our experience, this can be challenging due to the lack of testing facilities and the limited availability of hardware. To address these challenges, we use simulations as prototyping and testing tools for both subsystems and the full end-to-end system. We develop simulations that exercise the deliverable software product, allowing us to demonstrate working software for customers and stakeholders early in the development cycle. We will discuss examples where such demonstrations have provided meaningful insight into the product output and facilitated the early refinement of requirements, reducing project cost and risk.
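
The underlying pattern can be sketched as follows (our illustration, not Raytheon's framework; the actuator model and tolerance are hypothetical): the deliverable software depends on a hardware interface, and a simulation implements that same interface so the real code can be exercised end to end before hardware is available.

    # Sketch: production code drives an interface; a simulation implements
    # the interface so end-to-end tests can run without scarce hardware.
    class Actuator:                       # interface the product code expects
        def command(self, setpoint): ...
        def position(self): ...

    class SimulatedActuator(Actuator):
        """First-order lag model standing in for the real device."""
        def __init__(self):
            self._pos = 0.0
        def command(self, setpoint):
            self._pos += 0.5 * (setpoint - self._pos)
        def position(self):
            return self._pos

    def control_loop(actuator, target, steps=10):
        """Deliverable product code, unaware it is driving a simulation."""
        for _ in range(steps):
            actuator.command(target)
        return actuator.position()

    # The same test can run later against the real actuator implementation.
    assert abs(control_loop(SimulatedActuator(), 1.0) - 1.0) < 0.01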

In one example, we examine a simulation framework created to exercise the end-to-end (though abbreviated) functionality of the software and show how it was used to conduct system-level analysis within one month, providing valuable information for design decisions. This case study illustrates how subsystem prototyping can be extended to demonstrate the potential impact of design decisions as they are being evaluated.

Our projects achieved success by maturing the simulation incrementally for use in testing and analysis of the functional product software. Simple simulation models were used to exercise production software and retained their value as they matured along with the deliverable software-centric product. This agile, incremental software development approach benefited our projects by allowing accurate reporting of progress to project stakeholders and by giving designers and developers a clearer view of what work remained. We show how such product demonstrations exposed interface and integration issues that, if found later in the program, would have resulted in costly rework during system integration.

By leveraging simulation as a tool for prototyping, system-level testing, design analysis, interface development, and product demonstration, we show that simulation-based software development creates better software systems.

Download this presentation now.

Test-Driven Non-Functionals? Test-Driven Non-Functionals!
Wilco Koorn, Xebia

We frequently observe software development teams that have adopted a "test-driven" approach to software engineering: they write a test first and only then write an implementation that makes the test succeed. This approach is quite common at the unit-test level, where it is supported by many tools such as JUnit and TestNG. It is less common at the functional-test level, but we also find tool support there, such as FitNesse and Selenium. Remarkably enough, the test-driven approach seems to be lacking at the nonfunctional-test level. We propose applying a test-driven approach to nonfunctional requirements wherever possible. In this presentation, we investigate which types of nonfunctional requirements are suitable for a test-driven approach and show how we applied the idea to an application requiring "scalability" in industrial practice. Finally, we make some remarks on how to prevent waste during development, as teams might easily do too much work by over-optimizing the system.
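
A minimal sketch of what writing such a nonfunctional test first might look like (our illustration, not the speaker's setup; the threshold and workload are hypothetical):

    # Sketch: a scalability requirement captured as a test that exists
    # before the implementation is tuned, in the test-driven spirit.
    import time

    def handle_request(payload):          # implementation to be written/tuned
        return payload.upper()

    def test_throughput():
        """Requirement: 10,000 requests complete within one second."""
        start = time.perf_counter()
        for i in range(10_000):
            handle_request(f"request-{i}")
        elapsed = time.perf_counter() - start
        assert elapsed < 1.0, f"too slow: {elapsed:.2f}s for 10k requests"

    test_throughput()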

Download this presentation now.

Governance and Education

Enabling Software Excellence at a Hardware Company
Sascha Stoeter, John Hudepohl, Fredrik X. Ekdahl, and Brian P. Robinson, ABB

ABB's Software Development Improvement Program strives toward continuous improvement by assessing progress and regularly validating achievements using industry models such as Capability Maturity Model Integration and other best practices as a reference. While the initiative is planned and coordinated by a small team from ABB headquarters, the people involved are embedded in all five of ABB's divisions.

We present our four years of experience in building and nourishing a software community within a systems engineering company dominated by hardware. We explain how we provide practices, tools, and training for thousands of developers and their managers spread across every continent but one. You will learn about our successes as well as the challenges and lessons learned so far.

Download this presentation now.

Mission Thread Workshop: Preparation and Execution
Tim Morrow, SEI

Architecting and developing a system of systems (SoS) is a complex and daunting task. We are all familiar with integration and operational problems between a system and its software architecture due to inconsistencies, ambiguities, and omissions in addressing quality attributes. These problems are further exacerbated in an SoS because you are typically dealing with a number of existing systems that are themselves evolving and being integrated to provide new capabilities. In this context, you are dealing with different program offices, contractors, engineering disciplines, and program life cycles that are being mashed together to form the SoS. Functionality and capability are critically important, but the architecture must be driven by the quality attributes. Specifying and addressing quality attributes early and evaluating the architecture (system and SoS) to identify risks are key to success.

The Carnegie Mellon Software Engineering Institute (SEI) has developed a number of methods that, when combined, form the basis for a software architecture-centric engineering approach. The Quality Attribute Workshop (QAW) and the Architecture Tradeoff Analysis Method (ATAM) were two of the methods developed to facilitate discussion with stakeholders at the software architecture level. We wanted to extend the approach to effectively treat SoS considerations, and we were familiar with the systems engineering concept of mission threads (or workflows): a sequence of steps conducted at the various nodes in the SoS in response to a stimulus. The resulting Mission Thread Workshop (MTW) is a facilitated, stakeholder-centric exercise whose purpose is to help elicit and define requirements, address engineering considerations, and uncover architectural challenges and capability gaps for an SoS.

We conceived of extending the concept of a mission thread to include quality attributes in the same manner in which scenarios had extended use cases. However, we discovered that developing the mission threads at the MTW itself, in a way similar to developing scenarios in a QAW, was overly cumbersome. Developing mission threads in advance, by working with subject-matter experts, proved to be a key aspect of a successful MTW. Operational, sustainment, development, and acquisition mission threads are created and refined with a few key leads of the sponsoring organization to create a starting point for the workshop. This preparation enables the stakeholders participating in the MTW to focus on validating the threads, determining gaps in them, identifying architectural and engineering issues, and considering the quality attributes both at individual steps and across the whole thread. After the MTW, the findings are summarized as a set of challenges facing the SoS architects. Like its predecessors, the MTW is a repeatable technique that works equally well in commercial and DoD contexts.
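
As a rough sketch (ours, not from the report; the steps and nodes are hypothetical), a mission thread can be captured as an ordered list of steps at SoS nodes, each annotated with the quality attributes to probe at that step, which makes gaps easy to spot:

    # Sketch: a mission thread as data; an empty annotation flags a step
    # whose quality attributes have not yet been considered.
    mission_thread = [
        {"step": "Sensor detects event",       "node": "UAV",
         "quality_attributes": ["accuracy", "latency"]},
        {"step": "Track sent to command post", "node": "datalink",
         "quality_attributes": ["availability", "security"]},
        {"step": "Response decision made",     "node": "command post",
         "quality_attributes": []},
    ]

    for s in mission_thread:
        gap = "  <-- gap: no quality attributes" if not s["quality_attributes"] else ""
        print(f'{s["node"]}: {s["step"]}{gap}')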

The presentation is based on an upcoming MTW technical report that describes the steps of an MTW and the engagement approach developed from lessons learned with commercial and DoD organizations. The engagement approach consists of three phases, a timeline for the different activities, the inputs and outputs associated with each activity, and examples of the artifacts the activities produce.

Download this presentation now.

Enterprise Architecture for the "Business of IT"
Charlie Betz

IT organizations represent significant capital and operational commitments, and business is increasingly dependent on IT performance. Yet in many ways, IT remains immature and under-managed relative to peer functions. Author and practitioner Charlie Betz has focused on applying enterprise architecture principles to the "business of IT" for the past ten years and has found recent inspiration in the principles of Lean Management. Come and hear him discuss

  • a formal enterprise architecture for IT management, including process, function, data, and systems models
  • characteristic design patterns seen in organizing large-scale IT management capabilities
  • defining and distinguishing a true Lean IT process model from older functional representations of IT such as ITIL
  • continuous improvement for IT management, and why a sound IT management data architecture is so important

Download this presentation now.

Panel Discussion

How to Architect Long-Living Software Systems
Facilitator
Rick Kazman, University of Hawaii

Participants:
Ian Gorton, SEI
Stephan Murer, Credit Suisse
Martin Naedele, ABB
Linda Northrop, SEI
Don O'Connell, Boeing

Architects designing long-living systems often need to balance stability and flexibility. Systems must endure certain changes but also respond to other changes quickly. In this context, architects face a number of challenges: Technologies evolve rapidly; for example, ten years ago no one designed web apps for smartphones. Markets change dynamically, as seen in the disappearance of numerous software vendors of tools for enterprise resource planning and business intelligence. Knowledge vaporizes due to attrition or retirement, which leads to architecture erosion. And conflicting quality attributes—such as safety and security, resilience and scale, time-to-market, and sustainability—are difficult to balance. Are established design principles sufficient to address these challenges? Do successful organizations employ proven practices, or do they have hidden secrets? This panel will discuss experiences in which unexpected architectural approaches led to successful longevity. It will explore when to reach into the known architect's toolbox and when to step out of the box.
