Lambda-WBA in an Industrial Setting

An industrial producer of automation equipment approached the PACC Performance team to analyze their product and ensure that it met its performance requirements. The concern was that critical, real-time deadlines could be missed in some configurations of the product. Their goal was the creation of technology that could predict whether all configurations of their product would meet their critical timing requirements. To that end, the PACC Performance team created LambdaWBA (Lambda for latency, WBA for Worst-case with Blocking and Asynchrony), building on the existing LambdaABA (Average-case with Blocking and Asynchrony) reasoning framework, to help this industrial partner satisfy that goal.

To approach this problem, a specific configuration of the automation equipment was selected, consisting of hardware, software, specific analog and digital inputs and outputs, data rates, and application settings. Instrumentation and subsequent measurement of the runtime software were used to characterize the performance behavior (i.e., thread period, thread latency, and thread execution time) and thread interactions (i.e., semaphore communication) of all threads executing in the runtime system. These measurements and interactions were then used to create an abstract component-and-connector model of the system using the PACC Starter Kit and its CCL description language. The interactions were used to indicate inter-thread communication as well as thread stimulus and response. The measurements were used to describe the performance characteristics of each thread for this configuration: each thread was characterized by its minimum, average, and maximum observed execution time. The minimum and maximum (or worst-case) execution times were those observed during a five-minute period while the automation equipment experienced the highest selected data rates.
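
To illustrate the characterization step, the following sketch reduces a set of measured execution-time samples to the minimum, average, and maximum values used in the model. It is a minimal stand-in for the actual PACC Starter Kit tooling; the thread names and sample values are hypothetical.

    # A minimal sketch (not the actual PACC Starter Kit tooling) of how measured
    # execution-time samples reduce to the min/avg/max characterization used in
    # the abstract model. Thread names and sample values are hypothetical.

    from statistics import mean

    def characterize(samples_us):
        """Return (min, avg, max) observed execution time in microseconds."""
        return min(samples_us), mean(samples_us), max(samples_us)

    # Samples collected over the five-minute observation window at the highest
    # selected data rate (illustrative numbers only).
    measurements = {
        "input_scan_thread":   [212, 220, 231, 245, 260],
        "control_loop_thread": [780, 795, 810, 840, 905],
    }

    for thread, samples in measurements.items():
        lo, avg, hi = characterize(samples)
        print(f"{thread}: min={lo}us avg={avg:.1f}us worst-case={hi}us")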

This abstract model was then transformed by the reasoning framework's interpretation [1] into a performance model suitable for rate monotonic analysis (RMA) [2] to predict whether two specific threads within the runtime system would complete before their deadlines (defined as the end of their respective periods). This prediction was based on the worst-case execution times observed for all threads during the instrumentation and measurement activity described above. The performance reasoning framework then translated the performance model into the input language of MAST [3] to automatically determine whether the runtime system, as described, was schedulable, meaning that the two threads would complete before their deadlines. We were able to show that the system as configured was schedulable.
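
As an illustration of the kind of check that MAST automates, the sketch below applies the standard fixed-priority response-time test from RMA to a hypothetical task set. It omits the blocking and asynchrony terms that the actual analysis accounts for, and the task parameters are invented for the example.

    # Illustrative sketch of the fixed-priority response-time test that a tool
    # like MAST automates; it omits the blocking and asynchrony terms the real
    # analysis includes. Task parameters are hypothetical, with each deadline
    # equal to its period and tasks listed highest priority first.

    import math

    def response_time(task, higher_priority):
        """Iterate R = C + sum(ceil(R / T_j) * C_j) to a fixed point."""
        C, T = task
        R = C
        while True:
            R_next = C + sum(math.ceil(R / Tj) * Cj
                             for Cj, Tj in higher_priority)
            if R_next == R or R_next > T:  # converged, or deadline already missed
                return R_next
            R = R_next

    # (worst-case execution time, period) in microseconds
    tasks = [(260, 1000), (905, 5000)]

    for i, task in enumerate(tasks):
        R = response_time(task, tasks[:i])
        status = "schedulable" if R <= task[1] else "NOT schedulable"
        print(f"task {i}: response time {R}us, deadline {task[1]}us -> {status}")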

Additionally, we used the LambdaABA reasoning framework to predict the average latency of these two threads as an indicator of the accuracy of the abstract model created for this development. Our hypothesis was that if our model was a sufficient representation of the runtime system, we would see a low magnitude of relative error (MRE) between the average response time observed in the runtime system and that predicted by LambdaABA. This time the performance reasoning framework translated the model into the input language of SIM-MAST to generate a prediction of the average response time for the two threads of interest in the runtime system. With those predictions in hand, the automation equipment was executed repeatedly to produce a statistical basis for comparing multiple runs against the predicted response times. A 99% confidence interval was calculated (see [4]) indicating that 99% of future observations would fall within 10% MRE. These results supported our hypothesis that the abstract model we created was a sufficient representation of the runtime system upon which the deadline-satisfaction results were based. This experience demonstrated the applicability and usefulness of the LambdaWBA performance reasoning framework.
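
The comparison itself reduces to computing the MRE between the predicted and observed average latencies for each run. The sketch below shows that calculation with hypothetical values in place of the actual SIM-MAST predictions and measurements.

    # Minimal sketch of the magnitude-of-relative-error (MRE) comparison between
    # the SIM-MAST prediction and repeated observations; all values hypothetical.

    def mre(observed, predicted):
        """Magnitude of relative error of a prediction against an observation."""
        return abs(observed - predicted) / observed

    predicted_avg_latency_us = 1180.0                            # predicted average latency
    observed_avg_latency_us = [1152.3, 1201.7, 1169.4, 1230.8]   # one observation per run

    errors = [mre(obs, predicted_avg_latency_us) for obs in observed_avg_latency_us]
    print("per-run MRE:", [f"{e:.2%}" for e in errors])
    print("all runs within 10% MRE:", all(e <= 0.10 for e in errors))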

Future Work

From this experience, we concluded that a more principled (yet practical) approach is needed for settling on a worst-case execution time. It has long been known [5] that various effects (e.g., caching) inject non-determinism that makes the true worst-case execution time difficult to establish. Our industrial partner understood this limitation, and for this initial use of LambdaWBA it was deemed sufficient to use a five-minute observation period under the highest input data rate. In the long run, however, that approach alone is not practical. The goal for moving forward with the LambdaWBA performance reasoning framework is to establish a practical approach for gaining high confidence in the observed worst-case execution time of certified components in safety-critical systems.

References

[1] Bass, L.; Ivers, J.; Klein, M.; & Merson, P. Reasoning Frameworks (CMU/SEI-2005-TR-007). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2005.

[2] Klein, M.; Ralya, T.; Pollak, B.; Obenza, R.; & González Harbour, M. A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems. Boston, MA: Kluwer Academic Publishers, 1993.

[3] González Harbour, M.; Gutiérrez García, J. J.; Palencia Gutiérrez, J. C.; & Drake Moyano, J. M. "MAST: Modeling and Analysis Suite for Real Time Applications." Proceedings of the 13th Euromicro Conference on Real-Time Systems (ECRTS'01), 2001.

[4] Hissam, S.; Hudak, J.; Ivers, J.; Klein, M.; Larsson, M.; Moreno, G.; Northrop, L.; Plakosh, D.; Stafford, J.; Wallnau, K.; & Wood, W. Predictable Assembly of Substation Automation Systems: An Experiment Report, Second Edition (CMU/SEI-2002-TR-031, ADA418441). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 2003.

[5] Basumallick, S. & Nilsen, K. D. "Cache Issues in Real-Time Systems." ACM SIGPLAN Workshop on Language, Compiler, and Tool Support for Real-Time Systems, June 1994.
