Distributed systems are becoming an important part of safety-critical and mission-critical systems. They are also being endowed with more autonomy and coordination capabilities to increase effectiveness and reduce operator overload. They often operate in uncertain environments and must meet both guaranteed requirements (e.g., collision avoidance) and best-effort requirements (e.g., area coverage within a deadline). We are pursuing two research directions in functional verification of such systems:
Probabilistic Verification of Distributed Coordinated Multi-Agent Systems: This project explores analytic methods for predicting the quality of coordination mechanisms for multi-agent systems in uncertain environments (e.g., robots in a minefield). In previous work, we developed, implemented, and validated highly scalable, compositional, probabilistic model checking algorithms for such systems with limited coordination. Our current focus is on using probabilistic model checking to make predictions about teams of agents with more complex coordination. Our approach is to first build a Discrete Time Markov Chain (DTMC) model for each agent, based on running and observing that agent individually. Next, these models are composed and verified using the probabilistic model checker PRISM to obtain the prediction. We have proved the correctness of this approach formally and implemented it. We have also developed a way to quantify the error in our predictions. Such errors are unavoidable because our models are constructed from a finite number of observations.
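The pipeline above can be illustrated with a minimal sketch: estimate a per-agent DTMC from observed transition counts, check a reachability property on it, and bound the estimation error. The state space, counts, and property here are illustrative assumptions, and the reachability routine is a simple iterative stand-in for what PRISM computes; it is not the project's actual algorithm.

```python
import math
import numpy as np

def estimate_dtmc(transition_counts):
    """Row-normalize observed transition counts into a DTMC transition matrix."""
    counts = np.asarray(transition_counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def reachability_probability(P, goal, n_steps=50):
    """Probability of reaching `goal` within n_steps, starting from state 0.
    (A tool such as PRISM checks such PCTL properties; this is only a sketch.)"""
    P = P.copy()
    # Make the goal state absorbing so probability mass accumulates there.
    P[goal, :] = 0.0
    P[goal, goal] = 1.0
    dist = np.zeros(P.shape[0])
    dist[0] = 1.0
    for _ in range(n_steps):
        dist = dist @ P  # one step of the chain
    return dist[goal]

def transition_error_bound(n_observations, confidence=0.95):
    """Hoeffding-style bound: with the given confidence, each estimated
    transition probability lies within this epsilon of the true value."""
    return math.sqrt(math.log(2 / (1 - confidence)) / (2 * n_observations))

# Hypothetical 3-state agent (0 = searching, 1 = moving, 2 = goal),
# with invented observation counts per (from, to) state pair.
counts = [[5, 4, 1],
          [2, 6, 2],
          [0, 0, 1]]
P = estimate_dtmc(counts)
print(reachability_probability(P, goal=2))   # close to 1 for this chain
print(transition_error_bound(n_observations=10))
```

The error bound shrinks as more observations are collected, which mirrors the point in the text: predictions from finitely many observations carry quantifiable, but unavoidable, error.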
Model Checking Distributed Applications: This project is exploring the use of domain-specific languages, software model checking, and code generation to produce verified distributed applications.
Additional details are available on individual project web pages.