Many CPSs operate in uncertain environments. From the software perspective, this means that some of their inputs are random variables. Given this randomness, the natural way to frame the verification problem is to compute the likelihood that the software satisfies a safety specification (e.g., the likelihood that a periodic real-time task never misses its deadline five times in a row, or the likelihood that the destination is reached within a preset deadline). We are exploring two main techniques for verification of stochastic CPSs:
Probabilistic Model Checking: In this approach, the system is modeled as a Markov chain, and the likelihood that a property holds is computed by constructing a system of equations and solving it numerically. We are exploring the use of probabilistic model checking to evaluate the quality of coordination schemes in distributed multi-agent systems.
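To illustrate the idea, the sketch below computes a reachability probability for a toy discrete-time Markov chain (the states, transition probabilities, and NumPy-based solver are illustrative assumptions, not the project's actual models or tools). For each non-absorbing state s, the probability x_s of eventually reaching the goal satisfies x_s = Σ_t P[s,t]·x_t, with x = 1 at the goal and x = 0 at the failure state; this linear system is then solved numerically.

```python
import numpy as np

# Toy 4-state Markov chain (illustrative only):
# 0 = start, 1 = intermediate, 2 = goal (absorbing), 3 = failure (absorbing).
P = np.array([
    [0.0, 0.9, 0.0, 0.1],
    [0.2, 0.0, 0.8, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

transient = [0, 1]  # states whose reachability probability is unknown
goal = 2

# For each transient state s: x_s = sum_t P[s, t] * x_t, with x = 1 at the
# goal and x = 0 at the failure state.  Rearranged as (I - Q) x = b, where
# Q restricts P to the transient states and b[s] = P[s, goal].
Q = P[np.ix_(transient, transient)]
b = P[transient, goal]
x = np.linalg.solve(np.eye(len(transient)) - Q, b)

print(x[0])  # probability of eventually reaching the goal from state 0
```

Dedicated probabilistic model checkers (e.g., PRISM) automate this construction from a high-level model description; the direct linear solve here just makes the underlying computation explicit.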
Statistical Model Checking: In this approach, each execution of the system is treated as a Bernoulli trial, and the likelihood that a property holds is estimated via Monte Carlo simulation. A key challenge is obtaining a high-precision estimate from a small number of simulations, which is non-trivial for properties whose violations are rare events. We are exploring novel importance sampling techniques to achieve this goal for stochastic CPS software.
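The rare-event difficulty, and how importance sampling addresses it, can be seen in a minimal sketch (a standard textbook example, not the project's technique): estimating the Gaussian tail probability P(X > 4) for X ~ N(0, 1), which is about 3.2e-5. Crude Monte Carlo with 10,000 trials will typically observe zero violations, while sampling from a proposal distribution shifted into the rare region and reweighting by the likelihood ratio recovers an accurate estimate from the same budget.

```python
import math
import random

random.seed(0)

def crude_mc(c, n):
    # Each run is a Bernoulli trial: did the rare event {X > c} occur?
    hits = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) > c)
    return hits / n

def importance_sampling(c, n):
    # Sample from a proposal shifted into the rare region, N(c, 1), and
    # reweight each hit by the likelihood ratio p(x) / q(x), i.e. the
    # ratio of the N(0,1) density to the N(c,1) density at x.
    total = 0.0
    for _ in range(n):
        x = random.gauss(c, 1.0)
        if x > c:
            total += math.exp(-x * x / 2.0 + (x - c) ** 2 / 2.0)
    return total / n

c, n = 4.0, 10_000
exact = 0.5 * math.erfc(c / math.sqrt(2.0))  # P(X > c) for X ~ N(0, 1)
print(exact, crude_mc(c, n), importance_sampling(c, n))
```

In a CPS setting, each "sample" is a full system simulation, and the research challenge is choosing a proposal distribution over execution traces whose likelihood ratio keeps the estimator's variance low.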
More details are available at individual project web pages.