Research group within the Department of Computer Science (Informatik) at Universität der Bundeswehr München
Group members:
Head: Prof. Dr.-Ing. Markus Siegle
Current members: Dipl.-Math. Alexander Gouberman, Fabian Michel M.Sc., Dr. Ulrich Vogl (ext.)
Former members: Dr.-Ing. Amin Soltanieh, Dr.-Ing. Bharath Siva Kumar Tati, Dr.-Ing. Martin Riedl, Dr. Johann Schuster, Dr.-Ing. Kai Lampka, Dr.-Ing. Matthias Kuntz, Dr.-Ing. Joachim Meyer-Kayser.
Overview
The group develops techniques and tools for supporting the design and analysis of computer and communication systems by formal quantitative methods. The focus is on model-based performance and dependability evaluation of such systems, working with stochastic models which are specified with the help of stochastic process algebra or related formalisms. We are working on the specification and automatic verification of complex performability requirements.
Transient analysis of Markov processes by state space reduction with formal error bounds
Recently, we have developed new techniques for the aggregation-based approximate transient analysis of Markov chains. We showed how to bound the stepwise increment of the error (resp. the error growth rate in the continuous-time setting) and were able to characterise situations in which the aggregation error is zero [MS24_1, MS24_2]. Going beyond partition-based reduction schemes, we studied special aggregations based on the Arnoldi iteration, thereby pointing out cases where such aggregations are optimal [SMS25]. We also extended this line of work to continuous state spaces, calculating the approximate distribution of the transient workload of specific queues with one-sided Lévy input, such as the M/G/1 workload process, by a double discretization of time and space [MS25].
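As an illustration of the partition-based setting (a minimal sketch, not the exact method of [MS24_1, MS24_2]), the following Python fragment aggregates a small discrete-time Markov chain according to a given partition, propagates the aggregated distribution, and measures the resulting transient error in total variation distance; the chain, partition and horizon are made-up example data.

import numpy as np

# Hypothetical 4-state DTMC and a 2-block partition of its state space.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.3, 0.3, 0.4, 0.0],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.6, 0.4]])
pi0 = np.array([1.0, 0.0, 0.0, 0.0])
blocks = [[0, 1], [2, 3]]

def aggregate(P, pi, blocks):
    # Aggregated transition matrix; each block is weighted by the
    # conditional distribution of pi inside the block.
    A = np.zeros((len(blocks), len(blocks)))
    for i, Bi in enumerate(blocks):
        mass = pi[Bi].sum()
        w = pi[Bi] / mass if mass > 0 else np.ones(len(Bi)) / len(Bi)
        for j, Bj in enumerate(blocks):
            A[i, j] = w @ P[np.ix_(Bi, Bj)].sum(axis=1)
    return A

# Exact transient distribution vs. the aggregated approximation after n steps
# (the aggregation is fixed at time 0, which is what causes the error).
n, A = 10, aggregate(P, pi0, blocks)
pi_exact = pi0.copy()
alpha = np.array([pi0[B].sum() for B in blocks])
for _ in range(n):
    pi_exact, alpha = pi_exact @ P, alpha @ A

agg_exact = np.array([pi_exact[B].sum() for B in blocks])
print("aggregation error (total variation):", 0.5 * np.abs(agg_exact - alpha).sum())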
Predicting project runtimes with stochastic graph models
For more than a decade now, we have been working on analytic/numerical methods for project planning, where projects are modelled with the help of stochastic precedence graphs with general task runtime distributions. The origins of this work go back to the techniques implemented in the tool PEPP from the University of Erlangen [HDKQS95]. We have developed graph-theoretic techniques for the structural analysis of general precedence graphs, enabling us to identify minimal complex clusters, i.e. minimal-size subgraphs which cannot be broken down by serial or parallel reductions. Such clusters are translated into Markov chains, after which their execution time distributions can be calculated numerically with the help of a probabilistic model checker. The resulting distributions are then back-substituted into the overall precedence graph, enabling further numerical series-parallel reduction steps [VS17_1, VS17_2]. Our theory and tool are also capable of handling resource restrictions, for which we developed a heuristic algorithm that serializes parts of the precedence graph in such a way that resource conflicts are resolved. We have applied these techniques to multi-team PI planning (program increment planning) in the Scaled Agile Framework [VS23], showing that our method yields much more realistic runtime estimates than the well-known, typically over-optimistic, critical path methods.
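The two basic reduction steps can be sketched on discretised runtime densities; this is only an illustration (assuming independent task runtimes and made-up example distributions), not our actual tool:

import numpy as np

dt = 0.1                                   # width of the discretisation grid

def serial(p1, p2):
    # Serial reduction: the runtime of 'task 1 then task 2' is the sum of
    # the two runtimes, i.e. the convolution of their densities.
    return np.convolve(p1, p2) * dt

def parallel(p1, p2):
    # Parallel (AND-join) reduction: the runtime is the maximum of the two
    # runtimes; for independent tasks the CDF of the maximum is the product
    # of the individual CDFs.
    n = max(len(p1), len(p2))
    F1 = np.cumsum(np.pad(p1, (0, n - len(p1)))) * dt
    F2 = np.cumsum(np.pad(p2, (0, n - len(p2)))) * dt
    return np.diff(F1 * F2, prepend=0.0) / dt

# Hypothetical task runtime densities on the grid t = 0, dt, 2*dt, ...
t = np.arange(0, 40, dt)
task_a = np.exp(-t)                        # Exp(1)
task_b = 0.25 * t * np.exp(-0.5 * t)       # Erlang(2, rate 0.5)

p = serial(parallel(task_a, task_b), task_a)   # (A || B) followed by A
mean = np.sum(np.arange(len(p)) * dt * p) * dt
print("expected project runtime ~", round(mean, 2))

A critical-path estimate would instead add up expected values along the longest path, ignoring the fact that the maximum of two random runtimes is on average larger than the larger of the two means, which is one source of the over-optimism mentioned above.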
Model checking and model repair of stochastic systems
We have been working on the rate lifting problem for Stochastic Process Algebra, i.e. determining the unknown rates in the sequential processes of a compositional SPA model when the rates of the combined flat model are given [SS20, SS22, SS24]. This is an example of reverse engineering with applications, among others, in model repair. Earlier, we studied the problem of model repair, i.e. the question of how an existing system should be modified if it violates a given requirement. We considered labelled CTMC models and requirements formulated in the logic CSL. Our repair method does not alter the structure of the model, but works by reducing certain well-defined sets of transition rates by a common factor [TS15]. Both untimed and time-bounded requirements are covered [TS16, GST17]. Building on our model repair algorithms, we also work on controller synthesis, where a controller is constructed which, composed in parallel with a given plant, guarantees that the resulting system satisfies requirements formulated in the logic asCSL.
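A minimal toy illustration of this repair principle (a hypothetical two-state availability model with made-up rates and a steady-state requirement; the actual algorithms of [TS15, TS16, GST17] handle CSL requirements on arbitrary labelled CTMCs):

# Two-state CTMC: 'up' fails with rate lam, 'down' is repaired with rate mu.
lam, mu = 0.02, 0.5            # per hour (hypothetical values)
bound = 0.01                   # requirement: steady-state unavailability < 1%

unavail = lam / (lam + mu)
if unavail >= bound:
    # Repair without changing the model structure: scale the failure rate
    # by a common factor f < 1 so that the requirement holds,
    #   f*lam / (f*lam + mu) < bound   <=>   f < bound*mu / (lam*(1 - bound))
    f = bound * mu / (lam * (1 - bound))
    print("scale failure rate by any f <", round(f, 4))
    print("unavailability at the boundary:", f * lam / (f * lam + mu))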
Earlier, we developed the action-based logic aCSL [HKMS00] for model checking action-labelled Markov chains stemming from stochastic process algebra specifications. We then worked on logics which extend the combined expressivity of CSL and aCSL, allowing arbitrarily complex execution paths to be specified by regular expressions over actions and so-called tests. Two logics resulting from this work are sPDL [KS03, KS06] and asCSL [BCHKS04].
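For intuition, an asCSL-style property (in simplified notation, with hypothetical action names) could require that, with probability at least 0.95, a request is followed by an arbitrary number of retries and finally a serve action within 5 time units:

  P_{>=0.95}( (request ; retry* ; serve)^{<=5} )

Plain CSL can express the time-bounded reachability part of such a property, but not the constraint on the sequence of actions along the path.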
We were among the first to develop a model checker for probabilistic / stochastic systems: The tool ETMCC enables its users to check labelled CTMCs, where Continuous Stochastic Logic (CSL) and action-based CSL (aCSL) are used as the basic requirements specification languages [HKMS03]. ETMCC realises CTL-type model checking algorithms built on top of sophisticated numerical analysis techniques, based on sparse data structures.
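The numerical core behind checking a time-bounded CSL property is transient analysis of the CTMC in which the goal states are made absorbing. A self-contained sketch via uniformisation (hypothetical generator matrix, dense arithmetic instead of ETMCC's sparse data structures):

import numpy as np

def time_bounded_reach(Q, init, goal, t, eps=1e-10):
    # Probability of reaching a goal state within time t, computed by
    # uniformisation of the CTMC with all goal states made absorbing.
    Q = Q.astype(float).copy()
    Q[goal, :] = 0.0
    q = max(float(-np.diag(Q).min()), 1e-12)   # uniformisation rate
    P = np.eye(len(Q)) + Q / q                 # uniformised DTMC
    pi = init.astype(float)
    poisson = np.exp(-q * t)                   # Poisson(k; q*t) weight, k = 0
    total, k, result = poisson, 0, 0.0
    while total < 1.0 - eps:
        result += poisson * pi[goal].sum()
        pi = pi @ P
        k += 1
        poisson *= q * t / k
        total += poisson
    return result + poisson * pi[goal].sum()

# Hypothetical 3-state model: working -> degraded -> failed, with repair.
Q = np.array([[-0.1, 0.1, 0.0],
              [ 0.5, -0.7, 0.2],
              [ 0.0, 0.0, 0.0]])
print(time_bounded_reach(Q, np.array([1.0, 0.0, 0.0]), goal=[2], t=10.0))

The value computed here corresponds to the path probability in a property of the shape P_{>=p}(eventually within t: failed).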
LARES: Specification Language and Toolset for Reconfigurable Dependable Systems
To bridge the gap between high-level formalisms and formal modelling languages, we defined the LAnguage for REconfigurable dependable Systems (LARES) [WGRSS09, RS12, GRS13, GGRSS09, GRS17]. LARES provides means for hierarchical modelling: it separates the definition of structure from that of behaviour and introduces scopes to restrict the visibility of definitions. The formal semantics of LARES is defined either by means of flat labelled Markov chains or by stochastic process algebra (CASPA, see below). The LARES website is a resource for the LARES specification, tutorials on modelling and analysis using the LARES IDE, links to example models, and news on current developments.
Semantics of probabilistic / stochastic models
We studied weak bisimulation of Markov automata, therein introducing the notion of non-naively vanishing states [SS14]. In [SS13], we showed that the class of compact bisimilar probabilistic automata forms a lattice. We also developed necessary and sufficient criteria that guarantee scale-freeness of stochastic process algebra models with weighted immediate actions [SS12].
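As a simplified intuition behind scale-freeness (an illustrative reading, not the formal criteria of [SS12]): if a state offers immediate actions with weights w_1, ..., w_n, the induced branching probabilities

  p_i = w_i / (w_1 + ... + w_n) = (c * w_i) / (c * w_1 + ... + c * w_n)   for any c > 0,

so multiplying all weights in a state by a common constant leaves the branching behaviour unchanged; [SS12] gives necessary and sufficient criteria for when an SPA model with weighted immediate actions is scale-free in this sense.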
Representation and analysis of large Markov chains
In many areas of system design and analysis, there is the need to generate, manipulate and analyse very large state spaces. State space explosion is a serious problem when modelling parallel and distributed systems, because of the memory limitations of the available computing equipment.
We have made contributions to the symbolic representation of large labelled CTMCs, based on binary decision diagrams (BDD) and extensions thereof. We use multi-terminal BDDs (MTBDD) and partially shared zero-suppressed MTBDDs (pZDD) [S02, LS06a]. For structured models which are composed of interacting submodels, the symbolic approach leads to provably compact representations of huge state spaces [HKNPS03]. For arbitrary models without any particular structure, the so-called activity-local scheme [LS03, LS06a] generates the state graph and processes reward-related information in a highly runtime- and memory-efficient manner [LS14]. Numerical analysis methods have successfully been adapted to the ZDD data structure [LS06b].
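The compactness gain for structured models can be illustrated independently of the BDD machinery: for a composition of independent submodels (a simplification, since our models generally interact), the overall generator is the Kronecker sum of small component generators, so storing the components suffices. A sketch with hypothetical component generators, expanding the flat matrix only to check the claim:

import numpy as np
from functools import reduce

# Hypothetical local generators of three independent 2-state submodels.
components = [np.array([[-1.0, 1.0], [2.0, -2.0]]),
              np.array([[-0.5, 0.5], [0.3, -0.3]]),
              np.array([[-4.0, 4.0], [1.0, -1.0]])]

def kron_sum(mats):
    # Kronecker sum: sum over i of  I (x) ... (x) Q_i (x) ... (x) I
    size = int(np.prod([len(m) for m in mats]))
    total = np.zeros((size, size))
    for i, Qi in enumerate(mats):
        factors = [np.eye(len(m)) for m in mats]
        factors[i] = Qi
        total += reduce(np.kron, factors)
    return total

Q = kron_sum(components)        # 8x8 flat generator of the composed model
print(Q.shape, np.allclose(Q.sum(axis=1), 0.0))

Symbolic data structures such as MTBDDs exploit exactly this kind of regularity, which is why the composed representation stays small even when the flat state space explodes.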
We have developed several prototypical tools for BDD-based model generation, manipulation and analysis. Originally we used TIPPtool as a front end, but now our tool CASPA [KSW04] is a powerful stand-alone stochastic process algebra tool based entirely on symbolic data structures. One of our activities concerned the integration of symbolic techniques into the multi-formalism modelling tool Möbius, which led, among other results, to the implementation of an AFI (abstract functional interface) for state-level objects represented by ZDDs.
Projects / Cooperations
We conducted the DFG-sponsored project "Efficient Analysis Algorithms for Performability-Evaluation of Distributed Systems", where we developed novel data structures and algorithms for the analysis of Markovian models. This included a symbolic elimination algorithm for vanishing states, an efficient k-shortest-path algorithm and a new parallel multilevel method for numerical analysis. Another DFG-sponsored project was PerformVER, where we developed highly expressive logics for formally specifying complex performability properties of concurrent systems, devised efficient algorithms for symbolic model checking, and implemented them in prototype software tools. Prior to this, we had conducted two more DFG-sponsored projects: BDDana, which investigated efficient performance analysis on the basis of BDDs, and AutoVer, which investigated the automatic verification of quality-of-service requirements of distributed systems. In the past, we have cooperated, among others:
- with the Universities of Aachen, Dresden, Münster, Nijmegen, Saarbrücken and Twente, supported by the DFG-NWO-sponsored project ROCKS (completed)
- with the Technical University of Munich, supported by the DFG-sponsored project "Model-based Dependability Analysis of Complex Fault-Tolerant Systems" (completed)
- with the Universities of Aachen, Bonn, Nijmegen, Saarbrücken and Twente, supported by the DFG-NWO-sponsored project VOSS (completed)
- with the Universities of Birmingham and Edinburgh, a collaboration originally supported by the DAAD project StochVer (completed)
- with the University of Illinois at Urbana-Champaign and the University of Cape Town.