We focus on rigorous verification for embedded control systems, with a specialization in automated driving. Ensuring the safety and correctness of these cyber-physical systems is crucial given their increasing autonomy, complexity, and wide range of operating conditions. Verification challenges stem from the need to validate complex software, often including neural network components, against safety and functional specifications. Our research develops scalable, automated, and code-level verification frameworks that bridge formal methods and practical implementation.

Software Verification

We develop scalable code-level verification frameworks that directly check embedded controller implementations against safety specifications. This includes traditional Adaptive Cruise Control (ACC) software as well as neural network-based controllers. By combining bounded model checking (BMC) with compositional assume-guarantee reasoning, we verify closed-loop safety properties efficiently. The CoCoSaFe framework exemplifies this approach, verifying multiple controller types within minutes on industrial-scale codebases.
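To illustrate the underlying idea, the following sketch performs an explicit-state bounded check of a toy ACC loop: it unrolls the closed-loop dynamics for a fixed horizon and searches all lead-vehicle input sequences from a finite set for a safety violation. A real BMC backend would typically encode the unrolled transition relation symbolically for an SMT solver; the control law, constants, and input set here are illustrative assumptions, not the verified industrial code.

```python
# Explicit-state sketch of bounded model checking for a simplified ACC loop.
# All constants and the control law are illustrative assumptions.
from itertools import product

DT = 0.1        # control period [s]
MIN_GAP = 2.0   # safety property: gap must stay above this [m]

def acc_control(gap, rel_speed):
    """Toy proportional ACC law (hypothetical, for illustration only)."""
    desired_gap = 10.0
    accel = 0.5 * (gap - desired_gap) + 1.2 * rel_speed
    return max(-5.0, min(2.0, accel))  # actuator saturation

def step(state, lead_accel):
    """One discrete-time step of the closed loop (Euler discretization)."""
    gap, rel_speed = state
    ego_accel = acc_control(gap, rel_speed)
    gap += rel_speed * DT
    rel_speed += (lead_accel - ego_accel) * DT
    return gap, rel_speed

def bmc_safe(init, horizon, lead_inputs=(-3.0, 0.0, 1.0)):
    """Check the safety property on all input sequences up to `horizon` steps.
    Returns (True, None) if safe, else (False, counterexample_inputs)."""
    for seq in product(lead_inputs, repeat=horizon):
        state = init
        for u in seq:
            state = step(state, u)
            if state[0] < MIN_GAP:
                return False, seq   # counterexample: input sequence found
    return True, None

safe, cex = bmc_safe(init=(12.0, 0.0), horizon=4)
```

A returned counterexample input sequence plays the same role as the concrete traces a symbolic BMC tool extracts from a satisfying assignment.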

Model Checking Decision-Making Components

We employ automated behavior abstraction and formal model checking to analyze both traditional controllers and Artificial Intelligence (AI) components. For neural network controllers in model predictive control settings, we utilize controlled invariant sets combined with neural network verification to certify safety or produce counterexamples. This hybrid approach provides rigorous guarantees even for learned controllers operating in uncertain environments.
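A minimal sketch of the invariant-set idea, assuming a tiny hand-written ReLU controller and scalar dynamics: interval bound propagation (IBP) pushes the candidate invariant box through the network and the dynamics, and safety is certified if the resulting one-step image stays inside the box. The weights, dynamics, and box below are illustrative; dedicated neural network verifiers use much tighter relaxations than plain IBP.

```python
# Certify a box-shaped controlled invariant set for a closed loop with a
# small ReLU controller, via interval bound propagation (illustrative).

def affine_bounds(lo, hi, W, b):
    """Interval image of x -> W x + b given componentwise bounds lo <= x <= hi."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            l += w * xl if w >= 0 else w * xh
            h += w * xh if w >= 0 else w * xl
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Controller u = W2 * relu(W1 * x + b1) + b2 (this choice encodes u = -0.5 x)
W1, b1 = [[1.0], [-1.0]], [0.0, 0.0]
W2, b2 = [[-0.5, 0.5]], [0.0]

BOX = (-1.0, 1.0)  # candidate controlled invariant set for the scalar state

# Propagate the whole box through the network ...
lo, hi = affine_bounds([BOX[0]], [BOX[1]], W1, b1)
lo, hi = relu_bounds(lo, hi)
u_lo, u_hi = affine_bounds(lo, hi, W2, b2)

# ... and through the dynamics x' = 0.9 x + 0.1 u
next_lo = 0.9 * BOX[0] + 0.1 * u_lo[0]
next_hi = 0.9 * BOX[1] + 0.1 * u_hi[0]

certified = BOX[0] <= next_lo and next_hi <= BOX[1]
```

If containment fails, the violated bound points to the region of the box from which a concrete counterexample state can be searched for.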

Software Portability Checking for Various Hardware

To ensure safe deployment across diverse vehicle hardware configurations, we introduced an automatic portability verification framework. It formally models configuration-specific vehicle dynamics, sensing, and actuation, and computes safe sets within each operational design domain. Automated driving software is then checked for safe operation across target hardware setups, enabling rapid integration, testing, and over-the-air software updates with functional safety guarantees.
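A much-simplified sketch of the portability check, under the assumption that each hardware configuration is characterized by braking capability, actuation latency, and sensing range: the safe set reduces to a maximum safe speed (stop within what the sensors can perceive), and the software's assumed speed envelope is checked against it per configuration. All parameters are hypothetical.

```python
# Portability check sketch: per-configuration safe speed vs. the speed
# envelope the software assumes. Parameters are illustrative assumptions.
import math

def max_safe_speed(decel, latency, sensor_range):
    """Largest speed v with v*latency + v^2/(2*decel) <= sensor_range,
    i.e. the vehicle can come to a stop within its perceived range."""
    # positive root of v^2/(2*decel) + latency*v - sensor_range = 0
    return -decel * latency + math.sqrt((decel * latency) ** 2
                                        + 2.0 * decel * sensor_range)

# Hypothetical hardware configurations
CONFIGS = {
    "A": dict(decel=8.0, latency=0.2, sensor_range=150.0),
    "B": dict(decel=6.0, latency=0.3, sensor_range=100.0),
}

SW_MAX_SPEED = 33.3  # m/s, the speed envelope the software assumes

portable = {name: max_safe_speed(**cfg) >= SW_MAX_SPEED
            for name, cfg in CONFIGS.items()}
```

A configuration that fails the check either needs a restricted software envelope or is flagged as outside the operational design domain before deployment.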

Monitoring and Runtime Verification

We design lightweight runtime monitors for autonomous robotic navigation and embedded controllers. These monitors leverage offline-computed safety certificates and noise-aware progress/failure criteria to detect safety violations early and with low false alarm rates. The approach requires only filtered state estimates, enabling efficient runtime verification of black-box controllers in dynamic, uncertain environments while enhancing operational reliability without extensive onboard computation.
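The monitoring pattern can be sketched as follows, assuming an offline-computed certificate in the form of a sublevel set {x : V(x) <= LEVEL}: the monitor evaluates V on filtered state estimates only, inflates the threshold by a noise margin, and requires several consecutive exceedances before alarming to keep false alarm rates low. The certificate, margins, and trace below are hypothetical.

```python
# Lightweight runtime monitor sketch: evaluate an offline certificate V(x)
# on filtered estimates, with noise-aware debouncing. Constants illustrative.

LEVEL = 1.0          # certified safe sublevel set {x : V(x) <= LEVEL}
NOISE_MARGIN = 0.1   # threshold inflation accounting for estimation error
CONFIRM = 3          # consecutive exceedances required before alarming

def V(state):
    """Hypothetical quadratic certificate (e.g. fitted offline)."""
    x, y = state
    return x * x + 2.0 * y * y

class SafetyMonitor:
    def __init__(self):
        self.strikes = 0
        self.alarm = False

    def update(self, estimate):
        """Feed one filtered state estimate; returns the alarm flag."""
        if V(estimate) > LEVEL - NOISE_MARGIN:
            self.strikes += 1     # potential violation, not yet confirmed
        else:
            self.strikes = 0      # back inside the certified set
        if self.strikes >= CONFIRM:
            self.alarm = True
        return self.alarm

mon = SafetyMonitor()
trace = [(0.1, 0.1), (0.9, 0.3), (0.95, 0.3), (0.2, 0.1),
         (1.0, 0.5), (1.0, 0.5), (1.0, 0.5)]
alarms = [mon.update(s) for s in trace]
```

The per-step cost is a single certificate evaluation, which is what makes this viable on embedded hardware without extensive onboard computation.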

Learning-Based Falsification

We developed a learning-based falsification framework that combines adversarial reinforcement learning agents with model-based falsifiers to efficiently generate challenging scenarios that violate safety specifications. This hybrid method accelerates falsification in simulation, uncovering non-trivial safety violations for controllers such as adaptive cruise control, surpassing purely learning- or model-based methods.
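The falsification loop can be sketched with a simple random search over scenario parameters as a stand-in for the adversarial reinforcement learning agent: each candidate scenario is simulated, its robustness (here, the minimum gap) is scored, and the search keeps the most violating scenario found. The ACC-style scenario, dynamics, and parameter ranges are illustrative assumptions.

```python
# Falsification sketch: minimize the robustness (min gap) of a braking
# scenario over its parameters. Random search stands in for the RL agent;
# the scenario model and all constants are illustrative.
import random

DT, HORIZON = 0.1, 80          # 8 s simulated at 10 Hz
REACTION, EGO_DECEL = 0.5, 6.0  # ego reaction time [s] and braking [m/s^2]

def min_gap(g0, lead_decel):
    """Simulate a hard-braking lead vehicle and a fixed-reaction ego;
    return the minimum gap over the horizon (negative means collision)."""
    lead_pos, lead_v = g0, 25.0
    ego_pos, ego_v = 0.0, 25.0
    gap = g0
    for k in range(HORIZON):
        t = k * DT
        lead_v = max(0.0, lead_v - lead_decel * DT)
        ego_a = EGO_DECEL if t >= REACTION else 0.0
        ego_v = max(0.0, ego_v - ego_a * DT)
        lead_pos += lead_v * DT
        ego_pos += ego_v * DT
        gap = min(gap, lead_pos - ego_pos)
    return gap

def falsify(trials=200, seed=0):
    """Search initial gap and lead braking strength for the most
    violating scenario; returns (robustness, g0, lead_decel)."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        g0 = rng.uniform(5.0, 30.0)
        a = rng.uniform(2.0, 9.0)
        rob = min_gap(g0, a)
        if best is None or rob < best[0]:
            best = (rob, g0, a)
    return best

rob, g0, a = falsify()
violation_found = rob < 0.0
```

In the actual framework, the learned agent proposes scenarios far more sample-efficiently than blind search, and the model-based falsifier refines them toward concrete specification violations.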


Participants:

Prof. Dr.-Ing. Vladislav Nenchev

Dipl.-Ing. Prodromos Sotiriadis


Collaboration partners:

University of York, UK

University of Padova, IT

BMW