What it is

The focus of the seminar series is to build a bridge between research groups in Kaiserslautern with interests in formal verification, machine learning, distributed systems, and embedded systems, oriented towards the topic of certified verification. The past decades have seen the rise of powerful methods in formal verification and machine learning, including SAT and SMT solvers, deep neural networks, program synthesis, and property-directed reachability (IC3), to mention a few. Such techniques, as well as the optimizations applied to increase their scalability and applicability, often become so complex that bugs creep into their implementations. We are interested in techniques that can ensure the correctness of those implementations, including certified solvers, certified verifiers, and type systems, among others. The aim of the seminar series is to exchange ideas that could lead to the development of such techniques.

Involved groups

When and where

  • Online via Zoom
  • Date and time: Thursdays 13:00
  • Check the schedule below for exact dates.
  • A longer-term plan can be found in this Google Calendar.


  • To get the Zoom link for upcoming talks, please subscribe to the seminar's mailing list as follows:
  • Send an email to sympa@informatik.uni-kl.de with the subject "subscribe arg-cervera Yourfirstname Yourlastname" and an empty message body.
  • If you have any questions, feel free to contact Pascal Bergsträßer (bergstraesser[at]cs.uni-kl.de).

Next talk

  • 03.02.22 at 13:00: Talk by Roland Meyer
    • Title: BMC with Memory Models as Input
    • Abstract:
      Dartagnan is a bounded model checker for concurrent programs under weak memory models, developed in collaboration with Thomas Haas and Hernan Ponce de León. Its distinguishing feature is that the memory model is not hardcoded inside the tool but taken as part of the input. The input format is CAT, by now the standard language for memory models, which has been used to formalize x86/TSO, ARMv7, ARMv8, Power, C/C++, and Linux kernel concurrency primitives. BMC with memory models as input is challenging, because one has to encode into SMT not only the program but also its semantics as defined by the memory model. The talk will focus on the tricks we developed to keep this encoding compact.

Past talks

  • 27.01.22: Talk by Laura Kovács
    • Title: Getting Saturated with Induction
  • 09.12.21: Talk by Florin Manea
    • Title: Matching Patterns with Variables: A General Framework for Text Processing
  • 02.12.21: Talk by Artur Jeż
    • Title: Recompression-based algorithm for word equations
  • 25.11.21: Talk by Julian Gutierrez
    • Title: Games, Logic, and Formal Verification
  • 28.10.21: Talk by Alexandra Silva
    • Title: Concurrent NetKAT: Modeling and analyzing stateful, concurrent networks
  • 27.05.21: Talk by Yunwen Lei
    • Title: Statistical Learning by Stochastic Gradient Descent
  • 20.05.21: Talk by Loris D’Antoni
    • Title: Programmable Program Synthesis
  • 22.04.21: Talk by Pablo Barceló
    • Title: Explainability Queries for ML Models and its Connections with Data Management Problems
  • 25.03.21: Talk by Oliver Markgraf
    • Title: Learning in Reactive Synthesis
  • 18.03.21: Talk by Chih-Hong Cheng
    • Title: Verification and Validation of DNN-based Autonomy Perception
  • 11.03.21: Talk by Ekaterina Komendantskaya
    • Title: Continuous Verification of Machine Learning: a Declarative Programming Approach
  • 25.02.21: Talk by Yutaka Nagashima
    • Title: Artificial Intelligence for Inductive Theorem Proving
  • 11.02.21: Talk by Dave Parker
    • Title: Verification with Stochastic Games: Advances and Challenges