What it is

The focus of the seminar series is to build a bridge between research groups in Kaiserslautern with interests in formal verification, machine learning, distributed systems, and embedded systems, oriented towards the topic of certified verification. The past decades have seen the rise of powerful methods in formal verification and machine learning, including SAT and SMT solvers, deep neural networks, program synthesis, and property-directed reachability (IC3), to name a few. Such techniques, as well as the optimizations applied to increase their scalability and applicability, often become so complex that bugs creep into their implementations. We are interested in techniques that can ensure the correctness of these implementations, including certified solvers, certified verifiers, and type systems, among others; a small illustrative sketch of the certificate-checking idea follows below. The aim of the seminar series is to exchange ideas that could lead to the development of such techniques.
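
To make the certificate-checking idea concrete, here is a minimal, hypothetical sketch in Python (all names are illustrative, not any particular tool's API): instead of trusting a complex SAT solver, one trusts only a small checker that validates a resolution refutation emitted by the solver.

    # Minimal sketch of certificate checking behind "certified solvers":
    # the solver is untrusted; only this tiny checker must be correct.
    # All names are illustrative, not any particular tool's API.

    # A clause is a frozenset of non-zero ints (negative = negated literal).
    Clause = frozenset

    def resolve(c1, c2, pivot):
        """Resolve c1 (containing pivot) with c2 (containing -pivot)."""
        assert pivot in c1 and -pivot in c2, "pivot must occur with opposite signs"
        return Clause((c1 - {pivot}) | (c2 - {-pivot}))

    def check_refutation(cnf, proof):
        """Each proof step (i, j, pivot) resolves clauses i and j from the
        growing clause list; the proof is valid iff it derives the empty clause."""
        clauses = list(cnf)
        for i, j, pivot in proof:
            clauses.append(resolve(clauses[i], clauses[j], pivot))
        return Clause() in clauses

    # Example: {x1,x2}, {-x1,x2}, {x1,-x2}, {-x1,-x2} is unsatisfiable.
    cnf = [Clause({1, 2}), Clause({-1, 2}), Clause({1, -2}), Clause({-1, -2})]
    proof = [(0, 1, 1),   # derives {2}  (index 4)
             (2, 3, 1),   # derives {-2} (index 5)
             (4, 5, 2)]   # derives the empty clause
    assert check_refutation(cnf, proof)

The same division of labour underlies certified verifiers in general: the complex tool produces a proof object, and a small, independently trustworthy checker validates it.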

Involved groups

When and where

  • Check the announcements below.

Interested?

  • To receive updates, please subscribe to the seminar’s mailing list as follows:
    • Write an email to sympa@informatik.uni-kl.de with the subject “subscribe arg-cervera Yourfirstname Yourlastname” and an empty message body.
  • If you have any questions, feel free to contact Pascal Bergsträßer (bergstraesser[at]cs.uni-kl.de).

Next talk

  • 24.06.2024 at 10:00 in room 44-380
    • Speaker: Julian Parsert
    • Title: Neural Termination Analysis
    • Abstract:
      We introduce a novel approach to the automated termination analysis of computer programs: we use neural networks to represent ranking functions. Ranking functions map program states to values that are bounded from below and decrease as the program runs; the existence of a ranking function proves that the program terminates. We train a neural network from sampled execution traces of a program so that the network’s output decreases along the traces; then, we use symbolic reasoning to formally verify that it generalises to all possible executions. Upon an affirmative answer we obtain a formal certificate of termination for the program, which we call a neural ranking function. We demonstrate that, thanks to the ability of neural networks to represent nonlinear functions, our method succeeds on programs that are beyond the reach of state-of-the-art tools, including programs that use disjunctions in their loop conditions and programs that include nonlinear expressions. (A small illustrative sketch of the training step appears after this announcement.)
    • Bio:
      I recently finished my PhD at the University of Oxford under the supervision of Daniel Kroening and Tom Melham with a thesis titled “Machine Learning for Function Synthesis”. Generally, I am interested in automated reasoning and computer-aided verification of mathematical proofs, software, and hardware (in no particular order). Apart from my PhD, I worked as a research associate at the University of Edinburgh and the University of Innsbruck, where I focused on computational logic, theorem proving, and term rewriting. I lived in Innsbruck, Austria for most of my life and obtained my undergraduate and Master’s degrees at the University of Innsbruck.
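
As a companion to the abstract above, the following minimal sketch (assuming PyTorch; the toy loop, network shape, and margin are illustrative assumptions, not the speaker's implementation) shows the training step: fitting a network whose output decreases along sampled execution traces. The symbolic step that verifies generalisation to all executions, and the bounded-from-below condition on ranking functions, are omitted here.

    # Sketch: train a candidate neural ranking function V on sampled traces.
    # Toy program, network shape, and margin are illustrative assumptions.
    import torch
    import torch.nn as nn

    def sample_traces(n=100, steps=20):
        """Sample (state, successor) pairs from a toy loop:
        while x > 0: (x, y) = (x + y, y - 1)."""
        pairs = []
        for _ in range(n):
            x = float(torch.randint(1, 50, (1,)))
            y = float(torch.randint(-5, 5, (1,)))
            for _ in range(steps):
                if x <= 0:
                    break
                nxt = (x + y, y - 1)
                pairs.append(((x, y), nxt))
                x, y = nxt
        s = torch.tensor([p[0] for p in pairs])
        s_next = torch.tensor([p[1] for p in pairs])
        return s, s_next

    # Candidate neural ranking function V: R^2 -> R.
    V = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
    opt = torch.optim.Adam(V.parameters(), lr=1e-2)

    s, s_next = sample_traces()
    for _ in range(500):
        opt.zero_grad()
        # Hinge loss: penalise pairs where V fails to drop by the margin 1.
        loss = torch.relu(V(s_next) - V(s) + 1.0).mean()
        loss.backward()
        opt.step()

    # If the loss reaches 0, V decreases along all sampled steps; symbolic
    # reasoning (e.g. an SMT query) would then have to verify the decrease
    # for all reachable states to obtain an actual termination certificate.
    print("final loss:", loss.item())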

Past talks

  • 19.04.2024: Session consisting of 3 talks:
    • Talk 1: Andreea Costea
      • Title: Patch Space Exploration using Static Analysis Feedback
    • Talk 2: Matthew Hague
      • Title: Parikh’s Theorem Made Symbolic
    • Talk 3: Martin Lester
      • Title: Easily encode problems in SAT With This One Simple Trick: Declarative programming in C using CBMC
  • 03.02.2022: Talk by Roland Meyer
    • Title: BMC with Memory Models as Input
  • 27.01.2022: Talk by Laura Kovács
    • Title: Getting Saturated with Induction
  • 09.12.2021: Talk by Florin Manea
    • Title: Matching Patterns with Variables: A General Framework for Text Processing
  • 02.12.2021: Talk by Artur Jeż
    • Title: Recompression-based algorithm for word equations
  • 25.11.2021: Talk by Julian Gutierrez
    • Title: Games, Logic, and Formal Verification
  • 28.10.2021: Talk by Alexandra Silva
    • Title: Concurrent NetKAT: Modeling and analyzing stateful, concurrent networks
  • 27.05.2021: Talk by Yunwen Lei
    • Title: Statistical Learning by Stochastic Gradient Descent
  • 20.05.2021: Talk by Loris D’Antoni
    • Title: Programmable Program Synthesis
  • 22.04.2021: Talk by Pablo Barceló
    • Title: Explainability Queries for ML Models and its Connections with Data Management Problems
  • 25.03.2021: Talk by Oliver Markgraf
    • Title: Learning in Reactive Synthesis
  • 18.03.2021: Talk by Chih-Hong Cheng
    • Title: Verification and Validation of DNN-based Autonomy Perception
  • 11.03.2021: Talk by Ekaterina Komendantskaya
    • Title: Continuous Verification of Machine Learning: a Declarative Programming Approach
  • 25.02.2021: Talk by Yutaka Nagashima
    • Title: Artificial Intelligence for Inductive Theorem Proving
  • 11.02.2021: Talk by Dave Parker
    • Title: Verification with Stochastic Games: Advances and Challenges