Workshop Program

Wednesday, April 9, 2025 (without PIs)

LaTeX Workshop
Dinner + Board Game Night
First Shuttle Bus to Burghausen
Second Shuttle Bus to Burghausen

Thursday, April 10, 2025

Shuttle Bus to Seminar Location
Paper Review Session (without PIs)
Organizer: Vincent
Lunch
Talks by ConVeY Doctoral Researchers
Attention Learning is Needed to Efficiently Learn Parity Function
Speaker: Yaomengxi Han
Abstract

Transformers, with their attention mechanisms, have emerged as the state-of-the-art architectures for sequence modeling and empirically outperform feed-forward neural networks (FFNNs) across many fields, such as natural language processing and computer vision. However, their generalization ability, particularly for low-sensitivity functions, remains less studied. We bridge this gap by analyzing transformers on the $k$-parity problem. Daniely and Malach (NeurIPS 2020) show that FFNNs with one hidden layer and $O(nk^7 \log k)$ parameters can learn $k$-parity, where the input length $n$ is typically much larger than $k$. In this paper, we prove that FFNNs require at least $\Omega(n)$ parameters to learn $k$-parity, while transformers require only $O(k)$ parameters, far below the theoretical lower bound for FFNNs. We further prove that this parameter efficiency cannot be achieved with fixed attention heads. Our work establishes transformers as theoretically superior to FFNNs in learning parity functions, showing how their attention mechanisms enable parameter-efficient generalization on functions with low sensitivity.
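For concreteness, the $k$-parity target studied in the talk can be sketched as follows, assuming the standard definition: a fixed (hidden) subset $S$ of $k$ coordinates determines the label of an $n$-bit input as the XOR of the bits indexed by $S$. The particular values of $n$, $k$, and $S$ below are illustrative, not taken from the talk.

```python
def k_parity(x, S):
    """Return the parity (0 or 1) of the bits of x indexed by the set S."""
    return sum(x[i] for i in S) % 2

# Illustrative instance: n = 8, k = 3, relevant coordinates S = {1, 4, 6}.
# A learner sees only (x, label) pairs and must identify S implicitly.
x = [1, 0, 1, 1, 0, 0, 1, 0]
S = [1, 4, 6]
print(k_parity(x, S))  # XOR of x[1], x[4], x[6] = 0 ^ 0 ^ 1 = 1
```

The difficulty is that $S$ is unknown and $n \gg k$, so most input coordinates are irrelevant; the cited bounds compare how many parameters each architecture needs to single out the $k$ relevant ones.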

A Transferability Study of Interpolation-Based Model Checking
Speaker: Marek Jankola
Abstract

Assuring the correctness of computing systems is fundamental to our society and economy, and formal verification is a class of techniques that approaches this issue with mathematical rigor. Researchers have invented numerous algorithms to automatically prove whether a computational model, e.g., a software program or a hardware digital circuit, satisfies its specification. In the past two decades, Craig interpolation has been widely used in both hardware and software verification. Despite the similarities in the theoretical foundations of hardware and software verification, previous works usually evaluate interpolation-based algorithms on only one type of verification task (e.g., either circuits or programs), so the conclusions of these studies do not necessarily transfer to other types of verification tasks. To investigate the transferability of research conclusions from hardware to software, we adopt two performant approaches to interpolation-based hardware model checking, (1) Interpolation-Sequence-Based Model Checking and (2) Intertwined Forward-Backward Reachability Analysis Using Interpolants, for software verification. We implement the algorithms proposed in the two publications in the software verifier CPAchecker, because it already contains a software-verification adaptation of the first interpolation-based algorithm for hardware model checking from 2003, which both publications use as a comparison baseline. To assess whether the claims in the two publications transfer to software verification, we conduct an extensive experiment on the largest publicly available suite of safety-verification tasks for the programming language C. Our experimental results show that the important characteristics of the two approaches for hardware model checking are transferable to software verification, and that cross-disciplinary algorithm adoption is beneficial, as the approaches adopted from hardware model checking were able to tackle tasks unsolvable by existing methods.
This work consolidates the knowledge in hardware and software verification and provides open-source implementations to improve the understanding of the compared interpolation-based algorithms.

Short break
General Assembly
Coffee
General Assembly (continued)
Poster Session
Shuttle Bus to Hotel Post
Social Event in Burghausen
Dinner in Burghausen

Friday, April 11, 2025

Shuttle to Raitenhaslach (bring your luggage!)
Tutorial: Learning Theory – Theoretical Guarantees for Machine Learning (Part I)
Speaker: Debarghya Ghoshdastidar
Coffee Break
Tutorial: Learning Theory – Theoretical Guarantees for Machine Learning (Part II)
Speaker: Debarghya Ghoshdastidar
Report on Collaboration Progress
Lunch
Focus Groups: Challenges in Academia
Workshop: Planning the Next Retreat
Coffee Break
Presentation of Results from the Workshop: Planning the Next Retreat
Buffer
Shuttle to Burghausen Train Station
Train to Munich