TU Wien Informatics

AI Festival: Research Day

  • 2025-12-01
  • WWTF
  • Machine Learning
  • Public Outreach

The first day of our AI Festival showcases groundbreaking AI research, featuring leading global and local experts who share their latest insights.

Picture: local_doctor / stock.adobe.com


Day 1: Research

Join us on December 1 at TU Wien Informatics for the AI Festival—a three-day celebration of ideas, discovery, and dialogue on the present and future of Artificial Intelligence.

The first day of the festival will spotlight the latest breakthroughs in AI research. Renowned international and local researchers will share their work and insights through keynote talks and panel discussions on emerging trends. Topics will include neurosymbolic AI, large language models, AI in science, explainable AI, and automated problem solving and decision making. It will be a day of deep exploration into the cutting edge of what AI can do—and where it’s going next.

The AI Festival is co-organized by TU Wien, the Center for Artificial Intelligence and Machine Learning (CAIML), the Cluster of Excellence Bilateral AI (BILAI), funded by the Austrian Science Fund (FWF), the Vienna Science and Technology Fund (WWTF), and TU Austria.

Registration

Register for Day 1: Research (Mon, Dec 1)

Program

9:30–10:00 Opening with Jens Schneider, Rector of TU Wien, Gerti Kappel, Dean of the Faculty of Informatics at TU Wien, and Michael Stampfer, Managing Director of the Vienna Science and Technology Fund (WWTF). Moderated by the festival's initiator, Nysret Musliu (TU Wien, BILAI)
10:00–11:00 Keynote by Pascal Van Hentenryck (Georgia Tech): AI for Engineering Optimization. Session chair: Nysret Musliu (TU Wien, BILAI)
11:00–11:30 Coffee Break
11:30–12:15 Invited Talk by Adam Gosztolai (MedUni Wien): Discovering and Modelling Consistent Brain Computations Across Individuals. Session chair: Clemens Heitzinger (TU Wien)
12:15–13:15 Lunch Break & Networking
13:15–14:00 Invited Talk by Svitlana Vakulenko (Vienna University of Economics and Business): Knowledge Representation Learning for Large Language Models. Session chair: Magdalena Ortiz (TU Wien)
14:00–15:00 Keynote by Michael Bronstein (University of Oxford and Scientific Director of the ÖAW-hosted AITHYRA Institute): AI for Biology 2.0. Session chair: Emanuel Sallinger (TU Wien)
15:00–15:30 Coffee Break
15:30–16:30 Panel Discussion: The Future of AI, with Michael Bronstein (University of Oxford and Scientific Director of the ÖAW-hosted AITHYRA Institute), Marta Sabou (Vienna University of Economics and Business, BILAI), Claudia Plant (University of Vienna), and Pascal Van Hentenryck (Georgia Tech). Moderated by Thomas Eiter (TU Wien, BILAI)

Our Speakers

Michael Bronstein

Michael Bronstein is the founding Scientific Director of AITHYRA, which is hosted by the Austrian Academy of Sciences (ÖAW) and supported by a generous donation from the Boehringer Ingelheim Foundation. He is the Google DeepMind Professor of AI at the University of Oxford and Honorary Professor at TU Wien. Previously, he was Head of Graph Learning Research at Twitter and a Professor at Imperial College London, and he held visiting appointments at Stanford, MIT, and Harvard. He developed geometric deep learning methods and pioneered their applications to biochemistry and structural biology, including protein and small-molecule design. His distinctions include the EPSRC Turing AI World-Leading Research Fellowship, the Royal Society Wolfson Research Merit Award, and the Royal Academy of Engineering Silver Medal, alongside multiple ERC, Google, and Amazon Research Awards. He is a member of Academia Europaea, a Fellow of IEEE, IAPR, and BCS, an ELLIS Fellow, an ACM Distinguished Speaker, and a World Economic Forum Young Scientist. Beyond academia, Michael is a serial entrepreneur and founder of several startups, including Novafora, Invision (acquired by Intel), Videocites, and Fabula AI (acquired by Twitter). He is Chief Scientist-in-Residence at VantAI and an advisor to biotech companies such as Relation Therapeutics and Recursion Pharmaceuticals. When off duty, he can often be found on horseback or at the opera.

Thomas Eiter

Thomas Eiter is the Head of the Institute for Logic and Computation and of the Research Unit Knowledge-Based Systems at TU Wien Informatics. He has worked in various fields of Computer Science and AI, with a focus on knowledge representation and reasoning. He is a Fellow of the ACM, of the European Association for Artificial Intelligence (EurAI), and of the Asia-Pacific Artificial Intelligence Association (AAIA), as well as a member of the Austrian Academy of Sciences and of Academia Europaea (London). Eiter has served on various boards, steering bodies, and conference committees throughout his career. He is the current president of the Association for Logic Programming and past president of KR Inc.

Adam Gosztolai

Adam Gosztolai is an Assistant Professor and research group leader of the “Dynamics of Neural Systems Laboratory” at the AI Institute of the Medical University of Vienna and a research affiliate at the Department of Cognitive Sciences at the Massachusetts Institute of Technology (MIT). He studied engineering and mathematics at University College London and the University of Cambridge and obtained his PhD in mathematics from Imperial College London. Following his PhD, Adam conducted postdoctoral research at the École Polytechnique Fédérale de Lausanne (EPFL) in the fields of computational neuroscience and machine learning. For his research, he was awarded a Human Frontiers Science Foundation Fellowship, an ERC Starting Grant, and a WWTF Vienna Research Group grant. In his research, Adam studies the dynamical processes encoded in the activity of a large number of neurons in the brain to distil fundamental principles of how these collective dynamics are linked to neural processes such as cognition and motor control.

Pascal Van Hentenryck

Pascal Van Hentenryck holds the A. Russell Chandler III Chair and is a Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. He is the director of the NSF National AI Institute for Advances in Optimization and the director of Tech AI, the AI hub at Georgia Tech. He is a fellow of AAAI and INFORMS, the recipient of two honorary doctoral degrees, and has received numerous research and teaching awards. He has written seven books and over 300 articles, with an h-index of 78 and nearly 30,000 citations. He is a founding father of constraint programming, a technology widely used for scheduling and routing in manufacturing, supply chains, logistics, and other applications, and a pioneer in AI for Engineering. He has developed several optimization systems that have been in use in industry for decades, and his research has been successfully transferred to industry through numerous projects. Van Hentenryck has given plenary talks at almost all the major conferences in AI, Operations Research, Applied Mathematics, Industrial Engineering, and Mathematical Programming, and his work has been featured in prominent news venues.

Claudia Plant

Claudia Plant is a professor and head of the Data Mining and Machine Learning research group at the Faculty of Computer Science, University of Vienna. Her research group focuses on new methods for exploratory data mining, e.g., clustering, anomaly detection, graph mining, and matrix factorization. Many approaches relate unsupervised learning to data compression: the better the discovered patterns compress the data, the more information has been learned. Other methods rely on finding statistically independent patterns or multiple non-redundant solutions, on ensemble learning, or on nature-inspired concepts such as synchronization. Indexing techniques and methods for parallel hardware support exploring massive data. Claudia Plant has co-authored over 150 peer-reviewed publications, among them more than 30 contributions to the top-level data mining conferences KDD and ICDM and 4 Best Paper Awards. Her papers on scalability aspects have appeared at SIGMOD and ICDE, and the results of interdisciplinary projects in leading application-oriented journals such as Bioinformatics, Cerebral Cortex, and Water Research.

Marta Sabou

Marta Sabou is a Professor for Information Systems and Business Engineering at the Department for Information Systems and Operations Management at the Vienna University of Economics and Business (WU Wien). Prior to this, she was an FWF Elise Richter Fellow at TU Wien, where she led the Semantic Systems Research Lab, which performs foundational and applied research in the area of information systems enabled by semantic (web) technologies. She has also held positions as Research Fellow at the Knowledge Media Institute (Open University, UK), Assistant Professor at the Department of New Media Technology (MODUL University, AT), and Key Expert in Semantic Technologies (Siemens). Her work is situated at the confluence of the Semantic Web and Human Computation research areas. She is an accomplished academic (over 100 peer-reviewed papers, h-index 45) and takes an active role in the Semantic Web research community. Marta Sabou is a Key Researcher in the FWF Cluster of Excellence Bilateral AI (BILAI), and she co-coordinates the WWTF Vienna Doctoral College on Digital Humanism.

Svitlana Vakulenko

Svitlana Vakulenko is an Assistant Professor at the Vienna University of Economics and Business (WU), at the Institute for Data, Process and Knowledge Management. She leads the newly established Vienna Research Group on Knowledge Representation Learning for Large Language Models, which studies novel approaches for LLMs to organise and access textual information sources. She obtained her PhD degree at TU Wien in 2019 and spent several years as a postdoc at the University of Amsterdam and as a Machine Learning Researcher at Amazon AGI Barcelona.

Abstracts

Pascal Van Hentenryck: AI for Engineering Optimization

In many industry settings, the same optimization problem is solved repeatedly for instances taken from a distribution that can be learned or forecasted. Indeed, such parametric optimization problems are ubiquitous in applications over complex infrastructures such as electrical power grids, supply chains, manufacturing, and transportation networks. The scale and complexity of these applications have grown significantly in recent years, challenging traditional optimization approaches. This talk studies how to speed up these parametric optimization problems to meet real-time constraints present in many applications. It first reviews the concept of optimization proxies that learn the input/output mappings of parametric optimization problems, computing near-optimal feasible solutions and providing quality guarantees. The talk also presents how to “learn to optimize” highly complex optimization problems, fusing optimization methodologies with supervised learning and reinforcement learning. The methodologies are highlighted on industrial problems in grid optimization, end-to-end supply chains, logistics, and transportation systems. They reveal beautiful connections between machine learning and optimization, leveraging fundamental theoretical results to push the practice of optimization.
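The core idea of an optimization proxy, as described above, is to learn the mapping from instance parameters to near-optimal solutions so that repeated instances can be answered without solving from scratch. The following is a minimal, illustrative sketch only, not code from the talk: the toy parametric problem, the nearest-neighbour "proxy", and the feasibility-repair step are all assumptions chosen for brevity.

```python
import numpy as np

# Toy parametric problem (illustrative only):
#   minimize ||x - c||^2  subject to  x >= 0,
# whose exact solution is the projection max(c, 0).
def solve_exact(c):
    return np.maximum(c, 0.0)

# "Training": solve a sample of instances offline, as a real proxy
# would be trained on solved instances of the parametric problem.
rng = np.random.default_rng(0)
train_params = rng.uniform(-1, 1, size=(500, 4))
train_solutions = np.array([solve_exact(c) for c in train_params])

def proxy(c):
    # Look up the solution of the closest training instance, then
    # repair it by clipping so the answer stays feasible (x >= 0).
    i = np.argmin(np.linalg.norm(train_params - c, axis=1))
    return np.maximum(train_solutions[i], 0.0)

# At deployment, the proxy answers a fresh instance without re-solving.
c_new = rng.uniform(-1, 1, size=4)
err = np.linalg.norm(proxy(c_new) - solve_exact(c_new))
print(f"proxy error on a new instance: {err:.3f}")
```

In practice the lookup is replaced by a trained neural network and the repair step by problem-specific feasibility restoration, which is where the quality guarantees mentioned in the abstract come in.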

Adam Gosztolai: Discovering and Modelling Consistent Brain Computations Across Individuals

It is increasingly recognised that the computations in the brain can be understood based on the theory of dynamical systems formed by the activity of large neural populations. Moreover, several works have observed that dominant dynamical patterns of computation are highly preserved across animals performing similar tasks. In my talk, I will argue that these preserved dynamical patterns arise from the existence of invariances—conserved quantities and symmetries in population dynamics. I will then describe our efforts to mathematically formalise and computationally capture these invariances from the geometric activity of neural populations. Specifically, in the first part of my talk I will discuss vector field descriptions of neural dynamics, highlighting MARBLE, a geometric deep learning method that allows finding consistent latent representations across neural recordings. In the second part, I will highlight current work to formulate a data-driven and predictive model for learning invariances.

Svitlana Vakulenko: Knowledge Representation Learning for Large Language Models

The ability of Large Language Models (LLMs) to generate contextually relevant natural language responses is truly impressive, and a growing number of people use them regularly to address their information needs. However, since LLMs are parametric models, unlike databases, they are not designed to reliably store data. A common solution to this limitation is to couple an LLM with an actual database or an information retrieval system, e.g., a Web search engine, so that the model can use its results as input. This approach is called Retrieval-Augmented Generation (RAG). State-of-the-art RAG systems use dense retrieval models, which embed queries and documents into a shared vector space for similarity-based search. However, they also have an important theoretical bottleneck in their representational power. Our recent experiments demonstrate the power of alternative generative retrieval models that overcome the limitations of dense retrieval but fall short in more complex scenarios, calling for new hybrid approaches.
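The retrieval step described above — embedding queries and documents into a shared vector space and searching by similarity — can be sketched in a few lines. This is an illustrative toy only: the corpus, the bag-of-words "embedder", and the `retrieve` function are assumptions standing in for the learned dense encoders and vector indexes used in real RAG systems.

```python
import numpy as np

# Toy corpus; a real system would embed passages with a trained dense encoder.
docs = [
    "LLMs are parametric models and do not reliably store data",
    "dense retrieval embeds queries and documents in one vector space",
    "vienna is the capital of austria",
]

vocab = sorted({w for d in docs for w in d.split()})

def embed(text):
    # Bag-of-words counts over a shared vocabulary, L2-normalised so that
    # a dot product equals cosine similarity in the shared vector space.
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            v[vocab.index(w)] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query, k=1):
    # Similarity-based search: score every document against the query
    # embedding and return the k highest-scoring passages.
    scores = doc_vecs @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

# In a RAG pipeline, the retrieved passages are placed into the LLM prompt
# as grounding context before generation.
print(retrieve("how does dense retrieval work")[0])
```

The representational bottleneck mentioned in the abstract concerns exactly this setup: a single fixed-size vector per query and document limits which relevance relations the dot product can express.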

Curious about our other news? Subscribe to our news feed, calendar, or newsletter, or follow us on social media.