
Inaugural Lectures 2024

  • 2024-11-28
  • Inaugural Lecture
  • Faculty
  • Community

On October 15, we had the pleasure of officially welcoming our new Professors Magdalena Ortiz, Emanuel Sallinger, and Dominique Schröder to the faculty at their Inaugural Lectures.

fLtR: Dominique Schröder, Rector Jens Schneider, Magdalena Ortiz, Dean Gerti Kappel, Emanuel Sallinger
Picture: Amélie Chapalain / TU Wien Informatics

About

On October 15, we had the pleasure of welcoming three new Professors to our faculty: Magdalena Ortiz, Emanuel Sallinger, and Dominique Schröder. After a welcome address by TU Wien Rector Jens Schneider and an introduction by our Dean Gerti Kappel, each of our new Professors presented what’s at the heart of their research and work. All three lectures illustrated the diversity of research areas and foci in the field of computer science, with topics ranging from Knowledge Representation and Reasoning (KRR) through scalability in computer science and Neurosymbolic AI to Privacy-Enhancing Technologies.

In the first lecture, Magdalena Ortiz explored one of the most ambitious and exciting ideas that have driven Artificial Intelligence (AI) since its early days: The explicit representation of human knowledge and the drawing of inferences from it by means of automated reasoning. Her lecture highlighted that, despite recent progress, knowledge-based rational inference remains very hard to reproduce in machines, and that many challenges remain on the way to transparent and accurate AI systems that can be trusted even in unforeseen circumstances and explained in human-understandable terms.

The second lecture, held by Emanuel Sallinger, addressed the challenges of making AI systems scalable in terms of speed, complexity, and sustainability. He offered an overview of these aspects and illustrated them in one of his major areas of research, Knowledge Graphs, a field that combines extremely large real-world graphs with very complex knowledge, a need for clear explanations, and sustainability as both a key concern and a possible area of application. He shared insights from real-world examples of his work, such as his collaboration with the Central Bank of Italy, and gave an outlook on what’s still to come at the intersection of data management, data science, and AI.

The third lecture of the evening was held by Dominique Schröder, whose research focuses on privacy-enhancing technologies. The lecture centered on the rapid growth of data collection and the advances in AI that make data processing ever more efficient. Privacy, in this context, is often seen as problematic. One common belief is that “I have nothing to hide,” which leads people to dismiss privacy concerns. Another misconception that Dominique Schröder brought up is that privacy hinders technological progress and stands as an obstacle to innovation in areas such as AI. In his lecture, he addressed both misconceptions and showed how technologies like encrypted communications and differential privacy in data analysis enable innovation without violating individual privacy.

After the lectures, the audience had the opportunity to bring their questions to our new Professors in a Q&A session, and the evening closed with time to mingle, exchange ideas, and network with staff, students, and faculty members.

Missed our first Inaugural Lecture? Then join us on November 12 for our Inaugural Lectures with Jessica Cauchard, Daniel Müller-Gritschneder, and Paweł Woźniak.

Curious about our new Professors? Check out their abstracts, academic bios, and interviews below to learn more about their research!

Lectures

Magdalena Ortiz

Links: #5QW Interview / Staff Page / Research Unit

Download Slides

One of the most ambitious and exciting ideas that have driven AI since its early days is the explicit representation of human knowledge and the drawing of inferences from it by means of automated reasoning. We are witnessing a new AI era where large-scale learning from vast quantities of data has enabled the simulation of increasingly sophisticated human-like behaviors. But despite all the recent progress, knowledge-based rational inference remains very hard to reproduce in machines.

Due to the subtle intricacies of human reasoning and the high computational cost of inference from vast amounts of knowledge, there are still many challenges to achieving a transparent and accurate AI that can be trusted even in unforeseen circumstances, and that can be explained in human-understandable terms. The field of Knowledge Representation and Reasoning has developed a rich toolkit of techniques that enable rational inference, leveraging different types of knowledge, accommodating various domain assumptions, and supporting increasingly complex information needs.

As we better understand different requirements and gain a better grasp of the fine balance between the expressiveness of languages and the cost of computation, we can now provide tailored solutions that support the desired decision-making at a minimal computational cost, making it feasible to build increasingly intelligent systems: An AI that does not require extensive training to improve its ability to make accurate predictions, as it can logically reach correct conclusions.
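To make the idea concrete, here is a deliberately tiny sketch of knowledge-based inference in Python (not taken from the lecture; the facts, the rule, and all names are invented for illustration): explicit facts plus a single rule, applied by forward chaining until no new conclusions appear.

    # Hypothetical illustration: explicit facts and one rule,
    # with new facts derived by forward chaining to a fixpoint.
    facts = {("teaches", "ortiz", "krr"), ("course", "krr")}

    def apply_rule(kb):
        """Rule: lecturer(X) :- teaches(X, C), course(C)."""
        derived = set()
        for fact in kb:
            if fact[0] == "teaches" and ("course", fact[2]) in kb:
                derived.add(("lecturer", fact[1]))
        return derived

    # Keep applying the rule until it produces nothing new.
    while True:
        new_facts = apply_rule(facts) - facts
        if not new_facts:
            break
        facts |= new_facts

    print(("lecturer", "ortiz") in facts)  # True, although never stated explicitly

The conclusion is derived logically rather than learned from examples, which is the sense in which such a system can reach correct conclusions without extensive training.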

Magdalena Ortiz is Professor of Knowledge Representation and Reasoning at the Institute of Logic and Computation at TU Wien Informatics. She is known for her research on logic-based AI, including description logics and ontologies, knowledge graphs, and knowledge-enriched data management. Her research aims to develop formal methods for understanding, representing, and accessing complex information, particularly in the context of AI systems that can reason and make decisions based on explicit knowledge. She serves on the editorial boards and program committees of numerous leading AI journals and conferences and is engaged in the organization of significant AI education initiatives in Europe.

Emanuel Sallinger

Links: #5QW Interview / Staff Page / Research Unit

Scalability in computer science is often associated with speed. Speed is one important aspect of Artificial Intelligence, but not the only one. We want not only answers from our AI systems, but also explanations of these answers. With growing input sizes, can explanations scale so that they remain useful to us as humans? Second, we need our AI systems to be sustainable – to scale down resource usage while still giving good results. Third, we need AI systems to scale with growing complexity – in my research related to Knowledge Graphs, this is often associated with complexity measures of large graphs and expressive knowledge. Solving these challenges individually is not enough. We need AI systems that jointly scale on all their core components. On the one hand, this is scalable data management, allowing effective access to data. On the other hand, this is joint scalability in the two main AI families: sub-symbolic AI – that is, machine learning including LLMs – and symbolic AI – including logical reasoning. This combination is typically called bilateral or neurosymbolic AI.

Getting all three parts closely working together is not easy, but it is crucial. In my lecture, I will give an overview of these aspects and illustrate them in one of my major areas of research, Knowledge Graphs, where we have a need for all the above: Extremely large real-world graphs with very complex knowledge, the need for clear explanations, and sustainability as both a key concern and an application. To achieve this, we need scalable graph data management and both sub-symbolic AI methods (such as Knowledge Graph Embeddings, Graph Neural Networks, and LLMs) and symbolic AI methods (such as Datalog- and Vadalog-based reasoning).
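As a rough, hypothetical sketch of what Datalog-style reasoning over a Knowledge Graph looks like (the companies and the single rule below are invented and not taken from any real project), the following Python snippet computes indirect control relationships to a fixpoint:

    # Hypothetical sketch of a recursive Datalog-style rule over a toy graph:
    #   controls(X, Z) :- controls(X, Y), controls(Y, Z)
    # evaluated naively until a fixpoint is reached.
    kg = {("controls", "A", "B"), ("controls", "B", "C"), ("controls", "C", "D")}

    def transitive_step(graph):
        """One application of the transitivity rule."""
        derived = set()
        for (_, x, y) in graph:
            for (_, y2, z) in graph:
                if y == y2:
                    derived.add(("controls", x, z))
        return derived

    # Engines such as Vadalog use far more efficient evaluation strategies;
    # this naive loop is only meant to show the idea.
    while True:
        new_edges = transitive_step(kg) - kg
        if not new_edges:
            break
        kg |= new_edges

    print(("controls", "A", "D") in kg)  # True: indirect control made explicit

Recursive rules of this shape are typical of the expressive knowledge that large company and ownership graphs require; the scalability challenge lies in evaluating them over graphs with millions of nodes rather than four.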

I will give real-world examples of my work in these areas, like the work with our partners at the Central Bank of Italy, and an outlook and vision of what comes next at the intersection of data management, data science, and Artificial Intelligence.

Emanuel Sallinger is Professor of Databases and Artificial Intelligence and Vice Dean of Academic Affairs for Business Informatics and Data Science at TU Wien Informatics. Prior to that, he directed the VADA Lab at the University of Oxford. He holds a PhD in Computer Science, a master’s degree in Computational Intelligence and Informatics Management, and a bachelor’s degree in Software & Information Engineering. He is a Senior Fellow of the Higher Education Academy (SFHEA).

Emanuel Sallinger is head of the Knowledge Graph Lab at DBAI, a Vienna Research Group funded by the Vienna Science and Technology Fund (WWTF). He also leads the SIG Knowledge Graphs of the Center for AI and ML (CAIML) at TU Wien. His research interests are in scalable data management and artificial intelligence technologies, especially those connecting theory to practice. This particularly includes approaches connecting symbolic AI – logical reasoning – and sub-symbolic AI, i.e., machine learning including LLMs. His focus lies on Knowledge Graphs and scalable reasoning in such systems, both in terms of knowledge-based/logic-based reasoning and machine learning-based reasoning.

Dominique Schröder

Links: #5QW Interview / Staff Page / Research Unit

Download Slides

Data collection is growing rapidly due to the increasing number of connected devices, from smartphones to smart home systems, and advances in artificial intelligence that make data processing more efficient. Businesses and governments are collecting data to enhance decision-making, improve services, and create personalized experiences. For example, smart healthcare can monitor patients remotely, connected devices can optimize energy use, and smart cities can reduce traffic congestion. However, this data collection raises privacy concerns, such as the risk of sensitive health data being exposed, devices recording conversations without consent, and personal data being shared with third parties.

Privacy is often seen as a problem or unnecessary because of common prejudices. One is the belief that “I have nothing to hide,” which leads people to dismiss privacy concerns as irrelevant to them personally. Another misconception is that privacy hinders technological progress, with some seeing it as an obstacle to innovation in areas such as AI, smart devices, or personalized services. In this talk, I will address both misconceptions. First, I will show how even small pieces of information can uniquely track and identify individuals, proving that privacy concerns affect everyone, not just those with “something to hide”. Second, I will show how privacy-enhancing technologies enable modern advances without violating individual privacy. From encrypted communications to differential privacy in data analysis, these tools allow us to innovate while protecting personal information at the same time, proving that privacy and technological progress can coexist.
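As a concrete, hypothetical illustration of one of these tools, the following Python sketch answers a counting query under differential privacy by adding Laplace noise to the exact count (the data, the threshold, and the privacy parameter are invented for illustration):

    # Hypothetical sketch of the Laplace mechanism for differential privacy:
    # a counting query is answered with calibrated noise rather than exactly.
    import random

    def laplace_noise(scale: float) -> float:
        """Laplace(0, scale) as the difference of two exponential samples."""
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def private_count(records, predicate, epsilon: float) -> float:
        """Epsilon-differentially-private count: a counting query has
        sensitivity 1, so the noise scale is 1 / epsilon."""
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    # Invented toy data: patient ages; the analyst only ever sees noisy counts.
    ages = [34, 51, 29, 62, 45, 38]
    print(private_count(ages, lambda age: age >= 40, epsilon=0.5))

The smaller epsilon is, the more noise is added and the stronger the privacy guarantee: useful aggregate statistics remain available while the contribution of any individual record is masked, which is the sense in which privacy and technological progress can coexist.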
