TU Wien Informatics

About

Our research aims to enable machines to mimic human-like intelligence, via models and algorithms that process vast amounts of data, recognize patterns, and make decisions, using techniques from machine and deep learning as well as symbolic reasoning.

In our research activities, we cover all modalities, including natural language processing and computer vision. We focus especially on (i) explainable AI, (ii) deep learning with logical constraints for safe AI, (iii) hybrid model-based approaches to explainable, fair, and robust AI, (iv) general AI via predictive coding and active inference, and (v) intelligent applications, such as in healthcare and law.
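As a purely illustrative sketch of focus (ii), the snippet below shows one common way to combine deep learning with a logical constraint: a propositional rule (here, the hypothetical implications "cat → animal" and "dog → animal") is relaxed into a differentiable penalty that is added to the usual training loss. The label set, the rule, and all names are invented for this illustration and are not taken from the unit's publications.

    # Minimal sketch (illustrative only): a multi-label classifier whose loss adds a
    # differentiable penalty for violating the rules "cat -> animal" and "dog -> animal".
    # Labels, shapes, and the rules themselves are hypothetical.
    import torch
    import torch.nn as nn

    CLASSES = ["cat", "dog", "animal"]   # hypothetical label set
    CAT, DOG, ANIMAL = 0, 1, 2

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, len(CLASSES)))
    bce = nn.BCEWithLogitsLoss()

    def constraint_penalty(probs: torch.Tensor) -> torch.Tensor:
        """Relax 'cat -> animal' and 'dog -> animal' with the Lukasiewicz fuzzy
        semantics: the implication a -> b is violated by max(0, a - b), which is
        differentiable almost everywhere."""
        cat_violation = torch.clamp(probs[:, CAT] - probs[:, ANIMAL], min=0.0)
        dog_violation = torch.clamp(probs[:, DOG] - probs[:, ANIMAL], min=0.0)
        return (cat_violation + dog_violation).mean()

    def training_step(x, y, lam=1.0):
        logits = model(x)
        probs = torch.sigmoid(logits)
        # Supervised loss plus the weighted constraint penalty.
        return bce(logits, y) + lam * constraint_penalty(probs)

    # Toy usage: random features and labels that respect the rules.
    x = torch.randn(8, 16)
    y = torch.zeros(8, len(CLASSES))
    y[:4, CAT] = 1.0; y[:4, ANIMAL] = 1.0
    y[4:, DOG] = 1.0; y[4:, ANIMAL] = 1.0
    loss = training_step(x, y)
    loss.backward()
    print(float(loss))

The weight lam trades off fitting the labels against satisfying the rule; in practice, both the constraint language and the relaxation vary widely across approaches.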

The Research Unit Artificial Intelligence Techniques is part of the Institute of Logic and Computation.

Thomas Lukasiewicz

Head of Research Unit
Univ.Prof. Dipl.-Inf. Dr.

Bayar Ilhan Menzat

PostDoc Researcher
PhD

Simon Frieder

Visiting Scientist
MSc

Maxime Kayser

Visiting Scientist
BSc MSc


2022

  • Jang, M., Kwon, D. S., & Lukasiewicz, T. (2022). BECEL: Benchmark for Consistency Evaluation of Language Models. In N. Calzolari, C.-R. Huang, & H. Kim (Eds.), Proceedings of the 29th International Conference on Computational Linguistics (pp. 3680–3696). International Committee on Computational Linguistics.
  • Li, B., Torr, P. H. S., & Lukasiewicz, T. (2022). Clustering Generative Adversarial Networks for Story Visualization. In MM ’22: Proceedings of the 30th ACM International Conference on Multimedia (pp. 769–778). Association for Computing Machinery. https://doi.org/10.1145/3503161.3548034
  • Kayser, M., Emde, C., Camburu, O.-M., Parsons, G., Papiez, B., & Lukasiewicz, T. (2022). Explaining Chest X-Ray Pathologies in Natural Language. In L. Wang, Q. Dou, P. T. Fletcher, S. Speidel, & S. Li (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (pp. 701–713). https://doi.org/10.1007/978-3-031-16443-9_67
  • Jang, M., & Lukasiewicz, T. (2022). NoiER: An Approach for Training more Reliable Fine-Tuned Downstream Task Models. IEEE/ACM Transactions on Audio, Speech and Language Processing, 30, 2514–2525. https://doi.org/10.1109/TASLP.2022.3193292
  • Li, B., & Lukasiewicz, T. (2022). Learning to Model Multimodal Semantic Alignment for Story Visualization. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 4741–4747). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-emnlp.346
  • Mtumbuka, F., & Lukasiewicz, T. (2022). Syntactically Rich Discriminative Training: An Effective Method for Open Information Extraction. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 5972–5987). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.emnlp-main.401
  • Li, B., Torr, P. H. S., & Lukasiewicz, T. (2022). Memory-Driven Text-to-Image Generation. In The 33rd British Machine Vision Conference Proceedings. 33rd British Machine Vision Conference, London, United Kingdom.
  • Li, B., Torr, P. H. S., & Lukasiewicz, T. (2022). Image-to-Image Translation with Text Guidance. In The 33rd British Machine Vision Conference Proceedings (pp. 1–14).
  • Pinchetti, L., Salvatori, T., Yordanov, Y., Millidge, B., Song, Y., & Lukasiewicz, T. (2022). Predictive Coding beyond Gaussian Distributions. In S. Koyejo, S. Mohamed, & A. Agarwal (Eds.), Advances in Neural Information Processing Systems 35 (NeurIPS 2022) (pp. 1280–1293).
  • Jang, M., Mtumbuka, F., & Lukasiewicz, T. (2022). Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence. In Findings of the Association for Computational Linguistics: NAACL 2022 (pp. 2030–2042). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-naacl.156
  • Wang, J., Lukasiewicz, T., Massiceti, D., Hu, X., Pavlovic, V., & Neophytou, A. (2022). NP-Match: When Neural Processes meet Semi-Supervised Learning. In Proceedings of the 39th International Conference on Machine Learning (pp. 22919–22934). PMLR.
  • Yordanov, Y., Kocijan, V., Lukasiewicz, T., & Camburu, O.-M. (2022). Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations in a Label-Abundant Setup. In Y. Goldberg, Z. Kozareva, & Y. Zhang (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 3486–3501). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-emnlp.255
  • Millidge, B., Salvatori, T., Song, Y., Lukasiewicz, T., & Bogacz, R. (2022). Universal Hopfield Networks: A General Framework for Single-Shot Associative Memory Models. In Proceedings of the 39th International Conference on Machine Learning (pp. 15561–15583).
  • Millidge, B., Salvatori, T., Song, Y., Bogacz, R., & Lukasiewicz, T. (2022). Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation? In L. De Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22) (pp. 5538–5545). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/774
  • Salvatori, T., Pinchetti, L., Millidge, B., Song, Y., Bao, T., Bogacz, R., & Lukasiewicz, T. (2022). Learning on Arbitrary Graph Topologies via Predictive Coding. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022) (pp. 38232–38244). Neural Information Processing Systems Foundation.
  • Majumder, B. P., Camburu, O.-M., Lukasiewicz, T., & McAuley, J. (2022). Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations. In K. Chaudhuri, S. Jegelka, & L. Song (Eds.), Proceedings of the 39th International Conference on Machine Learning (pp. 14786–14801). ML Research Press.
  • Lukasiewicz, T., Malizia, E., & Molinaro, C. (2022). Explanations for Negative Query Answers under Inconsistency-Tolerant Semantics. In L. De Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (pp. 2705–2711). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/375
  • Giunchiglia, E., Stoian, M. C., & Lukasiewicz, T. (2022). Deep Learning with Logical Constraints. In L. De Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (pp. 5478–5485). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/767
  • Frieder, S., & Lukasiewicz, T. (2022). (Non-)Convergence Results for Predictive Coding Networks. In Proceedings of the 39th International Conference on Machine Learning (pp. 6793–6810). http://hdl.handle.net/20.500.12708/187543

  • Hohenecker, P. (2016). Deep learning für das Semantic Web [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2016.37489

Soon, this page will include additional information such as reference projects, conferences, events, and other research activities.

Until then, please visit the research profile of Artificial Intelligence Techniques in TISS.