Reality Check: SHROOM Hackathon to Detect LLMs’ Hallucinations

  • By Gábor Recski / Theresa Aichinger-Fankhauser (ed.)
  • 2024-01-26
  • Research
  • CAIML

CAIML and the Data Science Research Unit organized the first Natural Language Processing (NLP) Hackathon at TU Wien Informatics.

On January 11, 2024, the Research Unit for Data Science and the Center for Artificial Intelligence and Machine Learning (CAIML) hosted the SHROOM Hackathon, the first Natural Language Processing (NLP) Hackathon at TU Wien Informatics, focusing on detecting hallucinations in large language models (LLMs). The rise of LLMs, driven by widely used applications such as ChatGPT, has created a growing need for research on detecting outputs that are fluent but false, misleading, or irrelevant. This inspired NLP researchers at the University of Helsinki to organize SHROOM, the Shared Task on Hallucinations and Related Observable Overgeneration Mistakes.

The NLP Hackathon gave researchers and students the opportunity to brainstorm and implement ideas for the task and to form a team that will participate in the official competition. The Hackathon ran from 14:00 until 23:00. Participants worked in small teams, each rapidly developing prototype solutions based both on their own ideas and on recent scientific literature on hallucination detection. The most successful ideas will now form the basis of the TU Wien team’s submission to the SemEval 2024 workshop in May.

We’d like to thank the organizers, Varvara Arzt (Data Science Research Unit), Mohammad Mahdi Azarbeik (CAIML), Ilya Lasy (CAIML), and Gábor Recski (Data Science Research Unit), for their efforts in making this event a success!
