1000 Ideas Success: Kevin Blasiak’s Project Secures FWF Funding
We’re delighted to announce that Kevin Marc Blasiak’s project, “Can AI Build Mental Immunity? Countering Indoctrination,” has been selected for funding by the Austrian Science Fund (FWF)!

Picture: Aleksandra Blasiak
The project was selected in the FWF’s 1000 Ideas program, was awarded over €170,000, and will run until August 2027. “Can AI Build Mental Immunity? Countering Indoctrination” explores how artificial intelligence can empower people against online manipulation and harmful influence. Instead of removing content after the fact, the project will develop a chatbot that helps users recognize and resist extremist or manipulative messages. The chatbot engages users in conversations using weakened versions of harmful arguments to build their critical thinking and emotional resilience. Grounded in values such as empathy and autonomy, the system is being co-developed with experts in psychology, education, and digital safety. The broader goal is to create ethical AI tools that strengthen democratic values and protect young people in particular in increasingly immersive and synthetic digital environments.
The 1000 Ideas program is designed to spark bold, unconventional research by funding early-stage ideas that fall outside traditional funding frameworks. Open to all disciplines, the scheme offers funding for up to two years and targets innovative projects that challenge existing paradigms. What sets 1000 Ideas apart is its distinctive selection process, which includes a double-anonymized peer review to ensure fairness and rigor. By supporting daring concepts with real potential, the program aims to fuel scientific breakthroughs that might otherwise never get off the ground.
About
Dr. Kevin Marc Blasiak is a Postdoctoral Researcher at the Research Unit Artifact-based Computing and User Research at TU Wien Informatics and leads the Responsible Computing Circle at the Center for Technology & Society (CTS). He holds a PhD in Information Systems from the University of Queensland, Australia, and has published in outlets such as the Information Systems Journal, Business & Information Systems Engineering, and Communications of the AIS. His research focuses on responsible information systems, with particular emphasis on persuasive technologies, trust and safety, social media governance, and value-sensitive approaches to AI governance. His current projects include an FFG-funded initiative, developed with NGOs working in violence prevention, that explores chatbot-based interventions to counter online radicalization. He has worked with international organizations and government agencies, including on projects with the Australian Department of Home Affairs and the Global Internet Forum to Counter Terrorism (GIFCT), and is a contributor to the Stanford Trust & Safety Teaching Consortium. Through this work, he aims to advance both theoretical insights and practical strategies for the responsible development and governance of digital platforms and emerging technologies.
Curious about our other news? Subscribe to our news feed, calendar, or newsletter, or follow us on social media.