
“Don’t Humanize AI!”

  • By Theresa Aichinger-Fankhauser
  • 2023-06-12
  • Public Lecture
  • Research
  • Doctoral School

Ricardo Baeza-Yates, international AI expert, unravels why AI is acting irresponsibly and what we can do about it.

Picture: Amélie Chapalain / TU Wien Informatics

On June 7, 2023, TU Wien Informatics, the Institute for Human Sciences (IWM), the Center for Artificial Intelligence and Machine Learning (CAIML), and the TU Wien Informatics Doctoral School hosted a joint public lecture within the Digital Humanism Fellowship Program at the EI 7 Lecture Hall. Ricardo Baeza-Yates, AI expert, former Yahoo Research VP, and one of the first researchers to tackle AI bias, gave a lecture on “Responsible AI”.

“When we talk about responsible AI, we should talk about responsible humans. We need to find new ways of teaching and research to tackle the current challenges with AI, not only about AI. I’m pleased that Ricardo Baeza-Yates, highly esteemed AI expert and at the forefront of the Digital Humanism Initiative, joins us this semester as IWM fellow and guest professor,” Dean of TU Wien Informatics Gerti Kappel stated in her welcome address. Ludger Hagedorn, Head of the Patočka Archive at IWM, introduced the Digital Humanism Fellowship Program and the speakers; Hannes Werthner, a pre-eminent figure in Digital Humanism and former Dean of TU Wien Informatics, moderated the talk.

Irresponsible AI

His main goal is not only to keep humans in the loop but to put humans in charge: Ricardo Baeza-Yates explained why “Ethical AI” and “Trustworthy AI” are neither good terms nor promising scientific approaches, because with them we are humanizing algorithms. “Algorithms have no concept of ethics and justice; human traits cannot be applied to machines. My first advice would be: Do not anthropomorphize!” Baeza-Yates urges.

Irresponsible AI is not only a technical problem but a human one. Numerous accounts of harmful AI deployments have been made public in the AI Incident Database. But this is just the tip of the iceberg: most problems remain undocumented.

The curse of bias

The first and foremost problem behind irresponsible AI is the well-known ‘curse’ of bias. If we feed biased data to algorithms, they can even amplify the harm done. “We cannot just blame the system or the data. Also at fault are the developers creating and feeding the models,” Baeza-Yates explains, giving prominent examples: COMPAS is a popular software tool used in the United States criminal justice system to assess the risk of recidivism. Initially created as a support tool, its inherent racial bias can tremendously affect judges’ decisions. ChatGPT also discriminates: when users asked it to complete the sentence “Two Muslims walk into a…”, it responded with a long list of slurs before arriving at more factually grounded replies. Facial recognition has been all over the news, with headlines declaring that algorithms can even detect sexual or political orientation via image recognition. “Of course, this is not true,” Baeza-Yates explains. “Algorithms are merely re-discovering stereotypes, not doing science.”
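To make the amplification point concrete, here is a minimal, self-contained sketch. It is our own illustration, not material from the lecture: the toy “historical decisions”, the groups A and B, and the thresholding rule are all invented, but they show how a model that merely reproduces the statistics of biased data can end up making the bias worse.

```python
from collections import defaultdict

# Invented historical decisions: (group, approved). Group A was approved
# 80% of the time, group B only 40% -- a bias baked into the data.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

# "Train" a deliberately trivial model: approve a group whenever its
# historical approval rate clears a 50% threshold.
outcomes = defaultdict(list)
for group, approved in history:
    outcomes[group].append(approved)

rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
model = {g: rate >= 0.5 for g, rate in rates.items()}

print("historical approval rates:", rates)  # {'A': 0.8, 'B': 0.4}
print("model decisions:", model)            # {'A': True, 'B': False}
# The 80%-vs-40% gap in the data became a 100%-vs-0% gap in the model:
# the bias was not just reproduced, it was amplified.
```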

Limits of machines, limits of our planet

Machine learning has clear limitations. Humans are good at abstracting information, but to achieve this, one must learn to filter and ‘forget’ information, which AI usually cannot. Moreover, it cannot learn what is not in the data. According to Baeza-Yates, it is not accuracy we should worry about but the impact of errors. “We have to be humble. If we are unsure about certain information, we cannot let a model train on falsehoods.” If we want to think responsibly about our future, we also need to address the resources AI wastes, and not only the electricity and other resources needed in training. “Billions of people will use AI every day. The resources consumed in this usage are vast. When we turn to AI to solve our problems, we must consider its ecological impact,” Baeza-Yates is certain.
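The distinction between accuracy and the impact of errors can be illustrated with a small sketch. All numbers here are invented: two classifiers with identical accuracy cause very different harm once false negatives and false positives are weighted by their assumed real-world cost, here a hypothetical medical-screening setting.

```python
# Toy sketch with invented numbers: accuracy alone hides which errors a
# model makes; weighting errors by their impact tells a different story.

FN_COST, FP_COST = 100, 1  # assumption: a missed diagnosis harms far more

def evaluate(pairs):
    """Return (accuracy, total harm) for (true label, prediction) pairs."""
    acc = sum(y == p for y, p in pairs) / len(pairs)
    harm = sum(
        FN_COST if (y, p) == (1, 0) else FP_COST if (y, p) == (0, 1) else 0
        for y, p in pairs
    )
    return acc, harm

# 1 = condition present. Both models err on exactly 2 of 10 cases.
model_a = [(1, 0), (1, 0)] + [(1, 1)] * 3 + [(0, 0)] * 5  # two missed cases
model_b = [(0, 1), (0, 1)] + [(1, 1)] * 5 + [(0, 0)] * 3  # two false alarms

print("A:", evaluate(model_a))  # (0.8, 200) -- same accuracy, high harm
print("B:", evaluate(model_b))  # (0.8, 2)   -- same accuracy, low harm
```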

Responsible AI

Autonomy, beneficence, and justice are three essential principles for responsible AI. Autonomy emphasizes preserving human agency and ensuring transparency and control over AI decisions. Beneficence focuses on maximizing the benefits of AI while minimizing harm, promoting well-being, and addressing societal challenges. Justice means fairness, equality, and avoiding discrimination in AI development and deployment, ensuring equitable access and unbiased decision-making.

The use of data

Data processing is one of the trickiest aspects of this endeavor. Baeza-Yates advises developers and researchers to identify whether any of their data processing falls under Article 22 of the GDPR, which covers automated individual decision-making, and if so, to make sure of three things (a code sketch follows below):

  • Be transparent. Give individuals information about the processing.
  • Explain. Introduce simple ways for people to know what is happening and to intervene and challenge decisions.
  • Maintain. Uphold regular checks to ensure the systems work as intended, with continuous validation, testing, and maintenance.
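As a rough illustration of how that advice could translate into code around an automated decision, here is a minimal sketch. Everything in it is hypothetical: the affordability rule, the field names, and the appeal route are made up, and a real Article 22 workflow would of course be designed with legal counsel.

```python
import datetime
import json

def affordable(income: float, debts: float) -> bool:
    """Hypothetical automated decision: a trivial affordability rule."""
    return income - debts > 1000

def decide(applicant_id: str, income: float, debts: float) -> dict:
    approved = affordable(income, debts)
    record = {
        "applicant": applicant_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # (1) Be transparent: the individual is told a machine decided.
        "automated_decision": True,
        "decision": "approved" if approved else "rejected",
        # (2) Explain: a plain-language reason the person can challenge.
        "reason": f"income minus debts = {income - debts:.2f}, threshold 1000",
        # (2) Intervene: every decision carries a route to a human review.
        "appeal": "reply to this notice to request human review",
    }
    # (3) Maintain: the decision log doubles as the data set for the
    # regular validation and testing checks mentioned above.
    print(json.dumps(record, indent=2))  # stand-in for a real audit log
    return record

decide("applicant-42", income=2500.0, debts=2000.0)
```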

Fake and truth

Responsible AI requires a strong focus on accountability. As we delve into generative AI, where machines can create content, manipulate images, and generate information, ensuring responsible use and guarding against potential pitfalls becomes crucial. One concern is the proliferation of manipulations and the blurring line between fake and truth. According to Baeza-Yates, the abundance of information generated by AI systems creates a new “Tower of Babel” of knowledge, where different narratives and perspectives clash. Ultimately, responsible AI requires a commitment to transparency, accountability, and fairness. Whether in development, deployment, or use: responsible AI is a must if technology is to be integrated into our lives beneficially.

About Ricardo Baeza-Yates

Ricardo Baeza-Yates is a professor at the Institute for Experiential Artificial Intelligence at Northeastern University, Boston. He was Chief Technology Officer of NTENT and VP of Research at Yahoo Labs, where he founded and led the labs in Barcelona and Santiago de Chile from 2006 to 2015. Between 2008 and 2012, he also oversaw Yahoo Labs in Haifa, Israel, and started the London lab in 2012. Baeza-Yates is a part-time professor at the Department of Information and Communication Technologies of Universitat Pompeu Fabra in Barcelona, Spain, and at the Department of Computer Science of Universidad de Chile in Santiago. In 2005, he was an ICREA research professor at Universitat Pompeu Fabra. Until 2004, he was a professor and founding director of the Center for Web Research at Universidad de Chile.

Additionally, Baeza-Yates is a co-author of the best-selling textbook Modern Information Retrieval, published by Addison-Wesley in 1999, with a second, enlarged edition in 2011 that won the ASIST 2012 Book of the Year award. He is also a co-author of the second edition of the Handbook of Algorithms and Data Structures (Addison-Wesley, 1991) and co-editor of Information Retrieval: Algorithms and Data Structures (Prentice-Hall, 1992), among more than 600 other publications.

He was elected to the board of governors of the IEEE Computer Society from 2002 to 2004 and to the ACM Council from 2012 to 2016. He has received the Organization of American States award for young researchers in exact sciences, the Graham Medal for innovation in computing, given by the University of Waterloo to distinguished alumni, the CLEI Latin American distinction for contributions to CS in the region, and the National Award of the Chilean Association of Engineers, among other distinctions. In 2003, he became the first computer scientist to be elected to the Chilean Academy of Sciences, and since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009, he was named an ACM Fellow and, in 2011, an IEEE Fellow.

About the Digital Humanism Fellowship Program

Digital Humanism is at the forefront of current debates concerning human-technology interaction. To advance the interdisciplinary dialogue between informatics, the humanities, and politics, TU Wien Informatics, the Institute for Human Sciences (IWM), the Austrian Federal Ministry for Climate Action, Energy, Mobility, Innovation, and Technology (BMK), the TU Wien Informatics Doctoral School, and the Center for Artificial Intelligence and Machine Learning (CAIML) cooperate within Digital Humanism initiatives. In March 2022, IWM, TU Wien Informatics, and CAIML launched the Digital Humanism Fellowship Program to foster academic exchange across disciplines and institutional boundaries.
