“Why We Rely on Badly Behaving Machines”
Toby Walsh tells us about the future of AI and why we should reconsider existing approaches to technology development.
In the 1940s, IBM president Thomas J. Watson reputedly claimed that the world would only ever need five computers. His pragmatic prediction has been proven terribly wrong, but there is an important lesson to be learned: the future of technology is by definition not fixed.
Toby Walsh, internationally renowned AI expert and Humboldt Prize Awardee, talked about the many predicaments on the winding road of tech development at the Vienna Gödel Lecture on 9 June 2022 at TU Wien’s AudiMax. TU Wien Rector Sabine Seidler and Dean of TU Wien Informatics Gerti Kappel emphasized the importance of research on the interconnections of technology and society. “With the Vienna Gödel Lectures, we are thrilled to offer critical insight on these matters to the public at large”, Gerti Kappel stated. Having Toby Walsh, one of the most influential voices in AI worldwide, hold this year’s Gödel Lecture is a special honor for the organizer and head of the Vienna Center for Logic and Algorithms (VCLA), Stefan Szeider, who moderated the talk.
Technological change and our responsibility as humans
We have come far, from machines doing basic calculations to programs that can learn and change themselves. But with all the euphoria about seemingly endless possibilities, we have to acknowledge that machines have limits. For Toby Walsh, the crucial limitation is that they are not moral. A smartphone does not feel pain and cannot be punished; it is not sentient and can therefore hardly be held accountable for its actions. In the past decades, the focus in AI development was on machines being “intelligent”, not on them being “artificial”. In some ways, we give them more credit than they are due, and we still have to be immensely careful about which responsibilities we assign to them.
Walsh identifies five important lessons regarding the development of artificial intelligence, in line with Neil Postman’s 1998 speech “Five Things We Need to Know About Technological Change”.
1. Ask what it gives and takes. Technological change is a trade-off. Our lives have become increasingly convenient, but there are things we lose. Using online maps to guide us wherever we go has changed our way of life, but it has also physically altered our brains: studies have shown that brain regions needed for spatial understanding have become smaller. So the question is: what are we willing to give up by developing AI further?
2. Ask who benefits and who loses. The benefits and costs of technology are not evenly distributed. AI is driven by data that reflects the biases of the circumstances in which it was captured. Even if we are especially careful, this remains a fundamental problem, because we use this technology to predict things the data simply cannot tell us. We have seen this in many questionable applications of AI, such as credit scoring, school grading, and hiring. Moreover, AI as a field is inherently biased, with a “sea of white dudes” designing and building machines that often fail to account for other genders or for minorities.
3. Ask what powerful ideas it comes with. Not only can we change technology; we can also change society. We are building machines that can take over strenuous tasks, and many jobs are starting to disappear. But are we reaping the benefits? As a society, we have to rethink our gridlocked perceptions of work. Four-day work weeks are just as productive and people are happier, so why not let machines lift the burden of an always too short weekend? Alongside tech development, we have to tackle big structural changes to ensure quality lives for all.
4. Ask about consequences. Technological change is not additive; its consequences are vast and unpredictable. We have to be aware of the manifold consequences AI entails and always be ready to rethink, reconsider, and right the wrongs. Facebook set out to measure meaningful engagement through likes and clicks, and it has taken us to fake news, polarized publics, and the anti-social media we have today. But changing the inner workings of an algorithm – as Elon Musk so eagerly promises for Twitter – is not the solution, because any system optimizing for these proxies will lead to a similar outcome. This is why we have to constantly reconsider what we truly want to achieve with technology.
5. Ask about false absolutes. Technological development is not part of a natural order. When it comes to artificial intelligence, we should consider its “artificiality” an advantage. There is no reason to believe that human intelligence is the ultimate goal; rather, we should be aware of what technology can do with and for us. One example is artificial flight: planes are obviously far more efficient for human travel than birds could ever be, yet they in no way resemble birds’ natural flight. Machines are not better or worse, but they can accomplish different things. And ultimately, they are a product of the human mind and creativity.
Keeping these lessons in mind, we should think carefully about where and how we introduce AI. In the short term we have to be pessimistic, Walsh states, because we are currently in the midst of identifying and solving crucial issues. But in the long run, we will be able to create better lives if we embrace technology. And AI will eventually help us down this bumpy road.
About Toby Walsh
Toby Walsh is one of the world’s leading researchers in Artificial Intelligence. He is a Laureate Fellow and Scientia Professor of Artificial Intelligence in the School of Computer Science and Engineering at UNSW Sydney. He leads the Algorithmic Decision Theory Group at CSIRO Data61, Australia’s Centre of Excellence for ICT Research.
Newspapers refer to him as the “rock star” of Australia’s digital revolution. Walsh’s regular appearances in the media testify not only to his popularity as a scientist and educator but also to his longstanding efforts to bring AI and its impacts to the attention of a broader public. He is a passionate advocate for regulating AI to ensure the public good, playing a leading role in the campaign to ban lethal autonomous weapons and advising the United Nations, the European Parliament, and federal governments.
Toby Walsh is an advocate for science education. His outreach activities include various popular science books, among them “2062: The World that AI Made” (2018), “Machines That Think: The Future of Artificial Intelligence” (2019), and his latest publication “Machines Behaving Badly: The Morality of AI” (2022).
Toby Walsh is a Humboldt Prize Awardee and has won the NSW Premier’s Prize for Excellence in Engineering and ICT as well as the ACP Research Excellence Award. He has been elected to the Australian Academy of Science and is a fellow of the ACM, the Association for the Advancement of Artificial Intelligence (AAAI), and the European Association for Artificial Intelligence. Throughout his international research career, he has held positions in England, Scotland, France, Germany, Italy, Ireland, and Sweden.
About Vienna Gödel Lectures
Named after the famous Austrian-American logician, mathematician and philosopher Kurt Gödel (1906-1978) and introduced in 2013, the annual Vienna Gödel Lectures bring world-class scientists to Vienna. The lecture series illustrates the fundamental and disruptive contribution of computer science to our information society. It investigates how our discipline explains and shapes the world we live in—and thereby our lives as such.