Explaining the Unexplainable
Edward A. Lee, distinguished computer scientist and expert on societal implications of technology, unravels the black boxes of machines and human minds alike.
On May 24th, 2022, TU Wien Informatics, the Institute for Human Sciences (IWM) and the Center for Artificial Intelligence and Machine Learning (CAIML) hosted the first public lecture within the Digital Humanism Fellowship Program at TUtheSky.
“There will be no development without digitalization, no production, no wellbeing, no health, no quality education, and the list goes on. We have to embrace the fact that we live in a digital world and face the challenge to survive in this world”, Gerti Kappel, Dean of TU Wien Informatics, states in her welcome address. Digital Humanism is at the forefront of shaping the coexistence of humans and technology alongside democratic values. Michael Wiesmüller, Head of the Department for Key Enabling Technologies in Industrial Innovation at BMK, expressed his gratitude to IWM and TU Wien Informatics, for fostering a stream of activities within the Digital Humanism Fellowship Program. “This partnership is the first of its kind, and I want to thank TU Wien Informatics for their open doors, network, and support offered to the fellowship program.”
Ludger Hagedorn introduced the Digital Humanism Fellowship Program, with Edward A. Lee being the first senior fellow of the newly-founded program. “It was a daring experiment, with humanities and informatics having profoundly different approaches. But so far, this cooperation has been highly effective, and we are looking forward to thriving joint efforts in the future”, Hagedorn declares.
Explaining the Unexplainable
Artificial Intelligence has undergone a complete revolution in the past 15 years. This shift was not created by introducing new technologies but rather by scaling and connecting existing technologies with the vast amount of openly available data.
As machines become increasingly human-like, “neural networks” are at the core of scientific developments. Neural networks are a subset of AI: computing systems built from deep learning algorithms. Their name and structure were inspired by the human brain, mimicking how organic neurons communicate with one another. These networks seem highly complex at first glance but are conceptually extremely simple, Lee explains in his lecture. They consist of huge numbers of repetitive additions and multiplications, applied to vast amounts of data. How the machine’s operations work is easily explained, but why a machine comes to a specific conclusion is not.
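The arithmetic Lee refers to can be sketched in a few lines of Python. This is a purely illustrative toy, not an example from the lecture: every weight and number below is made up, and a real network repeats these same operations billions of times.

```python
import math

def neuron(inputs, weights, bias):
    # One multiplication per input, a sum, then a simple nonlinearity.
    return math.tanh(sum(x * w for x, w in zip(inputs, weights)) + bias)

def layer(inputs, weight_rows, biases):
    # A layer is just many neurons applied to the same inputs.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A toy two-layer network: conceptually simple operations, repeated at scale.
hidden = layer([0.5, -1.2], [[0.8, 0.3], [-0.5, 0.9]], [0.1, -0.2])
output = layer(hidden, [[1.0, -1.0]], [0.0])
```

Nothing in this computation resembles a human-readable reason; scaled up to billions of weights, listing the operations explains everything and nothing at once.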
Why we can’t explain machines’ decisions
Surprisingly, the black box of AI’s decision-making is rooted in this very simplicity. The only explanation a machine can offer consists of quadrillions of basic arithmetic operations, and is thus useless to human reasoning.
“To give an example: If you think that everything in the world is a consequence of the basic laws of physics, let’s say electron-proton interaction, you still won’t be able to explain the war in Ukraine. It is just not a useful explanation, and the same goes for AI”, Lee states. So, knowing the operations done by a computer does not help a human to determine whether a decision is justified, because it does not even provide what we would call an “explanation”.
Why we can’t explain human decisions either
Humans can only handle a few reasoning steps over very limited data. But we are highly skilled at “synthesizing explanations”: we can easily find explanations for our decisions after the fact, regardless of whether those explanations are rational, as researchers have shown in a study on court rulings. In that study, the time since a judge’s last food break had a significant impact on whether they granted parole to convicts. Yet none of these judges would have any difficulty providing a “rational explanation” for their decision, one that certainly would not include the time of their last meal, Lee argues.
If we have a set of rules machines should abide by, we can also train them to synthesize decisions that follow these rules – in fact, this has already been achieved with gaming AIs such as AlphaGo.
But the question is: how do humans truly come to a decision? It is not only the time between food breaks, but rather intuition and experience that cannot be rationally explained – similar to deep neural networks.
The coexistence of humans and machines
It seems that rational explanations – a deeply rooted human approach to comprehending decision-making – are not actually a feasible tool for explaining what either human minds or machines are doing.
Neither the human brain nor neural networks can be reduced to their simple operative functions. Reservoir computing has shown that if we replace the layers of algorithmic computation at the center of a machine with an essentially random physical system – a bucket of water, or cell cultures – it can reach the same decisions. Today’s deep neural networks could be algorithmic simulations of natural, “non-algorithmic” processes, which opens up research on new machine structures that do not build on algorithms at all. There is even a provocative conjecture in neuroscience that the human brain itself could be a reservoir.
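The core idea of reservoir computing can be sketched in software (a minimal sketch under my own assumptions, not the physical systems Lee describes): the recurrent network in the middle stays fixed and random – only a simple linear readout is trained. The reservoir size, weight ranges, task, and learning rate below are all arbitrary illustrative choices.

```python
import math
import random

random.seed(0)
N = 20  # reservoir size (arbitrary)

# The "reservoir": fixed, random recurrent weights that are never trained.
W_res = [[random.uniform(-0.4, 0.4) for _ in range(N)] for _ in range(N)]
W_in = [random.uniform(-1.0, 1.0) for _ in range(N)]

def step(state, u):
    # A fixed nonlinear dynamical system driven by the input u --
    # the software analogue of the "bucket of water".
    return [math.tanh(sum(W_res[i][j] * state[j] for j in range(N)) + W_in[i] * u)
            for i in range(N)]

# Only this linear readout is trained (here with a simple least-mean-squares
# update) to predict the next value of a sine wave from the reservoir state.
w_out = [0.0] * N
state = [0.0] * N
lr = 0.01
for t in range(2000):
    u = math.sin(0.3 * t)
    state = step(state, u)
    y = sum(w * s for w, s in zip(w_out, state))
    err = math.sin(0.3 * (t + 1)) - y
    for i in range(N):
        w_out[i] += lr * err * state[i]
```

The striking point is that all the "computation" happens in a structure nobody designed; learning touches only the thin linear layer on top.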
So how can we integrate these scientific findings to shape a better digital society? Striving for human-like intelligence may not be a good idea, Lee states. AIs like Microsoft’s chatbot Tay have shown that machines learn the best and worst of us. They are prone to manipulation, multiplying our biases and destructive behaviors.
Instead of demanding explanations from AIs, we should take different approaches. The first option, according to Edward A. Lee, would be to license them. Just as humans are given more responsibility in their jobs as they gain experience, machines could be certified to take the next step in their “career”. We can also train AIs to expose our human biases and deliberate abuses of information filtering instead of reinforcing them. Overall, there are many great possibilities not only to improve machines but to make us better humans through machine learning. But these endeavors inevitably require multidisciplinary engagement and the support of policymakers. This is why Edward A. Lee is convinced of the importance of Digital Humanism as a driving force – bringing together people from all over the world, across institutions and disciplines, to create a better digital society.
Read more about Edward A. Lee’s stance on humans and machines in his interview with Kurier | May 24th, 2022 (DE)
Tune in to Ö1 Digital.Leben on June 11th, 2022 for a radio interview with Edward A. Lee.
About Edward Lee
Edward Ashford Lee has been working on software systems for 40 years. He currently divides his time between software systems research and studies of the philosophical and societal implications of technology. After studying at Yale and MIT and working at Bell Labs, he joined the University of California, Berkeley, where he is now a Professor of the Graduate School in Electrical Engineering and Computer Sciences. His software research focuses on cyber-physical systems, which integrate computing with the physical world. He is the author of several textbooks and two general-audience books: “The Coevolution: The Entwined Futures of Humans and Machines” (2020) and “Plato and the Nerd: The Creative Partnership of Humans and Technology” (2017).
Edward A. Lee has a longstanding affiliation with TU Wien Informatics. He was a member of the International Advisory Board (IAB), is at the forefront of the Digital Humanism Initiative, has recently been awarded TU Wien’s honorary doctorate, and currently visits TU Wien Informatics as a guest professor and first senior fellow of the IWM Digital Humanism Fellowship.
About the Digital Humanism Fellowship Program
Digital Humanism is at the forefront of current debates concerning human-technology interaction. To advance the interdisciplinary dialogue between informatics, humanities, and politics, TU Wien Informatics, the Institute for Human Sciences (IWM), the Austrian Federal Ministry for Climate Action, Energy, Mobility, Innovation, and Technology (BMK), TU Wien Informatics Doctoral School and the Center for Artificial Intelligence and Machine Learning (CAIML) cooperate within Digital Humanism Initiatives. In March 2022, IWM, TU Wien Informatics, and CAIML launched the Digital Humanism Fellowship Program to foster academic exchange across disciplines and institutional boundaries.