Walking the Red Line – AI and Regulations
How can we regulate complex algorithmic systems? Law experts and computer scientists join forces to tackle the challenges of democratic technology development.
Over the past few years, governments and international organizations have developed AI strategies and are now passing the first AI regulations. World leaders have acknowledged that the vast amounts of data we generate every day, and the ways we process and use them, have substantial effects on public security.
In an international panel discussion on May 5th, 2022, Marc Rotenberg, president and founder of the Center for AI and Digital Policy, Christiane Wendehorst, law and digitalization expert and professor for civil law at the University of Vienna, and Hannes Werthner, professor and former Dean of TU Wien Informatics addressed AI policies, regulations, and their societal challenges.
TU Wien Informatics has been engaged in AI research since the late 1980s, when the Institute for Database Systems and Artificial Intelligence was established at the former Department of Informatics. With the Center for Artificial Intelligence and Machine Learning (CAIML), TU Wien and the Faculty of Informatics have initiated one of the most prominent AI centers worldwide. “That is why we are thrilled to host this interdisciplinary panel, facilitating synergies between computer science, law, and policymaking,” states Gerti Kappel, Dean of TU Wien Informatics, in her welcome address.
Governed by whom? – Decision-making processes revisited
The questions of how AI systems make decisions and how we can make these decisions transparent are central to the public discourse on AI and the driving force behind many AI strategies. “Since the era of machine learning, we can’t reconstruct how machines come to their conclusions. We essentially give decision-making power over people to machines we don’t understand,” Marc Rotenberg states. “This is particularly concerning given the commercialization and widespread use of AI in areas such as law enforcement and employment.”
These ‘black boxes’ have, for the first time, made us truly think about how decision-making works. Not only do machines have biases; human decision-making, too, is full of prejudice and ambiguity. The challenges posed by AI technology have, ironically, led us to revisit our own decision-making. “And this is a good thing,” Christiane Wendehorst is convinced, “because we can find ways to make better decisions in the future.”
Scientific findings should provide the foundation for these improved decision practices. “That two law experts are discussing AI at an informatics faculty shows the transformation of computer science. To find answers, we need other disciplines, shaping the role of humans and technology alike,” Hannes Werthner emphasizes.
Too little too late? – Existing approaches to regulations
The 2021 Artificial Intelligence and Democratic Values Index released a global ranking of countries based on national AI policies and practices – with Austria currently holding 5th place. Many federal governments are passing AI laws, but according to the index’s initiator, Marc Rotenberg, their success can only be measured by considering policymakers’ commitment to democratic values and meaningful engagement with the public. The standards applied in the AI strategies of international organizations like UNESCO and the OECD have risen, and regulators like the EU have switched to a much more precise approach to regulation, e.g., in the EU AI Act.
But the implementation of these laws in real life often leads to frustration. The EU General Data Protection Regulation (GDPR) is the prime example of this challenge. Praised as the ‘gold standard’ of data protection, it is still one of the best-known EU laws – not least because of the difficulty of translating it into everyday practice, its broad applicability resulting in unmanageable numbers of complaints to data protection agencies, and loopholes for companies. “We have to learn our lessons from the past,” Christiane Wendehorst explains, “and take a more risk-based approach. A bad example is the GDPR’s understanding of personal data, which is so broad that it fails to raise the level of protection we actually have. We ignore it and click on ‘Accept all Cookies.’ We need limits and definite rules, not only procedures.”
The recently negotiated EU Digital Services Act (DSA) shows a new direction for technology laws within the EU. One substantial improvement is the rejection of a ‘one size fits all’ solution. Larger companies face stricter rules because they pose a higher systemic risk, “and this is an approach I would like to see in the AI Act. If you look at recruitment software, a small company with biased technology might not greatly impact the overall job market. Still, a company with extensive coverage definitely will,” Wendehorst claims.
How we interpret existing laws is just as crucial to Marc Rotenberg, especially when it comes to the use of algorithms. The Cybersecurity Administration of China has passed a new set of recommendation algorithm regulations similar to the DSA’s regulations. Whereas transparency, user control, and limitations to data companies are part of both laws, the Chinese government also wants to ensure approved recommendations are in line with its political standards. “We have to create, implement and interpret laws alongside democratic values. Otherwise, the great potential of regulations is undermined – with a dramatic outcome we can already witness in authoritarian regimes,” Marc Rotenberg states.
Where is the red line? – Controversial practices and the future
Cameras in public places, real-time identification linked to databases, emotion recognition – brilliant technologies gone terribly wrong. “How we use technology is ultimately a political issue,” says Marc Rotenberg. “Many technologies can potentially endanger individual freedom and public safety, but we have to reach a political consensus on defining and regulating them.”
The German Ethics Committee on Data put forward a risk-based criticality pyramid, proposing a ban on red-zoned algorithmic systems with an “untenable potential of harm.” Christiane Wendehorst, a member of the Ethics Committee, recalls that “agreeing on which systems are in the red zone was extremely difficult. You can easily describe what you don’t want, but to translate this into programming the systems is very hard.”
The law has shown that defining these red lines is crucial for real-life implementation in many areas. For example, we often sign contracts without reading the small print. “This is common practice because there are protective laws that outlaw unfair ‘small print.’ We need a similar situation for algorithmic systems to integrate them safely into our everyday lives,” Wendehorst explains.
“Why do kitchen appliances need quality seals, but algorithmic software doesn’t?” Rotenberg asks, pointing out that a quality marker like the EU’s CE seal could make a difference in how technology is developed and applied. Drawing red lines is a process we are engaged in right now. Hannes Werthner is convinced that “Informatics is at the core of this development. We need to find ways to combine these complex software systems’ social and technical innovation for a better future.”
Check out the panel discussion and many more interesting lectures on the DIGHUM YouTube channel.
The panel discussion “Algorithms. Data. Surveillance – Is There a Way Out?” was hosted by the TU Wien Informatics Doctoral School and the Center for Artificial Intelligence and Machine Learning (CAIML) in the context of Digital Humanism. The panel was moderated by Josef Broukal.
In May 2022, Marc Rotenberg is visiting us as a guest professor at the TU Wien Informatics Doctoral School. Read more about his stance in his guest commentary in Wiener Zeitung, “Im Daten-, Algorithmen- und Überwachungsdschungel” (“In the Jungle of Data, Algorithms, and Surveillance”).