BUA spokesperson Prof. Günter M. Ziegler, President of FU Berlin, emphasized at the outset how important this topic area is within the Berlin excellence network: “The question of how to shape AI responsibly and fairly is one of the central challenges of our time and affects us all.” It involves not only technical but also ethical, legal, and scientific dimensions. All of these aspects need to be discussed within the network and brought into public debate, together with scientists from different disciplines and with urban society.
Around 80 visitors accepted the invitation that evening. Dr. Dafna Burema (TU Berlin) and Jonas Frenkel (University of Potsdam) from the Cluster of Excellence Science of Intelligence shared their expertise as scientists on the podium. Laura Möller, Head of the Artificial Intelligence Entrepreneurship Center K.I.E.Z., brought the perspective of practical application, which she encounters daily in her work with start-ups developing AI products. Moderator Mads Pankow guided the evening and skillfully drew the audience into the conversation; a seat on the podium was reserved for them as well, and participants took turns asking the experts questions and joining the discussion.
Responsibility, transparency and abuse
Artificial intelligence is not free of errors. But who is responsible when AI manipulates instead of supporting, or judges unfairly instead of objectively? Sociologist Dafna Burema investigates precisely such questions: “When AI fails, it can have technical, but sometimes also social causes.” If, for example, a surveillance AI unfairly evaluates or suspects people based on skin color or other characteristics, the cause lies in what is known as AI bias: human prejudices embedded in the training data are picked up by the algorithm and reproduced in its results. “Quite often, however, there is no transparency as to where the data comes from. And that is a problem,” the researcher emphasized.
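The mechanism Burema describes can be illustrated with a deliberately simplified sketch. The data, groups, and labels below are entirely hypothetical; the "model" is just a majority-vote rule, standing in for any system that learns statistical patterns from annotated examples. The point is only that biased annotations pass straight through training into predictions:

```python
from collections import Counter

# Hypothetical toy training data: (group, label) pairs.
# The labels reflect biased human judgments, not ground truth:
# annotators flagged group "A" as "suspicious" far more often.
training_data = [
    ("A", "suspicious"), ("A", "suspicious"), ("A", "harmless"),
    ("B", "harmless"), ("B", "harmless"), ("B", "harmless"),
]

def train_majority_classifier(data):
    """Learn the most frequent label per group -- a stand-in for
    any model that absorbs patterns from its training set."""
    by_group = {}
    for group, label in data:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_classifier(training_data)
print(model)  # → {'A': 'suspicious', 'B': 'harmless'}
```

The "model" has learned nothing about actual behavior, only the annotators' prejudice; and as Burema notes, without transparency about where such data comes from, this distortion is hard to detect from the outside.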
Visitors could experience for themselves just how hurtful and manipulative a badly programmed AI can be: with the interactive installation “Observee In Situ”, artists Jun Suzuki and Emilia Gentis created a creepy AI that scanned the guests and passed sobering judgment on their appearance, fashion taste, or supposed suspiciousness. The prejudices built into its programming made the guests cringe, but at the same time encouraged them to reflect on the possible motives, potential dangers, and risks of AI.