Review of BUA Open Space: AI & Ethics
“Alexa, tell me: How do we design AI fairly and responsibly?” AI and ethics was the topic of the second edition of the BUA Open Space salon series organized by the Berlin University Alliance.
Voice assistants, smart production planning and personalized suggestions from streaming services: artificial intelligence has arrived in our daily (working) lives. But with these innovations come many questions: How do we ensure that AI is developed fairly and responsibly? How do we prevent it from reinforcing existing prejudices? And how do we counter the skepticism many people have towards AI?
Artificial intelligence - for some, it is associated with nightmarish visions of a dystopian future in which people are constantly monitored and controlled. Others associate AI with the hope of a better future in which algorithms help to solve complex problems, improve medical care and make production more efficient. How does science view this topic? And what perspectives are coming from civil society, politics and business?
On September 25, the Berlin University Alliance invited all interested parties to discuss AI and ethics in the second edition of the BUA Open Space salon series, which took place at the Merantix AI Campus in Berlin.
BUA spokesperson Prof. Günter M. Ziegler, President of Freie Universität Berlin, emphasized right at the beginning how important this topic area is within the Berlin excellence network: “The question of how to shape AI responsibly and fairly is one of the central challenges of our time and affects us all.” It involves not only technical but also ethical, legal and scientific dimensions. All of these aspects, he said, need to be discussed within the network and brought into public debate, together with researchers from different disciplines and with urban society.
Around 80 visitors accepted the invitation that evening. Dr. Dafna Burema (TU Berlin) and Jonas Frenkel (University of Potsdam) from the Cluster of Excellence Science of Intelligence shared their expertise as scientists on the podium. Laura Möller, Head of the Artificial Intelligence Entrepreneurship Center K.I.E.Z., brought the perspective of practical application, which she experiences daily in her work with start-ups that develop AI products. Moderator Mads Pankow led the evening and skillfully drew the audience into the conversation; a seat on the podium was reserved for them as well, and various participants took turns asking the experts questions and joining the discussion.
Responsibility, transparency and abuse
Artificial intelligence is not free of errors. But who is responsible when AI manipulates instead of supporting, or judges unfairly instead of objectively? Sociologist Dafna Burema investigates precisely such questions: “When AI fails, it can have technical, but sometimes also social causes.” If, for example, a surveillance AI unfairly evaluates or suspects people based on skin color or other characteristics, the cause lies in what is known as AI bias. Biased assessments arise when human prejudices enter the algorithm through its training data and lead to distorted results. “Quite often, however, there is no transparency as to where the data comes from. And that is a problem,” emphasized the researcher.
Visitors could experience for themselves just how hurtful and manipulative a badly programmed AI can be: with the interactive installation “Observee In Situ”, artists Jun Suzuki and Emilia Gentis created an unsettling AI variant that scanned guests and passed sobering judgment on their appearance, fashion sense or suspiciousness. The prejudices built into the programming made guests cringe, but at the same time encouraged them to reflect on the possible motives, dangers and risks of AI.
As a psychologist, Jonas Frenkel researches human-robot interaction and uses AI to develop therapy robots for autistic children. He explained how AI algorithms could help adapt the therapy to each individual child so that voice and behavior convey trust. “It is precisely the reduced social complexity that makes therapy robots a valuable tool in autism therapy, giving children a sense of security,” he explained. At the same time, the researcher is aware that his findings could also be misused, for example to boost sales of products with particularly pleasant voices.
Two thirds have reservations about AI
“If you don't take responsibility as a founder, you are quickly out of the picture,” said Laura Möller in response to the question of who should take responsibility for AI errors. “Start-ups can have a major influence on consumers. They need to ensure early on that aspects such as ethics, transparency and the robustness of their data are built into their corporate DNA.” But does the market really regulate everything itself? Or do we need state regulation to ward off potential dangers and misuse of AI? Do we even need a dedicated AI that regulates and monitors other AIs? And as users, what do we need to know about AI to use it consciously and responsibly? What skills do we need to learn? A lively discussion arose around these questions, which quickly made clear that much remains open, not yet regulated by law and not yet sufficiently researched.
At the end of the evening, moderator Mads Pankow shared a sobering figure from a survey: almost two thirds of all people in Germany believe that AI will make their lives worse rather than better. What did the experts on the podium think? “AI is like the internet,” explained Dafna Burema. It, too, has good and bad sides. “You can also do a lot of good things with it.” Jonas Frenkel emphasized: “AI can take on tasks that I don't really want to do. For example, searching for an error in a page of computer code or summarizing long texts.” Laura Möller added with a smile: “Incidentally, there are now also the first AI robots that can fold laundry.”
We would like to thank everyone involved!