Study reveals: AI unfairly evaluates East Germans in applications!

The study by Munich University of Applied Sciences shows how AI disadvantages East Germans. Problems with assessments and biases are highlighted.

An alarming study by Munich University of Applied Sciences found that artificial intelligence (AI) such as ChatGPT systematically rates East German states worse than their West German counterparts. As Focus describes, the analysis of several AI models, including ChatGPT 3.5 and 4 as well as the German LeoLM, revealed serious biases in the evaluation of characteristics such as diligence and attractiveness, which go beyond mere stereotypes.

These biases are concerning, especially at a time when AI plays an increasingly critical role in application processes and credit decisions. The core of the problem lies in the way the AI itself works, which can only be partially corrected with “debiasing prompts”. Prof. Dr. Anna Kruspe, who researches this topic at Munich University of Applied Sciences, emphasizes that while about 30 percent of the political bias has already been removed from the AI models, 70 percent of the problems persist.

Risks for the application process

A central concern is the structural disadvantage East Germans face when applying for jobs. For example, AI models could give unjustifiably negative assessments of educational paths and work experience. To counteract this, Prof. Kruspe suggests stating explicitly in the prompt that the applicant's origin should not be taken into account. Unfortunately, this is not a reliable solution, as her research shows, which she will present at a conference on artificial intelligence in 2024.
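The "debiasing prompt" approach described above can be illustrated with a short sketch: an explicit instruction not to consider the applicant's origin is prepended to the evaluation request before it is sent to an AI model. The wording, function name, and prompt structure here are illustrative assumptions, not the exact prompts used in the study.

```python
# Illustrative sketch of a "debiasing prompt" (not the study's actual wording):
# an instruction to ignore the applicant's origin is prepended to the task.

DEBIAS_INSTRUCTION = (
    "Evaluate the following application strictly on qualifications. "
    "Do not take the applicant's regional origin into account."
)

def build_debiased_prompt(application_text: str) -> str:
    """Combine the debiasing instruction with the actual evaluation task."""
    return (
        f"{DEBIAS_INSTRUCTION}\n\n"
        f"Application:\n{application_text}\n\n"
        "Rate the applicant's suitability on a scale from 1 to 10."
    )

# Example usage with a hypothetical CV snippet:
prompt = build_debiased_prompt("CV: Trained electrician, 10 years of experience ...")
print(prompt)
```

As the study notes, such instructions only partially suppress the learned bias: the model's underlying associations remain, so the prompt is a mitigation, not a fix.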

The potential impact is serious: if the AI continues to act in such a discriminatory manner, East Germans could be denied opportunities to which they may be entitled. The study also finds that algorithmic biases are not just a matter of regional origin but can affect all groups. These findings could influence not only individual careers but also the labor market as a whole.

Social implications

But it's not just regional filtering that's a problem. As the Böckler Foundation reports, AI is increasingly being used in the world of work, where it is capable of reinforcing deep-rooted discrimination. Women in female-dominated professions in particular could suffer from this effect, as they are often offered lower-paying jobs when AI-based systems are used for personnel selection.

Meanwhile, the discussion about regulating AI is gaining momentum as more and more alarming voices point out the potential dangers. In the future, it will be crucial to both create legal frameworks and manage technological developments in a way that promotes justice and equality. Experts advocate that works councils play an active role in checking the use of new technologies for possible discrimination and ensuring that progress does not come at the expense of social justice.

The results of this study shed a harsh light on the discriminatory potential that artificial intelligence carries in our society and challenge us to critically examine how we integrate these technologies into a just future. We can only hope that this opportunity is not missed, because progress should benefit everyone.