AI suggests higher pay for men than women, researcher warns of systemic bias

A test by Finnish broadcaster Yle found that OpenAI’s ChatGPT proposed a larger salary increase for a man than for a woman in identical roles, a result experts say illustrates how artificial intelligence reinforces gender stereotypes and structural bias.

In the experiment, Yle asked the AI model to suggest pay raises for nurses and engineers—once specifying a male professional, once a female. ChatGPT consistently recommended higher compensation for men, while another AI, Grok, showed no such discrepancy. Researchers called the results unsurprising, citing long-documented bias in training data.

“AI systems reproduce gender stereotypes and discriminate against women,” said Dr. Anna Haverinen, a digital culture expert and lead AI designer at Gofor. “Current language models are trained on structurally biased data, and that skews their outputs.”

Everyday examples include AI translating “doctor” as male and “nurse” as female, reflecting datasets dominated by male-centric material. In English-language models, terms like “engineer” often default to the masculine pronoun “he,” while “nurse” defaults to “she,” noted cognitive scientist Anna-Mari Wallenberg of the University of Helsinki. Women make up only about 30 percent of AI professionals, which deepens the problem.

Bias extends beyond language

Wallenberg, who leads the Aidemoc research project, stressed that AI isn’t inherently more discriminatory than society—but it amplifies existing inequalities. Verbs linked to caregiving are more often associated with women, while leadership terms align with men. “The bias isn’t usually intentional,” she said, “but arises at every stage of development, from data collection to coding.”

Compounding the issue, women in many developing countries lack internet access or literacy, which excludes their perspectives from training datasets. A 2023 study by consultancy Sia Partners found that all tested AI models favored masculine forms, yielding more positive predictions for men. Bias also varies between languages; German’s gendered nouns (e.g., der Arzt for a male doctor, die Ärztin for a female one) may further influence AI behavior.

Tech giants deprioritize ethics

Experts warn that major tech firms have scaled back ethical oversight. Since 2020, Google, Meta, and Microsoft have disbanded or reassigned teams focused on AI ethics, a move Haverinen said signals that profitability trumps fairness. “This sends a clear message: ethics isn’t a priority. It doesn’t generate good business.”

Wallenberg added that U.S.-based tech giants develop AI primarily for economic gain, not societal equity. “As long as financial incentives don’t align with building AI for inclusive societal models, I don’t expect real change,” she said.

Life-and-death consequences in healthcare

The most severe risks emerge in critical fields like medicine and law. Biased AI could misdiagnose illnesses if trained on data that underrepresents women’s symptoms, Wallenberg cautioned. “When AI is used in healthcare or judicial systems, the stakes are highest. A skewed algorithm isn’t just unfair—it can be dangerous.”

Source: Yle