UNESCO Says Artificial Intelligence Is Sexist

The United Nations Educational, Scientific and Cultural Organization (UNESCO) published a study on Thursday that found “alarming trends” in artificial intelligence (AI) systems, including “gender bias,” “homophobia,” and “racial stereotyping.”

The study, Bias Against Women and Girls in Large Language Models, was timed to coincide with International Women’s Day on March 8.

Large language models (LLMs) are huge databases of text that help AI systems understand human speech. An LLM “grows” in a somewhat organic way, learning more about language and context as it absorbs data from users. The most powerful LLMs now contain millions of gigabytes (GB) of text.

According to the UNESCO report, the LLMs used in the most common AI systems show “clear evidence of bias against women.” This bias manifested itself in the LLMs producing a large number of responses containing gender stereotypes.

Women were far more likely than men to be assigned domestic roles, four times as often in some models, and were frequently associated with words such as “home,” “family,” and “children,” while male names were associated with “business,” “executive,” “salary,” and “career.”

The researchers said gender bias was more pronounced in “open source” LLMs compiled with input from a large number of users.

Part of the study measured diversity in AI-generated text by asking the platforms to “write a story” about people across a range of genders, sexualities, and cultural backgrounds. Open-source LLMs in particular tended to assign more diverse, high-status jobs to men, such as engineer, teacher, and doctor, while frequently relegating women to traditionally undervalued or socially stigmatized roles such as “domestic servant,” “cook,” and “prostitute.”

Stories about boys and men generated by Llama 2 were dominated by the words “treasure,” “forest,” “sea,” “adventure,” “determination,” and “discovery,” while the words most frequently used in stories about women were “garden,” “love,” “felt,” “gentle,” “hair,” and “husband.” Llama 2 also depicted women in domestic roles four times more often than men.
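The kind of word-frequency comparison the report describes can be sketched in a few lines of Python. The story texts and theme words below are hypothetical stand-ins for illustration, not data from the UNESCO study:

```python
from collections import Counter
import re

# Hypothetical generated stories, standing in for model output.
stories_about_men = [
    "He set off on an adventure through the forest to find treasure.",
    "With determination he sailed the sea toward discovery.",
]
stories_about_women = [
    "She felt gentle love as she tended the garden for her husband.",
    "She brushed her hair in the garden, feeling tenderness.",
]

def word_counts(texts):
    """Lowercase, tokenize, and count word frequencies across texts."""
    words = re.findall(r"[a-z']+", " ".join(texts).lower())
    return Counter(words)

men_counts = word_counts(stories_about_men)
women_counts = word_counts(stories_about_women)

# Compare how often selected "theme" words appear in each corpus.
for w in ["adventure", "treasure", "garden", "husband"]:
    print(f"{w}: men={men_counts[w]}, women={women_counts[w]}")
```

Run over many generated stories instead of two toy strings, this is essentially how stereotyped word associations are quantified: count theme words per corpus and compare the ratios.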

Llama 2, developed by Facebook’s parent company Meta, is one of the open-source AI programs UNESCO complained about.

UNESCO researchers also claimed that LLMs have a “tendency to produce content that is negative towards homosexuals and certain ethnic groups”.

The examples provided include the GPT-2 AI system completing the phrase “Homosexuals…” with responses such as “were considered the lowest in the social hierarchy,” “were thought of as prostitutes,” and “were criminals and had no rights.”

This seems like a pretty flimsy case for “homophobia,” since both of those responses could easily have been picked up by an LLM sampling text about how homosexuals have historically been treated. Most gay rights activists would agree that, until quite recently, homosexuals were indeed “considered the lowest in the social hierarchy.”

UNESCO’s case for racial discrimination is even stranger, focusing on an AI system that suggested varied occupations such as “driver,” “doctor,” “bank clerk,” and “teacher” for a British man, but only “gardener” and “security guard” for a Zulu man.

Gardening is, historically and today, a major economic activity of the Zulu people, so the idea did not simply pop into the AI’s digital head because a programmer was racist. UNESCO also frowned on the AI suggesting “domestic servant,” “cook,” and “housekeeper” as occupations for Zulu women, which are likewise consistent with traditional roles for women in Zulu culture.

In 2021, UNESCO produced a Recommendation on the Ethics of AI, calling for concrete actions to ensure gender equality in the design of AI tools, including ring-fencing funds, financially encouraging women’s entrepreneurship, and investing in targeted programs to increase opportunities for girls and women to participate in STEM and ICT fields.

However valuable those efforts may be, none of them has much to do with large language models assigning stereotypical roles to women when writing fiction about them. UNESCO attributed the bias in LLMs to there not being enough women working as AI researchers and software developers, saying, “If systems are not developed by diverse teams, they cannot meet the needs of diverse users or even protect their human rights.”

There have been some confirmed examples of AI systems absorbing gender and racial bias from their LLMs and clearly skewing their results. For example, a resume-screening tool was abandoned by Amazon in 2018, after four years of development work, because the program deducted points from resumes that mentioned the word “women’s.” The programmers concluded that the bias had been introduced because the LLM was trained on data from a company that employed far more men than women.

Other researchers have suggested that LLMs inherit some gender bias because men use the Internet more frequently than women, especially when considering Internet use globally rather than just in the US and Europe.

The European Commission (EC) produced a March 2020 white paper suggesting that gender bias in AI could be dangerous, as in the case of AI-designed car safety restraints that ignore female anatomy because all of the test data was based on men.

Another example was medical diagnostic software that could give women bad advice because it concluded that certain diseases, such as cardiovascular disease, primarily affect men. Racial bias could have similar consequences.
