AI may issue harsher punishments, judgments than humans: Study

A new study by researchers at the Massachusetts Institute of Technology (MIT) suggests that artificial intelligence falls short of humans when making judgment calls and is more likely to impose harsher penalties and punishments on rule-breakers.

The findings could have real-world implications if AI systems were used to predict the likelihood that a criminal will reoffend, the study says, potentially resulting in longer prison sentences or higher bail amounts.

Researchers from MIT, Canadian universities and nonprofit organizations studied machine-learning models and found that when AI systems are not trained properly, they make harsher judgments than humans.

The researchers devised scenarios in which people could break rules, such as keeping an aggressive dog in an apartment complex that bans certain breeds or using obscene language in an online comment section, and created four hypothetical code settings around those scenarios.

Human participants then labeled pictures or text from those scenarios, and their responses were used to train AI systems.

“I think most artificial intelligence/machine learning researchers assume that human judgments about data and labels are biased, but this result shows something even worse,” said Marzyeh Ghassemi, an assistant professor and head of the Healthy ML Group in the Computer Science and Artificial Intelligence Laboratory at MIT.

“These models cannot even replicate already biased human judgments because of flaws in the data they are trained on,” Ghassemi continued. “Humans would label the features of images and text differently if they knew those labels would be used to make decisions.”

A new study suggests that artificial intelligence could make harsher decisions than humans when entrusted with judgment.

Companies across the country and around the world are adopting AI technology or looking to use it to assist in routine tasks that are normally handled by humans.

The new research, led by Ghassemi, examined how well AI can mimic human judgment. The researchers found that when humans train a system using “normative” data, in which they explicitly label whether a rule has been violated, the AI reaches more human-like judgments than when it is trained using “descriptive” data.

Descriptive data refers to labels in which humans record factual features of a picture or text, such as noting the presence of fried food in a photo of a dinner plate. The study found that when descriptive data is used, AI systems often over-predict violations, for example judging that the fried food breaks a hypothetical school rule banning fried foods and meals containing high concentrations of sugar.

The words “artificial intelligence” are seen in this illustration taken March 31, 2023. (Reuters/Dado Ruvic/Illustration)

The researchers created hypothetical codes for four different settings: school lunch restrictions, dress codes, apartment pet rules and online comment-section rules. One group of humans was then asked to label factual features of the photos and text, such as whether a comment contained obscenities, while another group was asked whether the photos and text violated the hypothetical rules.

For example, the study showed people pictures of dogs and asked whether the animals violated a hypothetical apartment complex policy prohibiting aggressive dog breeds on the premises. The researchers then compared responses given under the normative and descriptive framings and found that people were 20% more likely to report that a dog violated the apartment rules when responding to the descriptive questions.

The researchers then trained one AI system using the normative data and another using the descriptive data for the four hypothetical settings. The study found that systems trained on descriptive data were more likely than the normatively trained models to incorrectly predict potential rule violations.
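To make that comparison concrete, here is a minimal, hypothetical sketch of the setup, not the researchers' actual code or data: it assumes Python with scikit-learn and a toy comment-moderation example, where the same comments are paired with descriptive labels (does the text contain profanity?) and normative labels (does it actually break the forum's rules?), and a separate classifier is trained on each label set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical comments with the two kinds of labels a crowd might produce.
comments = [
    "this thread is damn useless",
    "great write-up, thanks for sharing",
    "what a load of crap, honestly",
    "mildly annoyed but fine overall",
    "you are all idiots, damn it",
    "appreciate the detailed answer",
    "crap, I forgot to save my draft",
    "thanks, this solved my problem",
]
# Descriptive labels: does the comment contain profanity? (a factual feature)
descriptive_labels = [1, 0, 1, 0, 1, 0, 1, 0]
# Normative labels: does the comment actually violate the forum rules?
normative_labels = [1, 0, 1, 0, 1, 0, 0, 0]

def train(labels):
    """Fit a simple bag-of-words classifier on one labeling scheme."""
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(comments, labels)
    return model

descriptive_model = train(descriptive_labels)
normative_model = train(normative_labels)

# A comment that contains profanity but is arguably harmless.
test = ["damn, nice work everyone"]
print("descriptive-trained prediction:", descriptive_model.predict(test)[0])
print("normative-trained prediction:  ", normative_model.predict(test)[0])
```

Which model flags the test comment depends on the toy data, but the study's reported finding is that models trained on descriptive, feature-style labels tend to predict violations more often than models trained on normative human judgments.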

A gavel is seen inside a courtroom.

“This shows that data really matters,” Aparna Balagopalan, an electrical engineering and computer science graduate student at the Massachusetts Institute of Technology who helped author the study, told MIT News. “When training a model to detect rule violations, it is important to match the training context to the deployment context.”

The researchers argued that improving data transparency could help address the problem of AI over-predicting violations, as could training systems on a small amount of normative data alongside the descriptive data.

“The way to solve this is to transparently acknowledge that if you want to replicate human judgment, you should only use data collected in that environment,” Ghassemi told MIT News.

“Otherwise, you end up with a system with very strict moderation, much stricter than what humans would apply.”

Illustration of ChatGPT and Google Bard logos (Jonathan Lah/NurPhoto via Getty Images)

The report comes amid growing fears in some professional industries that AI could wipe out millions of jobs. A Goldman Sachs report earlier this year found that generative AI could impact 300 million jobs worldwide, and another study by outplacement and executive coaching firm Challenger, Gray & Christmas found that the AI chatbot ChatGPT could replace at least 4.8 million jobs in the United States.

AI systems such as ChatGPT can mimic human conversation based on prompts given by humans. According to a recent working paper from the National Bureau of Economic Research, the technology has already proven beneficial in some professions: customer service representatives, for example, were able to increase their productivity with the help of OpenAI's generative pre-trained transformer models.
