Chatbots and Pay Negotiations: A Mixed Bag
In today’s world, many of us turn to AI for everything from relationship advice to budgeting tips, and now chatbots are even helping people negotiate their salaries.
But here’s the catch: if you’re a woman or a minority using these chatbots, the advice you get might not always be beneficial. In fact, it could be quite the opposite.
Recent research posted to arXiv, Cornell University’s preprint server, highlights a troubling trend: Large Language Models (LLMs), the engines behind chatbots, often provide biased pay advice based on a user’s demographics.
Specifically, these AI systems tend to recommend that women and minorities aim for lower salaries when negotiating their pay.
The study, led by Ivan P. Yamshchikov, a professor at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS), examined interactions with multiple AI models. The team created prompts based on different personas to track how the chatbots responded.
What they found was disheartening: the AI frequently suggested that women should aim for substantially lower salary expectations than men. For example, in one test scenario, male candidates for a senior medical role in Denver were advised to ask for $400,000, while a similarly qualified female candidate was only encouraged to seek $280,000. That’s a $120,000 gap fueled by gender bias.
The models also consistently recommended lower salaries for minorities and refugees. This aligns with earlier findings in which subtle cues, such as a candidate’s name, triggered gender and racial disparities in employment-related queries.
Experts have also raised concerns that many AI models retain user characteristics across a conversation, meaning biases can creep in even when a person’s gender or race is never explicitly mentioned. That’s a daunting prospect, particularly given how much trust people may place in AI.
Interestingly, a May 2025 study by Common Sense Media found that many American teens aged 13 to 17 are increasingly relying on ChatGPT to learn social skills and navigate conflicts, including in romantic situations. Around 40% of these teenagers applied what they learned from chatbots in real life.
It raises an important question: Are we inadvertently setting ourselves up for disappointment by relying on AI for critical life skills and advice?