
Study finds AI models becoming risk-averse when prompted to act like women

Cryptopolitan
2025-10-12 10:39:28

New research has revealed that AI models become risk-averse when they are asked to act like women. According to the paper from Allameh Tabataba’i University in Tehran, Iran, AI models become cautious about taking risks when asked to make decisions as a woman; if the same model is asked to think like a man, it is more inclined to make riskier decisions. The researchers found that large language models systematically change their fundamental approach to financial risk based on the gender identity they are asked to assume. The study tested AI systems from companies including OpenAI, Google, DeepSeek, and Meta.

Study shows AI models are risk-averse depending on gender identity

The AI models were tested in several scenarios and dramatically shifted their risk tolerance when prompted with different gender identities. DeepSeek Reasoner and Google’s Gemini 2.0 Flash-Lite showed the most visible effect, becoming more risk-averse when asked to respond as women, mirroring real-life patterns in which women statistically demonstrate greater caution in financial decisions.

The researchers used a standard economics test called the Holt-Laury task, which presents participants with 10 decisions between a safe and a riskier lottery option. As the choices progress, the probability of winning the risky option increases. The point at which a participant switches from the safe bet to the risky choice reveals their risk tolerance: switching early indicates a willingness to take risks, while switching late indicates risk aversion.

DeepSeek Reasoner consistently chose the safe option when told to act as a woman compared with when it was prompted to act as a man. The difference was clear, and the model was consistent across 35 trials for each gender prompt. Gemini showed similar patterns, though the effect varied in strength. OpenAI’s GPT models, on the other hand, remained unmoved by gender prompts, maintaining a risk-neutral approach regardless of the gender they were asked to assume.

Researchers say users don’t notice these changes

According to the researchers, OpenAI has been working on making its models more balanced. A previous study from 2023 showed that its models exhibited clear political bias, which OpenAI appears to have since addressed: in the new research, the models produced 30% fewer biased replies.

The research team, led by Ali Mazyaki, said the behavior is essentially a reflection of human stereotypes. “This observed deviation aligns with established patterns in human decision-making, where gender has been shown to influence risk-taking behavior, with women typically exhibiting greater risk aversion than men,” the study says.

The study also examined whether AI models could convincingly play roles beyond gender. When asked to imagine themselves as someone in power or in a disaster scenario, some models adjusted their risk profiles to fit the context, while others remained stubbornly consistent. The researchers claim that many of these behavioral patterns are not immediately obvious to users. An AI model that subtly shifts its recommendations based on gender cues in a conversation could reinforce societal bias without anyone realizing it is happening.
For example, a loan approval system that becomes more conservative for women, or an investment advisor that suggests a safer portfolio because its client is female, would carry these disparities under the guise of algorithmic objectivity.
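To make the Holt-Laury mechanics described above concrete, here is a minimal Python sketch. It assumes the canonical Holt and Laury (2002) payoffs ($2.00/$1.60 for the safe lottery, $3.85/$0.10 for the risky one) and an illustrative classification of switch points; the article does not specify the exact values or scoring the study used.

```python
# Minimal sketch of the Holt-Laury risk-elicitation menu described above.
# Payoffs follow the canonical Holt & Laury (2002) design (an assumption here;
# the article does not give the study's exact figures).

def holt_laury_menu(n_rows: int = 10):
    """Build the 10 safe-vs-risky lottery pairs.

    In row k, both options pay their high outcome with probability k/10.
    Option A (safe):  $2.00 or $1.60
    Option B (risky): $3.85 or $0.10
    """
    menu = []
    for k in range(1, n_rows + 1):
        p_high = k / n_rows
        ev_safe = p_high * 2.00 + (1 - p_high) * 1.60
        ev_risky = p_high * 3.85 + (1 - p_high) * 0.10
        menu.append({"row": k, "p_high": p_high,
                     "ev_safe": ev_safe, "ev_risky": ev_risky})
    return menu


def classify_switch_point(switch_row: int) -> str:
    """Map the first row where the risky option is chosen to a rough
    risk attitude. With these payoffs a risk-neutral chooser switches
    at row 5, where the risky option's expected value first exceeds
    the safe one's; later is risk-averse, earlier is risk-seeking."""
    if switch_row < 5:
        return "risk-seeking"
    if switch_row == 5:
        return "approximately risk-neutral"
    return "risk-averse"


if __name__ == "__main__":
    for row in holt_laury_menu():
        better = "risky" if row["ev_risky"] > row["ev_safe"] else "safe"
        print(f"row {row['row']:2d}: P(high)={row['p_high']:.1f}  "
              f"EV(safe)={row['ev_safe']:.2f}  EV(risky)={row['ev_risky']:.2f}  "
              f"-> EV favors {better}")
    # Example: a respondent that only switches to the risky lottery at row 8
    # is classified as risk-averse under this scheme.
    print(classify_switch_point(8))
```

With these payoffs, the risky option’s expected value overtakes the safe one’s at row 5, so a respondent who keeps choosing the safe lottery well past that point is classified as risk-averse, which is the pattern the study attributes to models prompted with a female identity.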

