{"id":92743,"date":"2025-12-29T16:45:00","date_gmt":"2025-12-29T13:45:00","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=92743"},"modified":"2025-12-29T16:45:17","modified_gmt":"2025-12-29T13:45:17","slug":"study-reveals-ai-models-bias-against-dialects","status":"publish","type":"post","link":"https:\/\/u1f987.com\/en\/study-reveals-ai-models-bias-against-dialects\/","title":{"rendered":"Study Reveals AI Models&#8217; Bias Against Dialects"},"content":{"rendered":"<p>Large language models exhibit bias against dialect speakers, attributing negative stereotypes to them. This is the conclusion of researchers from Germany and the United States, reports <a href=\"https:\/\/www.dw.com\/en\/ai-chatbots-are-alarmingly-biased-against-dialect-speakers\/a-75247017\">DW<\/a>.\u00a0<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201cI believe we see truly shocking epithets attributed to dialect speakers,\u201d noted one of the study&#8217;s lead authors, Minh Duc Bui, in a comment to the publication.\u00a0<\/em><\/p>\n<\/blockquote>\n<p>An analysis by Johannes Gutenberg University revealed that ten tested models, including ChatGPT-5 mini and Llama 3.1, described speakers of German dialects (Bavarian, Cologne) as \u201cuneducated,\u201d \u201cfarm workers,\u201d and \u201cprone to anger.\u201d<\/p>\n<p>The bias intensified when the dialect was explicitly pointed out to the AI.<\/p>\n<h2 class=\"wp-block-heading\">Other Instances\u00a0<\/h2>\n<p>Researchers around the world have observed similar issues. 
A 2024 <a href=\"https:\/\/arxiv.org\/pdf\/2406.08818\">study<\/a> from the University of California, Berkeley compared ChatGPT&#8217;s responses to various English dialects (Indian, Irish, Nigerian).\u00a0<\/p>\n<p>The chatbot responded to these dialects with more pronounced stereotypes, more derogatory content, and a more condescending tone than to standard American or British English.\u00a0<\/p>\n<p>Emma Harvey, a computer science graduate student at Cornell University, called the bias against dialects \u201csignificant and troubling.\u201d\u00a0<\/p>\n<p>In the summer of 2025, she and her colleagues also <a href=\"https:\/\/arxiv.org\/pdf\/2506.04419\">discovered<\/a> that Amazon&#8217;s shopping assistant Rufus provided vague or even incorrect answers to people writing in African American English. If queries contained errors, the model responded rudely.\u00a0<\/p>\n<p>Another striking example of neural network prejudice involved an Indian job applicant who <a href=\"https:\/\/www.technologyreview.com\/2025\/10\/01\/1124621\/openai-india-caste-bias\/\">turned<\/a> to ChatGPT to check his resume in English. The chatbot ended up changing his surname to one associated with a higher caste.\u00a0<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201cThe widespread adoption of language models threatens not just to preserve entrenched prejudices but to amplify them on a large scale. Instead of mitigating harm, these technologies risk making it systemic,\u201d said Harvey.<\/em><\/p>\n<\/blockquote>\n<p>However, the problem is not limited to bias: some models simply fail to recognize dialects. 
For instance, in July, the AI assistant of Derby City Council (England) <a href=\"https:\/\/www.bbc.com\/news\/articles\/cr4wkgyq259o\">failed to recognize<\/a> a radio host&#8217;s dialect when she used words like mardy (\u201cwhiner\u201d) and duck (\u201cdear\u201d) on air.\u00a0<\/p>\n<h2 class=\"wp-block-heading\">What Can Be Done?\u00a0<\/h2>\n<p>The problem lies not in the AI models themselves but rather in how they are trained. Chatbots read vast amounts of text from the internet, which they then use to generate responses.\u00a0<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201cThe main question is who writes this text. If it contains biases against dialect speakers, the AI will replicate them,\u201d explained Carolin Holtermann from the University of Hamburg.<\/em><\/p>\n<\/blockquote>\n<p>She emphasized, however, that the technology has an advantage:\u00a0<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201cUnlike humans, AI systems&#8217; biases can be identified and \u2018switched off.\u2019 We can actively combat such manifestations.\u201d<\/em><\/p>\n<\/blockquote>\n<p>As a solution, some researchers propose creating customized models for specific dialects. 
In August 2024, the company Arcee AI <a href=\"https:\/\/www.arcee.ai\/blog\/arcee-meraj-maarj\">introduced<\/a> the Arcee-Meraj model, which works with several Arabic dialects.\u00a0<\/p>\n<p>According to Holtermann, the emergence of new and more adaptable <span data-descr=\"large language model\" class=\"old_tooltip\">LLMs<\/span> allows us to view AI \u201cnot as an enemy of dialects, but as an imperfect tool that can be improved.\u201d<\/p>\n<p>Earlier, journalists at The Economist <a href=\"https:\/\/u1f987.com\/en\/news\/ai-toys-a-double-edged-sword-for-childrens-development\">warned<\/a> of the risks AI toys pose to children&#8217;s mental health.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Large language models exhibit bias against dialect speakers, attributing negative stereotypes to them. This conclusion was reached by researchers from Germany and the United States.<\/p>\n","protected":false},"author":1,"featured_media":92744,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"AI models show bias against dialects, attributing negative stereotypes.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,167],"class_list":["post-92743","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-research"],"aioseo_notices":[],"amp_enabled":true,"views":"156","promo_type":"1","layout_type":"1","short_excerpt":"AI models show bias against dialects, attributing negative 
stereotypes.","is_update":"","_links":{"self":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/92743","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/comments?post=92743"}],"version-history":[{"count":1,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/92743\/revisions"}],"predecessor-version":[{"id":92745,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/92743\/revisions\/92745"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media\/92744"}],"wp:attachment":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media?parent=92743"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/categories?post=92743"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/tags?post=92743"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}