{"id":23583,"date":"2025-05-01T16:54:41","date_gmt":"2025-05-01T13:54:41","guid":{"rendered":"https:\/\/forklog.com\/en\/study-finds-politeness-with-ai-models-is-futile\/"},"modified":"2025-05-01T16:54:41","modified_gmt":"2025-05-01T13:54:41","slug":"study-finds-politeness-with-ai-models-is-futile","status":"publish","type":"post","link":"https:\/\/u1f987.com\/en\/study-finds-politeness-with-ai-models-is-futile\/","title":{"rendered":"Study Finds Politeness with AI Models is Futile"},"content":{"rendered":"<p>A new <a href=\"https:\/\/arxiv.org\/pdf\/2504.20980\">study<\/a> by researchers from George Washington University reveals that politeness towards AI models is a waste of computational resources.<\/p>\n<p>Adding words like &#8220;please&#8221; and &#8220;thank you&#8221; to prompts has a negligible impact on the subsequent quality of chatbot responses.<\/p>\n<p>Experts found that polite language is generally &#8220;orthogonal to the substantive good and bad output tokens&#8221; and has &#8220;minimal impact on the dot product&#8221;\u2014meaning such words occupy separate areas in the model&#8217;s internal space and hardly affect the outcome.<\/p>\n<p>The article contradicts a Japanese <a href=\"https:\/\/arxiv.org\/pdf\/2402.14531\">study<\/a> from 2024, which claimed that politeness enhances artificial intelligence performance. That research tested GPT-3.5, GPT-4, PaLM-2, and Claude-2.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-qw.googleusercontent.com\/docsz\/AD_4nXe7khANCfvb-N3FdI_exta_XQ8-HZcf_dZnD4LrkxVkYIHz8ucdAhcmbZrXH5A_RFeZ4LYSsNjH6ghyUK07igdPSCNDNK45bXLkiwMuo5NHTNOhArqOTZ8uXO0yCkBGuh9F9t0XeQ?key=Ts_QCblkZdO_hiG7UYVswhiE\" alt=\"Study Finds Politeness with AI Models is Futile\"\/><figcaption class=\"wp-element-caption\">Performance growth depending on the level of politeness. Data: study.<\/figcaption><\/figure>\n<p>David Acosta, Director of AI at Arbo AI, <a href=\"https:\/\/decrypt.co\/317176\/polite-chatgpt-pointless-new-research\">noted<\/a> that discrepancies in results are due to the overly simplified model from George Washington University.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;Contradictory results on politeness and AI performance are generally due to cultural differences in training data, nuances in task-specific prompt design, and contextual interpretations of politeness, necessitating cross-cultural experiments and task-adapted evaluation systems for clarification,&#8221; he commented.<\/p>\n<\/blockquote>\n<p>The team behind the new work acknowledged that their model is &#8220;intentionally simplified&#8221; compared to commercial systems like ChatGPT. However, they believe that applying the approach to more complex neural networks will yield the same result.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;Whether an AI response is inadequate depends on the training of the <span data-descr=\"large language model\" class=\"old_tooltip\">LLM<\/span> that forms the <span data-descr=\"a way of representing words, phrases, or other data as vectors of numbers that can be processed by neural networks\" class=\"old_tooltip\">embeddings<\/span> of tokens, and the content of the tokens in the query\u2014not on whether we were polite to it or not,&#8221; the study states.<\/p>\n<\/blockquote>\n<p>In April, OpenAI CEO Sam Altman <a href=\"https:\/\/u1f987.com\/en\/news\/openai-invests-millions-in-polite-user-interactions\">stated<\/a> that the company spent tens of millions of dollars on responses to users who wrote &#8220;please&#8221; and &#8220;thank you.&#8221; <\/p>\n","protected":false},"excerpt":{"rendered":"<p>A new study by researchers from George Washington University reveals that politeness towards AI models is a waste of computational resources. Adding words like &#8220;please&#8221; and &#8220;thank you&#8221; to prompts has a negligible impact on the subsequent quality of chatbot responses. Experts found that polite language is generally &#8220;orthogonal to the substantive good and bad [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":23582,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"","news_style_id":"","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,167],"class_list":["post-23583","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-research"],"aioseo_notices":[],"amp_enabled":true,"views":"35","promo_type":"","layout_type":"","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/23583","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/comments?post=23583"}],"version-history":[{"count":0,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/23583\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media\/23582"}],"wp:attachment":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media?parent=23583"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/categories?post=23583"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/tags?post=23583"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}