{"id":17875,"date":"2024-10-18T12:24:55","date_gmt":"2024-10-18T09:24:55","guid":{"rendered":"https:\/\/forklog.com\/en\/researchers-compel-ai-robots-to-harm-humans\/"},"modified":"2024-10-18T12:24:55","modified_gmt":"2024-10-18T09:24:55","slug":"researchers-compel-ai-robots-to-harm-humans","status":"publish","type":"post","link":"https:\/\/u1f987.com\/en\/researchers-compel-ai-robots-to-harm-humans\/","title":{"rendered":"Researchers Compel AI Robots to Harm Humans"},"content":{"rendered":"<p>Experts have hacked AI robots, forcing them to perform actions prohibited by safety protocols and ethical standards, such as detonating bombs. This is detailed in a <a href=\"https:\/\/robopair.org\/files\/research\/robopair.pdf\">Penn Engineering<\/a> article.<\/p>\n<p>Researchers from the University of Pennsylvania&#8217;s School of Engineering described how their RoboPAIR algorithm managed to bypass safety protocols on three AI-driven robotic systems.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">Chatbots like ChatGPT can be jailbroken to output harmful text. But what about robots? Can AI-controlled robots be jailbroken to perform harmful actions in the real world?<\/p>\n<p>Our new paper finds that jailbreaking AI-controlled robots isn&#8217;t just possible.<\/p>\n<p>It&#8217;s alarmingly easy. <a href=\"https:\/\/t.co\/GzG4OvAO2M\">pic.twitter.com\/GzG4OvAO2M<\/a><\/p>\n<p>\u2014 Alex Robey (@AlexRobey23) <a href=\"https:\/\/twitter.com\/AlexRobey23\/status\/1846914890029748272?ref_src=twsrc%5Etfw\">October 17, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;Our new paper states that <span data-descr=\"the process of removing software restrictions\" class=\"old_tooltip\">jailbreaking<\/span> AI-controlled robots isn&#8217;t just possible. 
It&#8217;s alarmingly easy,&#8221; noted one of the authors, Alex Robey.<\/p>\n<\/blockquote>\n<p>Under normal conditions, AI-controlled bots refuse to carry out harmful orders. For instance, they would not knock shelves onto people.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;Our findings have demonstrated for the first time that the risks of hacked <span data-descr=\"large language models\" class=\"old_tooltip\">LLMs<\/span> extend far beyond text generation, given the clear potential for hacked robots to cause physical harm in the real world,&#8221; the researchers write.<\/p>\n<\/blockquote>\n<p>According to the researchers, RoboPAIR compelled the robots to perform harmful actions with a &#8220;100% success rate.&#8221; The robots carried out a variety of tasks:<\/p>\n<ul class=\"wp-block-list\">\n<li>The self-driving bot Dolphin was made to collide with a bus, barriers, and pedestrians, and to run red lights and stop signs;<\/li>\n<li>Another robot, Jackal, identified the most dangerous spot to detonate a bomb, blocked an emergency exit, toppled warehouse shelves onto a person, and collided with people indoors.<\/li>\n<\/ul>\n<p>Robey <a href=\"https:\/\/ai.seas.upenn.edu\/news\/penn-engineering-research-discovers-critical-vulnerabilities-in-ai-enabled-robots-to-increase-safety-and-security\/\">emphasized<\/a> that simple software fixes are insufficient to eliminate the vulnerability. He called for a reassessment of how AI is integrated into physical robots.<\/p>\n<p>Earlier in October, experts highlighted the use of AI by malicious actors to bypass stringent <span data-descr=\"Know Your Customer\" class=\"old_tooltip\">KYC<\/span> measures on cryptocurrency exchanges.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Experts have hacked AI robots, forcing them to perform actions prohibited by safety protocols and ethical standards, such as detonating bombs. This is detailed in a Penn Engineering article. 
Researchers from the University of Pennsylvania&#8217;s School of Engineering described how their RoboPAIR algorithm managed to bypass safety protocols on three AI-driven robotic systems. Chatbots like [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":17874,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"","news_style_id":"","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1111,652],"class_list":["post-17875","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-cybersecurity","tag-robots"],"aioseo_notices":[],"amp_enabled":true,"views":"19","promo_type":"","layout_type":"","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/17875","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/comments?post=17875"}],"version-history":[{"count":0,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/17875\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media\/17874"}],"wp:attachment":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media?parent=17875"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/categories?post=17875"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/tags?post=17875"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}