{"id":72101,"date":"2022-12-29T14:05:53","date_gmt":"2022-12-29T12:05:53","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=72101"},"modified":"2025-09-08T12:54:03","modified_gmt":"2025-09-08T09:54:03","slug":"study-finds-ai-code-generators-create-security-vulnerabilities","status":"publish","type":"post","link":"https:\/\/u1f987.com\/en\/study-finds-ai-code-generators-create-security-vulnerabilities\/","title":{"rendered":"Study finds AI code generators create security vulnerabilities"},"content":{"rendered":"<p>A group of Stanford researchers found that developers who use AI-based code-generation systems are more likely to introduce security vulnerabilities, according to <a href=\"https:\/\/techcrunch.com\/2022\/12\/28\/code-generating-ai-can-introduce-security-vulnerabilities-study-finds\/\" target=\"_blank\" rel=\"noopener nofollow\" title=\"\">TechCrunch<\/a>.<\/p>\n<p>The researchers focused primarily on Codex, <a href=\"https:\/\/u1f987.com\/en\/news\/openai-unveils-codex-an-ai-tool-for-automatic-code-generation\">unveiled<\/a> by OpenAI in August 2021. They recruited 47 developers of varying skill levels to complete security-related programming tasks in several languages, including Python, JavaScript, and C.<\/p>\n<p>According to the study, participants who used Codex were more likely than the control group to write faulty and &#8216;unsafe&#8217; code. Programmers using AI also expressed greater confidence in their solutions.<\/p>\n<p>Experts say that developers without adequate cybersecurity knowledge should use such tools with caution.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cThose who use them to accelerate tasks in which they are already skilled should carefully recheck the results and the context,\u201d the researchers added.<\/p>\n<\/blockquote>\n<p>Megha Srivastava, co-author of the study, stressed that the results are not a condemnation of Codex and other code-generation systems. 
In her words, such tools are useful for tasks that do not carry high risk.<\/p>\n<p>The researchers proposed several ways to make AI code-generation systems safer, including a mechanism for refining prompts, which they likened to a supervisor reviewing draft code.<\/p>\n<p>They also urged developers of cryptographic libraries to make their default settings secure, as the parameters currently suggested by AI systems are not always free from exploits.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cOur aim is to make a broader statement about the use of code-generation models. Further work is needed to study these problems and to develop methods to address them,\u201d said co-author Neil Perry.<\/p>\n<\/blockquote>\n<p>He said that introducing security vulnerabilities is not the only drawback of AI code-generation systems, pointing to potential copyright violations arising from the use of publicly available code to train Codex.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201c[For these reasons] we largely express caution about using these tools to replace teaching novice developers reliable coding practices,\u201d added Srivastava.<\/p>\n<\/blockquote>\n<p>In May, a developer found that Copilot <a href=\"https:\/\/u1f987.com\/en\/news\/ai-leaks-private-keys-from-crypto-wallets\">\u201cleaked\u201d<\/a> private keys from crypto wallets.<\/p>\n<p>In October, a group of programmers announced <a href=\"https:\/\/u1f987.com\/en\/news\/developers-to-sue-microsoft-over-training-ai-with-their-code\">the filing of a class-action lawsuit against Microsoft<\/a> over the use of their code to train AI.<\/p>\n<p>In July 2021, Copilot was suspected of copying copyright-protected fragments of open-source software.<\/p>\n<p>Subscribe to ForkLog&#8217;s Telegram news: <a href=\"https:\/\/t.me\/forklogAI\" target=\"_blank\" rel=\"noopener nofollow\" title=\"\">ForkLog 
AI<\/a> \u2014 all the news from the world of AI!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A group of researchers from Stanford said that using AI-based code-generation systems is more likely to create security vulnerabilities.<\/p>\n","protected":false},"author":1,"featured_media":72102,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1111,167],"class_list":["post-72101","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-cybersecurity","tag-research"],"aioseo_notices":[],"amp_enabled":true,"views":"8","promo_type":"1","layout_type":"1","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/72101","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/comments?post=72101"}],"version-history":[{"count":1,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/72101\/revisions"}],"predecessor-version":[{"id":72103,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/72101\/revisions\/72103"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media\/72102"}],"wp:attachment":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media?parent=72101"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/categories?post=72101"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/
u1f987.com\/en\/wp-json\/wp\/v2\/tags?post=72101"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}