{"id":79089,"date":"2023-05-22T12:36:31","date_gmt":"2023-05-22T09:36:31","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=79089"},"modified":"2025-09-11T09:02:45","modified_gmt":"2025-09-11T06:02:45","slug":"g7-leaders-agree-to-regulate-risky-ai","status":"publish","type":"post","link":"https:\/\/u1f987.com\/en\/g7-leaders-agree-to-regulate-risky-ai\/","title":{"rendered":"G7 leaders agree to regulate risky AI."},"content":{"rendered":"<p>Leaders of the Group of Seven (G7) have agreed on the need to regulate generative AI, expressing concern about its \u201cdisruptive potential,\u201d Bloomberg reports.<\/p>\n<p>Under the so-called Hiroshima Process, governments plan to hold intergovernmental talks. The first results of the discussions are expected by the end of the year.<\/p>\n<p>Japanese Prime Minister Fumio Kishida believes that safe <span data-descr=\"cross-border data transfer or exchange of data between countries or regions\" class=\"old_tooltip\">cross-border data flow<\/span> can enable human-centered AI. He pledged to provide financial support for such efforts.<\/p>\n<p>The push for tighter regulation follows the rapid rise of tools such as ChatGPT. Concerns center on AI\u2019s ability to generate realistic text and images that malicious actors could use in disinformation campaigns.<\/p>\n<p>British Prime Minister Rishi Sunak intends to develop a policy to manage AI risks and benefits. To this end, he <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2023-05-18\/uk-plans-meeting-on-ai-risks-with-openai-deepmind-bosses\" target=\"_blank\" rel=\"noopener nofollow\" title=\"\">invited<\/a> OpenAI chief Sam Altman and representatives of other companies to the United Kingdom.<\/p>\n<p>The European Union <a href=\"https:\/\/u1f987.com\/en\/news\/european-parliament-backs-amendments-to-ai-bill\">neared<\/a> the adoption of the AI Act. 
The document would require service providers to inform users about the purposes for which algorithms are used, and would prohibit facial recognition in public places.<\/p>\n<p>Japanese authorities have backed a softer approach to AI regulation.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cIt is important that the government take tough action in accordance with a strict law to address serious problems,\u201d said Hiroki Habuka, a senior fellow at the Wadhwani Center for AI and Advanced Technologies.<\/p>\n<\/blockquote>\n<p>However, he says, the legislation should not be too detailed. Otherwise, the rules risk lagging behind changes in technology.<\/p>\n<p>Habuka also believes that establishing an international standard for regulating generative AI will be difficult at this stage, as G7 leaders disagree over which values are deemed acceptable in society.<\/p>\n<p>Georgetown University senior fellow Keiko Esinagi says it is essential to involve as many countries as possible in the discussions, including low-income countries.<\/p>\n<p>Earlier in May, Sunak promised <a href=\"https:\/\/u1f987.com\/en\/news\/rishi-sunak-to-raise-ai-risks-at-g7-summit\">to raise the issue of AI regulation<\/a> at the G7 summit.<\/p>\n<p>In the same month, Altman <a href=\"https:\/\/u1f987.com\/en\/news\/sam-altman-urges-the-us-to-regulate-artificial-intelligence\">testified before the U.S. Congress<\/a>. 
He urged lawmakers to enact rules regulating the technology.<\/p>\n<p>Earlier in May, Vice President Kamala Harris <a href=\"https:\/\/u1f987.com\/en\/news\/white-house-weighs-ai-risks-with-tech-giants\">discussed with the heads of tech giants<\/a> the risks associated with artificial intelligence.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Leaders of the Group of Seven (G7) have agreed on the need to regulate generative AI, expressing concern about its \u201cdisruptive potential.\u201d<\/p>\n","protected":false},"author":1,"featured_media":79090,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1972,36],"class_list":["post-79089","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-g7","tag-regulation"],"aioseo_notices":[],"amp_enabled":true,"views":"22","promo_type":"1","layout_type":"1","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/79089","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/comments?post=79089"}],"version-history":[{"count":1,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/79089\/revisions"}],"predecessor-version":[{"id":79091,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/79089\/revisions\/79091"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media\/79090"}],"wp:attachment":[{"href"
:"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media?parent=79089"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/categories?post=79089"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/tags?post=79089"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}