{"id":59126,"date":"2022-03-23T15:00:04","date_gmt":"2022-03-23T13:00:04","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=59126"},"modified":"2025-09-04T20:16:16","modified_gmt":"2025-09-04T17:16:16","slug":"nvidia-unveils-a-new-generation-of-server-ai-chips","status":"publish","type":"post","link":"https:\/\/u1f987.com\/en\/nvidia-unveils-a-new-generation-of-server-ai-chips\/","title":{"rendered":"Nvidia unveils a new generation of server AI chips"},"content":{"rendered":"<p>At the annual GTC 2022 conference, Nvidia <a href=\"https:\/\/blogs.nvidia.com\/blog\/2022\/03\/22\/ai-factories-hopper-h100-nvidia-ceo-jensen-huang\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">announced<\/a> several new chips and technologies designed to accelerate artificial intelligence computations.<\/p>\n<p>The corporation <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-announces-hopper-architecture-the-next-generation-of-accelerated-computing\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">unveiled<\/a> the Hopper architecture for its next generation of GPUs, along with the H100 chip built on it and designed for machine-learning workloads.<\/p>\n<p>The device is manufactured on a 4-nm process and contains 80 billion transistors. 
It is the company&#8217;s first GPU to support PCIe Gen5 and to use HBM3 memory, delivering memory bandwidth of 3 TB\/s.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/u1f987.com\/wp-content\/uploads\/nvidia-hopper-architecture-h100-sxm-1024x576.jpg\" alt=\"Nvidia unveils a new generation of server AI chips\" class=\"wp-image-168364\" srcset=\"https:\/\/u1f987.com\/wp-content\/uploads\/nvidia-hopper-architecture-h100-sxm-1024x576.jpg 1024w, https:\/\/u1f987.com\/wp-content\/uploads\/nvidia-hopper-architecture-h100-sxm-300x169.jpg 300w, https:\/\/u1f987.com\/wp-content\/uploads\/nvidia-hopper-architecture-h100-sxm-768x432.jpg 768w, https:\/\/u1f987.com\/wp-content\/uploads\/nvidia-hopper-architecture-h100-sxm-1536x864.jpg 1536w, https:\/\/u1f987.com\/wp-content\/uploads\/nvidia-hopper-architecture-h100-sxm.jpg 1600w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Hopper architecture H100 GPU. 
Data: Nvidia.<\/figcaption><\/figure>\n<p>The company stated that the H100 is three times faster than its predecessor, the A100, for FP16, FP32 and FP64 computations, and six times faster for 8-bit floating-point operations.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;For training giant <a href=\"https:\/\/u1f987.com\/en\/news\/what-are-transformers-machine-learning\">transformers<\/a>, the H100 is nine times more productive, cutting training time from weeks to just a few days,&#8221; said Paresh Kharya, Nvidia&#8217;s Senior Director of Product Management.<\/p>\n<\/blockquote>\n<p>The H100 will go on sale in the third quarter of 2022.<\/p>\n<p>Nvidia also <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-introduces-grace-cpu-superchip\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">unveiled<\/a> the Grace CPU Superchip, based on the ARMv9 architecture. It comprises two Grace chips linked by NVLink, delivering data transfer speeds of up to 900 GB\/s.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/u1f987.com\/wp-content\/uploads\/grace-cpu-superchip-1024x576.png\" alt=\"Nvidia unveils a new generation of server AI chips\" class=\"wp-image-168365\" srcset=\"https:\/\/u1f987.com\/wp-content\/uploads\/grace-cpu-superchip-1024x576.png 1024w, https:\/\/u1f987.com\/wp-content\/uploads\/grace-cpu-superchip-300x169.png 300w, https:\/\/u1f987.com\/wp-content\/uploads\/grace-cpu-superchip-768x432.png 768w, https:\/\/u1f987.com\/wp-content\/uploads\/grace-cpu-superchip-1536x864.png 1536w, https:\/\/u1f987.com\/wp-content\/uploads\/grace-cpu-superchip.png 1600w\" 
sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Grace CPU Superchip. Data: Nvidia.<\/figcaption><\/figure>\n<p>The Grace CPU Superchip contains 144 ARM cores and consumes around 500 W. The chip supports LPDDR5x memory, delivering up to 1 TB\/s of bandwidth.<\/p>\n<p>The Grace CPU Superchip will be available in the first half of 2023.<\/p>\n<p>In addition to hardware, Nvidia updated a range of AI software services, including the Maxine SDK for audio and video enhancement and the Riva toolkit for building speech and natural-language processing systems.<\/p>\n<p>The company also announced a new AI supercomputer, Eos. The installation will be equipped with 4,600 H100 GPUs, delivering 18.4 exaflops of AI performance. The system will be used solely for internal company research.<\/p>\n<p>Earlier, under regulatory pressure from the US, the EU and the UK, Nvidia abandoned its $40 billion acquisition of ARM.<\/p>\n<p>In the same month, the chipmaker reported a <a href=\"https:\/\/u1f987.com\/en\/news\/nvidia-cmp-chip-sales-fall-77-in-q4\">decline in revenue from GPUs<\/a> used for cryptocurrency mining.<\/p>\n<p>In January, Meta announced the <a href=\"https:\/\/u1f987.com\/en\/news\/meta-to-build-the-worlds-largest-ai-supercomputer\">creation of the world&#8217;s largest AI supercomputer<\/a> based on Nvidia processors.<\/p>\n<p>Subscribe to ForkLog News on Telegram: <a href=\"https:\/\/t.me\/forklogAI\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">ForkLog AI<\/a> \u2014 all the AI news!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>At the annual GTC 2022 conference, Nvidia announced several new chips and technologies designed to accelerate AI algorithm 
computations.<\/p>\n","protected":false},"author":1,"featured_media":59127,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1295,1294],"class_list":["post-59126","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-chips","tag-nvidia"],"aioseo_notices":[],"amp_enabled":true,"views":"71","promo_type":"1","layout_type":"1","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/59126","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/comments?post=59126"}],"version-history":[{"count":1,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/59126\/revisions"}],"predecessor-version":[{"id":59128,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/59126\/revisions\/59128"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media\/59127"}],"wp:attachment":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media?parent=59126"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/categories?post=59126"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/tags?post=59126"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}