{"id":60867,"date":"2022-05-02T14:30:06","date_gmt":"2022-05-02T11:30:06","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=60867"},"modified":"2025-09-05T08:50:20","modified_gmt":"2025-09-05T05:50:20","slug":"deepmind-unveils-80-billion-parameter-visual-language-model","status":"publish","type":"post","link":"https:\/\/u1f987.com\/en\/deepmind-unveils-80-billion-parameter-visual-language-model\/","title":{"rendered":"DeepMind unveils 80-billion-parameter visual language model"},"content":{"rendered":"<p>The AI laboratory DeepMind <a href=\\\"https:\/\/www.deepmind.com\/blog\/tackling-multiple-tasks-with-a-single-visual-language-model\\\" target=\\\"_blank\\\" rel=\\\"noreferrer noopener nofollow\\\">developed<\/a> the Flamingo family of models, capable of handling a larger workload with less costly and labour-intensive training.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">Introducing Flamingo \ud83e\udda9: a generalist visual language model that can rapidly adapt its behaviour given just a handful of examples. Out of the box, it&#8217;s also capable of rich visual dialog. <\/p>\n<p>Read more: <a href=\\\"https:\/\/t.co\/xEzqTizoJQ\\\">https:\/\/t.co\/xEzqTizoJQ<\/a> 1\/ <a href=\\\"https:\/\/t.co\/GjlnDzbyOQ\\\">pic.twitter.com\/GjlnDzbyOQ<\/a><\/p>\n<p>\u2014 DeepMind (@DeepMind) <a href=\\\"https:\/\/twitter.com\/DeepMind\/status\/1519686445258231811?ref_src=twsrc%5Etfw\\\">April 28, 2022<\/a><\/p><\/blockquote>\n<p> <script async src=\\\"https:\/\/platform.twitter.com\/widgets.js\\\" charset=\\\"utf-8\\\"><\/script><\/p>\n<p>The model is designed to combine text and image inputs to produce a text-only answer.<\/p>\n<p>Flamingo was trained on a dataset created for multimodal machine-learning research. The dataset consists of 185 million images and 182 GB of text sourced from the public internet.<\/p>\n<p>One of Flamingo&#8217;s components is the pre-trained language model Chinchilla LM with 70 billion parameters. 
DeepMind merged the algorithm with elements of visual learning. The engineers also added &#8216;intermediate components of the new architecture&#8217; that keep the pre-trained models isolated and frozen, producing the 80\u2011billion-parameter Flamingo VLM.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cA single Flamingo model can achieve top results across a wide range of tasks, competing with approaches that require task-specific fine-tuning on larger amounts of data,\u201d the developers said.<\/p>\n<\/blockquote>\n<p>According to the organisation&#8217;s representatives, Flamingo outperforms previous multi-step training approaches. The model is also more effective than fine-tuned algorithms that rely on larger amounts of data.<\/p>\n<p>In the long term, Flamingo could reduce energy consumption during AI training and lessen the need for high-performance hardware. However, the company did not disclose details of how it achieved such results.<\/p>\n<p>The developers emphasised that Flamingo can be quickly adapted to resource-constrained settings and to low-resource tasks such as assessing AI bias.<\/p>\n<p>Earlier in April, DeepMind unveiled the 70\u2011billion\u2011parameter language model Chinchilla.<\/p>\n<p>In February, the British AI lab <a href=\"https:\/\/u1f987.com\/en\/news\/deepmind-develops-ai-for-automatic-code-generation\">demonstrated AlphaCode<\/a>, which can write code on its own.<\/p>\n<p>In December 2021, DeepMind <a href=\"https:\/\/u1f987.com\/en\/news\/deepmind-unveils-a-280-billion-parameter-language-model\">developed the large language model Gopher<\/a>, containing 280 billion parameters.<\/p>\n<p>Subscribe to ForkLog&#8217;s news on Telegram: <a href=\"https:\/\/t.me\/forklogAI\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">ForkLog AI<\/a> \u2014 all the news from the world of AI!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The AI lab DeepMind has developed the Flamingo 
family of models, capable of handling a larger workload with less costly and labour-intensive training.<\/p>\n","protected":false},"author":1,"featured_media":60868,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1474],"class_list":["post-60867","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-deepmind"],"aioseo_notices":[],"amp_enabled":true,"views":"25","promo_type":"1","layout_type":"1","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/60867","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/comments?post=60867"}],"version-history":[{"count":1,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/60867\/revisions"}],"predecessor-version":[{"id":60869,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/60867\/revisions\/60869"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media\/60868"}],"wp:attachment":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media?parent=60867"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/categories?post=60867"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/tags?post=60867"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}