{"id":47579,"date":"2021-08-12T14:30:50","date_gmt":"2021-08-12T11:30:50","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=47579"},"modified":"2025-09-01T22:03:40","modified_gmt":"2025-09-01T19:03:40","slug":"israeli-startup-develops-an-affordable-alternative-to-gpt-3","status":"publish","type":"post","link":"https:\/\/u1f987.com\/en\/israeli-startup-develops-an-affordable-alternative-to-gpt-3\/","title":{"rendered":"Israeli startup develops an affordable alternative to GPT-3"},"content":{"rendered":"<p>Israeli startup AI21 Labs has developed the language model Jurassic-1 Jumbo, which surpasses the rival GPT-3 in both parameter count and vocabulary size.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">Today we\u2019re launching AI21 Studio, a platform where you can instantly access our state-of-the-art language models to build your own applications \u2014 including a 178B parameters model, Jurassic-1 Jumbo. We can\u2019t wait to see what you create! <a href=\"https:\/\/t.co\/NhCnF1Emcy\">https:\/\/t.co\/NhCnF1Emcy<\/a><\/p>\n<p>\u2014 AI21 Labs (@AI21Labs) <a href=\"https:\/\/twitter.com\/AI21Labs\/status\/1425412216245952512?ref_src=twsrc%5Etfw\">August 11, 2021<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>The company said that the largest version of the model contains 178 billion parameters, 3 billion more than GPT-3, though it is still smaller than Huawei\u2019s PanGu-Alpha and Wu Dao 2.0.<\/p>\n<p>Jurassic-1 can recognize 250,000 lexical elements, including expressions, words, and phrases. According to the developers, this is five times more than comparable systems.<\/p>\n<p>The Jurassic-1 Jumbo vocabulary is also among the first to cover &#8216;multiword&#8217; elements such as &#8216;Empire State Building&#8217;. 
This means the model can represent concepts that are meaningful to people with richer semantics, the developers said.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cExpanding the boundaries of language-based artificial intelligence requires more than simple pattern recognition offered by current language models,\u201d <a href=\"https:\/\/venturebeat.com\/2021\/08\/11\/ai21-labs-trains-a-massive-language-model-to-rival-openais-gpt-3\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">said<\/a> Yoav Shoham, CEO of AI21 Labs.<\/p>\n<\/blockquote>\n<p>The company also said that its goal is to democratize language models for anyone who wants to use them.<\/p>\n<p>The model is currently in open beta and is available to developers via an API or an interactive web environment.<\/p>\n<p>As reported in August,<a href=\"https:\/\/u1f987.com\/en\/news\/openai-unveils-codex-an-ai-tool-for-automatic-code-generation\"> OpenAI introduced Codex, a platform for automatically generating code from textual prompts<\/a>, based on GPT-3.<\/p>\n<p>In early August,<a href=\"https:\/\/u1f987.com\/en\/news\/an-elevator-mistook-a-corgi-for-a-scooter-an-ai-algorithm-was-granted-patent%e2%80%91filing-rights-and-other-ai-news\"> Microsoft unveiled the language model MEB with 135 billion parameters<\/a> and integrated it into the Bing search engine.<\/p>\n<p>In June, the Beijing Academy of Artificial Intelligence<a href=\"https:\/\/u1f987.com\/en\/news\/china-develops-a-language-model-ten-times-larger-than-gpt-3\"> presented Wu Dao 2.0, a model ten times larger than GPT-3<\/a>.<\/p>\n<p>In January,<a href=\"https:\/\/u1f987.com\/en\/news\/microsoft-creates-technology-to-talk-to-the-dead-google-unveils-a-gpt-3-rival-and-other-ai-news\"> Google introduced the Switch Transformer language model<\/a>, which contains 1.6 trillion parameters.<\/p>\n<p>Subscribe to ForkLog news on Telegram:<a href=\"https:\/\/t.me\/forklogAI\" 
target=\"_blank\" rel=\"noreferrer noopener\"> ForkLog AI<\/a> \u2014 all the news from the AI world!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Israeli startup AI21 Labs has developed the language model Jurassic-1 Jumbo, which surpassed GPT-3 in the number of parameters and vocabulary size.<\/p>\n","protected":false},"author":1,"featured_media":26216,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1223],"class_list":["post-47579","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-startups"],"aioseo_notices":[],"amp_enabled":true,"views":"25","promo_type":"1","layout_type":"1","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/47579","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/comments?post=47579"}],"version-history":[{"count":1,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/47579\/revisions"}],"predecessor-version":[{"id":47580,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/47579\/revisions\/47580"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media\/26216"}],"wp:attachment":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media?parent=47579"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/categories?post=47579"},{"taxonomy":"post_t
ag","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/tags?post=47579"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}