{"id":75585,"date":"2023-03-15T13:01:24","date_gmt":"2023-03-15T11:01:24","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=75585"},"modified":"2025-09-10T12:30:46","modified_gmt":"2025-09-10T09:30:46","slug":"openai-unveils-gpt-4-a-large-multimodal-model","status":"publish","type":"post","link":"https:\/\/u1f987.com\/en\/openai-unveils-gpt-4-a-large-multimodal-model\/","title":{"rendered":"OpenAI unveils GPT-4, a large multimodal model"},"content":{"rendered":"<p>OpenAI unveiled a large <span data-descr=\"models capable of taking text, images, audio, and video content as input data\" class=\"old_tooltip\">multimodal<\/span> model GPT-4.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">Announcing GPT-4, a large multimodal model, with our best-ever results on capabilities and alignment: <a href=\"https:\/\/t.co\/TwLFssyALF\">https:\/\/t.co\/TwLFssyALF<\/a> <a href=\"https:\/\/t.co\/lYWwPjZbSg\">pic.twitter.com\/lYWwPjZbSg<\/a><\/p>\n<p>\u2014 OpenAI (@OpenAI) <a href=\"https:\/\/twitter.com\/OpenAI\/status\/1635687373060317185?ref_src=twsrc%5Etfw\">March 14, 2023<\/a><\/p><\/blockquote>\n<p> <script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>According to the announcement, GPT-4 can solve \u201ccomplex problems with greater accuracy thanks to its broader general knowledge and capabilities.\u201d<\/p>\n<p>According to the developers, the model can assume a given role at the user&#8217;s request. For example, you can ask it to become a lawyer or a tutor. 
In that case, GPT-4 will process queries related to a specific field of knowledge more accurately.<\/p>\n<p>In a demonstration <a href=\"https:\/\/www.youtube.com\/watch?v=outcGtbnMuQ\" target=\"_blank\" rel=\"noopener nofollow\" title=\"video\">video<\/a>, OpenAI President Greg Brockman showed how to teach the service to quickly answer tax-related questions.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cThis model is so good at mental arithmetic. It has broad and flexible capabilities,\u201d he said.<\/p>\n<\/blockquote>\n<p>OpenAI added that compared with GPT-3.5, the new algorithm is more reliable, more creative, and capable of handling nuanced instructions.<\/p>\n<p>Compared with its predecessor, GPT-4 generates substantially longer texts: 25,000 words versus 3,000.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/u1f987.com\/wp-content\/uploads\/Znimok-ekrana-58-1024x576.png\" alt=\"Word limit set in ChatGPT and GPT-4\" class=\"wp-image-200636\" srcset=\"https:\/\/u1f987.com\/wp-content\/uploads\/Znimok-ekrana-58-1024x576.png 1024w, https:\/\/u1f987.com\/wp-content\/uploads\/Znimok-ekrana-58-300x169.png 300w, https:\/\/u1f987.com\/wp-content\/uploads\/Znimok-ekrana-58-768x432.png 768w, https:\/\/u1f987.com\/wp-content\/uploads\/Znimok-ekrana-58-1536x864.png 1536w, https:\/\/u1f987.com\/wp-content\/uploads\/Znimok-ekrana-58.png 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Word limit set in ChatGPT and GPT-4. 
Data: OpenAI.<\/figcaption><\/figure>\n<p>A separate video states that the model possesses a number of capabilities that the previous version did not, including the ability to \u201creason\u201d about images uploaded by users.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/u1f987.com\/wp-content\/uploads\/Znimok-ekrana-2023-03-15-115407.webp\" alt=\"GPT-4 describes what it sees in the picture\" class=\"wp-image-200637\"\/><figcaption>GPT-4 describes what it sees in the picture. Data: OpenAI.<\/figcaption><\/figure>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cGPT-4 is a large multimodal model which, although less capable than humans in many real-world scenarios, demonstrates human-level performance on a range of professional and academic tests,\u201d the announcement says.<\/p>\n<\/blockquote>\n<p>According to OpenAI employee Andrej Karpathy, image processing means that AI can \u201csee.\u201d<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">GPT-4 is out!!<br \/>\u2014 it is incredible<br \/>\u2014 it is multimodal (can see) <br \/>\u2014 it is on trend w.r.t. scaling laws<br \/>\u2014 it is deployed on ChatGPT Plus: <a href=\"https:\/\/t.co\/WptpLYHSCO\">https:\/\/t.co\/WptpLYHSCO<\/a><br \/>\u2014 
watch the developer demo livestream at 1pm: <a href=\"https:\/\/t.co\/drEkxQMC9H\">https:\/\/t.co\/drEkxQMC9H<\/a> <a href=\"https:\/\/t.co\/WUYzwyxOqa\">https:\/\/t.co\/WUYzwyxOqa<\/a><\/p>\n<p>\u2014 Andrej Karpathy (@karpathy) <a href=\"https:\/\/twitter.com\/karpathy\/status\/1635691329996062725?ref_src=twsrc%5Etfw\">March 14, 2023<\/a><\/p><\/blockquote>\n<p> <script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>In addition, OpenAI released a <a href=\"https:\/\/cdn.openai.com\/papers\/gpt-4.pdf\" target=\"_blank\" rel=\"noopener nofollow\" title=\"research paper\">research paper<\/a> on GPT-4. However, the developers decided not to disclose details about the model size, the training run, or the data used in the process.<\/p>\n<p>The technology is available to ChatGPT Plus subscribers with some restrictions. The company has also opened a waitlist for those wishing to use the API of the new model.<\/p>\n<p>OpenAI said it is already collaborating with some companies to integrate the algorithm into their applications, including Duolingo, Stripe and Khan Academy.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"550\" src=\"https:\/\/u1f987.com\/wp-content\/uploads\/duolingo-max-1024x550.webp\" alt=\"Duolingo app with GPT-4 integration\" class=\"wp-image-200638\" srcset=\"https:\/\/u1f987.com\/wp-content\/uploads\/duolingo-max-1024x550.webp 1024w, https:\/\/u1f987.com\/wp-content\/uploads\/duolingo-max-300x161.webp 300w, https:\/\/u1f987.com\/wp-content\/uploads\/duolingo-max-768x413.webp 768w, https:\/\/u1f987.com\/wp-content\/uploads\/duolingo-max.webp 1390w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Duolingo app with GPT-4 integration. 
Data: Duolingo.<\/figcaption><\/figure>\n<p>After the announcement, Microsoft confirmed rumors that the new Bing runs on a version of GPT-4 optimized for search.<\/p>\n<p>Back in November 2022, OpenAI <a href=\"https:\/\/u1f987.com\/en\/news\/openai-unveils-chatgpt-a-chatbot-designed-for-dialogue\">introduced ChatGPT<\/a>. Within two months, the service had become the fastest-growing in history, reaching 100 million active users.<\/p>\n<p>In February 2023, Microsoft released an updated <a href=\"https:\/\/u1f987.com\/en\/news\/microsoft-unveils-updated-bing-powered-by-chatgpt\">Bing based on ChatGPT<\/a>.<\/p>\n<p>In March, Bing&#8217;s daily active users <a href=\"https:\/\/u1f987.com\/en\/news\/bings-daily-active-users-top-100-million\">surpassed<\/a> 100 million.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI unveiled a large multimodal model, GPT-4.<\/p>\n","protected":false},"author":1,"featured_media":75586,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1201,1190],"class_list":["post-75585","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-chatbots","tag-openai"],"aioseo_notices":[],"amp_enabled":true,"views":"32","promo_type":"1","layout_type":"1","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/75585","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp
\/v2\/comments?post=75585"}],"version-history":[{"count":1,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/75585\/revisions"}],"predecessor-version":[{"id":75587,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/75585\/revisions\/75587"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media\/75586"}],"wp:attachment":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media?parent=75585"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/categories?post=75585"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/tags?post=75585"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}