{"id":95873,"date":"2026-04-03T11:06:14","date_gmt":"2026-04-03T08:06:14","guid":{"rendered":"https:\/\/u1f987.com\/en\/?p=95873"},"modified":"2026-04-03T11:10:19","modified_gmt":"2026-04-03T08:10:19","slug":"google-unveils-the-gemma-4-open-model-family","status":"publish","type":"post","link":"https:\/\/u1f987.com\/en\/google-unveils-the-gemma-4-open-model-family\/","title":{"rendered":"Google Unveils the Gemma 4 Open Model Family"},"content":{"rendered":"<p>Google has introduced Gemma 4, a new family of open AI models designed for advanced reasoning and agentic workflows.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">We just released Gemma 4 \u2014 our most intelligent open models to date.<\/p>\n<p>Built from the same world-class research as Gemini 3, Gemma 4 brings breakthrough intelligence directly to your own hardware for advanced reasoning and agentic workflows.<\/p>\n<p>Released under a commercially\u2026 <a href=\"https:\/\/t.co\/W6Tvj9CuHW\">pic.twitter.com\/W6Tvj9CuHW<\/a><\/p>\n<p>\u2014 Google (@Google) <a href=\"https:\/\/twitter.com\/Google\/status\/2039736220834480233?ref_src=twsrc%5Etfw\">April 2, 2026<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script> <\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>\u201cGemma 4 is our most intelligent open model to date, providing an unprecedented level of intelligence per parameter,\u201d the statement reads.<\/em><\/p>\n<\/blockquote>\n<p>Since the launch of the first generation, developers have downloaded Gemma over 400 million times, creating more than 100,000 model variants within the Gemmaverse ecosystem. 
The latest version is built on the same research and technology as the Gemini 3 chatbot.<\/p>\n<h2 class=\"wp-block-heading\">Various Sizes<\/h2>\n<p>The Gemma 4 family includes four versions: Effective 2B (E2B), Effective 4B (E4B), 26B Mixture of Experts (MoE), and 31B Dense.<\/p>\n<p>The compact E2B and E4B, with 2.3 billion and 4.5 billion active parameters respectively, focus on multimodality, low latency, and seamless integration. They can run on a smartphone or an ordinary laptop.<\/p>\n<p>The 26B MoE and the flagship 31B (with 26 billion and 31 billion parameters) require a graphics accelerator such as the Nvidia H100 with 80 GB of memory. These models are aimed at researchers and developers.<\/p>\n<p>The larger models perform well in benchmarks: in the global Arena AI open text model rankings, the flagship 31B ranks third and the 26B sixth. According to the developers, the new lineup outperforms competing models 20 times its size.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/u1f987.com\/wp-content\/uploads\/img-0c09f7d2d084e42d-804812183429868.webp\" alt=\"image\" class=\"wp-image-277916\"\/><figcaption class=\"wp-element-caption\">Source: Google.<\/figcaption><\/figure>\n<h2 class=\"wp-block-heading\">Key Features<\/h2>\n<p>One of Gemma 4&#8217;s main advantages is its advanced reasoning capability: the models can work through complex logic and plan multi-step tasks. 
They show significant progress in mathematics benchmarks and follow instructions accurately.<\/p>\n<p>Other features include:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Agentic workflows<\/strong> \u2014 built-in support for function calls, structured output in <span data-descr=\"JavaScript Object Notation \u2014 a text-based data exchange format based on JavaScript\" class=\"old_tooltip\">JSON<\/span> format, and system instructions enables autonomous assistants that interact with tools and <span data-descr=\"application programming interface\" class=\"old_tooltip\">APIs<\/span>;<\/li>\n<li><strong>Code generation<\/strong> \u2014 Gemma 4 can write high-quality code offline, turning a workstation into a local AI assistant;<\/li>\n<li><strong>Vision and audio<\/strong> \u2014 all models process video and images at variable resolutions, recognize text, and analyze diagrams. E2B and E4B also support speech recognition and understanding;<\/li>\n<li><strong>Extended context window<\/strong> \u2014 the compact versions support 128,000 tokens, while the larger ones support up to 256,000. This is enough to process entire repositories or large documents in a single request;<\/li>\n<li><strong>Multilingualism<\/strong> \u2014 the model family supports more than 140 languages.<\/li>\n<\/ul>\n<p>Gemma 4 is already available in Google AI Studio and Google AI Edge Gallery. It is also supported by popular third-party tools and frameworks, including Hugging Face, vLLM, llama.cpp, MLX, Ollama, NVIDIA NIM, and LM Studio.<\/p>\n<p>The models can be fine-tuned in Google Colab, on Vertex AI, or on local graphics cards. 
For production, deployment is available on Google Cloud, including Cloud Run, GKE, and Sovereign Cloud.<\/p>\n<p>Earlier in April, Google <a href=\"https:\/\/u1f987.com\/en\/news\/google-launches-affordable-ai-video-generator-veo-3-1-lite\">introduced<\/a> a new AI model for video generation \u2014 Veo 3.1 Lite.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google has introduced Gemma 4, a new family of open AI models designed for advanced reasoning and agentic workflows.<\/p>\n","protected":false},"author":1,"featured_media":95874,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"Google unveils Gemma 4, a new family of open AI models for advanced reasoning.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,738],"class_list":["post-95873","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-google"],"aioseo_notices":[],"amp_enabled":true,"views":"69","promo_type":"1","layout_type":"1","short_excerpt":"Google unveils Gemma 4, a new family of open AI models for advanced 
reasoning.","is_update":"","_links":{"self":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/95873","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/comments?post=95873"}],"version-history":[{"count":1,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/95873\/revisions"}],"predecessor-version":[{"id":95875,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/posts\/95873\/revisions\/95875"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media\/95874"}],"wp:attachment":[{"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/media?parent=95873"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/categories?post=95873"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/u1f987.com\/en\/wp-json\/wp\/v2\/tags?post=95873"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}