Google Enhances AI Chips and Launches Agent Platform

Google unveils AI tools and $750M fund for automation.

Google has unveiled a suite of tools designed to help companies automate tasks with AI agents. Alongside the tools, the company launched a $750 million fund and announced new AI processors.

AI Agent Software

At its annual conference in Las Vegas, the company’s cloud division showcased a toolkit for creating AI agents and managing their operations within companies. Among the innovations is a dedicated mailbox where virtual assistants send reports on completed tasks.

The Gemini Enterprise Agent Platform will feature two new modules: Memory Bank and Memory Profile, enabling agents to retain interaction histories with users. Agent Simulation will allow developers to thoroughly test tools before deployment.

Google introduced updates for the Workspace application suite and described a scenario where AI agents radically transform the daily life of an ordinary worker.

The corporation stated that employees could use Gemini Enterprise to create virtual assistants without coding.

The company also announced Projects, a collaborative platform designed for work among employees or with support operators.

The tool integrates information from various sources such as Workspace, Microsoft’s OneDrive, and corporate chats, allowing work to be conducted with the necessary context.

New Fund

Google Cloud announced a $750 million fund aimed at helping consulting firms such as McKinsey, Accenture, and Deloitte implement agentic AI for their clients.

DeepMind will provide select firms with early access to Gemini AI models before their official release.

The capital will be directed towards training engineers, developing AI agents through the corporate platform, and co-funding projects and pre-sales activities.

“Consulting firms are at the heart of some of the largest transformations occurring with clients. They understand the situation and bring unique expertise in specific industries and knowledge of business processes,” said Kevin Ichhpurani, head of Google Cloud’s global partner ecosystem.

New Chips

Google Cloud has introduced a new generation of its proprietary tensor processing units (TPUs), designed to accelerate and reduce the cost of AI computations.

The lineup includes two versions:

  • TPU 8t — designed for AI development;
  • TPU 8i — better suited for inference.

Google holds a strong position among AI chip manufacturers, competing with Nvidia. In recent months, TPUs have seen increased demand in Silicon Valley.

The new processors carry more built-in memory, reducing response latency.

“It’s about ensuring the lowest possible response latency at the lowest possible cost per operation,” noted Mark Lohmeyer, Google’s vice president of compute infrastructure.

AI services are built on systems capable of rapidly processing large data sets, identifying connections and patterns that are then expressed mathematically. The resulting models are then run and served on processors with large amounts of built-in memory.

This approach allows AI responses to be almost instantaneous, as the chip does not need to fetch data from external memory.

TPU 8t units can be clustered in groups of 9,600. At such scales, energy consumption becomes a key factor: the new chips offer 124% greater performance per watt than the previous generation, while TPU 8i offers 117% greater.
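To make the percentage claims concrete: a minimal sketch of what "124% greater performance per watt" implies, assuming the standard reading that "X% greater" means a factor of (1 + X/100) over the baseline. The baseline value here is a hypothetical normalized figure, not a number from the article.

```python
def perf_per_watt(baseline: float, pct_greater: float) -> float:
    """Return new efficiency given an 'X% greater' improvement over a baseline."""
    return baseline * (1 + pct_greater / 100)

# Previous-generation TPU efficiency, normalized to 1.0 (hypothetical baseline).
baseline = 1.0

tpu_8t = perf_per_watt(baseline, 124)  # 124% greater -> 2.24x the baseline
tpu_8i = perf_per_watt(baseline, 117)  # 117% greater -> 2.17x the baseline

print(f"TPU 8t: {tpu_8t:.2f}x baseline efficiency")
print(f"TPU 8i: {tpu_8i:.2f}x baseline efficiency")
```

Under this reading, both chips more than double the previous generation's performance per watt rather than merely improving it by a fraction.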

Back in April, Google discussed with the U.S. Department of Defense the possibility of integrating Gemini into Pentagon systems across all information access categories—from open to top-secret.
