
Anthropic rolls out Claude Opus 4.7 for advanced development

Anthropic has introduced Claude Opus 4.7—its most capable Opus model to date.

The new release is available to all paid Claude users and via the API—$5 per million input tokens, $25 per million output tokens.
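At those rates, per-request cost is simple arithmetic. A minimal sketch, using the published prices; the token counts in the example are illustrative, not from a real request:

```python
# Cost estimator for the published Opus 4.7 rates:
# $5 per 1M input tokens, $25 per 1M output tokens.
INPUT_RATE = 5.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 25.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 12,000-token prompt with a 3,000-token reply
print(round(request_cost(12_000, 3_000), 3))  # 0.135
```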

Key improvements

Opus 4.7 is strongest on complex tasks. Users are entrusting it with work that previously demanded close supervision, the developers said.

In agentic programming the model outperformed its predecessor by 10%, and in visual data handling by 13%. Gains elsewhere were more modest.

The model’s visual capabilities are markedly expanded: it processes images up to 2,576 pixels on the long edge (around 3.75 megapixels), more than three times the limit of previous Claude versions.

Source: Anthropic
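The long-edge limit can be enforced client-side before upload. A minimal sketch of the dimension arithmetic only; the actual pixel resampling would be done with an imaging library:

```python
LONG_EDGE_LIMIT = 2576  # pixels, per the stated Opus 4.7 limit

def fit_to_limit(width: int, height: int, limit: int = LONG_EDGE_LIMIT):
    """Return (width, height) scaled so the longer side is <= limit.
    Images already within the limit are returned unchanged."""
    long_edge = max(width, height)
    if long_edge <= limit:
        return width, height
    scale = limit / long_edge
    return round(width * scale), round(height * scale)

print(fit_to_limit(4000, 3000))  # (2576, 1932)
```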

Opus 4.7 follows instructions more strictly. Prompts written for older models may yield unexpected results: those models interpreted instructions loosely, whereas the new version takes them literally. Retuning existing prompts is recommended.

The latest Claude can also remember information across sessions—it stores notes in files and can reuse them in each new conversation.
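The file-backed notes idea can be sketched in a few lines. The file path and JSON layout below are assumptions for illustration, not Anthropic's actual storage format:

```python
import json
from pathlib import Path

NOTES_PATH = Path("claude_notes.json")  # hypothetical location

def save_note(key: str, value: str) -> None:
    """Add or update a note in the persistent notes file."""
    notes = json.loads(NOTES_PATH.read_text()) if NOTES_PATH.exists() else {}
    notes[key] = value
    NOTES_PATH.write_text(json.dumps(notes, indent=2))

def load_notes() -> dict:
    """Everything a later conversation could start from."""
    return json.loads(NOTES_PATH.read_text()) if NOTES_PATH.exists() else {}

save_note("preferred_style", "concise, typed Python")
print(load_notes()["preferred_style"])  # concise, typed Python
```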

Anthropic added a new “effort level” xhigh (“extra high”) between high and max. It lets users fine-tune the trade-off between depth of analysis and response speed.

In Claude Code, the default effort level is raised to xhigh across all plans.
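A hypothetical request payload shows where such an effort knob might sit. The field name, model string and level ordering below mirror the article's description (high, then xhigh, then max), not a confirmed API schema:

```python
EFFORT_LEVELS = ["high", "xhigh", "max"]  # assumed ordering from the article

def build_request(prompt: str, effort: str = "xhigh") -> dict:
    """Build an API payload with an explicit effort level."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": "claude-opus-4-7",  # hypothetical model string
        "effort": effort,            # deeper analysis at the cost of latency
        "messages": [{"role": "user", "content": prompt}],
    }

print(build_request("Refactor this module")["effort"])  # xhigh
```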

The release includes a number of smaller additions as well.

Curbing cyber capabilities

Opus 4.7 is weaker than Mythos Preview in cybersecurity. Anthropic intentionally curtailed these capabilities during training. The model includes guardrails that block prohibited and high-risk requests.

“What we learn from real-world deployment of these safeguards will help us progress toward our ultimate goal—a broad release of Mythos-class models,” the startup’s team noted. 

Anthropic invited security professionals who want to use Opus 4.7 for legitimate purposes (vulnerability research, pentesting) to join a new Cyber Verification programme.

For users, the constraints have proved a headache. Some clients complain the model refuses to write code because it “suspects malware in every request”.

OpenAI’s response

OpenAI announced a “major update” to Codex, currently available only on macOS.

The new version can interact with apps on the user’s computer: see the screen, click and type with its own cursor. On Mac, multiple agents can run in parallel without disrupting other software.

Built-in browser, plugins and the development lifecycle

Codex has a built-in browser: you can annotate pages directly, giving the agent precise instructions. This may be useful for front-end and game development.

Developers plan to extend browser-control capabilities beyond the local environment.

Codex also adds support for gpt-image-1.5 for image generation and iteration. Combined with screenshots and code, this enables work on visual concepts, front-end design, mockups and games in a single interface.

OpenAI released more than 90 additional plugins that combine skills, app integrations and MCP servers. These include Atlassian Rovo for JIRA, CircleCI, CodeRabbit, GitLab Issues, Microsoft Suite, Neon by Databricks, Remotion, Render and Superpowers.

Codex adds support for GitHub comments, multiple terminal tabs and SSH connections to remote devboxes (alpha).

Users can open files directly in the sidebar with enhanced previews for PDFs, spreadsheets, slides and documents, and use a new summary pane to track the agent’s plans, sources and artefacts.

Memory and planning

Codex can plan future work and automatically resume long-running tasks—potentially over days or weeks. Teams use automation for everything from code-review requests to tracking tasks in Slack, Gmail and Notion.

Source: OpenAI
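Resuming a long-running task boils down to checkpointing completed steps so a later session can pick up where the last one stopped. An illustrative sketch; the file name and state layout are invented for the example:

```python
import json
from pathlib import Path

STATE = Path("task_state.json")  # hypothetical checkpoint file

def run_steps(steps, state_path=STATE):
    """Run steps in order, skipping any finished in an earlier session."""
    done = json.loads(state_path.read_text()) if state_path.exists() else []
    for step in steps:
        if step in done:
            continue  # already completed before an interruption
        # ... perform the actual work for `step` here ...
        done.append(step)
        state_path.write_text(json.dumps(done))  # checkpoint after each step
    return done

run_steps(["lint", "test", "review"])
```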

Developers improved the assistant’s memory. Codex can retain useful context from past dialogues—personal preferences and corrections.

The model also proactively suggests useful actions, picking up where the user left off. For example, the agent can find open comments in Google Docs, pull context from Slack, Notion and the codebase, and then produce a prioritised action list.

A new GPT model

OpenAI also unveiled a “reasoning” AI model, GPT‑Rosalind, to accelerate drug discovery.

It is named after the British biophysicist Rosalind Franklin, whose research helped reveal the structure of DNA and laid the foundations of modern molecular biology.

OpenAI notes that in the United States developing a new drug takes on average 10–15 years. The outcome is often determined in the earliest research phases. The biggest hurdles involve sifting vast troves of scientific publications and specialised databases.

GPT‑Rosalind aims to serve as a biologist’s assistant: summarising scientific texts, forming hypotheses, designing experiments and processing information. The model is particularly strong on tasks involving proteins, molecules, genes and related biological structures.

On the BixBench benchmark (real-world bioinformatics analysis), GPT‑Rosalind posted one of the best results among models with published data.

On LABBench2, it outperformed GPT‑5.4 in six of 11 tasks. The largest margin was on CloningQA, which requires designing DNA and enzymes for molecular cloning protocols.

Source: OpenAI

OpenAI also published a free Life Sciences plugin for Codex on GitHub. It is available to all users and connects the AI to more than 50 public scientific databases and domain tools.

On April 16, Google released Gemini 3.1 Flash TTS—an updated speech-synthesis model based on the Gemini 3 generation.
