Models · Apr 19, 2026

Anthropic releases Claude Opus 4.7 with improved reasoning and vision capabilities

New model shows consistent benchmark gains over prior version, expanded image resolution support, and introduces intermediate effort tier for reasoning tasks.

Trust: 56
Hype: Some hype

1 source

TL;DR
  • Anthropic launched Claude Opus 4.7, featuring improvements in long-context reasoning, instruction-following, and self-verification compared to Opus 4.6, with unchanged pricing.
  • The model shows gains on multiple benchmarks: SWE-Bench Pro rose approximately 11 points to 64.3%, while document reasoning improved to 80.6% from 57.1%.
  • Vision capabilities expanded to accept images up to 2,576 pixels on the long edge, roughly 3.75 megapixels and three times the limit of prior Claude models, enabling better computer-use agent performance.
  • A new tokenizer increases token usage by up to 35% for identical inputs, though Anthropic reports overall token consumption remains up to 50% lower through improved reasoning efficiency.
  • Claude Code now defaults to a new 'xhigh' reasoning effort level positioned between the existing 'high' and 'max' tiers, aimed at better performance on complex tasks.

Anthropic announced Claude Opus 4.7 as its latest flagship model, positioned as a successor to Opus 4.6 with three explicitly stated behavioral improvements: enhanced handling of extended reasoning tasks, more precise instruction adherence, and strengthened self-verification before response generation. The model became available immediately on Claude's platform, with day-one support in Claude Code.

Benchmark reporting indicates consistent incremental progress. SWE-Bench Pro scores reached 64.3%, an approximately 11-point gain over the prior version; SWE-Bench Verified climbed to 87.6% (a roughly 7-point improvement); and document reasoning rose to 80.6% from a previously reported 57.1%. Additional benchmarks showed single-digit to mid-range point gains across reasoning and coding tasks, and multiple third-party evaluation services ranked Opus 4.7 as their top model.

A technically significant change involved image processing: Opus 4.7 now accepts images up to 2,576 pixels on the longest edge, approximately 3.75 megapixels total—a threefold increase from prior Claude versions. This expansion reduces automatic downscaling and is intended to improve computer-use agent performance, especially for screenshot-dense workflows and visual data extraction from complex diagrams.
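The effect of the new limit can be sketched with a little arithmetic. The helper below is a hypothetical illustration, not Anthropic's actual resizing pipeline (which is not public); it only assumes that downscaling caps the longest edge while preserving aspect ratio.

```python
def downscale_dims(width, height, max_long_edge=2576):
    """Return image dimensions after capping the longest edge at
    max_long_edge pixels, preserving aspect ratio (hypothetical model
    of the resizing described in the article)."""
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height  # fits within the cap: no downscaling
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)

# A 4K screenshot (3840x2160) now shrinks only modestly under the
# 2,576 px cap, keeping small UI text more legible for agents:
print(downscale_dims(3840, 2160))  # -> (2576, 1449)
```

Under the same sketch, a prior-generation cap one-third the pixel budget would have forced far more aggressive shrinking, which is why screenshot-dense workflows benefit most.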

Token economy analysis revealed a new tokenizer underlying Opus 4.7, with potential cost implications. Identical inputs can produce 1.0 to 1.35 times as many tokens depending on content type, prompting debate over whether this represents a new base model or a continuation with architectural modifications. Anthropic responded by raising token limits for all subscriber tiers to offset the higher per-input token counts. Despite the increased tokens-per-input ratio, the company claims overall token consumption remains up to 50% lower than equivalent Opus 4.6 operations, through efficiency gains in reasoning.
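The two claims can be reconciled with a quick back-of-envelope check: if each unit of text costs up to 35% more tokens but total consumption still ends up 50% lower, the model's raw output volume must have shrunk substantially. The sketch below uses only the ratios reported in the article; the framing is our own illustration.

```python
def required_volume_cut(tokenizer_ratio, net_reduction):
    """Fraction by which raw token volume must shrink for the claimed
    net reduction to hold, given per-input tokenizer inflation.
    net_usage = tokenizer_ratio * volume_factor, so
    volume_factor = (1 - net_reduction) / tokenizer_ratio."""
    return 1 - (1 - net_reduction) / tokenizer_ratio

# Worst-case 1.35x tokenizer inflation with the claimed 50% net reduction:
print(round(required_volume_cut(1.35, 0.50), 3))  # -> 0.63
```

In other words, for the 50% figure to hold at the 1.35x end of the tokenizer range, Opus 4.7 would need to emit roughly 63% fewer raw tokens than Opus 4.6 on equivalent work.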

Claude Code introduced a new 'xhigh' reasoning effort mode, positioned between the existing 'high' and 'max' tiers, and set it as the default for Opus 4.7 users. The product also added task budgets in public beta and /ultrareview functionality, alongside broader Auto mode access for Claude Code Max subscribers. Independent evaluators reported mixed results on specialized tasks: LlamaIndex measured dramatic improvement on chart recognition (from 13.5% to 55.8%) but minimal gains on formatting and layout tasks, suggesting the capability gains apply unevenly across document-processing use cases.

Sources
  1. Latent Space (swyx) — [AINews] Anthropic Claude Opus 4.7: literally one step better than 4.6 in every dimension

Stories may contain errors. Dispatch is assembled with AI assistance and curated by human editors; despite the trust-score filter, mistakes happen. We correct publicly — every article links to its revision history. Nothing here is financial, legal, or medical advice. Verify before relying on any claim.

© 2026 Dispatch. No ads. No sponsorships. No paid placement. Reader-supported via Ko-fi.

Built by a person who cares about honest AI news.