Anthropic has overtaken OpenAI in Ramp’s latest AI Index for verified business customers, with 34.4% of participating companies paying for Anthropic services versus 32.3% for OpenAI. It is the first time Anthropic has taken the top spot in Ramp’s enterprise-oriented dataset, and that matters because it suggests a change in how companies are actually wiring AI into production workflows, procurement processes, and governance stacks.

The shift is not a universal market verdict. Ramp’s index is built from expense data across more than 50,000 companies that use the platform, which makes it broad enough to be meaningful but still far from a complete view of enterprise AI buying. Even so, the direction of travel is notable: OpenAI has long been treated as the default enterprise AI supplier, while Anthropic’s growth in verified business usage now points to a more competitive, more fragmented purchasing environment.

Technically, the most important question is not which model brand is “winning,” but why enterprise teams appear to be routing more spend toward Anthropic in this sample. The answer likely sits in the intersection of product packaging, governance posture, and deployment fit. Anthropic has leaned into enterprise-friendly controls and a model line that many buyers perceive as easier to operationalize under stricter policy constraints. For teams building internal copilots, agent workflows, or regulated knowledge systems, that combination can matter more than raw model benchmarks.

OpenAI remains a major force in enterprise AI, and Ramp’s own commentary suggests the picture is mixed by segment rather than flipped wholesale. The company’s data indicates Anthropic had already been strong in high-adoption categories such as finance, technology, and professional services, while OpenAI retained an edge among other segments even as that lead narrowed over recent months. In other words, the change looks more like a rebalancing across enterprise cohorts than a sudden collapse in OpenAI demand.

That distinction matters for product teams. If procurement patterns are drifting toward Anthropic, platform owners should assume buyers are paying closer attention to model behavior under policy constraints, auditability, and the ease of aligning AI usage with internal controls. The practical consequences show up in integration design: routing rules between vendors, policy enforcement at the middleware layer, logging and retention choices, and fallback logic when one provider’s API or feature set is unavailable.
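The middleware pattern described above can be sketched in a few lines. This is a hedged illustration, not any vendor's real API: the provider callables, the `contains_pii` flag, and the policy rule are all hypothetical stand-ins for whatever a team's actual SDK wrappers and controls look like.

```python
# Sketch: policy enforcement at the routing layer, plus fallback when
# the primary provider is unavailable. Providers here are fakes.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Request:
    prompt: str
    contains_pii: bool = False  # hypothetical policy flag

# Hypothetical provider callables; real code would wrap vendor SDKs.
def call_openai(req: Request) -> str:
    raise TimeoutError("provider unavailable")  # simulate an outage

def call_anthropic(req: Request) -> str:
    return f"[anthropic] {req.prompt}"

PROVIDERS: list[tuple[str, Callable[[Request], str]]] = [
    ("openai", call_openai),       # primary
    ("anthropic", call_anthropic), # fallback
]

def enforce_policy(req: Request) -> None:
    # Example internal control: block prompts flagged as containing PII.
    if req.contains_pii:
        raise PermissionError("policy: PII may not leave the tenant")

def route(req: Request) -> str:
    enforce_policy(req)
    failures = []
    for name, call in PROVIDERS:
        try:
            result = call(req)
            print(f"routed to {name}")  # audit/retention logging hook
            return result
        except Exception as exc:
            failures.append((name, exc))
    raise RuntimeError(f"all providers failed: {failures}")

print(route(Request("summarize the Q3 ledger")))
```

The point of the sketch is where the controls live: policy checks and audit logging sit above the vendor boundary, so they apply identically no matter which provider ultimately serves the request.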

Enterprises increasingly need multi-LLM architectures for exactly this reason. A single-model strategy can simplify integration, but it also concentrates risk: vendor lock-in, sudden pricing changes, shifting model behavior, and governance gaps when different business units use AI differently. If Anthropic’s momentum continues in regulated or high-scrutiny environments, teams may need to harden their abstraction layers so they can swap providers without rewriting policy engines, prompt templates, or data handling controls.
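What "hardening the abstraction layer" means in practice can be shown with a small interface sketch. Nothing here reflects either vendor's actual SDK; the adapter classes, template, and `redact` helper are illustrative only. The design point is that prompt templates and data-handling rules sit above the adapter, so swapping vendors is a one-line change.

```python
# Sketch: a vendor-agnostic chat interface. Swapping providers touches
# one adapter; templates and data controls are untouched.
from typing import Protocol

class ChatProvider(Protocol):
    def complete(self, system: str, user: str) -> str: ...

class AnthropicAdapter:
    def complete(self, system: str, user: str) -> str:
        # Real code would call the vendor SDK here.
        return f"anthropic:{system}|{user}"

class OpenAIAdapter:
    def complete(self, system: str, user: str) -> str:
        return f"openai:{system}|{user}"

# Prompt templates and redaction live above the adapter boundary.
SYSTEM_TEMPLATE = "You are an internal assistant. Cite sources."

def redact(text: str) -> str:
    # Hypothetical data-handling control, applied before any vendor sees input.
    return text.replace("ACME-SECRET", "[redacted]")

def ask(provider: ChatProvider, question: str) -> str:
    return provider.complete(SYSTEM_TEMPLATE, redact(question))

# Changing this one constructor is the entire vendor migration.
print(ask(AnthropicAdapter(), "What is in the ACME-SECRET memo?"))
```

A team with this seam in place can follow shifts like the one in Ramp's data as a routing decision rather than a rewrite.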

There is also a procurement implication. Verified enterprise usage tends to reflect more than user preference; it reflects the outcome of a buying process that includes legal review, security assessment, data residency questions, and contract terms. If Anthropic is winning more of those decisions inside Ramp’s sample, that could indicate its enterprise packaging is lining up well with current buyer requirements. For product leaders, that is a signal to revisit how AI vendors are evaluated: not just by model quality, but by how they support approvals, isolation boundaries, and operational oversight.

At the same time, this should not be read as evidence that OpenAI is losing the broader enterprise race. Ramp’s index is one proxy, not a census. OpenRouter’s leaderboard, which samples a different population, has also been cited as a useful cross-check, and TechCrunch notes that OpenAI last ranked above Anthropic there in December 2025. That kind of cross-source variation is exactly why this data should be treated as directional rather than definitive.

The most defensible interpretation is that Anthropic has momentum in a subset of enterprise buyers that are disproportionately important for AI deployment decisions: organizations that care about governance, integration predictability, and policy alignment as much as model capability. OpenAI still appears to have a broader footprint, especially outside the cohorts where Anthropic is strongest. But the gap in Ramp’s data has narrowed enough to matter.

What to watch next is whether the lead persists across subsequent Ramp updates and whether other independent trackers show the same shift. If the pattern holds, it would suggest a meaningful change in enterprise vendor preference, especially among teams that are moving from experimentation to governed production use. The operational question for those teams is not whether to bet on one provider permanently, but how quickly their tooling can absorb a supplier mix that is becoming more fluid.