OpenAI has split the difference between its $20 Plus plan and its $200 top tier with a new $100/month ChatGPT Pro subscription. That sounds like a simple pricing adjustment, but it is really a statement about where the premium AI market is headed: not toward one “best” subscription, but toward tiers that map to how intensely people use the product.

The timing matters because OpenAI is no longer just selling general-purpose chat. The new Pro plan leans heavily on Codex, the company’s coding-focused tool, and OpenAI says it is meant for “longer, high-effort” sessions. The Verge’s reporting makes the comparison especially explicit: Pro offers five times more Codex usage than the $20 Plus tier, while the $100 price point lands directly against Anthropic’s Max tier for Claude. In other words, this is not a consumer upsell dressed up as convenience. It is a workload product.

That distinction matters technically. Once a subscription is tied to extended coding sessions and heavier tool use, the real scarce resources are no longer just model quality or feature count. They are tokens, latency, concurrency, and the amount of server-side compute OpenAI is willing to allocate to a single user. A plan like this is a way to ration those resources without forcing everyone into a single expensive top tier. The pricing is doing infrastructure management as much as revenue optimization.
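One way to picture pricing-as-rationing is a per-tier token budget that gates each request. The sketch below is purely illustrative: the budget numbers and the `UsageMeter` abstraction are assumptions for demonstration, not OpenAI's actual limits; the only detail taken from the article is the 5x Pro-to-Plus usage ratio.

```python
from dataclasses import dataclass

# Hypothetical monthly token budgets -- illustrative numbers only,
# chosen to reflect the reported 5x Pro-vs-Plus Codex ratio.
TIER_BUDGETS = {"plus": 1_000_000, "pro": 5_000_000}

@dataclass
class UsageMeter:
    """Tracks one subscriber's token consumption against a tier budget."""
    tier: str
    used: int = 0

    def try_consume(self, tokens: int) -> bool:
        """Admit the request only if it fits in the remaining budget."""
        if self.used + tokens > TIER_BUDGETS[self.tier]:
            return False  # cap hit: throttle, queue, or prompt an upgrade
        self.used += tokens
        return True

# Both users submit the same heavy session: four large Codex-style requests.
plus, pro = UsageMeter("plus"), UsageMeter("pro")
session = [300_000] * 4
plus_ok = [plus.try_consume(t) for t in session]
pro_ok = [pro.try_consume(t) for t in session]
print(plus_ok)  # [True, True, True, False] -- Plus caps out mid-session
print(pro_ok)   # [True, True, True, True]  -- Pro absorbs the workload
```

The point of the toy model is that the tiers differ in admission policy, not in the model being served: the same request stream succeeds or fails depending only on the budget attached to the subscription.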

Seen that way, the new midpoint is a sensible move. Previously, subscribers faced a steep jump from $20 to $200 per month. TechCrunch noted that OpenAI had been sitting on a pricing ladder that left serious users with an awkward choice between a mainstream plan and a premium tier that was hard to justify unless you were truly pushing the product hard. The $100 tier creates a more believable path for power users who do not need the absolute ceiling, but who are already living inside the product long enough for usage caps to matter.

That is also why the Codex emphasis is strategically important. Coding workflows are unusually good at exposing AI economics because they are iterative, tool-heavy, and expensive in aggregate. A developer can spend a long session prompting, regenerating, testing, and refining, which turns “chat” into a sustained inference workload. If OpenAI can make that experience feel materially better than Plus without reserving it only for the most expensive tier, it can capture a user segment that values throughput more than novelty.
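The aggregate cost of that iterative loop can be made concrete with a back-of-the-envelope model. All the numbers below are assumptions for illustration; the structural point, which does come from the paragraph above, is that each regenerate/test/refine round resends the session's growing context, so total tokens grow much faster than the number of rounds.

```python
# Illustrative model (assumed numbers) of an iterative coding session.
context = 2_000          # initial prompt plus code, in tokens
per_round_output = 800   # tokens generated per refine cycle
total = 0
for _ in range(10):      # ten regenerate/test/refine cycles in one sitting
    total += context + per_round_output  # input + output billed each round
    context += per_round_output          # output folds back into the context
print(total)  # 64_000 tokens for one modest session
```

Ten independent 2,800-token calls would cost 28,000 tokens; the compounding context more than doubles that, which is why sustained coding sessions expose AI economics so quickly.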

The competitive read is just as clear. Anthropic has made Claude Code central to its pitch, and its Max tier sits at the same $100 price point. OpenAI’s move looks designed to keep coders from drifting toward Anthropic by giving them a better-fitting option inside ChatGPT’s own lineup. That is a subtle but important change: the battle is no longer only about who has the best model. It is about which company can package model access, coding tools, and usage limits into a subscription that matches how builders actually work.

What this signals for the next phase of AI products is straightforward. As these tools become embedded in longer-running, higher-value workflows, subscription labels will matter less than the economics underneath them. Plans will increasingly be shaped by session length, tool depth, and inference load rather than a flat list of “premium” features. The new $100 Pro tier is a sign that premium AI is maturing into a workload business, where pricing is starting to resemble infrastructure economics more than software bundling.