OpenAI has split ChatGPT’s premium ladder again, this time with a $100-per-month Pro subscription that sits between the $20 Plus plan and the existing $200 top tier. The pricing alone is notable, but the more important signal is what OpenAI is attaching to the tier: according to reporting from The Verge, Pro includes five times more Codex usage than Plus and is intended for longer, high-effort coding sessions. In practical terms, OpenAI is no longer framing premium access as a bundle of features. It is pricing around workload intensity.

That matters because Codex-heavy usage is not a static entitlement problem; it is a resource-allocation problem. If a user spends more time in long-running coding sessions, the service has to absorb more tokens, sustain more context over time, and keep more interactive loops alive with acceptable latency. A tier that explicitly grants 5x more Codex usage than Plus is therefore closer to a compute budget than a conventional consumer subscription. The price is not only about willingness to pay. It is a proxy for how much server-side capacity OpenAI is prepared to reserve for a single account.

This is also a market signal. OpenAI is positioning the new plan against Anthropic’s Claude Max tier, which also sits at $100, while The Verge notes the company is trying to win users from Anthropic’s Claude Code workflow. That comparison suggests the real competition is not over chatbot novelty. It is over which vendor can support heavier developer sessions with enough throughput, responsiveness, and predictable limits to become the default place where code work happens.

For engineering teams, the relevant question is what this means for operational behavior. The answer starts with tokens. Longer Codex sessions imply more prompt growth, more tool calls, and more back-and-forth iterations inside the same session window. Even if model quality stays constant, the economics shift as the session lengthens: each turn typically reprocesses the accumulated context, so total tokens consumed grow faster than linearly with turn count. A workload-oriented tier implicitly acknowledges that some users will generate enough demand to strain ordinary consumer limits, and that the cost of serving them cannot be flattened across a general-purpose plan.
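The superlinear growth is easy to see with a back-of-the-envelope model. This sketch assumes the full accumulated context is resent every turn; the per-turn token counts are placeholder assumptions, not OpenAI's actual accounting:

```python
# Hypothetical sketch: cumulative tokens processed across a long session,
# assuming the whole context is resent on every turn. The prompt/reply
# sizes are illustrative assumptions.

def cumulative_tokens(turns: int, prompt_tokens: int = 500,
                      reply_tokens: int = 300) -> int:
    """Total tokens the server processes over one session.

    Each turn re-reads the accumulated context (prior prompts and replies)
    plus the new prompt, then generates a reply that joins the context.
    """
    context = 0
    total = 0
    for _ in range(turns):
        context += prompt_tokens   # new user prompt joins the context
        total += context           # whole context is processed as input
        total += reply_tokens      # tokens generated this turn
        context += reply_tokens    # reply joins the context too
    return total

# A 40-turn refactoring session costs far more than 40x a single turn:
one_turn = cumulative_tokens(1)
long_session = cumulative_tokens(40)
```

Under these assumptions, the 40-turn session processes more than twenty times what forty isolated single turns would, which is exactly the kind of demand a flat consumer plan struggles to absorb.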

Latency is the next constraint. Long, high-effort coding sessions tend to be less tolerant of slow turn times because they are used interactively: asking for a refactor, checking a diff, revising instructions, and repeating. If the Pro plan is meant to support that kind of use, then the user experience will depend less on headline model capability than on how quickly responses return under sustained load. Even a well-priced tier can feel fragile if latency climbs when a session becomes complex or long-lived.
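One way to make that fragility measurable is to bucket observed turn latencies by how deep into a session they occur. This is a monitoring sketch with assumed bucket boundaries, not a description of any vendor's telemetry:

```python
# Sketch: does turn latency climb as sessions grow? Bucket observed
# latencies by session depth and compare tail latency. The 10-turn
# boundary is an arbitrary assumption for illustration.
from statistics import quantiles

def p95_by_depth(samples):
    """samples: list of (turns_so_far, latency_seconds) observations.
    Returns (p95 for short sessions, p95 for long sessions)."""
    short = [lat for turns, lat in samples if turns < 10]
    long_ = [lat for turns, lat in samples if turns >= 10]

    def p95(xs):
        if len(xs) < 2:
            return xs[0] if xs else 0.0
        return quantiles(xs, n=20)[-1]   # 19th cut point ~ 95th percentile

    return p95(short), p95(long_)
```

If the long-session p95 drifts upward while the short-session p95 stays flat, the tier is degrading under exactly the workload it was sold for.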

Concurrency is the other side of the same problem. A subscription that promises heavier Codex use has to contend with how many simultaneous tasks one user can keep active before performance degrades or limits appear. That may not be visible in the marketing copy, but it is central to deployment planning. Teams adopting Pro for agentic coding, code review, or iterative debugging should assume the meaningful unit of analysis is not just a seat, but the volume and duration of active sessions per seat.
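Treating "active sessions per seat" as the planning unit can be made concrete with a small admission-control sketch. The per-seat cap here is an assumption for illustration; any real limit would come from the provider:

```python
# Hypothetical sketch: tracking active sessions per seat and refusing
# admission past an assumed cap, rather than letting performance degrade.
from dataclasses import dataclass, field
import time

@dataclass
class Seat:
    max_concurrent: int = 3                       # assumed per-seat cap
    active: dict = field(default_factory=dict)    # session_id -> start time

    def try_start(self, session_id: str) -> bool:
        """Admit a new session only if the seat has headroom."""
        if len(self.active) >= self.max_concurrent:
            return False                          # queue or defer instead
        self.active[session_id] = time.monotonic()
        return True

    def finish(self, session_id: str) -> float:
        """Close a session and return its duration in seconds."""
        return time.monotonic() - self.active.pop(session_id)

seat = Seat()
seat.try_start("refactor")
seat.try_start("code-review")
seat.try_start("debug")
blocked = seat.try_start("fourth")    # over the assumed cap
```

The point of the exercise is that "one seat" can mean anything from one idle chat to three long-lived agentic sessions, and capacity planning has to price the latter.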

The pricing structure also sharpens the ROI question. At $100 per month, Pro is not an impulse upgrade, but it is low enough to invite serious use by individual developers and small teams that were previously stuck between the $20 Plus tier and the $200 flagship plan. For Codex-centric workflows, that middle ground could be attractive if it preserves acceptable throughput for the tasks that actually consume time. If not, the plan risks becoming a more expensive way to discover the same ceilings.
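The break-even arithmetic is simple enough to sketch. The hourly rate below is a placeholder assumption; substitute your own fully-loaded cost:

```python
# Back-of-the-envelope ROI for a $100/month seat. The $80/hour
# fully-loaded rate is an illustrative assumption.

def breakeven_hours(monthly_price: float, hourly_rate: float) -> float:
    """Developer-hours per month the plan must save to pay for itself."""
    return monthly_price / hourly_rate

hours = breakeven_hours(100, 80)   # 1.25 hours/month at the assumed rate
```

At an assumed $80/hour, the seat pays for itself if it saves a bit over an hour of developer time per month, which is why the real risk is not the price but hitting the same ceilings at a higher price.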

That is why the most important indicators to watch are operational rather than promotional. Teams should look at Codex usage quality of service, sustained session latency, and whether concurrency limits surface under realistic workloads. They should also track how much of a project’s token spend is concentrated in a small number of power users, because that will determine whether the Pro plan scales cleanly or becomes a cost outlier.
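The concentration check in particular is easy to automate. This sketch computes the share of token spend attributable to the heaviest users; the figures are illustrative:

```python
# Sketch: how concentrated is a project's token spend? If a handful of
# power users dominate, per-seat tiers scale very differently than a
# uniform-usage assumption suggests. Data below is illustrative.

def top_share(spend_by_user: dict, top_n: int) -> float:
    """Fraction of total token spend attributable to the top_n users."""
    spends = sorted(spend_by_user.values(), reverse=True)
    total = sum(spends)
    return sum(spends[:top_n]) / total if total else 0.0

spend = {"alice": 900_000, "bob": 60_000, "carol": 25_000, "dave": 15_000}
share = top_share(spend, 1)   # one user accounts for 90% of spend
```

When one seat drives most of the spend, the question becomes whether the Pro tier's limits are set per seat in a way that absorbs that user or merely relocates the ceiling.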

OpenAI’s move does not prove that the company has hit a hard capacity wall, and it would be premature to read it that way. But the structure of the plan is revealing. By tying a $100 tier to 5x Codex usage and longer sessions, OpenAI is explicitly recognizing that premium AI software is becoming a workload management problem. For developers and product teams, that is the useful frame: not which plan has the most features, but which tier can absorb the most compute without turning every session into a budgeting exercise.