The biggest change in AI this week is not a new model release. It is a change in posture: OpenAI, Anthropic, and Google are now coordinating on how to detect when their models are being copied at scale. Through the Frontier Model Forum, the companies are sharing information aimed at spotting adversarial distillation — the practice of using a model’s outputs to train a cheaper surrogate that imitates much of the original behavior.

That matters because it reframes model copying from an abstract intellectual-property complaint into an operational security problem. If you run a frontier model as an API, every inference call is both a product event and a possible extraction event. The more capable and widely available the model becomes, the more attractive it is to probe, sample, and compress into a lower-cost competitor.

Adversarial distillation is straightforward in concept, even if it is hard to stop in practice. An attacker queries a target model with large volumes of prompts, captures the responses, and then trains another model on those input-output pairs. The resulting system does not need access to the original weights. It only needs enough coverage of the original model’s behavior to reproduce its decision patterns, style, or task performance at a fraction of the serving cost.
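The loop described above can be sketched in a few lines. This is a toy illustration, not a real attack: `query_target` stands in for the hosted model's API, and the `Student` class is a deliberately trivial surrogate (a real attacker would train a neural network on the harvested pairs). All names here are invented for the sketch.

```python
def query_target(prompt: str) -> str:
    """Stand-in for the hosted model: in a real attack this is an API call."""
    return prompt.upper()  # toy behavior the attacker wants to copy


class Student:
    """Toy surrogate that memorizes harvested input-output pairs.

    A real distillation attack would fit a neural model to the pairs;
    memorization is enough to show the shape of the attack.
    """

    def __init__(self):
        self.pairs = {}

    def train(self, dataset: dict):
        self.pairs.update(dataset)

    def predict(self, prompt: str) -> str:
        # Exact-match lookup; a trained model would generalize instead.
        return self.pairs[prompt]


# Step 1: harvest input-output pairs via ordinary-looking queries.
prompts = ["hello", "goodbye", "thanks"]
dataset = {p: query_target(p) for p in prompts}

# Step 2: train the surrogate on the harvested pairs.
# Note that no access to the target's weights is ever needed.
student = Student()
student.train(dataset)
```

The point of the sketch is the asymmetry: the provider pays to serve every `query_target` call, while the attacker keeps only the cheap byproduct, the input-output pairs.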

That makes hosted models especially exposed. Open-weight systems can be copied directly if someone gets the weights, but API-delivered models create a different kind of vulnerability: the product itself is the channel. As long as the model is available through normal-looking prompts, a determined user can distribute requests over time, across accounts, or across superficially legitimate workloads. From the provider’s point of view, extraction blends into ordinary demand.

That is why detection is difficult. The attacker does not have to look like an attacker. They can send prompts that resemble everyday usage, stay within rate limits, and spread requests across many sessions or tenants. For the provider, distinguishing a real customer from a harvesting operation becomes a classification problem with expensive false positives. If defenses are too loose, the model is copied. If they are too aggressive, legitimate customers get throttled or blocked.
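One way to picture that classification problem is as a score over per-account signals compared against a threshold. The signals, weights, and threshold below are invented for illustration; a production system would use far richer features, but the tradeoff is the same: lower the threshold and real customers get flagged, raise it and harvesting slips through.

```python
def extraction_score(requests_per_day: int,
                     distinct_prompt_ratio: float,
                     topic_spread: float) -> float:
    """Score an account on harvesting-like signals (higher = more suspicious).

    Signals and weights are illustrative assumptions, not a real system:
      - sustained high volume,
      - few repeated prompts (harvesters rarely re-ask the same thing),
      - probing across many unrelated domains.
    """
    volume = min(requests_per_day / 10_000, 1.0)
    novelty = distinct_prompt_ratio
    coverage = topic_spread
    return 0.4 * volume + 0.3 * novelty + 0.3 * coverage


# Too low a threshold throttles real customers; too high misses extraction.
THRESHOLD = 0.7

ordinary_customer = extraction_score(500, 0.4, 0.2)    # repetitive, narrow use
harvester = extraction_score(20_000, 0.95, 0.9)        # broad, high-volume probing
```

A patient attacker attacks exactly these signals: splitting the 20,000 daily requests across forty accounts makes each one look like the ordinary customer.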

The Frontier Model Forum collaboration suggests the labs think this is now a shared engineering problem, not just an internal trust-and-safety issue. Information-sharing about suspicious usage signatures, abuse patterns, and mitigation tactics could help establish a de facto security layer for frontier APIs. In practice, that might mean better anomaly detection, stronger account verification for high-volume access, prompt-pattern analysis, tighter throttling, and response-level instrumentation designed to spot large-scale harvesting before it finishes.

That kind of coordination would also affect product design. API exposure has always been a growth channel: it lets model makers sell usage, learn from customers, and build ecosystems around their platforms. But it is also a liability if the same interface makes it easy to extract capabilities and reproduce them elsewhere. Once that tradeoff becomes central, providers have incentives to raise friction for suspicious large-scale usage, segment customers into different trust tiers, or even create premium inference products with stronger protection and stricter access controls.
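The trust-tier idea can be made concrete with a small sketch. The tier names, limits, and flagging behavior below are assumptions for illustration only: verified identity buys more headroom, and an account flagged by anomaly detection is clamped hard pending review rather than banned outright.

```python
from dataclasses import dataclass

# Illustrative tiers: requests per minute by level of account verification.
TIER_LIMITS = {
    "anonymous": 20,
    "verified": 200,
    "enterprise": 2_000,
}


@dataclass
class Account:
    tier: str
    flagged: bool = False  # set by upstream anomaly detection


def rate_limit(account: Account) -> int:
    """Effective request ceiling for an account.

    Flagged accounts are throttled to a trickle pending review --
    raising friction for suspicious usage without hard-blocking a
    customer who may turn out to be legitimate.
    """
    base = TIER_LIMITS[account.tier]
    return min(base, 5) if account.flagged else base
```

The design choice worth noting is that throttling, unlike an outright ban, keeps the false-positive cost bounded: a wrongly flagged customer is slowed, not lost.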

The commercial implication is blunt: model leadership is no longer defended only by posting the best benchmark scores. It is also defended by protecting the economics around inference. If a frontier model can be cheaply imitated from its outputs, then the provider is not just losing intellectual property; it is losing margin, pricing power, and, over time, the advantage of being first.

That is why this collaboration matters beyond the headline. If the biggest AI labs are now sharing defenses against extraction, the industry may be entering a phase where security around model outputs becomes part of the core platform stack. The next competitive frontier may not be who can build the strongest model in isolation, but who can deliver it at scale without making it easy to copy.