Meta is signaling that its next wave of model releases will be open only in a qualified sense. According to reporting from The Decoder, the company plans to release parts of its new AI models as open source, while keeping other components proprietary and reviewing safety risks before anything ships. The largest models are not expected to be public.
That distinction is the whole story. Meta is not reverting to a blanket open-release posture, and it is not promising a fully inspectable frontier system. Instead, it appears to be slicing the stack: making some artifacts available to developers and researchers while preserving control over the most valuable weights, system components, and deployment decisions.
What Meta is actually opening
The practical meaning of this move depends on what Meta chooses to include in the release. If the company publishes select weights, interfaces, or tooling around the models, outside developers may get enough to build with the system, adapt it, or evaluate it in constrained settings. But that is very different from releasing the entire model family in a way that lets others reproduce the full training and inference pipeline end to end.
That difference matters because the word “open” can cover a lot of ground. Open weights, open code, open evaluation harnesses, and open training recipes each unlock different levels of use. A partial release can improve adoption without giving away the complete technical advantage. It can also leave outside teams dependent on Meta for the most important capabilities.
Why the Wang-era models are a notable signal
These are the first models associated with Alexandr Wang’s influence inside Meta, after his arrival through the Scale AI deal. That makes the release more than a product update. It is also an organizational signal about how Meta wants to operate at the frontier: move quickly, attract developers, and preserve leverage over the systems that matter most.
If the old Llama-era narrative was about broad openness as a brand position, this next phase looks more selective. Meta can still claim ecosystem leadership, but it no longer has to pay the full strategic cost of putting its best frontier models into the wild.
Openness without full reproducibility
For technical users, the key issue is reproducibility. A selective release may be enough for integration work, benchmarking against a limited slice of functionality, or fine-tuning adjacent components. It is not enough to fully verify how the system behaves under the hood if the largest models remain closed and core pieces of the stack are withheld.
That has real consequences for developers and researchers:
- You may be able to test against Meta-provided artifacts without being able to recreate the complete model.
- You may inherit performance characteristics without a clear path to retraining or deep inspection.
- You may get tooling benefits, but still be locked out of the most capable configuration.
In other words, a partial release can widen adoption while keeping the most valuable technical asset proprietary. That is good distribution strategy, but it is not the same thing as open science.
Safety review as a distribution gate
Meta says it will review safety risks before releasing anything, which gives the company an internal justification for keeping control over the release process. That is not trivial. Safety review can be a legitimate step in deciding what should ship publicly, especially for powerful models.
But it also changes the meaning of the release. If safety assessment sits upstream of publication, then “open source” starts to look less like a standing commitment and more like a curated approval channel. Meta can decide which artifacts are safe enough to expose, which ones should stay internal, and which capabilities should never be broadly distributed.
That gatekeeping may be sensible from a governance perspective. It also means developers should not assume they are getting a full, permissive open-model package.
Why this matters for the open-model market
Meta has spent years positioning itself as the company most willing to push open models into the market. This new strategy preserves that position, but in a more selective form. The company can continue to shape developer expectations, attract tooling ecosystems, and influence deployment defaults without fully commoditizing its strongest frontier work.
That matters competitively. In an AI stack increasingly defined by who controls model distribution rather than who makes the loudest open-source claim, Meta is trying to occupy a middle ground: open enough to become the default platform for many builders, closed enough to keep strategic differentiation at the top end.
That could pressure rivals in a way that pure capability competition does not. If Meta becomes the easiest place to build around open-ish models, it can win distribution even when it is not fully disclosing the best system.
What technical teams should watch next
The real question is not whether Meta says “open source.” It is what exactly shows up in the release.
Engineering teams should look for:
- full model weights versus partial checkpoints
- source code versus deployment wrappers
- evals and benchmarks
- architecture details
- fine-tuning recipes
- inference and serving tooling
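One way to make that checklist concrete is to audit a release listing programmatically. The sketch below is purely illustrative: the file-name patterns and category names are assumptions I've chosen for the example, not Meta's actual release layout, which is unknown at this point.

```python
# Hypothetical audit of a model-release file listing against the
# checklist above. All patterns here are illustrative assumptions,
# not any vendor's real directory structure.

WEIGHT_EXTS = (".safetensors", ".pt", ".bin", ".ckpt")
CODE_EXTS = (".py", ".cpp", ".cu")
CONFIG_EXTS = (".json", ".yaml", ".yml")
EVAL_KEYWORDS = ("eval", "benchmark")


def audit_release(filenames):
    """Return which artifact categories appear in a release listing,
    mapped to the matching file names."""
    found = {"weights": [], "evals": [], "source_code": [], "configs": []}
    for name in filenames:
        lower = name.lower()
        if lower.endswith(WEIGHT_EXTS):
            found["weights"].append(name)
        # Check eval keywords before code extensions, so that
        # e.g. an eval harness script is counted as an eval artifact.
        elif any(key in lower for key in EVAL_KEYWORDS):
            found["evals"].append(name)
        elif lower.endswith(CODE_EXTS):
            found["source_code"].append(name)
        elif lower.endswith(CONFIG_EXTS):
            found["configs"].append(name)
    # Drop empty categories so the result shows only what shipped.
    return {cat: files for cat, files in found.items() if files}
```

A release that yields only `configs` and `source_code` from a listing like this would be a wrapper-level publication; one that also surfaces `weights` and `evals` is closer to a usable open-weights package. Either way, the absence of training recipes and architecture details would not show up in a file scan at all, which is exactly the gap the checklist is meant to probe.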
Each one changes the usefulness of the release. A model family with weights, code, and training details is a very different proposition from a controlled publication of selected components. The former can become infrastructure. The latter is more likely to become a distribution vehicle.
Meta’s announcement, then, is less about openness as an ideology than openness as a lever. The company appears to want the adoption benefits of open models without surrendering the frontier systems that define competitive advantage.