A lot of automation programmes still tell the same story in two different languages.

On the factory floor, individual cells are hitting spec. Cycle times are inside the commissioning envelope. Robots are running. Vision systems are classifying parts. Local OEE dashboards look respectable. In the board pack, though, the number that matters most is stubbornly underperforming: line-level throughput.

That gap is becoming harder to ignore in 2026 because the limiting factor is no longer just mechanical or computational. It is the integration layer between cells and the systems that actually govern flow: line-level scheduling, MES, ERP and WMS. The cells can be fast and accurate. The programme still stalls if those gains do not propagate into end-to-end planning, dispatch, reconciliation and inventory movement.

This is why the plateau now sits between the cell and the line. The data path is where ROI leaks get hidden. If line-level metrics never make it into the cell-automation pipeline, then the automation stack optimizes local performance while leaving the system-level bottleneck untouched.

A recent audit pattern makes the problem concrete. A Tier 1 automotive supplier with three plants in central Europe had installed 86 robots across body-in-white. The team could point to strong cell-level performance: cycle time inside target, commissioning complete, shift-level metrics looking healthy. Yet line-level throughput had moved only 4% over three years against a 22% plan.

The uncomfortable detail was not the robotics. It was the software wrapping it.

The integration layer had been built as overflow work by a small internal IT team: PLC bridge code, MES write-back routines, and a homegrown OEE pipeline stitched together over several release cycles. It had never been designed as a first-class control surface. It was there to make individual cells visible and minimally compliant, not to carry scheduling constraints, quality events, changeovers and inventory status into a single flow model. The result was a familiar one: strong local automation, weak line cohesion.

That pattern does not belong to one vertical. It shows up in electronics assembly, where component availability and test-cell utilisation can look good while the line idles on missing synchronisation data. It shows up in warehouse logistics, where pick stations are automated but the WMS does not reliably feed real-time slotting, replenishment and exception logic back into the work cells. The stack works at the point of action and fails at the point of coordination.

The root cause is usually not a lack of dashboards. It is a lack of shared data contracts.

Most automation environments still treat MES, ERP and WMS as systems of record that sit adjacent to the robotics stack rather than inside the control loop. That means line-level scheduling data is not consistently carried into cell-automation pipelines. When a changeover is delayed, a batch is reprioritised, or material is short, the cell often learns too late—or through a bespoke integration path that breaks under edge cases. The local controller keeps executing its plan while the line has already changed.

For AI vendors, this has a blunt implication: model quality at the cell level is no longer enough if the product cannot observe and react to upstream and downstream constraints.

That matters especially for deployment design. AI-enabled automation stacks are increasingly sold as decision layers, not just perception or motion systems. But if those stacks cannot ingest schedule state, inventory state, order priority and quality events in near real time, they will produce outputs that are technically impressive and operationally misaligned. A reinforcement-learning model can recommend a fast action at a cell and still damage throughput if it is blind to line pacing. A scheduling assistant can optimize a workcell and still create WMS friction that takes longer to unwind than the original gain.
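The line-pacing point is easy to verify with a toy model. The sketch below (all numbers hypothetical) simulates a two-stage line: a cell feeding a fixed-rate downstream stage through a finite buffer. Speeding up the cell beyond the downstream rate raises its local output figure but not line throughput; the surplus only accumulates as work in progress.

```python
def run_line(cell_rate, downstream_rate, buffer_cap, steps):
    """Toy two-stage line: a cell feeds a fixed-rate downstream stage
    through a finite buffer. Line output is capped by the downstream
    stage, so pushing the cell faster only fills the buffer."""
    buffer_level = 0
    produced = shipped = 0
    for _ in range(steps):
        # The cell produces unless blocked by a full buffer.
        made = min(cell_rate, buffer_cap - buffer_level)
        buffer_level += made
        produced += made
        # The downstream stage consumes at its own pace.
        taken = min(downstream_rate, buffer_level)
        buffer_level -= taken
        shipped += taken
    return produced, shipped

# A "faster" cell (12/step) against the same 10/step downstream stage:
# local production rises slightly (until the buffer blocks the cell),
# but shipped units -- line throughput -- do not move at all.
print(run_line(cell_rate=10, downstream_rate=10, buffer_cap=20, steps=100))
print(run_line(cell_rate=12, downstream_rate=10, buffer_cap=20, steps=100))
```

The asymmetry is the whole argument: a model scoring the cell in isolation would call the second configuration an improvement.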

This is where end-to-end throughput visibility becomes more than a reporting feature. It is the operating model.

Teams that want to prove ROI need a path that runs in both directions. Line-level scheduling and exception data must flow into cell orchestration. Cell-level performance, fault events and work completion must flow back into MES, ERP and WMS without manual reconciliation. If those systems remain loosely coupled, the programme will continue to produce local wins and global ambiguity.

The technical shape of the fix is clear enough, even if the implementation is messy.

Start with standardized data contracts across the automation stack. The point is not abstract interoperability; it is making sure that events such as work-order release, material arrival, quality hold, machine fault, changeover complete and dispatch delay are represented in a way that both the control layer and the enterprise layer can understand. Proprietary point-to-point mappings can work for a pilot cell. They do not scale across a plant network.
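A minimal version of such a contract is just a shared event envelope that both the control layer and the enterprise layer can parse. The sketch below is illustrative, not a proposed standard: the field names, event taxonomy and JSON shape are assumptions, but the point is that every system emits the same envelope rather than a bespoke point-to-point mapping.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from enum import Enum
import json


class EventType(Enum):
    # The event taxonomy named in the text, as one shared vocabulary.
    WORK_ORDER_RELEASE = "work_order_release"
    MATERIAL_ARRIVAL = "material_arrival"
    QUALITY_HOLD = "quality_hold"
    MACHINE_FAULT = "machine_fault"
    CHANGEOVER_COMPLETE = "changeover_complete"
    DISPATCH_DELAY = "dispatch_delay"


@dataclass
class LineEvent:
    """One envelope for every event crossing the cell/enterprise boundary."""
    event_type: EventType
    source: str      # emitting system, e.g. "cell_07" or "mes"
    line_id: str     # which line the event belongs to
    timestamp: str   # ISO 8601, UTC
    payload: dict = field(default_factory=dict)  # event-specific detail

    def to_json(self) -> str:
        d = asdict(self)
        d["event_type"] = self.event_type.value
        return json.dumps(d)


# A hypothetical quality hold raised by a cell, readable by MES, ERP and WMS alike.
evt = LineEvent(
    event_type=EventType.QUALITY_HOLD,
    source="cell_07",
    line_id="biw_line_2",
    timestamp=datetime.now(timezone.utc).isoformat(),
    payload={"work_order": "WO-4411", "reason": "weld_spatter"},
)
print(evt.to_json())
```

In practice the envelope would carry versioning and schema validation; the sketch only shows why a common shape beats N-squared point-to-point mappings across a plant network.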

Next, instrument the line as a system, not a collection of cells. That means dashboards that show not just cell uptime or local OEE, but the causal chain from schedule adherence to material availability to line throughput to order completion. If the only metric that improves is the one closest to the machine, you are probably measuring the wrong thing.
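One simple way to instrument that causal chain is a loss waterfall: start from planned output and attribute the shortfall to ordered categories, upstream first. The figures below are invented for illustration, but they show the typical shape of the problem: the loss the cell-local dashboard can see is the smallest slice.

```python
def loss_waterfall(planned_units, losses):
    """Attribute the gap between planned and actual output to ordered
    loss categories, upstream first. Returns (category, units_lost,
    units_remaining) after each stage."""
    remaining = planned_units
    stages = []
    for category, lost in losses:
        remaining -= lost
        stages.append((category, lost, remaining))
    return stages


# Hypothetical shift: 500 planned units, losses ordered along the chain
# from schedule adherence through material availability to the cell itself.
stages = loss_waterfall(500, [
    ("schedule_adherence", 40),     # late work-order release
    ("material_availability", 35),  # starved stations
    ("line_sync", 50),              # cells blocked/starved by each other
    ("cell_local", 15),             # the only part local OEE actually sees
])
for category, lost, remaining in stages:
    print(f"{category:22s} -{lost:3d} -> {remaining}")
```

If the dashboard only renders the last row, 15 lost units look like the whole story while 125 coordination losses go unattributed.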

Then close the feedback loop. A cell should not just execute instructions; it should be able to consume updated line context when the schedule changes. Likewise, MES, ERP and WMS should not be passive archives of what happened after the fact. They need to reflect what the automation layer is actually experiencing, in time to affect dispatching and planning decisions.
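The closed loop can be sketched as a cell controller that drains pending line-context updates before every action instead of executing a frozen plan. The class and message shapes here are hypothetical; the structural point is that reprioritisation or a line stop reaches the cell in time to change its next move.

```python
import queue


class CellController:
    """Sketch of a cell that re-reads line context before each action,
    rather than executing a plan fixed at dispatch time."""

    def __init__(self, context_updates: queue.Queue):
        self.updates = context_updates
        self.context = {"priority_order": None, "line_paced": True}

    def refresh_context(self):
        # Drain any pending line-level updates (schedule change, hold, etc.).
        while True:
            try:
                update = self.updates.get_nowait()
            except queue.Empty:
                break
            self.context.update(update)

    def next_action(self, default_job: str) -> str:
        self.refresh_context()
        if not self.context["line_paced"]:
            return "idle"  # line is down; don't build WIP it cannot absorb
        return self.context["priority_order"] or default_job


updates = queue.Queue()
cell = CellController(updates)
print(cell.next_action("JOB-A"))            # no overriding context yet
updates.put({"priority_order": "JOB-HOT"})  # MES reprioritises a batch
print(cell.next_action("JOB-A"))            # the cell picks up the change
```

The same channel works in reverse: the cell's fault and completion events flow back through the shared contract so MES, ERP and WMS reflect live state rather than after-the-fact archives.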

For manufacturers, the practical deployment implication is that integration work can no longer be treated as a support function hidden under capex. It is the system boundary that determines whether AI-enabled automation compounds value or simply redistributes it from one part of the plant to another. For vendors, the product implication is equally direct: the differentiator is increasingly less about a single model or robot and more about whether your stack can participate in line-level coordination without bespoke engineering every time.

That is why the current plateau should be read carefully. It is not evidence that automation has stopped working. It is evidence that the industry has learned to optimize cells faster than it has learned to connect them to the rest of the operation.

And until that integration layer is treated as the primary ROI bottleneck, most programmes will continue to look better in the cell than they do on the line.