Amazon Web Services is moving AI agent browsing away from the open-web-default model and toward something closer to managed enterprise workstation policy. With support for Chrome enterprise policies and custom root CA certificates in Amazon Bedrock AgentCore Browser, organizations can now define where browser-based agents are allowed to go, how the browser behaves, and which internal services it can trust.
That shift matters because browser access is one of the easiest ways for an agent to become useful — and one of the quickest ways for it to create security problems. AWS frames the change as a response to that tension: unrestricted web access can expose agents to unauthorized domains, credential storage in a browser password manager, and downloads that bypass approved workflows. The new controls give operators a way to turn browser activity into a policy-governed workflow instead of an open-ended session.
The practical significance is not the browser itself, but the control plane around it. Chrome enterprise policies already exist as a mature enterprise mechanism, with more than 450 settings covering browser behavior. By bringing those settings into Bedrock AgentCore Browser, AWS is effectively borrowing an established governance model and applying it to agentic browsing.
Policy knobs now reach the agent browser
The main technical addition is straightforward: Bedrock AgentCore Browser can now accept Chrome enterprise policies in their familiar JSON configuration format. That means teams can define granular control over browsing behavior rather than depending on coarse allow-or-deny access.
AWS calls out several policy surfaces in particular:
- URL filtering to constrain where agents can navigate
- Download restrictions to limit file retrieval and storage behavior
- Password manager controls to prevent agents from storing or using credentials in ways that violate policy
Those controls matter because they map directly to the ways a browser can leak data or drift outside an intended workflow. A browser that can reach the public internet is not inherently safe for an agent simply because the agent is automated; the policy layer is what determines whether the agent can touch sensitive endpoints, pull unvetted artifacts, or interact with login flows that were never meant to be machine-driven.
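As an illustration, a minimal policy file combining those three surfaces might look like the following. The policy names (URLBlocklist, URLAllowlist, DownloadRestrictions, PasswordManagerEnabled) are standard Chrome enterprise policies; the specific values and hostnames are an assumed example, not AWS-documented defaults.

```json
{
  "URLBlocklist": ["*"],
  "URLAllowlist": [
    "https://intranet.example.com",
    "https://tickets.example.com"
  ],
  "DownloadRestrictions": 3,
  "PasswordManagerEnabled": false
}
```

Here `"URLBlocklist": ["*"]` denies everything that is not explicitly allowlisted, and `"DownloadRestrictions": 3` corresponds to Chrome's block-all-downloads setting, so the agent can neither wander off the approved hosts nor pull files down at all.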
The second major piece is connectivity. Many enterprises rely on internal services protected by a private certificate authority, and many also route traffic through SSL inspection infrastructure. Without trusted roots, HTTPS connections to those systems fail certificate validation. AWS says custom root CA certificates are now supported in AgentCore Browser so agents can connect to internal services and operate behind corporate SSL-intercept proxies by trusting the organization’s certificate authority.
That makes the feature more than a browser preference toggle. It is a trust model update. By allowing a private CA to be added, the service can participate in the same PKI assumptions that already govern enterprise endpoints and managed browsers.
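To make the trust-model point concrete, the sketch below creates a throwaway private CA and a leaf certificate for a hypothetical internal host, then shows that validation only succeeds once the private root is explicitly trusted. This is plain openssl illustrating the PKI behavior, not the AgentCore configuration itself; all file and host names are made up.

```shell
# Create a throwaway private root CA (illustration only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -days 365 -subj "/CN=Example Internal Root CA"

# Issue a certificate for a hypothetical internal service, signed by that CA
openssl req -newkey rsa:2048 -nodes -keyout svc.key -out svc.csr \
  -subj "/CN=internal.example.com"
openssl x509 -req -in svc.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out svc.pem -days 90

# Against the default trust store, validation fails...
openssl verify svc.pem || true

# ...but succeeds once the private root is supplied as a trusted CA
openssl verify -CAfile ca.pem svc.pem
```

The failure in the second-to-last step is exactly what an agent browser hits today when it reaches an internal HTTPS endpoint without the organization's root; supplying the custom CA is what turns that failure into a valid chain.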
What the security story actually changes
The security case for these controls is clear enough: reducing browser freedom reduces exposure. If the agent cannot browse arbitrary domains, store credentials in a password manager, or download files outside approved locations, there are fewer opportunities for credential leakage or data exfiltration.
But the flip side is just as important. Policy-first control only works if the policy is correct.
A misconfigured URL allowlist can break legitimate workflows or, worse, leave room for unintended destinations. A poorly scoped download policy may block necessary artifacts or allow files into places downstream systems do not inspect. And if browser access depends on custom trust roots, the organization inherits all the usual risks of certificate management, including stale roots, overbroad trust, and inconsistent rollout across environments.
The addition of SSL-intercept support is a good example of the governance trade-off. Enterprises often deploy interception to inspect traffic and enforce controls, but that infrastructure also changes the semantics of TLS trust. When agent browsing relies on those pathways, operators need to know exactly what is being trusted, where certificates are distributed, and how policy enforcement interacts with inspection proxies.
In that sense, AWS is not removing the security burden so much as relocating it. The browser becomes more governable, but the operator becomes responsible for making the governance precise.
What teams will have to put in place
Rolling this out in production will require more than enabling a feature flag.
First, teams will need a policy management discipline around Chrome enterprise JSON. That includes deciding which settings are mandatory, which are workload-specific, and how policy changes are reviewed and versioned. If multiple agent workloads share the same browser substrate, the policy model needs to scale without becoming a hand-edited patchwork.
Second, organizations will need a PKI strategy. If internal services are exposed through a private certificate authority, the browser trust store needs to be aligned with the enterprise certificate lifecycle. That means involving PKI teams early, defining certificate distribution and rotation practices, and deciding how custom roots are approved.
Third, security teams will want to align browser policy with existing identity and data handling controls. URL filtering alone does not solve access governance if the agent can still reach sensitive systems through permitted paths. Likewise, password manager restrictions need to match whatever credential mediation model the enterprise already uses for automation.
In practice, this suggests phased deployment. Teams are likely to start with a bounded set of AI workloads, validate policy behavior, and then expand coverage once they are confident the browser policy set matches enterprise expectations.
Why this changes the market conversation
This launch is also a strategic signal. AI agent browsers are increasingly being evaluated not just on what they can open, but on how tightly they can be controlled. A policy-first browser for agents is a different product category from a generic browser session with guardrails bolted on after the fact.
That distinction could matter for vendors building agent platforms and for buyers comparing deployment options. Enterprises that already standardize on Chrome policy management may now expect policy portability into AI runtimes. If the browser is the execution surface for web-based agents, then browser policy becomes part of the agent architecture, not an adjacent administrative concern.
It also raises the bar for competitors. Support for granular browser behavior, trusted roots, and internal connectivity is becoming less of a bonus feature and more of an enterprise requirement for agentic systems that need to operate inside regulated environments.
What to watch next
For teams evaluating the feature, the questions to track are operational, not promotional:
- Which Chrome enterprise policies are actually enforced in your target workflow?
- Can you audit policy application and browser actions cleanly across agent runs?
- Do custom root CA certificates fit your existing private certificate authority processes?
- How do incident metrics change once browser access is constrained by policy?
Those are the indicators that will show whether policy-based browsing is becoming a real control plane or just another configuration surface.
The larger test is whether organizations can maintain governance as they scale agent usage. If they can, Bedrock AgentCore Browser becomes a more credible fit for regulated environments where web access is necessary but cannot be left open-ended. If they cannot, the browser will still work — but the policy rails may become another place where complexity accumulates faster than confidence.



