Oct 23, 2025

Considerations for Safe Agentic Browsing

From the Microsoft Edge Blog; originally published as Considerations for Safe Agentic Browsing on blogs.windows.com.

Earlier today, we launched a Preview of Actions in Edge, an experimental, opt-in agentic browser feature, available for testing and research purposes. Actions in Edge uses modern CUA (Computer-Using Agent) models to complete tasks for users in their browsers. We are excited about the many emergent possibilities this feature brings, but as a new technology, it also introduces new potential attack vectors that we and the rest of the industry are working to address.

We take very seriously our responsibility to keep our users safe on the web. This space is so new and uncharted that we cannot do that in isolation. Our goal with this preview is to explore these new waters with a small set of engaged users and researchers who have a clear understanding of the possibilities and the potential risks of agentic browsers. We have built a number of mitigations and are working closely with the AI and security research community to develop new approaches, which we will test with our active community over the next few months. We welcome all input and feedback and will be actively engaged on our Discord channel.

Users of the Actions in Edge feature should carefully review the risks and warnings in Edge before enabling the feature and be vigilant when browsing the web with it enabled.

“Ignore All Previous Instructions”: Prompt injection attacks

AI chatbots have been dealing with prompt injection attacks since their inception, with early attacks being more annoying than outright dangerous. But as AI Assistants have become more capable of doing things (with connectors, code generation, etc.), the risks have risen. Agentic browsers, by virtue of the additional power they bring and their access to the breadth of the open web, add more opportunities for attackers to take advantage of gaps and holes.

This is not a theoretical concern: researchers, including our own security teams, have already published proof-of-concept exploits that use prompt injection to take control of early agentic browsers. These proofs of concept demonstrate that, without protections, attackers can craft content that steals users' data or performs unintended transactions on their behalf.
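To make the attack concrete, here is a minimal, purely illustrative sketch of one naive defense layer: scanning page text for phrasing that addresses the model directly rather than the reader. The function name and the pattern list are hypothetical and not part of any Edge implementation.

```python
import re

# Hypothetical examples of instruction-like phrasing an attacker might hide
# in page content (e.g. in white-on-white text or HTML comments).
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?user'?s request",
    r"you are now",
]

def flag_suspicious_content(page_text: str) -> bool:
    """Return True if the page text contains phrasing that reads like a
    direct instruction to the model rather than ordinary page content."""
    lowered = page_text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Keyword matching like this is trivially bypassed (paraphrasing, encoding, other languages), which is exactly why real defenses layer it with model-based classification and the other mitigations described below.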

Our approach to Prompt Injection attacks

The key to any protection strategy is defense-in-depth:

Protecting from untrusted Input

This phase includes the most basic protection: limiting where Copilot gets data from. In this preview, we have implemented top-level site blocks to avoid known or risky sites. For any site that Actions in Edge can access, the data from those sites is checked carefully at multiple stages and marked as untrusted. These mitigations are currently live or in testing. Experienced security professionals will know that the ability to respond to novel attacks is as important as the security blocks themselves.

Detecting and blocking deviations from the task

Modern AI models, by design, take somewhat unpredictable paths to accomplish the tasks they are set. This can make it challenging to determine whether or not the model is doing what it was asked to do.

In Actions in Edge, we add checks to detect hidden instructions, task drift, and suspicious context, and to ask for confirmation when risk is higher. Examples include:

Screenshot of the Actions in Edge UI: a task is paused and the agent asks, "It looks like "en.wikipedia.org" might not be related to this action. Should I continue?" with options for the user to continue or cancel the action. Relevance checks confirm with the user when a site seems unrelated to the original task.
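A relevance check of the kind shown in the screenshot can be sketched crudely as below. A keyword overlap is a deliberately simplified stand-in for whatever relevance model the feature actually uses; the function and return values are hypothetical.

```python
def relevance_check(task_keywords: set[str], page_title: str) -> str:
    """Crude stand-in for a relevance model: if none of the task's keywords
    appear in the visited page's title, pause and ask the user instead of
    proceeding silently."""
    title_words = set(page_title.lower().split())
    if task_keywords & title_words:
        return "continue"
    # Surfaces a prompt like: "this site might not be related. Continue?"
    return "ask_user"
```

The important property is the failure mode: when the check cannot establish relevance, the agent defaults to asking rather than acting, which bounds the damage of an off-task detour.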

Limit access to sensitive data or dangerous actions

Finally, to mitigate the impact of any bypasses, when the model is running, the browser limits its access to sensitive data or dangerous actions. In this preview, we have disabled the ability for the model to use form fill data, including passwords.
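An action-gating layer like the one described above can be sketched as a simple policy lookup. The category names and the three-way split are illustrative assumptions, not the preview's actual policy, though the post does confirm that form-fill data (including passwords) is disabled entirely.

```python
# Hypothetical policy: actions the agent may never take on its own,
# and actions that always require explicit user approval.
DENIED = {"use_saved_passwords", "autofill_forms"}
CONFIRM = {"submit_payment", "send_message"}

def gate_action(action: str) -> str:
    """Return 'deny', 'confirm' (user must approve), or 'allow' for a
    proposed agent action. Runs even if earlier defenses were bypassed."""
    if action in DENIED:
        return "deny"
    if action in CONFIRM:
        return "confirm"
    return "allow"
```

Because this gate sits at the point where the model's intent becomes a real browser action, it limits blast radius even when a prompt injection slips past the earlier input and drift checks.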

Other restrictions include (but are not limited to):

As we test and evaluate both the use cases that the community discovers and finds valuable and the security concerns that emerge, we will work to close off additional avenues of potential risk.

Closing

We’re keen to learn from your testing—what tasks you try, how Copilot performs, and what new risks you encounter—so we can make the experience safer and more useful. If you have feedback or questions, please share them in the preview channels.