Over the last several decades, the Windows team has added a stream of security mitigation features to the platform to help application developers harden their applications against exploitation. I commonly referred to these as the “Alphabet Soup” mitigations because each was usually known by an acronym: DEP/NX, ASLR, SEHOP, CFG, etc. The vast majority of these mitigations were designed to shield applications with memory-safety vulnerabilities, preventing an attacker from turning a crash into reliable malicious code execution.
Most of these mitigations were off by default for application-compatibility reasons: Windows has always worked very hard to ensure that each new version is compatible with the broad universe of existing software, and enabling a security mitigation by default could unexpectedly break some application and prevent users from having a good experience in the new version of Windows.
There were some exceptions; for instance, some mitigations were enabled by default for 64-bit applications, because the mere existence of a 64-bit build in the mid-2000s was evidence that the application was still being actively maintained.
In one case, Windows offered the user an explicit switch to turn on a mitigation (DEP/NX) for all processes, regardless of whether they opted in:
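That switch lives in System Properties (Performance Options > Data Execution Prevention). As a hedged sketch, the same policy can also be flipped from an elevated command line via bcdedit; the option and value names below follow the standard bcdedit documentation:

```powershell
# Show the active boot entry, including the current "nx" (DEP) policy.
bcdedit /enum

# Force DEP/NX on for every process, matching the "Turn on DEP for all
# programs and services" option in System Properties (requires elevation).
# Other supported values: OptIn (the default), OptOut, AlwaysOff.
bcdedit /set nx AlwaysOn
```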
But, generally, application developers were required to opt in to new mitigations by setting compiler/linker flags, registry keys, or by calling the SetProcessMitigationPolicy API. One key task for product security engineers in each product cycle was to research the new mitigations available in Windows and opt the new version of their product (e.g., IE, Outlook, Word) into the newest mitigations.
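SetProcessMitigationPolicy is a native API that an application calls on its own behalf early in startup, so there's no script equivalent for that path; however, the built-in ProcessMitigations PowerShell module offers a quick way to inspect what an executable has been opted into. A minimal sketch (notepad.exe is just a placeholder name):

```powershell
# Requires the built-in ProcessMitigations module (present on Windows 10
# 1709 and later).
Import-Module ProcessMitigations

# Show the system-wide default mitigation settings.
Get-ProcessMitigation -System

# Show what's been configured for a specific executable name.
Get-ProcessMitigation -Name notepad.exe
```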
The requirement that developers themselves opt in was frustrating to some security architects, though: what if there were an older app that was no longer maintained but that could be protected by one of these new mitigations?
In response, EMET (the Enhanced Mitigation Experience Toolkit) was born. This standalone application provided a user-friendly experience for enabling mitigations for an app; under the covers, it twiddled the bits in the registry for the process name.
EMET was useful, but it exposed the tradeoff to security architects: they could opt a process into new mitigations, but they ran the risk of breaking the app, either entirely or only in certain scenarios. They would have to extensively test each application and mitigation to ensure compatibility across the scenarios they cared about.
EMET 5.52 went out of support way back in 2018, but it has since been replaced by the Exploit Protection node in the Windows Security app. Exploit Protection offers a very similar user experience to EMET, allowing the user to specify protections on a per-app basis as well as across all apps.

If you dig into the settings, you can see the available options, both system-wide and on a “per-program” basis, including the settings put into the registry by application installers and the like.
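If you're curious where those per-program settings live: on current Windows builds, as far as I know, they're stored as binary values under the Image File Execution Options key. A sketch for enumerating them; treat the path and the MitigationOptions value name as assumptions to verify on your own machine:

```powershell
# Per-app Exploit Protection settings land under the Image File Execution
# Options (IFEO) key as an undocumented binary bitmask, so prefer the UI
# or the cmdlets over editing these values directly.
$ifeo = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options'

# List executables that have mitigation settings registered.
Get-ChildItem $ifeo |
    Where-Object { $_.Property -contains 'MitigationOptions' } |
    Select-Object -ExpandProperty PSChildName
```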

While built into Windows, Exploit Protection also integrates with Microsoft Defender for Endpoint (MDE), enabling security admins to easily deploy rules across their entire tenant. Some rules offer an “Audit mode”, which allows a security admin to check whether a given rule is likely to be compatible with their “real-world” deployment before enabling it in enforcement mode.
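If you want to see what Audit mode (or enforcement) is recording on a given machine, the events land in dedicated Security-Mitigations event log channels. A sketch; the channel names here are from memory, so verify them locally first:

```powershell
# Find the Exploit Protection event channels on this machine.
Get-WinEvent -ListLog '*Security-Mitigations*' | Select-Object LogName

# Pull the most recent user-mode mitigation events.
Get-WinEvent -LogName 'Microsoft-Windows-Security-Mitigations/UserMode' -MaxEvents 20 |
    Select-Object TimeCreated, Id, Message
```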
Beyond the Windows UI and MDE, mitigations can also be deployed via a PowerShell module; often, you’ll use the Export link on a machine that’s configured the way you like and then import that XML to your other desktops.
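From memory, that XML round-trip looks something like the following; check the ProcessMitigations module documentation for the exact parameter names before relying on them:

```powershell
# Export this machine's Exploit Protection configuration to XML
# (the same format the "Export settings" link in the UI produces).
Get-ProcessMitigation -RegistryConfigFilePath .\ExploitProtection.xml

# Apply that XML on another machine (run from an elevated prompt).
Set-ProcessMitigation -PolicyFilePath .\ExploitProtection.xml
```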
The Big Challenge
The big challenge with Exploit Protection (and EMET before it) is that, if these mitigations were safe to apply by default, we would have done so. Any of these mitigations could conceivably break an application in a spectacular (or nearly invisible) way.
Exploit mitigations like “Bottom-Up ASLR” are opt-in because they can cause compatibility problems for applications that make assumptions about memory layout. Opting an application into such a mitigation can cause it to crash at startup, or later at runtime, when the application’s (now-incorrect) assumption causes a memory access error. Crashes might occur every time, or only sporadically.
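For example, here’s roughly how you’d opt a single executable into Bottom-Up ASLR with the ProcessMitigations module, and how you’d back the change out if the app starts misbehaving (legacyapp.exe is a hypothetical name):

```powershell
# Opt a single executable into Bottom-Up ASLR (elevated prompt).
Set-ProcessMitigation -Name legacyapp.exe -Enable BottomUp

# If the app starts crashing (immediately or randomly), back it out.
Set-ProcessMitigation -Name legacyapp.exe -Disable BottomUp
```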
When a mitigation is hit, you might see an explicit “block” event in the Event Log or Defender Portal events, or you might not: in some cases, a mitigation doesn’t simply block the offending operation; instead, Windows terminates the process outright. You might look to see whether Watson has captured a crash of the application as it starts, but typically debugging these sorts of failures entails starting the target application under a debugger and stepping through its execution until a failure occurs. That is rarely practical for anyone other than the developers of the application (who have its private symbols and source code). If excluding an application from a mitigation doesn’t work, it may be that the executable launches some other executable that also needs an exclusion. You might try collecting a Process Monitor log to see whether that’s the case.
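When no explicit block event shows up, a reasonable first triage step is to look for crash records. A hedged sketch, where ‘legacyapp’ is a placeholder for the process you’re investigating; Event ID 1000 is the classic Application Error crash record, and Windows Error Reporting (“Watson”) report metadata typically lands under ProgramData:

```powershell
# Look for recent crash records for the app (Application Error, ID 1000).
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 1000 } -MaxEvents 50 |
    Where-Object { $_.Message -match 'legacyapp' } |
    Select-Object TimeCreated, Message

# Check for Windows Error Reporting report folders mentioning the app.
Get-ChildItem "$env:ProgramData\Microsoft\Windows\WER\ReportArchive" -Recurse |
    Where-Object { $_.Name -match 'legacyapp' }
```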
Developer Best Practices
In the ideal case, developers will themselves opt in to (and verify) all available security mitigations for their apps, ensuring that they do not effectively “offload” the configuration and verification burden to their customers.
With the increasing focus on security across the software ecosystem, we see that best practice followed by most major application developers, particularly where it’s most useful (browsers and other internet clients). Browser developers in particular tend to go far beyond the “alphabet soup” mitigations, designing their products with careful sandboxing so that, even if an attacker achieves remote code execution, the damage is confined to a tight sandbox that protects the rest of the system.
Thanks for your help in protecting users!
-Eric