At this point I have close to a decade of experience with Azure and AWS/GCP, and I can confidently say Azure is objectively the worst when it comes to security.
Complaints about performance, "I don't like the portal", service and capacity availability, and the like are somewhat subjective or fixable, but I deeply believe Microsoft is the most insecure of the cloud giants on a measurable level.
Anyone who is serious about security should just avoid Microsoft; honestly, this has been the case since at least the early '00s.
And it’s not just the security of the platform itself that’s measurably worse - it’s also way easier to end up with insane security configurations thanks to the hellscape that is Entra. It all just feels like it’s held together with duct tape.
The deep integration with AD (now Entra) was the strongest selling point for Azure, but it’s also by far the biggest issue with the platform IMO.
There’s also just no consistency in the platform - the CLI, for instance, has totally different flags and names depending on which subcommand you’re using. It’s like this everywhere in Azure.
> There’s also just no consistency in the platform - the CLI, for instance, has totally different flags and names depending on which subcommand you’re using. It’s like this everywhere in Azure.
For all of AWS's faults, one of the reasons I really like them is how consistent everything is. There were so many instances where I could correctly guess the right command for the AWS CLI based on how other services worked; I could never do that with GCP or Azure.
I would love to read an article about how AWS ensures this kind of consistency. Given how Azure and GCP both messed this up, it's clearly not a trivial problem (even though it may seem like one).
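To make that concrete, here's a rough side-by-side of equivalent operations, written out as a small Python table since there's no canonical list; the commands are from memory, so treat exact flags and subcommand names as assumptions that may have drifted between CLI versions. AWS is almost always `aws <service> <verb>` with predictable list-*/describe-* verbs, while Azure mixes noun groups, verbs, and per-command flag names.

    # Rough illustration of the naming-consistency point. Commands are from
    # memory; exact flags/subcommands are assumptions and may have drifted
    # between CLI versions.
    equivalent_ops = {
        "list compute instances": {
            "aws": "aws ec2 describe-instances",
            "azure": "az vm list",
        },
        "list object storage": {
            "aws": "aws s3api list-buckets",
            "azure": "az storage account list",
        },
        "list serverless functions": {
            "aws": "aws lambda list-functions",
            "azure": "az functionapp list",
        },
        "read one secret": {
            "aws": "aws secretsmanager get-secret-value --secret-id my-secret",
            "azure": "az keyvault secret show --vault-name my-vault --name my-secret",
        },
    }

    for op, cmds in equivalent_ops.items():
        print(f"{op}:\n  AWS:   {cmds['aws']}\n  Azure: {cmds['azure']}")

The AWS column is guessable once you've used two or three services; the Azure column you basically have to look up every time.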
As someone who is greatly motivated to move off Azure (to on-prem, not to another cloud), do you know of any good collection of Azure security issues I could use as 'ammunition'? It would be greatly appreciated!
UPD: note to self - this seems like a good resource https://www.cloudvulndb.org/results
I have some notes somewhere, but unfortunately they don't have citations; these are just some of the vulns they've had in the last couple of years:
• Storm-0558 Breach (2023): Chinese hackers exploited a leaked signing key from a crash dump to access U.S. government emails, affecting 60,000+ State Department communications
• Azure OpenAI Service Exploitation (2024): Hackers bypassed AI guardrails using stolen credentials to generate illicit content, leading to Microsoft lawsuits against developers in Iran, UK, and Vietnam
• CVE-2025-21415 (CVSS 9.9): Spoofing vulnerability in Azure AI Face Service allowed authentication bypass and privilege escalation
• CVE-2023-36052: Azure CLI logging flaw exposed plaintext credentials in CI/CD pipelines, risking sensitive data leakage
• Azurescape (2021): Container escape vulnerability enabled cross-tenant access in Azure Container Instances, discovered by Palo Alto Networks
• ChaosDB (2021): Wiz researchers exploited CosmosDB’s Jupyter Notebook integration to access thousands of customer databases, including those of Fortune 500 companies
• Executive Account Takeover Campaign (2024): Phishing campaign compromised 500+ executive accounts via Azure collaboration tools with MFA manipulation
If your company or workplace is considering migrating from cloud to on-prem or from one cloud to another, I do this professionally, btw - feel free to reach out at this temporary email and we can chat: pale.pearl2178 at fastmail.com (to prevent my real email from being scraped from HN).
Great, thanks!
For me it's just a distant dream now, but I bet business will be booming for you in the coming years, especially if you're located in Europe ;)
This list of vulns, none of which anyone was ever bothered by except one (Storm-0558), doesn't prove your ridiculously sensational comment above.
Security issues/CVEs should never be used as the motivation to get off a particular platform; otherwise we'd never use Linux, macOS, or Windows (I hope you're a fan of OpenBSD... sometimes).
If these issues remain unfixed after being disclosed, or there's a pattern of fixes taking much longer than you feel they should have, that's valuable ammunition, as it shows the organization isn't responsive to security issues.
I agree you shouldn't write off any platform/software/etc based solely on the number of vulnerabilities. I also agree that how responsive they are to fixing things is a factor to consider. But I think that's only _a_ factor.
Take something like a container escape vulnerability.
We could have Vendor A, which is just running containerd on a bunch of hosts on a single network segment and throwing everyone's containers at it, so a container escape vulnerability essentially gets you access to everything any of their customers are running.
Whereas Vendor B segments running containers into per-customer VMs, so a container escape vulnerability means you can only access your own data. Not great, because one compromised container gives the attacker a path into the rest of your workloads, but at least I know they're maintaining a pretty solid wall between tenants.
Then there's Vendor C, which actually runs each container in a micro-VM framework, so every container is fully isolated by a hypervisor with its own emulated network stack and so on, and an escape really gets the attacker no more access than they already had inside the container.
A pattern of issues like Vendor A's is, well, a pattern. A series of issues showing that their systems are fundamentally not designed for proper isolation between tenants, and lack defense-in-depth measures to mitigate the fallout of the inevitable security issues, is a very good reason to write off Vendor A regardless of how quickly they respond to those issues.
I'm not going to go back and review all the Azure issues, but my recollection from the few writeups I've read definitely paints a picture of a lot more "Vendor A" type issues than I'd be comfortable with.
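For what it's worth, you can often get a rough idea of which of those three designs you've landed on from inside the container itself. Here's a heuristic probe in Python; the interpretation comments are assumptions about how runc-style shared-kernel containers, gVisor, and Kata/Firecracker-style micro-VMs typically present themselves, and none of these signals is conclusive on its own.

    # Heuristic probe of how isolated the current container runtime looks.
    # Run inside a container; the signals are indicative only.
    import os
    import platform

    def read(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return ""

    # On a plain runc/containerd setup this is the *host* kernel; micro-VM
    # runtimes boot their own guest kernel, and gVisor reports a synthetic one.
    print("kernel:", platform.release())

    # Micro-VMs usually expose a small, fixed vCPU count rather than the host's.
    print("cpus:", os.cpu_count())

    # Shared-kernel containers often show docker/kubepods cgroup paths here.
    print("cgroup:", read("/proc/1/cgroup")[:200])

    # Set whenever a hypervisor is present - which is nearly always true on a
    # cloud host anyway - so read it together with the other signals.
    print("hypervisor flag:", "hypervisor" in read("/proc/cpuinfo"))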
Not strictly security, but there are several long-standing issues with Azure DevOps build pipelines and Artifacts feeds. Using a private artifact feed in your pipeline inexplicably adds minutes to the time it takes to restore packages. And publishing C# NuGet packages together with the source/symbol files is a poorly supported and poorly documented mess (it doesn't help that the NuGet support in the dotnet CLI is missing important, long-requested features that are only available by using the full-fat NuGet client or MSBuild directly).
We just migrated off Azure after one too many deprecations or downtime incidents caused by some random new feature or change to how permissions work. We gave up.
Another reason to be worried about Microsoft’s Azure security guidelines, which state that “Identity is the new perimeter”.
Well, the perimeter is not a gate but a cattle guard, and I am not surprised to see some wolves eating a secret and a cow swaggering into the road.
Azure service APIs have always conflated the principles of “reachability from the public internet” and “anonymous access” into a single concept called “Public Access”, which, for Azure Key Vault, has six different public/private configuration combinations!
This vulnerability report did not include the Key Vault networking settings for “Public network access”, so more testing (but not much more) is needed to see whether the proxy side door can circumvent a resource ACL, a private endpoint, or both.
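If you want to check which of those combinations your own vaults are actually in, something like the sketch below would pull the relevant settings out of the management plane. This assumes the azure-identity and azure-mgmt-keyvault Python packages; the property names are from memory and may differ slightly between management API versions, and the placeholders are obviously not real values.

    # Sketch: dump the public-network-access / firewall posture of a Key Vault.
    # pip install azure-identity azure-mgmt-keyvault
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.keyvault import KeyVaultManagementClient

    subscription_id = "<subscription-id>"   # placeholders
    resource_group = "<resource-group>"
    vault_name = "<vault-name>"

    client = KeyVaultManagementClient(DefaultAzureCredential(), subscription_id)
    props = client.vaults.get(resource_group, vault_name).properties

    # Top-level toggle (may be absent on older API versions, hence getattr).
    print("public_network_access:", getattr(props, "public_network_access", None))

    acls = props.network_acls
    if acls:
        print("default_action:", acls.default_action)   # Allow vs Deny when no rule matches
        print("bypass:", acls.bypass)                    # e.g. AzureServices
        print("ip_rules:", [r.value for r in (acls.ip_rules or [])])
    else:
        print("no network ACLs configured")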
It's not just "identity", but "authorization". Really, what they mean is "defense in depth" minus firewalls (because the "in depth" part makes those less relevant), I think. And... that is a reasonable position... provided you get the "in depth" part right, which includes not having proxies that bypass authorization.
Binary Security found the undocumented APIs for Azure API Connections. In this post we examine the inner workings of the Connections allowing us to escalate privileges and read secrets in backend resources for services ranging from Key Vaults, Storage Blobs, Defender ATP, to Enterprise Jira and SalesForce servers.
Oh, a confused deputy vulnerability. Azure is not the only cloud provider with that oversight, let me tell you.
That’s a scary vulnerability. There’s no mention of the bug bounty paid out for it but I hope it was substantial.
Well, at the bottom of the article they mention that Microsoft first closed the issue as invalid, and on the second attempt they closed it as "cannot be reproduced" (after fixing it).
So from that I can infer there was no payment.
I once reported a trivial way to infer details about passwords in Windows. (Ctrl+arrow in password fields in Windows 8 jumped by character group even when the text was hidden, so if a prefilled password was "123 abc.de" it would stop after the 3, after the space (I think), after the c, after the dot, and finally after the e.)
All I got was an email: "that is interesting, bye bye". But it was fixed in the next patch, or the one after that, I think.
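If you're curious how much structure that leaks, here's a rough simulation of the behaviour; Windows' real word-break rules are more involved than a regex, so treat the character classes as an approximation.

    # Rough simulation of what Ctrl+arrow "word jumps" reveal about a hidden
    # password: the stop positions leak the length of each run of
    # letters/digits, punctuation, and whitespace.
    import re

    def ctrl_arrow_stops(password):
        # Treat runs of word chars, punctuation, and whitespace as "words".
        return [m.end() for m in re.finditer(r"\w+|[^\w\s]+|\s+", password)]

    print(ctrl_arrow_stops("123 abc.de"))  # [3, 4, 7, 8, 10] -> run lengths 3, 1, 3, 1, 2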
So I didn't care to report the two bigger problems I found with Azure Information Protection [1][2]. I thought about reporting them but decided against it.
And I will continue to tell people that I don't care to do free work for MS when they won't even give me a t-shirt or a mug, or even acknowledge it.
Maybe if you're a security researcher it can be worth it, but if you just find something interesting you'll probably be better rewarded by Reddit or HN - yes, the upvotes are worthless, but less so than a dismissive email.
[1]: one in the downloadable AIP tooling where you can easily smuggle cleartext information with rock-solid plausible deniability - I found it by accident after having implemented part of a pipeline in the most obvious way I could think of.
[2]: the second had to do with how one can configure SharePoint to automatically protect files with AIP on download. The only problem was that if you logged in using another login sequence (sorry for the lack of details; this was before the pandemic and it was just a small part of what I was working on at the time), SharePoint would conveniently forget all about it, despite all efforts by me, the security admin at the company, and the expert that Microsoft sent to fix it.
> the expert that Microsoft sent to fix it.
Ha ... ha ... ha ... ha ... did they give you the runaround for several months until you dropped the issue? It's actually pretty astounding that they don't get sued for this practice. If a company is paying for support and is given illiterate noobs, then that is a breach of contract, I would think. I would never recommend entering a contract with MSFT; they produce trash products they can't support and are more invested in their legal team than in the actual product.
Reminds me of an issue I reported years ago to the super-special-premier support my company pays for. I never got to somebody who actually understood the issue but there were several managers who constantly tried to have meetings and close the ticket.
> there were several managers who constantly tried to have meetings and close the ticket.
Managers on the support side or your teams?
I thought the same when a friend of mine reported something to Apple. I would guess it's SOP at this point across big tech, unless something is too big to ignore.
You might have no idea how expensive providing great support to customers is when you're a vendor like Apple or Microsoft. It's like backports, which are even more unbelievably expensive, and those are gone industry-wide for that reason.
Think of the opportunity cost of having smart, capable, experienced staff doing support or backports instead of actual dev work. (Especially backports: back when they were done frequently, they were done precisely because customers are risk-averse, so a great deal more review and testing (with a much larger test matrix) was required, with an attendant huge increase in cost.) That cost is enormous. But of course vendors do need to provide some support, and at some point really good support for the really serious bugs, and the vendor will get there in time, but first the customer demand and pressure has to build.
I can't speak to Apple, but wrt Microsoft, you're not appreciating just how bad the support (or even the documentation) is, and you're not appreciating how much people pay for support on top of the product.
I feel like I know more about M365 than anyone I talk to at MS. That's bad.
The intention of the password entry dots isn’t to prevent folks with unrestricted physical access to the machine from exfiltrating information; it’s to stop the password from appearing in screenshares and casual “over the shoulder” observations.
Honestly, I’m surprised they even acknowledged that as a bug, given there are many ways to get a whole lot more info than what you demonstrated - for instance, the built-in “eye” button that is purpose-built to reveal the full password to anyone with physical access to the machine who wishes to see it.
The caller still needs at least the Reader role, so it was limited to accounts that had at least been added to the Azure subscription as Readers.
I'm glad they fixed it, but this doesn't seem too scary??
Suppose user U has read access to Subscription S, but doesn't have access to keyvault K.
If user U can gain access to keyvault K via this exploit, it is scary.
[Vendors/Contingent staff will often be granted read-level access to a subscription under the assumption that they won't have access to secrets, for example.]
(I'm open to the possibility that I'm misunderstanding the exploit)
My reading of this is that the Reader must have read access to the API Connection in order to drive the exploit [against a secure resource they lack appropriate access to]. But a user can have Reader rights on the Subscription, which does cascade down to all objects, including API Connections.
Your take is spot on, sir.
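For anyone who wants to see how visible these objects are, a subscription-level Reader can enumerate them with a few lines; the sketch below assumes the azure-identity and azure-mgmt-resource Python packages and that the connections show up under the usual Microsoft.Web/connections resource type. That visibility was the starting point; the bug was that an undocumented API then let the Reader actually use the connection.

    # Sketch: list the API Connection resources visible to the current identity.
    # pip install azure-identity azure-mgmt-resource
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    subscription_id = "<subscription-id>"   # placeholder
    client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

    # Reader on the subscription is enough to see every one of these.
    for conn in client.resources.list(filter="resourceType eq 'Microsoft.Web/connections'"):
        print(conn.name, conn.id)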
It's a feature, not a bug: "Azure’s Security Vulnerabilities Are Out of Control" - https://www.lastweekinaws.com/blog/azures_vulnerabilities_ar...
> Let’s start with some empathy, because let’s face it: Nobody sets out to build something insecure except maybe a cryptocurrency exchange.
:-)
Nobody sets out to build something insecure but if they go with Azure....
"Microsoft confirms partial loss of security log data on multiple platforms" - https://www.cybersecuritydive.com/news/microsoft-loss-securi...
"Microsoft called out for ‘blatantly negligent’ cybersecurity practices" - https://www.theverge.com/2023/8/3/23819237/microsoft-azure-b...
At least this new one seems to have been fixed within two months: January 6th to February 20th.
So this was vulnerable? https://azure.microsoft.com/en-us/explore/global-infrastruct...
Maybe with enough traction, they'll lose out on huge contracts because of stuff like this. Seems the only way to get stuff fixed is to attach dollars to it.
> The Connector for Key Vaults is maybe the one with the highest impact.
Yeah, no joke. Considering how well protected Azure Key Vaults typically are, and what's in them (secrets, certificates, etc.), this is a huge way to compromise a lot of other things. It's like finding the keys to the doors.
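To put the impact in perspective: once something can read a vault on your behalf, enumerating everything inside it is a few lines against the data plane. A minimal sketch with the azure-identity and azure-keyvault-secrets packages; the vault name is a placeholder.

    # Why vault read access is "the keys to the doors": with data-plane access,
    # listing and reading every secret is trivial.
    # pip install azure-identity azure-keyvault-secrets
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    client = SecretClient(
        vault_url="https://<vault-name>.vault.azure.net",  # placeholder
        credential=DefaultAzureCredential(),
    )

    for props in client.list_properties_of_secrets():
        secret = client.get_secret(props.name)
        # Don't print secret values for real; truncated here just to make the point.
        print(props.name, "->", (secret.value or "")[:4] + "...")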