> Or at least, you can do that if you still care about MSAA. :)
I am a huge fan of all the traditional, supersampling-style, intra-frame-only anti-aliasing techniques. The performance cost begins to make sense when you realize that these techniques are essentially perfect for the general case: they actually increase the information content of each pixel within each frame. Many modern techniques rely on multiple consecutive frames to build the final result. That is tantamount to game dev quackery in my book.
SSAA is even better than MSAA and is effectively what you are using in any game where you can set the "render scale" to a figure above 100%. It doesn't necessarily need to come in big scary powers of two (it used to, which made enabling it a hard sell). Even small oversampling rates like 110-130% can make a meaningful difference in my experience. If you can afford to go to 200% scale, you get a 4x increase in information content per pixel.
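To make the "4x per pixel" concrete: at 200% render scale the resolve step averages a 2x2 block of rendered samples into each display pixel. Here's a minimal CPU-side sketch of that box filter in plain C++ (the `Image` struct is made up for illustration; real resolves run on the GPU and should average in linear light rather than sRGB):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical framebuffer: packed RGBA8, row-major.
struct Image {
    int width = 0, height = 0;
    std::vector<uint8_t> rgba; // width * height * 4 bytes
};

// Box-filter a 200%-scale render (2x width, 2x height) down to display
// resolution: every output pixel averages the 2x2 block of rendered samples,
// which is where the "4x information per pixel" upper bound comes from.
Image resolve2x(const Image& hi) {
    Image lo;
    lo.width  = hi.width / 2;
    lo.height = hi.height / 2;
    lo.rgba.resize(static_cast<size_t>(lo.width) * lo.height * 4);
    for (int y = 0; y < lo.height; ++y) {
        for (int x = 0; x < lo.width; ++x) {
            for (int c = 0; c < 4; ++c) {
                int sum = 0;
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx)
                        sum += hi.rgba[((2 * y + dy) * hi.width + (2 * x + dx)) * 4 + c];
                lo.rgba[(y * lo.width + x) * 4 + c] = static_cast<uint8_t>(sum / 4);
            }
        }
    }
    return lo;
}
```

Non-integer scales like 110-130% work the same way, just with a fractional filter footprint instead of a clean 2x2 average.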
Yeah, and we can actually afford to do it nowadays. I'm currently making a game with very retro graphics (think Minecraft-level pixelated stuff).
Sure, the textures themselves aren't too high fidelity, but since the pixel shader is so simple, it's quite feasible to do tricks that would have been impossible even ten years ago. I can run the game even with 8x SSAA (that means 8x8=64 samples per pixel) and almost-ground-truth 128x anisotropic filtering.
There's practically zero temporal aliasing and zero spatial aliasing.[0] Now of course, some people don't like the lack of aliasing too much - probably conditioning because of all the badly running, soulless releases - but I think that this direction is the future of gaming. Less photorealism, more intentional graphics design and crisp rendering.
(edit: swapped the URL because imgur was compressing the image to shit)
[0] https://files.catbox.moe/46ih7b.png
It can't be a 4x increase, because the additional information will be correlated and predictable.
If I render a linear gradient at increasingly higher resolutions, I'm obviously not creating infinite information in the continuum limit.
4x is the upper bound. On average across all of gaming the information gain is going to be much more pedestrian, but where the extra information is not predictable it does make a big impact. For some pixels you do get the full 4x increase and those are the exact pixels for which you needed it the most.
You still don't get the 4x increase on those pixels. You get it compressed down to a coverage-weighted blend of the elements within the pixel. With a 4x higher-res display, instead of 4x MSAA or 4x SSAA, you get more information in that area because you preserve more of the spatial arrangement of those elements, rather than just their coverage.
Ok let's try this. In this scenario you have access to a channel with up to 4x more information than what you otherwise would have access to. Exactly how the information is sampled is application specific and will introduce variability into the final result.
If we want to get very pedantic, the information gain per pixel could actually be far more dramatic than 4x under any super sampling strategy. Assume a pathological case like a very dark room where 100% render scale doesn't have a single pixel that picks up the edge of a prop in a corner. At higher render scale maybe you start to get a handful of pixels that represent the feature. Even if you blend these poorly, you still get better than nothing. Some might argue that going from zero information to any information at all represents an infinite gain.
Depending on the details of the rasterizer, at the lower resolution bits of that feature can still show up at the same rate, just with more frames not catching any of them.
If only one pixel of it shows up, then whenever it is picked up it is likely overrepresented, so it may represent a loss rather than a gain relative to a baseline in which, perfectly supersampled, it would show up with, say, 0.1% intensity.
The all-black frames may be more accurate relative to that baseline than the ones that pick it up at much stronger intensity.
If the perfectly supersampled feature would show up with 50.1% intensity, the frames with it may be more accurate than the frames without it, but now that will also be the more common case.
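That argument is easy to check numerically. A tiny C++ sketch (the coverage values and frame count are just illustrative, not tied to any real renderer) comparing a one-sample-per-frame estimate of a sub-pixel feature against its true coverage: at 0.1% coverage the frames that catch it are the badly wrong ones, while at ~50% coverage the catching frames become the more accurate and the more common ones.

```cpp
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    // coverage = fraction of the pixel the bright feature actually occupies
    // (the "perfectly supersampled" intensity from the comment above).
    for (double coverage : {0.001, 0.501}) {
        const int frames = 100000;
        double errHit = 0, errMiss = 0;
        int hits = 0;
        for (int f = 0; f < frames; ++f) {
            // One jittered sample per frame: intensity is 1 if the sample
            // lands on the feature, 0 otherwise.
            bool hit = uni(rng) < coverage;
            double estimate = hit ? 1.0 : 0.0;
            double err = estimate > coverage ? estimate - coverage : coverage - estimate;
            (hit ? errHit : errMiss) += err;
            hits += hit;
        }
        std::printf("coverage %.3f: avg error when caught %.3f, when missed %.3f (caught %d/%d)\n",
                    coverage, hits ? errHit / hits : 0.0,
                    errMiss / (frames - hits), hits, frames);
    }
}
```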
I have always heard that MSAA doesn't work well with deferred rendering, which is why it is no longer used[0], which is a shame. Unreal TAA can be good in certain cases but breaks down if both the character and background are in motion, at least from my testing. I wish there was a way to exclude certain objects from TAA. I assume this could be done by rendering the frame both with and without the object, but that seems wasteful. Rendering at a higher resolution seems like a good idea only when you already have a high-end card.
[0] eg https://docs.nvidia.com/gameworks/content/gameworkslibrary/g...
MSAA doesn't do anything for shader aliasing, it only handles triangle aliasing, and modern renderers have plenty of shader aliasing so there isn't much reason to use MSAA.
The high triangle count of modern renderers might in some cases push MSAA closer to SSAA in terms of cost and memory usage, all for a rather small sample count relative to a temporal method.
Temporal AA can handle everything, and is relatively cheap, so it has replaced all the other approaches. I haven't used Unreal TAA; does Unreal not support the various vendor AI-driven TAAs?
Unreal has plugins to support other AA methods, including DLSS and FSR (which can both be used just for AA, IIRC). I tried FSR and it didn't work as well as the default TAA for certain cases, but I'm pretty sure I just had some flickering issue in my project that I was trying to use TAA to solve as a band-aid, so it's maybe not a great example of which AA methods are good. I'm not an expert and only use Unreal in my spare time.
>Temporal AA can handle everything
With the tradeoff of producing a blurry mess.
Sometimes TAA artifacts are so distracting that I end up disabling AA altogether and consider it an improvement.
>I have always heard that MSAA doesn't work well with deferred rendering which is why it is not longer used
Yes, but is deferred still the go-to method? I think MSAA is a good reason to go with "forward+" methods.
Even when using forward+ style rendering it's still common for certain effects to get handed off to deferred passes, which precludes easily supporting MSAA anyway. For example the recent Doom and Call of Duty games use that hybrid approach, so you won't find MSAA support in them despite their use of forward shading.
It's very rare for games to be 100% forward nowadays, outside of ones specifically built for VR or mobile.
From what I have heard, forward is still used for games that are developed especially for VR. Unreal docs:
>there are some trade-offs in using the Deferred Renderer that might not be right for all VR experiences. Forward Rendering provides a faster baseline, with faster rendering passes, which may lead to better performance on VR platforms. Not only is Forward Rendering faster, it also provides better anti-aliasing options than the Deferred Renderer, which may lead to better visuals[0]
This page is fairly old now, so I don't know if this is still the case. I think many competitive FPS titles use forward.
>"forward+" methods.
Can you expound on this?
[0] https://dev.epicgames.com/documentation/en-us/unreal-engine/...
> Can you expound on this?
"forward+" term was used by paper introducing tile-based light culling in compute shader, compared to the classic way of just looping over every possible light in the scene.
Aren't some forms of aliasing specifically temporal in nature? For example, the flicker on movement induced by high detail density.
I appreciate people standing up for classical stuff, but I don't want the pendulum swung too far back the other way either.
That flickering is heavily reduced already by mipmapping and anisotropic filtering.
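The reason mipmapping helps with that shimmer is that the sampler switches to a lower-resolution, pre-filtered mip as soon as the texel footprint of a pixel grows past one texel. A rough C++ sketch of the standard LOD pick from screen-space UV derivatives (the hardware does this per 2x2 pixel quad; the names here are just illustrative):

```cpp
#include <algorithm>
#include <cmath>

// ddx/ddy: change of the UV coordinate per pixel step in x/y (what dFdx/dFdy
// give you in a shader); texWidth/texHeight: texture size in texels.
float selectMipLevel(float ddx_u, float ddx_v, float ddy_u, float ddy_v,
                     float texWidth, float texHeight) {
    // How many texels does one screen pixel cover along each screen axis?
    float lenX = std::hypot(ddx_u * texWidth, ddx_v * texHeight);
    float lenY = std::hypot(ddy_u * texWidth, ddy_v * texHeight);
    // Footprint wider than one texel -> positive LOD -> coarser, pre-averaged
    // mip, which is what removes the high-frequency shimmer on movement.
    float rho = std::max(lenX, lenY);
    return std::max(0.0f, std::log2(rho));
}
```

Anisotropic filtering refines this by taking several samples along the longer axis of the footprint instead of jumping straight to the blurrier mip.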
I'm a huge fan too, but my understanding is that traditional intra-frame anti-aliasing (SSAA, MSAA, FXAA/SMAA, etc.) does not increase the information capacity of the final image, even though it may appear to do that. For more information per frame, you need one or more of: Higher resolution, higher dynamic range, sampling across time, or multi-sampled shading (i.e. MSAA, but you also run the shader per subsample).
MSAA does indeed have a higher information capacity, since the MSAA output surface has a 2x, 4x or 8x higher resolution than the rendering resolution, it's not a 'software post-processing filter' like FXAA or SMAA.
The output of a single pixel shader invocation is duplicated 2, 4 or 8 times and written to the MSAA surface through a triangle-edge coverage mask, and once rendering to the MSAA surface has finished, a 'resolve operation' happens which downscale-filters the MSAA surface to the rendering resolution.
SSAA (super-sampling AA) is simply rendering to a higher resolution image which is then downscaled to the display resolution, i.e. MSAA invokes the pixel shader once per 'pixel' (yielding multiple coverage-masked samples) while SSAA invokes it once per 'sample'.
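Since the article is about D3D12: the resolve operation described above maps to a single API call there. A trimmed sketch, with the device, command list, resources and resource-state barriers assumed to be handled elsewhere:

```cpp
#include <d3d12.h>

// Sketch only: msaaTarget was created with SampleDesc.Count = 4 and
// backBuffer with SampleDesc.Count = 1; transitions to
// RESOLVE_SOURCE / RESOLVE_DEST are omitted for brevity.
void resolveMsaa(ID3D12GraphicsCommandList* cmdList,
                 ID3D12Resource* msaaTarget,
                 ID3D12Resource* backBuffer) {
    // This single call is the 'resolve operation': the GPU filters the four
    // coverage samples of each pixel down to one value in the back buffer.
    cmdList->ResolveSubresource(backBuffer, 0, msaaTarget, 0,
                                DXGI_FORMAT_R8G8B8A8_UNORM);
}
```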
I probably only have a basic understanding of MSAA, but isn't its advantage reduced by modern detail levels even in situations where it could be used? There are so many geometry edges in models and environments (before you even consider things like tessellation) to anti-alias, and so much intricate shading on flat surfaces, that to get a good result you're effectively supersampling much of the image.
MSAA actually does; it stores more information per pixel using a special buffer format.
What extra information goes in there?
Really appreciate the detailed article! I was on the team that shipped D3D11 and helped with the start of D3D12. I went off to other things and lost touch - the API has come a long way! So many of these features were just little whispers of ideas exchanged in the hallways. So cool to see them come to life!
One of the features that was hyped about DirectX 12 was that it had explicit support for multiple GPUs, even if they were different models from different manufacturers.
As far as I know, very few, if any, games currently take advantage of this feature. It's somewhat interesting that this is the case; when you think of it, many PCs and laptops do have two GPUs, one as a discrete graphics card and the other integrated with the CPU itself (and over the last few years these integrated GPUs have become powerful enough to run some demanding games at low or medium settings at 60 fps).
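The first step of that explicit multi-adapter story is just enumerating every GPU yourself and creating a device per adapter. A bare-bones C++ sketch with DXGI (error handling kept minimal; how work is then split between the devices is entirely up to the application):

```cpp
#include <d3d12.h>
#include <dxgi1_6.h>
#include <cstdio>

// Enumerate every adapter (discrete and integrated alike) and create a
// D3D12 device on each one. With explicit multi-adapter the application,
// not the driver, decides how to distribute work between them.
void listAdapters() {
    IDXGIFactory6* factory = nullptr;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory)))) return;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        std::wprintf(L"Adapter %u: %ls\n", i, desc.Description);

        ID3D12Device* device = nullptr;
        if (SUCCEEDED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            // From here each device gets its own queues, heaps and command
            // lists; cross-adapter sharing goes through shared heaps.
            device->Release();
        }
        adapter->Release();
    }
    factory->Release();
}
```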
https://www.pcgamer.com/ashes-of-the-singularity-benchmark-u...
Last time I tried to do this (run iGPU as a compute coprocessor for the CPU) it was no longer supported, the driver enumerating both but forcing the use of one only.
Me, a tabletop RPG gamer and board gamer who hasn't played computer games in years (literally more than a decade): "Huh, that's interesting. Why are they rolling a 12-sided die between 1 and 3 times, choosing how many times it will be rolled by rolling a 6-sided die and dividing the number in half rounded up?"
Because before I clicked on the article (or the comments), that's the only sense I could make of the expression "d3d12" — rolling a d12, d3 times.
Gamers call Direct3D 12 "DX12". I've seen "D3D12" mostly used by graphics programmers. I'm not a graphics programmer though.
Game developers also call it DX12. We like to name targets with two characters and a number.
It should have been Vulkan from the start instead of another NIH.
Out of the modern non-console 3D APIs (Metal, D3D12, Vulkan), Vulkan is arguably the worst because it simply continued the Khronos tradition of letting GPU vendors contribute random extensions which then may or may not be promoted to the core API, lacking any sort of design vision or focus beyond the Mantle starting point.
Competition for 3D APIs is more important than ever (e.g. Vulkan has already started 'borrowing' a couple of ideas from other APIs - for instance VK_KHR_dynamic_rendering (i.e. the removal of render pass objects) looks a lot like Metal's transient render pass system, just adapted to a non-OOP C API).
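For those who haven't seen it, VK_KHR_dynamic_rendering really does read like Metal's transient passes flattened into C structs. A trimmed sketch (Vulkan 1.3 or the KHR extension assumed to be enabled, image views and layout transitions handled elsewhere):

```cpp
#include <vulkan/vulkan.h>

// Record a render pass without any VkRenderPass/VkFramebuffer objects:
// the attachments are described inline, at command-buffer recording time.
void recordPass(VkCommandBuffer cmd, VkImageView colorView, VkExtent2D extent) {
    VkRenderingAttachmentInfo color{};
    color.sType = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO;
    color.imageView = colorView;
    color.imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    color.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;   // transient-style: clear on begin
    color.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
    color.clearValue.color = {{0.0f, 0.0f, 0.0f, 1.0f}};

    VkRenderingInfo info{};
    info.sType = VK_STRUCTURE_TYPE_RENDERING_INFO;
    info.renderArea = {{0, 0}, extent};
    info.layerCount = 1;
    info.colorAttachmentCount = 1;
    info.pColorAttachments = &color;

    vkCmdBeginRendering(cmd, &info);   // vkCmdBeginRenderingKHR pre-1.3
    // ... bind pipeline, draw ...
    vkCmdEndRendering(cmd);            // vkCmdEndRenderingKHR pre-1.3
}
```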
It sounds like maybe Khronos should handle 3D APIs like they do with Slang[1]: just host someone else's API, such as Diligent Engine or BGFX, and then let them govern its development.
[1]: https://shader-slang.org/
Just like with Mantle, Slang came to Khronos from outside: after Khronos stated they would not be doing any further GLSL development and that it was up to the community to keep using HLSL or do something else themselves, NVidia decided a year later to contribute the Slang project.
You can read about DX12 being worse here: https://themaister.net/blog/2021/11/
But that's not even the point. Everyone could collaborate on the common API and make it better. Where were Apple and MS? Chasing their NIHs instead of doing that, so your argument is missing the point. It's not about how good it is (and it is good enough, though improving it is always welcome); it's about it being the only collaborative and shared effort. Everything else ends up irrelevant as a result, even if it could be better in some ways.
They were having fun with Sony and Nintendo.
Seeing what a clusterf#ck OpenGL 3 and 4 were (and Vulkan ended up being), and how Direct3D 11 is probably the most usable 3D API right now, both Apple and Microsoft were absolutely right to chase their own APIs. If not for competition from Metal and D3D12, Vulkan would still be forcing render passes and static pipelines.
No, they were absolutely wrong. It's like saying it was wrong for the Web to have common standards and that everyone should be using ActiveX, Flash, Silverlight and who knows what else. That argument is a complete fallacy. It's really good that the Web managed to get rid of those. But I'm sure lock-in proponents will never get tired of arguing that NIH is "the right way to go".
> It's like saying it was wrong for the Web to have common standards
Meanwhile, in the real world, it is Chrome that is setting the standards, and everybody is following it while holding up a fig leaf to maintain some semblance of dignity. Why? Because the W3C failed to make decent standards. Are CSS and JavaScript anyone's idea of good architecture?
Sure, but imagine a real world where ActiveX is still a thing and arguments like the above are presented. South Korea had fun dealing with that nonsense.
The direction of the idea you are advocating for is completely wrong.
Vulkan only exists because DICE and AMD were nice enough to contribute their Mantle work to Khronos, otherwise they would still be wondering what OpenGL vNext should look like.
> otherwise they would still be wondering what OpenGL vNext should look like.
And the world would have been a better place. All we needed in OpenGL 5 was a thread-safe API with encapsulated state and more work on AZDO and DSA. Now, everyone has to be a driver developer. And new extensions are released one-by-one to make Vulkan just a little more like what OpenGL 5 should have been in the first place. Maybe in 10 more years we'll get there.
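For anyone who missed the 4.5 era, DSA already delivered the "encapsulated state" part: you operate on object names directly instead of binding everything into global slots first. A quick illustrative snippet (GL 4.5+, loader headers assumed; sizes and formats are just placeholders):

```cpp
#include <glad/glad.h>  // or any other GL 4.5+ function loader

// Direct State Access: no glBindBuffer/glBindTexture needed just to fill
// the objects; the global binding points are only touched when drawing.
void makeResources(GLuint& vbo, GLuint& tex,
                   const void* vertexData, GLsizeiptr vertexBytes) {
    glCreateBuffers(1, &vbo);
    glNamedBufferStorage(vbo, vertexBytes, vertexData, 0); // immutable storage
    glCreateTextures(GL_TEXTURE_2D, 1, &tex);
    glTextureStorage2D(tex, 1, GL_RGBA8, 256, 256);
    glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
```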
I used to be a big OpenGL advocate all the way up to Longs Peak; eventually I came to realise it is another committee-driven API, and writing a couple of plugin backends was never that big of a deal.
Just see the shading mess as well, with GLSL lagging behind, most devs using HLSL due to its market share, and now Slang, which again was contributed by NVidia.
Also, the day LunarG loses its Vulkan sponsorship, the SDK is gone.
Oh, I agree that OpenGL 3 and 4 were absolute disasters; D3D11 is right there for how to do this mostly correctly, and OpenGL doesn't even come close by comparison. But by the time 4.5 and 4.6 came out, things were going in the right direction. And then Vulkan killed the momentum.
Imagine an OpenGL which takes inspiration from D3D11 and dares to be even more user-friendly and intuitive. Instead, we got Vulkan, yay.
Spot on, and then on the Web we get a 10-year delay on hardware capabilities, and still no debugging tools after 15 years.
I can't imagine the world without Vulkan because while it is a lot lower level and more difficult to work with, it makes things like DXVK not only possible but quite performant. Gaming on Linux has been accelerated super strongly by projects like that.
Gaming on Linux is doing just fine on Android/Linux.
The problem is making gaming on GNU/Linux profitable; Vulkan will not fix that, and Proton is not a solution that will work out long term.
Nothing is ever going to be profitable on GNU/Linux except for using it to drive SaaS and other such schemes. People that use Linux on the desktop are, on average, much less likely to want to pay for any kind of software; this has been true for 30 years... Steam might change this, we'll see.
Sure.
> Linux Beats Mac Dramatically In Humble Bundle Total Payments
https://web.archive.org/web/20150415180723/http://www.thepow...
> Linux users pay 3x that of Windows users for Humble Indie Bundle 3
https://web.archive.org/web/20111130182955/https://www.geek....
Old links precisely because it happened before or soon after Steam came to Linux.
Relying on Microsoft being happy with Proton isn't a long-term strategy; additionally, no company's management stays around forever.
It doesn't matter what Microsoft thinks of Proton; Google v. Oracle had a pretty solid outcome:
> In a 6–2 majority, the Court ruled that Google's use of the Java APIs was within the bounds of fair use, reversing the Federal Circuit Appeals Court ruling and remanding the case for further hearing.
https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_...
Keep wishing.
As long as the Steam Deck remains popular, game developers do have an incentive to make sure their games work acceptably under Proton. I don't know why you think this isn't a viable long-term strategy; the Win32 ABI is incredibly stable, which is exactly what makes Wine/Proton work so well.
Zero effort, Valve is the one doing the work.
Because apparently Linux folks haven't learnt the OS/2 lesson.
Hard disagree. OpenGL state management was unfixable if it had to keep compatibility with OpenGL 2. That's why OpenGL 3/4 ended up being such huge messes.
The main problem with Vulkan is that Apple decided to go with its own Metal API, completely fracturing the graphics space.
All alternatives to Vulkan predate it, and it only exists thanks to Mantle's gift.
Not really. Metal technically existed before Vulkan, but it underwent a huge revision in 2017 after Vulkan release.
All APIs have revisions.
IMO without SPIR-V, OpenGL 5 would still be at a disadvantage in many scenarios. It's possible we would have gotten widespread support for it even without Vulkan, though.
OpenGL supports SPIR-V: https://www.khronos.org/opengl/wiki/SPIR-V
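Right, via GL_ARB_gl_spirv (core in 4.6). Loading a precompiled module looks roughly like this (sketch only; program linking and error checks omitted):

```cpp
#include <glad/glad.h>  // GL 4.6 loader assumed

// Feed a SPIR-V binary (e.g. produced offline by glslangValidator) to plain
// OpenGL instead of GLSL source text.
GLuint loadSpirvVertexShader(const void* spirv, GLsizei spirvBytes) {
    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderBinary(1, &shader, GL_SHADER_BINARY_FORMAT_SPIR_V, spirv, spirvBytes);
    // Pick the entry point and (optionally) set specialization constants.
    glSpecializeShader(shader, "main", 0, nullptr, nullptr);
    return shader; // attach to a program and glLinkProgram as usual
}
```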
DX12 was released 2 years before Vulkan… And there are plenty of advantages to having control over an API instead of putting it in some slow external organization without focus.
I think Vulkan adoption was hurt by how much more complicated it was to use efficiently early on.
Considering that DX12 made it out earlier, and it took some time for Vulkan to finally relax some of its rules enough to be relatively easy to use efficiently, I think it just lost momentum.