Re: Recognize cards that expose GLSL 1.30 as DX10 capable even if they don't support EXT_GPU_SHADER4
On 23.05.2014 at 00:23, Andrei Slăvoiu <andrei.slavoiu(a)gmail.com> wrote:
Mesa drivers do not expose EXT_GPU_SHADER4 as all functionality it offers is already part of OpenGL 3.0. This causes all cards that are not explicitly recognized by wine (for example all GCN cards) to be treated as DX9 cards and presented to Windows apps as Radeon 9500.

The problem is that without GL_EXT_gpu_shader4 or GL_ARB_shader_texture_lod we cannot handle the texldd instruction. Right now we cannot use core contexts, so unless the driver exposes one of those extensions in compatibility contexts we are stuck on a shader model 2 feature level :-( .
The level detection your patch changes isn't directly related to this, as the shader model version support is controlled differently. But a Radeon 9500 matches a card with shader model 2 support much better than a Radeon R9. Does this driver / gpu combination support GL_ARB_shader_texture_lod?
On Tuesday 27 May 2014 15:56:57 Stefan Dösinger wrote:
On 23.05.2014 at 00:23, Andrei Slăvoiu <andrei.slavoiu(a)gmail.com> wrote:
Mesa drivers do not expose EXT_GPU_SHADER4 as all functionality it offers is already part of OpenGL 3.0. This causes all cards that are not explicitly recognized by wine (for example all GCN cards) to be treated as DX9 cards and presented to Windows apps as Radeon 9500.
The problem is that without GL_EXT_gpu_shader4 or GL_ARB_shader_texture_lod we cannot handle the texldd instruction. Right now we cannot use core contexts, so unless the driver exposes one of those extensions in compatibility contexts we are stuck on a shader model 2 feature level :-( .
The level detection your patch changes isn't directly related to this, as the shader model version support is controlled differently. But a Radeon 9500 matches a card with shader model 2 support much better than a Radeon R9.
Does this driver / gpu combination support GL_ARB_shader_texture_lod?
Yes, GL_ARB_shader_texture_lod is exposed by Mesa for all drivers in both core and compatibility contexts. So should I check for GL_ARB_shader_texture_lod instead of the GLSL version?
On 2014-05-27 16:56, Andrei Slavoiu wrote:
Yes, GL_ARB_shader_texture_lod is exposed by Mesa for all drivers in both core and compatibility contexts. So should I check for GL_ARB_shader_texture_lod instead of the GLSL version?

No, I'd say use something like if (EXT_gpu_shader4 || (ARB_shader_texture_lod && glsl_version >= 1.30)).
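[Editorial note: for readers less familiar with the wined3d detection code, below is a minimal, self-contained sketch of the condition Stefan suggests. The struct and helper names are stand-ins invented for illustration, not actual wined3d identifiers; the point is only the shape of the check: EXT_gpu_shader4, or ARB_shader_texture_lod together with GLSL 1.30 or later, is taken as evidence of a DX10-class card.]

#include <stdbool.h>

/* Stand-in for the relevant bits of wined3d's GL info; the real structure
 * is much larger and lives in wined3d's private headers. */
struct gl_info_sketch
{
    bool ext_gpu_shader4;
    bool arb_shader_texture_lod;
    unsigned int glsl_major, glsl_minor;
};

/* Hypothetical helper mirroring the suggested condition. */
static unsigned int d3d_level_from_gl_info_sketch(const struct gl_info_sketch *gl)
{
    bool glsl_130 = gl->glsl_major > 1
            || (gl->glsl_major == 1 && gl->glsl_minor >= 30);

    if (gl->ext_gpu_shader4 || (gl->arb_shader_texture_lod && glsl_130))
        return 10; /* treat the card as DX10 capable */
    return 9;      /* otherwise keep the current DX9 guess */
}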
On 2014-05-27 17:35, Stefan Dösinger wrote:
On 2014-05-27 16:56, Andrei Slavoiu wrote:
Yes, GL_ARB_shader_texture_lod is exposed by Mesa for all drivers in both core and compatibility contexts. So should I check for GL_ARB_shader_texture_lod instead of the GLSL version?

No, I'd say use something like if (EXT_gpu_shader4 || (ARB_shader_texture_lod && glsl_version >= 1.30)).

An even nicer solution would be to use the capabilities reported by the shader backend. See select_shader_backend() and shader_backend->shader_get_caps. Unsupported vertex shaders would match a DX7 card, VS 1.x DX8, VS 2/3 DX9 and VS 4 DX10. For separating DX6 and DX7 the fixed-function fragment pipeline info is needed (the number of supported texture units).
Note that we currently do not support pixel shaders on DX8 cards, and probably never will. That's because those cards use separate GL extensions (GL_ATI_fragment_shader and GL_NV_register_combiners + GL_NV_texture_shader). Thus it's better to look only at vertex shaders for now to prevent DX8 cards from being reported as DX7.
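[Editorial note: as a rough illustration of this alternative, here is a hedged sketch of deriving the level from the caps the shader backend reports. The struct is a stand-in for what shader_backend->shader_get_caps() would fill in, not the real wined3d caps structure; per the note above, only the vertex shader version is consulted, and the DX6/DX7 split (which would need the fixed-function texture unit count) is left out.]

/* Stand-in for the vertex shader information a shader backend reports. */
struct shader_caps_sketch
{
    unsigned int vs_version; /* highest supported vertex shader major version */
};

/* Hypothetical mapping following the message above: no VS -> DX7,
 * VS 1.x -> DX8, VS 2/3 -> DX9, VS 4 -> DX10.  Separating DX6 from DX7
 * would additionally need the fixed-function texture unit count. */
static unsigned int d3d_level_from_vs_caps_sketch(const struct shader_caps_sketch *caps)
{
    if (caps->vs_version >= 4)
        return 10;
    if (caps->vs_version >= 2)
        return 9;
    if (caps->vs_version >= 1)
        return 8;
    return 7;
}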
On Tuesday, 27 May 2014, at 21:13:44, Stefan Dösinger wrote:
On 2014-05-27 17:35, Stefan Dösinger wrote:
On 2014-05-27 16:56, Andrei Slavoiu wrote:
Yes, GL_ARB_shader_texture_lod is exposed by Mesa for all drivers in both core and compatibility contexts. So should I check for GL_ARB_shader_texture_lod instead of the GLSL version?
No, I'd say use something like if (EXT_gpu_shader4 || (ARB_shader_texture_lod && glsl_version >= 1.30)).
An even nicer solution would be to use the capabilities reported by the shader backend. See select_shader_backend() and shader_backend->shader_get_caps. Unsupported vertex shaders would match a DX7 card, VS 1.x DX8, VS 2/3 DX9 and VS 4 DX10. For separating DX6 and DX7 the fixed-function fragment pipeline info is needed (the number of supported texture units).
Note that we currently do not support pixel shaders on DX8 cards, and probably never will. That's because those cards use separate GL extensions (GL_ATI_fragment_shader and GL_NV_register_combiners + GL_NV_texture_shader). Thus it's better to look only at vertex shaders for now to prevent DX8 cards from being reported as DX7.
Actually, d3d_level_from_gl_info is misleading (broken even), as it will check for SM3 capability and then return that the card is capable of DirectX 10 (which requires SM4). I'll look into a better way to report the capabilities to the callers (maybe use d3d feature levels?) and also into a way to share the code that identifies the capabilities with shader_glsl_get_caps. But that will be a future patch, after the second version of this one gets submitted.
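[Editorial note: purely as an illustration of the "d3d feature levels" idea, here is one possible shape. The names below are hypothetical, not existing wined3d identifiers; the point is only that SM3 and SM4 get distinct values, so a caller can no longer confuse an SM3-only card with a DirectX 10 one.]

/* Hypothetical feature levels; SM2, SM3 and SM4 are kept distinct so that
 * "supports SM3" no longer gets reported as "DirectX 10 capable". */
enum d3d_feature_level_sketch
{
    FEATURE_LEVEL_SKETCH_7,
    FEATURE_LEVEL_SKETCH_8,
    FEATURE_LEVEL_SKETCH_9_SM2,
    FEATURE_LEVEL_SKETCH_9_SM3,
    FEATURE_LEVEL_SKETCH_10_SM4,
};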
On 23 May 2014 00:23, Andrei Slăvoiu <andrei.slavoiu(a)gmail.com> wrote:
Mesa drivers do not expose EXT_GPU_SHADER4 as all functionality it offers is already part of OpenGL 3.0. This causes all cards that are not explicitly recognized by wine (for example all GCN cards) to be treated as DX9 cards and presented to Windows apps as Radeon 9500.
TAHITI, PITCAIRN and CAPE VERDE should be recognized, newer cards aren't.

On 27 May 2014 17:35, Stefan Dösinger <stefandoesinger(a)gmail.com> wrote:
No, I'd say use something like if (EXT_gpu_shader4 || (ARB_shader_texture_lod && glsl_version >= 1.30)).

That's just wrong, GLSL 1.30 + ARB_shader_texture_lod doesn't imply SM4, only SM3.
On 28 May 2014 21:48, Andrei Slăvoiu <andrei.slavoiu(a)gmail.com> wrote:
Actually, d3d_level_from_gl_info is misleading (broken even), as it will check for SM3 capability and then return that the card is capable of DirectX 10

EXT_gpu_shader4 adds (among others) support for "native" integers and bitwise operations. It can't be supported (in hardware) on an SM3 GPU. The specific extensions used in d3d_level_from_gl_info() are somewhat arbitrary, we could have used e.g. ARB_geometry_shader4 instead of EXT_gpu_shader4 here as well.
Note that the intention of the code this function is a part of is to come up with a somewhat reasonable card to report to the application in case of unrecognized GL implementations, while e.g. shader_glsl_get_caps() is meant to report actual capabilities. It would perhaps be nice to use the actual shader backend etc. caps for d3d_level_from_gl_info(), but note that that wouldn't necessarily be easy, or make the situation much better. For the Mesa case you mention in the original patch it would arguably make the situation worse, since we can't currently properly do shader model 4 on Mesa.
On 27 May 2014 17:35, Stefan Dösinger <stefandoesinger(a)gmail.com> wrote:
No, I'd say use something like if (EXT_gpu_shader4 || (ARB_shader_texture_lod && glsl_version >= 1.30)).
That's just wrong, GLSL 1.30 + ARB_shader_texture_lod doesn't imply SM4, only SM3.

Actually, it does. No SM3 card can expose GLSL 1.30.
On 28 May 2014 21:48, Andrei Slăvoiu <andrei.slavoiu(a)gmail.com> wrote:
Actually, d3d_level_from_gl_info is misleading (broken even), as it will check for SM3 capability and then return that the card is capable of DirectX 10

EXT_gpu_shader4 adds (among others) support for "native" integers and bitwise operations. It can't be supported (in hardware) on an SM3 GPU. The specific extensions used in d3d_level_from_gl_info() are somewhat arbitrary, we could have used e.g. ARB_geometry_shader4 instead of EXT_gpu_shader4 here as well.

Like you say, the extensions are arbitrary, so why not use the GLSL version as well? GLSL 1.30 also adds support for integers and bitwise operations. All functionality of EXT_gpu_shader4 is exposed by either GLSL 1.30 or ARB_shader_texture_lod.
Note that the intention of the code this function is a part of is to come up with a somewhat reasonable card to report to the application in case of unrecognized GL implementations, while e.g. shader_glsl_get_caps() is meant to report actual capabilities. It would perhaps be nice to use the actual shader backend etc. caps for d3d_level_from_gl_info(), but note that that wouldn't necessarily be easy, or make the situation much better. For the Mesa case you mention in the original patch it would arguably make the situation worse, since we can't currently properly do shader model 4 on Mesa.

I'm not actually interested in using DirectX 10 (yet). The reason I started messing with this part of the code is that World of Warcraft renders complete garbage when the card is presented as a Radeon 9500. Changing this to a Radeon X1600 improves things a bit: only the background is broken, characters and menus appear fine. Finally, with a Radeon HD 2900 the only broken rendering is the shadows. Those remain broken even when using the PCI ID of my real card, a KAVERI, so it's probably a Mesa bug.
The reason I prefer to improve the fallback instead of simply adding the PCI ID for my card to the list of known cards is that with the current model there will always be cards that are not recognized by wine, and a good fallback will prevent a newbie's first experience from being "nothing works, wine sucks".
On 2 June 2014 18:51, Andrei Slavoiu <andrei.slavoiu(a)gmail.com> wrote:
Actually, it does. No SM3 card can expose GLSL 1.30.

You're right, I mixed up 1.30 and 1.50 there. I probably should have actually checked before writing a reply. (We'll probably end up using 1.50 for e.g. geometry shaders on Mesa, but that aside.) Checking for ARB_shader_texture_lod as well is redundant in that case though.
Like you say, the extensions are arbitrary, so why not use the GLSL version as well? GLSL 1.30 also adds support for integers and bitwise operations. All functionality of EXT_gpu_shader4 is exposed by either GLSL 1.30 or ARB_shader_texture_lod.

Sure, checking the GLSL version is fine.
The reason I prefer to improve the fallback instead of simply adding the PCI ID for my card to the list of known cards is that with the current model there will always be cards that are not recognized by wine, and a good fallback will prevent a newbie's first experience from being "nothing works, wine sucks".

Sure. The number of applications that really care about the name of the card returned etc. should be fairly limited though, and I'm a bit surprised that World of Warcraft does. Are you sure the issue is really with the reported card, as opposed to e.g. the amount of video memory associated with it? CARD_AMD_RADEON_9500 has 64MB, which really isn't a lot by today's standards.
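[Editorial note: to make this point concrete, below is an illustrative sketch of the kind of per-card defaults that come with the guess; it is not copied from wined3d. The 64MB figure for the Radeon 9500 is from this thread, the other numbers are placeholders.]

/* Illustrative card defaults: the guessed PCI ID also selects a description
 * string and a default video memory size, so a bad guess can starve a game
 * of reported VRAM.  Only the 64MB value is taken from this thread; the
 * other numbers are placeholders. */
struct card_default_sketch
{
    const char *description;
    unsigned int vidmem_mb;
};

static const struct card_default_sketch card_defaults_sketch[] =
{
    {"ATI Radeon 9500",        64},  /* the current fallback for unknown AMD cards */
    {"ATI Radeon X1600",      128},  /* placeholder value */
    {"ATI Radeon HD 2900 XT", 512},  /* placeholder value */
};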
On 2014-06-02 23:11, Henri Verbeet wrote:
You're right, I mixed up 1.30 and 1.50 there. I probably should have actually checked before writing a reply. (We'll probably end up using 1.50 for e.g. geometry shaders on Mesa, but that aside.) Checking for ARB_shader_texture_lod as well is redundant in that case though.

No, because without ARB_shader_texture_lod we only support SM 2.0, leading to inconsistencies between caps and PCI ID.
We can also decide that we don't care about such consistency. In this case the version logic can be removed entirely, or just used when we cannot match the GL renderer string and have to guess a GPU.
On 3 June 2014 08:13, Stefan Dösinger <stefandoesinger(a)gmail.com> wrote:
No, because without ARB_shader_texture_lod we only support SM 2.0, leading to inconsistencies between caps and PCI ID.
And without e.g. ARB_geometry_shader4 we can only do SM3. You'll always have that issue unless you either use the computed D3D caps to guess a card, or copy the code used to compute them.
We can also decide that we don't care about such consistency. In this case the version logic can be removed entirely, or just used when we cannot match the GL renderer string and have to guess a GPU.
That's the only place where this function actually does something anyway. There are calls to d3d_level_from_gl_info() in select_card_nvidia_binary() and select_card_amd_binary() for historical reasons, but at this point removing them wouldn't alter the behaviour of those functions in a significant way. (Hypothetical future versions of the proprietary drivers that remove extensions aside.)