On 23.05.2014 at 00:23, Andrei Slăvoiu andrei.slavoiu@gmail.com wrote:
Mesa drivers do not expose EXT_GPU_SHADER4 as all functionality it offers is already part of OpenGL 3.0. This causes all cards that are not explicitly recognized by wine (for example all GCN cards) to be treated as DX9 cards and presented to Windows apps as Radeon 9500.
The problem is that without GL_EXT_gpu_shader4 or GL_ARB_shader_texture_lod we cannot handle the texldd instruction. Right now we cannot use core contexts, so unless the driver exposes one of those extensions in compatibility contexts we are stuck on a shader model 2 feature level :-(.
The level detection your patch changes isn't directly related to this, as the shader model version support is controlled differently. But a Radeon 9500 matches a card with shader model 2 support much better than a Radeon R9.
Does this driver / GPU combination support GL_ARB_shader_texture_lod?
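For illustration, a minimal C sketch of the choice a GLSL backend has to make here (the helper name is invented and the GLSL function names are illustrative; this is not the actual wined3d code):

    #include <stddef.h>
    #include <stdbool.h>

    /* Hypothetical helper: pick the GLSL sampling function to emit for the
     * d3d texldd instruction, which needs explicit gradients. */
    static const char *texldd_sample_function(bool has_ext_gpu_shader4,
            bool has_arb_shader_texture_lod)
    {
        if (has_ext_gpu_shader4)
            return "texture2DGrad";    /* grad variant from EXT_gpu_shader4 */
        if (has_arb_shader_texture_lod)
            return "texture2DGradARB"; /* ARB_shader_texture_lod spelling */
        return NULL;                   /* texldd can't be handled -> SM2 caps */
    }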
On Tuesday 27 May 2014 15:56:57 Stefan Dösinger wrote:
On 23.05.2014 at 00:23, Andrei Slăvoiu andrei.slavoiu@gmail.com wrote:
Mesa drivers do not expose EXT_GPU_SHADER4 as all functionality it offers is already part of OpenGL 3.0. This causes all cards that are not explicitly recognized by wine (for example all GCN cards) to be treated as DX9 cards and presented to Windows apps as Radeon 9500.
The problem is that without GL_EXT_gpu_shader4 or GL_ARB_shader_texture_lod we cannot handle the texldd instruction. Right now we cannot use core contexts, so unless the driver exposes one of those extensions in compatibility contexts we are stuck on a shader model 2 feature level :-(.
The level detection your patch changes isn't directly related to this, as the shader model version support is controlled differently. But a Radeon 9500 matches a card with shader model 2 support much better than a Radeon R9.
Does this driver / GPU combination support GL_ARB_shader_texture_lod?
Yes, GL_ARB_shader_texture_lod is exposed by mesa for all drivers in both core and compatibility contexts. So should I check for GL_ARB_shader_texture_lod instead of the GLSL version?
On 2014-05-27 16:56, Andrei Slavoiu wrote:
Yes, GL_ARB_shader_texture_lod is exposed by mesa for all drivers in both core and compatibility contexts. So should I check for GL_ARB_shader_texture_lod instead of the GLSL version?
No, I'd say use something like if (EXT_gpu_shader4 || (ARB_shader_texture_lod && glsl_version >= 1.30)).
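Spelled out as a self-contained C sketch (the struct and function names are invented for the example and are not the actual d3d_level_from_gl_info() code):

    #include <stdbool.h>

    /* Hypothetical caps snapshot; wined3d keeps this kind of information
     * in its gl_info structure, but this struct is invented for the example. */
    struct fake_gl_caps
    {
        bool ext_gpu_shader4;
        bool arb_shader_texture_lod;
        unsigned int glsl_major, glsl_minor;
    };

    static unsigned int guess_d3d_level(const struct fake_gl_caps *caps)
    {
        bool glsl_130 = caps->glsl_major > 1
                || (caps->glsl_major == 1 && caps->glsl_minor >= 30);

        if (caps->ext_gpu_shader4 || (caps->arb_shader_texture_lod && glsl_130))
            return 10; /* report a dx10-class card */
        return 9;      /* otherwise fall back to a dx9-class card */
    }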
On 2014-05-27 17:35, Stefan Dösinger wrote:
On 2014-05-27 16:56, Andrei Slavoiu wrote:
Yes, GL_ARB_shader_texture_lod is exposed by mesa for all drivers in both core and compatibility contexts. So should I check for GL_ARB_shader_texture_lod instead of the GLSL version?
No, I'd say use something like if (EXT_gpu_shader4 || (ARB_shader_texture_lod && glsl_version >= 1.30)).
An even nicer solution would be to use the capabilities reported by the shader backend. See select_shader_backend() and shader_backend->shader_get_caps. Unsupported vertex shaders would match a dx7 card, VS 1.x dx8, VS 2/3 dx9, and VS 4 dx10. For separating dx6 and dx7, the fixed-function fragment pipeline info is needed (the number of supported texture units).
Note that we currently do not support pixel shaders on dx8 cards, and probably never will. That's because those cards use separate GL extensions (GL_ATI_fragment_shader, and GL_NV_register_combiners + GL_NV_texture_shader). Thus it's better to look only at vertex shaders for now, to prevent dx8 cards from being reported as dx7.
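As a rough C sketch of that idea (the struct below only stands in for what shader_backend->shader_get_caps reports; it is not the real caps structure):

    /* Map a reported vertex shader version to a d3d level, following the
     * table above: no VS -> dx7, VS 1.x -> dx8, VS 2/3 -> dx9, VS 4 -> dx10.
     * Separating dx6 from dx7 would additionally need the fixed-function
     * fragment pipe caps (number of texture units). */
    struct fake_shader_caps
    {
        unsigned int vs_version; /* major vertex shader version, 0 if none */
    };

    static unsigned int d3d_level_from_shader_caps(const struct fake_shader_caps *caps)
    {
        switch (caps->vs_version)
        {
            case 0:  return 7;
            case 1:  return 8;
            case 2:
            case 3:  return 9;
            default: return 10;
        }
    }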
On Tuesday, 27 May 2014, at 21:13:44, Stefan Dösinger wrote:
On 2014-05-27 17:35, Stefan Dösinger wrote:
On 2014-05-27 16:56, Andrei Slavoiu wrote:
Yes, GL_ARB_shader_texture_lod is exposed by mesa for all drivers in both core and compatibility contexts. So should I check for GL_ARB_shader_texture_lod instead of the GLSL version?
No, I'd say use something like if (EXT_gpu_shader4 || (ARB_shader_texture_lod && glsl_version >= 1.30)).
An even nicer solution would be to use the capabilities reported by the shader backend. See select_shader_backend() and shader_backend->shader_get_caps. Unsupported vertex shaders would match a dx7 card, VS 1.x dx8, VS 2/3 dx9, and VS 4 dx10. For separating dx6 and dx7, the fixed-function fragment pipeline info is needed (the number of supported texture units).
Note that we currently do not support pixel shaders on dx8 cards, and probably never will. That's because those cards use separate GL extensions (GL_ATI_fragment_shader, and GL_NV_register_combiners + GL_NV_texture_shader). Thus it's better to look only at vertex shaders for now, to prevent dx8 cards from being reported as dx7.
Actually, d3d_level_from_gl_info is misleading (broken, even), as it will check for SM3 capability and then report that the card is capable of DirectX 10 (which requires SM4). I'll look into a better way to report the capabilities to the callers (maybe use d3d feature levels?) and also into a way to share the code that identifies the capabilities with shader_glsl_get_caps.
But that's for a future patch, after the second version of this one gets submitted.
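As a sketch of the feature level idea (the enum and helper are invented for illustration and are not a wined3d or D3D type), reporting something like this instead of a bare dx level would keep SM3 hardware from being lumped in with dx10:

    /* Illustrative feature-level-style enum; it distinguishes SM2/SM3 within
     * the dx9 generation and only reports dx10 for SM4-capable hardware. */
    enum fake_feature_level
    {
        FAKE_LEVEL_7,
        FAKE_LEVEL_8,
        FAKE_LEVEL_9_SM2,
        FAKE_LEVEL_9_SM3,
        FAKE_LEVEL_10,
    };

    static enum fake_feature_level feature_level_from_vs(unsigned int vs_major)
    {
        if (!vs_major) return FAKE_LEVEL_7;
        if (vs_major == 1) return FAKE_LEVEL_8;
        if (vs_major == 2) return FAKE_LEVEL_9_SM2;
        if (vs_major == 3) return FAKE_LEVEL_9_SM3;
        return FAKE_LEVEL_10;
    }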
On 23 May 2014 00:23, Andrei Slăvoiu andrei.slavoiu@gmail.com wrote:
Mesa drivers do not expose EXT_GPU_SHADER4 as all functionality it offers is already part of OpenGL 3.0. This causes all cards that are not explicitly recognized by wine (for example all GCN cards) to be treated as DX9 cards and presented to Windows apps as Radeon 9500.
TAHITI, PITCAIRN, and CAPE VERDE should be recognized; newer cards aren't.
On 27 May 2014 17:35, Stefan Dösinger stefandoesinger@gmail.com wrote:
No, I'd say use something like if (EXT_gpu_shader4 || (ARB_shader_texture_lod && glsl_version >= 1.30)).
That's just wrong; GLSL 1.30 + ARB_shader_texture_lod doesn't imply SM4, only SM3.
On 28 May 2014 21:48, Andrei Slăvoiu andrei.slavoiu@gmail.com wrote:
Actually, d3d_level_from_gl_info is misleading (broken, even), as it will check for SM3 capability and then report that the card is capable of DirectX 10
EXT_gpu_shader4 adds (among other things) support for "native" integers and bitwise operations. It can't be supported (in hardware) on an SM3 GPU. The specific extensions used in d3d_level_from_gl_info() are somewhat arbitrary; we could have used e.g. ARB_geometry_shader4 instead of EXT_gpu_shader4 here as well.
Note that the intention of the code this function is a part of is to come up with a somewhat reasonable card to report to the application in case of unrecognized GL implementations, while e.g. shader_glsl_get_caps() is meant to report actual capabilities. It would perhaps be nice to use the actual shader backend etc. caps for d3d_level_from_gl_info(), but note that that wouldn't necessarily be easy, or make the situation much better. For the Mesa case you mention in the original patch it would arguably make the situation worse, since we can't currently properly do shader model 4 on Mesa.
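To make that intent concrete, a hypothetical C sketch of the fallback path (the card names appear elsewhere in this thread; the enum and helper are invented and are not wined3d's select_card_* code):

    /* When the GL renderer string isn't recognized, guess a card whose
     * generation roughly matches the detected d3d level. */
    enum fake_amd_card
    {
        FAKE_CARD_RADEON_9500,   /* dx9-class fallback */
        FAKE_CARD_RADEON_HD2900, /* dx10-class fallback */
    };

    static enum fake_amd_card guess_amd_card(unsigned int d3d_level)
    {
        if (d3d_level >= 10)
            return FAKE_CARD_RADEON_HD2900;
        return FAKE_CARD_RADEON_9500;
    }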
On 27 May 2014 17:35, Stefan Dösinger stefandoesinger@gmail.com wrote:
No, I'd say use something like if (EXT_gpu_shader4 || (ARB_shader_texture_lod && glsl_version >= 1.30)).
That's just wrong; GLSL 1.30 + ARB_shader_texture_lod doesn't imply SM4, only SM3.
Actually, it does. No SM3 card can expose GLSL 1.30.
On 28 May 2014 21:48, Andrei Slăvoiu andrei.slavoiu@gmail.com wrote:
Actually, d3d_level_from_gl_info is misleading (broken, even), as it will check for SM3 capability and then report that the card is capable of DirectX 10
EXT_gpu_shader4 adds (among other things) support for "native" integers and bitwise operations. It can't be supported (in hardware) on an SM3 GPU. The specific extensions used in d3d_level_from_gl_info() are somewhat arbitrary; we could have used e.g. ARB_geometry_shader4 instead of EXT_gpu_shader4 here as well.
Like you say, the extensions are arbitrary, so why not use the GLSL version as well? GLSL 1.30 also adds support for integers and bitwise operations. All functionality of EXT_gpu_shader4 is exposed by either GLSL 1.30 or ARB_shader_texture_lod.
Note that the intention of the code this function is a part of is to come up with a somewhat reasonable card to report to the application in case of unrecognized GL implementations, while e.g. shader_glsl_get_caps() is meant to report actual capabilities. It would perhaps be nice to use the actual shader backend etc. caps for d3d_level_from_gl_info(), but note that that wouldn't necessarily be easy, or make the situation much better. For the Mesa case you mention in the original patch it would arguably make the situation worse, since we can't currently properly do shader model 4 on Mesa.
I'm not actually interested in using DirectX 10 (yet). The reason I started messing with this part of the code is that World of Warcraft renders complete garbage when the card is presented as a Radeon 9500. Changing this to a Radeon X1600 improves things a bit: only the background is broken, characters and menus appear fine. Finally, with a Radeon HD 2900 the only broken rendering is the shadows. Those remain broken even when using the PCI ID of my real card, a KAVERI, so it's probably a mesa bug.
The reason I prefer to improve the fallback instead of simply adding the PCI ID for my card to the list of known cards is that with the current model there will always be cards that are not recognized by wine, and a good fallback will keep a newcomer's first experience from being "nothing works, wine sucks".
On 2 June 2014 18:51, Andrei Slavoiu andrei.slavoiu@gmail.com wrote:
Actually, it does. No SM3 card can expose GLSL 1.30.
You're right, I mixed up 1.30 and 1.50 there. I probably should have actually checked before writing a reply. (We'll probably end up using 1.50 for e.g. geometry shaders on Mesa, but that aside.) Checking for ARB_shader_texture_lod as well is redundant in that case though.
Like you say, the extensions are arbitrary, so why not use the GLSL version as well? GLSL 1.30 also adds support for integers and bitwise operations. All functionality of EXT_gpu_shader4 is exposed by either GLSL 1.30 or ARB_shader_texture_lod.
Sure, checking the GLSL version is fine.
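For completeness, the simplified check after this exchange, as a sketch (same invented fake_gl_caps names as the earlier sketch, restated so the example stands on its own):

    #include <stdbool.h>

    struct fake_gl_caps
    {
        bool ext_gpu_shader4;
        bool arb_shader_texture_lod; /* no longer checked below */
        unsigned int glsl_major, glsl_minor;
    };

    /* The ARB_shader_texture_lod test is redundant once GLSL >= 1.30 is
     * required, so only the extension-or-version check remains. */
    static unsigned int guess_d3d_level_simplified(const struct fake_gl_caps *caps)
    {
        if (caps->ext_gpu_shader4
                || caps->glsl_major > 1
                || (caps->glsl_major == 1 && caps->glsl_minor >= 30))
            return 10;
        return 9;
    }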
The reason I prefer to improve the fallback instead of simply adding the PCI ID for my card to the list of known cards is that with the current model there will always be cards that are not recognized by wine, and a good fallback will keep a newcomer's first experience from being "nothing works, wine sucks".
Sure. The number of applications that really care about the name of the card returned etc. should be fairly limited though, and I'm a bit surprised that World of Warcraft does. Are you sure the issue is really with the reported card, as opposed to e.g. the amount of video memory associated with it? CARD_AMD_RADEON_9500 has 64MB, which really isn't a lot by today's standards.
On 2014-06-02 23:11, Henri Verbeet wrote:
You're right, I mixed up 1.30 and 1.50 there. I probably should have actually checked before writing a reply. (We'll probably end up using 1.50 for e.g. geometry shaders on Mesa, but that aside.) Checking for ARB_shader_texture_lod as well is redundant in that case though.
No, because without ARB_shader_texture_lod we only support SM 2.0, leading to inconsistencies between caps and PCI ID.
We can also decide that we don't care about such consistency. In this case the version logic can be removed entirely, or just used when we cannot match the GL renderer string and have to guess a GPU.
On 3 June 2014 08:13, Stefan Dösinger stefandoesinger@gmail.com wrote:
No, because without ARB_shader_texture_lod we only support SM 2.0, leading to inconsistencies between caps and PCI ID.
And without e.g. ARB_geometry_shader4 we can only do SM3. You'll always have that issue unless you either use the computed D3D caps to guess a card, or copy the code used to compute them.
We can also decide that we don't care about such consistency. In this case the version logic can be removed entirely, or just used when we cannot match the GL renderer string and have to guess a GPU.
That's the only place where this function actually does something anyway. There are calls to d3d_level_from_gl_info() in select_card_nvidia_binary() and select_card_amd_binary() for historic reasons, but at this point removing them wouldn't alter the behaviour of those functions in a significant way. (Hypothetical future versions of the proprietary drivers that remove extensions aside.)