On 2014-05-29 01:33, Andrei Slăvoiu wrote:
Drop the check for glsl_version. All wine shaders use #version 120 so it doesn't matter.
That's not quite right. Yes, it is true that we never use GLSL 130, so the version check isn't representative of what wined3d actually supports. For shader model 4 support we need much more than just GLSL 130; half of the d3d10 infrastructure is still missing.
GL_ARB_shader_texture_lod doesn't imply shader model 4 support. My Radeon X1600 supports this extension on OSX, and this is a shader model 3 card. The X1600 doesn't support EXT_gpu_shader4 or GLSL 130.
The point of this code is to check the capabilities of the card, not the capabilities of our d3d implementation. Thus to prevent SM 3 cards from being reported with PCI IDs that suggest SM 4 capabilities you need to check for all the other functionality that's supported by EXT_gpu_shader4 / GLSL 130.
Even if we have a SM 4 capable card our d3d9 implementation does not expose SM 4 support. But neither does Microsoft's d3d9 implementation.
There's also GLX_MESA_query_renderer. It gives us the PCI IDs and video memory size directly, without all the string parsing guesswork. We cannot use it in wined3d directly. A possible approach would be to expose a similar WGL_WINE/MESA_query_renderer extension from winex11.drv and winemac.drv and use that in wined3d. The current wined3d guesswork code could be moved to winex11.drv and used in cases where GLX_MESA_query_renderer is not supported. (OSX has similar functionality that's always available.)
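Roughly, the query looks like this (a minimal sketch, not wined3d code; it assumes the extension is present in the GLX extension string and resolves the entry point through glXGetProcAddress):

    /* Sketch: query PCI IDs and video memory via GLX_MESA_query_renderer. */
    #include <stdio.h>
    #include <GL/glx.h>
    #include <GL/glxext.h>

    static void query_renderer_info(Display *dpy, int screen)
    {
        PFNGLXQUERYRENDERERINTEGERMESAPROC query_renderer_integer =
                (PFNGLXQUERYRENDERERINTEGERMESAPROC)glXGetProcAddress(
                (const GLubyte *)"glXQueryRendererIntegerMESA");
        unsigned int vendor_id, device_id, vram_mb;

        if (!query_renderer_integer)
            return;

        /* Renderer index 0 is the default renderer for this screen. */
        query_renderer_integer(dpy, screen, 0, GLX_RENDERER_VENDOR_ID_MESA, &vendor_id);
        query_renderer_integer(dpy, screen, 0, GLX_RENDERER_DEVICE_ID_MESA, &device_id);
        query_renderer_integer(dpy, screen, 0, GLX_RENDERER_VIDEO_MEMORY_MESA, &vram_mb);

        printf("PCI %04x:%04x, %u MB of video memory\n", vendor_id, device_id, vram_mb);
    }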
User32 also has a function (EnumDisplayDevices) to query the GPU identification string, and some applications (the Company of Heroes demo, for example) fail if the user32 GPU name differs from the d3d9 GPU name. So maybe a WGL extension isn't quite the right interface, and it should be something that does not require a GL context, so that user32.dll can use it as well. EnumDisplayDevices does not export all the information wined3d needs though - the PCI IDs and video memory size are missing, I think.
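For reference, an application typically gets that name roughly like this (a sketch of the user32 side, not wined3d code):

    /* Sketch: how an application retrieves the GPU name through user32.
     * Passing NULL as the device enumerates display adapters; DeviceString
     * holds the adapter description the app compares against the d3d9 name. */
    #include <windows.h>
    #include <stdio.h>

    static void print_adapter_names(void)
    {
        DISPLAY_DEVICEW dd;
        DWORD i;

        dd.cb = sizeof(dd);
        for (i = 0; EnumDisplayDevicesW(NULL, i, &dd, 0); ++i)
            wprintf(L"Adapter %lu: %ls\n", i, dd.DeviceString);
    }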
On Thursday, 29 May 2014, at 10:56:57, you wrote:
On 2014-05-29 01:33, Andrei Slăvoiu wrote:
Drop the check for glsl_version. All wine shaders use #version 120 so it doesn't matter.
The point of this code is to check the capabilities of the card, not the capabilities of our d3d implementation. Thus to prevent SM 3 cards from being reported with PCI IDs that suggest SM 4 capabilities you need to check for all the other functionality that's supported by EXT_gpu_shader4 / GLSL 130.
Even if we have a SM 4 capable card our d3d9 implementation does not expose SM 4 support. But neither does Microsoft's d3d9 implementation.
Correct me if I'm wrong, but the code that decides what shader model d3d9 exposes is shader_glsl_get_caps, which looks like this:

    if (gl_info->supported[EXT_GPU_SHADER4] && gl_info->supported[ARB_SHADER_BIT_ENCODING]
            && gl_info->supported[ARB_GEOMETRY_SHADER4]
            && gl_info->glsl_version >= MAKEDWORD_VERSION(1, 50)
            && gl_info->supported[ARB_DRAW_ELEMENTS_BASE_VERTEX]
            && gl_info->supported[ARB_DRAW_INSTANCED])
        shader_model = 4;
    /* ARB_shader_texture_lod or EXT_gpu_shader4 is required for the SM3
     * texldd and texldl instructions. */
    else if (gl_info->supported[ARB_SHADER_TEXTURE_LOD] || gl_info->supported[EXT_GPU_SHADER4])
        shader_model = 3;
    else
        shader_model = 2;
So wine's d3d9 will expose SM 3 with just GLSL 1.20 and GL_ARB_shader_texture_lod. Or am I missing something?
There's also GLX_MESA_query_renderer. It gives us the PCI IDs and video memory size directly, without all the string parsing guesswork. We cannot use it in wined3d directly. A possible approach would be to expose a similar WGL_WINE/MESA_query_renderer extension from winex11.drv and winemac.drv and use that in wined3d. The current wined3d guesswork code could be moved to winex11.drv and used in cases where GLX_MESA_query_renderer is not supported. (OSX has similar functionality that's always available.)
I was wondering what it would take to get rid of all this guessing and get the PCI ID directly; thanks for the pointers. I'll look into it after I get a better understanding of the existing code.
User32 also has a function (EnumDisplayDevices) to query the GPU identification string, and some applications (the Company of Heroes demo, for example) fail if the user32 GPU name differs from the d3d9 GPU name. So maybe a WGL extension isn't quite the right interface, and it should be something that does not require a GL context, so that user32.dll can use it as well. EnumDisplayDevices does not export all the information wined3d needs though - the PCI IDs and video memory size are missing, I think.
So use the string provided by EnumDisplayDevices and the PCI ID and memory size from the WGL extension?
On 2014-05-29 20:34, Andrei Slăvoiu wrote:
So use the string provided by EnumDisplayDevices and the PCI ID and memory size from the WGL extension?
Something like that, yes. It needs a few separate pieces of work though - right now EnumDisplayDevicesW is a stub (see dlls/user32/misc.c). I suggest starting with only the WGL extension part and keeping the PCI ID -> device name table in wined3d for now. Once this is in place we can worry about EnumDisplayDevices.
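To make the split concrete, a hypothetical interface for the first piece might look something like this (everything here - names, attribute values, signature - is made up for illustration; no such extension exists in the tree yet):

    /* Purely hypothetical WGL_WINE_query_renderer sketch, modelled on
     * GLX_MESA_query_renderer.  All names and values are illustrative. */
    #include <windows.h>

    #define WGL_RENDERER_VENDOR_ID_WINE    0x0001
    #define WGL_RENDERER_DEVICE_ID_WINE    0x0002
    #define WGL_RENDERER_VIDEO_MEMORY_WINE 0x0003

    typedef BOOL (WINAPI *PFN_wglQueryRendererIntegerWINE)(HDC dc, int renderer,
            int attribute, unsigned int *value);

    /* wined3d would resolve this through wglGetProcAddress() and fall back
     * to the existing renderer-string guesswork when it is absent. */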
All this is related to dual GPU support as well. Support for this is virtually nonexistent in winex11, user32 and wined3d, and Linux as a whole doesn't really support a dual-head configuration where one monitor is driven by an AMD GPU and the other by an Nvidia GPU. This puts you in the unfortunate position of having to think about multi-GPU support (to avoid making things worse) but not really being able to test things or to add lots of infrastructure.
OSX and Windows 7 support such multi-gpu configurations.
On Thursday, 29 May 2014, at 10:56:57, Stefan Dösinger wrote:
On 2014-05-29 01:33, Andrei Slăvoiu wrote:
Drop the check for glsl_version. All wine shaders use #version 120 so it doesn't matter.
GL_ARB_shader_texture_lod doesn't imply shader model 4 support. My Radeon X1600 supports this extension on OSX, and this is a shader model 3 card. The X1600 doesn't support EXT_gpu_shader4 or GLSL 130.
After reading the message again I think I finally understood what you meant. I'll send a try 4 that adds back the glsl_version check (try 2 was missing a brace). Do you think returning 93 (meaning d3d level 9_3, or d3d 9 with SM 3) would be too ugly? That way wine can choose the X1300 as the card if ARB_shader_texture_lod is exposed but GLSL 1.30 is not.
On 2014-05-29 20:53, Andrei Slăvoiu wrote:
After reading the message again I think I finally understood what you meant. I'll send a try 4 that adds back the glsl_version check (try 2 was missing a brace). Do you think returning 93 (meaning d3d level 9_3, or d3d 9 with SM 3) would be too ugly? That way wine can choose the X1300 as the card if ARB_shader_texture_lod is exposed but GLSL 1.30 is not.
What's wrong with 9 in that case?
On Friday, 30 May 2014, at 09:36:01, Stefan Dösinger wrote:
On 2014-05-29 20:53, Andrei Slăvoiu wrote:
After reading the message again I think I finally understood what you meant. I'll send a try 4 that adds back the glsl_version check (try 2 was missing a brace). Do you think returning 93 (meaning d3d level 9_3, or d3d 9 with SM 3) would be too ugly? That way wine can choose the X1300 as the card if ARB_shader_texture_lod is exposed but GLSL 1.30 is not.
What's wrong with 9 in that case?
I find it strange that there are two completely different code paths for determining card capabilities, so I was thinking of sharing the code between d3d_level_from_gl_info and shader_glsl_get_caps. Since such shared code needs to be able to differentiate between d3d 9 with SM 2 and d3d 9 with SM 3, I was wondering if it would be a good idea to expose the SM info to the current callers of d3d_level_from_gl_info, so they can pick a Radeon 9500 for SM 2 or a Radeon X1600 for SM 3.
Alternatively, I could leave it returning 9 as it does now and add an extra parameter that returns the shader model. That would probably be cleaner.
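A rough sketch of that alternative (the types, constant names and level check are illustrative, not the actual wined3d code):

    /* Illustrative only: d3d_level_from_gl_info() keeps returning the d3d
     * level and reports the shader model through an out parameter, so the
     * card selection code can tell a SM 2 level-9 card from a SM 3 one. */
    static enum wined3d_pci_device select_ati_card(const struct wined3d_gl_info *gl_info)
    {
        unsigned int shader_model;
        unsigned int level = d3d_level_from_gl_info(gl_info, &shader_model);

        if (level == 9 && shader_model >= 3)
            return CARD_AMD_RADEON_X1600; /* constant names may differ in the tree */
        if (level == 9)
            return CARD_AMD_RADEON_9500;
        return CARD_AMD_RADEON_8500;      /* lower levels handled elsewhere */
    }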
On 2014-05-30 19:51, Andrei Slăvoiu wrote:
I find it strange that there are two completely different code paths for determining card capabilities, so I was thinking of sharing the code between d3d_level_from_gl_info and shader_glsl_get_caps.
Yes, the duplicated implementation of this functionality is unfortunate, and fixing it is a good idea. I'd say migrating to MESA_query_renderer supersedes this work, though, and is the better long-term goal.
Since such shared code needs to be able to differentiate between d3d 9 with SM 2 and d3d 9 with SM 3, I was wondering if it would be a good idea to expose the SM info to the current callers of d3d_level_from_gl_info, so they can pick a Radeon 9500 for SM 2 or a Radeon X1600 for SM 3.
I think just looking at the vertex shader version should work, as I suggested in an earlier mail. I don't think this needs a DirectX version at all. Also, don't call shader_glsl_get_caps directly - call the get_caps method exported by the selected shader backend.
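Something along these lines, perhaps (a sketch only; the struct shader_caps field and member names are from memory and may not match the tree exactly):

    /* Sketch: ask the selected shader backend for its caps and use the
     * vertex shader version to pick the card class. */
    static unsigned int shader_model_from_backend(const struct wined3d_adapter *adapter,
            const struct wined3d_gl_info *gl_info)
    {
        struct shader_caps caps;

        adapter->shader_backend->shader_get_caps(gl_info, &caps);
        return caps.vs_version; /* 3 -> e.g. Radeon X1600, 2 -> Radeon 9500 */
    }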
There are two caveats. The first is that to separate version 7 from version 6 you'll need information from the fixed-function fragment pipeline. This shouldn't be hard to do.
The other thing to keep in mind is that if we ask the shader backend, the card may present itself with a different PCI ID when the user switches between ARB and GLSL shaders. Arguably this is even the correct thing to do, as the ARB shader backend only supports Shader Model 2 on most cards. Reporting a Radeon 9500 instead of a Radeon HD 8000 when the user selects ARB shaders sounds like a good idea in general, but it may make debugging shader backend differences harder.
On 29 May 2014 10:56, Stefan Dösinger <stefandoesinger@gmail.com> wrote:
There's also GLX_MESA_query_renderer. It gives us the PCI IDs and video memory size directly, without all the string parsing guesswork.
But note that the PCI IDs aren't that useful on their own, because you still need some information from gpu_description_table[] and driver_version_table[]. This means you have to match either the PCI ID against a known list or the renderer string against one, like we currently do. With vendors using several PCI IDs for the same card, matching the renderer string may be the easier approach. (On the other hand, PCI ID matching could perhaps be shared with Mesa somehow, or use something like udev hwdb properties.)
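For illustration, matching the IDs against a table could be as simple as this (a simplified sketch; the real gpu_description_table[] layout differs and also carries driver and video memory information):

    /* Sketch: look up a PCI vendor/device pair in a description table. */
    #include <stddef.h>

    struct gpu_desc
    {
        unsigned short vendor_id;
        unsigned short device_id;
        const char *description;
    };

    static const struct gpu_desc *find_gpu_desc(const struct gpu_desc *table,
            size_t count, unsigned short vendor_id, unsigned short device_id)
    {
        size_t i;

        for (i = 0; i < count; ++i)
        {
            if (table[i].vendor_id == vendor_id && table[i].device_id == device_id)
                return &table[i];
        }
        return NULL; /* unknown ID: fall back to renderer-string matching */
    }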