Re: wined3d: Recognize cards that expose ARB_shader_texture_lod as DX10 capable even if they don't support EXT_gpu_shader4 (try 3)
On 2014-05-29 01:33, Andrei Slăvoiu wrote:
Drop the check for glsl_version. All wine shaders use #version 120 so it doesn't matter.

That's not quite right. Yes, it is true that we never use GLSL 1.30, so the version check isn't representative of what wined3d actually supports. But for shader model 4 support we need much more than just GLSL 1.30; half of the d3d10 infrastructure is still missing.
GL_ARB_shader_texture_lod doesn't imply shader model 4 support. My Radeon X1600 supports this extension on OSX, and this is a shader model 3 card. The X1600 doesn't support EXT_gpu_shader4 or GLSL 1.30.

The point of this code is to check the capabilities of the card, not the capabilities of our d3d implementation. Thus, to prevent SM 3 cards from being reported with PCI IDs that suggest SM 4 capabilities, you need to check for all the other functionality that's provided by EXT_gpu_shader4 / GLSL 1.30. Even if we have an SM 4 capable card, our d3d9 implementation does not expose SM 4 support. But neither does Microsoft's d3d9 implementation.

There's also GLX_MESA_query_renderer. It gives us the PCI IDs and video memory size directly, without all the string parsing guesswork. We cannot use it in wined3d directly. A possible approach would be to expose a similar WGL_WINE/MESA_query_renderer extension from winex11.drv and winemac.drv and use that in wined3d. The current wined3d guesswork code could be moved to winex11.drv and used in cases where GLX_MESA_query_renderer is not supported. (OSX has similar functionality that's always available.)

User32 also has a function (EnumDisplayDevices) to query the GPU identification string, and some applications (Company of Heroes demo) fail if the user32 GPU name differs from the d3d9 GPU name. So maybe a WGL extension isn't quite the right interface, and it should be something that does not require a GL context so user32.dll can use it as well. EnumDisplayDevices does not export all the information wined3d needs though - the PCI IDs and video memory size are missing, I think.
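For reference, a minimal sketch of what querying GLX_MESA_query_renderer looks like on the GLX side, assuming a current GLX context on a Mesa driver that exposes the extension (error handling omitted):

    #include <stdio.h>
    #include <GL/glx.h>    /* glXGetProcAddressARB() */
    #include <GL/glxext.h> /* GLX_RENDERER_*_MESA tokens */

    static void print_renderer_info(void)
    {
        PFNGLXQUERYCURRENTRENDERERINTEGERMESAPROC query_renderer;
        unsigned int vendor_id, device_id, vram;

        query_renderer = (PFNGLXQUERYCURRENTRENDERERINTEGERMESAPROC)
                glXGetProcAddressARB((const GLubyte *)"glXQueryCurrentRendererIntegerMESA");
        if (!query_renderer)
            return; /* Extension missing; fall back to renderer string parsing. */

        query_renderer(GLX_RENDERER_VENDOR_ID_MESA, &vendor_id);
        query_renderer(GLX_RENDERER_DEVICE_ID_MESA, &device_id);
        query_renderer(GLX_RENDERER_VIDEO_MEMORY_MESA, &vram); /* reported in MB */
        printf("PCI ID %04x:%04x, %u MB of video memory.\n", vendor_id, device_id, vram);
    }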
On Thursday, 29 May 2014 at 10:56:57, you wrote:
On 2014-05-29 01:33, Andrei Slăvoiu wrote:
Drop the check for glsl_version. All wine shaders use #version 120 so it doesn't matter.
The point of this code is to check the capabilities of the card, not the capabilities of our d3d implementation. Thus, to prevent SM 3 cards from being reported with PCI IDs that suggest SM 4 capabilities, you need to check for all the other functionality that's provided by EXT_gpu_shader4 / GLSL 1.30.
Even if we have an SM 4 capable card, our d3d9 implementation does not expose SM 4 support. But neither does Microsoft's d3d9 implementation.
Correct me if I'm wrong, but the code that decides what shader model d3d9 exposes is shader_glsl_get_caps, which looks like this:

    if (gl_info->supported[EXT_GPU_SHADER4] && gl_info->supported[ARB_SHADER_BIT_ENCODING]
            && gl_info->supported[ARB_GEOMETRY_SHADER4]
            && gl_info->glsl_version >= MAKEDWORD_VERSION(1, 50)
            && gl_info->supported[ARB_DRAW_ELEMENTS_BASE_VERTEX]
            && gl_info->supported[ARB_DRAW_INSTANCED])
        shader_model = 4;
    /* ARB_shader_texture_lod or EXT_gpu_shader4 is required for the SM3
     * texldd and texldl instructions. */
    else if (gl_info->supported[ARB_SHADER_TEXTURE_LOD] || gl_info->supported[EXT_GPU_SHADER4])
        shader_model = 3;
    else
        shader_model = 2;
So Wine's d3d9 will expose SM 3 with just GLSL 1.20 and GL_ARB_shader_texture_lod. Or am I missing something?
There's also GLX_MESA_query_renderer. It gives us the PCI IDs and video memory size directly, without all the string parsing guesswork. We cannot use it in wined3d directly. A possible approach would be to expose a similar WGL_WINE/MESA_query_renderer extension from winex11.drv and winemac.drv and use that in wined3d. The current wined3d guesswork code could be moved to winex11.drv and used in cases where GLX_MESA_query_renderer is not supported. (OSX has similar functionality that's always available.)
I was wondering what it would take to get rid of all this guessing and get the PCI ID directly; thanks for the pointers. I'll look into it after I get a better understanding of the existing code.
User32 also has a function (EnumDisplayDevices) to query the GPU identification string, and some applications (Company of Heroes demo) fail if the user32 GPU name differs from the d3d9 GPU name. So maybe a WGL extension isn't quite the right interface, and it should be something that does not require a GL context so user32.dll can use it as well. EnumDisplayDevices does not export all the information wined3d needs though - the PCI IDs and video memory size are missing, I think.
So use the string provided by EnumDisplayDevices and the PCI ID and memory size from the WGL extension?
On 2014-05-29 20:34, Andrei Slăvoiu wrote:
So use the string provided by EnumDisplayDevices and the PCI ID and memory size from the WGL extension?

Something like that, yes. It needs a few separate pieces of work though - right now EnumDisplayDevicesW is a stub (see dlls/user32/misc.c). I suggest starting with only the WGL extension part and keeping the PCI ID -> device name table in wined3d for now. Once this is in place we can worry about EnumDisplayDevices.
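For concreteness, a hypothetical sketch of what the wined3d side of such a WGL extension could look like - the wglQueryCurrentRendererIntegerWINE name and the token values are invented here, simply mirroring the GLX extension:

    #include <windows.h>

    /* Hypothetical tokens, mirroring GLX_MESA_query_renderer. */
    #define WGL_RENDERER_VENDOR_ID_WINE    0x8183
    #define WGL_RENDERER_DEVICE_ID_WINE    0x8184
    #define WGL_RENDERER_VIDEO_MEMORY_WINE 0x8187

    typedef BOOL (WINAPI *PFN_wglQueryCurrentRendererIntegerWINE)(int attribute, UINT *value);

    static BOOL query_pci_ids(UINT *vendor_id, UINT *device_id)
    {
        /* Resolved like any other WGL extension; winex11.drv would implement
         * it on top of GLX_MESA_query_renderer, winemac.drv on the OSX
         * equivalent. */
        PFN_wglQueryCurrentRendererIntegerWINE query_renderer =
                (PFN_wglQueryCurrentRendererIntegerWINE)
                wglGetProcAddress("wglQueryCurrentRendererIntegerWINE");

        if (!query_renderer)
            return FALSE; /* Keep using the current guesswork code. */
        return query_renderer(WGL_RENDERER_VENDOR_ID_WINE, vendor_id)
                && query_renderer(WGL_RENDERER_DEVICE_ID_WINE, device_id);
    }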
All this is related to dual GPU support as well. Support for this is virtually nonexistent in winex11, user32 and wined3d, and Linux as a whole doesn't really support a dual-head configuration where one monitor is driven by an AMD GPU and the other is driven by an Nvidia GPU. This puts you in the unfortunate position of having to think about multi-GPU support (to avoid making things worse) without really being able to test things or to add lots of infrastructure. OSX and Windows 7 support such multi-GPU configurations.
On Thursday, 29 May 2014 at 10:56:57, Stefan Dösinger wrote:
On 2014-05-29 01:33, Andrei Slăvoiu wrote:
Drop the check for glsl_version. All wine shaders use #version 120 so it doesn't matter.
GL_ARB_shader_texture_lod doesn't imply shader model 4 support. My Radeon X1600 supports this extension on OSX, and this is a shader model 3 card. The X1600 doesn't support EXT_gpu_shader4 or GLSL 1.30.
After reading the message again I think I finally understood what you meant. I'll send a try 4 that adds back the glsl_version check (try 2 was missing a brace). Do you think returning 93 (meaning d3d level 9_3, i.e. d3d 9 with SM 3) would be too ugly? That way Wine could choose the X1300 as the card when ARB_shader_texture_lod is exposed but GLSL 1.30 is not.
On 2014-05-29 20:53, Andrei Slăvoiu wrote:
After reading the message again I think I finally understood what you meant. I'll send a try 4 that adds back the glsl_version check (try 2 was missing a brace). Do you think returning 93 (meaning d3d level 9_3, i.e. d3d 9 with SM 3) would be too ugly? That way Wine could choose the X1300 as the card when ARB_shader_texture_lod is exposed but GLSL 1.30 is not.

What's wrong with 9 in that case?
On Friday, 30 May 2014 at 09:36:01, Stefan Dösinger wrote:
On 2014-05-29 20:53, Andrei Slăvoiu wrote:
After reading the message again I think I finally understood what you meant. I'll send a try 4 that adds back the glsl_version check (try 2 was missing a brace). Do you think returning 93 (meaning d3d level 9_3, i.e. d3d 9 with SM 3) would be too ugly? That way Wine could choose the X1300 as the card when ARB_shader_texture_lod is exposed but GLSL 1.30 is not.
What's wrong with 9 in that case?
I find it strange that there are two completely different code paths for determining card capabilities, so I was thinking of sharing the code between d3d_level_from_gl_info and shader_glsl_get_caps. Since such shared code needs to be able to differentiate between d3d 9 with SM 2 and d3d 9 with SM 3, I was wondering if it would be a good idea to expose the shader model to the current callers of d3d_level_from_gl_info, so they can pick a Radeon 9500 for SM 2 or a Radeon X1600 for SM 3. Alternatively, I could leave it returning 9 as it does now and add an extra parameter that returns the shader model. Probably that would be cleaner.
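A rough sketch of the out-parameter variant, with a simplified stand-in for the real wined3d types:

    /* Illustrative only; struct gl_info stands in for struct wined3d_gl_info,
     * and the body elides the actual extension checks. */
    struct gl_info;

    static unsigned int d3d_level_from_gl_info(const struct gl_info *gl_info,
            unsigned int *shader_model)
    {
        /* ... the same extension checks as today ... */
        *shader_model = 3; /* e.g. ARB_shader_texture_lod without GLSL 1.30 */
        return 9;          /* the d3d level stays a plain version number */
    }

Callers that pick a card description could then choose a Radeon 9500 when *shader_model is 2 and a Radeon X1600 when it is 3, without overloading the return value.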
On 2014-05-30 19:51, Andrei Slăvoiu wrote:
I find it strange that there are two completely different code paths for determining card capabilities, so I was thinking of sharing the code between d3d_level_from_gl_info and shader_glsl_get_caps.

Yes, the double implementation of those similar functionalities is unfortunate, and fixing this is a good idea. I'd say migrating to MESA_query_renderer supersedes this work and is the better long-term goal.
Since such shared code needs to be able to differentiate between d3d 9 with SM 2 and d3d 9 with SM 3, I was wondering if it would be a good idea to expose the shader model to the current callers of d3d_level_from_gl_info, so they can pick a Radeon 9500 for SM 2 or a Radeon X1600 for SM 3.

I think just looking at the vertex shader version should work, as I suggested in an earlier mail. I don't think this needs a DirectX version at all. Also, don't call shader_glsl_get_caps directly - call the get_caps method exported by the selected shader backend.
There are two caveats: To separate version 7 from version 6 you'll need information from the fixed function fragment pipeline. This shouldn't be hard to do. The other thing to keep in mind is that by calling the shader backend, the card may present itself with a different PCI ID when switching between ARB and GLSL shaders. Arguably this is even the correct thing to do, as the ARB shader backend supports only shader model 2 on most cards. Reporting a Radeon 9500 instead of a Radeon HD 8000 when the user selects ARB shaders sounds like a good idea in general. It may make debugging shader backend differences harder though.
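For illustration, a rough sketch of the backend-based query, based on the wined3d interfaces of the time - the method and field names (shader_get_caps, vs_version) are approximations from memory, and d3d_level_from_caps is a hypothetical helper:

    static unsigned int d3d_level_from_caps(const struct wined3d_adapter *adapter,
            const struct wined3d_gl_info *gl_info)
    {
        struct shader_caps shader_caps;

        /* Ask whichever backend is selected (GLSL, ARB, none) instead of
         * duplicating the GLSL version checks. */
        adapter->shader_backend->shader_get_caps(gl_info, &shader_caps);

        if (shader_caps.vs_version >= 4)
            return 10;
        if (shader_caps.vs_version == 3)
            return 9; /* SM 3: pick e.g. a Radeon X1600 */
        if (shader_caps.vs_version == 2)
            return 9; /* SM 2: pick e.g. a Radeon 9500 */
        return 8;
    }

With the ARB backend selected this would report the lower shader model, which is exactly the PCI ID difference described above.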
On 29 May 2014 10:56, Stefan Dösinger <stefandoesinger(a)gmail.com> wrote:
There's also GLX_MESA_query_renderer. It gives us the PCI IDs and video memory size directly, without all the string parsing guesswork.

But note that the PCI IDs aren't that useful on their own, because you still need some information from gpu_description_table[] and driver_version_table[]. This means you need to match either the PCI ID to a known list, or the renderer string against some known list, like we currently do. With vendors using several PCI IDs for the same card, matching the renderer string may be the easier approach. (Although on the other hand, matching PCI IDs could perhaps be shared with Mesa somehow, or perhaps use something like udev hwdb properties.)
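As an illustration of the two matching strategies, a simplified stand-in for the lookup - the entries and the renderer_substring field are illustrative, and the real gpu_description_table[] carries more data, such as driver versions:

    #include <stddef.h>
    #include <string.h>

    struct gpu_description
    {
        unsigned int vendor_id, device_id;
        const char *renderer_substring; /* matched against GL_RENDERER */
        const char *description;
    };

    static const struct gpu_description gpu_description_table[] =
    {
        {0x1002, 0x71c5, "X1600",   "ATI Mobility Radeon X1600"},
        {0x10de, 0x0402, "8600 GT", "NVIDIA GeForce 8600 GT"},
    };

    /* Strategy 1: exact PCI ID match (needs MESA_query_renderer).
     * Strategy 2: substring match on the renderer string, as done today. */
    static const struct gpu_description *match_gpu(unsigned int vendor_id,
            unsigned int device_id, const char *gl_renderer)
    {
        size_t i;

        for (i = 0; i < sizeof(gpu_description_table) / sizeof(*gpu_description_table); ++i)
        {
            const struct gpu_description *d = &gpu_description_table[i];
            if ((d->vendor_id == vendor_id && d->device_id == device_id)
                    || strstr(gl_renderer, d->renderer_substring))
                return d;
        }
        return NULL;
    }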
participants (3)

- Andrei Slăvoiu
- Henri Verbeet
- Stefan Dösinger