Re: [PATCH 4/6] wined3d: Use d3d_level_from_gl_info in match_dx10_capable
On 14 May 2013 23:46, Stefan Dösinger <stefan@codeweavers.com> wrote:
@@ -613,14 +635,7 @@ static BOOL match_apple_nonr500ati(const struct wined3d_gl_info *gl_info, const

 static BOOL match_dx10_capable(const struct wined3d_gl_info *gl_info, const char *gl_renderer,
         enum wined3d_gl_vendor gl_vendor, enum wined3d_pci_vendor card_vendor, enum wined3d_pci_device device)
 {
-    /* DX9 cards support 40 single float varyings in hardware, most drivers report 32. ATI misreports
-     * 44 varyings. So assume that if we have more than 44 varyings we have a dx10 card.
-     * This detection is for the gl_ClipPos varying quirk. If a d3d9 card really supports more than 44
-     * varyings and we subtract one in dx9 shaders its not going to hurt us because the dx9 limit is
-     * hardcoded
-     *
-     * dx10 cards usually have 64 varyings */
-    return gl_info->limits.glsl_varyings > 44;
+    return d3d_level_from_gl_info(gl_info) >= 10;
 }
I'm not sure d3d_level_from_gl_info() is quite reliable enough for this. For example, Mesa doesn't implement EXT_gpu_shader4, and possibly never will. Using an actual hardware limit seems more reliable.
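To illustrate the concern: if d3d_level_from_gl_info() gates the reported level on extension presence, roughly like the hypothetical sketch below (the real implementation from earlier in this series is not quoted here; only gl_info->supported[] and EXT_GPU_SHADER4 are actual wined3d names), Mesa drivers would never report level 10 on DX10-class hardware:

    /* Hypothetical sketch, not the actual code from patch 1/6: if the
     * level check is gated on extensions like EXT_gpu_shader4, then on
     * Mesa (which does not implement it) DX10-class hardware reports
     * level 9 and match_dx10_capable() wrongly returns FALSE. A hardware
     * limit such as gl_info->limits.glsl_varyings does not depend on the
     * driver exposing a particular extension string. */
    static unsigned int d3d_level_from_gl_info(const struct wined3d_gl_info *gl_info)
    {
        if (gl_info->supported[EXT_GPU_SHADER4])
            return 10;
        return 9;
    }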
Henri Verbeet