Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=47039
There are GPUs which support ARB_shader_bit_encoding but only GLSL version 1.20, while the uvec4 data type is only available since GLSL 1.30. Spotted with Intel Ironlake Mobile.
Signed-off-by: Paul Gofman <gofmanp@gmail.com>
---
 dlls/wined3d/glsl_shader.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/dlls/wined3d/glsl_shader.c b/dlls/wined3d/glsl_shader.c
index 7d4678965d..0d0e8b25c9 100644
--- a/dlls/wined3d/glsl_shader.c
+++ b/dlls/wined3d/glsl_shader.c
@@ -442,7 +442,7 @@ static void shader_glsl_append_imm_vec4(struct wined3d_string_buffer *buffer, co
     wined3d_ftoa(values[2], str[2]);
     wined3d_ftoa(values[3], str[3]);
 
-    if (gl_info->supported[ARB_SHADER_BIT_ENCODING])
+    if (gl_info->supported[ARB_SHADER_BIT_ENCODING] && gl_info->glsl_version >= MAKEDWORD_VERSION(1, 30))
     {
         const unsigned int *uint_values = (const unsigned int *)values;
 
@@ -3316,7 +3316,8 @@ static void shader_glsl_get_register_name(const struct wined3d_shader_register *
                         case WINED3D_DATA_UNORM:
                         case WINED3D_DATA_SNORM:
                         case WINED3D_DATA_FLOAT:
-                            if (gl_info->supported[ARB_SHADER_BIT_ENCODING])
+                            if (gl_info->supported[ARB_SHADER_BIT_ENCODING]
+                                    && gl_info->glsl_version >= MAKEDWORD_VERSION(1, 30))
                             {
                                 string_buffer_sprintf(register_name, "uintBitsToFloat(uvec4(%#xu, %#xu, %#xu, %#xu))",
                                         reg->u.immconst_data[0], reg->u.immconst_data[1],
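For context, a rough sketch of the GLSL the two paths emit, using an arbitrary example constant of 1.0f (0x3f800000); the fallback formatting follows wined3d_ftoa() and is only approximated here:

    /* With ARB_shader_bit_encoding: bit-exact, but uvec4 and the 'u' suffix need GLSL 1.30. */
    uintBitsToFloat(uvec4(0x3f800000u, 0x0u, 0x0u, 0x0u))

    /* Fallback: plain float literals, which cannot preserve NaN and other special bit patterns. */
    vec4(1.0, 0.0, 0.0, 0.0)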
On Thu, Apr 18, 2019 at 7:07 PM Paul Gofman <gofmanp@gmail.com> wrote:
> Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=47039
> There are GPUs which support ARB_shader_bit_encoding but only GLSL version 1.20, while the uvec4 data type is only available since GLSL 1.30. Spotted with Intel Ironlake Mobile.
> Signed-off-by: Paul Gofman <gofmanp@gmail.com>

I'm not sure, but I think it might be preferable to disable ARB_shader_bit_encoding when the GLSL version is < 1.30. We disable other extensions conditionally in wined3d_adapter_init_gl_caps(), e.g. ARB_shader_bit_encoding or ARB_shader_bit_encoding.
On Fri, Apr 19, 2019 at 12:35 PM Józef Kucia <joseph.kucia@gmail.com> wrote:
> On Thu, Apr 18, 2019 at 7:07 PM Paul Gofman <gofmanp@gmail.com> wrote:
> > Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=47039
> > There are GPUs which support ARB_shader_bit_encoding but only GLSL version 1.20, while the uvec4 data type is only available since GLSL 1.30. Spotted with Intel Ironlake Mobile.
> > Signed-off-by: Paul Gofman <gofmanp@gmail.com>
>
> I'm not sure, but I think it might be preferable to disable ARB_shader_bit_encoding when the GLSL version is < 1.30. We disable other extensions conditionally in wined3d_adapter_init_gl_caps(), e.g. ARB_shader_bit_encoding or ARB_shader_bit_encoding.
I meant to use ARB_draw_indirect and ARB_texture_multisample as examples (I failed to copy & paste).
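For reference, a rough sketch of what such a conditional disable in wined3d_adapter_init_gl_caps() might look like, following the pattern used for other extensions there (the exact wording and placement are assumptions):

    if (gl_info->glsl_version < MAKEDWORD_VERSION(1, 30))
    {
        /* uvec4 and unsigned integer literals only exist since GLSL 1.30, so the
         * GLSL backend cannot use the extension the way it currently does. */
        TRACE("Disabling ARB_shader_bit_encoding, GLSL version is lower than 1.30.\n");
        gl_info->supported[ARB_SHADER_BIT_ENCODING] = FALSE;
    }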
On 4/19/19 13:35, Józef Kucia wrote:
> I'm not sure, but I think it might be preferable to disable ARB_shader_bit_encoding when the GLSL version is < 1.30. We disable other extensions conditionally in wined3d_adapter_init_gl_caps(), e.g. ARB_shader_bit_encoding or ARB_shader_bit_encoding.

I was thinking of that, but my reasoning for not doing so was that ARB_shader_bit_encoding is potentially usable even with GLSL 1.20 if unsigned integers are avoided, and I thought that marking the extension as disabled while it is actually supported could be a bit obscure.
On Fri, 19 Apr 2019 at 15:42, Paul Gofman <gofmanp@gmail.com> wrote:
> On 4/19/19 13:35, Józef Kucia wrote:
> > I'm not sure, but I think it might be preferable to disable ARB_shader_bit_encoding when the GLSL version is < 1.30. We disable other extensions conditionally in wined3d_adapter_init_gl_caps(), e.g. ARB_shader_bit_encoding or ARB_shader_bit_encoding.
>
> I was thinking of that, but my reasoning for not doing so was that ARB_shader_bit_encoding is potentially usable even with GLSL 1.20 if unsigned integers are avoided, and I thought that marking the extension as disabled while it is actually supported could be a bit obscure.
It may be worth trying to make it work with ivec4() instead of uvec4(). Ironlake is perhaps a little special in that it does have true integer support, but not GLSL 1.30.
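Illustration only: with ivec4() the same kind of immediate would be emitted roughly as below, which should be expressible in GLSL 1.20 with the extension (the formatting is an assumption, not code from this thread):

    /* 1.0f and -1.0f; bit patterns with the sign bit set (e.g. 0xbf800000) have to be
     * printed as negative decimals, since hex literals above INT_MAX and the 'u'
     * suffix are only well-defined from GLSL 1.30. */
    intBitsToFloat(ivec4(1065353216, -1082130432, 0, 0))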
On 4/19/19 14:42, Henri Verbeet wrote:
> On Fri, 19 Apr 2019 at 15:42, Paul Gofman <gofmanp@gmail.com> wrote:
> It may be worth trying to make it work with ivec4() instead of uvec4(). Ironlake is perhaps a little special in that it does have true integer support, but not GLSL 1.30.

Do you think it will be portable when converting special values from signed integers? As I read the specs, converting a NaN with (u)intBitsToFloat() is already undefined, but it seems to work. Won't representing specials through signed integers leave more room for differing behaviour across cards, drivers and GLSL versions? I can certainly implement and test this on what I have here, but it seems hard to be sure how universally it will work.
On Fri, 19 Apr 2019 at 16:38, Paul Gofman <gofmanp@gmail.com> wrote:
> On 4/19/19 14:42, Henri Verbeet wrote:
> > On Fri, 19 Apr 2019 at 15:42, Paul Gofman <gofmanp@gmail.com> wrote:
> > It may be worth trying to make it work with ivec4() instead of uvec4(). Ironlake is perhaps a little special in that it does have true integer support, but not GLSL 1.30.
>
> Do you think it will be portable when converting special values from signed integers? As I read the specs, converting a NaN with (u)intBitsToFloat() is already undefined, but it seems to work. Won't representing specials through signed integers leave more room for differing behaviour across cards, drivers and GLSL versions? I can certainly implement and test this on what I have here, but it seems hard to be sure how universally it will work.

That and "negative" values are the main issues I'd be potentially worried about, but at the same time I wouldn't expect drivers/hardware to go out of their way to distinguish between signed/unsigned here. We could also potentially switch between uvec4 and ivec4 based on whether we have GLSL 1.30 or not, but that may not offer any advantages in practice.
On 4/19/19 15:30, Henri Verbeet wrote:
> On Fri, 19 Apr 2019 at 16:38, Paul Gofman <gofmanp@gmail.com> wrote:
> > On 4/19/19 14:42, Henri Verbeet wrote:
> That and "negative" values are the main issues I'd be potentially worried about, but at the same time I wouldn't expect drivers/hardware to go out of their way to distinguish between signed/unsigned here. We could also potentially switch between uvec4 and ivec4 based on whether we have GLSL 1.30 or not, but that may not offer any advantages in practice.

I will test this and then change uints to ints wherever ARB_shader_bit_encoding is used.
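For completeness, a rough sketch of what the corresponding branch in shader_glsl_append_imm_vec4() might become under that plan (an assumption, not the follow-up patch itself):

    const int *int_values = (const int *)values;

    /* ivec4 exists in GLSL 1.20 and intBitsToFloat() comes from the extension itself;
     * printing signed decimals keeps sign-bit patterns representable without the
     * GLSL 1.30 'u' suffix. */
    string_buffer_sprintf(buffer, "intBitsToFloat(ivec4(%d, %d, %d, %d))",
            int_values[0], int_values[1], int_values[2], int_values[3]);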