On 20 August 2015 at 10:03, Stefan Dösinger <stefan@codeweavers.com> wrote:
> @@ -6332,7 +6332,8 @@ static GLuint shader_glsl_generate_ffp_fragment_shader(struct shader_glsl_priv *
> }
>
> if (settings->color_key_enabled)
> - shader_addline(buffer, "if (all(equal(tex0, color_key))) discard;\n");
> + shader_addline(buffer, "if (all(lessThan(abs(tex0 - color_key), vec4(%s)))) discard;\n",
> + wined3d_color_key_precision);
>
...
> +/* Normalization of B5G6R5_UNORM textures is horribly imprecise if we don't have
> + * GL_RGB_565 support. 1 / 256 (~0.0039) works in practice, but is awfully close
> + * to the next possible value in 8 bit formats. 1 / 384 is too precise for some
> + * 5 and 6 bit channel values at least on Nvidia.
> + *
> + * An exact comparison isn't reliable for any format, except for the normalized
> + * values 0.0 and 1.0. */
> +const char *wined3d_color_key_precision = "0.003";
>
Shouldn't you just pass the color key as a range to the shader
instead? E.g. in a 6 bpc format anything between 0.484f and 0.500f
gets quantized to 0x1f, so, other floating point issues aside, you'd
have about 0.008 of slop there (half a quantization step, 1 / (2 * 63)).
For a 5 bpc format you'd even have about 0.016.
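
Roughly what I have in mind is something like the sketch below. The
names (get_color_key_range(), color_key_low / color_key_high) are just
for illustration, and it assumes the key channels are already given as
raw, format-native values with simple round-to-nearest normalization:

struct color_key_range
{
    float low[4];   /* Per-channel lower bound, RGBA order. */
    float high[4];  /* Per-channel upper bound, RGBA order. */
};

/* Expand a color key, given as raw channel values for a format with the
 * given channel bit depths (e.g. {5, 6, 5, 0} for B5G6R5), into the range
 * of normalized float values that quantize back to the keyed value. */
static void get_color_key_range(const unsigned int bit_depths[4],
        const unsigned int key[4], struct color_key_range *range)
{
    unsigned int i, max;

    for (i = 0; i < 4; ++i)
    {
        if (!bit_depths[i])
        {
            /* Channel not present in the format, match anything. */
            range->low[i] = 0.0f;
            range->high[i] = 1.0f;
            continue;
        }

        max = (1u << bit_depths[i]) - 1;
        /* Every normalized value within half a quantization step of
         * key[i] / max ends up on the keyed value, so use that interval,
         * clamped to [0, 1]. */
        range->low[i] = key[i] ? (key[i] - 0.5f) / max : 0.0f;
        range->high[i] = key[i] < max ? (key[i] + 0.5f) / max : 1.0f;
    }
}

The shader test would then be something like
"if (all(greaterThanEqual(tex0, color_key_low)) && all(lessThanEqual(tex0, color_key_high))) discard;"
with the two bounds uploaded as vec4 uniforms, and exact 0.0 / 1.0 keys
would keep working without a special case.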