http://bugs.winehq.org/show_bug.cgi?id=35207
--- Comment #12 from Stefan Dösinger <stefan@codeweavers.com> ---
I have compared the assembly generated from the bad and good shaders, and I have an idea of what is going on. The game has some code similar to this:
float mul_plus_one(float a, float b) { float c = a + 1.0; return c * b; }
With the 1.0 not hardcoded in the shader code, the compiler generates code like this:
ADD c, a, {uniform_holding_one}
MUL ret, c, b
With the 1.0 known at compile time, the code looks like this:
MAD ret, a, b, b
which calculates ret = a * b + b. I guess that's cheaper in the hardware, and mathematically it's equivalent.
The problem happens with
mul_plus_one(0.0, INF);
In the first case that's 1.0 * INF = INF. In the second case that's 0.0 * INF + INF = NaN + INF = NaN. The NaN results in a 0.0 alpha output from the shader, whereas INF would result in 1.0. A later shader reads the texture, calculates 1 - alpha and writes the result to the screen.
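For illustration only, here is a minimal C sketch of that arithmetic argument (not taken from the game or the driver). fmaf() stands in for the GPU MAD instruction, so this only demonstrates the IEEE behaviour of the two code paths, not what any particular GPU actually does:

#include <math.h>
#include <stdio.h>

/* First path: ADD with a uniform holding 1.0, then MUL. */
static float add_then_mul(float a, float b)
{
    float c = a + 1.0f;   /* ADD c, a, {uniform_holding_one} */
    return c * b;         /* MUL ret, c, b */
}

/* Second path: the compiler folds the constant into a single MAD. */
static float mad_version(float a, float b)
{
    return fmaf(a, b, b); /* MAD ret, a, b, b  ->  a * b + b */
}

int main(void)
{
    float a = 0.0f, b = INFINITY;
    printf("add+mul: %f\n", add_then_mul(a, b)); /* prints inf */
    printf("mad:     %f\n", mad_version(a, b));  /* prints nan, because 0 * inf is NaN */
    return 0;
}

Built with e.g. gcc -lm, the first path yields INF and the second NaN, matching the alpha difference described above.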
I haven't confirmed that this is what's going on beyond a doubt, but I'm quite convinced that this is the problem here. So the problem is essentially bug 26967. (I think 26967 shouldn't have been closed INVALID just because it's fixed on the nvidia drivers by luck.)