http://bugs.winehq.org/show_bug.cgi?id=18993
--- Comment #6 from Dorek Biglari dbiglari@gmail.com 2009-12-30 02:11:22 ---
It scales the depth bias value to be in the correct range. I know it works on my hardware, an NVidia 6xxx running World of Warcraft with a 24-bit depth buffer, but I'd like to know whether it works on all configurations, or whether it helps other apps. The number I divide by is taken from this post: http://aras-p.info/blog/2008/06/12/depth-bias-and-the-power-of-deceiving-you...
The author is discussing a cross-platform app he's writing using OpenGL and Direct3D. The Direct3D users were complaining about z-fighting. I'll quote the pertinent parts:
"How do you apply depth bias in OpenGL? Enable GL_POLYGON_OFFSET_FILL and set glPolygonOffset to something like -1, -1. This works.
How do you apply depth bias in Direct3D 9? Conceptually, you do the same. There are DEPTHBIAS and SLOPESCALEDEPTHBIAS render states that do just that. And so we did use them.
And people complained about funky results on Windows."
He then goes on to say that he thought it was because people were using terrible near/far planes, which is what I would think too, and I think that's probably why Lisa's patch also seems to correct the problem, since it messes with the projection. But he hits on an issue I've run into with Direct3D before: the documentation on depth bias is wrong.
Continuing to quote from him:
"First, depth bias documentation on Direct3D is wrong. Depth bias is not in 0..16 range, it is in 0..1 range which corresponds to entire range of depth buffer."
...
"So yeah, the proper multiplier for depth bias on Direct3D with 24 bit depth buffer should be not 1.0/65535.0, but something like 1.0/(2^24-1). Except that this value is really small, so something like 4.8e-7 should be used instead (see Lengyel’s GDC2007 talk). Oh, but for some reason it’s not really enough in practice, so something like 2.0*4.8e-7 should be used instead (tested so far on GeForce 8600, Radeon HD 3850, Radeon 9600, Intel 945, reference rasterizer). Oh, and the same value should be used even when a 16 bit depth buffer is used; using 1.0/65535.0 multiplier with 16 bit depth buffer produces way too large bias."
I've been running WoW under Wine with this patch; it fixes the problem and I haven't noticed any side effects. Out of curiosity, why do you think it looks incorrect, Henri? If I'm wrong here I'd like to know. I assume you've seen the z-fighting that Lisa reported. Let me know your thoughts.