https://bugs.winehq.org/show_bug.cgi?id=46821
--- Comment #5 from Paul Gofman gofmanp@gmail.com --- In addition to my Comment #3, p1: after some more testing I figured out the exact conditions under which shaders containing an FLT_MAX float literal are accepted by the NVIDIA driver. My initial tests were biased by the fact that I did not clear the shader cache each time, which is required here.
The behaviour is actually influenced by the x87 FPU control word precision bits and, as can be expected, is reproducible only in 32-bit tests and not in x64. When the FPU precision control (bits 8 and 9 of the FPU control word) is set to double (0x2, the Windows default) or extended (0x3, the Linux default) precision, the issue does not happen and the compiler accepts the FLT_MAX literal value. The issue happens when precision is set to single (0x0).
That's where CSMT comes into play. When CSMT is on, all GL calls are made from a thread created by wined3d code, and there is no explicit FPU control there, so that thread gets the default Windows precision and everything is OK. When CSMT is off, GL calls are made from an application thread, and the FPU flags ultimately depend on the application. If the application does nothing with the FPU flags but also does not specify the D3DCREATE_FPU_PRESERVE flag on d3d9 device creation, d3d9 initializes the x87 FPU precision to single. Given that CSMT is enabled by default in Wine and is meant to solve a number of issues, this one seems to be just one of them, and I no longer think there is anything to fix in respect to it.