On 2015-02-20 at 13:50, Matteo Bruni wrote:
> Yes, I've noticed that too. I haven't really looked into it, but the float-to-signed conversion has always been a bit tricky. In OpenGL, AFAIK, the conversion formula changed at some point (e.g. see table 2.9 in the 2.1 spec vs. paragraph 2.3.5.1 in the 4.5 spec), so it might be that we have to use one instead of the other in our fallback conversion functions too. No idea whether something like that applies to Windows drivers as well.
Well, one problem we have in e.g. L6V5U5 is that we simply left-shift by 3. So the input value 15 / 0xf, which is the highest possible value for the signed channels, becomes 120 (0x78). That doesn't map to the same float result: 120 / 127 is about 0.945 rather than 1.0.
The code Henri uses in convert_s1_uint_d15_unorm looks plausible, but I did not find a textbook explanation for it. Repeating the high bits in the extra space makes sense to me, but I'm not sure why the least significant bit is ORed in.
I'm also unsure how the negative range should be handled. I *think* it's enough to just left-shift those values, but I haven't verified that.
And, as the OpenGL spec points out, exactly representing zero is an issue with these conversions.