On Sat, 7 Aug 2021 at 09:00, Stefan Dösinger <stefan@codeweavers.com> wrote:
It seems a little awkward to tie this to WINED3D_PIXEL_CENTER_INTEGER. Would it make sense to instead detect the filling convention during adapter initialisation, so that we can get rid of this for d3d9 and before as well if possible? We could then just store the filling convention offset in the wined3d_d3d_info structure.
Yeah, I toyed with the idea of detecting it at adapter init, but for that I'd need a card with different behavior; otherwise I am just shooting blind. The ones I tested (AMD Radeon 560, GeForce 650M, Intel HD 4000, Intel HD Graphics 615, Apple M1 - the last 3 only on macOS, the others on both macOS and Linux) behaved uniformly. I remember that back in the GeForce 7/8 days we had GPU-specific issues, but unfortunately my GeForce 7s all died and my r500 card is a long distance away.
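For illustration, here is a rough sketch of what such a probe at adapter init could look like. It is not wined3d code: it assumes a compatibility GL context with FBO support, writes the GL entry points as plain calls rather than going through wined3d's function pointers, and the helper name, the 4x4 target size and the mapping from the readback to an offset are all made up.

/* Illustration only - not wined3d code. Draws an edge that lands exactly
 * on a column of GL pixel centers and checks which way the GPU resolves
 * the tie. */
static float probe_filling_convention_offset(void)
{
    unsigned char pixels[4 * 4 * 4];
    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 4, 4, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

    glViewport(0, 0, 4, 4);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    /* With identity transforms, NDC x = -0.25 maps to window x = 1.5 on a
     * 4 pixel wide target, so the quad's right edge passes exactly through
     * the pixel centers of column 1. Whether that column gets filled is
     * the tie case that differs between GPUs; column 0 is unambiguously
     * inside either way. */
    glColor3f(1.0f, 1.0f, 1.0f);
    glBegin(GL_QUADS);
    glVertex2f(-1.0f, -1.0f);
    glVertex2f(-0.25f, -1.0f);
    glVertex2f(-0.25f, 1.0f);
    glVertex2f(-1.0f, 1.0f);
    glEnd();

    glReadPixels(0, 0, 4, 4, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &tex);

    /* Red channel of pixel (1, 2). How to map the probe result to an
     * offset is a separate question (left/top edges would need the same
     * treatment); this is just to show the shape of the thing. */
    if (pixels[(2 * 4 + 1) * 4])
        return 63.0f / 128.0f; /* GPU fills the tie pixel, keep the nudge. */
    return 1.0f / 2.0f;        /* Exact ties already resolved the d3d way. */
}

Adapter init could then stash the result in something like a filling_convention_offset field in wined3d_d3d_info (no such field exists today), and the viewport/projection setup would read it instead of hardcoding 63/128.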
Being able to detect the convention correctly would probably be a good start. (And note that originally part of the issue was that the filling convention ended up being different for onscreen and offscreen render targets due to the y-flip; "AlwaysOffscreen" got rid of at least that part of the issue.) I still have an NVIDIA Tesla I would be able to test this on, although I don't remember for sure whether it was originally affected by this issue. Perhaps others, Matteo for example, would also be able to help with testing.
Afaics we have no test that checks whether we are doing the right thing in d3d9 and earlier. I'll add one; that should hopefully give some clues as to whether the 63.0/128.0 offset is still correct on today's GPUs or whether we need a flat 1.0/2.0 on some. If it's the former - and I am not aware of any d3d <= 9 games with pixel boundary issues right now - there might be some d3d9/d3d10 difference.
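For illustration, a rough sketch of the kind of test this would be, in the style of dlls/d3d9/tests/visual.c. getPixelColor() and color_match() are the helpers those tests already use, the 640x480 backbuffer with an auto depth stencil matches their default device, and the exact coordinates and expectations are assumptions rather than the final test:

static void test_filling_convention(IDirect3DDevice9 *device)
{
    static const struct
    {
        float x, y, z, rhw;
        DWORD diffuse;
    }
    quad[] =
    {
        {  -0.5f,  -0.5f, 0.0f, 1.0f, 0xffff0000},
        { 320.0f,  -0.5f, 0.0f, 1.0f, 0xffff0000},
        {  -0.5f, 479.5f, 0.0f, 1.0f, 0xffff0000},
        { 320.0f, 479.5f, 0.0f, 1.0f, 0xffff0000},
    };
    DWORD color;
    HRESULT hr;

    hr = IDirect3DDevice9_Clear(device, 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
            0xff000000, 1.0f, 0);
    ok(SUCCEEDED(hr), "Failed to clear, hr %#x.\n", hr);
    hr = IDirect3DDevice9_SetFVF(device, D3DFVF_XYZRHW | D3DFVF_DIFFUSE);
    ok(SUCCEEDED(hr), "Failed to set FVF, hr %#x.\n", hr);

    hr = IDirect3DDevice9_BeginScene(device);
    ok(SUCCEEDED(hr), "Failed to begin scene, hr %#x.\n", hr);
    hr = IDirect3DDevice9_DrawPrimitiveUP(device, D3DPT_TRIANGLESTRIP, 2, quad, sizeof(*quad));
    ok(SUCCEEDED(hr), "Failed to draw, hr %#x.\n", hr);
    hr = IDirect3DDevice9_EndScene(device);
    ok(SUCCEEDED(hr), "Failed to end scene, hr %#x.\n", hr);

    /* Column 319 is well inside the quad. */
    color = getPixelColor(device, 319, 240);
    ok(color_match(color, 0x00ff0000, 1), "Got unexpected color 0x%08x.\n", color);

    /* The quad's right edge at x == 320.0 passes exactly through the
     * pixel centers of column 320, so d3d's fill convention says that
     * column stays black. After translating to GL's half-integer pixel
     * centers with a plain 1/2 offset the edge lands exactly on a GL
     * pixel center as well, and whether the column gets filled is then
     * up to the GPU - that is the boundary the 63/128 offset is meant
     * to stay away from. */
    color = getPixelColor(device, 320, 240);
    ok(color_match(color, 0x00000000, 1), "Got unexpected color 0x%08x.\n", color);
}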
Yeah, we don't have existing tests for this.