On Fri Jun 17 19:11:37 2022 +0000, Matteo Bruni wrote:
I have tweaked / extended the tests a little further, see [gl-depth-stencil.txt](/uploads/4ae59fc98d8067d7babb524984574df7/gl-depth-stencil.txt). I only tested that on Nvidia for now, curious if they also pass with AMD with those changes.
I ran the test on my AMD / Windows machine and added some traces / additional tests. I am attaching a diff to the test (which includes your patch as well) and the output from AMD / Windows.
It seems that, unfortunately, the 32/8 test is a bit inconclusive here, as somehow the output pixel format is 32 / 8 (honestly not sure what that means, but that's what I see here; note no test failure on line 382 and the trace output on line 381).
The rest of the tests suggest that it prefers 24-bit depth whenever in doubt; see, e.g., the 8x8 test, trace at line 364: it could have chosen 16x8 but preferred 24x8. From what I see, the pattern is that whenever stencil is requested it returns depths >= 24, which is what my current patch does. If we prioritized stencil formats (when requested) over the depth match, the logic would probably look more straightforward, but that would break all those tests if we tightened them to what is actually returned on AMD. Also, as far as specific games depending on the stencil choice are concerned: if we plainly prioritize stencil presence, they will get 16-bit depth on Wine while getting 24-bit on Windows, and that difference may matter (even if it doesn't break things completely, the way returning no stencil format would).
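To make the rule I'm describing concrete, here is a minimal C sketch of it; the struct and function names are made up for illustration and are not the actual wined3d code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical format description, for illustration only. */
struct ds_format
{
    unsigned int depth_bits;
    unsigned int stencil_bits;
};

/* When stencil is requested, only accept candidates that provide stencil
 * and at least 24 depth bits, mirroring what AMD / Windows appears to
 * return; otherwise just require enough depth bits. */
static bool ds_format_acceptable(const struct ds_format *candidate,
        unsigned int requested_depth, unsigned int requested_stencil)
{
    if (requested_stencil)
        return candidate->stencil_bits >= requested_stencil
                && candidate->depth_bits >= 24;
    return candidate->depth_bits >= requested_depth;
}

int main(void)
{
    static const struct ds_format d16s8 = {16, 8}, d24s8 = {24, 8};

    /* For an 8x8 request, 16x8 would satisfy a naive stencil-first rule,
     * but only 24x8 passes under the "depth >= 24 when stencil is
     * requested" rule observed in the traces. */
    printf("16x8 acceptable for 8/8 request: %d\n",
            ds_format_acceptable(&d16s8, 8, 8));
    printf("24x8 acceptable for 8/8 request: %d\n",
            ds_format_acceptable(&d24s8, 8, 8));
    return 0;
}
```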
What do you think?
[diff.txt](/uploads/4355976f66b6decb44d43c5b126e5479/diff.txt) [output.txt](/uploads/13bfe7a03d34c1303f04612c9d809e0e/output.txt)