Which ones?
Good question. I'm not 100% sure where the implementation difference lies. It might be a hardware vs. software thing and/or possibly the graphics driver. But I discovered this when running the tests on the testbot (on my Windows install it uses the HD values).
What Windows install is that, and with which vendor card?
None of the Windows 11 testbot results match. All three vendors we have (WARP, NVidia, and AMD) use the SD range for both tested sizes, and NVidia moreover yields results that almost match the current Wine algorithm.
I checked one of my contemporaneous graphics cards (ATI Rage 128) and it mostly agrees with the more modern AMD/WARP results, although I had to raise the tolerance a bit (4 seems to work). It also succeeds at YV12 surface creation but fails the subsequent Blt(), so we'll probably need to handle that, assuming this test is useful in the first place, which it may well not be. Sorry I didn't check these sooner...
A skip in that case seems odd; why not just use broken() when comparing colours?
Good point. I think I originally missed the difference, but if I understand correctly: a `skip` will skip the tests on Wine, whereas a `broken` only applies to Windows. Is that right?
skip() / win_skip() by itself doesn't do anything but print a message and increase the "skips" counter. The precise usefulness of that has never quite been clear to me, but if there are skips, that stands out on the test.winehq.org pages.
Both win_skip() and broken() only apply to Windows. A win_skip() under Wine counts as a failure; broken() under Wine basically returns 0 (so in general it also counts as a failure). Generally they're used when, for some reason, we deem that Wine needs to be more strict than Windows actually is, or that some Windows behaviour is due to a bug and we don't want to be bug-compatible in that case.
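For illustration, it would look something like this (a sketch only; `get_surface_color()`, `compare_color()` and the colour values are placeholders, not taken from the actual patch):

```c
/* Sketch: helper names and colour values are hypothetical. */
unsigned int color = get_surface_color(surface, 320, 240);

/* broken() evaluates its argument on Windows but returns 0 on Wine, so the
 * alternative conversion is tolerated on Windows while Wine is still
 * required to produce the primary value. */
ok(compare_color(color, 0x00ab5ccc, 2)                  /* value Wine should produce */
        || broken(compare_color(color, 0x00a3559b, 2)), /* alternative seen on some Windows drivers */
        "Got unexpected colour 0x%08x.\n", color);
```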
It seems this could be a regular `todo_wine`.
Some tests actually pass with the current implementation, so I do need a `todo_wine_if`.
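Roughly like this (sketch only; the condition, helper and values are placeholders):

```c
/* Sketch: the condition and helper are hypothetical.  todo_wine_if() marks
 * the check as an expected failure on Wine only when the condition is true,
 * so the cases that already pass with the current implementation stay as
 * plain ok() checks. */
todo_wine_if(format_is_yv12)
    ok(compare_color(color, expected, 2), "Got unexpected colour 0x%08x.\n", color);
```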
Oops, I misread, sorry.