On Thursday, 16 September 2021 15:28:28 EAT, Henri Verbeet wrote:
> It's perhaps worth explicitly initialising "filling_convention_nudge" in wined3d_adapter_vk_init_d3d_info() as well, perhaps with a comment to explain why we're using 0.0f there.
Yeah, sounds like a good idea.
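Something like this, maybe (a quick sketch, not a finished patch; the exact surroundings in adapter_vk.c are from memory):

    static void wined3d_adapter_vk_init_d3d_info(struct wined3d_adapter_vk *adapter_vk,
            uint32_t wined3d_creation_flags)
    {
        struct wined3d_d3d_info *d3d_info = &adapter_vk->a.d3d_info;
        ...
        /* Vulkan's filling convention is well defined, so we never need to
         * nudge geometry to make it hit the intended pixels. */
        d3d_info->filling_convention_nudge = 0.0f;
        ...
    }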
> Would it make sense to do a binary search here instead of trying every value? (Similar to what we do in wined3d_adapter_find_polyoffset_scale())
Between 1/64 and 1/1024 it might. I don't want to search the space between 1/1024 and 0, for the reason I explained in one of the comments (unpredictability with respect to other viewport dimensions). For the 5 non-zero values we're testing, I don't know if it makes any difference in practice. Yeah, I know cache locality isn't a concern here because we're not searching memory contents :-) .
Though for clarity: Do you suggest searching non-power-of-two values too? I am not sure that would give us anything useful. I've avoided them because I expect powers of two to be more reliable regarding rounding when we add them to actual geometry positions. A binary search over just the power-of-two candidates would look roughly like the sketch below.
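For reference (a sketch; test_filling_convention_nudge() is a placeholder for whatever does the actual test draw and readback, and it assumes the outcome is monotonic in the nudge size):

    /* Placeholder for the actual per-nudge test draw. */
    static BOOL test_filling_convention_nudge(struct wined3d_caps_gl_ctx *ctx, float nudge);

    static float wined3d_adapter_find_filling_convention_nudge(struct wined3d_caps_gl_ctx *ctx)
    {
        /* The 5 power-of-two candidates, smallest to largest. */
        static const float nudges[] =
                {1.0f / 1024.0f, 1.0f / 512.0f, 1.0f / 256.0f, 1.0f / 128.0f, 1.0f / 64.0f};
        unsigned int lo = 0, hi = ARRAY_SIZE(nudges) - 1, mid;
        float found = 0.0f;

        while (lo <= hi)
        {
            mid = lo + (hi - lo) / 2;
            if (test_filling_convention_nudge(ctx, nudges[mid]))
            {
                /* This nudge works; remember it and try a smaller one. */
                found = nudges[mid];
                if (!mid)
                    break;
                hi = mid - 1;
            }
            else
            {
                lo = mid + 1;
            }
        }
        return found;
    }

With only 5 candidates the difference to a linear scan is of course academic.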
> Can the driver force multi-sampling on us here? My understanding was that generally speaking those controls only affect the backbuffer, and not FBOs; that was part of the reason for having the "SampleCount" registry setting.
I think at least the nvidia drivers try to be smarter than "just make the backbuffer multisampled", although I haven't played with nvidia-settings' multisample setting in years. (At some point, turning on driver multisampling would break FBO blits from the backbuffer to a texture; at least that got fixed eventually.)
"Enable multisampling at the user's request" is a common enough driver mode for ignoring our settings that it feels worth mentioning in this context. It is a rather fundamental change to rasterization...
> Do we really need a FIXME for that? Getting e.g. a bottom-right result would perhaps be unexpected, but it doesn't seem particularly concerning.
A *-right result is the most obvious case, but I also saw the macOS driver completely discard the test draw at certain magic offsets (2^-24.5 or something along those lines, if I remember right): the top edge wasn't on the pixel yet, but the bottom edge had already moved away.
Bottom line: we'd potentially apply a nudge for the wrong reason, and the nudge can break things just as easily as it fixes them. That seems worthy of a FIXME to me.
I could be persuaded to return TRUE in case of an unexpected result. That way we'd either not apply a quirk if things go wrong, or just apply too small a nudge (if we keep the linear search). In that case I am less invested in the FIXME, but we'd depart more from the current behaviour. Roughly like the sketch below.
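Something along these lines (a sketch; "color" and the expected values are placeholders for the actual readback, not the real patch):

    /* Hypothetical tail of the per-nudge test: returns TRUE if this nudge
     * is acceptable, FALSE if we should keep searching. */
    if (color == expected_top_left)
        return TRUE;
    if (color == expected_bottom_right || !color)
    {
        /* A *-right convention, or the draw was discarded entirely (seen
         * on the macOS driver at some magic offsets). Claim success so we
         * don't grow the nudge further for the wrong reason. */
        FIXME("Unexpected filling convention test result %#x.\n", color);
        return TRUE;
    }
    return FALSE;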