https://bugs.winehq.org/show_bug.cgi?id=51420
--- Comment #44 from Henri Verbeet <hverbeet@gmail.com> ---
(In reply to Sveinar Søpler from comment #43)
> (In reply to Kurt Kartaltepe from comment #42)
> So, the patch I linked above that "forces" NVIDIA drivers to use RandR 1.0,
> and was removed by Proton (possibly for this reason alone), is what is
> causing this weirdness?
Proton has the "fullscreen hack", which is worse in some ways, but does avoid this particular issue, simply by never changing the display mode.
> The text in the patch:
> /* Some (304.64, possibly earlier) versions of the NVIDIA driver only
>  * report a DFP's native mode through RandR 1.2 / 1.3. Standard DMT modes
>  * are only listed through RandR 1.0 / 1.1. This is completely useless,
>  * but NVIDIA considers this a feature, so it's unlikely to change. The
>  * best we can do is to fall back to RandR 1.0 and encourage users to
>  * consider more cooperative driver vendors when we detect such a
>  * configuration. */
> Do the values from https://bugs.winehq.org/show_bug.cgi?id=51420#c41 show
> why this fallback is necessary for NVIDIA?
It does, in fact. Note that the first xrandr output in that comment shows the "DVI-D-0" output as supporting only a single display mode; it is the primary output as well. This means that if an application tried to switch to e.g. 1024x768@60Hz, it would simply fail. It's not uncommon for older applications in particular to assume that standard display modes like 1024x768, 800x600, 640x480, etc. are always available.
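For anyone who wants to verify what their own driver exposes, a minimal diagnostic along these lines (a sketch assuming Xlib and the RandR 1.2+ client library, not code from the Wine tree) prints the per-output mode lists; on the configuration from comment 41 it would print a single mode for DVI-D-0:

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    /* Build with: cc list_modes.c -o list_modes -lX11 -lXrandr */
    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        XRRScreenResources *res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
        for (int i = 0; i < res->noutput; ++i)
        {
            XRROutputInfo *output = XRRGetOutputInfo(dpy, res, res->outputs[i]);
            printf("%s: %d mode(s)\n", output->name, output->nmode);
            for (int j = 0; j < output->nmode; ++j)
            {
                for (int k = 0; k < res->nmode; ++k)
                {
                    const XRRModeInfo *mode = &res->modes[k];
                    if (mode->id != output->modes[j]) continue;
                    /* Refresh rate is pixel clock / (htotal * vtotal). */
                    double refresh = mode->hTotal && mode->vTotal
                            ? (double)mode->dotClock / (mode->hTotal * mode->vTotal)
                            : 0.0;
                    printf("  %ux%u@%.0fHz\n", mode->width, mode->height, refresh);
                }
            }
            XRRFreeOutputInfo(output);
        }
        XRRFreeScreenResources(res);
        XCloseDisplay(dpy);
        return 0;
    }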
It's perhaps worth pointing out that the RandR 1.0 output in comment 41 isn't quite right either; it shows made-up refresh rates. This is typically caused by having the "DynamicTwinView" xorg.conf option enabled. It can cause similar issues for applications that insist on particular display modes (like 1024x768@60) always being available.
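For reference, on NVIDIA driver versions that still honour it, that's a device-section option in xorg.conf; something along these lines (with the identifier adapted to the local configuration) disables it:

    Section "Device"
        Identifier "NVIDIA Card"
        Driver     "nvidia"
        Option     "DynamicTwinView" "false"
    EndSection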
> If I revert the "RandR fallback for NVIDIA" patch, what would happen? Would
> I only be able to choose the native resolution for a 3D app, for instance?
> Guess I'll have to experiment a bit when I come home :)
In a well-behaved application, yes, you will only be able to use the single listed display mode. Less well-behaved applications may simply crash, or otherwise misbehave in hard-to-diagnose ways, because they e.g. end up using a never-tested fallback path.
The "proper" way to resolve this specific bug is probably to cache the current display mode and listen to RRScreenChangeNotify events when using the RandR 1.0 path. (I.e., in xrandr10_get_current_mode().) That approach may not be a bad idea for the RandR 1.4+ path either, but it should be much less necessary there.
Of course it would be even better if the NVIDIA drivers would synthesise the standard display modes when using RandR 1.4+ on configurations like this, like pretty much all the other Linux GPU drivers do, but I'm not holding my breath.