On Wed, Sep 05, 2012 at 11:26:23AM -0700, Henri Verbeet wrote:
On 5 September 2012 18:52, Andy Ritger <aritger@nvidia.com> wrote:
At first glance, I agree that would be easier for applications, but that approach has some drawbacks:
- lies to the user/application about what timings are actually being driven to the monitor
- the above bullet causes confusion: the timings reported in the monitor's on-screen display don't match what the X server reports (see the sketch after this list)
- user/application doesn't get complete control over what actual timings are being sent to the monitor
- does not expose the full flexibility of the hardware, e.g., arbitrary positioning of the ViewPortOut within the active raster
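To make the first two bullets concrete, this is roughly what a client sees when it asks the X server which timings are being driven (an untested RandR 1.2 sketch, error handling mostly omitted); with driver-generated modes these numbers need not match what the monitor's on-screen display shows:

/* Untested sketch: print the timings the X server reports for each
 * active CRTC. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    XRRScreenResources *res;
    int i, j;

    if (!dpy) return 1;
    res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
    if (!res) return 1;

    for (i = 0; i < res->ncrtc; i++)
    {
        XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, res->crtcs[i]);
        if (crtc->mode != None)
        {
            for (j = 0; j < res->nmode; j++)
            {
                XRRModeInfo *m = &res->modes[j];
                if (m->id != crtc->mode) continue;
                printf("CRTC %d: %ux%u, pixel clock %.2f MHz, "
                       "htotal %u, vtotal %u, ~%.2f Hz\n",
                       i, m->width, m->height, m->dotClock / 1e6,
                       m->hTotal, m->vTotal,
                       (double)m->dotClock / (m->hTotal * m->vTotal));
            }
        }
        XRRFreeCrtcInfo(crtc);
    }
    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}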
Perhaps, but none of that changes, as far as Win32 applications are concerned, if we generate modes in Wine instead of in the kernel.
Agreed.
From Wine's point of view, we'd just get a bunch of extra code to maintain because nvidia does things differently from everyone else.
Eventually, I hope NVIDIA won't be the only one taking this approach to viewport configuration; the drawbacks above aren't NVIDIA-specific.
The concern about added code maintenance to Wine is fair; is that concern lessened if the details of viewport configuration are abstracted by a new standard library?
I imagine the counter-arguments include:
- we already have the "scaling mode" output property in most drivers, and that is good enough (see the sketch after this list)
- Transformation matrix and Border are too low-level for most applications
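For reference, the first of those is an ordinary RandR output property, so an application can already poke it with something like the following untested sketch. The property name and the value strings ("Full", "Center", "Full aspect", ...) vary by driver, and the Transformation matrix is set per CRTC with XRRSetCrtcTransform:

/* Untested sketch: set the "scaling mode" output property on every
 * connected output.  Property name and values are driver-dependent. */
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    XRRScreenResources *res;
    Atom prop, value;
    int i;

    if (!dpy) return 1;
    res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
    prop = XInternAtom(dpy, "scaling mode", True);  /* only if it already exists */
    value = XInternAtom(dpy, "Full aspect", False); /* driver-dependent value */

    if (res && prop != None)
    {
        for (i = 0; i < res->noutput; i++)
        {
            XRROutputInfo *info = XRRGetOutputInfo(dpy, res, res->outputs[i]);
            if (info->connection == RR_Connected)
                XRRChangeOutputProperty(dpy, res->outputs[i], prop, XA_ATOM, 32,
                                        PropModeReplace,
                                        (unsigned char *)&value, 1);
            XRRFreeOutputInfo(info);
        }
        XRRFreeScreenResources(res);
    }
    XFlush(dpy);
    XCloseDisplay(dpy);
    return 0;
}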
For the first counter-argument: I'm trying to make the case that providing the full flexibility, and being truthful to users/applications about mode timings, is valuable enough to merit a change (hopefully even in the drivers that currently expose a "scaling mode" output property).
I must say that I'm having some trouble imagining what not generating standard modes would allow someone to do that they couldn't do before. In terms of figuring out the "real" timings, the RandR "preferred" mode is probably close enough, but I suppose it should be fairly easy to extend RandR to explicitly mark specific modes as "native". I imagine that for most applications it's just an implementation detail whether the display panel has a scaler itself or whether the scaling is done by the GPU, though. Either way, that seems like a discussion more appropriate for e.g. dri-devel.
Fair enough; I'll discuss with the other drivers, first.
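(For anyone following along: the "preferred" modes are already discoverable through RandR today; the first npreferred entries of an output's mode list are the ones the driver/EDID prefers. A rough, untested sketch:)

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    XRRScreenResources *res;
    int i, j, k;

    if (!dpy) return 1;
    res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
    if (!res) return 1;

    for (i = 0; i < res->noutput; i++)
    {
        XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
        /* The first 'npreferred' modes of the output are the preferred ones. */
        for (j = 0; j < out->npreferred; j++)
            for (k = 0; k < res->nmode; k++)
                if (res->modes[k].id == out->modes[j])
                    printf("%s: preferred mode %s\n", out->name, res->modes[k].name);
        XRRFreeOutputInfo(out);
    }
    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}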
Perhaps there's a use case for a "big screen" setup, but that too is something that's probably best handled at the RandR / X server level rather than in Wine.
I don't think we can have it both ways:
* RandR 1.1 gives you one "big screen" per X screen; the user can configure what is within that big screen via NVIDIA's MetaModes.
* RandR 1.2 gives applications control of each individual CRTC/output.
Are you suggesting we go back to something more like RandR 1.1?
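To illustrate the difference, here is a rough, untested sketch of the two interfaces. Under 1.1 the layout inside the big screen comes from driver configuration such as MetaModes; under 1.2 the caller places each CRTC itself:

#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

/* RandR 1.1: pick one of the advertised sizes/rates for the whole screen. */
void set_screen_size_1_1(Display *dpy, Window root, int size_index, short rate)
{
    XRRScreenConfiguration *conf = XRRGetScreenInfo(dpy, root);
    XRRSetScreenConfigAndRate(dpy, conf, root, size_index,
                              RR_Rotate_0, rate, CurrentTime);
    XRRFreeScreenConfigInfo(conf);
}

/* RandR 1.2: place a specific mode on a specific CRTC at (x, y),
 * driving one specific output. */
void set_crtc_1_2(Display *dpy, Window root, RRCrtc crtc,
                  RRMode mode, RROutput output, int x, int y)
{
    XRRScreenResources *res = XRRGetScreenResources(dpy, root);
    XRRSetCrtcConfig(dpy, res, crtc, CurrentTime, x, y, mode,
                     RR_Rotate_0, &output, 1);
    XRRFreeScreenResources(res);
}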
I don't think you can actually do "immersive gaming" properly without support from the application, though; you'll get fairly significant distortion at the edges if you just render to such a setup as if it were a single very wide display.
I'm sorry; I don't understand the distortion concern. Are you referring to the monitor bezels occupying physical space but not pixel space in the X screen? I believe people often address that by configuring "dead space" in the X screen between their monitors.
In any case, my impression is that multi-monitor fullscreen gaming is not an uncommon use case.
(Also, uneven numbers of displays are probably more useful for such a thing than even numbers of displays.)
Agreed.
Since we have some differing viewpoints that won't be resolved quickly, how about, as a compromise, we add a way for users to force Wine from RandR 1.2 back to RandR 1.1? That would at least let users achieve some of the configurations they cannot get with top of tree. If that seems fair, what is the preferred configuration mechanism for it? Just a simple environment variable?
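Something along these lines, purely as a hypothetical sketch (the variable name is made up here for illustration; whether an environment variable or some other mechanism is right is exactly the question):

#include <stdlib.h>

/* Hypothetical sketch only: gate the RandR 1.2 code path behind an
 * environment variable.  The name WINE_USE_XRANDR12 is invented for
 * illustration and is not an existing Wine setting. */
int use_xrandr_1_2(void)
{
    const char *env = getenv("WINE_USE_XRANDR12");
    if (env && env[0] == '0') return 0;  /* explicitly disabled: fall back to 1.1 */
    return 1;                            /* default: use RandR 1.2 */
}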
Thanks,
- Andy