Hi,
It seems that the patch git-1399edb0925966a802a6a39835025c22c22c18e1.patch, found here: http://www.winehq.org/pipermail/wine-cvs/2005-December/019731.html, causes an OpenGL regression on my system. With the patch, loading War3 causes:

X Error of failed request:  GLXUnsupportedPrivateRequest
  Major opcode of failed request:  143 (GLX)
  Minor opcode of failed request:  17 (X_GLXVendorPrivateWithReply)
  Serial number of failed request:  429
  Current serial number in output stream:  429
This seems to stop the game's loading thread and causes the game to fall back to the fail-safe "Please insert disc" thread, so the game won't load. Reverting the patch fixes the problem.
I have a Radeon 9200 using DRI snapshots from about 20051024.
X Window System Version 6.8.99.901 (6.9.0 RC 1) (Minimal DRI build from X.org tree)
Release Date: 18 October 2005 + cvs
X Protocol Version 11, Revision 0, Release 6.8.99.901
Build Operating System: Linux 2.6.14-rc5 i686 [ELF]
Current Operating System: Linux tesore 2.6.15-rc4-git1 #1 PREEMPT Fri Dec 2 17:03:32 MST 2005 i686
Build Date: 28 October 2005
Is this a DRI problem?
Thanks, Jesse
On Thursday 15 December 2005 19:55, Jesse Allen wrote:
> Hi,
> It seems that the patch git-1399edb0925966a802a6a39835025c22c22c18e1.patch, found here: http://www.winehq.org/pipermail/wine-cvs/2005-December/019731.html, causes an OpenGL regression on my system. With the patch, loading War3 causes:
>
> X Error of failed request:  GLXUnsupportedPrivateRequest
>   Major opcode of failed request:  143 (GLX)
>   Minor opcode of failed request:  17 (X_GLXVendorPrivateWithReply)
>   Serial number of failed request:  429
>   Current serial number in output stream:  429
>
> This seems to stop the game's loading thread and causes the game to fall back to the fail-safe "Please insert disc" thread, so the game won't load. Reverting the patch fixes the problem.
> I have a Radeon 9200 using DRI snapshots from about 20051024.
> X Window System Version 6.8.99.901 (6.9.0 RC 1) (Minimal DRI build from X.org tree)
> Release Date: 18 October 2005 + cvs
> X Protocol Version 11, Revision 0, Release 6.8.99.901
> Build Operating System: Linux 2.6.14-rc5 i686 [ELF]
> Current Operating System: Linux tesore 2.6.15-rc4-git1 #1 PREEMPT Fri Dec 2 17:03:32 MST 2005 i686
> Build Date: 28 October 2005
> Is this a DRI problem?
No, it's only that DRI doesn't implement GLX 1.3. I just sent a patch to fix (i.e. bypass) this regression.
I thought that DRI implemented the GLX 1.3 specs, but it seems they use X code that is too old :( http://cvs.sourceforge.net/viewcvs.py/dri/xc/xc/programs/Xserver/GL/glx/
> Thanks, Jesse
On 12/15/05, Raphael fenix@club-internet.fr wrote:
> No, it's only that DRI doesn't implement GLX 1.3. I just sent a patch to fix (i.e. bypass) this regression.
> I thought that DRI implemented the GLX 1.3 specs, but it seems they use X code that is too old :( http://cvs.sourceforge.net/viewcvs.py/dri/xc/xc/programs/Xserver/GL/glx/
From glxinfo:

server glx version string: 1.2
client glx version string: 1.4
Raphael <fenix <at> club-internet.fr> writes:
> On Thursday 15 December 2005 19:55, Jesse Allen wrote:
> > Hi,
> > It seems that the patch git-1399edb0925966a802a6a39835025c22c22c18e1.patch, found here: http://www.winehq.org/pipermail/wine-cvs/2005-December/019731.html, causes an OpenGL regression on my system. With the patch, loading War3 causes:
> >
> > X Error of failed request:  GLXUnsupportedPrivateRequest
> >   Major opcode of failed request:  143 (GLX)
> >   Minor opcode of failed request:  17 (X_GLXVendorPrivateWithReply)
> >   Serial number of failed request:  429
> >   Current serial number in output stream:  429
> >
> > This seems to stop the game's loading thread and causes the game to fall back to the fail-safe "Please insert disc" thread, so the game won't load. Reverting the patch fixes the problem.
> > I have a Radeon 9200 using DRI snapshots from about 20051024.
> > X Window System Version 6.8.99.901 (6.9.0 RC 1) (Minimal DRI build from X.org tree)
> > Release Date: 18 October 2005 + cvs
> > X Protocol Version 11, Revision 0, Release 6.8.99.901
> > Build Operating System: Linux 2.6.14-rc5 i686 [ELF]
> > Current Operating System: Linux tesore 2.6.15-rc4-git1 #1 PREEMPT Fri Dec 2 17:03:32 MST 2005 i686
> > Build Date: 28 October 2005
> > Is this a DRI problem?
> No, it's only that DRI doesn't implement GLX 1.3. I just sent a patch to fix (i.e. bypass) this regression.
You really don't need to use glXQueryServerString() and glXQueryClientString(). It would be better, easier (no strcmp), and more correct to just use glXQueryVersion(). glXQueryVersion() automatically reports the version common to both the client and server (so in this case 1.2).
Another thing I don't understand in your patch is why you have wine_glx_t and wine_glx defined at global scope. It looks like the only place in your patch they are used is in wgl.c, so why not define wine_glx_t in wgl.c and make wine_glx static? Sorry if I am missing something.
(Also there is some DEPTH_BITS hack in internal_glGetIntegerv which I assume is unrelated to this GLX patch?)
> I thought that DRI implemented the GLX 1.3 specs, but it seems they use X code that is too old :( http://cvs.sourceforge.net/viewcvs.py/dri/xc/xc/programs/Xserver/GL/glx/
Too old perhaps, but that's what DRI (and hence ATI) are using. Both support most of the 1.3 features, so there really isn't much of an issue. The problem is that we cannot assume 1.3, regardless of how old 1.2 is. The reason for the GLX version check is so that we can do the right thing in any case.
Also, GLX 1.4 isn't an official standard; it is still a draft. There is an interesting recent thread about this at http://lists.freedesktop.org/archives/xorg/2005-November/011279.html
Also, it seems like we would want to rely on glXGetProcAddress(), but we need to use glXGetProcAddressARB() instead, since nVidia apparently doesn't export the former.
Regards, Aric
On Friday 16 December 2005 02:26, Aric Cyr wrote:
> Raphael <fenix <at> club-internet.fr> writes:
> > On Thursday 15 December 2005 19:55, Jesse Allen wrote:
> > > Hi,
> > > It seems that the patch git-1399edb0925966a802a6a39835025c22c22c18e1.patch, found here: http://www.winehq.org/pipermail/wine-cvs/2005-December/019731.html, causes an OpenGL regression on my system. With the patch, loading War3 causes:
> > >
> > > X Error of failed request:  GLXUnsupportedPrivateRequest
> > >   Major opcode of failed request:  143 (GLX)
> > >   Minor opcode of failed request:  17 (X_GLXVendorPrivateWithReply)
> > >   Serial number of failed request:  429
> > >   Current serial number in output stream:  429
> > >
> > > This seems to stop the game's loading thread and causes the game to fall back to the fail-safe "Please insert disc" thread, so the game won't load. Reverting the patch fixes the problem.
> > > I have a Radeon 9200 using DRI snapshots from about 20051024.
> > > X Window System Version 6.8.99.901 (6.9.0 RC 1) (Minimal DRI build from X.org tree)
> > > Release Date: 18 October 2005 + cvs
> > > X Protocol Version 11, Revision 0, Release 6.8.99.901
> > > Build Operating System: Linux 2.6.14-rc5 i686 [ELF]
> > > Current Operating System: Linux tesore 2.6.15-rc4-git1 #1 PREEMPT Fri Dec 2 17:03:32 MST 2005 i686
> > > Build Date: 28 October 2005
> > > Is this a DRI problem?
> > No, it's only that DRI doesn't implement GLX 1.3. I just sent a patch to fix (i.e. bypass) this regression.
> You really don't need to use glXQueryServerString() and glXQueryClientString(). It would be better, easier (no strcmp), and more correct to just use glXQueryVersion(). glXQueryVersion() automatically reports the version common to both the client and server (so in this case 1.2).
No, we cannot:
- glXGetFBConfigs is implemented by the glx client (normally when the glx client version is > 1.2, but in many 1.2 implementations it is provided by Xorg).
- glXQueryDrawable is only implemented by the glx server (when the glx server version is > 1.2).
- for other calls we check if the client version is > 1.2, else we use the SGIX extensions.
> Another thing I don't understand in your patch is why you have wine_glx_t and wine_glx defined at global scope. It looks like the only place in your patch they are used is in wgl.c, so why not define wine_glx_t in wgl.c and make wine_glx static? Sorry if I am missing something.
It's for future use by wgl_ext.c. I don't like the idea of duplicating the glx version/extension checks, or of passing the glXGetProcAddressARB pointer as a parameter to functions.
> (Also there is some DEPTH_BITS hack in internal_glGetIntegerv which I assume is unrelated to this GLX patch?)
Yes, I commented it out (I use it for debugging).
> > I thought that DRI implemented the GLX 1.3 specs, but it seems they use X code that is too old :( http://cvs.sourceforge.net/viewcvs.py/dri/xc/xc/programs/Xserver/GL/glx/
> Too old perhaps, but that's what DRI (and hence ATI) are using. Both support most of the 1.3 features, so there really isn't much of an issue. The problem is that we cannot assume 1.3, regardless of how old 1.2 is. The reason for the GLX version check is so that we can do the right thing in any case.
Not really: many GLX 1.3 features are supported by glx clients, and only old clients (i.e. old X versions) report 1.2 caps (and generally they still support the 1.3 extensions). I only have problems with "1.2 glx servers" (which don't support drawable queries), which is the DRI / ATI configuration :(
> Also, it seems like we would want to rely on glXGetProcAddress(), but we need to use glXGetProcAddressARB() instead, since nVidia apparently doesn't export the former.
??? Where do you see glXGetProcAddress used?

wgl.c:1044:
    p_glXGetProcAddressARB = wine_dlsym(opengl_handle, "glXGetProcAddressARB", NULL, 0);
> Regards, Aric
Regards, Raphael
On 12/22/05, Raphael fenix@club-internet.fr wrote:
> On Friday 16 December 2005 02:26, Aric Cyr wrote:
> > Raphael <fenix <at> club-internet.fr> writes:
> > > On Thursday 15 December 2005 19:55, Jesse Allen wrote:
> > You really don't need to use glXQueryServerString() and glXQueryClientString(). It would be better, easier (no strcmp), and more correct to just use glXQueryVersion(). glXQueryVersion() automatically reports the version common to both the client and server (so in this case 1.2).
> No, we cannot:
> - glXGetFBConfigs is implemented by the glx client (normally when the glx client version is > 1.2, but in many 1.2 implementations it is provided by Xorg).
> - glXQueryDrawable is only implemented by the glx server (when the glx server version is > 1.2).
> - for other calls we check if the client version is > 1.2, else we use the SGIX extensions.
Sorry, I think I was asleep when I posted that. What I meant to say is that we don't need any of the glXQueryServerString or glXQueryClientString calls, since we can and should just use glXQueryExtensionsString. In fact, this is what the code does, but it also passes the server and client strings around to each of the query functions, even though none of them use those parameters. I guess you put those in for future use, but I think they will never get used and just clutter things up.
> > > I thought that DRI implemented the GLX 1.3 specs, but it seems they use X code that is too old :( http://cvs.sourceforge.net/viewcvs.py/dri/xc/xc/programs/Xserver/GL/glx/
> > Too old perhaps, but that's what DRI (and hence ATI) are using. Both support most of the 1.3 features, so there really isn't much of an issue. The problem is that we cannot assume 1.3, regardless of how old 1.2 is. The reason for the GLX version check is so that we can do the right thing in any case.
> Not really: many GLX 1.3 features are supported by glx clients, and only old clients (i.e. old X versions) report 1.2 caps (and generally they still support the 1.3 extensions). I only have problems with "1.2 glx servers" (which don't support drawable queries), which is the DRI / ATI configuration :(
By drawable queries do you mean glXDrawableAttribARB? There is zero documentation for that function, or for the entire GLX_ARB_render_texture extension. It seems like the ARB dropped the whole idea; the last mention of it was in 2002, from what I could find. I don't think we can rely on this extension to implement wglSetPbufferAttribARB(). Maybe framebuffer objects would be a better solution; ATI and nVidia both support them now.
> > Also, it seems like we would want to rely on glXGetProcAddress(), but we need to use glXGetProcAddressARB() instead, since nVidia apparently doesn't export the former.
> ??? Where do you see glXGetProcAddress used?
Yeah, I checked the code after I sent this. I just happened to be reading the thread that I mentioned, and thought it would be good advice to throw into the mail.
Regards, Aric
-- Aric Cyr <Aric.Cyr at gmail dot com> (http://acyr.net)