Tomas Carnecky wrote:
> Changelog:
> - fail if the drawable and context Visual IDs don't match.
>
> Why? Because this will produce a BadMatch error and crash the application, so instead of crashing, rather return GL_FALSE and let the application handle it.
Please don't apply this patch yet. It fixes one case but there is more to be aware of.
This BadMatch bug is triggered if an application creates a window (with a default Visual ID) and a custom (non-default) pixel format; the IDs were 0x21 and 0x23 in my case (glxinfo/xdpyinfo report Visual IDs in the range 0x21 to 0x70).
But! I've seen that World of Warcraft calls wglMakeCurrent() with a drawable that has Visual ID 0x71 and a context with Visual ID 0x21. Now 0x71 is not defined (according to the glxinfo output), but it works fine, i.e. glXMakeCurrent() doesn't produce the X error in that case. So I went up the backtrace, into create_glxpixmap(). I let this function fail if it would produce an X error, i.e. if "the depth of the pixmap does not match the GLX_BUFFER_SIZE value of vis". Then I had to modify wglMakeCurrent() to respect the create_glxpixmap() failure and return FALSE. It works well so far; the framerate in WoW went up from ~20 to up to 70 fps, and the average is somewhere between 40-50! This is incredible.
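The failure propagation described above might look roughly like the following sketch. The function names come from the Wine code referenced in this mail; the condition, the return values and the surrounding details are illustrative assumptions, not the actual patch:

```c
/* Illustrative sketch only (not the real patch, not compilable on its own). */

/* In create_glxpixmap(): bail out before triggering the X error, i.e. when
 * the "depth of pixmap does not match the GLX_BUFFER_SIZE value of vis". */
if (pixmap_depth != visual_buffer_size)   /* hypothetical check */
    return 0;                             /* hypothetical failure value */

/* In wglMakeCurrent(): respect that failure instead of crashing later. */
GLXPixmap glxpixmap = create_glxpixmap(physDev);
if (!glxpixmap)
    return FALSE;                         /* let the application handle it */
```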
So, in this attachment you'll find a patch that does what I've just described. I can't test it on anything other than WoW, so it would be great if someone could review it and test it with other OpenGL/D3D applications.
Also, it would be great if we could put the *Swap*Buffers() functions into their own log domain, something like 'swapbuffers', because their trace output is usually useless: it only helps when you explicitly want to see whether these functions are called; otherwise it just fills the log with garbage.
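For reference, a separate log domain along these lines could be declared with Wine's debug macros (wine/debug.h) roughly like this. This is a sketch: the channel name 'swapbuffers' is only this mail's suggestion, and the function body is illustrative:

```c
#include "wine/debug.h"

WINE_DEFAULT_DEBUG_CHANNEL(wgl);
WINE_DECLARE_DEBUG_CHANNEL(swapbuffers);  /* proposed extra channel */

BOOL WINAPI wglSwapBuffers(HDC hdc)
{
    /* Only printed with WINEDEBUG=+swapbuffers, so it no longer
     * floods the default +wgl trace. */
    TRACE_(swapbuffers)("(%p)\n", hdc);
    /* ... actual SwapBuffers work ... */
    return TRUE;
}
```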
thanks tom
Hi,
> So, in this attachment you'll find a patch that does what I've just described. I can't test it on anything other than WoW, so it would be great if someone could review it and test it with other OpenGL/D3D applications.
No effects noticed with Half-Life 1 (GL), Warcraft III (GL and D3D) and Jedi Academy (GL).
That patch gave me a hint for a possible reason for the WineD3D slowness with a few games :)
Stefan
Stefan Dösinger wrote:
> Hi,
>
>> So, in this attachment you'll find a patch that does what I've just described. I can't test it on anything other than WoW, so it would be great if someone could review it and test it with other OpenGL/D3D applications.
>
> No effects noticed with Half-Life 1 (GL), Warcraft III (GL and D3D) and Jedi Academy (GL).
Maybe it was because of the earlier OpenGL patch 'Store GL context in TEB'. But I didn't notice such an increase then, only from ~20 to ~30 fps.
> That patch gave me a hint for a possible reason for the WineD3D slowness with a few games :)
good, at least something :)
Something else: now I know why there is this VisualID mismatch. Someone didn't read the GLX spec. In opengl32/wgl.c:describeDrawable() you call glXQueryDrawable() with GLX_VISUAL_ID, but that's not allowed! Only GLX_FBCONFIG_ID and three others are, according to the GLX 1.3/1.4 spec [1].
According to the spec, you can get the GLXFBConfig from a GLXDrawable using glXQueryDrawable(), and then the Visual from the GLXFBConfig using glXGetFBConfigAttrib(). I'll send a patch.
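A sketch of the two-step lookup the spec describes (untested here, and it needs a GLX 1.3 capable display connection; the helper name and the error handling are my own):

```c
#include <GL/glx.h>

static VisualID get_drawable_visualid(Display *dpy, GLXDrawable drawable)
{
    unsigned int fbconfig_id = 0;
    int visual_id = 0, n = 0;
    int attribs[3];
    GLXFBConfig *configs;

    /* Step 1: GLX_FBCONFIG_ID is one of the attributes that
     * glXQueryDrawable() is actually required to support. */
    glXQueryDrawable(dpy, drawable, GLX_FBCONFIG_ID, &fbconfig_id);

    /* Step 2: find the GLXFBConfig with that ID... */
    attribs[0] = GLX_FBCONFIG_ID;
    attribs[1] = (int)fbconfig_id;
    attribs[2] = None;
    configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &n);
    if (!configs || n < 1) return 0;

    /* ...and ask it for the associated X Visual ID. */
    glXGetFBConfigAttrib(dpy, configs[0], GLX_VISUAL_ID, &visual_id);
    XFree(configs);
    return (VisualID)visual_id;
}
```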
tom
On Friday 24 March 2006 00:13, Tomas Carnecky wrote:
> Stefan Dösinger wrote:
>> Hi,
>>
>>> So, in this attachment you'll find a patch that does what I've just described. I can't test it on anything other than WoW, so it would be great if someone could review it and test it with other OpenGL/D3D applications.
>>
>> No effects noticed with Half-Life 1 (GL), Warcraft III (GL and D3D) and Jedi Academy (GL).
>
> Maybe it was because of the earlier OpenGL patch 'Store GL context in TEB'. But I didn't notice such an increase then, only from ~20 to ~30 fps.
>
>> That patch gave me a hint for a possible reason for the WineD3D slowness with a few games :)
>
> Good, at least something :)
>
> Something else: now I know why there is this VisualID mismatch. Someone didn't read the GLX spec.
It's me. I have read the spec. And it works (but it's not really well documented) :)
> In opengl32/wgl.c:describeDrawable() you call glXQueryDrawable() with GLX_VISUAL_ID, but that's not allowed! Only GLX_FBCONFIG_ID and three others are, according to the GLX 1.3/1.4 spec [1].
>
> According to the spec, you can get the GLXFBConfig from a GLXDrawable using glXQueryDrawable(), and then the Visual from the GLXFBConfig using glXGetFBConfigAttrib().
Yes, I know that. The problem is that glXQueryDrawable() is not always available (only the NVIDIA drivers implement it), so expect that to break things for a major part of Wine's users.
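Given that, any such code would presumably need a runtime guard. A minimal sketch (glXQueryVersion() is core GLX, so it should be safe to call on any driver):

```c
#include <GL/glx.h>

/* Returns non-zero if the connection reports GLX >= 1.3, i.e.
 * glXQueryDrawable() and the FBConfig API should be present. */
static int have_glx_1_3(Display *dpy)
{
    int major = 0, minor = 0;
    if (!glXQueryVersion(dpy, &major, &minor))
        return 0;
    return major > 1 || (major == 1 && minor >= 3);
}

/* Callers would take the GLX 1.3 path only when this is true,
 * and fall back to the old visual-based path otherwise. */
```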
Regards, Raphael
Hi,
> Maybe it was because of the earlier OpenGL patch 'Store GL context in TEB'. But I didn't notice such an increase then, only from ~20 to ~30 fps.
Hm.
Something increased the speed of Half-Life / Counter-Strike drastically. It isn't your patch, but some change that is already in CVS. The change is this:
Before, Half-Life (non-Steam) was running at 40-71 fps, depending on the situation. Now it runs at 70-71 fps constantly.
Counter-Strike 1.6 (Steam) ran at 60-71 fps before if there were no bots in the game. Adding 8 bots (or maybe human players, I didn't check that) made the fps drop to 20 in many rendering situations. Now this doesn't happen any more; the game runs at 60-71 fps in general, and only when many players are visible may it drop down to 30 fps.
Jedi Academy got a major speed boost too, but that's mostly because I activated compressed textures in the game.
Btw, the card is a Radeon Mobility M9, with the DRI driver from Xorg 7.0.
On Thursday 23 March 2006 20:26, Tomas Carnecky wrote:
> Tomas Carnecky wrote:
>> Changelog:
>> - fail if the drawable and context Visual IDs don't match.
>>
>> Why? Because this will produce a BadMatch error and crash the application, so instead of crashing, rather return GL_FALSE and let the application handle it.
>
> Please don't apply this patch yet. It fixes one case but there is more to be aware of.
>
> This BadMatch bug is triggered if an application creates a window (with a default Visual ID) and a custom (non-default) pixel format; the IDs were 0x21 and 0x23 in my case (glxinfo/xdpyinfo report Visual IDs in the range 0x21 to 0x70).
Yes, the classic Wine problem :(
> But! I've seen that World of Warcraft calls wglMakeCurrent() with a drawable that has Visual ID 0x71 and a context with Visual ID 0x21. Now 0x71 is not defined (according to the glxinfo output), but it works fine, i.e. glXMakeCurrent() doesn't produce the X error in that case. So I went up the backtrace, into create_glxpixmap(). I let this function fail if it would produce an X error, i.e. if "the depth of the pixmap does not match the GLX_BUFFER_SIZE value of vis". Then I had to modify wglMakeCurrent() to respect the create_glxpixmap() failure and return FALSE. It works well so far; the framerate in WoW went up from ~20 to up to 70 fps, and the average is somewhere between 40-50! This is incredible.
Good news. But try to make sure users aren't impacted too much (i.e. differences between the NVIDIA/ATI drivers).
> So, in this attachment you'll find a patch that does what I've just described. I can't test it on anything other than WoW, so it would be great if someone could review it and test it with other OpenGL/D3D applications.
Sorry, no time to test it :( but send it to wine-users to see.
> Also, it would be great if we could put the *Swap*Buffers() functions into their own log domain, something like 'swapbuffers', because their trace output is usually useless: it only helps when you explicitly want to see whether these functions are called; otherwise it just fills the log with garbage.
>
> thanks tom
Regards, Raphael
Tomas Carnecky wrote:
> Also, it would be great if we could put the *Swap*Buffers() functions into their own log domain, something like 'swapbuffers', because their trace output is usually useless: it only helps when you explicitly want to see whether these functions are called; otherwise it just fills the log with garbage.
Use grep -vE?
(Or completely remove it if it's useless; if someone needs to specifically debug *Swap*Buffers() they can add the trace lines themselves.)