http://bugs.winehq.org/show_bug.cgi?id=11674
--- Comment #144 from Ema <ema.oriani@gmail.com> 2011-11-05 05:00:21 CDT ---
(In reply to comment #143)
> I would have grave suspicions about timing and synchronization errors, as
> the draw functions return much sooner than programs expect them to, and
> also return before anything is actually drawn...
I don't quite agree with this. I've used OpenGL for quite a while, and all commands are always to be considered deferred: once you call any API function, the command is handed to the driver, and unless you call one of:
glFlush  -> http://www.opengl.org/sdk/docs/man/xhtml/glFlush.xml
glFinish -> http://www.opengl.org/sdk/docs/man/xhtml/glFinish.xml
ARB sync extension -> http://www.opengl.org/registry/specs/ARB/sync.txt
you _never_ know when a command will be executed or completed by the host (the video card). For this reason, all OpenGL developers (please correct me if I'm wrong) must not assume anything about when their API calls actually take effect.
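For example, the ARB sync way of actually waiting for completion looks roughly like this (just a sketch, not anyone's real code; it assumes a current GL context and headers that expose the GL 3.2 / ARB_sync entry points, e.g. GL_GLEXT_PROTOTYPES with Mesa's glext.h, or a loader like GLEW):

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

void draw_and_wait(void)
{
    /* ... issue draw calls here; they are merely queued for the driver ... */

    /* Insert a fence after the queued commands. */
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

    /* Block (up to 1 second here) until the GPU has completed everything
     * issued before the fence; GL_SYNC_FLUSH_COMMANDS_BIT makes sure the
     * fence itself gets flushed to the hardware so we can't deadlock. */
    glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000);
    glDeleteSync(fence);
}

Without a wait like this (or a glFinish), returning from a draw call tells you nothing about what the GPU has actually done, which is exactly why returning "too soon" is not an error in itself.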
If AE discovered that there's a lot of context switching, and delegating the work to one thread offloads the other threads and means less locking, then this should be the way to go.
And please consider that both Direct3D and OpenGL are fundamentally single-threaded APIs, so if we have one OpenGL executor thread (per OpenGL context), things should definitely speed up, because this would free a lot of resources. And this is what happens in WoW (and AFAIK in other games too).
Basically, having a separate OpenGL renderer thread (per context) kills two birds with one stone (as long as we implement the three synchronization mechanisms above correctly, i.e. actually waiting); see the sketch after this list:
1) Less context switching (introduced by ENTER_GL/LEAVE_GL)
2) Offloading the caller thread from interacting with the OpenGL driver (nVidia/AMD), thus making the application more responsive.
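To make the renderer-thread idea concrete, here's a rough sketch in C with pthreads (all names are mine, this is not Wine's actual code): callers enqueue commands and return immediately, and one executor thread per context is the only thread that ever talks to the driver:

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

/* One pending GL command: a function pointer plus its argument. */
struct gl_cmd {
    void (*fn)(void *arg);
    void *arg;
    struct gl_cmd *next;
};

/* Per-context queue, drained by a single executor thread, so every GL
 * call for this context happens on one thread -- no per-call locking
 * in the style of ENTER_GL/LEAVE_GL. */
struct gl_queue {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    struct gl_cmd  *head, *tail;
    bool            quit;
};

static void *gl_executor(void *opaque)
{
    struct gl_queue *q = opaque;
    /* Hypothetical setup: bind the context to this thread exactly once,
     * e.g. glXMakeCurrent(dpy, drawable, ctx). */
    for (;;) {
        pthread_mutex_lock(&q->lock);
        while (!q->head && !q->quit)
            pthread_cond_wait(&q->nonempty, &q->lock);
        if (q->quit && !q->head) {
            pthread_mutex_unlock(&q->lock);
            return NULL;
        }
        struct gl_cmd *cmd = q->head;
        q->head = cmd->next;
        if (!q->head)
            q->tail = NULL;
        pthread_mutex_unlock(&q->lock);

        cmd->fn(cmd->arg);   /* the only thread that touches GL */
        free(cmd);
    }
}

/* Callers enqueue work and return immediately, mirroring the deferred
 * nature of GL itself. */
void gl_queue_submit(struct gl_queue *q, void (*fn)(void *), void *arg)
{
    struct gl_cmd *cmd = malloc(sizeof(*cmd));
    cmd->fn = fn;
    cmd->arg = arg;
    cmd->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = cmd; else q->head = cmd;
    q->tail = cmd;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

The genuinely blocking calls (glFinish, SwapBuffers, the fence waits above) would then be submitted as commands that signal the caller back once the executor has run them, which is where implementing the waiting correctly matters.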
Cheers
Some more references: http://stackoverflow.com/questions/2143240/opengl-glflush-vs-glfinish