Hello,
I have written a small Direct3D7 test application which draws two overlapping triangles, a red one and a blue one. The red triangle has the z value 0.0 (closest to the viewer), the blue one 1.0 (far away). Strangely, the blue triangle overlaps the red one on Windows.
The same happens on the current ddraw implementation, but in WineD3D the red triangle overlaps the blue one, which is mathematically correct. I've studied the ddraw and wined3d drawing code over and over, but I couldn't find any difference. It seems that on Windows, the triangle that is drawn last overlaps the other one in the output, and the z value is ignored completely. Can someone explain to me what's going on here?
The reason I'm looking into this is the Direct3D mode of Half-Life 1. I've got it working with WineD3D after a long struggle with IDirect3DVertexBuffer::ProcessVertices, except for the HUD, which is hidden. If I force the z value into the [0.0, 1.0] range, it is shown, but the rest is obviously broken.
How to run my test program: I've attached a source file and a Makefile which builds a .exe with mingw, and a precompiled .exe file. When you run it, it shows a black screen, and if you click anywhere, it draws the triangles. I have some bug in the flipping code, so you actually have to click twice. Pressing Esc exits.
Thanks, Stefan
PS: If anyone needs the d3d7->wined3d code to look at, I can send patches. But it's quite messy now ;)
In OpenGL, when you specify depth coordinates with glOrtho, they're relative to the 'eye' which is pointing down the z-axis. So -30 actually evaluates to +30 in absolute coordinates. Maybe something similar is happening here? What happens when you draw a third triangle with z=0.5? What if you make the red triangle z=0.1 and the blue one z=1.0? (it could be a corner case for when z=0)
Stefan Dösinger wrote:
If I force the z value in the [0.0, 1.0] range, it is shown, but the rest is broken obviously.
Err reading your e-mail again, pardon my directx ignorance, but aren't all d3d z values supposed to be in the [0.0, 1.0] range? I thought that was the range for the z buffer. I recall reading that one of the main annoyance differences between OpenGL and D3D was OpenGL used [-1.0, 1.0] for depth and D3D used [0.0, 1.0].
Hi,
Err reading your e-mail again, pardon my directx ignorance, but aren't all d3d z values supposed to be in the [0.0, 1.0] range? I thought that was the range for the z buffer. I recall reading that one of the main annoyance differences between OpenGL and D3D was OpenGL used [-1.0, 1.0] for depth and D3D used [0.0, 1.0].
You could be right, I have to do some more checks. But the Half-Life problem could be caused by a bug in my ProcessVertices implementation. I interpreted the results I got from Windows and blindly implemented the viewport conversion, and it seems that there are conflicts between my conversion and the GL viewport setup.
However, the triangle problem remains. I have changed the code to use z=0.1 for the red triangle and z=0.9 for the blue one, and the blue triangle still overlaps the red one on Windows. Whatever Z values I use, the result stays the same.
Err reading your e-mail again, pardon my directx ignorance, but aren't all d3d z values supposed to be in the [0.0, 1.0] range? I thought that was the range for the z buffer. I recall reading that one of the main annoyance differences between OpenGL and D3D was OpenGL used [-1.0, 1.0] for depth and D3D used [0.0, 1.0].
I've found the reason for the problems in Half-Life: HL draws with 2 different viewports, one with z in [0.0, 1.0], and another one with z in [0.0, 5.33333...]. WineD3D calls glOrtho with the current viewport on the first draw with processed vertices, then it records that the viewport is set and avoids further glOrtho calls. ProcessVertices always uses the current viewport. So vertices processed with the [0.0, 5.33333...] viewport were drawn into the [0.0, 1.0] viewport. Re-issuing the glOrtho call after a viewport change fixed the problem.
The HL Direct3D engine is now working fine :). I'll get a few more games running, then I can start sending patches :) :) (Well, two bugs remain: decals are flickering, and HL changes the screen resolution back to 640x480 for some reason. And it's quite slow.)
Stefan Dösinger wrote:
Err reading your e-mail again, pardon my directx ignorance, but aren't all d3d z values supposed to be in the [0.0, 1.0] range? I thought that was the range for the z buffer. I recall reading that one of the main annoyance differences between OpenGL and D3D was OpenGL used [-1.0, 1.0] for depth and D3D used [0.0, 1.0].
I've found the reason for the problems in Half-Life: HL draws with 2 different viewports, one with z in [0.0, 1.0], and another one with z in [0.0, 5.33333...].
2 different viewports, or just two different projection matrix settings? The way 2D overlays are handled in OpenGL, from my (very limited) experience, is that you set up your normal gluPerspective, and then once you've drawn everything in the world, you switch to a gluOrtho2D and do whatever 2D drawing you want on top. GL viewports just specify what pixels in the window OpenGL has control over... if they use two whole different viewports instead of just changing projection settings, that either strikes me as really dumb or they're doing some black magic I don't know about ;)
Unless the viewport and the projection matrix are tied together in DirectX? If so, there's a possible optimization: have the DirectX viewport function check whether everything that would be passed to glViewport is the same and, if so, only make the appropriate glOrtho/gluPerspective call. I'm not sure how expensive a glViewport call is.
Hi,
2 different viewports, or just two different projection matrix settings? The way 2D overlays are handled in OpenGL, from my (very limited) experience, is that you set up your normal gluPerspective, and then once you've drawn everything in the world, you switch to a gluOrtho2D and do whatever 2D drawing you want on top. GL viewports just specify what pixels in the window OpenGL has control over... if they use two whole different viewports instead of just changing projection settings, that either strikes me as really dumb or they're doing some black magic I don't know about
It's 2 different viewports, and constantly changing world and view matrices. The projection matrix is the identity matrix every time.
The view, projection and world matrices for GL drawing are identity matrices in this case, because processed vertices are drawn (they are already in viewport coordinates). WineD3D checks whether a glOrtho call is necessary.
I don't know much about GL drawing, but I noticed that GL clips primitive parts that are outside the Z range. So drawing vertices that were processed into a [0, 5.3333] Z range into a [0.0, 1.0] viewport leads to missing geometry. I guess I have to attend a few more courses at university to really understand what exactly is going on.
The relevant code is at dlls/wined3d/drawprim.c, line 186, vtx_transformed is true, useVS and vtx_lit are false.
Stefan Dösinger <stefandoesinger <at> gmx.at> writes:
I don't know much about GL drawing, but I noticed that GL clips primitive parts that are outside the Z range. So drawing vertices that were processed into a [0, 5.3333] Z range into a [0.0, 1.0] viewport leads to missing geometry. I guess I have to attend a few more courses at university to really understand what exactly is going on.
There are two components to Z buffering: the first is the Z range and the second is the near and far planes. These work together to determine whether a vertex is clipped (based on the near/far planes) and what Z value it will take.
In OpenGL, the vertices are first clipped against the near/far planes (as specified by glFrustum), then modified for perspective. The near plane is mapped to z=-1 and the far plane to z=1. All non-clipped vertices are now in [-1,1] (called normalized device coordinates). Now window (i.e. viewport) depths are calculated based on the Z depth range as specified by glDepthRange(r_near, r_far). Window depth coords lie in [0,1], so a linear transform is applied to get from [-1,1] => [r_near,r_far].
The relevant code is at dlls/wined3d/drawprim.c, line 186, vtx_transformed is true, useVS and vtx_lit are false.
Seems to me that the call to glOrtho should be replaced by a call to glViewport(x,y,width,height) and glDepthRange(near,far). Since your vertices are already in viewport coordinates, according to the comment in the code, how does something like the following work for you:
{
    .....
    /* reset projection matrix before modelview since the glTranslate below
       should really be applied to the modelview matrix */
    glMatrixMode(GL_PROJECTION);
    checkGLcall("glMatrixMode(GL_PROJECTION)");
    glLoadIdentity();
    checkGLcall("glLoadIdentity");

    glMatrixMode(GL_MODELVIEW);
    checkGLcall("glMatrixMode(GL_MODELVIEW)");
    glLoadIdentity();
    checkGLcall("glLoadIdentity");

    /* Set up the viewport to be full viewport */
    X      = This->stateBlock->viewport.X;
    Y      = This->stateBlock->viewport.Y;
    height = This->stateBlock->viewport.Height;
    width  = This->stateBlock->viewport.Width;
    minZ   = This->stateBlock->viewport.MinZ;
    maxZ   = This->stateBlock->viewport.MaxZ;
    glViewport(X, Y, width, height);
    checkGLcall("glViewport");

    /* depth ranges will be clamped to [0, 1] */
    glDepthRange(minZ, maxZ);
    checkGLcall("glDepthRange");

    /* Window Coord 0 is the middle of the first pixel, so translate by half
       a pixel (See comment above glTranslate below) */
    glTranslatef(0.5, 0.5, 0);
    checkGLcall("glTranslatef(0.5, 0.5, 0)");
    if (This->renderUpsideDown) {
        glMultMatrixf(invymat);
        checkGLcall("glMultMatrixf(invymat)");
    }
    .....
}
Regards, Aric
Seems to me that the call to glOrtho should be replaced by a call to glViewport(x,y,width,height) and glDepthRange(near,far). Since your vertices are already in viewport coordinates, according to the comment in the code, how does something like the following work for you:
glViewport(X, Y, width, height);
checkGLcall("glViewport");
/* depth ranges will be clamped to [0, 1] */
glDepthRange(minZ, maxZ);
checkGLcall("glDepthRange");
That code breaks Half-Life. The HL console is only a dark brown rectangle in the top right quarter, and the in-game graphics aren't drawn.
What's the difference between glOrtho and glViewport/glDepthRange? I expected the code to be equivalent to the glOrtho call.
Stefan Dösinger <stefandoesinger <at> gmx.at> writes:
Seems to me that the call to glOrtho should be replaced by a call to glViewport(x,y,width,height) and glDepthRange(near,far). Since your vertices are already in viewport coordinates, according to the comment in the code, how does something like the following work for you:
glViewport(X, Y, width, height);
checkGLcall("glViewport");
/* depth ranges will be clamped to [0, 1] */
glDepthRange(minZ, maxZ);
checkGLcall("glDepthRange");
That code breaks Half-Life. The HL console is only a dark brown rectangle in the top right quarter, and the in-game graphics aren't drawn.
What about a combination of this plus the original glOrtho() call? It is possible that no projection matrix was set up at all, I guess.
What's the difference between glOrtho and glViewport/glDepthRange? I expected the code to be equivalent to the glOrtho call.
glOrtho multiplies the current matrix (usually GL_PROJECTION) by a matrix that defines a transformation from eye to clip space. Clip space is a cube with each dimension in the range [-w,w] centered on the origin, which turns out to be convenient for clipping (w is the clip coordinate's w value). So all vertices are multiplied by the projection matrix (as generated by glOrtho or glFrustum), then they are clipped, and then divided by w to get "normalized device coordinates", where each axis is bounded on [-1,1]. You can then imagine the vertices being projected onto the near clip plane (any plane parallel to the near or far clip plane would do just as well, and would be identical). The projected points on this plane are then mapped into a window whose size and position are defined by glViewport. In effect, the bounded projection plane is stretched to fit into the viewport.
So to summarize, glOrtho and glViewport/glDepthRange are very different :)
I recommend you check out the Red Book, as it describes the OpenGL pipeline quite well and simply... older versions are available online as well I believe.
Also more info at http://www.opengl.org/resources/faq/technical/transformations.htm (9.011)
- Aric
I recommend you check out the Red Book, as it describes the OpenGL pipeline quite well and simply... older versions are available online as well I believe.
Yes, I found it. Looks like a good information source. Thanks for the hint :)