2009/6/7 Frank Richter frank.richter@gmail.com:
As far as I could gather DF16 is the "ATI way" of getting a renderable 16 bit depth texture.
Without knowing much about the actual format, DF16 implies this should be a floating point format, similar to the ones provided by ARB_depth_buffer_float. Also, could you please add this at the same location as the other depth formats?
On 07.06.2009 19:35, Henri Verbeet wrote:
2009/6/7 Frank Richter frank.richter@gmail.com:
As far as I could gather DF16 is the "ATI way" of getting a renderable 16 bit depth texture.
Without knowing much about the actual format, DF16 implies this should be a floating point format, similar to the ones provided by ARB_depth_buffer_float.
Maybe... but it seems that format is solely intended for use on render target textures, not any download or upload... so not sure if the data type would matter in practice. Also, I didn't find a 16-bit float depth texture format for OpenGL so far.
Also, could you please add this at the same location as the other depth formats?
Well I added it to the vendor-specific formats because it _is_ ATI-specific...
-f.r.
2009/6/7 Frank Richter frank.richter@gmail.com:
On 07.06.2009 19:35, Henri Verbeet wrote:
2009/6/7 Frank Richter frank.richter@gmail.com:
As far as I could gather DF16 is the "ATI way" of getting a renderable 16 bit depth texture.
Without knowing much about the actual format, DF16 implies this should be a floating point format, similar to the ones provided by ARB_depth_buffer_float.
Maybe... but it seems that format is solely intended for use on render target textures, not any download or upload... so not sure if the data type would matter in practice. Also, I didn't find a 16-bit float depth texture format for OpenGL so far.
Even if the format isn't lockable, you can still use the data with a shader. Typically float formats aren't normalized, so you can have values outside the traditional [0.0,1.0] range. If there's no specific extension for this format you could use GL_DEPTH_COMPONENT32F as internal format and GL_HALF_FLOAT_ARB as type, although that would waste some memory, of course. Is there any specific application that needs this format?
On 07.06.2009 22:22, Henri Verbeet wrote:
Even if the format isn't lockable, you can still use the data with a shader.
If it's a typical depth format the shader will see normalized values.
Information on DF16 seems to be sparse; one thing I found was: http://discussms.hosting.lsoft.com/SCRIPTS/WA-MSD.EXE?A2=ind0611A&L=DIRE... It says that DF16 returns "real" depth values, which would indeed be normalized.
Typically float formats aren't normalized, so you can have values outside the traditional [0.0,1.0] range. If there's no specific extension for this format you could use GL_DEPTH_COMPONENT32F as internal format and GL_HALF_FLOAT_ARB as type, although that would waste some memory, of course.
On what graphics cards is that extension supported? DF16 is supported since the R300. It appears that float depth formats are much younger, so it seems unlikely DF16 is actually a float format internally. Does D3D(9) allow a depth range outside [0.0,1.0] anyway?
Is there any specific application that needs this format?
Some ATI graphics demos (e.g. Toy Shop). Probably at least some games that use shadow mapping.
-f.r.
On Sunday 07 June 2009 2:05:41 pm Frank Richter wrote:
On what graphics cards is that extension supported? DF16 is supported since the R300. It appears that float depth formats are much younger, so it seems unlikely DF16 is actually a float format internally.
Just to note, there is a much earlier depth-float format in GL, before the ARB or NV versions came out: http://www.opengl.org/registry/specs/EXT/wgl_depth_float.txt
It seems its main purpose is increased precision over the non-linear range of the buffer values. Unfortunately, Wine doesn't implement this extension, as there is no GLX equivalent.
On Sunday, 7 June 2009 at 23:05, Frank Richter wrote:
On 07.06.2009 22:22, Henri Verbeet wrote:
Even if the format isn't lockable, you can still use the data with a shader.
If it's a typical depth format the shader will see normalized values.
Information on DF16 seems to be sparse; one thing I found was: http://discussms.hosting.lsoft.com/SCRIPTS/WA-MSD.EXE?A2=ind0611A&L=DIRE... It says that DF16 returns "real" depth values, which would indeed be normalized.
Typically float formats aren't normalized, so you can have values outside the traditional [0.0,1.0] range. If there's no specific extension for this format you could use GL_DEPTH_COMPONENT32F as internal format and GL_HALF_FLOAT_ARB as type, although that would waste some memory, of course.
On what graphics cards is that extension supported? DF16 is supported since the R300. It appears that float depth formats are much younger, so it seems unlikely DF16 is actually a float format internally. Does D3D(9) allow a depth range outside [0.0,1.0] anyway?
Is there any specific application that needs this format?
GTA4, for example. (It checks only the DF16, DF24, INTZ and RAWZ formats, and because there is no support, it exits with an error message.)
Some ATI graphics demos (e.g. Toy Shop). Probably at least some games that use shadow mapping.
-f.r.
2009/6/7 Frank Richter frank.richter@gmail.com:
On 07.06.2009 22:22, Henri Verbeet wrote:
Even if the format isn't lockable, you can still use the data with a shader.
If it's a typical depth format the shader will see normalized values.
Yes, but floating point formats aren't "typical".
Information on DF16 seems to be sparse; one thing I found was: http://discussms.hosting.lsoft.com/SCRIPTS/WA-MSD.EXE?A2=ind0611A&L=DIRE... It says that DF16 returns "real" depth values, which would indeed be normalized.
Ok, I guess the format could just be named badly, although Chris makes a good point about precision, even if the values are normalized.
Some ATI graphics demos (e.g. Toy Shop).
If those demos have source, it might be useful to look at that for hints. At the very least this needs tests to determine whether the values are normalized or not.
On 07.06.2009 at 23:34, Henri Verbeet wrote:
Ok, I guess the format could just be named badly, although Chris makes a good point about precision, even if the values are normalized.
There's D3DFMT_D24FS8 among the standard d3d formats, so I guess if they just wanted a float format there would have been no point in adding their own.
Still needs testing, though.
On 07.06.2009 at 10:35, Henri Verbeet wrote:
2009/6/7 Frank Richter frank.richter@gmail.com:
As far as I could gather DF16 is the "ATI way" of getting a renderable 16 bit depth texture.
Without knowing much about the actual format, DF16 implies this should be a floating point format, similar to the ones provided by ARB_depth_buffer_float. Also, could you please add this at the same location as the other depth formats?
I don't think it is a float format, in spite of the name. I don't understand what exactly it is, but it seems that ATI Windows drivers cannot use regular D3DFMT_D16 or D24S8 as a texture. That means that if an app wants a depth texture it has to StretchRect from the depth stencil to a D3DFMT_Lx texture.
I think when D16 is used in a shader it is supposed to behave like GL_ARB_shadow. Sometimes this is not flexible enough, and hearsay says it does not work on ATI cards.
My understanding is that the different formats work somewhat like this:
D3DFMT_D16 / D24X8, D24S8 (all? nv only?): GL_ARB_shadow / shadow2D() in GLSL
DF16, DF24 (ati): Like sampling DEPTH_COMPONENT24 formats with regular texture2D
INTZ (nv): denormalized texture2D()
RAWZ (nv): Comparable to INTZ, but needs some extra calculations. I guess we'll not implement this, only INTZ.
If this is correct, Frank's patch is correct content-wise. I think we'll need tests to see whether my understanding of the formats is correct.
On Sunday, 7 June 2009 at 19:44, Stefan Dösinger wrote:
On 07.06.2009 at 10:35, Henri Verbeet wrote:
2009/6/7 Frank Richter frank.richter@gmail.com:
As far as I could gather DF16 is the "ATI way" of getting a renderable 16 bit depth texture.
Without knowing much about the actual format, DF16 implies this should be a floating point format, similar to the ones provided by ARB_depth_buffer_float. Also, could you please add this at the same location as the other depth formats?
I don't think it is a float format, in spite of the name. I don't understand what exactly it is, but it seems that ATI Windows drivers cannot use regular D3DFMT_D16 or D24S8 as a texture. That means that if an app wants a depth texture it has to StretchRect from the depth stencil to a D3DFMT_Lx texture.
I think when D16 is used in a shader it is supposed to behave like GL_ARB_shadow. Sometimes this is not flexible enough, and hearsay says it does not work on ATI cards.
My understanding is that the different formats work somewhat like this:
D3DFMT_D16 / D24X8, D24S8 (all? nv only?): GL_ARB_shadow / shadow2D() in GLSL
DF16, DF24 (ati): Like sampling DEPTH_COMPONENT24 formats with regular texture2D
INTZ (nv): denormalized texture2D()
RAWZ (nv): Comparable to INTZ, but needs some extra calculations. I guess we'll not implement this, only INTZ.
If this is correct, Frank's patch is correct content-wise. I think we'll need tests to see whether my understanding of the formats is correct.
Hi,
I think you can test these formats with the Crysis demo: first it tries to use one of them, then it falls back to a standard depth format.
Andras