On Sunday, 7 June 2009, at 19:44, Stefan Dösinger wrote:
On 07.06.2009 at 10:35, Henri Verbeet wrote:
2009/6/7 Frank Richter <frank.richter@gmail.com>:
As far as I could gather, DF16 is the "ATI way" of getting a renderable 16-bit depth texture.
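For reference, creating one would look roughly like this on the D3D9 side, assuming the driver exposes DF16 as a FourCC format (just a sketch; the helper name and the texture size are made up):

#include <d3d9.h>

#define D3DFMT_DF16 ((D3DFORMAT)MAKEFOURCC('D','F','1','6'))

/* Create a renderable depth texture via the DF16 FourCC format and bind
 * its level 0 surface as the depth stencil.  The texture can later be
 * sampled from a pixel shader like any other texture. */
static HRESULT create_df16_depth_texture(IDirect3DDevice9 *device,
        IDirect3DTexture9 **tex)
{
    IDirect3DSurface9 *surface;
    HRESULT hr;

    hr = IDirect3DDevice9_CreateTexture(device, 1024, 1024, 1,
            D3DUSAGE_DEPTHSTENCIL, D3DFMT_DF16, D3DPOOL_DEFAULT, tex, NULL);
    if (FAILED(hr)) return hr;

    IDirect3DTexture9_GetSurfaceLevel(*tex, 0, &surface);
    hr = IDirect3DDevice9_SetDepthStencilSurface(device, surface);
    IDirect3DSurface9_Release(surface);
    return hr;
}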
Without knowing much about the actual format, the name DF16 implies it should be a floating-point format, similar to the ones provided by ARB_depth_buffer_float. Also, could you please add this at the same location as the other depth formats?
I don't think it is a float format, in spite of the name. I don't understand exactly what it is, but it seems that ATI Windows drivers cannot use a regular D3DFMT_D16 or D24S8 as a texture. That means that if an app wants a depth texture, it has to StretchRect from the depth stencil to a D3DFMT_Lx texture.
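Something like the sketch below is what I mean by that fallback, assuming the driver actually accepts a StretchRect from the depth stencil into an L16 surface (untested, the helper is hypothetical):

#include <d3d9.h>

/* Hypothetical fallback: copy the currently bound depth stencil into an
 * L16 texture so the depth data can be sampled from a shader.  Whether
 * the driver allows this StretchRect is exactly the uncertain part. */
static HRESULT copy_depth_to_l16(IDirect3DDevice9 *device, UINT w, UINT h,
        IDirect3DTexture9 **tex)
{
    IDirect3DSurface9 *ds = NULL, *dst = NULL;
    HRESULT hr;

    hr = IDirect3DDevice9_CreateTexture(device, w, h, 1,
            D3DUSAGE_RENDERTARGET, D3DFMT_L16, D3DPOOL_DEFAULT, tex, NULL);
    if (FAILED(hr)) return hr;

    IDirect3DTexture9_GetSurfaceLevel(*tex, 0, &dst);
    IDirect3DDevice9_GetDepthStencilSurface(device, &ds);
    hr = IDirect3DDevice9_StretchRect(device, ds, NULL, dst, NULL, D3DTEXF_NONE);

    IDirect3DSurface9_Release(ds);
    IDirect3DSurface9_Release(dst);
    return hr;
}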
I think that when D16 is used in a shader it is supposed to behave like GL_ARB_shadow. Sometimes that is not flexible enough, and hearsay has it that it does not work on ATI cards.
My understanding is that the different formats work somewhat like this:
D3DFMT_D16 / D24X8, D24S8 (all? NV only?): GL_ARB_shadow / shadow2D() in GLSL
DF16, DF24 (ATI): like sampling DEPTH_COMPONENT24 formats with regular texture2D()
INTZ (NV): denormalized texture2D()
RAWZ (NV): comparable to INTZ, but needs some extra calculations. I guess we'll not implement this, only INTZ.
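On the GL side, the difference between the first two cases should boil down to the ARB_shadow texture compare mode, roughly like this (just a sketch to illustrate the mapping, not wined3d code):

#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_shadow enums */

/* Configure how an existing GL_DEPTH_COMPONENT texture is sampled. */
static void set_depth_sampling(GLuint depth_tex, int compare)
{
    glBindTexture(GL_TEXTURE_2D, depth_tex);
    if (compare)
    {
        /* D3DFMT_D16 / D24X8 / D24S8 style: shadow2D() returns the 0/1
         * result of comparing the lookup's R coordinate against the
         * stored depth value. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                GL_COMPARE_R_TO_TEXTURE_ARB);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
    }
    else
    {
        /* DF16 / DF24 style: a plain texture2D() returns the stored
         * depth value itself. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
    }
}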
If this is right, Frank's patch is correct content-wise. I think we'll need tests to verify that my understanding of the formats is accurate.
Hi,
I think you can test these formats with the Crysis demo: it first tries to use one of them, then falls back to a standard depth format.
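The detection such an app does presumably looks something like this (a guess at the pattern, not what Crysis actually does):

#include <d3d9.h>

#define FMT_DF16 ((D3DFORMAT)MAKEFOURCC('D','F','1','6'))
#define FMT_INTZ ((D3DFORMAT)MAKEFOURCC('I','N','T','Z'))

/* Probe for a samplable FourCC depth format; fall back to a standard one. */
static D3DFORMAT pick_depth_format(IDirect3D9 *d3d9)
{
    if (SUCCEEDED(IDirect3D9_CheckDeviceFormat(d3d9, D3DADAPTER_DEFAULT,
            D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8, D3DUSAGE_DEPTHSTENCIL,
            D3DRTYPE_TEXTURE, FMT_DF16)))
        return FMT_DF16;
    if (SUCCEEDED(IDirect3D9_CheckDeviceFormat(d3d9, D3DADAPTER_DEFAULT,
            D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8, D3DUSAGE_DEPTHSTENCIL,
            D3DRTYPE_TEXTURE, FMT_INTZ)))
        return FMT_INTZ;
    return D3DFMT_D16;
}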
Andras