http://bugs.winehq.org/show_bug.cgi?id=12816
Stefan Dösinger <stefandoesinger@gmx.at> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |stefandoesinger@gmx.at
--- Comment #17 from Stefan Dösinger <stefandoesinger@gmx.at> 2009-04-02 18:41:24 ---
Note that the rejected query for these formats doesn't necessarily cause the problem. These are vendor-specific formats (similar to the GL_NV_* and GL_ATI_* OpenGL extensions), so applications usually cope with them not being available (and probably run a bit slower, are less memory efficient, or cannot work around a driver bug that doesn't exist in the first place, ...).
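To illustrate the "applications cope with it" part: a D3D9 application typically probes for the vendor FOURCC with CheckDeviceFormat and silently falls back when it is rejected. A minimal sketch, not taken from any particular game; the fallback format choice is just an example:

#include <d3d9.h>

/* Sketch: probe for the vendor-specific ATI2/3Dc FOURCC and fall back to an
 * uncompressed format if the driver rejects it. */
static D3DFORMAT choose_normal_map_format(IDirect3D9 *d3d)
{
    const D3DFORMAT ati2 = (D3DFORMAT)MAKEFOURCC('A','T','I','2');

    if (SUCCEEDED(IDirect3D9_CheckDeviceFormat(d3d, D3DADAPTER_DEFAULT,
            D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8, 0, D3DRTYPE_TEXTURE, ati2)))
        return ati2;            /* 3Dc-compressed normal map */

    return D3DFMT_A8R8G8B8;     /* uncompressed fallback */
}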
ATI1: Don't know about that one.
ATI2: That one's equivalent to GL_ATI_texture_compression_3dc, and supported if the extension is available (should be gone from the debug complaints even on NV cards). It has been promoted into Direct3D 10 core, so Nvidia GeForce 8+ cards support it too, via GL_EXT_texture_compression_rgtc. This is also implemented in Wine.
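For reference, the ATI2 mapping described above boils down to something like this on the GL side. This is a sketch of the idea, not the actual wined3d code; the extension check is just a crude substring match and assumes a current GL context:

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

#ifndef GL_COMPRESSED_LUMINANCE_ALPHA_3DC_ATI
#define GL_COMPRESSED_LUMINANCE_ALPHA_3DC_ATI 0x8837
#endif

/* Crude extension check; assumes a current GL context. */
static int gl_extension_supported(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, name) != NULL;
}

/* Sketch: pick a GL internal format for the ATI2/3Dc FOURCC, preferring the
 * ATI extension and falling back to the RGTC format available on GeForce 8+. */
static GLenum ati2_internal_format(void)
{
    if (gl_extension_supported("GL_ATI_texture_compression_3dc"))
        return GL_COMPRESSED_LUMINANCE_ALPHA_3DC_ATI;
    if (gl_extension_supported("GL_EXT_texture_compression_rgtc"))
        return GL_COMPRESSED_RED_GREEN_RGTC2_EXT;
    return 0; /* neither extension: reject the D3D format */
}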
AL16: Probably GL_LUMINANCE_ALPHA with 16 bits per channel, but I'm not sure. Google doesn't find any description.
R16: Probably GL_RED16; Google is pretty mum on it.
RAWZ, INTZ: Those are ugly. They're described in the Nvidia GPU programming guide, and are a way to read depth values into a shader without copying them from a depth buffer to a color buffer first. INTZ sounds related to standard OpenGL depth textures, but I have to read more docs. Maybe INTZ matches DEPTH_COMPONENT24_ARB.
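The application side of the INTZ trick, as the Nvidia guide describes it, looks roughly like this. A sketch only, with error handling omitted; the render_* calls are hypothetical placeholders:

#include <d3d9.h>

/* Sketch: render depth into a FOURCC 'INTZ' texture, then bind the same
 * resource as a texture in a later pass so the shader can read the depth
 * values directly, without a depth-to-color copy in between. */
static void intz_depth_readback(IDirect3DDevice9 *device, UINT width, UINT height)
{
    const D3DFORMAT intz = (D3DFORMAT)MAKEFOURCC('I','N','T','Z');
    IDirect3DTexture9 *depth_tex;
    IDirect3DSurface9 *depth_surf, *old_ds = NULL;

    IDirect3DDevice9_CreateTexture(device, width, height, 1,
            D3DUSAGE_DEPTHSTENCIL, intz, D3DPOOL_DEFAULT, &depth_tex, NULL);
    IDirect3DTexture9_GetSurfaceLevel(depth_tex, 0, &depth_surf);

    /* Depth pass: the INTZ surface is the active depth buffer. */
    IDirect3DDevice9_GetDepthStencilSurface(device, &old_ds);
    IDirect3DDevice9_SetDepthStencilSurface(device, depth_surf);
    /* render_scene(device); */

    /* Later pass: sample the depth values through the texture. */
    IDirect3DDevice9_SetDepthStencilSurface(device, old_ds);
    IDirect3DDevice9_SetTexture(device, 0, (IDirect3DBaseTexture9 *)depth_tex);
    /* render_depth_aware_pass(device); */

    IDirect3DSurface9_Release(depth_surf);
    IDirect3DTexture9_Release(depth_tex);
    if (old_ds) IDirect3DSurface9_Release(old_ds);
}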
DF16, DF24: Similar to INTZ, just for ATI cards this time. Kinda like DEPTH_COMPONENT16_ARB / DEPTH_COMPONENT24_ARB.
Depth textures in D3D are a bit odd and need some more test cases. It seems that on ATI you need the vendor-specific DF16 / DF24 formats. On OpenGL, D3DFMT_D16 works as well, but sampling from it performs an implicit comparison against the incoming fragment's depth. Maybe DEPTH_COMPONENT16_ARB does the same; I have to look that up.
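For the GL side of that question: with GL_ARB_shadow the comparison on a DEPTH_COMPONENT texture is an opt-in texture parameter, so both raw depth reads and the implicit compare can be expressed. A sketch, assuming a GL_TEXTURE_2D with a DEPTH_COMPONENT16_ARB internal format is currently bound:

#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: toggle between comparison sampling (like the D3DFMT_D16 behaviour
 * described above) and raw depth reads on the currently bound depth texture. */
static void set_depth_texture_compare(int compare)
{
    if (compare) {
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                        GL_COMPARE_R_TO_TEXTURE_ARB);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
    } else {
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
    }
}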