On 11/11/19 21:09, Henri Verbeet wrote:
> On Mon, 11 Nov 2019 at 21:07, Paul Gofman <gofmanp@gmail.com> wrote:
>> On 11/11/19 20:12, Henri Verbeet wrote:
> Although looking a bit closer at the patch, it still seems a relatively fragile way to fix the issue. It works because texture_upload_data() uses e.g. GL_BGRA/GL_UNSIGNED_INT_8_8_8_8_REV for uploading WINED3DFMT_B8G8R8X8_UNORM data, but the caller isn't supposed to care about that, and a different backend (e.g. Vulkan) could do uploads in a different way.
I think the only reason this type of fixup is needed is that the underlying GL format we use for WINED3DFMT_B8G8R8X8_UNORM has an explicit alpha channel. So I suppose that if WINED3DFMT_B8G8R8X8_UNORM got an underlying format without alpha, this conversion would not break but rather become redundant, since the upload conversion would presumably write an alpha of 1.0 in that case. Should I maybe move this alpha fixup to wined3d_texture_gl_upload_data()?
> One more observation is that e.g. WINED3DFMT_B8G8R8X8_UNORM has an equivalent format with alpha, WINED3DFMT_B8G8R8A8_UNORM, which could be used for looking up the masks and byte counts.
But how do I link WINED3DFMT_B8G8R8X8_UNORM, WINED3DFMT_B5G5R5X1_UNORM or WINED3DFMT_B4G4R4X4_UNORM to their counterparts with alpha? Should I still use a table for that, just referencing the format with alpha instead of directly encoding the size and mask?