On Friday, 25 November 2011 at 20:23:01, Henri Verbeet wrote:
I don't think this is correct. E.g. signed formats without GL_NV_texture_shader have load-time and read-time fixups, and both have to be applied.
That's silly. If that's really the case that should be fixed.
I don't see why it is silly. The D3D sysmem format -> GL texture format conversion and the GL texture format -> RGBA conversion are two separate things. What's unfortunate is that in the case of P8 both can do the same job, and the selection code is a mess.
For signed surfaces the upload conversion maps [-1.0, 1.0] to [0.0, 1.0] so the data can be loaded into an unsigned RGBA surface. When we read the surface in the shader we have to reverse that. You may be able to avoid the upload conversion with a trickier shader conversion, but that misses the point.
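The pair of fixups can be sketched like this (a minimal illustration of the range mapping described above, not actual wined3d code):

```c
/* Upload-time fixup: map a SNORM channel value in [-1.0, 1.0]
 * to [0.0, 1.0] so it can be stored in an unsigned surface. */
static float snorm_upload_fixup(float s)
{
    return s * 0.5f + 0.5f;
}

/* Read-time fixup: the shader reverses the upload mapping, i.e.
 * the equivalent of "value * 2.0 - 1.0" in the fragment shader. */
static float snorm_read_fixup(float u)
{
    return u * 2.0f - 1.0f;
}
```

Applying only one of the two fixups yields wrong values, which is why both have to be present when GL_NV_texture_shader isn't available.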
(And yeah, SNORM<->UNORM blits are broken for other reasons.)
P8 -> RGBA.
I'd say always store the index in the alpha component. We can do that because P8 textures are disabled, so there's no other use for the alpha value. Also, the primary_render_target_is_p8 check is stupid.
This way the additional shader conversion is redundant, but produces the correct result. Not perfect, but it works for Wine 1.4. After Wine 1.4 the code should be changed to use the load-time P8 conversion only if shader conversion isn't available, and only for the final blit to the screen (and software blits otherwise).
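The post-1.4 selection rule proposed above could look roughly like this (the enum and function names are hypothetical, not actual wined3d identifiers):

```c
/* Hypothetical sketch of the proposed P8 conversion selection. */
enum p8_conversion
{
    P8_CONV_SHADER,     /* palette lookup done at read time in the shader */
    P8_CONV_LOAD_TIME,  /* palette applied once, when uploading the surface */
    P8_CONV_SOFTWARE,   /* plain software blit, no GL conversion */
};

static enum p8_conversion select_p8_conversion(int shader_conv_available,
                                               int is_final_screen_blit)
{
    /* Prefer the shader conversion whenever it's available. */
    if (shader_conv_available)
        return P8_CONV_SHADER;
    /* Without shader conversion, use the load-time conversion only for
     * the final blit to the screen; everything else stays in software. */
    return is_final_screen_blit ? P8_CONV_LOAD_TIME : P8_CONV_SOFTWARE;
}
```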
Also, do you have a game that needs P8->RGBA blits? In my testing with ddraw, P8->RGBA blits don't do what you'd expect: they ignore the palette and replicate the index to all channels (i.e. 0xa4 -> 0xa4a4a4a4).
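The observed behaviour amounts to broadcasting the index byte into every channel of the 32-bit pixel, something like this (a sketch of the behaviour seen in testing, not driver code):

```c
/* Replicate an 8-bit palette index into all four channels of a
 * 32-bit pixel, as observed for ddraw P8->RGBA blits (the palette
 * itself is ignored). */
static unsigned int p8_blit_observed(unsigned char index)
{
    return ((unsigned int)index << 24) |
           ((unsigned int)index << 16) |
           ((unsigned int)index << 8)  |
            (unsigned int)index;
}
```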