On Thursday, 24 November 2011 at 22:52:32, Henri Verbeet wrote:
- if (surface->flags & SFLAG_CONVERTED)
- {
-     ENTER_GL();
-     glEnable(textype);
-     checkGLcall("glEnable(textype)");
-     LEAVE_GL();
-     return WINED3D_OK;
- }
I don't think this is correct. E.g. signed formats without GL_NV_texture_shader have load-time and read-time fixups, and both have to be applied.
What exactly are you trying to fix? I assume it is something about P8 blits, but P8->P8 blits don't enter the ARB blitting code (due to the dest fixup check), and I don't know of any app that uses P8->RGB or SNORM->UNORM blits.
Stefan
On 25 November 2011 19:50, Stefan Dösinger <stefandoesinger@gmx.at> wrote:
On Thursday, 24 November 2011 at 22:52:32, Henri Verbeet wrote:
- if (surface->flags & SFLAG_CONVERTED)
- {
-     ENTER_GL();
-     glEnable(textype);
-     checkGLcall("glEnable(textype)");
-     LEAVE_GL();
-     return WINED3D_OK;
- }
I don't think this is correct. E.g. signed formats without GL_NV_texture_shader have load-time and read-time fixups, and both have to be applied.
That's silly. If that's really the case, that should be fixed. Note that this case would already be broken with the current code though; arbfp_blit_set() only handles P8 and the various YUV fixups. More generally, I have some doubts about whether the way converted surfaces currently work is really what we want.
What exactly are you trying to fix? I assume it is something about P8 blits, but P8->P8 blits don't enter the ARB blitting code (due to the dest fixup check), and I don't know of any app that uses P8->RGB or SNORM->UNORM blits.
P8 -> RGBA.
On Friday, 25 November 2011 at 20:23:01, Henri Verbeet wrote:
I don't think this is correct. E.g. signed formats without GL_NV_texture_shader have load-time and read-time fixups, and both have to be applied.
That's silly. If that's really the case, that should be fixed.
I don't see why it is silly. d3d sysmem format -> gl texture format and gl texture format -> rgba conversions are two separate things. What's unfortunate is that in the case of P8 both can do the same job and the selection code is a mess.
For signed surfaces the upload conversion maps [-1.0;1.0] to [0.0;1.0] to load it into an unsigned rgba surface. When we read the surface in the shader we have to reverse that. You may be able to avoid the upload conversion with a trickier shader conversion, but that misses the point.
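In code, the two halves would look roughly like this (a sketch with illustrative names, not the actual wined3d conversion functions):

#include <stdint.h>
#include <stddef.h>

/* Load-time fixup (sketch): bias signed 8-bit data into the unsigned range
 * so it can live in a plain GL_RGBA8 texture. */
static void upload_snorm8_as_unorm8(const int8_t *src, uint8_t *dst, size_t count)
{
    size_t i;
    for (i = 0; i < count; ++i)
        dst[i] = (uint8_t)(src[i] + 128); /* -128..127 -> 0..255 */
}

/* Sample-time fixup: the shader has to undo the bias when reading, e.g. in
 * GLSL terms: vec4 v = texture2D(tex, coord) * 2.0 - 1.0;
 * Skip either half and the values come out wrong, which is why both fixups
 * have to be applied. */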
(And yeah, SNORM<->UNORM blits are broken for other reasons.)
P8 -> RGBA.
I'd say always store the index in the alpha component. We can do that since P8 textures are disabled; there's no other use for the alpha value. Also, the primary_render_target_is_p8 check is stupid.
This way the additional shader conversion is redundant, but produces the correct result. Not perfect, but it works for Wine 1.4. After Wine 1.4 the code should be changed to use the load-time P8 conversion only if shader conversion isn't available, and only for the final blit to the screen (and software blits otherwise).
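The shader side would be something like this (a sketch; the real program needs a half-texel correction on the palette lookup, and this is not the exact program arbfp_blit generates):

/* ARBfp blit program (sketch): texture 0 is the P8 surface with the index
 * stored in alpha, texture 1 is a 256x1 palette texture. */
static const char p8_palette_blit[] =
    "!!ARBfp1.0\n"
    "TEMP index;\n"
    "TEX index, fragment.texcoord[0], texture[0], 2D;\n"
    /* Use the index from the alpha channel as the palette coordinate. */
    "TEX result.color, index.a, texture[1], 1D;\n"
    "END\n";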
Also, do you have a game that needs P8->RGBA blits? In my testing with ddraw, P8->RGBA blits don't do what you expect: they ignore the palette and replicate the index to all channels (i.e. 0xa4 -> 0xa4a4a4a4).
On 25 November 2011 21:07, Stefan Dösinger <stefandoesinger@gmx.at> wrote:
On Friday, 25 November 2011 at 20:23:01, Henri Verbeet wrote:
That's silly. If that's really the case, that should be fixed.
I don't see why it is silly. d3d sysmem format -> gl texture format and gl texture format -> rgba conversions are two separate things. What's unfortunate is that in the case of P8 both can do the same job and the selection code is a mess.
For signed surfaces the upload conversion maps [-1.0;1.0] to [0.0;1.0] to load it into an unsigned rgba surface. When we read the surface in the shader we have to reverse that. You may be able to avoid the upload conversion with a trickier shader conversion, but that misses the point.
Maybe there's a legitimate case for signed formats and it's just P8 that's broken. I'm not entirely sure that we shouldn't just use either EXT_texture_snorm or the NV_texture_shader formats and mark the formats as unsupported otherwise though.
Also, do you have a game that needs P8->RGBA blits? In my testing with ddraw,
Not yet. I came across this while investigating how we should handle P8 primary surfaces for ddraw. You'd have an RGBA8 frontbuffer in wined3d and a P8 surface in ddraw.
On Sunday, 27 November 2011 at 17:02:01, Henri Verbeet wrote:
I'm not entirely sure that we shouldn't just use either EXT_texture_snorm or the NV_texture_shader formats and mark the formats as unsupported otherwise though.
That's not an option. My iMac doesn't support either extension (r500, up to date Lion) and EXT_texture_snorm doesn't support WINED3DFMT_R5G5_SNORM_L6_UNORM.
Not yet. I came across this while investigating how we should handle P8 primary surfaces for ddraw. You'd have an RGBA8 frontbuffer in wined3d and a P8 surface in ddraw.
This would eliminate the need to have P8 surfaces that are ALPHA8 in the gl texture and RGBA8 onscreen. However, what implications does that have for the ddraw shadow frontbuffer in the long run?
I'm asking because of overlays and, to some extent, the software d3d8/d3d9 cursor. For those I need a shadow frontbuffer in wined3d. If ddraw keeps using one as well we might end up with a pretty long blitting chain and redundant code.
To fix some issues with overlays (tearing, moving the overlay) I have to composite the primary and overlay in the wgl backbuffer and then call SwapBuffers to have a vsynced present. (This needs some more tests, especially the vsync timing.)
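Roughly like this (a sketch; draw_textured_quad() is a made-up helper, not an existing wined3d function):

#include <windows.h>
#include <GL/gl.h>

/* Draw 'tex' into 'rect' in the backbuffer; NULL means the whole buffer. */
static void draw_textured_quad(GLuint tex, const RECT *rect);

/* Composite the primary and the overlay into the wgl backbuffer, then let
 * SwapBuffers() provide the vsynced flip. */
static void present_with_overlay(HDC dc, GLuint primary, GLuint overlay,
        const RECT *overlay_rect)
{
    draw_textured_quad(primary, NULL);          /* primary fills the backbuffer */
    draw_textured_quad(overlay, overlay_rect);  /* overlay composited on top */
    SwapBuffers(dc);                            /* vsynced present */
}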
The d3d8/9 software cursor behaves pretty similarly to a ddraw overlay. It can be moved without a present call and doesn't leave behind an old cursor image. It doesn't show up in the backbuffer after a D3DSWAPEFFECT_FLIP blit. It does however show up in GetFrontBufferData. Printscreen copies the backbuffer contents to the clipboard, so I can't test whether the SW cursor shows up in a regular screenshot.
(Note that ddraw overlays are currently broken because ddraw's UpdateOverlay uses the wrong wined3d surface. That is not hard to fix, but even then no app would use overlays because our caps are broken)
On 2011-11-27 21:28, Stefan Dösinger wrote:
On Sunday, 27 November 2011 at 17:02:01, Henri Verbeet wrote:
I'm not entirely sure that we shouldn't just use either EXT_texture_snorm or the NV_texture_shader formats and mark the formats as unsupported otherwise though.
That's not an option. My iMac doesn't support either extension (r500, up to
I must say that as a Wine developer I care only marginally about running on closed platforms. For CrossOver, OS X is more important of course, but I'm sure we could add another hack there. I don't think OS X support should be a deciding factor here.
date Lion) and EXT_texture_snorm doesn't support WINED3DFMT_R5G5_SNORM_L6_UNORM.
You'd load it into RGB8_SNORM. That means you have to do load time conversion, but we do that for DSDT8_MAG8_NV as well.
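I.e. something along these lines (a sketch from memory; the bit layout is the D3DFMT_L6V5U5 one, u in the low 5 bits, v next, l in the top 6, and the exact scaling is illustrative):

#include <stdint.h>
#include <stddef.h>

static void convert_l6v5u5_to_rgb8_snorm(const uint16_t *src, int8_t *dst,
        size_t count)
{
    size_t i;
    for (i = 0; i < count; ++i)
    {
        int8_t u = (int8_t)(src[i] << 3) >> 3;          /* sign-extend 5 bits */
        int8_t v = (int8_t)((src[i] >> 5) << 3) >> 3;
        uint8_t l = src[i] >> 10;                       /* 6-bit unorm */

        if (u < -15) u = -15; /* -16 and -15 both mean -1.0 */
        if (v < -15) v = -15;
        dst[3 * i + 0] = (int8_t)(u * 127 / 15);        /* 5-bit -> 8-bit snorm */
        dst[3 * i + 1] = (int8_t)(v * 127 / 15);
        dst[3 * i + 2] = (int8_t)(l * 127 / 63);        /* unorm l, positive range */
    }
}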
Not yet. I came across this while investigating how we should handle P8 primary surfaces for ddraw. You'd have an RGBA8 frontbuffer in wined3d and a P8 surface in ddraw.
This would eliminate the need to have P8 surfaces that are ALPHA8 in the gl texture and RGBA8 onscreen. However, what implications does that have for the ddraw shadow frontbuffer in the long run?
Not a whole lot. Note that I was only investigating this a bit, I don't have any actual patches.
I'm asking because of overlays and, to some extent, the software d3d8/d3d9 cursor. For those I need a shadow frontbuffer in wined3d. If ddraw keeps using one as well we might end up with a pretty long blitting chain and redundant code.
To fix some issues with overlays (tearing, moving the overlay) I have to composite the primary and overlay in the wgl backbuffer and then call SwapBuffers to have a vsynced present. (This needs some more tests, especially the vsync timing.)
That's a general issue with Flip() / Present(). The way it currently works, we render ddraw backbuffers offscreen and blit them directly to the frontbuffer. However, this means we don't get vsync. To make that work we'd probably need to blit to the backbuffer first and depend on SwapBuffers() to sort out the vsync. In that regard it doesn't really matter if that code lives in ddraw or wined3d, or what format the various surfaces have.
On Sunday, 27 November 2011 at 23:13:31, Henri Verbeet wrote:
I must say that as a Wine developer I care only marginally about running on closed platforms. For CrossOver, OS X is more important of course, but I'm sure we could add another hack there. I don't think OS X support should be a deciding factor here.
I think this would be disrespectful of the OS X users Wine has. Signed textures aren't a minor feature; most games that use shaders need them. This also affects all Macs, since no GPU/OS X version combination implements EXT_texture_snorm. Nvidia Macs also dropped support for NV_texture_shader (and when Tiger supported it, it didn't work). Signed texture emulation isn't hard to do; all we need is proper separation of load-time and sample-time fixups.
On 28 November 2011 12:36, Stefan Dösinger <stefandoesinger@gmx.at> wrote:
On Sunday, 27 November 2011 at 23:13:31, Henri Verbeet wrote:
I must say that as a Wine developer I care only marginally about running on closed platforms. For CrossOver, OS X is more important of course, but I'm sure we could add another hack there. I don't think OS X support should be a deciding factor here.
I think this would be disrespectful of the OS X users Wine has.
My point was pretty much that I care only marginally about that argument. Nevertheless, I'm not all that convinced that Wine even has a lot of OS X users. If it does, that's certainly not reflected in the number of active developers (perhaps things look better if you count people who are paid to work on Wine on OS X, but I think few of those run OS X as their primary platform by choice) or in the number of bug reports we receive from OS X users.
Signed textures aren't a minor feature; most games that use shaders need them. This also
So it should be easy to convince Apple to implement EXT_texture_snorm, right? I don't think it's unreasonable to require some features from the underlying OS when you want certain features to work in Wine. It certainly works like that for the BSDs and OpenSolaris, and probably would for e.g. Haiku if they wanted to run Wine as well. I don't see why OS X should be different.
when Tiger supported it, it didn't work). Signed texture emulation isn't hard to do; all we need is proper separation of load-time and sample-time fixups.
Yeah well, hacking something together is easy. Doing it properly, you should also be able to blit to those formats, render to them (you can render to e.g. Q8W8V8U8 even in d3d9, and d3d10 can render to all kinds of formats), read them back, and probably do clears on them. If we had no other choice that would simply be a pain, but pretty much all the Linux drivers support either the various NV_texture_shader extensions or EXT_texture_snorm.
On Monday, 28 November 2011 at 15:21:11, Henri Verbeet wrote:
My point was pretty much that I care only marginally about that argument. Nevertheless, I'm not all that convinced that Wine even has a lot of OS X users. If it does, that's certainly not reflected in the number of active developers
It probably doesn't help that Wine is (partially intentionally, and partially unintentionally) crippled on this platform and most users have to use third-party tools to use it (which we rightfully don't support). Telling users that their platform isn't good enough isn't good marketing either, and I doubt that it makes Apple more likely to be cooperative.
So it should be easy to convince Apple to implement EXT_texture_snorm, right?
I filed a feature request, Bug ID 10490831. The most likely problem is that this extension requires OpenGL 3.0 on paper.
I don't think it's unreasonable to require some features from the underlying OS when you want certain features to work in Wine. It certainly works like that for the BSDs and OpenSolaris, and probably would for e.g. Haiku if they wanted to run Wine as well. I don't see why OS X should be different.
Call it hypocrisy from a technical point of view, but I think user numbers and market share matter. They certainly do on Linux, where we spend quite some time debugging and working around troubles various distros cause.
Yeah well, hacking something together is easy. Doing it properly you should also be able to blit to those formats, render to them (you can render to e.g. Q8W8V8U8 even in d3d9, d3d10 can render to all kinds of formats), read them back, and probably do clears on them.
My DirectX9 GPUs can't render to Q8W8V8U8. Blitting from those formats is easy; it's not implemented because no app has needed it yet. Blitting to those formats is harder and requires render target support. I'm fine with not supporting d3d10 features without this specific extension or GL 3.0-level driver support. But I think it's a bad idea to disable d3d8 support because a driver doesn't support some GL 3.0 extension.
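For reference, this is the check an app would do, and what fails for Q8W8V8U8 render targets on my cards (sketch):

#include <d3d9.h>

/* Ask d3d9 whether Q8W8V8U8 can be used as a render target. */
static BOOL q8w8v8u8_rt_supported(IDirect3D9 *d3d9)
{
    HRESULT hr = IDirect3D9_CheckDeviceFormat(d3d9, D3DADAPTER_DEFAULT,
            D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8, D3DUSAGE_RENDERTARGET,
            D3DRTYPE_TEXTURE, D3DFMT_Q8W8V8U8);
    return SUCCEEDED(hr);
}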
On 28 November 2011 17:57, Stefan Dösinger <stefandoesinger@gmx.at> wrote:
It probably doesn't help that Wine is (partially intentionally, and partially unintentionally) crippled on this platform and most users have to use third-party tools to use it (which we rightfully don't support). Telling users that their platform isn't good enough isn't good marketing either, and I doubt that it makes Apple more likely to be cooperative.
I think Apple's opinion on Free Software and the GPL is pretty well known by now. At this point it's probably about as likely we'll get help from Microsoft as from Apple.
So it should be easy to convince Apple to implement EXT_texture_snorm, right?
I filed a feature request, Bug ID 10490831. The most likely problem is that this extension requires OpenGL 3.0 on paper.
I don't suppose you have a link to a public bug tracker for that by any chance?
I don't think it's unreasonable to require some features from the underlying OS when you want certain features to work in Wine. It certainly works like that for the BSDs and OpenSolaris, and probably would for e.g. Haiku if they wanted to run Wine as well. I don't see why OS X should be different.
Call it hypocrisy from a technical point of view, but I think user numbers and market share matter. They certainly do on Linux, where we spend quite some time debugging and working around troubles various distros cause.
It's probably a good thing that Wine doesn't have a lot of OS X users then. I suspect "various distros" above means "mostly Ubuntu". While I don't necessarily always agree with those kinds of workarounds either, it's certainly nowhere near the amount of hacks we have for OS X in CrossOver.
Yeah well, hacking something together is easy. Doing it properly you should also be able to blit to those formats, render to them (you can render to e.g. Q8W8V8U8 even in d3d9, d3d10 can render to all kinds of formats), read them back, and probably do clears on them.
My DirectX9 GPUs can't render to Q8W8V8U8. Blitting from those formats is easy; it's not implemented because no app has needed it yet. Blitting to those formats is harder and requires render target support. I'm fine with not supporting d3d10 features without this specific extension or GL 3.0-level driver support. But I think it's a bad idea to disable d3d8 support because a driver doesn't support some GL 3.0 extension.
The original NV_texture_shader extension is over 10 years old.
On Monday, 28 November 2011 at 19:00:25, Henri Verbeet wrote:
I don't suppose you have a link to a public bug tracker for that by any chance?
Unfortunately not; Apple considers such bug reports private. I don't think the efforts to create a public mirror on a voluntary basis went anywhere.
The original NV_texture_shader extension is over 10 years old.
But it has a lot of things beyond signed textures that are specific to Nvidia hardware. It is unreasonable to expect other vendors to support it. GL_EXT_texture_snorm requires GL 3.0. We're on pretty shaky ground if we demand support for one of those extensions on generic d3d8/9 hardware.
On 28 November 2011 21:37, Stefan Dösinger <stefandoesinger@gmx.at> wrote:
The original NV_texture_shader extension is over 10 years old.
But it has a lot of things beyond signed textures that are specific to Nvidia hardware. It is unreasonable to expect other vendors to support it. GL_EXT_texture_snorm requires GL 3.0.
That's probably more a case of laziness by the spec writers than a real requirement for anything that's in there. I suppose there's MESA_texture_signed_rgba for hardware that couldn't possibly support either of those extensions.
We're on pretty shaky ground if we demand support for one of those extensions on generic d3d8/9 hardware.
We wouldn't demand support; signed textures simply aren't going to be supported if the underlying GL implementation doesn't support them. Sounds reasonable enough to me.
On Monday, 28 November 2011 at 22:05:31, Henri Verbeet wrote:
That's probably more a case of laziness by the spec writers than a real requirement for anything that's in there. I suppose there's MESA_texture_signed_rgba for hardware that couldn't possibly support either of those extensions.
It's probably because of the implied support for GL_RED and GL_RG textures and 16-bit-per-channel textures (although GL 1.2 already requires support for those).
MESA_texture_signed_rgba will probably work for r300g. WINED3DFMT_R8G8_SNORM_L8X8_UNORM can't be supported with this extension, but I assume that games do not depend on this format.
We wouldn't demand support; signed textures simply aren't going to be supported if the underlying GL implementation doesn't support them. Sounds reasonable enough to me.
Which will break many d3d8/9 games that work fine right now. Pixel shader 1 kinda implies signed texture format support with the texbem instruction.
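The typical pattern looks like this (a sketch, not taken from a specific game):

/* ps.1.1 bump mapping: t0 is a signed V8U8 bump map whose du/dv values
 * perturb the coordinates used to sample t1. Without signed texture
 * support the perturbation comes out wrong. */
static const char texbem_example[] =
    "ps.1.1\n"
    "tex t0\n"         /* sample the V8U8 bump map */
    "texbem t1, t0\n"  /* perturb t1's lookup by t0's signed du/dv */
    "mov r0, t1\n";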
However, I must say that fewer games use it than I expected. Out of the games I checked here, Sims 3 takes the format for granted and has rendering issues otherwise (texture creation is rejected). Trackmania and the Settlers 2 remake check for support and lower graphics quality. C&C3 checks for it, but I see no negative effect. 3DMark2001 crashes when pixel shaders are supported but V8U8 is not. That's about one third of the d3d8/9 games I checked; the others don't use signed textures.
However, it's not clear to me what we'd gain from dropping signed texture conversion support. Yes, it makes P8 a bit easier because you can ignore the fact that the load-time conversion selection in d3dfmt_get_conv is broken (the entire function is questionable imo; fixing the color keying parts of it will be harder). If you go the separate RGBA8 surface way you could e.g. move the P8 unpacking to the software blitter (see the sketch below) and always store P8 surfaces as GL_ALPHA8, keeping them in software if shaders aren't supported. (There's not much point in doing HW blits on older GPUs anyway. Also, the unpacked P8 textures are based on a broken design and use e.g. the wrong palette.)
It won't fix blitting from surfaces that need non-complex fixups. E.g. GL_R32F also needs this.
It won't eliminate load-time fixups. The Bumpmap-Luminance formats will still need it.
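The software unpack mentioned above would be trivial, something like this (sketch, not an existing wined3d function):

#include <stdint.h>
#include <stddef.h>

/* Resolve each P8 index through the surface's palette while copying to an
 * RGBA8 destination. */
static void unpack_p8(const uint8_t *src, uint32_t *dst,
        const uint32_t palette[256], size_t count)
{
    size_t i;
    for (i = 0; i < count; ++i)
        dst[i] = palette[src[i]];
}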
On 29 November 2011 01:23, Stefan Dösinger <stefandoesinger@gmx.at> wrote:
On Monday, 28 November 2011 at 22:05:31, Henri Verbeet wrote:
That's probably more a case of laziness by the spec writers than a real requirement for anything that's in there. I suppose there's MESA_texture_signed_rgba for hardware that couldn't possibly support either of those extensions.
It's probably because of the implied support for GL_RED and GL_RG textures and
Well yes, writing the spec against e.g. 2.1 would require specifying interactions like that, along the lines of the R/RG formats not being available without ARB_texture_rg. You can't write negative values without disabling color clamping either, so there's somewhat of an interaction with ARB_color_buffer_float as well. But that's all more on the level of "the spec writers couldn't be bothered to specify interactions" than "this can't possibly be implemented without GL3".
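I.e. on a 2.1-class implementation you'd additionally need something like this to actually get negative values written (a sketch; the entry point comes from wglGetProcAddress in practice):

#include <GL/gl.h>
#include <GL/glext.h>

/* Disable fragment color clamping through ARB_color_buffer_float so the
 * shader can write values outside [0, 1]. */
static void disable_fragment_color_clamping(PFNGLCLAMPCOLORARBPROC clamp_color)
{
    clamp_color(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);
}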
MESA_texture_signed_rgba will probably work for r300g.
r300g should support EXT_texture_snorm, IIRC.
We wouldn't demand support; signed textures simply aren't going to be supported if the underlying GL implementation doesn't support them. Sounds reasonable enough to me.
Which will break many d3d8/9 games that work fine right now. Pixel shader 1 kinda implies signed texture format support with the texbem instruction.
However, I must say that fewer games use it than I expected. Out of the games I checked here, Sims 3 takes the format for granted and has rendering issues otherwise (texture creation is rejected). Trackmania and the Settlers 2 remake check for support and lower graphics quality. C&C3 checks for it, but I see no negative effect. 3DMark2001 crashes when pixel shaders are supported but V8U8 is not. That's about one third of the d3d8/9 games I checked; the others don't use signed textures.
Well yeah, if nothing used those formats there wouldn't be much of a point supporting them at all. The (fairly general) issue is about how much we should go out of our way to support features on obscure or broken configurations. There are of course various factors involved there, like e.g. how common such a configuration is or how invasive the workaround would be. I'd argue that if we were adding support for signed textures today, we shouldn't add this kind of workaround just for OS X. Similarly, I think that if this code causes problems because of interactions with other code, getting rid of it is a valid option.
However, it's not clear to me what we'd gain from dropping signed texture conversion support.
Reduced complexity, mostly.
It won't fix blitting from surfaces that need non-complex fixups. E.g. GL_R32F also needs this.
Sure, but we'd just use ARB_texture_swizzle for that if we didn't already have the fixup code for signed formats. (Incidentally, using ARB_texture_swizzle may be slightly better in terms of performance anyway. At least on Radeons you can specify these kinds of swizzles as part of the texture format setup, instead of using up shader instructions.) With those all gone, the only formats that would have fixups would be P8 and the YUV formats. Since I don't think you can necessarily texture from YUV formats either, that could then probably be something entirely local to specific blitters.
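For R32F that would be along these lines (a sketch, assuming d3d9's (r, 1, 1, 1) read behaviour for the format):

#include <GL/gl.h>

/* The GL_TEXTURE_SWIZZLE_* parameters come from ARB_texture_swizzle. */
#ifndef GL_TEXTURE_SWIZZLE_R
#define GL_TEXTURE_SWIZZLE_R 0x8E42
#define GL_TEXTURE_SWIZZLE_G 0x8E43
#define GL_TEXTURE_SWIZZLE_B 0x8E44
#define GL_TEXTURE_SWIZZLE_A 0x8E45
#endif

/* Make a GL_R32F texture read back as (r, 1.0, 1.0, 1.0) without spending
 * shader instructions on the fixup. */
static void set_r32f_swizzle(void)
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_RED);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_ONE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_ONE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_ONE);
}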
On Tuesday, 29 November 2011 at 06:30:30, Henri Verbeet wrote:
The (fairly general) issue is about how much we should go out of our way to support features on obscure or broken configurations.
Considering that we're willing to write a quartz driver instead of blaming the OS X X server, to maintain the OSS audio backend for various BSDs, and to support gcc 2.9x, I'd say the bar for Wine as a whole is quite high.
On 29 November 2011 11:47, Stefan Dösinger <stefandoesinger@gmx.at> wrote:
On Tuesday, 29 November 2011 at 06:30:30, Henri Verbeet wrote:
The (fairly general) issue is about how much we should go out of our way to support features on obscure or broken configurations.
Considering that we're willing to write a quartz driver instead of blaming the OS X X server, to maintain the OSS audio backend for various BSDs, and to support gcc 2.9x, I'd say the bar for Wine as a whole is quite high.
Oh, I completely forgot this quartz driver we have in Wine is really just a quick hack on top of the X11 driver. I must have been wrong about this code quality crap all along, feel free to add any hacks you like, as long as it makes something work for someone.
These are of course not nearly the same thing. I think I've explained my position well enough, feel free to ignore or misinterpret at your own risk.
On Tuesday, 29 November 2011 at 12:40:42, Henri Verbeet wrote:
Oh, I completely forgot this quartz driver we have in Wine is really just a quick hack on top of the X11 driver. I must have been wrong about this code quality crap all along, feel free to add any hacks you like, as long as it makes something work for someone.
These are of course not nearly the same thing. I think I've explained my position well enough, feel free to ignore or misinterpret at your own risk.
Yeah, I guess the opinions have been expressed and we'll have to agree to disagree on the amount of effort we're willing to spend to work properly on OS X and other drivers. I'll keep pointing out when a specific patch will break a specific configuration, and then I guess it's up to Alexandre to voice his opinion. Or, if he doesn't have one, I'll yield to you since you're far more active at the moment.
Although I think it's a good idea to have somewhat of an agreement project-wide. With CoreAudio, disk arbitration for mountmgr.sys, an OS X schannel backend and wineqtdecoder, a considerable amount of work has been done for this system. It would be a pity if that went to waste because of opinion differences. It might at least be a good flamewar subject for the next WineConf.