On 26/10/2007, Francois Gouget <fgouget@free.fr> wrote:
> Parallels is a bit of a special case because it does not just replace the graphics driver, but the DirectX dlls altogether. However, VMware 5.5 does not do that.
In practice the difference isn't that significant. While replacing the DirectX dlls completely allows you to break more tests, a rather large part of Direct3D's functionality is actually implemented by the drivers. The drivers should of course behave according to the specifications, but the apparent need for the DCT and WHQL shows that this is hardly guaranteed.
> That's precisely the problem. I would add: what about Neomagic chips, then? By your reasoning there's only Windows on Nvidia and Windows on ATI, and maybe not even the latter.
Arguably that's actually the case for the situations where the results of those tests would make a difference. The point, though, is that broken graphics drivers will break the tests, and there's not a whole lot we can reasonably do about that.
> But Windows is not used on just one brand of graphics card, and our tests should acknowledge that. The goal of our tests is to probe and document the behavior of the Windows APIs, not the behavior of a specific version of the Nvidia drivers.
WineD3D is in a bit of a special position there, because it actually does replace both the Direct3D dlls and the driver. While it of course shouldn't test the behaviour of a specific version of a specific vendor's drivers, it *should*, at least in part, test the behaviour of a driver that conforms to the specs.
> Anyway... Can this be fixed by better checking the Direct3D capabilities? Does the test check for some undocumented feature or a documented one? Do we know that it works with the Nvidia, ATI and Intel drivers? If not, then maybe it should be removed?
I don't think the caps will really be useful here, but we could retrieve the card and driver name and skip the test for some combinations; IDirect3D9::GetAdapterIdentifier(), for example, should return enough information for d3d9. I'm not sure we want to go there, though; Alexandre has said before that the tests should simply fail on broken setups.
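
For illustration, a minimal sketch of what that could look like in a d3d9 test, assuming the usual wine/test.h helpers. The VendorId check is purely a placeholder to show the mechanism, not a proposal for an actual blacklist:

#define COBJMACROS
#include <d3d9.h>
#include "wine/test.h"

/* Query the adapter identifier and decide whether to skip.
 * The VendorId comparison below is hypothetical; 0x8086 is Intel's
 * PCI vendor ID, used here only as an example. */
static BOOL is_broken_driver(IDirect3D9 *d3d9)
{
    D3DADAPTER_IDENTIFIER9 identifier;
    HRESULT hr;

    hr = IDirect3D9_GetAdapterIdentifier(d3d9, D3DADAPTER_DEFAULT, 0, &identifier);
    if (FAILED(hr)) return FALSE;

    trace("Driver \"%s\", description \"%s\".\n", identifier.Driver, identifier.Description);

    return identifier.VendorId == 0x8086;
}

A test could then bail out early:

    if (is_broken_driver(d3d9))
    {
        skip("Driver is known to be broken for this test, skipping.\n");
        return;
    }

Though as said above, Alexandre would rather see the tests simply fail on such setups.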