"H. Verbeet" hverbeet@gmail.com writes:
On 10/10/2007, Jeremy White jwhite@codeweavers.com wrote:
I think a test that fails, or crashes a system, because of a driver bug is a broken test. It's admittedly hard to write a test that will detect a VMware situation or driver bug and work around it, but I think that is what we should do.
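For concreteness, detecting something like a VMware adapter in a d3d9 test could look roughly like the sketch below. This is not existing test code; the 0x15ad vendor ID check and the use of skip() are illustrative assumptions about how such a workaround might be written.

#define COBJMACROS
#include <d3d9.h>
#include "wine/test.h"

static BOOL adapter_is_vmware(IDirect3D9 *d3d)
{
    D3DADAPTER_IDENTIFIER9 identifier;

    if (FAILED(IDirect3D9_GetAdapterIdentifier(d3d, D3DADAPTER_DEFAULT, 0, &identifier)))
        return FALSE;

    /* 0x15ad is the PCI vendor ID reported by VMware's virtual GPU. */
    return identifier.VendorId == 0x15ad;
}

static void test_something(IDirect3D9 *d3d)
{
    if (adapter_is_vmware(d3d))
    {
        skip("VMware d3d implementation is known to be broken here, skipping.\n");
        return;
    }
    /* ... the actual test would go here ... */
}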
At that point the only real option is keeping a list of blacklisted video drivers (which some applications actually sort of do), but I'm not sure we really want to go there. I do think we should investigate native failures, and fix them if possible, but if tests fail because e.g. VMware has a broken d3d implementation, I think the test is essentially doing what it should do.
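As a rough illustration of the kind of blacklist some applications keep, a check against a table of known-broken vendor/device/driver-version combinations might look like this; the table entry below is invented purely as an example, and where (if anywhere) such a table should live is exactly what is being debated.

#define COBJMACROS
#include <d3d9.h>

struct blacklist_entry
{
    DWORD vendor_id;
    DWORD device_id;              /* 0 matches any device from this vendor */
    LONGLONG max_broken_version;  /* driver versions up to this are treated as broken */
};

/* Purely illustrative entry; a real list would be driven by bug reports. */
static const struct blacklist_entry blacklist[] =
{
    {0x15ad, 0, 0x0001000100010001ll},
};

static BOOL driver_is_blacklisted(IDirect3D9 *d3d)
{
    D3DADAPTER_IDENTIFIER9 id;
    unsigned int i;

    if (FAILED(IDirect3D9_GetAdapterIdentifier(d3d, D3DADAPTER_DEFAULT, 0, &id)))
        return FALSE;

    for (i = 0; i < sizeof(blacklist) / sizeof(blacklist[0]); ++i)
    {
        if (blacklist[i].vendor_id != id.VendorId) continue;
        if (blacklist[i].device_id && blacklist[i].device_id != id.DeviceId) continue;
        if (id.DriverVersion.QuadPart <= blacklist[i].max_broken_version) return TRUE;
    }
    return FALSE;
}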
Yes, if the code is going to break in normal use too, then it's OK for the test to fail. It simply means the platform is useless both for regression testing and for normal apps that use the functionality. If we really have to add workarounds for broken drivers, this has to be done in the code, not in the tests.