Francois Gouget <fgouget@free.fr> writes:
> On Sun, 24 Aug 2003, Jon Griffiths wrote:
> [...]
>> Note that a regression is something that used to work, and now doesn't. So any regression testing should only report failures for tests that used to work and now don't. New tests that fail, or those that have never succeeded, _aren't_ regressions, and shouldn't be marked
>
> These are not regression tests but conformance tests. They should report anything that does not conform to the Windows behavior as an error.

I disagree; as far as I'm concerned, they are definitely regression tests. If we only wanted to check conformance, then it would be OK for tests to fail everywhere we are not compatible; it's because the tests must be usable to find regressions that we cannot have failing ones in the tree. And that's why we have the todo_wine macro to mark tests that are expected to fail.
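
For illustration, here is a minimal sketch of what a todo_wine-marked check looks like in a conformance test. ok(), todo_wine and START_TEST() are the wine/test.h primitives; get_answer(), the test name "answer" and the expected value 42 are made-up placeholders, not something from this thread:

    #include <stdarg.h>
    #include <windef.h>
    #include <winbase.h>
    #include "wine/test.h"

    /* hypothetical function standing in for the API under test */
    static int get_answer(void)
    {
        return 41;  /* pretend Wine currently returns the wrong value */
    }

    START_TEST(answer)
    {
        int ret = get_answer();

        /* a check that already passes on both Windows and Wine */
        ok(ret > 0, "expected a positive value, got %d\n", ret);

        /* a check that passes on Windows but is known to fail on Wine:
           todo_wine turns the failure into an expected one, so it does
           not show up as a regression; once Wine is fixed, the todo_wine
           must be removed or the test reports an unexpected success */
        todo_wine
        ok(ret == 42, "expected 42, got %d\n", ret);
    }

Written this way, the same test serves both purposes: on Windows every check must pass (conformance), while on Wine only checks that used to pass can start failing (regression).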