On Thu, Jan 03, 2002 at 04:55:03PM -0500, Robert Baruch wrote:
> On Thursday 03 January 2002 07:54 am, Andriy Palamarchuk wrote:
> The value is when you add new functionality (and possibly new tests)
> and old tests break. Then you can pinpoint the changes that caused
> the old tests to break. Again, that can only work if all the old
> tests succeeded, which means you can't include tests that you know
> will fail in a release.
No, you can!
This is exactly what everybody seems to assume we don't need: tests that are *known* to fail (as Ulrich Weigand said: have status variables like FAIL, XFAIL, GOOD, XGOOD).
The key to success is to check the *difference* from the *expected* behaviour. If there is indeed a *difference*, then we know that something changed and needs to be examined more closely, and it is from that comparison that we derive our ultimate result codes.
Again, the point is not to include only tests that pass on all occasions. Instead we should have tests that are as thorough/strict as possible, with all sorts of failures, but which ultimately don't make the test suite fail, since they're *expected* to fail for now.
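Something like this rough C sketch, maybe (the names, and my reading of XGOOD as "passed although a failure was expected", are my own assumptions, not code from any actual framework):

    #include <stdio.h>

    /* Hypothetical status values along the lines Ulrich suggested;
     * I'm assuming XFAIL = "failed, as expected" and
     * XGOOD  = "passed, although a failure was expected". */
    enum status { GOOD, FAIL, XFAIL, XGOOD };

    /* Derive the status from the actual vs. the expected outcome. */
    static enum status check(int passed, int expected_to_pass)
    {
        if (passed)
            return expected_to_pass ? GOOD : XGOOD;
        return expected_to_pass ? FAIL : XFAIL;
    }

    /* Only a *difference* from expected behaviour counts against the
     * suite: FAIL (should have passed) or XGOOD (a known failure
     * suddenly passes) both mean something changed. */
    static int is_unexpected(enum status s)
    {
        return s == FAIL || s == XGOOD;
    }

    int main(void)
    {
        /* A known failure that still fails keeps the suite green. */
        enum status s = check(0 /* failed */, 0 /* expected to fail */);
        printf("unexpected: %d\n", is_unexpected(s));   /* prints 0 */
        return 0;
    }

That way the strict tests stay in the suite, the known failures are documented right in the test, and the suite only goes red when behaviour actually *changes* in either direction.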