On Thursday 03 January 2002 06:05 pm, Andreas Mohr wrote:
On Thu, Jan 03, 2002 at 04:55:03PM -0500, Robert Baruch wrote:
The value is when you add new functionality (and possibly new tests) and old tests break. Then you can pinpoint the changes that caused the old tests to break. Again, that can only work if all the old tests succeeded, which means a release can't include tests that you already know will fail.
No, you can!
This is exactly what everybody seems to assume we don't need: tests that are *known* to fail (as Ulrich Weigand said: have status values like FAIL, XFAIL, GOOD, XGOOD).
The key to success is to check the *difference* from *expected* behaviour.
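To illustrate the idea (this is only a minimal sketch, not Wine's actual test framework; the names EXPECT_PASS, EXPECT_FAIL and the sample test functions are hypothetical), a harness can record an expected outcome per test and report only deviations from it. A known failure that still fails is just XFAIL, not a regression; a known failure that suddenly passes is flagged so the expectation can be updated.

    /* Sketch of expectation-based test reporting (hypothetical names). */
    #include <stdio.h>

    enum expected { EXPECT_PASS, EXPECT_FAIL };

    struct test {
        const char *name;
        int (*run)(void);        /* returns nonzero on success */
        enum expected expect;
    };

    static int test_that_works(void)   { return 1; }
    static int test_known_broken(void) { return 0; }  /* documents a known bug */

    static const struct test tests[] = {
        { "existing_behaviour", test_that_works,   EXPECT_PASS },
        { "unimplemented_api",  test_known_broken, EXPECT_FAIL },
    };

    int main(void)
    {
        unsigned int i, regressions = 0;

        for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
        {
            int passed = tests[i].run();

            if (passed && tests[i].expect == EXPECT_PASS)
                printf("GOOD   %s\n", tests[i].name);           /* as expected */
            else if (!passed && tests[i].expect == EXPECT_FAIL)
                printf("XFAIL  %s\n", tests[i].name);           /* known failure, not a regression */
            else if (passed && tests[i].expect == EXPECT_FAIL)
                printf("XGOOD  %s (update the expectation)\n", tests[i].name);
            else
            {
                printf("FAIL   %s (regression)\n", tests[i].name);
                regressions++;
            }
        }
        return regressions ? 1 : 0;
    }

With this scheme a release can ship tests for unimplemented functionality: only an unexpected FAIL (or an unexpected XGOOD) needs attention.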
Oh, I see. That does make more sense.
I think my problem was with what XP defines as a "release": a system that performs some of its functionality perfectly and doesn't perform the rest at all. That is, a customer can play with a "release" and expect not to break the app.
Since Wine effectively gives the "customer" (the Windows exe) access to functionality that hasn't been completed yet, Wine releases aren't the same as XP releases, so the XP concept of 100% success in unit tests doesn't apply.
--Rob