On Wed, Jan 02, 2002 at 10:20:25AM -0800, Alexandre Julliard wrote:
> Francois Gouget <fgouget@free.fr> writes:
> > - it should be easy to correlate with the source of the test. For
> >   instance if a check fails, it would be a good idea to print a message
> >   that can easily be grepped in the source code, or even the line number
> >   for that check. But don't print line numbers for success messages:
> >   they will change whenever someone changes the test and would require
> >   an update to the reference files.
> IMO the test should not be printing successes or failures at all. If it
> can determine whether some result is OK or not, it should simply do an
> assert on the result. Printing things should be only for the cases where
> checking for failure is too complicated, and so we need to rely on the
> output comparison to detect failures.
Hmm, I don't know how you'd do that exactly. If we implement "strict" testing, then tons of functions will fail on Wine. And then we get an assert() every 20 seconds or what ?? More info needed here, I guess...
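FWIW, here's a rough sketch of the kind of check Francois describes: a
greppable message with the source line on failure, and silence on success
so the reference output stays stable. The CHECK name and the message
format are just invented for the example:

    #include <stdio.h>

    static int failures = 0;

    /* Print a greppable message with the source line only when the check
     * fails; successful checks stay silent, so the reference output does
     * not change every time somebody edits the test file. */
    #define CHECK(cond) \
        do { \
            if (!(cond)) { \
                printf("FAILED %s:%d: %s\n", __FILE__, __LINE__, #cond); \
                failures++; \
            } \
        } while (0)

A test would then just be a list of CHECK() calls and could return
'failures' as its exit code.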
> > Otherwise this file is either called:
> >   - 'xxx.ref'
> >   - or 'xxx.win95', 'xxx.win98', ... if the output depends on the
> >     Windows version being emulated. The winver-specific file takes
> >     precedence over the '.ref' file, and the '.ref' file, which should
> >     always exist, serves as a fallback.
> No, there should always be a single .ref file IMO. Version checks should
> be done inside the test itself to make sure the output is always the
> same.
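Just to make the proposed lookup concrete, a sketch in C (the helper name
is invented; with a single .ref file as Alexandre prefers, it collapses
to the last fopen() alone):

    #include <stdio.h>

    /* Hypothetical helper: try the winver-specific reference file first,
     * then fall back to the mandatory 'xxx.ref' file. */
    static FILE *open_reference(const char *test, const char *winver)
    {
        char name[256];
        FILE *f;

        snprintf(name, sizeof(name), "%s.%s", test, winver);
        if ((f = fopen(name, "r"))) return f;

        snprintf(name, sizeof(name), "%s.ref", test);
        return fopen(name, "r");
    }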
[...]
> > - TEST_WINVER
> >   This contains the value of the '-winver' Wine argument, or the
> >   Windows version if the test is being run on Windows. We should
> >   mandate the use of specific values of winver so that tests don't
> >   have to recognize all the synonyms of win2000 (nt2k, ...), etc.
> >   (Do we need to distinguish between Windows and Wine?)
> The test should use GetVersion() and friends IMO, no need for a separate
> variable.
Doh ! Right ! Like Andriy already said: the tests themselves should reflect the entire behaviour of the functions, including the version differences. One additional step in the direction of very simple, self-contained tests...
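So rather than exporting a TEST_WINVER variable, a test could branch on
what the system reports itself, roughly like this (sketch only):

    #include <windows.h>

    /* Ask Windows (or Wine) for the version from inside the test and
     * branch there, so the printed output stays identical whatever
     * version the test runs on. */
    static void check_version_dependent_behaviour(void)
    {
        OSVERSIONINFOA ver;

        ver.dwOSVersionInfoSize = sizeof(ver);
        GetVersionExA(&ver);

        if (ver.dwPlatformId == VER_PLATFORM_WIN32_NT)
        {
            /* check the NT/win2000 behaviour here */
        }
        else
        {
            /* check the win95/win98 behaviour here */
        }
    }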
> > - and add two corresponding targets: 'make cui-tests' runs only those
> >   tests that do not pop up windows, and 'make gui-tests' runs only
> >   those tests that do pop up windows
> > - 'make tests' would be 'tests: cui-tests gui-tests'
> I don't think this complexity is necessary. You can always redirect the
> display if the windows annoy you. And tests should try to keep the
> windows hidden as much as possible.
Hmm, why complexity ? Is it really that difficult to implement ? I'd say it's a useful feature, and it doesn't incur much penalty, so the feature/penalty quotient is high enough ;-) Avoiding visible windows as much as possible would be nice to have, though.
Hmm, OTOH: maybe it'd be better to use make targets tests-unattended and tests-visual instead (note that I'm writing the names the other way around). This of course means that even many GUI tests would fall under the "unattended" category, thus annoying window popups would have to be minimized.
OTOH we already kind of decided that we don't want to care about GUI testing right now, so maybe we should really just use one test target for now. Splitting later should be easy anyway.
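And regarding keeping the windows hidden: a GUI test can usually create
its windows without WS_VISIBLE and still send messages to them, e.g.
(sketch only, nothing Wine-specific assumed):

    #include <windows.h>

    /* Create the test window without WS_VISIBLE so nothing pops up
     * during an unattended run; messages can still be sent to it. */
    static HWND create_test_window(void)
    {
        return CreateWindowA("STATIC", "wine test window",
                             WS_POPUP,            /* note: no WS_VISIBLE */
                             0, 0, 100, 100,
                             NULL, NULL, GetModuleHandleA(NULL), NULL);
    }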