Eric Pouech <Eric.Pouech@wanadoo.fr> writes:
> Do you really feel like a C compiler is less common than a perl binary under Windows? I seriously doubt it...
Anybody can install a perl binary on their machine, while nearly all Windows compilers are commercial software...
> Well, the stubs between perl and Windows will not be exactly the same anyway, so you'll introduce code differences between the two in any case (not to mention that the perl interpreters themselves might have small differences too).
You'll need some serious testing to make sure that both interpreters behave the same way, of course; but you only have to do this once, and you can then share test scripts freely. If you have to compile them, you'll never know whether your foo.exe and your foo.so were built from the same source, since you cannot diff the binaries to check.
> In some other cases, it may also be interesting to simply run existing programs (if they exist). This shouldn't be part of the whole test system, but if a program exists then use it (of course command-line programs are much easier to drive than UI ones...)
That's a completely different test environment. The perl stuff is meant for unit testing (a single API at a time), and especially for catching regressions. Running whole applications is of course necessary too, but it's not the same usage nor the same goal. I don't want 'make test' to launch winword so that I can test it too.
> Where I want to end up is that we'd need several test environments (perl being one; pure C could be another, which is needed anyway to test the Winelib part and the associated tools; command-line programs driven from shell scripts could be a third). The driving idea would be that all of those test tools spit out the same type of output (they all belong to the first pass), so we can share the analysis tools.
I don't see any need for that. The tests and the results are very different, and I don't see what you'll be able to share. The result of a unit test would be something like 'HeapFree does not set the correct last error code when called with pointer xxx'; the result of running an application would be 'winword crashes when I click on save'. It's a completely different approach to testing; both are necessary, but not within the same environment IMO.
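For illustration, such a per-API check might look roughly like the sketch below (the kernel32-> stub syntax follows the hypothetical notation used further down, and the specific expected error code is only an assumption made for the example):

    # rough sketch of a per-API check and the kind of message it would emit
    $heap = kernel32->GetProcessHeap();
    $ret  = kernel32->HeapFree($heap, 0, $bad_ptr);   # $bad_ptr: some bogus pointer
    $err  = kernel32->GetLastError();
    if (!$ret && $err == ERROR_INVALID_PARAMETER) {
        print "OK\n";
    } else {
        print "HeapFree does not set the correct last error code when called with pointer $bad_ptr\n";
    }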
> There's just one point that worries me. Let me explain how I ended up with XML. One thing I don't like is having a "reference" file without knowing what it refers to. I think the test process should output both the result of the test (pure ASCII is fine) and the environment of the test (Windows, and if so which version, even which DLLs were used; or Wine, which version, which DLLs, native vs. builtin...) so that we can make useful comparisons.
But none of this information is relevant for unit testing. The only information that may be needed is the Windows version, and this only for the few APIs that actually behave differently between versions. And if you do your test scripts right you don't even need to know about it. For instance, instead of doing something like:
    kernel32->MapViewOfFile(bad params);
    print kernel32->GetLastError();
which would print different things on 95 and NT, you'd do:
    kernel32->MapViewOfFile(bad params);
    $err = kernel32->GetLastError();
    if (kernel32->GetVersion() == win95) {
        print(($err == ERROR_INVALID_ADDRESS) ? "OK" : "failed: $err");
    } else {
        print(($err == ERROR_INVALID_PARAMETER) ? "OK" : "failed: $err");
    }
This way the script always prints the same thing no matter the Windows version, and you can use a simple diff against a single reference file.
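Just as a sketch of how small that comparison step could be (the file names here are made up for the example, they are not part of any actual test framework):

    # run one test script and diff its output against the checked-in reference
    system("perl tests/kernel32.pl > kernel32.out") == 0
        or die "kernel32 test script failed to run\n";
    system("diff -u tests/kernel32.ref kernel32.out") == 0
        or print "kernel32: test output differs from the reference file\n";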