Eric Pouech Eric.Pouech@wanadoo.fr writes:
So, I rather had in mind a two-pass approach. Pass 1: let the scripts run and output their results. Pass 2: analyze the results. This could be done either by comparing with a manually edited reference case, or reference cases could be generated by running the program under Win <put any version here>.
Yes, that's the idea; the test script will output a bunch of results which can then be compared against a reference run generated under Windows. We probably also want a standard config file (or a few of them to handle version differences) to ensure the test environment is the same for everybody.
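[To make the two-pass idea concrete, here is a minimal sketch of what one compiled test could look like, assuming the plain-ASCII output convention discussed later in this thread; the checks and the output format are hypothetical, not an agreed interface:]

    #include <stdio.h>
    #include <windows.h>

    /* Pass 1: run the checks and print one deterministic ASCII line each.
     * The run under real Windows produces the reference file; pass 2 is
     * then just a diff of the reference against the Wine output. */
    int main(void)
    {
        char buf[8];

        printf("lstrlenA(\"hello\") = %d\n", lstrlenA("hello"));

        /* lstrcpynA should copy at most 7 chars plus the terminating nul */
        lstrcpynA(buf, "overflowing string", sizeof(buf));
        printf("lstrcpynA truncation -> \"%s\"\n", buf);

        /* normalize the return value: only equal/not-equal is portable */
        printf("lstrcmpiA(\"WINE\",\"wine\") equal = %d\n",
               lstrcmpiA("WINE", "wine") == 0);
        return 0;
    }

[The run under Windows would be checked in as the reference; diff reference.txt output.txt producing no output would then mean the test passes.]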
So, this requires the test programs to also compile and run under Windows... This could even allow (from the same source code, for example) three test cases: 1/ compiled and run under Windows; 2/ compiled under Windows, but run with Wine; 3/ compiled as a Winelib app.
The idea of using an interpreter like Perl is precisely that you don't need to compile anything to run tests. I think this is important because not everybody has a Windows compiler. It also allows using the exact same test script under Windows and Wine, so that you don't have to worry whether your Windows binary exactly matches your Winelib binary.
In order to help with pass 2 (and even produce all possible reports), I had in mind (though I haven't looked closely at it) letting the test scripts produce XML output. This would allow the test harness to emit any relevant information (like the DLLs used, native or builtin, their versions, the Windows version, emulated by Wine or native...), plus the test output itself. Analysis and reporting would then be (partially) driven by XSL/XSLT tools.
I don't think we need anything fancy like that. The output should be simple ASCII that can be automatically compared with diff, and the test is considered a failure if diff finds any difference against the reference output. Everything should be automated as much as possible.
For developers, the make test should be made more fine-grained (IMO): for example (not necessarily through make), testing a single DLL (if some modifications have been made to it) or even a single entry point... (or a set of entry points).
Yes, there should be one set of test scenarios for each dll, and each scenario should test one entry point (or a few related ones); this way you can either run a single test, do a make test in a dll directory to run the tests for this dll, or do a make test at the top level, which will simply iterate through all the dlls. The tests should ideally run fast enough that you can do a make test in the dll directory every time you change something.
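[A purely hypothetical layout for such per-dll scenarios, following the shape of the Wine tree but not any agreed structure:]

    dlls/kernel32/tests/lstrcpyn     (one scenario per entry point,
    dlls/kernel32/tests/codepage      or per small group of entry points)
    dlls/user32/tests/sysparams

[make test at the top level would then walk all the dlls/*/tests/ directories, while running it inside dlls/kernel32 would run only that dll's scenarios.]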
It's the way I started splitting the test cases (DLL / set of features...). This has drawbacks: for example, it's rather hard to test, say, Unicode support across all DLLs.
But you should never need to do that. If you change something in a core Unicode function in kernel, this will be tested by the kernel test scenarios. If the test scenarios don't find a problem but tests for higher level dlls fail, then your kernel test scenarios are buggy since they didn't spot the change.
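[As an illustration of this layering argument, a kernel-level scenario for the core conversion routines might look like the following; again only a sketch with a made-up output format:]

    #include <stdio.h>
    #include <windows.h>

    /* kernel32 scenario: if someone breaks a core Unicode conversion,
     * this test should catch it before any higher-level dll test does */
    int main(void)
    {
        WCHAR wbuf[16];
        char abuf[16];
        int wlen, alen;

        /* round-trip a plain ASCII string, deterministic in any codepage */
        wlen = MultiByteToWideChar(CP_ACP, 0, "wine", -1, wbuf, 16);
        alen = WideCharToMultiByte(CP_ACP, 0, wbuf, -1, abuf, 16, NULL, NULL);

        printf("MultiByteToWideChar length = %d\n", wlen);  /* 5: "wine" + nul */
        printf("round trip intact = %d\n",
               alen == 5 && lstrcmpA(abuf, "wine") == 0);
        return 0;
    }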
On 21 Feb 2001, Alexandre Julliard wrote:
Eric Pouech Eric.Pouech@wanadoo.fr writes:
[...]
So, this requires the test programs to also compile and run under Windows... This could even allow (from the same source code, for example) three test cases: 1/ compiled and run under Windows; 2/ compiled under Windows, but run with Wine; 3/ compiled as a Winelib app.
The idea of using an interpreter like Perl is precisely that you don't need to compile anything to run tests. I think this is important because not everybody has a Windows compiler. It also allows using the exact same test script under Windows and Wine, so that you don't have to worry whether your Windows binary exactly matches your Winelib binary.
The downsides of interpreter-based tests are:
- they won't test the Winelib headers or Winelib-specific issues
- I imagine that some of our potential test writers would be Windows programmers (after all, these tests would be nothing more than simple Windows applications). They would probably be more comfortable writing tests in C/C++.
So I guess I would prefer C/C++-based regression tests, but I'm not really opposed to interpreter-based tests either. Let's get something rolling.
-- Francois Gouget fgouget@free.fr http://fgouget.free.fr/ The Earth is a beta...
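[For comparison, a C-based test is just a small Windows program, and merely compiling the same source with a Windows compiler and as a Winelib app already exercises both sets of headers. A minimal sketch, with the output line again being a made-up convention:]

    #include <stdio.h>
    #include <windows.h>
    #include <tchar.h>   /* pulls the TCHAR machinery from either set of headers */

    int main(void)
    {
        TCHAR buf[MAX_PATH];
        DWORD len = GetTempPath(MAX_PATH, buf);  /* expands to the A or W variant */

        /* print a normalized boolean so the same reference file works
         * whatever the actual temp path is */
        printf("GetTempPath returned a sane length = %d\n",
               len > 0 && len < MAX_PATH);
        return 0;
    }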
So, this requires the test programs to also compile and run under Windows... This could even allow (from the same source code, for example) three test cases: 1/ compiled and run under Windows; 2/ compiled under Windows, but run with Wine; 3/ compiled as a Winelib app.
The idea of using an interpreter like Perl is precisely that you don't need to compile anything to run tests. I think this is important because not everybody has a Windows compiler.
Do you really feel that a C compiler is less common than a Perl binary under Windows? I seriously doubt it...
It also allows using the exact same test script under Windows and Wine, so that you don't have to worry whether your Windows binary exactly matches your Winelib binary.
Well, the stubs between Perl and Windows will not be exactly the same anyway, so you'll introduce code differences between the two regardless (not to speak of the Perl interpreters, which might have small differences too). But let's try to open this up a bit. I don't think that Perl-only testing is sufficient (it can be a very good test bed for some cases, but it won't cover all aspects). In some other cases, it may also be interesting to simply run existing programs (if they exist). This shouldn't be a required part of the whole test system, but if a program exists, then use it (of course, command-line programs are much easier to handle than UI ones...).
Where I want to end up is that we'd need several test environments (Perl being one; pure C could be another, which is needed anyway to test the Winelib part and the associated tools; command-line programs driven from shell scripts could be a third). The driving idea is that all of those test tools emit the same type of output (they all belong to the first pass), so we can share the analysis tools.
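[One way to share the pass-2 analysis tools across Perl, C and shell-based environments would be to agree on a common output line format; sketched here in C with a made-up "dll:entry: result" convention:]

    #include <stdio.h>
    #include <stdarg.h>
    #include <windows.h>

    /* every test environment would emit lines of the same shape, so the
     * analysis tools don't care which language produced them */
    static void report(const char *dll, const char *entry, const char *fmt, ...)
    {
        va_list ap;
        printf("%s:%s: ", dll, entry);
        va_start(ap, fmt);
        vprintf(fmt, ap);
        va_end(ap);
        putchar('\n');
    }

    int main(void)
    {
        report("kernel32", "lstrlenA", "len=%d", lstrlenA("hello"));
        return 0;
    }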
In order to help with pass 2 (and even produce all possible reports), I had in mind (though I haven't looked closely at it) letting the test scripts produce XML output. This would allow the test harness to emit any relevant information (like the DLLs used, native or builtin, their versions, the Windows version, emulated by Wine or native...), plus the test output itself. Analysis and reporting would then be (partially) driven by XSL/XSLT tools.
I don't think we need anything fancy like that. The output should be simple ASCII that can be automatically compared with diff, and the test is considered a failure if diff finds any difference against the reference output. Everything should be automated as much as possible.
There's just one point that worries me. Let me explain how I ended up with XML: one thing I don't like is having a "reference" file without knowing what it refers to. I think the test process should output both the result of the test (pure ASCII is fine) and the environment of the test (Windows, and if so which version and even which DLLs were used; vs. Wine, and which version, which DLLs are used, native vs. builtin...) so that we can make useful comparisons. XML is just a way to structure the metadata about the test; the output of the test per se could be pure ASCII.
A+
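[Eric's metadata concern could be met even without XML: the first pass could prefix its plain-ASCII results with a header describing the environment it ran in. A sketch, with an entirely hypothetical choice of fields:]

    #include <stdio.h>
    #include <windows.h>

    /* record what this run is actually a reference for, before the
     * results; comparison tools would skip the '#' lines when diffing */
    int main(void)
    {
        OSVERSIONINFOA ver;

        ver.dwOSVersionInfoSize = sizeof(ver);
        GetVersionExA(&ver);
        printf("# platform=%lu version=%lu.%lu build=%lu\n",
               ver.dwPlatformId, ver.dwMajorVersion,
               ver.dwMinorVersion, ver.dwBuildNumber);

        /* ... the regular ASCII test output would follow here ... */
        return 0;
    }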