I think it would be great if we could start to define (and build) a test harness. I think that there are a lot of people who would help write test scripts, who might otherwise be unable to help with Wine development. The more the merrier, I always say...
100% agreed
What did you have in mind? Our thinking, beyond building the Perl-based winetest, hasn't gone very far. I think we've imagined the following:
make test: performs an automated, non-interactive regression test and returns true or false.
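For illustration, here's a minimal sketch of what such a non-interactive test case could look like (this is just an assumed shape, not actual winetest code); make test would only need its exit status:

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        int failures = 0;

        /* example check: the current process id should never be 0 */
        if (GetCurrentProcessId() == 0) {
            printf("GetCurrentProcessId: FAIL\n");
            failures++;
        }

        /* "make test" only needs the exit status: 0 = success */
        return failures ? 1 : 0;
    }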
well, I started thinking about that... there could be several uses of a test harness for Wine. We could simply define whether a result is good or not, but this goes a bit deeper than that. For example, some cases should work in Win98 emulation but fail under Win2k... So I had in mind a two-pass approach (a small sketch of pass 1 follows below):
pass 1: let the scripts run and output the results
pass 2: analyze the results. This could mean either comparing against a manually edited reference case, or generating the reference cases by running the program under Win <put any version here>.
So, this requires that the test programs also compile and run under Windows... This could even allow (from the same source code, for example) three test cases:
1/ compiled and run under Windows
2/ compiled under Windows, but run with Wine
3/ compiled as a Winelib app
normally, there shouldn't be too many differences between 2 and 3... but we all know what "normally" means...
Of course, we don't need to start with all that
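To make the two-pass idea concrete, here's a hedged sketch of the pass 1 side (the API calls are real Win32, but the specific checks are invented): the program only records what it observed, and pass 2 can later diff that output against a reference captured under a given Windows version:

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        char windir[MAX_PATH];

        /* pass 1: raw observations only, no pass/fail judgement here */
        printf("GetVersion: %08lx\n", (unsigned long)GetVersion());

        GetWindowsDirectoryA(windir, sizeof(windir));
        printf("GetWindowsDirectoryA: %s\n", windir);

        /* pass 2 (elsewhere) compares this output against a reference
           file generated by the same program under Win98, Win2k, ... */
        return 0;
    }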
in order to help with pass 2 (and even to produce all possible reports), I had in mind (though I haven't given it a good look yet) letting the test scripts produce XML output. This would allow the test harness to spit out any useful information, such as the DLLs used (native/builtin) and their versions, the version of Windows (emulated by Wine, or native...), plus the test output itself (a rough sketch follows below).
Analysis and reporting should then be (partially) driven by some XSL/XSLT tools. This may be a bit overkill to start with, but could be a reasonable target.
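As a rough sketch of what such XML output might look like (the element and attribute names here are invented for illustration, not a proposed schema), a test could emit something like:

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        OSVERSIONINFOA ver;

        ver.dwOSVersionInfoSize = sizeof(ver);
        GetVersionExA(&ver);

        printf("<test name=\"example\">\n");
        printf("  <environment>\n");
        printf("    <windows major=\"%lu\" minor=\"%lu\"/>\n",
               (unsigned long)ver.dwMajorVersion,
               (unsigned long)ver.dwMinorVersion);
        /* DLL native/builtin info and versions would also go here */
        printf("  </environment>\n");
        printf("  <check name=\"some_entry_point\" result=\"ok\"/>\n");
        printf("</test>\n");
        return 0;
    }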
As for the test cases themselves, I've written a few (as all of us have, more or less), basically tackling the multimedia DLLs.
However, they're also a good basis for reverse engineering at the API level (or for better understanding some un^^ poorly documented features of the APIs).
For developers, make test should be made more fine-grained (IMO): being able (not necessarily through make) to test a single DLL (if modifications have been made to it), or even an entry point... (or a set of entry points). That's the way I started splitting the test cases (by DLL / set of features...). This has drawbacks; for example, it's rather hard to test, say, Unicode support across all DLLs.
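For instance, a per-entry-point test for a multimedia DLL might look like the following sketch (the test name is invented; link against winmm):

    #include <stdio.h>
    #include <windows.h>
    #include <mmsystem.h>

    /* invented example: test a single winmm entry point */
    static int test_waveOutGetNumDevs(void)
    {
        UINT n = waveOutGetNumDevs();

        /* without reference hardware we can only sanity-check:
           two consecutive calls should agree */
        if (n != waveOutGetNumDevs()) {
            printf("waveOutGetNumDevs: FAIL (inconsistent count)\n");
            return 1;
        }
        printf("waveOutGetNumDevs: OK (%u devices)\n", n);
        return 0;
    }

    int main(void)
    {
        return test_waveOutGetNumDevs();
    }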
make interactive-test: runs through interactive tests that require user interaction with the system; the user confirms or denies their results.
Both of these test runs could also be done on a Windows box, so that an exact 1:1 comparison could be made.
Of course, the hard part is figuring out how to do the interactive-test. We've thought about trying to automate it (screen shots, simulating mouse movements, all that good stuff), but I don't think we have an obvious or brilliant idea. We looked into tools for that, but found no good free ones; there is also a suspicion that an X-based test tool won't be enough for Wine.
Not many thoughts on this one either... did you try the Windows message journaling API? IIRC, some early MS test programs did use it.
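If I remember the API correctly, that's the WH_JOURNALRECORD / WH_JOURNALPLAYBACK pair of hooks; a minimal, untested sketch of the recording side could look like this:

    #include <stdio.h>
    #include <windows.h>

    static HHOOK hook;

    /* called once per hardware event while the hook is installed */
    static LRESULT CALLBACK record_proc(int code, WPARAM wParam, LPARAM lParam)
    {
        if (code == HC_ACTION) {
            EVENTMSG *ev = (EVENTMSG *)lParam;
            printf("msg=%04x paramL=%04x paramH=%04x time=%lu\n",
                   ev->message, ev->paramL, ev->paramH,
                   (unsigned long)ev->time);
        }
        return CallNextHookEx(hook, code, wParam, lParam);
    }

    int main(void)
    {
        MSG msg;

        hook = SetWindowsHookExA(WH_JOURNALRECORD, record_proc,
                                 GetModuleHandleA(NULL), 0);
        if (!hook) return 1;

        /* the installing thread needs a message loop for the hook to run */
        while (GetMessageA(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageA(&msg);
        }
        UnhookWindowsHookEx(hook);
        return 0;
    }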
A+