Andreas Mohr wrote:
I guess we really should change our development model from trying tons of programs to *systematically* testing functions and Windows mechanisms now. If we can show everyone where stuff is failing, it might be a lot easier to attract new people.
I *completely* support this idea. Benefits of such a test suite are enormous. Existing developers can contribute a lot by adding test snippets for the functions they create. Now they create such snippets anyway and throw them away.
I attached a preview of the posting I intend to post on *tons* of Windows devel newsgroups ("Call For Volunteers"). That way we might actually get hold of hundreds of Windows developers helping us implement a complete test suite (complete tests of up to 12000 Windows functions).
Not to mention the additional PR we might get out of this...
Let's ask for help only after the suite structure is more or less defined and we are able to give people something to work on.
Comments:
- Don't want to reinvent the wheel. Is there any existing test suite framework we can use? Sorry, I can't suggest any for C, but I'm very impressed with JUnit in Java. It is even OK if the framework is GPLed or LGPLed - I don't think any company will build a business on the test suite.
- I /personally/ prefer a command-line interface only for such a suite.
- it would be better if the suite printed summary information and information about failed tests only.
- make the test suite more "visible" to existing developers. Ask them to run the test suite before submitting a patch?
- I think the test suite will consist of a few separate applications, because different tests may have different requirements for GUI configuration, processes, etc. We need a way to run all the applications in one batch.
- define a variable which indicates whether the suite runs under Wine. Such an indicator can be used for Wine "white-box" testing.
- it would be great to have functionality to support output comparison. For some functionality it is easier to write tests that compare output instead of doing explicit checks (e.g. tests involving a few processes). The output can be redirected to a file and the files compared. If we use files, we need to store files for Wine and a few versions of Windows :-(
- the suite applications' size will be pretty big. Is it better to move the suite to a separate CVS tree?
- what about running the suite weekly (or daily) automatically and publishing the results to wine-devel?
- most developers on this list have access to one version of Windows. Is it difficult to create a "testing farm" with remote access to a few versions of Windows? This would help developers to test their code on a few platforms. Existing environments in the companies involved in the project could be used.
- I remember long ago there was a post on wine-devel about using Perl or a Perl-like language for unit testing. What is the current status of that project?
Thanks, Andriy Palamarchuk
On Wed, Dec 26, 2001 at 10:07:20AM -0800, Andriy Palamarchuk wrote:
Andreas Mohr wrote:
I guess we really should change our development model from trying tons of programs to *systematically* testing functions and Windows mechanisms now. If we can show everyone where stuff is failing, it might be a lot easier to attract new people.
I *completely* support this idea. Benefits of such a test suite are enormous. Existing developers can contribute a lot by adding test snippets for the functions they create. Now they create such snippets anyway and throw them away.
Ah, good ! :-) Exactly. A lot of people create test code e.g. for undocumented functions etc. By adding a *slight* bit more work, they'd have a test for this function.
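For illustration, a hypothetical sketch of what such a kept snippet might look like (the function under test and the structure are made up here, this is not actual winetest code):

    #include <windows.h>
    #include <string.h>

    /* hypothetical example: the kind of code people write to poke at an
     * API anyway, kept as a test function instead of being thrown away;
     * returns 1 on success, 0 on failure */
    static int test_GetSystemDirectoryA(void)
    {
        char buf[MAX_PATH];
        UINT len = GetSystemDirectoryA(buf, sizeof(buf));

        if (!len || len >= sizeof(buf)) return 0;  /* call failed outright */
        if (len != strlen(buf)) return 0;          /* returned length must match */
        return 1;
    }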
Comments:
- Don't want to reinvent the wheel. Is there any existing test suite framework we can use? Sorry, I can't suggest any for C, but I'm very impressed with JUnit in Java. It is even OK if the framework is GPLed or LGPLed - I don't think any company will build a business on the test suite.
Hmm, good question. I don't know of any, but we should probably do some more research. After all it's about 12000 functions, so we should get it right.
- I /personally/ prefer a command-line interface only for such a suite.
Yes, yes, yes. *Much* easier to use. That's why I did exactly that kind of thing.
- it would be better if the suite printed summary information and information about failed tests only.
Yep. Current output is something like:

    WINETEST:test:Loader_16:LoadModule:FAILED:01:[retval]
    WINETEST:test:Loader_16:LoadModule:FAILED:12:[retval]
    WINETEST:test:Loader_16:LoadModule:FAILED:13:[retval]

or, in case of success, only:

    WINETEST:test:Loader_16:LoadModule:OK
(yeah, I know, wishful thinking ;-)
This output is pretty useful, I think: It can be parsed *very* easily, and grepping for regressions is also pretty easy.
"WINETEST" exists to be able to distinguish this output from bogus Wine messages, "test" indicates that this is a test line output versus a warning message or similar output, "Loader_16" indicates testing of 16bit loader functionality, "LoadModule" - well... ;-) "FAILED" - obvious "01" - test number 01 failed. "[retval]" - contains the (wrong) return value of the function, if applicable.
BTW, I think having a test suite wouldn't be about hunting regressions at first: just look at my LoadModule16 example and you'll see that we're still quite far from hunting regressions *only*. My guess is that we'll be shocked at how many functions fail in how many ways.
- make the test suite more "visible" to existing developers. Ask them to run the test suite before submitting a patch?
No, I don't think so. I think it suffices if Alexandre runs the test suite before or after every large commit cycle. That way he'd be able to back out problematic patches. Asking developers to run the *whole* test suite for each patch could be pretty painful.
- I think the test suite will consist of a few separate applications, because different tests may have different requirements for GUI configuration, processes, etc. We need a way to run all the applications in one batch.
Exactly. Which is why I really prefer simple text output. IMHO it's the only way to go.
- define a variable which indicates whether the suite runs under Wine. Such an indicator can be used for Wine "white-box" testing.
Hmm, yes, that might be useful. We'd also need to pass a winver value to the test suite via the command line in order to let the test app adapt to different Windows environments (and thus also to different wine --winver settings !).
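Something along these lines, perhaps (a hypothetical sketch - the option syntax and version table are invented for illustration):

    #include <string.h>

    /* emulated Windows version, passed on the command line,
     * e.g. "loader_test --winver=nt40" */
    typedef enum { WV_WIN95, WV_WIN98, WV_NT351, WV_NT40 } winver_t;

    static winver_t parse_winver(int argc, char **argv)
    {
        int i;
        for (i = 1; i < argc; i++)
        {
            if (!strcmp(argv[i], "--winver=win95")) return WV_WIN95;
            if (!strcmp(argv[i], "--winver=win98")) return WV_WIN98;
            if (!strcmp(argv[i], "--winver=nt351")) return WV_NT351;
            if (!strcmp(argv[i], "--winver=nt40"))  return WV_NT40;
        }
        return WV_WIN98;  /* arbitrary default for this sketch */
    }

Tests could then branch on the returned value wherever behaviour differs between versions.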
- it would be great to have functionality to support output comparison. For some functionality it is easier to write tests that compare output instead of doing explicit checks (e.g. tests involving a few processes). The output can be redirected to a file and the files compared. If we use files, we need to store files for Wine and a few versions of Windows :-(
Hmm, I don't quite get what exactly you're talking about.
- the suite applications' size will be pretty big. Is it better to move the suite to a separate CVS tree?
Yep, I'd say so. It definitely has no business residing in the main Wine tree.
- what about running the suite weekly (or daily) automatically and publishing the results to wine-devel?
Good idea ! Might prove worthwhile.
- most developers on this list have access to one version of Windows. Is it difficult to create a "testing farm" with remote access to a few versions of Windows? This would help developers to test their code on a few platforms. Existing environments in the companies involved in the project could be used.
Hmm, why? The idea is that hundreds (or hopefully thousands?) of volunteer Windows developers create bazillions of test functions for specific API functions. That will happen on one specific Windows version only, of course, so we end up with a test for a specific API function on a specific Windows version. If a function behaves differently on different Windows versions, I guess people will notice immediately and fix the test function to support the differing behaviour. --> no problem at all.
- I remember long ago there was a post on wine-devel about using Perl or a Perl-like language for unit testing. What is the current status of that project?
Hmm. That'd be programs/winetest/, right ?
--- Andreas Mohr andi@rhlx01.fht-esslingen.de wrote:
On Wed, Dec 26, 2001 at 10:07:20AM -0800, Andriy Palamarchuk wrote:
Andreas Mohr wrote:
[... skipped ...]
- it would be better if the suite printed summary information and information about failed tests only

Yep. Current output is something like:

    WINETEST:test:Loader_16:LoadModule:FAILED:01:[retval]
    WINETEST:test:Loader_16:LoadModule:FAILED:12:[retval]
    WINETEST:test:Loader_16:LoadModule:FAILED:13:[retval]

or, in case of success, only:

    WINETEST:test:Loader_16:LoadModule:OK
I mean something like:

    ===================
    Run: 1234 tests
    Failed: 2
    Errors: 1

    Fail 1: <....>
    Fail 2: <....>
    Error 1: <....>
    ===================

In the example above, a failure means a condition check failed; an error means an exception occurred.
I suggest printing nothing for successful tests. At least this is what I am accustomed to with JUnit. We are not interested in successful tests, are we? ;-)
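A rough sketch of what I mean, in C (the counter and function names are invented for illustration):

    #include <stdio.h>

    static int run_count, fail_count, error_count;

    /* called after every executed test; successes stay silent */
    static void test_done(void) { run_count++; }

    /* called for a failed condition check */
    static void test_fail(const char *desc)
    {
        fail_count++;
        printf("Fail %d: %s\n", fail_count, desc);
    }

    /* printed once, after all tests have run */
    static void print_summary(void)
    {
        printf("===================\n");
        printf("Run: %d tests  Failed: %d  Errors: %d\n",
               run_count, fail_count, error_count);
        printf("===================\n");
    }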
This output is pretty useful, I think: It can be parsed *very* easily, and grepping for regressions is also pretty easy.
"WINETEST" exists to be able to distinguish this output from bogus Wine messages, "test" indicates that this is a test line output versus a warning message or similar output, "Loader_16" indicates testing of 16bit loader functionality, "LoadModule" - well... ;-) "FAILED" - obvious "01" - test number 01 failed. "[retval]" - contains the (wrong) return value of the function, if applicable.
Looks simple, and the output is really useful. I just don't see any reason to show information about successful tests. At least we can get a short form of the output by clipping all "OK" messages from your suggested form.
BTW, I think having a test suite wouldn't be about hunting regressions at first: just look at my LoadModule16 example and you'll see that we're still quite far from hunting regressions *only*. My guess is that we'll be shocked at how many functions fail in how many ways.
Agree, agree, agree... We can even use eXtreme Programming approaches :-) See http://xprogramming.com/ and other sites on the subject. I also like this article: http://members.pingnet.ch/gamma/junit.htm - I use JUnit extensively and like the whole idea.
- make the test suite more "visible" to existing developers. Ask them to run the test suite before submitting a patch?
No, I don't think so. I think it suffices if Alexandre runs the test suite before or after every large commit cycle. That way he'd be able to back out problematic patches. Asking developers to run the *whole* test suite for each patch could be pretty painful.
I don't see why running the unit tests is painful. I'd estimate that it would not take more than 5 minutes to test all 12000 W32 functions. We can also keep tests for slow or rarely changed areas of the API in a separate "complete" suite.
I think the test suite is for developers, not for Alexandre (I mean as a team leader :-) or QA. This is why I want to increase the "visibility" of the unit tests. Again, the developers will be more likely to contribute to the suite if they remember it exists.
I do not suggest enforcing unit test usage, because we'll always have developers/companies who don't want to do that. It would suffice to recommend checking, before submitting a patch, that we have the same (or accidentally fewer :-) number of failures as before, or to report any new bugs introduced. It is even OK to have an increased number of issues, as long as the developer consciously decides to break something. The compact test output I describe above will also help to quickly identify any changes in the unit test output.
We'd also need to pass a winver value to the test suite via the command line in order to let the test app adapt to different Windows environments (and thus also to different wine --winver settings !).
Sounds good.
- it would be great to have functionality to support output comparison. For some functionality it is easier to write tests that compare output instead of doing explicit checks (e.g. tests involving a few processes). The output can be redirected to a file and the files compared. If we use files, we need to store files for Wine and a few versions of Windows :-(
Hmm, I don't quite get what exactly you're talking about.
Example: I have a pretty big unit test for the SystemParametersInfo function. Part of the test is to ensure that the WM_SETTINGCHANGE window message is fired when necessary. I have a simple handler for the message which prints a confirmation when the message is received. I save the output when I run the tests under Windows and Wine and compare the two. Advantages: 1) simplicity, 2) I can see the contents of the failure. To do an explicit check I would need to set up some communication (a common variable, a step counter, etc.) between the message handler and the testing code, and if these two code snippets are in different processes I would need to use IPC to do the explicit check.
Ideally I'd like to print nothing to the screen - the developer does not need to see all this information. The information can be saved to a file, and I need to keep separate files for Wine and (a few versions of?) Windows.
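The message-handler side of that SystemParametersInfo test could look roughly like this (a sketch of the idea only, not my real test code):

    #include <windows.h>
    #include <stdio.h>

    /* print a confirmation line whenever WM_SETTINGCHANGE arrives;
     * the output is redirected to a file and later diffed against a
     * reference file captured under Windows */
    static LRESULT CALLBACK test_wnd_proc(HWND hwnd, UINT msg,
                                          WPARAM wparam, LPARAM lparam)
    {
        if (msg == WM_SETTINGCHANGE)
        {
            printf("WM_SETTINGCHANGE received, wparam=%u\n", (unsigned)wparam);
            return 0;
        }
        return DefWindowProcA(hwnd, msg, wparam, lparam);
    }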
- what about running the suite weekly (or daily) automatically and publishing the results to wine-devel?
Good idea ! Might prove worthwhile.
For this feature compact output is useful too.
- most developers on this list have access to one version of Windows. Is it difficult to create a "testing farm" with remote access to a few versions of Windows? This would help developers to test their code on a few platforms. Existing environments in the companies involved in the project could be used.
Hmm, why? The idea is that hundreds (or hopefully thousands?) of volunteer Windows developers create bazillions of test functions for specific API functions. That will happen on one specific Windows version only, of course, so we end up with a test for a specific API function on a specific Windows version. If a function behaves differently on different Windows versions, I guess people will notice immediately and fix the test function to support the differing behaviour. --> no problem at all.
I was not thinking about unit tests only. Sometimes I'd like to know how a different version of Windows behaves. The only option I have is to ask somebody who has such a version to run a test (honestly - up to now I have been too lazy to ask anybody :-). But you are right - it is not a big issue.
- I remember long ago there was a post on wine-devel about using Perl or a Perl-like language for unit testing. What is the current status of that project?
Hmm. That'd be programs/winetest/, right ?
Lazy me ;-)
Andriy Palamarchuk