Andriy Palamarchuk <apa3a@yahoo.com> writes:
Almost all of the complexity you are talking about is already implemented for us. Using the framework is very simple and requires nothing special from test writers: they only have to use Test::Simple (or Test::More) correctly, and they don't need to know anything about Test::Harness.
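For example, a complete test script is just something like this (a minimal sketch; the checks are placeholders, not real Wine tests):

    use strict;
    use Test::More tests => 3;   # declare how many checks will run

    ok( 1 + 1 == 2, "basic arithmetic" );
    is( lc("WINE"), "wine", "lc() lowercases" );
    isnt( "foo", "bar", "different strings differ" );

Test::More takes care of the plan, the numbering and the exit status; the test writer never touches Test::Harness directly.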
But adapting the framework to do what we want is IMO more work than simply reimplementing from scratch the few features that we actually need. We don't really gain anything by making the effort of reusing all that code, since we don't need most of it.
There are reasons to use Test::Harness:
- control of TODO tests - I really want to use this feature.
- control of SKIP tests - very useful for Wine-specific tests, choosing behavior depending on the Windows version, etc. I need this feature too.
Yes, I agree we want that. But I think these are easy to implement no matter what we use; we don't really need Test::Harness for them.
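To make that concrete, here is a rough sketch of how little code a hand-rolled check function with TODO/SKIP support needs (the option names and the sample checks are invented for illustration, not existing code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $failures = 0;

    # ok(): report a check; "todo" marks an expected failure,
    # "skip" records that the check was not run at all.
    sub ok {
        my ($cond, $name, %opt) = @_;
        if ($opt{skip}) {
            print "skip - $name ($opt{skip})\n";
        } elsif ($cond) {
            print "ok - $name\n";
        } elsif ($opt{todo}) {
            print "todo - $name (known failure, not counted)\n";
        } else {
            print "FAILED - $name\n";
            $failures++;
        }
    }

    ok( 1 == 1, "sanity check" );
    ok( 0, "SetSysColors roundtrip", todo => 1 );     # expected to fail
    ok( 1, "win95 metrics", skip => "needs win95" );  # not run here

    exit( $failures ? 1 : 0 );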
- we already need to manage the test output. I'd estimate the number of checks for my existing SystemParametersInfo unit test as 25 (number of implemented actions) * 10 (minimal number of checks per action) = 250-350 checks. We'll definitely have a huge number of tests. Why not pick a scalable approach from the very beginning?
For me your SystemParametersInfo test is one test, not 250. All I want to know is whether it passed or not, and if not, what caused the failure. I don't want to know the details of the 250 individual checks.
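Something along these lines would give exactly that; a hypothetical sketch (check() and the sample checks are made up for illustration):

    use strict;
    use warnings;

    my $failed = 0;

    # check(): quiet on success; on failure, report the cause and
    # remember that the test as a whole has failed.
    sub check {
        my ($cond, $what) = @_;
        return if $cond;
        print STDERR "SystemParametersInfo: $what\n";
        $failed++;
    }

    # Stand-ins for two of the ~250 individual checks:
    my $beep = 1;   # placeholder for a real SPI_GETBEEP result
    check( defined $beep, "SPI_GETBEEP returned no value" );
    check( $beep == 0 || $beep == 1, "SPI_GETBEEP returned a bogus value" );

    # One exit status for the whole thing: pass or fail.
    exit( $failed ? 1 : 0 );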
Suggested decisions from the discussion:
- unit tests are very important for the project
- a mixed C/Perl environment will be used for test development. Choosing the tool is a matter of personal preference.
I don't think I agree. For me the value of Perl is that it makes it trivial to take the suite over to Windows; but if half the tests are in C we lose this advantage, and then we might as well do everything in C.
- Test::Harness will be used to compile reports for test batches
I don't see the need. What I want is a make-like system that keeps track of which tests have been run, which ones need to be re-run because they have been modified, etc. I don't think there is any use in a report stating that 12.42% of the tests failed; that doesn't tell us anything.
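For example, something like this would do (a rough sketch; the tests/*.pl layout and the .ok stamp files are assumptions, not an agreed-upon design):

    use strict;
    use warnings;

    # Re-run only the tests whose script is newer than its .ok stamp,
    # the way make would with a stamp-file rule.
    foreach my $test (glob "tests/*.pl") {
        (my $stamp = $test) =~ s/\.pl$/.ok/;
        next if -e $stamp && -M $stamp < -M $test;   # stamp is newer: up to date
        if (system($^X, $test) == 0) {
            open my $fh, ">", $stamp or die "cannot touch $stamp: $!";
            close $fh;
        } else {
            print "FAILED: $test\n";
        }
    }

In a real tree the stamp handling would of course live in the makefiles, so that "make test" only re-runs what changed.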
- the unit tests will be a separate application
You cannot put the whole test suite in a single application, you need to split things up some way. A decent test suite will probably be several times the size of the code it is testing; you don't want that in a single application.
Alexandre, we explicitly did not agree on this decision yet. You preferred to have unit tests spread over the Wine directory tree. The main argument for this was the possibility of running subsets of tests.
No, the argument is modularity. The tests for kernel32 have nothing to do with the tests for opengl, and have everything to do with the kernel32 code they are testing. So it seems logical to put them together.
Then when you change something in kernel32 you can change the test that is right next to it, run make test in the kernel32 directory and have it re-run the relevant tests, and then do cvs diff dlls/kernel32 and get a complete diff including the code and the test changes.