I know regression testing isn't as interesting as doing real development, so I apologize for once again spamming the list with regression questions...
As I've been writing tests (only for the last week or so), I realized that as we accumulate a lot more tests, it will be very difficult to keep track of what is being tested and what is not. I'd like to propose keeping, for each DLL, a list of which functions are being tested, which file tests them, and any comments on the current tests. Currently, there is no good way to get this information. I may use several functions in a directed test that I am not actually testing (for instance, I've found it necessary to retrieve the page size using GetSystemInfo, but my tests have nothing to do with that function, so writing a directed test for it is a task for another day). Thus simply grepping for which functions are used in a test is not sufficient. Also, I have found that some functions, or even specific aspects of a function, are very difficult to test, so I've been leaving these out so that people more skilled than myself can take a crack at them later. (A good example is testing inheritance with CreateThread. That is beyond my capabilities at the moment, at least until I can find a good way to test CreateProcess... this would be much easier if Windows had something equivalent to 'fork'.) In any case, I've tried to be thorough about commenting on what I do and don't test, but it would be a lot more convenient if this information were more easily parsed than hunting for my comments interspersed through the tests.

So, does anyone think that creating an easily parsable list of the state of the current tests is a reasonable thing to do, and would a simple file in each /tests directory with this information be good enough? If so, I can add one with my next test, and update it with what I've done so far.
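Just to make the idea concrete, here is one purely hypothetical sketch of what such a file might contain; the file names, columns, and status notes below are made up for illustration, and the exact format is entirely up for discussion:

    # function        test file       comments
    GetSystemInfo     virtual.c       only used to get the page size, not actually tested
    CreateThread      thread.c        creation tested; inheritance not tested yet
    CreateProcess     (none)          untested, no good approach found so far

Something this simple would already be grep- and script-friendly, and the comment column leaves room for noting what has deliberately been left out and why.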
In general, while Francois' presentation is a good place to get started, I think it would be a lot easier if there were a document on winehq with recommendations on how to build, test, and document tests. Lowering the difficulty threshold is more likely to draw more people into writing tests.
Thanks,
.Geoff