On Mon, Apr 28, 2008 at 08:37:26PM +0100, Robert Shearman wrote:
> This should aid in testing more-obscure parts of the parser that aren't necessarily valid when using RPC (and hence don't make sense being put in dlls/rpcrt4/tests/server.idl).
Obviously this is a good idea. I tried doing something similar a while ago, but it didn't get accepted on the first try and I never asked why, so I'll ask why now. See here:
http://winehq.org/pipermail/wine-patches/2006-September/030438.html
It has some benefits over the approach you're proposing, so it's worth bringing up. One is that it can run the tests against MIDL, so we know the tests themselves are correct (ok, MIDL has been known to do things differently than the spec, but we may want to copy MIDL's behaviour anyway, and in any case it's easier to automatically validate the tests on MIDL than to do so by hand or by inspection).
Another is versatility in what it can test. Since the tests are shell scripts, not only can it test the success/failure of a parse, but it can check that all the correct files were created, and it can perform further tests on the output (e.g., if we have an import in the IDL file, we can grep the output to make sure a #include was generated for the import).
It already supports "todo"s, and the test scripts look similar to the usual Wine tests written in C.
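(As a purely illustrative sketch, a test script in the style described above might look something like the following; the file names, the default header name, and the exact widl flags are assumptions here, not code from the linked patch.)

#!/bin/sh
# Hypothetical test script: run widl on a small IDL file containing an
# import and check that the generated header pulls the imported header in.
# File names and widl options are assumptions, not taken from the patch.

WIDL=${WIDL:-./widl}

cat > import_test.idl <<'EOF'
import "imported.idl";

[uuid(12345678-1234-1234-1234-123456789abc)]
interface ITest
{
    void Dummy(void);
}
EOF

cat > imported.idl <<'EOF'
typedef int dummy_t;
EOF

# The parse itself must succeed (assuming -h writes import_test.h)...
"$WIDL" -h import_test.idl || { echo "FAIL: widl rejected import_test.idl"; exit 1; }

# ...the header must actually have been written...
test -f import_test.h || { echo "FAIL: no header was generated"; exit 1; }

# ...and it must contain a #include for the imported file.
if grep -q '#include "imported.h"' import_test.h; then
    echo "ok: import produced the expected #include"
else
    echo "FAIL: missing #include for imported.idl"
    exit 1
fi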
On the other hand your method has some advantages over mine. You already provide a "make test" framework, and including the expected result of the compilation in the IDL file is cleaner.
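(Again just as a sketch, since the convention used by the proposed patch isn't shown here: the expected result could be embedded as a marker comment in the IDL file and read back by the driver. The "expect:" syntax below is invented for illustration.)

# Hypothetical driver fragment: pull an "expect:" marker out of the IDL
# file itself and compare it with what widl actually does.  The marker
# syntax and the driver structure are invented for illustration.
#
# The IDL file would start with a line such as:
#     /* expect: failure */

idl=$1
expected=$(sed -n 's|^/\* expect: \(.*\) \*/$|\1|p' "$idl")

if ./widl -h "$idl" >/dev/null 2>&1; then
    actual=success
else
    actual=failure
fi

if [ "$actual" = "$expected" ]; then
    echo "ok: $idl ($expected, as expected)"
else
    echo "FAIL: $idl: expected $expected, widl reported $actual"
    exit 1
fi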
Alexandre may very well disagree with what I see as valid points (like testing on MIDL), but these are my suggestions anyway. Hope they're useful.
Dan Hipschman wrote:
> On Mon, Apr 28, 2008 at 08:37:26PM +0100, Robert Shearman wrote:
>> This should aid in testing more-obscure parts of the parser that aren't necessarily valid when using RPC (and hence don't make sense being put in dlls/rpcrt4/tests/server.idl).
> Obviously this is a good idea. I tried doing something similar a while ago, but it didn't get accepted on the first try and I never asked why, so I'll ask why now. See here:
> http://winehq.org/pipermail/wine-patches/2006-September/030438.html
> It has some benefits over the approach you're proposing, so it's worth bringing up. One is that it can run the tests against MIDL, so we know the tests themselves are correct (ok, MIDL has been known to do things differently than the spec, but we may want to copy MIDL's behaviour anyway, and in any case it's easier to automatically validate the tests on MIDL than to do so by hand or by inspection).
The above-linked patch of yours is actually what inspired me to add automated testing of widl. I don't see any reason why we can't run MIDL with the same framework. In fact, I think the framework I am proposing could handle that case better, because it can expect specific errors/warnings that differ between MIDL and widl (and can cope with, for example, MIDL failing where widl succeeds because of a bug in MIDL).
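(A rough sketch of what per-compiler expectations could look like in such a framework; the helper, the variable names, and the pass/fail markers are assumptions, not code from either patch.)

# Hypothetical fragment: one test file, but a separate expected outcome for
# each compiler, so a known MIDL bug doesn't force us to mark the widl side
# of the test as failing too.
widl_expect=pass    # widl should accept this construct
midl_expect=fail    # MIDL is known to reject it (a bug we don't copy)

check() {
    name=$1 expect=$2
    shift 2
    if "$@" >/dev/null 2>&1; then result=pass; else result=fail; fi
    if [ "$result" = "$expect" ]; then
        echo "ok: $name: $result (as expected)"
    else
        echo "FAIL: $name: $result, expected $expect"
    fi
}

check widl "$widl_expect" ./widl -h odd_construct.idl
check midl "$midl_expect" midl.exe odd_construct.idl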
> Another is versatility in what it can test. Since the tests are shell scripts, not only can it test the success/failure of a parse, but it can check that all the correct files were created, and it can perform further tests on the output (e.g., if we have an import in the IDL file, we can grep the output to make sure a #include was generated for the import).
> It already supports "todo"s, and the test scripts look similar to the usual Wine tests written in C.
Absolutely. The flexibility of your system, being able to test the contents of the generated files and to perform other checks for specific tests, is a definite bonus.
> On the other hand your method has some advantages over mine. You already provide a "make test" framework,
I don't see any reason why your framework can't also be plugged into "make test".
> and including the expected result of the compilation in the IDL file is cleaner.
Again, that could also be done using your approach. It's just that it was necessary with my approach.
> Alexandre may very well disagree with what I see as valid points (like testing on MIDL), but these are my suggestions anyway. Hope they're useful.
The way I see it, we have a choice between a framework that uses the makefile to run individual tests of the parser without checking the content of the output, and a framework that runs every test in one go but is capable of checking the generated files. The only technical advantage I can think of for the former is that it allows the tests to be run in parallel, but I don't know how much of a benefit that is to Alexandre (who will be the one running "make test" the most).
Robert Shearman <rob@codeweavers.com> writes:
> The way I see it, we have a choice between a framework that uses the makefile to run individual tests of the parser without checking the content of the output, and a framework that runs every test in one go but is capable of checking the generated files. The only technical advantage I can think of for the former is that it allows the tests to be run in parallel, but I don't know how much of a benefit that is to Alexandre (who will be the one running "make test" the most).
We definitely have to be able to run tests individually from make, so there can't be just a single script to run them all.
Still, it seems to me that most of these tests can just as well be done in the existing framework, as part of the rpcrt4 test for instance. This way we can not only make sure that the code compiles, but also that the generated code builds and works the way it should.
The only thing that can't be tested that way is obviously the code that is expected to fail to build, and for this something like Rob's framework would work fine, even though I'm not quite convinced that we care that much about getting the failure cases exactly right.
Alexandre Julliard wrote:
> Robert Shearman <rob@codeweavers.com> writes:
>> The way I see it, we have a choice between a framework that uses the makefile to run individual tests of the parser without checking the content of the output, and a framework that runs every test in one go but is capable of checking the generated files. The only technical advantage I can think of for the former is that it allows the tests to be run in parallel, but I don't know how much of a benefit that is to Alexandre (who will be the one running "make test" the most).
> We definitely have to be able to run tests individually from make, so there can't be just a single script to run them all.
> Still, it seems to me that most of these tests can just as well be done in the existing framework, as part of the rpcrt4 test for instance. This way we can not only make sure that the code compiles, but also that the generated code builds and works the way it should.
I don't really like the idea of mixing tests of two different components into the same file. Also, when developing or trying to debug a regression I prefer to work with simpler IDL files (i.e. ones testing a particular type of statement in a few ways) rather than having everything in one file and having to work out which statement broke what.
> The only thing that can't be tested that way is obviously the code that is expected to fail to build, and for this something like Rob's framework would work fine, even though I'm not quite convinced that we care that much about getting the failure cases exactly right.
I see the failure cases as being important for two reasons:
1. The line number is reported correctly.
2. The error is triggered by the right part of the statement and therefore makes sense to the user.

Of course, the failures themselves are important so that incorrect constructs are detected before:
1. widl crashes during generation of the output files;
2. widl generates files that can't be compiled, or that compile with warnings; or
3. widl generates files that compile correctly but crash or raise an exception at runtime.
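(To make the line-number point concrete, a failure-case check could look roughly like the sketch below; the file name, the "file:line:" diagnostic format, and the grep-based check are all assumptions rather than anything from the actual patches.)

# Hypothetical failure-case test: the IDL file is deliberately broken on a
# known line, and we check both that widl rejects it and that the diagnostic
# points at that line.  The "file:line:" format is an assumption about how
# widl reports parse errors.
idl=bad_attr.idl
bad_line=7

if ./widl -h "$idl" > widl.log 2>&1; then
    echo "FAIL: widl accepted $idl, but a parse error was expected"
    exit 1
fi

if grep -q "$idl:$bad_line:" widl.log; then
    echo "ok: error reported on line $bad_line of $idl"
else
    echo "FAIL: error not attributed to line $bad_line of $idl"
    cat widl.log
    exit 1
fi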