Hello,
I noticed that not all dlls with unit tests are listed in the Main summary of test.winehq.org. The missing dlls are: iphlpapi, mapi32, msvcrtd, psapi, version.
Is there any reason for this?
Bye Stefan
Stefan Leichter Stefan.Leichter@camLine.com writes:
I noticed that not all dlls with unit tests are listed in the Main summary of test.winehq.org. The missing dlls are: iphlpapi, mapi32, msvcrtd, psapi, version.
Is there any reason for this?
Not that I know of. Submitting a patch, thanks for pointing it out.
On Thu, Jun 17, 2004 at 07:36:52PM +0200, Ferenc Wagner wrote:
Not that I know of. Submitting a patch, thanks for pointing it out.
Speaking of the test results, I've noticed the following problems: 1. Some errors reported in the summary don't get reported in the differences. For example, in today's results: http://test.winehq.org/data/200406171000/ If you look at the summary for the shlwapi:clist test, it reports 2 errors (in red) in the Win98 column. If you click on the "2", you are taken to the Win98 differences table (correctly), but there's no mention in there of the shlwapi:clist test.
2. The differences tables are inconsistent. They seem to prune lines that are all 0 (green), but not all of them: I can certainly still see lines that are all 0. Pruning those lines is not a bad idea, but if we do it, we should do it for all such lines, and we also shouldn't make the '0' in the summary a link, since it has nowhere to take us.
3. We are running multiple tests _per_ build, but only one is currently reported. Currently, it says: Main summary for build 200406171000, where '200406171000' is a link to the test. But since we have multiple downloads, it should be: Main summary for build 200406171000: [0] [1] [2] [3] (or some such), where [x] is a link to the download.
4. Each of the testers runs and submits 4 sets of tests for a build, but only one set is reflected in the results. This is related to the above point, and needs fixing.
"Dimitrie O. Paun" dpaun@rogers.com writes:
Speaking of the test results, I've noticed the following problems:
- Some errors reported in the summary don't get reported in the differences.
Good catch, fixed (*)
- The differences tables are inconsistent.
How can you say that?! Do you think WineHQ can't reliably run my program? :) You may find my logic wrong, though. The differences show, well, the differences. Lines are pruned iff
- all the reports are the same (char by char), AND
- no run failed, AND
- there were no errors (* from now on).
It doesn't work as well as I expected because lots of successful tests produce variable output. I thought about fixing them, but I could just as well change the pruning policy with much less work. Shall I simply drop the first condition?
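The three conditions above can be sketched as a small predicate. This is a hypothetical illustration, not the actual report generator; the function and tuple layout are made up for clarity.

```python
# Hypothetical sketch of the pruning rule described above.
# Each report for a given result line is modeled as a tuple:
#   (output_text, run_failed, error_count)
def prune_line(reports):
    """Return True iff this line can be dropped from the differences table."""
    outputs = [text for text, failed, errors in reports]
    all_identical = all(out == outputs[0] for out in outputs)  # char by char
    no_run_failed = not any(failed for _, failed, _ in reports)
    no_errors = all(errors == 0 for _, _, errors in reports)
    return all_identical and no_run_failed and no_errors
```

Dropping the first condition, as suggested, would mean pruning whenever no run failed and there were no errors, regardless of variable output.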
we also shouldn't make the '0' in the summary a link, since it has nowhere to take us.
Those links don't target specific lines, so they could as well stay, but I've got no objection against nuking them.
- We are running multiple tests _per_ build, but only one is currently reported. Currently, it says:
Main summary for build 200406171000 where '200406171000' is a link to the test. But since we have multiple downloads, it should be: Main summary for build 200406171000: [0] [1] [2] [3] (or somesuch), where [x] is a link to the download.
This is a more complicated issue which I haven't dared to touch yet. I planned to handle this as different builds, so there is no machinery in place for this variability. Neither for the download link, nor for the results. The Tag could be overloaded for this purpose, and the download links should go into the differences header for easy access, where they could also serve as a quick indication of the build type. Then we need short names or icons for the builds.
- Each of the testers run and submit 4 sets of tests for a build, but only one is reflected in the results. This is related to the above point, and needs fixing.
4? I thought it was 2: Kevin's and Paul's. Or do they both plan to build with MinGW and MSVC? Anyway, the reports should be present under the same tag, successively numbered. The annoying thing is that people started to number their tags, which leads to strange-looking headers. Oh well.
It would be possible to subdivide the columns under the tags by builds. We would need a short build id for this at a well-defined place, like in a new field in the report or maybe in the first line of the Build info: block.
On Fri, Jun 18, 2004 at 02:33:31AM +0200, Ferenc Wagner wrote:
"Dimitrie O. Paun" dpaun@rogers.com writes:
Good catch, fixed (*)
Nice, thanks for the quick fix.
- The differences tables are inconsistent.
How can you say that?! Do you think WineHQ can't reliably run my program? :) You may find my logic wrong, though. The differences show, well, the differences. Lines are pruned iff
- all the reports are the same (char by char), AND
- no run failed, AND
- there were no errors (* from now on).
It doesn't work as well as I expected because lots of successful tests produce variable output. I thought about fixing them, but I could just as well change the pruning policy with much less work. Shall I simply drop the first condition?
I think we should. We already have so much data that I don't think anyone will start digging through a row of '0' trying to figure out the differences. It would clean the output a bit.
we also shouldn't make the '0' in the summary a link, since it has nowhere to take us.
Those links don't target specific lines, so they could as well stay, but I've got no objection against nuking them.
Maybe we can do two things:
-- nuke the links for '0'
-- make the other links line-specific, if it's not too difficult (it shouldn't be: just create the id based on the OS and test name). Say you have an error in Win95, in the kernel:heap test. Just create the id diff_Win95_kernel_heap, so the link is simply: <a href="#diff_Win95_kernel_heap">1</a>. You would need to assign the same id to the <tr> element in the differences table, but that should be easy as you have all the information available.
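The id scheme suggested above is easy to sketch. The helper names below are invented for illustration; the real report generator is not shown in this thread.

```python
# Hypothetical helpers for the anchor scheme proposed above.
def diff_anchor(os_name, test_name):
    """Build an id like 'diff_Win95_kernel_heap' from an OS name and a
    'dll:test' pair, replacing ':' which is awkward in fragment ids."""
    return "diff_%s_%s" % (os_name, test_name.replace(":", "_"))

def summary_cell(os_name, test_name, errors):
    """Render a summary error count linking to the matching differences row."""
    return '<a href="#%s">%d</a>' % (diff_anchor(os_name, test_name), errors)
```

The differences table would then emit the same value as the id of the corresponding `<tr>`, so the summary count jumps straight to that row.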
- We are running multiple tests _per_ build, but only one is currently reported. Currently, it says:
Main summary for build 200406171000 where '200406171000' is a link to the test. But since we have multiple downloads, it should be: Main summary for build 200406171000: [0] [1] [2] [3] (or somesuch), where [x] is a link to the download.
This is a more complicated issue which I haven't dared to touch yet. I planned to handle this as different builds, so there is no machinery in place for this variability. Neither for the download link, nor for the results. The Tag could be overloaded for this purpose, and the download links should go into the differences header for easy access, where they could also serve as a quick indication of the build type. Then we need short names or icons for the builds.
Nothing too complicated. Just append the above index to the Tag, separated by ':'. So tests submitted by me would be DimiPaun:0, DimiPaun:1, DimiPaun:2, DimiPaun:3. If I submit the results from the _same_ URL twice (say from the '0' one), we can do: DimiPaun:0.0, DimiPaun:0.1. But I don't think we need this complication currently.
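Splitting such an extended tag back into its parts is straightforward. A minimal sketch, assuming the 'Name:index' and 'Name:index.resubmit' forms proposed above (the function name is hypothetical):

```python
# Hypothetical parser for tags of the form 'Name', 'Name:index',
# or 'Name:index.resubmit' as proposed in the thread.
def split_tag(tag):
    """Return (base, build_index, resubmit_index); missing parts are None."""
    base, _, rest = tag.partition(":")
    if not rest:
        return base, None, None
    index, _, resubmit = rest.partition(".")
    return base, int(index), int(resubmit) if resubmit else None
```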
- Each of the testers run and submit 4 sets of tests for a build, but only one is reflected in the results. This is related to the above point, and needs fixing.
4? I thought it was 2: Kevin's and Paul's. Or do they both plan to build with MinGW and MSVC? Anyway, the reports
For now each submits two: one as a .zip, one as a self-extracting archive. We hope to also have an MSVC build in the future.
should be present with the same tag, successively numbered. The annoying thing is that people started to number their tags, which leads to strange-looking headers. Oh well.
Not if we separate them as I suggested above. I don't think we currently allow ':' and '.' in the Tag.
It would be possible to subdivide the columns under the tags by builds. We would need a short build id for this at a well-defined place, like in a new field in the report or maybe in the first line of the Build info: block.
We can just assign numbers for that as I suggested above. As long as we list at the beginning what each number is (that is, the URL for it) we should be fine. Since they all should come from the same BUILD-ID, there should be no difference between them anyway (usually :).
"Dimitrie O. Paun" dpaun@rogers.com writes:
- We are running multiple tests _per_ build, but only one is currently reported. Currently, it says:
Main summary for build 200406171000 where '200406171000' is a link to the test. But since we have multiple downloads, it should be: Main summary for build 200406171000: [0] [1] [2] [3] (or somesuch), where [x] is a link to the download.
Just append the above index to the Tag, separated by ':'.
I think separate columns would serve us better by not repeating the overly long tags several times. The problem is rather: where to get these numbers (or anything) from? In principle it would be possible to mine them out of the archive URL, but Kevin and Paul have different naming conventions and the URL should be kept flexible. That's why I think we need a new piece of information stored in the reports. [See also at the end!]
For now each submits two: one as a .zip, one as a self-extracting archive.
I'm not sure I understand this. The above two should always produce the exact same results, shouldn't they? They contain the same executable after all...
We hope to also have a MSVC build in the future.
That would be a nice addition, really.
Not if we separate them as I suggested above. I don't think we currently allow ':' and '.' in the Tag.
Currently it's [-.a-zA-Z0-9]*.
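For concreteness, the rule quoted above can be expressed as a compiled pattern; extending it to allow ':' for the proposed build indices is a one-character change. This is an illustrative sketch, not the actual validation code.

```python
import re

# The current Tag character set, as quoted in the thread.
TAG_RE = re.compile(r"^[-.a-zA-Z0-9]*$")

# A hypothetical extension allowing ':' for build indices like 'DimiPaun:0'.
TAG_RE_EXTENDED = re.compile(r"^[-.:a-zA-Z0-9]*$")
```

So '.' is in fact already allowed in tags; only ':' would need to be added.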
It would be possible to subdivide the columns under the tags by builds. We would need a short build id for this at a well-defined place, like in a new field in the report or maybe in the first line of the Build info: block.
We can just assign numbers for that as I suggested above. As long as we list at the beginning what each number is (that is, the URL for it) we should be fine. Since they all should come from the same BUILD-ID, there should be no difference between them anyway (usually :).
Ah, finally I understand you. I think. So do you suggest that [1] and [2] may mean different things on different pages? That would be possible with what we have now, you are right. I can give it a shot, but only after having some sleep and maybe soccer, even.
On Fri, Jun 18, 2004 at 03:41:35AM +0200, Ferenc Wagner wrote:
I think separate columns would serve us better by not repeating the overly long tags several times.
Separate columns are a good idea if we can get them.
For now each submits two: one as a .zip, one as a self-extracting archive.
I'm not sure I understand this. The above two should always produce the exact same results, shouldn't they? They contain the same executable after all...
Yes, in theory. But before we drop the .zip, we wanted to make sure this is really the case. Unfortunately, we can't see the results for now :( Once we make sure the results are really the same, and the self-extracting archive is not creating any problems, we can drop the .zip. Meanwhile, it serves as a test case for doing multiple submissions for the same build id! :)
Ah, finally I understand you. I think. So do you suggest that [1] and [2] may mean different things on different pages? That would be possible with what we have now, you are right. I can give it a shot, but only after having some sleep and maybe soccer, even.
Right, indeed, that was what I had in mind: [1], [2], etc. are local to the page, not global. In other words, say that for build XXX we get a bunch of returns. We look into them, and it turns out that the set of distinct URLs that generated them is URLa, URLb, URLc, URLd. We just pick the most convenient order (alphabetical would probably be best, to avoid too much variability in how we order them), and we just list them:
[0] : URLa
[1] : URLb
[2] : URLc
[3] : URLd
Nothing that we shouldn't be able to do with what we have now, AFAIU.
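The per-page numbering described above amounts to sorting the distinct URLs and enumerating them. A minimal sketch, with an invented function name (URLa etc. stand in for real download URLs, as in the thread):

```python
# Hypothetical sketch of per-page URL numbering: indices are local to one
# build page, assigned in alphabetical order of the distinct URLs found
# in that build's reports.
def index_urls(report_urls):
    """Map each distinct URL to a small index, e.g. {'URLa': 0, 'URLb': 1}."""
    return {url: i for i, url in enumerate(sorted(set(report_urls)))}
```

Because the ordering is derived from the URLs themselves, the same set of reports always yields the same [0]..[n] labels on a given page.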