On Fri, Jun 18, 2004 at 02:33:31AM +0200, Ferenc Wagner wrote:
"Dimitrie O. Paun" dpaun@rogers.com writes:
Good catch, fixed (*)
Nice, thanks for the quick fix.
- The differences tables are inconsistent.
How can you say that?! Do you think WineHQ can't reliably run my program? :) You may find my logic wrong, though. The differences show, well, the differences. Lines are pruned iff
  - all the reports are the same (char by char) AND
  - no run failed AND
  - there were no errors (* from now).
It doesn't work as well as I expected, because lots of successful tests produce variable output. I thought about fixing them, but I could just as well change the pruning policy with much less work. Shall I simply drop the first condition?
I think we should. We already have so much data that I don't think anyone will start digging through a row of '0' trying to figure out the differences. It would clean the output a bit.
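A minimal sketch of the two policies, assuming per-run output lines and per-run failure/error flags (Python; all names here are hypothetical, not the actual script's):

    def prune_line(lines, failed, errors):
        # current policy: identical output AND no failure AND no error
        identical = all(l == lines[0] for l in lines)
        return identical and not any(failed) and not any(errors)

    def prune_line_proposed(failed, errors):
        # proposed policy: drop the first (identical-output) condition
        return not any(failed) and not any(errors)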
We also shouldn't make the '0' in the summary a link, since it has nowhere to take us.
Those links don't target specific lines, so they might as well stay, but I've got no objection to nuking them.
Maybe we can do two things:
  -- nuke the links for '0'
  -- make the other things line-specific, if it's not too difficult

It shouldn't be; just create the 'id' based on the OS and test name. Say you have an error in Win95, in the kernel:heap test. Just create the id diff_Win95_kernel_heap, so the URL would be:

    <a href="#diff_Win95_kernel_heap">1</a>

You would need to assign the same id to the <tr> element in the differences table, but that should be easy, as you have all the information available.
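A rough sketch of the id generation (Python, hypothetical names; the ':' in the test name is mapped to '_' to match the example above):

    def diff_anchor(os_name, test_name):
        # diff_anchor("Win95", "kernel:heap") -> "diff_Win95_kernel_heap"
        return "diff_%s_%s" % (os_name, test_name.replace(":", "_"))

    anchor = diff_anchor("Win95", "kernel:heap")
    summary_cell = '<a href="#%s">1</a>' % anchor   # link in the summary table
    diff_row     = '<tr id="%s">' % anchor          # matching row in the differences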
- We are running multiple tests _per_ build, but only one is currently reported. It says:
    Main summary for build 200406171000

where '200406171000' is a link to the test. But since we have multiple downloads, it should be:

    Main summary for build 200406171000: [0] [1] [2] [3]

(or somesuch), where [x] is a link to the download.
This is a more complicated issue which I haven't dared to touch yet. I planned to handle this as different builds, so there is no machinery in place for this variability. Neither for the download link, nor for the results. The Tag could be overloaded for this purpose, and the download links should go into the differences header for easy access, where they could also serve as a quick indication of the build type. Then we need short names or icons for the builds.
Nothing too complicated. Just append the above index to the Tag, separated by ':'. So tests submitted by me would be:

    DimiPaun:0 DimiPaun:1 DimiPaun:2 DimiPaun:3

If I submit the results from the _same_ URL twice (say from the '0' one), we can do:

    DimiPaun:0.0 DimiPaun:0.1

But I don't think we need this complication currently.
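A sketch of the tag composition (Python, hypothetical names):

    def submission_tag(tag, download, resubmission=None):
        # submission_tag("DimiPaun", 2)    -> "DimiPaun:2"
        # submission_tag("DimiPaun", 0, 1) -> "DimiPaun:0.1"
        t = "%s:%d" % (tag, download)
        if resubmission is not None:
            t += ".%d" % resubmission
        return t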
- Each of the testers runs and submits 4 sets of tests for a build, but only one is reflected in the results. This is related to the above point, and needs fixing.
4? I thought it was 2: Kevin's and Paul's. Or do they both plan to build with MinGW and MSVC? Anyway, the reports
For now each submits two: one as a .zip, one as a self-extracting archive. We hope to also have an MSVC build in the future.
should be present with the same tag, successively numbered. The annoying thing is that people started to number their tags, which leads to strange-looking headers. Oh well.
Not if we separate them as I suggested above. I don't think we currently allow ':' and '.' in the Tag.
It would be possible to subdivide the columns under the tags by builds. We would need a short build id for this at a well-defined place, like a new field in the report or maybe the first line of the "Build info:" block.
We can just assign numbers for that, as I suggested above. As long as we list at the beginning what each number is (that is, the URL for it) we should be fine. Since they all should come from the same BUILD-ID, there should be no difference between them anyway (usually :).
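Decoding such a tag back into the base tag and its download URL would then be trivial; a sketch (Python, hypothetical names), assuming the URLs are listed at the top of the page in index order:

    def split_tag(tag, download_urls):
        # split_tag("DimiPaun:1", urls) -> ("DimiPaun", urls[1])
        base, _, idx = tag.partition(":")
        if not idx:
            return base, None  # plain tag: single-build submission
        return base, download_urls[int(idx.split(".")[0])]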