"Dimitrie O. Paun" dimi@intelliware.ca writes:
On Mon, 22 Sep 2003, Ferenc Wagner wrote:
"Dimitrie O. Paun" dpaun@rogers.com writes:
-- for the ME case, how can we have have some results (up to kernel32.dll:codepage) and then have no results? Doesn't that mean that they failed?
No, this means that when the console test hung the tester killed the DOS box and thus did not run further tests. Jakob might implement a timeout or we could explain more.
Right, this was my point: it's more of a failure, than not having run the test (displayed as "."). Maybe we should say "timeout" for these?
For the console test, yes. I just did not care, because I was promised a better run.
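A per-test timeout would be easy enough with plain Win32 calls.
Here is a minimal sketch, assuming the runner starts each test as a
child process; the command line handling and the "timeout" marker
are my assumptions, not Jakob's actual code:

#include <stdio.h>
#include <windows.h>

/* Run one test with a time limit instead of letting a hung console
 * test block the whole run.  Sketch only: the real winetest runner
 * may launch and report tests differently. */
static int run_test_with_timeout(const char *cmdline, DWORD limit_ms)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    char cmd[MAX_PATH];

    lstrcpynA(cmd, cmdline, sizeof(cmd)); /* CreateProcess may modify it */
    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL,
                        &si, &pi))
        return -1;

    if (WaitForSingleObject(pi.hProcess, limit_ms) == WAIT_TIMEOUT)
    {
        TerminateProcess(pi.hProcess, 1); /* nobody has to kill the
                                           * DOS box by hand */
        puts("timeout");                  /* report it as such */
    }
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}

The hung test would then show up as "timeout" while the rest of the
run still completes.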
>>> -- How do you assign the name to different reports for the same
>>> OS?
>>
>> It is the name of the directory the data comes from. In
>> principle, testers could provide their tags.
>
> How do you make sure they don't collide?
We could put up a little CGI script which asks for a tag and makes
sure it is unique, or simply append a number if the submission is
done by email. I do not expect too many concurrent submissions
anyway...
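The append-a-number variant could be as small as this sketch; the
reports/ directory layout and the tag format are assumptions on my
part:

#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>

/* Derive a unique report directory from the submitted tag by
 * appending a number on collision.  Sketch only: the real
 * submission paths may look different. */
static int make_unique_dir(const char *tag, char *out, size_t outlen)
{
    int n;

    snprintf(out, outlen, "reports/%s", tag);
    for (n = 2; mkdir(out, 0755) != 0; n++)
    {
        if (errno != EEXIST) return -1;  /* real error, give up */
        snprintf(out, outlen, "reports/%s-%d", tag, n);
    }
    return 0;  /* 'out' now names a freshly created directory */
}

Since mkdir() is atomic, the collision check holds up even if two
submissions do come in at once.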
> That stuff is useless anyway, so I think we should just drop it
> altogether.
I see your point, but I would like to make sure it is easy to
pinpoint a given submission. Names are useful for that.

But I have a real problem here: which results should go into the
main summary when there are several reports for the same version?
The one submitted first? The one with the most (or fewest)
successes? Or maybe a mixture? Currently it is the one with no tag,
which is in practice the first submission.
>>> -- Also, in the "XXX differences" section, shouldn't we have the
>>> exact version & ServicePack displayed between the OS name and
>>> the reporter link, as we are dealing with only one instance?
>>
>> Sure, but I do not have the information. Noted, though.
>
> How come? I thought Jakob includes a dump of the OS version
> structure, like so: [...]
Yes, he does. But except for one result submitted by Jakob, all the results are from my .bat-driven zip file.
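The dump itself got lost in the quoting above; for reference, such
a dump would come from GetVersionEx(), something like this (the
output format is my guess, not necessarily what Jakob prints):

#include <stdio.h>
#include <windows.h>

/* Dump the OS version structure, roughly as winetest might do it.
 * The field selection and formatting here are guesses. */
static void dump_os_version(void)
{
    OSVERSIONINFOA ver;

    ver.dwOSVersionInfoSize = sizeof(ver);
    if (!GetVersionExA(&ver)) return;

    printf("dwMajorVersion=%lu\n", ver.dwMajorVersion);
    printf("dwMinorVersion=%lu\n", ver.dwMinorVersion);
    printf("dwBuildNumber=%lu\n",  ver.dwBuildNumber);
    printf("dwPlatformId=%lu\n",   ver.dwPlatformId);
    printf("szCSDVersion=%s\n",    ver.szCSDVersion); /* service pack */
}

The szCSDVersion field carries exactly the ServicePack string asked
for above, so once all reports come from the real winetest binary
that column can be filled in.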
Feri.