Chris wrote:
Perhaps the appdb should check that the "Installs?" and "Runs?" columns of a particular test both say "Yes" before it accepts "Platinum" in the "Rating" column?
Exactly, we need some logic to ensure ratings are correct. I think the fundamental change is that we should remove maintainer ratings entirely and let the ratings be driven by the test results.
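For concreteness, here is a minimal sketch of that consistency check in Python, assuming a test report carries "installs", "runs", and "rating" fields; the field names are hypothetical stand-ins, and the real appdb schema may differ:

    # Reject a "Platinum" rating when the same test reports a failure
    # to install or run. Field names are illustrative, not the real schema.
    def rating_is_consistent(report: dict) -> bool:
        if report["rating"] == "Platinum":
            return report["installs"] == "Yes" and report["runs"] == "Yes"
        return True

    # This submission would be rejected: it claims Platinum but failed to install.
    bad = {"installs": "No", "runs": "Yes", "rating": "Platinum"}
    assert not rating_is_consistent(bad)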
I disagree. I'm afraid the more logic you add to the appdb, the more annoying it will be for maintainers to use. Worse yet, the logic won't really achieve your goal: reviewers who disagree with it can game it to produce whatever ratings they want.
I would rather see the ratings simplified and better defined; that would make it easier for maintainers to stick to standard meanings.
For instance, we should be clear about what to do when there are multiple differing ratings. Should the best rating win, or should we go with the most recent test results? (I prefer going with the results from the most recent version of Wine, since that's closest to what the average user will run.)
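To illustrate the difference, here is a small sketch of the "most recent Wine version wins" policy, assuming each test result is a (wine_version, rating) pair; the naive version parsing is purely illustrative:

    # Pick the rating from the test run against the newest Wine version.
    # Assumes dotted numeric version strings like "1.6.2"; real version
    # strings may need more careful parsing.
    def effective_rating(results: list[tuple[str, str]]) -> str:
        def version_key(result):
            version, _rating = result
            return tuple(int(part) for part in version.split("."))
        return max(results, key=version_key)[1]

    results = [("1.4.1", "Gold"), ("1.6.2", "Silver"), ("1.2", "Platinum")]
    print(effective_rating(results))  # -> "Silver", from the newest test

Under a "best rating wins" policy the same data would yield Platinum, which is exactly the kind of ambiguity a clearly stated rule would remove.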
And we don't really need four levels; three should do, and they can be defined very simply:
Gold: installs and runs as you would expect it to in Microsoft Windows. Good enough to rely on every day, with at most minor cosmetic problems.
Silver: installs and runs well enough to be usable, though some less-important features may not work right.
Bronze: installs and runs and can accomplish some portion of its fundamental mission, but has enough bugs that it's not really dependable, or requires special configuration, workarounds, or third-party tweaks to function.
Implicit in the above is that gold and silver should not require any tweaks or hacks. Gold and silver apps are good enough for ordinary users to use; bronze apps are those which only the dedicated would put up with.
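Even without the appdb enforcing anything, those definitions are crisp enough to write down as a simple decision function. A sketch, with hypothetical boolean answers standing in for whatever the test form actually asks:

    # Map test answers onto the three proposed levels. The parameters are
    # hypothetical; they stand in for questions a test form might ask.
    def classify(installs: bool, runs: bool, works_like_windows: bool,
                 dependable: bool, needs_tweaks: bool) -> str | None:
        if not (installs and runs):
            return None      # fails the baseline common to all three levels
        if needs_tweaks or not dependable:
            return "Bronze"  # only the dedicated would put up with this
        if works_like_windows:
            return "Gold"    # at most minor cosmetic problems
        return "Silver"      # usable, but some lesser features misbehave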
With simple definitions like that, we don't need logic to enforce the ratings. - Dan