http://bugs.winehq.org/show_bug.cgi?id=23158
--- Comment #2 from Rosanne DiMesio <dimesio@earthlink.net> 2010-08-04 08:45:52 ---
(In reply to comment #1)
> A problem with using the AppDB to find regressions is that users have a lot of different setups, and frankly many of them are broken.
> We could, however, look for test results submitted by the same user. This would give us a good indication of regressions.
Not necessarily. There's another problem: false platinums from users who didn't test every feature of a complex app and rated it platinum based solely on what they did test. They may come back later with a more thorough test and a lower rating that does not reflect a regression.
If you need an example, start with me: http://appdb.winehq.org/objectManager.php?sClass=version&iId=2905&iT... There actually was a regression in that app, but it shows up in the drop from silver to bronze in 1.1.44. The drop from platinum to silver simply reflects the fact that I started testing more obscure features of the app.
I see a lot of false platinums submitted for Office apps based on testing only the basic functions, and I think that is probably true for other feature-rich apps like Photoshop, etc. I reduce the rating to silver when something I know doesn't work hasn't been tested, but not every maintainer does that.
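For illustration, the "same user" heuristic from comment #1 could be sketched like this (the function, data shape, and names here are hypothetical, not the actual AppDB schema). As the history above shows, a flagged drop may just reflect more thorough testing, so the output is only a list of candidates to review, not confirmed regressions:

```python
# Sketch of the "same submitter" idea: flag rating drops between
# consecutive Wine versions tested by the same AppDB user.
RATINGS = {"garbage": 0, "bronze": 1, "silver": 2, "gold": 3, "platinum": 4}

def candidate_regressions(results):
    """results: list of (user, wine_version, rating) tuples, ordered by
    submission date. Returns (user, old_version, new_version) pairs
    where the same user's rating dropped."""
    last = {}      # user -> (version, rating) of their previous submission
    flagged = []
    for user, version, rating in results:
        if user in last:
            prev_version, prev_rating = last[user]
            if RATINGS[rating] < RATINGS[prev_rating]:
                flagged.append((user, prev_version, version))
        last[user] = (version, rating)
    return flagged

# Hypothetical history modeled on the example above: the first drop is
# flagged even though it only reflects testing more obscure features;
# only the second drop is the actual regression.
history = [
    ("dimesio", "1.1.42", "platinum"),
    ("dimesio", "1.1.43", "silver"),   # more thorough test, not a regression
    ("dimesio", "1.1.44", "bronze"),   # the real regression
]
print(candidate_regressions(history))
# → [('dimesio', '1.1.42', '1.1.43'), ('dimesio', '1.1.43', '1.1.44')]
```

Both drops come out of the heuristic looking identical, which is exactly the false-positive problem described in this comment.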