http://bugs.winehq.org/show_bug.cgi?id=23158
Summary: Enhance appdb search filters to allow searching for regressions
Product: WineHQ Apps Database
Version: unspecified
Platform: x86
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P2
Component: appdb-unknown
AssignedTo: wine-bugs@winehq.org
ReportedBy: dank@kegel.com
Is there an easy way to search for regressions in the appdb? The current search filters don't seem to do the trick - but with a small change, they could. Right now you can have multiple filters active, e.g.
Active filters:
    Rating = platinum
    Rating = garbage
    Wine version < 1.0.1
    Wine version > 1.2rc1
but the combination doesn't seem well-defined. If rating and Wine version were grouped, e.g.
Active filters:
    Rating = platinum and Wine version < 1.0.1
    Rating = garbage and Wine version > 1.2rc1
that would do fine. Or is there a better way?
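For illustration, here is a minimal sketch of how such grouped filters could be turned into a single query, with each group ANDed internally and the groups ORed together. The table and column names (testResults, rating, wineVersion) are invented for the example and are not the real AppDB schema:

    # Sketch only: builds (A AND B) OR (C AND D) instead of the current
    # flat AND of all active filters. Names are hypothetical.
    def build_filter_query(groups):
        """groups: list of filter groups; each group is a list of
        (column, operator, value) tuples that are ANDed together."""
        clauses, params = [], []
        for group in groups:
            parts = []
            for column, op, value in group:
                parts.append("%s %s %%s" % (column, op))
                params.append(value)
            clauses.append("(" + " AND ".join(parts) + ")")
        sql = ("SELECT DISTINCT appId FROM testResults WHERE "
               + " OR ".join(clauses))
        return sql, params

    # The example from this report; note that comparing Wine versions as
    # plain strings is itself a simplification and would need real
    # version-ordering logic.
    sql, params = build_filter_query([
        [("rating", "=", "platinum"), ("wineVersion", "<", "1.0.1")],
        [("rating", "=", "garbage"), ("wineVersion", ">", "1.2rc1")],
    ])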
--- Comment #1 from Alexander Nicolaysen Sørnes alex@thehandofagony.com 2010-08-04 07:07:04 ---
This is a really interesting idea.
Currently the filters are combined using logical AND only, so two different rating filters will always return no results.
A problem with using the AppDB to find regressions is that users have a lot of different setups, and frankly many of them are broken.
We could, however, look for test results submitted by the same user. This would give us a good indication of regressions.
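A minimal sketch of that same-user heuristic, assuming a hypothetical testResults table with a numeric ratingValue column (garbage=0 ... platinum=4); none of these names are the actual AppDB schema:

    # Sketch only: find pairs of results where the same user, on the same
    # app version, later reported a lower rating than before.
    import sqlite3

    def candidate_regressions(conn):
        query = """
            SELECT later.versionId, later.userId,
                   earlier.wineVersion, later.wineVersion,
                   earlier.ratingValue, later.ratingValue
            FROM testResults AS earlier
            JOIN testResults AS later
              ON later.versionId = earlier.versionId
             AND later.userId    = earlier.userId
            WHERE later.submitTime  > earlier.submitTime   -- a later test...
              AND later.ratingValue < earlier.ratingValue  -- ...with a worse rating
        """
        return conn.execute(query).fetchall()

    # e.g. candidate_regressions(sqlite3.connect("appdb.sqlite"))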
Hopefully I will have time to take a look at this after GSoC is over. :)
--- Comment #2 from Rosanne DiMesio dimesio@earthlink.net 2010-08-04 08:45:52 ---
(In reply to comment #1)
> A problem with using the AppDB to find regressions is that users have a
> lot of different setups, and frankly many of them are broken.
> We could, however, look for test results submitted by the same user. This
> would give us a good indication of regressions.
Not necessarily. There's another problem: false platinums given by users who didn't test every feature of a complex app and rated it platinum based solely on what they did test. They may come back later with a more thorough test and a lower rating that is not a regression.
If you need an example, start with me: http://appdb.winehq.org/objectManager.php?sClass=version&iId=2905&iT... There actually was a regression in that app, but it shows up in the drop from silver to bronze in 1.1.44. The drop from platinum to silver simply reflects the fact that I started testing more obscure features of the app.
I see a lot of false platinums submitted for Office apps based on testing only the basic functions, and I think that is probably true for other feature-rich apps like Photoshop, etc. I reduce the rating to silver when something I know doesn't work hasn't been tested, but not every maintainer does that.
--- Comment #3 from Alexander Nicolaysen Sørnes alex@thehandofagony.com 2010-08-04 09:54:46 ---
I totally agree that the false Platinum ratings are a big problem. However, we could check for applications that have been reduced from a rating >= Bronze to Garbage.
To combat the problem of false ratings, maybe we should let maintainers create a per-version checklist that is used when submitting test results. So in addition to the normal test results fields, a version could have checkboxes saying 'Multiplayer tested', 'Printing tested' etc. This way we could automatically reduce the rating if something has not been tested and is known not to work.
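As a rough sketch of that automatic reduction (capping at silver follows the practice described in comment #2; all names below are made up for illustration):

    # Sketch only: cap the rating at silver when a feature the maintainer
    # flagged as broken in this Wine version was left untested.
    RATING_ORDER = ["garbage", "bronze", "silver", "gold", "platinum"]

    def adjusted_rating(submitted, checklist, known_broken):
        """checklist: {feature: was_tested}; known_broken: features the
        maintainer marked as not working in the tested Wine version."""
        untested = [f for f in known_broken if not checklist.get(f)]
        if (untested and
                RATING_ORDER.index(submitted) > RATING_ORDER.index("silver")):
            return "silver"
        return submitted

    # e.g. platinum submitted, but multiplayer (known broken) was untested:
    assert adjusted_rating("platinum",
                           {"Printing tested": True,
                            "Multiplayer tested": False},
                           ["Multiplayer tested"]) == "silver"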
--- Comment #4 from Rosanne DiMesio dimesio@earthlink.net 2010-08-04 11:07:27 ---
(In reply to comment #3)
> I totally agree that the false Platinum ratings are a big problem.
> However, we could check for applications that have been reduced from a
> rating >= Bronze to Garbage.
That would miss a lot of real regressions that don't render an app completely unusable. It would also produce false positives for problems that are due to some other change in the user's system, such as upgrading to the kernel that killed World of Warcraft.
I'm not arguing against doing it, mind you; I just want to point out that no matter how you do it, the results of any such search will always have a large margin of error. As to whether "good enough" accuracy can be achieved, I suppose that depends on your purpose.
> To combat the problem of false ratings, maybe we should let maintainers
> create a per-version checklist that is used when submitting test results.
> So in addition to the normal test results fields, a version could have
> checkboxes saying 'Multiplayer tested', 'Printing tested' etc. This way we
> could automatically reduce the rating if something has not been tested and
> is known not to work.
Possibly, but that wouldn't help with unmaintained apps or apps with less-conscientious maintainers, and for complex apps like Word or Photoshop, the checklist could get unbearably long.
One idea I had would be to have the submission system check the version's bug links for open, confirmed bugs, and not allow a platinum rating if there are any affecting the Wine version tested. But I don't know how difficult that would be to implement.
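In pseudocode terms, that check might look like the sketch below; the bug fields and the set of "open, confirmed" statuses are assumptions for illustration, not the real Bugzilla/AppDB integration:

    # Sketch only: refuse a platinum rating while any linked bug is still
    # open past UNCONFIRMED and affects the Wine version being tested.
    OPEN_CONFIRMED = {"NEW", "ASSIGNED", "REOPENED"}

    def platinum_allowed(linked_bugs, wine_version):
        """linked_bugs: iterable of dicts with 'status' and
        'affected_versions' keys (both assumed for this sketch)."""
        for bug in linked_bugs:
            if (bug["status"] in OPEN_CONFIRMED
                    and wine_version in bug["affected_versions"]):
                return False
        return True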