The nice reports in recent WWN issues showing changes in appdb ratings make me want to use the appdb itself to see which apps have recently fallen in rating.
Following past appdb practice, one would implement that by adding a new "Browse Regressions" menu item on the left, below Browse Apps, Browse Newest Apps, Downloadable Apps, and Browse Apps by Rating.
I expect this will become increasingly useful as we approach 1.0. How hard would it be?
(Also, it might be nice to have an "advanced browse" that lets you filter and sort by date, rating, and rating change. Once that works well, we might not need all those other specific browse commands...)
- Dan
Personally, I don't trust appdb regressions much. The main issue I see is that the appdb rating mechanism is not good. A lot of users don't know how to rate apps properly and rate something as Gold even when half of its features don't work. They even rate it Gold when they have to copy half a Windows registry and tons of Windows DLLs.
As a result, lots of appdb users complain that they saw an app rated as working but can't reproduce that themselves. I have seen this on a lot of forums and on IRC. A rating that is too low would even be better than one that is too high, as users would then be pleasantly surprised when an app works.
Someone on IRC started this page to work towards a new rating system: http://wiki.winehq.org/appdbratingpage
I have extended it a bit, and I think we should move to such a system, in which the user answers a few questions and a rating is selected based on those answers.
The problem with the current approach is that, due to the way different users rank apps, the appdb scores can fluctuate quite a bit.
Roderick
Roderick Colenbrander thunderbird2k@gmx.net wrote:
A lot of users don't know how to rate apps properly and rate something as Gold even when half of its features don't work. They even rate it Gold when they have to copy half a Windows registry and tons of Windows DLLs.
Agreed. We need to solve this somehow.
Someone on IRC started this page to work towards a new rating system: http://wiki.winehq.org/appdbratingpage
I used to think that asking too many questions would annoy users, but now I agree that a wizard approach to computing the rating might provide more reliable results.
- Dan
Roderick Colenbrander thunderbird2k@gmx.net wrote:
A lot of users don't know how to rate apps properly and rate something as Gold even when half of its features don't work. They even rate it Gold when they have to copy half a Windows registry and tons of Windows DLLs.
Agreed. We need to solve this somehow.
Certainly, that's my experience, too.
Someone on IRC started this page to work towards a new rating system: http://wiki.winehq.org/appdbratingpage
I used to think that asking too many questions would annoy users, but now I agree that a wizard approach to computing the rating might provide more reliable results.
- Dan
We were planning to add a wizard people could use if they were unsure about the rating, but yes, it might be a good idea to make it compulsory.
Alexander N. Sørnes
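A minimal sketch in Python of how such a compulsory wizard could map answers to a rating. The questions and the answer-to-rating mapping below are hypothetical illustrations, not the actual criteria from the wiki page:

# All questions, keys, and rating thresholds here are assumptions
# made up for illustration.
QUESTIONS = [
    ("Does the application install without errors?", "installs"),
    ("Do all of its major features work?", "major_features"),
    ("Did you need native Windows DLLs or registry imports?", "needs_hacks"),
    ("Does it crash during normal use?", "crashes"),
]

def compute_rating(answers):
    """Derive an appdb-style rating from yes/no answers, so the user
    never picks the rating directly."""
    if not answers["installs"]:
        return "Garbage"
    if answers["crashes"] or not answers["major_features"]:
        return "Bronze"
    if answers["needs_hacks"]:
        return "Silver"  # runs, but only with workarounds
    return "Gold"

if __name__ == "__main__":
    answers = {}
    for text, key in QUESTIONS:
        answers[key] = input(text + " [y/n] ").strip().lower().startswith("y")
    print("Suggested rating:", compute_rating(answers))

Making the wizard the only way to enter a rating would remove the per-user interpretation of what "Gold" means, which is exactly the fluctuation Roderick describes.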
On Sun, Mar 16, 2008 at 8:53 AM, Roderick Colenbrander thunderbird2k@gmx.net wrote:
Personally, I don't trust appdb regressions much.
We could work around some of the problems by only listing apps where the same reviewer gave a lower rating against a newer version of Wine. That somewhat compensates for the lack of a uniform rating system.
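A minimal sketch in Python of that same-reviewer filter. The flat record layout is an assumption for illustration; the real appdb schema differs:

from collections import defaultdict

# The appdb rating scale, worst to best.
RATING_ORDER = {"Garbage": 0, "Bronze": 1, "Silver": 2, "Gold": 3, "Platinum": 4}

def find_regressions(reports):
    """Return (app, reviewer, old_rating, new_rating) tuples where the
    same reviewer rated the same app lower against a newer Wine version."""
    grouped = defaultdict(list)
    for r in reports:
        grouped[(r["app"], r["reviewer"])].append(r)
    regressions = []
    for (app, reviewer), rs in grouped.items():
        rs.sort(key=lambda r: r["wine_version"])
        for older, newer in zip(rs, rs[1:]):
            if RATING_ORDER[newer["rating"]] < RATING_ORDER[older["rating"]]:
                regressions.append((app, reviewer, older["rating"], newer["rating"]))
    return regressions

reports = [
    {"app": "AppX", "reviewer": "alice", "wine_version": (0, 9, 56), "rating": "Gold"},
    {"app": "AppX", "reviewer": "alice", "wine_version": (0, 9, 57), "rating": "Bronze"},
    {"app": "AppX", "reviewer": "bob",   "wine_version": (0, 9, 57), "rating": "Gold"},
]
print(find_regressions(reports))  # [('AppX', 'alice', 'Gold', 'Bronze')]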
Dan Kegel wrote:
On Sun, Mar 16, 2008 at 8:53 AM, Roderick Colenbrander thunderbird2k@gmx.net wrote:
Personally, I don't trust appdb regressions much.
We could work around some of the problems by only listing apps where the same reviewer gave a lower rating against a newer version of Wine. That somewhat compensates for the lack of a uniform rating system.
Reminds me of a bit I just read about a guy who's doing really well in the Netflix competition. One of his good heuristics is to track the rating levels a particular person tends to use, in order to adjust for the "anchoring effect".
http://www.wired.com/techbiz/media/magazine/16-03/mf_netflix?currentPage=all
One such phenomenon is the anchoring effect, a problem endemic to any numerical rating scheme. If a customer watches three movies in a row that merit four stars — say, the Star Wars trilogy — and then sees one that's a bit better — say, Blade Runner — they'll likely give the last movie five stars. But if they started the week with one-star stinkers like the Star Wars prequels, Blade Runner might get only a 4 or even a 3. Anchoring suggests that rating systems need to take account of inertia — a user who has recently given a lot of above-average ratings is likely to continue to do so. Potter finds precisely this phenomenon in the Netflix data; and by being aware of it, he's able to account for its biasing effects and thus more accurately pin down users' true tastes.
Jim
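A minimal sketch in Python of one way to correct for that anchoring effect: measure each rating against the reviewer's own recent average instead of taking it at face value. The window size and the numeric example are assumptions for illustration:

from collections import deque

def debias(ratings, window=5):
    """Yield (raw, adjusted) pairs, where `adjusted` is how far a rating
    sits above or below the user's own recent average (their "anchor")."""
    recent = deque(maxlen=window)
    for raw in ratings:
        anchor = sum(recent) / len(recent) if recent else raw
        yield raw, raw - anchor
        recent.append(raw)

# A user who just rated three four-star movies and then gives a five:
# the five reads as +1.0 relative to their anchor, not as an absolute 5.
for raw, adjusted in debias([4, 4, 4, 5, 1, 3]):
    print(raw, round(adjusted, 2))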
On Sun, 2008-03-16 at 07:30 -0700, Dan Kegel wrote:
want to use the appdb itself to see which apps have recently fallen in rating.
Nice idea.
Following past appdb practice, one would implement that by adding a new "Browse Regressions" menu item on the left,
I recently created a patch for Bugzilla: "Task Lists" => "Regressions"
http://bugs.winehq.org/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&am...