Dan Kegel wrote:
On Sun, Mar 16, 2008 at 8:53 AM, Roderick Colenbrander <thunderbird2k@gmx.net> wrote:
Personally I don't trust appdb regressions much.
We could work around some of the problems by only listing apps where the same reviewer gave the app a lower rating under a newer version of Wine. That compensates somewhat for the lack of a uniform rating scale.
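To make that concrete, the filter could be just a few lines; below is a rough Python sketch of the idea. The record fields (app, reviewer, wine_version, rating) are invented for illustration and are not the actual AppDB schema.

    # Rough sketch: flag apps where the same reviewer rated the app lower
    # under a newer Wine version than under an older one.
    # Field names are invented for illustration, not the real AppDB schema.
    from collections import defaultdict

    def likely_regressions(reviews):
        """reviews: iterable of dicts with keys
        'app', 'reviewer', 'wine_version', 'rating' (higher = better)."""
        by_app_reviewer = defaultdict(list)
        for r in reviews:
            by_app_reviewer[(r['app'], r['reviewer'])].append(r)

        flagged = set()
        for (app, reviewer), entries in by_app_reviewer.items():
            # Naive version ordering; a real filter would parse version numbers.
            entries.sort(key=lambda r: r['wine_version'])
            for older, newer in zip(entries, entries[1:]):
                if newer['rating'] < older['rating']:
                    flagged.add(app)
        return flagged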
Reminds me of a piece I just read about a guy who's doing really well in the Netflix Prize competition. One of his good heuristics is to track the rating levels a particular person has been using recently, in order to adjust for the "anchoring effect".
http://www.wired.com/techbiz/media/magazine/16-03/mf_netflix?currentPage=all
One such phenomenon is the anchoring effect, a problem endemic to any numerical rating scheme. If a customer watches three movies in a row that merit four stars — say, the Star Wars trilogy — and then sees one that's a bit better — say, Blade Runner — they'll likely give the last movie five stars. But if they started the week with one-star stinkers like the Star Wars prequels, Blade Runner might get only a 4 or even a 3. Anchoring suggests that rating systems need to take account of inertia — a user who has recently given a lot of above-average ratings is likely to continue to do so. Potter finds precisely this phenomenon in the Netflix data; and by being aware of it, he's able to account for its biasing effects and thus more accurately pin down users' true tastes.
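Accounting for that inertia can be as simple as comparing each rating to the same user's recent average rather than taking it at face value. The following is only a toy Python illustration of the idea, not Potter's actual model:

    # Toy sketch of an anchoring correction: score each rating relative to
    # the user's recent average instead of its absolute value.
    from collections import defaultdict, deque

    WINDOW = 10  # how many recent ratings define a user's current "anchor"

    recent = defaultdict(lambda: deque(maxlen=WINDOW))

    def adjusted_rating(user, raw_rating):
        """Return the rating relative to the user's recent anchor."""
        history = recent[user]
        anchor = sum(history) / len(history) if history else raw_rating
        history.append(raw_rating)
        # Positive means "better than this user's recent ratings",
        # negative means "worse", regardless of the absolute scale used.
        return raw_rating - anchor

Something along those lines might also make the same-reviewer AppDB comparison above a bit more robust to reviewers who drift toward harsher or kinder ratings over time.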
Jim