CSDb - Commodore 64 Scene Database
CSDb User Forums


Forums > CSDb Feedback > weighted average?
2005-07-06 19:05
Nightlord
Account closed

Registered: Jan 2003
Posts: 131
weighted average?

ok, something i just noticed when i was voting for alih.

the guy has six 9's, three 8's, and a 1 from a downvoter. his weighted average turns out to be 7.7.

now if we took the arithmetic average, it would be
(6 × 9) + (3 × 8) + 1 = 79, and 79 / 10 = 7.9

the weighted average function was supposed to recognize downvoting and perform better than the arithmetic average, so i say something is wrong.

not only should the real average have been somewhere in the high 8's, but the csdb function also performs worse than the arithmetic average.

am i making a calculation mistake or something?
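For the record, the poster's arithmetic checks out. A minimal sketch (the actual CSDb weighting formula is not shown in this thread, so this only reproduces the plain average and the outlier-free average he is comparing against):

```python
# Sanity-check of the figures in the post above. The CSDb weighting
# formula itself is not public in this thread, so this only computes
# the plain arithmetic average and the average without the downvote.
votes = [9] * 6 + [8] * 3 + [1]           # 6 nines, 3 eights, one downvote

plain = sum(votes) / len(votes)           # (6*9 + 3*8 + 1) / 10
print(round(plain, 1))                    # 7.9

# Dropping the single outlier 1 gives the "real" average the poster
# expects a downvote-aware weighting to approach:
rest = [v for v in votes if v != 1]
print(round(sum(rest) / len(rest), 2))    # 8.67
```

Both the poster's 7.9 and his "high 8's" expectation come out of this, so the surprise is only that the site reported 7.7, below even the plain mean.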
 
... 27 posts hidden ...
 
2020-08-16 11:42
F7sus4

Registered: Apr 2013
Posts: 117
Quoting Jammer
You can drag things down with an outright 1, but you need quite a lot of 1s, so lone vigilantes stand no real chance against a cluster of other grades ;)


All calculation formulas are prone to generate some bias (positive or negative) in one way or another, but it will only be exploited if there are people willing to do so.

On the other hand, there is no "perfect" system. Using absolute (numeric) measures with people applying arbitrary criteria generates bias, but so does the dispersion of the scale (10-point scales generate far more 8-9s for "very good" works, whereas on 5-point scales it is almost exclusively 5) because of psychology, and so on, and so on.
2020-08-16 12:21
Jammer

Registered: Nov 2002
Posts: 1335
Quoting F7sus4
All calculation formulas are prone to generate some bias (positive or negative) in one way or another, but it will only be exploited if there are people willing to do so.

On the other hand, there is no "perfect" system. Using absolute (numeric) measures with people applying arbitrary criteria generates bias, but so does the dispersion of the scale (10-point scales generate far more 8-9s for "very good" works, whereas on 5-point scales it is almost exclusively 5) because of psychology, and so on, and so on.


Psychological bias can't really be avoided, and it's IMHO not the system's job to compensate for it. It's easier and better to eradicate single malicious actions. If more people gave a prod weak grades, there must be something to it, obviously, and there's no need to fix that. This particular formula certainly doesn't aim to fix biases; it just amplifies frequent votes and therefore ensures the stability of the calculated result.
2020-08-16 12:51
F7sus4

Registered: Apr 2013
Posts: 117
Quoting Jammer
Psychological bias can't really be avoided, and it's IMHO not the system's job to compensate for it. It's easier and better to eradicate single malicious actions.


Note that this solution does nothing but implement a psychological factor into systemic compensation, as it is based on the assumption that one or several votes were malicious/beneficial solely because they went against what most people said. That doesn't need to be true, and excluding them leads to self-confirmation bias, which is an equally rigged outcome.
2020-08-16 13:03
Jammer

Registered: Nov 2002
Posts: 1335
Quoting F7sus4
Note that this solution does nothing but implement a psychological factor into systemic compensation, as it is based on the assumption that one or several votes were malicious/beneficial solely because they went against what most people said. That doesn't need to be true, and excluding them leads to self-confirmation bias, which is an equally rigged outcome.


That's right. But what interests me most is a stable result, not so much people's thinking - whatever bias comes out ;) Try it out:

Simple Average with ^2 Votecount
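The link above points to a spreadsheet, and the exact formula is not reproduced in the thread. Under one plausible reading of the name — each distinct grade weighted by the square of its vote count — a sketch could look like this (`squared_votecount_average` is a hypothetical name, not anything from CSDb):

```python
from collections import Counter

def squared_votecount_average(votes):
    """Assumed reading of "Simple Average with ^2 Votecount":
    each distinct grade is weighted by the square of how many times
    it was given, so frequently-given grades dominate lone outliers."""
    counts = Counter(votes)
    num = sum(grade * n * n for grade, n in counts.items())
    den = sum(n * n for n in counts.values())
    return num / den

votes = [9] * 6 + [8] * 3 + [1]   # the example from the opening post
print(round(squared_votecount_average(votes), 2))   # 8.63
```

On the opening post's example this yields about 8.63 — close to the 8.67 average you get by dropping the downvote entirely, i.e. a lone 1 barely registers.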
2020-08-16 13:30
Frostbyte

Registered: Aug 2003
Posts: 181
Oops, sorry - that was me changing the values in the spreadsheet. :)

The weighting seems to work reasonably well in lessening the effect of a single or even a few downvotes. One change that would accompany this very well would be to show two decimals in the CSDb scores on the prod pages, not just in the top lists.
2020-08-16 13:33
Jammer

Registered: Nov 2002
Posts: 1335
Quoting Jojeli
One change that would accompany this very well would be to show two decimals in the CSDb scores on the prod pages, not just in the top lists.


All top prods would basically look like 9.99 :D
2020-08-16 13:42
F7sus4

Registered: Apr 2013
Posts: 117
Quoting Jammer
But what interests me most is a stable result, not so much people's thinking - whatever bias comes out ;)


Which might be a bad idea to begin with, as it is impossible to pull reliable results out of distorted input data. So the conclusion would not be to advocate for improving the formula, but for the honesty of the votes. And to achieve that, transparency would be required.

I've tried your formula, and while it provides more stable results with mixed input, it is definitely more prone to direct hate-vote (multiple 1s) when compared to the current CSDb system. But "it's only my opinion, I'm not an oracle" ;D
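The hate-vote point is easy to illustrate under the same squared-votecount reading of Jammer's spreadsheet (still an assumption, as the formula itself is not in the thread): because a bucket's weight grows with the square of its size, a coordinated cluster of identical 1s gains influence quadratically, while a lone downvote barely registers.

```python
from collections import Counter

def sq_avg(votes):
    # Assumed reading of "Simple Average with ^2 Votecount":
    # each grade bucket is weighted by (its vote count) squared.
    c = Counter(votes)
    return sum(g * n * n for g, n in c.items()) / sum(n * n for n in c.values())

base = [9] * 6 + [8] * 3                  # the honest votes from the opening post
for ones in (1, 3, 6):                    # growing hate-vote cluster
    print(ones, round(sq_avg(base + [1] * ones), 2))
# 1 one  -> 8.63 (barely moves the score)
# 3 ones -> 7.5
# 6 ones -> 5.33 (the 1s now weigh as much as the 9s)
```

A single vigilante is damped, but once the cluster of 1s rivals the largest honest bucket, it pulls the score down hard.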
2020-08-16 13:48
Jammer

Registered: Nov 2002
Posts: 1335
Quoting F7sus4
it is definitely more prone to direct hate-vote (multiple 1s) when compared to the current CSDb system.

Of course, every system is more or less vulnerable to a mass attack of some kind. Review bombings on Metacritic can be quite a weapon at times :D


BTW, obviously I was wrong about the results not being weighted. They are weighted on the basis of vote count - what I meant is that the results don't include a fixed per-value weight. Just to be precise ;)
2020-08-16 14:11
F7sus4

Registered: Apr 2013
Posts: 117
Quoting Jammer
Review bombings on Metacritic can be quite a weapon at times :D


Yes, this is completely true. Bombings are popular in big communities, but an enormous vote count also disperses the responsibility and the effect of a single participant. CSDb is not Metacritic, Rotten Tomatoes, etc. In small communities, being anonymous (or not) puts a completely different weight on the decision-making process and on how each single vote/comment affects the outcome.
2020-08-16 14:11
TheRyk

Registered: Mar 2009
Posts: 2218
You obviously suffer from Silly Season/Dog Days. Too hot to create music, but not too hot to vex your brain with the voting system :)

BTT/Krill's question: at least Jammer's formula could easily be hammered into a few lines of BASIC code and thus perform way faster than the Penisbruch stuff, which seems to compute for ages. And the results do not seem to differ a big deal.

PS: More often than not, even jury votes do not differ as much from the infamous CSDb down-/namevoting as people expect. Party voting is a totally different story, though: people who are actually there in person are normally voted more generously than remote entries, for example.
Copyright © No Name 2001-2024