Forum Discussion
6 years ago
"EduardoCadav;c-1889018" wrote:
@cannonfodder_iv
Assuming a binomial distribution with a 25% success rate, the probability of 100 or fewer successes out of 500 trials is p = .0049037. So you are a ~1 in 200 case. Now in the population of the whole game that's not surprising at all, but it may be considered a surprising result if the population is limited just to "people playing swgoh and tracking speed slices".
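That p-value is easy to reproduce; here's a minimal sketch in Python (standard library only, my own illustration) that computes the exact binomial CDF:

```python
from math import comb

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Probability of 100 or fewer successes in 500 trials at a 25% rate;
# should reproduce the ~0.0049 figure quoted above.
print(binom_cdf(100, 500, 0.25))
```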
Here's the interesting part though (to me at least). If you (or anyone else tracking drops) have ever checked your success rate at any point during your tracking and thought "I must just need a bigger sample size", then you've committed a very common and almost always unintentional form of scientific malpractice called p-hacking. It's rampant in fields like the social sciences, epidemiology, nutritional science, and psychology, but it crops up in any research involving statistics, and it produces silly news headlines like "Scientific study shows chocolate helps you lose weight".
To give a concrete example: if someone tracked 150 slices and got 24 hits (16% success), that's p = .00540223, which again is a ~1 in 200 case. Following this, further tracking is done to increase the sample size. To get to 100 successes out of 500 slices, this person would need 76 hits in the next 350 slices (21.7% success), which is p = .08575196, a ~1 in 11 occurrence. To reach the mean result of 25% after N=500, they would instead need 101 or more hits in those 350 slices, and the probability of 101 or more successes out of 350 (28.9% success) is p = .05592093, a ~1 in 18 occurrence. So returning to the mean is quite a bit less probable than remaining below it.
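The three probabilities in that example can be checked the same way; a sketch (my own, standard library only) using the exact binomial CDF:

```python
from math import comb

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 24 or fewer hits in 150 slices at a true 25% rate (~0.0054 quoted above)
print(binom_cdf(24, 150, 0.25))
# 76 or fewer hits in the next 350 slices (~0.0858 quoted above)
print(binom_cdf(76, 350, 0.25))
# 101 or more hits in those 350 slices (~0.0559 quoted above)
print(1 - binom_cdf(100, 350, 0.25))
```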
None of this changes the fact that your data was a 1 in 200 occurrence, and it always would have been regardless of whether you checked at any point during the tracking. The problem lies in how the data gets reported, and by whom. In science, what happens is that people check a hypothesis, see a trend, note that it isn't significant, and so collect more data. When the new data is combined with the old, the probability of a "statistically significant" result increases, and the paper gets published with an incorrect conclusion. In swgoh tracking, many people track a few tens or hundreds of drops, see roughly the expected rate, stop tracking, and never report. Those (not necessarily you) who track a few tens of drops and get a rubbish drop rate (but not a statistically significant one) come on here, tell people, and get the response "just track more data". I cringe at this, because said person then goes away, tracks more data, and rolls it into the previous data, producing a much bigger sample that, thanks to the early bad run, still has a high probability of showing a large difference from the true rate (whatever it is).
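The optional-stopping effect being described can be demonstrated with a small Monte Carlo sketch of my own (not from the post): simulate honest trackers whose true rate really is 25%, have them check a one-sided p-value (normal approximation) every 50 drops, and compare how often they ever see "significantly low" against how often a single check at n=500 would:

```python
import random
from math import erf, sqrt

def left_tail_p(k, n, p=0.25):
    """Normal approximation (with continuity correction) to P(X <= k),
    X ~ Binomial(n, p)."""
    z = (k + 0.5 - n * p) / sqrt(n * p * (1 - p))
    return 0.5 * (1 + erf(z / sqrt(2)))

random.seed(42)
SIMS, BATCH, CHECKS, ALPHA = 2000, 50, 10, 0.05

peeked = final = 0
for _ in range(SIMS):
    hits = 0
    tripped = False
    for c in range(1, CHECKS + 1):
        # 50 more drops at the true 25% rate, then a peek at the p-value
        hits += sum(random.random() < 0.25 for _ in range(BATCH))
        if left_tail_p(hits, c * BATCH) < ALPHA:
            tripped = True
    peeked += tripped
    final += left_tail_p(hits, CHECKS * BATCH) < ALPHA

print(f"flagged 'too low' at some checkpoint: {peeked / SIMS:.1%}")
print(f"flagged only at the final n=500:      {final / SIMS:.1%}")
```

With repeated peeking, the fraction of perfectly fair trackers who at some point look "significantly unlucky" is well above the nominal 5%, which is exactly why "check, then track more" biases what gets reported.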
Anyway, been wanting to get that off my chest for a while and isn't necessarily directed at you just since you asked about the stats. :)
I don't disagree with what you have said, except that you don't seem to be conveying that you do indeed need a sufficient sample size to model reality. Without that, the point of statistics is defeated and we're back in the realm of "I believe it because some anecdotal evidence I've seen fulfills my confirmation bias."
In the case of mod slicing and perception of the data, I don't believe any of the complainers are collecting and analyzing data. So I would say, "You need more n." Right now they are at n = 0.