"nfidel2k;c-2393963" wrote:
The mistake was transferring the 33% drop rate to the success of the full set.
I don’t know if you were going with the chance to roll at least one success out of five, which would be 33%; but the inverse of that would be the chance to roll at least one failure out of five, which would be 67%. So the chance of rolling 1, 2, 3, 4, or 5 out of 5 is 33% technically, but the 67% inverse is the chance to roll 0, 1, 2, 3, or 4 out of 5. Not just roll a 0 out of 5.
The chance to roll 0 out of 5 is the single-roll failure chance (67%) raised to the power of the number of rolls. So 0.67 x 0.67 x 0.67 x 0.67 x 0.67, or (0.67)^5, or approximately 13.5%.
No... and it is out of 6, not out of 5.
There are 6 possible outcomes: (0/5), (1/5), (2/5), (3/5), (4/5), and (5/5).
But we know that the probability of scoring is 33% <<< everyone seems to agree with this.
What is the probability of scoring... the probability THAT we score?
For individual scores... we can say:
1st attempt = (1/1) <<< scores
2nd attempt = (0/1)
3rd attempt = (0/1)
4th attempt = (1/1) <<< scores
5th attempt = (0/1)
Or we can write it as (2/5) if we treat it as 5 collective attempts.
Now, we all agree that with a large data set, we get a 33% success rate.
Let's say, 1000 data points.
As individual attempts = 1000 attempts have to be made.
As 5-collective attempts = 200 group attempts have to be made.
But since we can do 5 attempts per day (without refresh)... these 1000 attempts are actually those 200 5-collective attempts... are they not? You see?
So... how can you say that what is applied to those 1000 attempts can't be applied to those 200 5-collective attempts?
It is just a new way of saying it... just a term.
The data are the same, no matter how you group them, in groups of 1 or in groups of 5.
1000 individual attempts = 200 5-collective attempts.
Thus what can be applied to the first can be applied to the latter too.
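A minimal sketch of that regrouping in Python, assuming each attempt is an independent 33% roll (the seed and variable names here are only illustrative):

```python
import random

random.seed(1)  # illustrative seed so the run is repeatable

# 1000 individual attempts: 1 = score, 0 = miss, 33% chance each
attempts = [1 if random.random() < 0.33 else 0 for _ in range(1000)]

# the very same data, regrouped into 200 5-collective attempts
groups = [attempts[i:i + 5] for i in range(0, 1000, 5)]

print(sum(attempts))                 # total scores, counted one by one
print(sum(sum(g) for g in groups))   # same total, counted by group
```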
All seem to agree that 67% x 1000 individual attempts = 670 failed attempts.
So... 67% x 200 5-collective attempts = 134 failed attempts.
So I will have (0/5) 134 times...
And I will have #(1/5) + #(2/5) + #(3/5) + #(4/5) + #(5/5) = 66 times.
Just do the normal 1000 individual samplings that you already know...
Like this....
(1/1)
(0/1)
(0/1)
(0/1)
(0/1)
(0/1)
(0/1)
(0/1)
(0/1)
(0/1)
(0/1)
(1/1)
(1/1)
(0/1)
(0/1)
•
•
•
(1/1) <<< until the 1000th data.
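A sketch of generating and printing that individual stream, under the same illustrative assumptions as above:

```python
import random

random.seed(1)  # illustrative seed
sample = [1 if random.random() < 0.33 else 0 for _ in range(1000)]

# print the first few data points in the (x/1) notation used above
for hit in sample[:15]:
    print(f"({hit}/1)")
# ... and so on, until the 1000th data point
```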
But now, group them into 5s.
(1/1) group 1 = (1/5)
(0/1)
(0/1)
(0/1)
(0/1)
(0/1) group 2 = (0/5)
(0/1)
(0/1)
(0/1)
(0/1)
(0/1) group 3 = (2/5)
(1/1)
(1/1)
(0/1)
(0/1)
•
•
•
(1/1) <<< until the 1000th data.
I am sure... that you will have close to 134 failed groups like group 2 (0/5), and 66 success groups which contain (1/5) and/or (2/5) and/or (3/5) and/or (4/5) and/or (5/5).
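A minimal sketch of that grouped tally, under the same illustrative 33% Bernoulli assumption and seed as before, so the predicted 134/66 split can be checked against an actual run:

```python
import random
from collections import Counter

random.seed(1)  # illustrative seed
sample = [1 if random.random() < 0.33 else 0 for _ in range(1000)]

# regroup the 1000 individual data points into 200 groups of 5
groups = [sample[i:i + 5] for i in range(0, 1000, 5)]

# tally how many groups come out (0/5), (1/5), ..., (5/5)
tally = Counter(sum(g) for g in groups)
for k in range(6):
    print(f"({k}/5): {tally[k]} groups")

print("failed groups (0/5):", tally[0])
print("success groups (1/5 or better):", 200 - tally[0])
```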