"J_Sk333bs;c-1870708" wrote:
I understand more clearly, ty. This is how farming calculators are able to be quite accurate then, correct? If it was self-correcting as you explained, you wouldn't be able to pull accurate predictions because the base %'s would be ever-changing?
Not exactly.
Long term, big picture? A good self-correcting die and a fair die will average out the same over time, so the calculators come out the same either way.
Games do a lot of things to avoid a fair die, because a fair die often feels unfair or nonrandom to our garbage monkey brains. Which method they choose influences whether or not it can be gamed.
If the game does long-range tracking of drops on each individual node and weights the odds toward a set probability, so that you get the same overall drop rate over an extended period of time, you can't really game that system and it averages out the same. This requires a lot of memory, but the results "feel" more fair and accurate, because the weighting makes it much less likely that you roll ten dice and get no hits, even though it also curtails lucky streaks considerably to do so. You see the advertised rate (say, 1/3) consistently even on much smaller sample sizes.
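A minimal sketch of that kind of per-node tracking (the `NUDGE` weighting scheme here is an assumption for illustration, not any particular game's actual implementation):

```python
import random

# Per-node long-range tracking (illustrative): the effective chance is
# nudged above the target when the node is running cold and below it
# when it's running hot, so the observed rate hugs the target.
TARGET = 1 / 3   # advertised drop rate
NUDGE = 0.5      # strength of the correction (assumed value)

def run_node(attempts, rng):
    hits = 0
    for i in range(attempts):
        observed = hits / i if i else TARGET
        p = min(1.0, max(0.0, TARGET + NUDGE * (TARGET - observed)))
        if rng.random() < p:
            hits += 1
    return hits / attempts

print(run_node(300, random.Random(0)))  # already close to 1/3
```

Even on a few hundred attempts, the observed rate sits much closer to 1/3 than a fair die would typically manage, which is exactly the "feels more fair" effect.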
If the game does long-range tracking of drops for "shards" in general, regardless of type, that can absolutely be gamed. Do attempts singly on the nodes you care less about, then after a miss, go to the node you do care about to benefit from that self-correction.
If the game does not do long-range tracking and just corrects within an individual set, it's easier to implement, but you might be able to game it by rolling in pairs. Say this model has a base rate of 30% and adjusts the odds by a flat 10% to self-correct, but only within sets rolled together. The theoretical average of two dice would be 0.6 shards. But if you roll in pairs and the first die is a success, yes, the second die drops to a 20% chance, but you've already won. If the first die misses, the second die rises to a 40% chance, still giving you a reasonable shot at coming out above average. The average result of the pair is not the theoretical 0.6; it's 0.64. Those odds are reproducible, and analysis can be done on the optimal number of dice to roll together.
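The pair math checks out in simulation; here's a sketch using the 30%/10% numbers from the example:

```python
import random

# Within-set self-correction: base 30%, and the second die in a pair
# shifts to 20% after a hit or 40% after a miss.
BASE, ADJUST = 0.30, 0.10

def roll_pair(rng):
    first = rng.random() < BASE
    second_p = BASE - ADJUST if first else BASE + ADJUST
    second = rng.random() < second_p
    return int(first) + int(second)

# Exact expectation: 0.3 * (1 + 0.2) + 0.7 * (0 + 0.4) = 0.64
rng = random.Random(1)
trials = 200_000
avg = sum(roll_pair(rng) for _ in range(trials)) / trials
print(avg)  # hovers around 0.64, not the naive 0.60
```

Two singles in this model stay at 0.30 each (they never trigger the correction), so pairing is strictly better at 0.32 per die.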
Then there are the less-random, quasi-random, and nonrandom methods.
A game can, instead of rolling for a drop, give you invisible points. A node gives you 300 invisible shard points. When you get 1000 shard points, you get a shard. You can always know exactly which attempt will give you a shard, but the average remains the same.
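Sketched out, the fixed-points model looks like this (300 and 1000 are the example values above):

```python
# Fixed invisible points: 300 points per attempt, and a shard every
# time the running total crosses 1000.
POINTS_PER_ATTEMPT = 300
THRESHOLD = 1000

points, schedule = 0, []
for attempt in range(1, 11):
    points += POINTS_PER_ATTEMPT
    if points >= THRESHOLD:
        points -= THRESHOLD
        schedule.append(attempt)

print(schedule)  # [4, 7, 10] -- every drop is fully predictable
```

Three shards in ten attempts: the same 30% average, but zero suspense.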
Or, keep that drop rate, but make each attempt grant a random value between 200 and 400 shard points. The average remains unchanged, but you've reintroduced randomness. You mathematically cannot go more than 5 attempts without getting a shard, and it is likewise impossible to get a shard twice in a row (the leftover after a shard is at most 399 points, and the next attempt adds at most 400, which falls short of 1000). Personally, if I were to uproot the legacy code and replace the drop system, this is the version I'd use.
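A sketch of that randomized-points variant, verifying all three properties over a long run (again, illustrative values from the example):

```python
import random

# Randomized invisible points: each attempt grants 200-400 points,
# and crossing 1000 yields a shard.
rng = random.Random(2)
points, shards, last_shard_at, max_gap = 0, 0, 0, 0
back_to_back = False
prev_was_shard = False
ATTEMPTS = 100_000

for attempt in range(1, ATTEMPTS + 1):
    points += rng.randint(200, 400)
    if points >= 1000:
        points -= 1000
        shards += 1
        max_gap = max(max_gap, attempt - last_shard_at)
        back_to_back = back_to_back or prev_was_shard
        last_shard_at = attempt
        prev_was_shard = True
    else:
        prev_was_shard = False

print(shards / ATTEMPTS)  # long-run rate stays at ~0.3
print(max_gap)            # gaps never exceed 5 attempts
print(back_to_back)       # False: two shards in a row is impossible
```

Same 30% average as the fair die, but with the drought and the double-drop both engineered out.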
There are a bunch of tricks games use to muck around with "random" events to cheat and make them feel more palatable. But the one CG chose was a fair die, which is totes valid.