Questions About the Recently Published Anti-Cheat Statistics (MIR = 2%)
Hi everyone,
I wanted to raise a constructive question regarding the recently published Battlefield 6 anti-cheat statistics — especially the statement that “~98% of all matches were fair” and the associated “Match Infection Rate (MIR) of ~2%”.
Before posting, I reviewed the numbers carefully and also had them cross-checked by an analytical AI configured for Six-Sigma-style measurement-system reasoning.
The outcome was consistent:
the MIR metric, as published, does not appear statistically capable of representing the real cheating impact players experience.
I’d like to explain why and invite clarification from the developers.
1. MIR only measures detected cheats, not actual cheating
Based on the description, MIR seems to count:
- cheats the system already detects
- actions that triggered confirmed anti-cheat signatures
- enforcement events (kicks, bans, suspensions)
- blocked cheat attempts
But MIR does not include:
- undetected ESP
- soft aim with human-like smoothing
- RCS scripts
- DMA hardware
- external AI aim assistance
- private “safe build” clients
- controller scripting
- anything not yet classified as cheating by the detection logic
This is not a criticism — it is simply how any detection-based KPI works.
But it also means MIR cannot reflect the full scope of cheating that may be happening in real matches.
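To make that limitation concrete, here is a minimal sketch of how a detection-based rate relates to the true rate. The 20% detection-coverage figure is a purely hypothetical assumption used for illustration, not a published number:

```python
# Illustration only: 'detection_coverage' is a hypothetical assumption,
# not a value that has been published anywhere.
def implied_true_rate(observed_mir: float, detection_coverage: float) -> float:
    """If detections capture only a fraction of all cheating, the true
    infection rate is the observed MIR scaled up by that fraction."""
    return observed_mir / detection_coverage

# A 2% detected MIR with (hypothetically) 20% detection coverage
# would imply a true match infection rate of roughly 10%.
implied = implied_true_rate(0.02, 0.20)
```

The point of the sketch is only that any detection-based KPI is a lower bound: the lower the coverage, the more the published number understates the real rate.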
2. The 2% MIR conflicts strongly with what many players actually experience
If the MIR truly reflected all cheating:
- roughly 98% of matches would be clean
- the average player would encounter about one cheater-infected match per 50 played
- cheating discussions would be minimal
But many players observe suspicious or clearly abnormal behavior in far more matches than that.
And this cannot simply be blamed on netcode, desync, latency spikes, hit-reg variance, or TTK inconsistencies.
Those issues certainly exist — but they do not explain:
- blatant tracking through walls
- perfect recoil nullification
- robotic micro-adjust aim patterns
- snapped target switching
- impossible flick precision
- unnatural spatial awareness
- repeated full-team wipes with identical aim signatures
These are observations that netcode or server issues cannot mimic reliably or repeatedly.
In other words, the gap between the published number and what players see is too large to attribute to “mismatched perception”.
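The size of that gap can also be illustrated empirically. This is a quick Monte Carlo sketch I am adding for illustration (assuming matches are independent and the true infection rate really is 2%); it is not part of the published statistics:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible
TRIALS, MATCHES, RATE = 100_000, 50, 0.02

# For each simulated player, count infected matches in a 50-match sample,
# assuming every match is independently infected with probability RATE.
hits = [sum(random.random() < RATE for _ in range(MATCHES))
        for _ in range(TRIALS)]

share_4_plus = sum(h >= 4 for h in hits) / TRIALS    # roughly 0.018
share_10_plus = sum(h >= 10 for h in hits) / TRIALS  # essentially zero
```

Under a true 2% rate, almost no simulated player sees 10 or more infected matches out of 50, so reports of such frequencies point at the metric rather than at player perception.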
3. A simple 95% confidence check rejects the 2% MIR
You don’t need equations to see the problem.
If 2% were the true match infection rate:
- seeing cheating in 4 or more matches out of 50 would already be statistically unlikely (roughly a 1.8% chance under a true 2% rate)
- many players report 10, 15, or even 20 such games in that span
- frequencies like those are practically impossible under a true 2% rate
This assessment was independently confirmed by an analytical AI applying Six-Sigma measurement-system logic:
the published MIR value is not consistent with typical gameplay observations at the 95% confidence level.
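The check above can be reproduced with a plain binomial tail probability, no special tooling required (assuming matches are independent and the infected-match rate is exactly the published 2%):

```python
from math import comb

def tail_prob(n: int, p: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least k
    infected matches out of n when the true per-match rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p4 = tail_prob(50, 0.02, 4)    # ~0.018, already below the 5% threshold
p10 = tail_prob(50, 0.02, 10)  # astronomically small
```

Seeing 4 or more infected matches in 50 already falls below the 5% significance threshold, and 10 or more is effectively impossible, which is why widespread player experience rejects a true 2% rate.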
4. This indicates a measurement-system issue, not a perception issue
In quality management, a KPI must:
- represent real failure modes
- correlate with user experience
- include meaningful detection coverage
- support process control
- not hide unmeasured defects
MIR fails these criteria.
It appears to measure only a narrow subset of confirmed detections, not real cheat presence.
This does not mean the anti-cheat team is doing a bad job — rather that the metric used to describe their performance may not be suited to the complexity of the problem.
5. A constructive suggestion
If the published MIR is primarily intended as a communication metric:
- it unintentionally undermines community trust
- players become less motivated to report suspicious behavior
- it creates the impression that player experiences are being dismissed
The community would likely be far more supportive if EA presented more realistic, transparent metrics — even if the numbers look less perfect. Honest trend data encourages cooperation, not criticism.
If MIR is used internally as a KPI to steer anti-cheat development, then it may be worth reviewing whether a detection-only metric can actually guide such decisions.
A measurement system that cannot see undetected cheating cannot fully control or improve the process.
6. Summary
- MIR seems to reflect only cheats already detected, not real cheat impact.
- The 2% value does not align with widespread player observations.
- Netcode and latency issues cannot explain the gap.
- A basic statistical check (95% confidence) rejects the MIR value.
- This suggests a measurement-system limitation, not a gameplay misunderstanding.
- More realistic metrics would greatly improve trust and community reporting.
I’m not here to attack anyone.
I just want to understand how MIR is intended to be interpreted and whether it’s truly meant to represent the cheating reality that players encounter.
A clarification from the team would be very appreciated.
Thanks for reading.
Cru3lr4Ge aka Crucx