5 years ago
I updated the post with a section on network expansion and the performance display, some Valorant info, a bit about possibly setting your own limits (with a relevant pic), and the code refactoring mentioned by @DoYaSeeMe, plus some other bits he mentioned elsewhere that I copy/pasted, with more to come soon.
Also, does anyone have a link to the dev Q&A on Reddit? I can't find it and would like to put it up if/when they discuss the servers.
5 years ago
@apostolateofDOOM Some more nitpicking:
“The Valorant team has put extensive effort into determining the best combination of tick rate and latency that will minimize peeker’s advantage, and those tests showed that a 128hz tick rate and 35ms (or less) latency would be best for our players.”
What we actually care about is snapshot rate, not tickrate. Tickrate is the frequency with which the server sends data packets to clients. Snapshot rate is the actual frequency of sending entire game updates, which get split into multiple packets when they exceed the packet size limitation. Therefore, if a game has a 128Hz tickrate, that doesn't mean it sends full updates at that rate. If an entire update gets split into 4 packets, the update rate will be around 32Hz. If we take Valorant and put 20 players on a map instead of 10, there is a big chance that updates would need two packets, dropping the snapshot rate to 64Hz. On top of that, more player interactions to process would increase the update processing time, which might require lowering the tickrate to maybe 64Hz to keep the server stable. In the end, updates for a 20-player Valorant game would probably go out at an average rate of 32Hz.
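Here's a rough back-of-the-envelope sketch of that packet-splitting argument in Python (my own illustration; the 1200-byte packet limit and 100 bytes of state per player are assumed numbers picked to match the 10 vs. 20 player scenario above, not anything Riot has published):

# Effective snapshot rate = tickrate / packets needed per full update.
# bytes_per_player and packet_limit are assumed illustration values.
def effective_snapshot_rate(tickrate_hz, players, bytes_per_player=100, packet_limit=1200):
    snapshot_bytes = players * bytes_per_player
    packets = max(1, -(-snapshot_bytes // packet_limit))  # ceiling division
    return tickrate_hz / packets

print(effective_snapshot_rate(128, 10))  # 128.0 - whole update fits in one packet
print(effective_snapshot_rate(128, 20))  # 64.0  - update split into 2 packets
print(effective_snapshot_rate(64, 20))   # 32.0  - tickrate also lowered for stability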
Also, latency and ping have various definitions which aren't always the same. Some say ping is the round trip and latency is the delay between the transmitter and the receiver, others say the opposite, or that ping is client-to-server and latency is server-to-client. It's pretty confusing, but it can explain why ping values for data centers are always different from the latency values shown in game. In Apex's case, I'm leaning towards ping being the one-way delay and latency being the round trip, since the latter is always higher. Also, I'm not sure if the data sent from the client takes the same route as the data received from the server, so the round trip may not always be exactly twice the one-way delay; it could be less or more than 2x.
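To make that asymmetric-route point concrete, a quick hypothetical in Python (the delay values are made up, just to show the RTT isn't necessarily 2x either one-way figure):

# Hypothetical one-way delays over different routes, in ms.
client_to_server = 18
server_to_client = 25
rtt = client_to_server + server_to_client
print(rtt)                   # 43 ms
print(2 * client_to_server)  # 36 ms - less than the RTT
print(2 * server_to_client)  # 50 ms - more than the RTT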
Riot's ultimate aim is to deliver 35ms latency to 70% of their playerbase, so they're clearly not there yet. Their internet backbone is far from global, which makes me wonder if that 70% refers to the US playerbase, or if US players represent 70% of their entire playerbase. Also, I wonder if Multiplay can go for the internet backbone solution, or if that's too game-specific to work when you host many titles from different genres and developers.