On Thu Feb 20 21:41:59 2025 +0000, Paul Gofman wrote:
On Windows, however, sleep / timer resolution is 16ms by default. That can be changed system-wide with NtSetTimerResolution() / winmm.timeBeginPeriod, and AFAIK (I didn't test that myself) 0.5ms is the best supported resolution (only available through NtSetTimerResolution). So it is interesting what the game does exactly, which sort of delays or waits affect the performance, and how that works on Windows. Just making sleep precision as good as possible might instead negatively affect the performance in some other games.
Well, for one, I'd like to see an example of a real game that doesn't call timeBeginPeriod(1), but that's beside the point. We already simulate a 1ms timer period by having the tick count increase every millisecond, and this is absolutely not intended to change that.
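For context, the kind of frame limiter I mean looks roughly like this (a hypothetical sketch, not code from any particular game; the helper names are made up): timeBeginPeriod(1) once at startup, then an alertable SleepEx to pad out each frame:

```c
/* Hypothetical frame-limiter sketch, not taken from any specific game:
 * raise the timer resolution to 1ms once, then pad each frame with an
 * alertable SleepEx. */
#include <windows.h>
#pragma comment(lib, "winmm.lib")

static void frame_limiter_init(void)
{
    /* Request 1ms scheduler/timer granularity system-wide. */
    timeBeginPeriod(1);
}

static void frame_limiter_wait(DWORD frame_start_ms, DWORD target_frame_ms)
{
    /* DWORD subtraction handles timeGetTime() wrap-around. */
    DWORD elapsed = timeGetTime() - frame_start_ms;

    if (elapsed < target_frame_ms)
        SleepEx(target_frame_ms - elapsed, TRUE /* alertable */);
}
```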
What this is addressing is unrelated server calls from other threads pushing the **global** `current_time` and `monotonic_time` (updated on each timeout) forward. That causes the timeouts for alertable `SleepEx` (used in those games' frame limiters) to expire up to 1ms after that last update, because the remaining time is ceil'd to the next millisecond to avoid spinning with a 0 timeout. On average this adds an extra 0.5ms of waiting, depending on where we land relative to the tick boundary.
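To illustrate the rounding (a simplified standalone sketch of the arithmetic described above, not the actual wineserver code; `next_timeout_ms` and `TICKS_PER_MS` are made-up names):

```c
/* Simplified sketch of the rounding behaviour described above, not the
 * actual wineserver code: the poll timeout is the difference between the
 * absolute expiry and the last-updated global time, rounded up to a whole
 * number of milliseconds so a partially elapsed timeout never becomes a
 * 0ms (spinning) wait. */
#include <stdio.h>

#define TICKS_PER_MS 10000LL  /* timeouts kept in 100ns units */

/* Milliseconds to wait for a timeout expiring at 'when'. */
static int next_timeout_ms(long long current_time, long long when)
{
    long long diff = when - current_time;
    if (diff <= 0) return 0;
    return (int)((diff + TICKS_PER_MS - 1) / TICKS_PER_MS); /* ceil to ms */
}

int main(void)
{
    /* The game asked for a 5ms alertable sleep, but an unrelated request
     * from another thread pushed current_time forward by 0.4ms, leaving
     * 4.6ms of the timeout. That rounds up to a 5ms wait, so the sleep
     * overshoots the requested expiry by 0.4ms; averaged over where the
     * update lands, the overshoot is ~0.5ms. */
    long long when = 5 * TICKS_PER_MS;  /* requested expiry */
    long long current_time = 4000;      /* 0.4ms: pushed forward by an unrelated request */

    printf("remaining %.1fms -> wait %dms\n",
           (when - current_time) / (double)TICKS_PER_MS,
           next_timeout_ms(current_time, when));
    return 0;
}
```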