On Windows, NtDelayExecution() has a default resolution of 15.625 ms. This can be changed system-wide with NtSetTimerResolution() or timeBeginPeriod().
Each process's preference is stored in its EPROCESS, and the global resolution is the minimum of all of them. It gets updated each time a process exits or sets a new value.
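For reference, here is a small standalone test (not Wine code) I'd use to observe this behavior on Windows. The NtQueryTimerResolution()/NtSetTimerResolution() prototypes below are the widely known unofficial ones; all values are in 100 ns units:

```c
/* Standalone check of the behavior described above.  Both functions are
 * resolved from ntdll at run time, so this builds with plain MinGW/MSVC and
 * no extra import libraries.  Values are in 100 ns units; on a default
 * system the current resolution typically reads back as 156250 (15.625 ms). */
#include <windows.h>
#include <stdio.h>

typedef LONG (WINAPI *NtQueryTimerResolution_t)( ULONG *min, ULONG *max, ULONG *current );
typedef LONG (WINAPI *NtSetTimerResolution_t)( ULONG desired, BOOLEAN set, ULONG *current );

int main( void )
{
    HMODULE ntdll = GetModuleHandleA( "ntdll.dll" );
    NtQueryTimerResolution_t query = (NtQueryTimerResolution_t)GetProcAddress( ntdll, "NtQueryTimerResolution" );
    NtSetTimerResolution_t set = (NtSetTimerResolution_t)GetProcAddress( ntdll, "NtSetTimerResolution" );
    ULONG min, max, cur;

    query( &min, &max, &cur );
    printf( "min %lu, max %lu, current %lu (100 ns units)\n", min, max, cur );

    /* ask for 1 ms; the returned value is the effective global resolution,
     * i.e. the minimum over all processes' requests */
    set( 10000, TRUE, &cur );
    printf( "after set: current %lu\n", cur );

    /* drop our request again; another process may still hold a finer one */
    set( 10000, FALSE, &cur );
    printf( "after reset: current %lu\n", cur );
    return 0;
}
```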
In Wine, NtDelayExecution() has the resolution of select(), which is below 1 ms.
Recently GetTickCount() was changed to use user_shared_data and now has an update interval of 16 ms - this is (almost) the same as on Windows.
The Sleep()/GetTickCount() misalignment seems to be the root cause of https://bugs.winehq.org/show_bug.cgi?id=49564.
I would like to implement timer resolution support, but since I am new to the project it's better to ask you folks first :-)
Implementation plan:
1. Change the max tick update interval to 15.625 ms, or as close as possible, erring on the shorter side.
2. Introduce a per-process timer resolution.
3. Change NtDelayExecution() so it waits in multiples of the process's timer resolution (see the first sketch after this list).
4. Implement NtQueryTimerResolution(), and NtSetTimerResolution() that sets (2nd arg = TRUE) and resets (2nd arg = FALSE) the per-process timer resolution, with min, max and rounding behaviors similar to Windows.
5. Implement timeBeginPeriod() and timeEndPeriod(); they will keep track of matching calls and use NtSetTimerResolution() internally (see the second sketch after this list).
6. Check whether other "wait" calls use the timer resolution and give them the same treatment as in step 3.
7. If we ever find software that depends on the timer resolution being global, we can make NtSetTimerResolution() do a wineserver call. The requested resolution would be kept in a per-process structure, and the effective timer resolution could be exposed via an unused part of user_shared_data, updated on each NtSetTimerResolution() call or whenever a process exits.
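To make step 3 concrete, here is a rough sketch of the rounding I have in mind; get_process_timer_res() is a hypothetical accessor for the per-process value from step 2, and everything is in the 100 ns units NtDelayExecution() already uses:

```c
/* Sketch for step 3: before waiting, round a relative delay up to a multiple
 * of the per-process timer resolution.  In ntdll proper the includes and the
 * accessor would of course look different. */
#include <windows.h>

extern ULONGLONG get_process_timer_res(void);  /* hypothetical, from step 2 */

static LONGLONG round_delay_to_resolution( LONGLONG timeout )
{
    ULONGLONG res = get_process_timer_res();   /* e.g. 156250 == 15.625 ms */
    ULONGLONG ticks;

    if (timeout >= 0 || !res) return timeout;  /* leave absolute timeouts alone here */

    ticks = ((ULONGLONG)-timeout + res - 1) / res;   /* round the relative delay up */
    return -(LONGLONG)(ticks * res);
}
```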
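And a rough sketch of the winmm side from step 5, as it would sit inside dlls/winmm. The 1..1000 ms clamp and the simple array are just placeholders; the point is the refcounting per period and forwarding the smallest period that still has users to NtSetTimerResolution():

```c
/* Sketch for step 5: timeBeginPeriod()/timeEndPeriod() track matching calls
 * and drive NtSetTimerResolution() with the finest outstanding request.
 * Periods are converted from ms to the 100 ns units ntdll uses. */
#include <windows.h>
#include <winternl.h>

/* well-known unofficial prototype; in Wine this would come from ntdll */
extern NTSTATUS WINAPI NtSetTimerResolution( ULONG resolution, BOOLEAN set, ULONG *current );

static CRITICAL_SECTION period_cs;   /* assumed to be initialised during winmm startup */
static UINT period_refs[1001];       /* refcount per requested period in ms */

static void update_timer_resolution(void)
{
    ULONG current;
    UINT period;

    for (period = 1; period <= 1000; period++)
        if (period_refs[period]) break;

    if (period <= 1000)
        NtSetTimerResolution( period * 10000, TRUE, &current );   /* ms -> 100 ns */
    else
        NtSetTimerResolution( 0, FALSE, &current );               /* no requests left */
}

MMRESULT WINAPI timeBeginPeriod( UINT period )
{
    if (!period || period > 1000) return TIMERR_NOCANDO;
    EnterCriticalSection( &period_cs );
    period_refs[period]++;
    update_timer_resolution();
    LeaveCriticalSection( &period_cs );
    return TIMERR_NOERROR;
}

MMRESULT WINAPI timeEndPeriod( UINT period )
{
    if (!period || period > 1000) return TIMERR_NOCANDO;
    EnterCriticalSection( &period_cs );
    if (!period_refs[period])
    {
        LeaveCriticalSection( &period_cs );
        return TIMERR_NOCANDO;
    }
    period_refs[period]--;
    update_timer_resolution();
    LeaveCriticalSection( &period_cs );
    return TIMERR_NOERROR;
}
```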
Thoughts?