On 2020-08-17 19:46, Arkadiusz Hiler wrote:
TL;DR: A good learning experience that led to a fix in winmm, but IMO we are better off defaulting to 1ms resolution and avoiding all the extra complications.
The whole thing started because I misassigned the blame for a DOSBox regression to mismatched GetTickCount() and Sleep() resolutions. Initial attempts at fixing the issue just made Sleep() work in increments of 15.6ms.
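For illustration, here is a minimal standalone probe (not part of the series) that makes the mismatch visible: with the default ~15.6ms timer resolution, each Sleep(1) gets rounded up to a full timer tick.

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        DWORD start = GetTickCount();
        int i;

        for (i = 0; i < 100; i++)
            Sleep(1);  /* rounded up to one ~15.6ms tick at default resolution */

        /* expect ~1560ms here instead of the ~100ms that 1ms resolution gives */
        printf("100 x Sleep(1) took %lu ms\n",
               (unsigned long)(GetTickCount() - start));
        return 0;
    }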
Then I learned about timer resolution and started implementing this series. As soon as I had a working timeBeginPeriod(), DOSBox regressed again.
After some confusing debugging of "phantom" calls to GetTickCount() with one-call stack traces, and after building myself unoptimized debug versions of SDL and DOSBox, I found out that in Wine timeGetTime() is an alias for GetTickCount(), and that this was causing the problem.
More details here: https://www.winehq.org/pipermail/wine-devel/2020-August/171965.html
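Schematically (this is a sketch, not the literal Wine source), the winmm implementation at that point behaved like this, so any granularity of GetTickCount() leaked straight into timeGetTime():

    /* simplified sketch of the aliasing, not the actual Wine code */
    DWORD WINAPI timeGetTime(void)
    {
        return GetTickCount();  /* callers expecting ~1ms precision now see
                                   the coarse ~15.6ms tick granularity */
    }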
I have started questioning the usefulness and feasibility of implementing resolution support:
- introducing extra branching for each Sleep / Wait call (see the sketch after this list)
- multiple places to maintain (some wait calls use server_wait, others call the server directly)
- a lot of Sleep(1) call sites to assess across Wine's codebase
- a per-process implementation may not be sufficient for some multi-process software
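To give an idea of the first point, here is a hypothetical sketch of the per-wait branching, assuming a helper get_timer_resolution() that returns the current resolution in 100ns units (neither the helper nor the wrapper is actual Wine code):

    #include <windows.h>
    #include <winternl.h>

    /* NtDelayExecution is exported by ntdll but not declared in the
     * public headers; declare it here for the sketch */
    extern NTSTATUS WINAPI NtDelayExecution(BOOLEAN, const LARGE_INTEGER *);

    /* hypothetical helper: current resolution in 100ns units */
    extern ULONGLONG get_timer_resolution(void);

    static NTSTATUS delay_with_resolution(BOOLEAN alertable,
                                          const LARGE_INTEGER *timeout)
    {
        LARGE_INTEGER adjusted = *timeout;
        ULONGLONG res = get_timer_resolution();  /* e.g. 156250 = 15.625ms */

        /* negative NT timeouts are relative; round them up to a multiple
         * of the current resolution, mirroring coarse scheduler ticks */
        if (res > 10000 /* 1ms */ && adjusted.QuadPart < 0)
            adjusted.QuadPart =
                -(LONGLONG)(((ULONGLONG)-adjusted.QuadPart + res - 1) / res * res);

        return NtDelayExecution(alertable, &adjusted);
    }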
I did some testing with popular software, and a lot of it sets the resolution:
- Chrome and a lot of Electron-based apps - 1ms with multimedia content, fluctuating otherwise
- Firefox - 8ms when downloading, 5ms when playing music/videos
- Visual Studio - 1ms all the time
- Discord - 1ms all the time while being on a voice chat
- Spotify - oscillating 1-5ms when playing music
- EGS - 1ms
- many, many games
Resolution is global - if one process requests a high resolution (i.e. a low update interval), the whole system gets it.
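For reference, the usual pattern by which such applications request it (standard winmm API usage, not code from any of the programs above):

    #include <windows.h>
    #include <mmsystem.h>  /* link with winmm.lib / -lwinmm */

    void latency_sensitive_section(void)
    {
        TIMECAPS tc;

        if (timeGetDevCaps(&tc, sizeof(tc)) == MMSYSERR_NOERROR)
        {
            UINT period = tc.wPeriodMin < 1 ? 1 : tc.wPeriodMin;

            timeBeginPeriod(period);   /* raises the resolution system-wide */
            /* ... audio / video / input work that needs precise Sleep() ... */
            timeEndPeriod(period);     /* every Begin must be paired with an End */
        }
    }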
Conjecture: if having 1ms resolution all the time / by default caused any noticeable problems or performance issues, they would already be seen on Windows and loudly complained about.
I think it's worth abandoning the effort to implement proper resolution setting, defaulting to 1ms, and implementing more convincing stubs for timeBeginPeriod(), NtSetTimerResolution() and friends.
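A minimal sketch of what such a "more convincing" stub could look like, assuming the system already runs at 1ms (this is just to illustrate the idea, not the actual patch):

    #include <windows.h>
    #include <mmsystem.h>

    static LONG period_requests;  /* per-process bookkeeping only */

    MMRESULT WINAPI timeBeginPeriod(UINT period)
    {
        if (period < 1 || period > 1000)  /* range check simplified */
            return TIMERR_NOCANDO;
        InterlockedIncrement(&period_requests);
        return TIMERR_NOERROR;  /* already at 1ms, nothing to change */
    }

    MMRESULT WINAPI timeEndPeriod(UINT period)
    {
        if (period < 1 || period > 1000)
            return TIMERR_NOCANDO;
        InterlockedDecrement(&period_requests);
        return TIMERR_NOERROR;
    }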
Any thoughts?
Cc: Rémi Bernon <rbernon@codeweavers.com>
Cc: Paul Gofman <pgofman@codeweavers.com>
Arkadiusz Hiler (5):
  ntdll: Implement NtQueryTimerResolution and NtSetTimerResolution
  ntdll: Make wait/delay calls respect the timer resolution
  server: Set the worst case time for updating user_shared_data to 15ms
  ntdll: Remove todo_wine from tick count vs sleep test
  winmm: Implement timeBeginPeriod() and timeEndPeriod()
 dlls/ntdll/tests/time.c        | 187 ++++++++++++++++++++++++++++++++-
 dlls/ntdll/unix/server.c       |  16 ++-
 dlls/ntdll/unix/sync.c         |  84 +++++++++++++--
 dlls/ntdll/unix/unix_private.h |  10 ++
 dlls/winmm/tests/timer.c       |  46 +++++++-
 dlls/winmm/time.c              |  76 +++++++++++---
 server/fd.c                    |   2 +-
 7 files changed, 396 insertions(+), 25 deletions(-)
I think that if it adds this much complexity just for the sake of mimicking Windows more closely, and without actually fixing anything, it may indeed not be worth it. If and when we find an application that is broken by this behavior, then maybe we could reconsider it.
Then, IIRC, the DOSBox issue is just about winmm and its assumption that GetTickCount() resolution is 1ms on Wine, which is now incorrect. But it is incorrect on Windows too, regardless of this resolution business, right? So that definitely needs a fix.
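A small probe (not from the thread) that should make the distinction observable on Windows - timeGetTime() follows the resolution requested via timeBeginPeriod(), while GetTickCount() keeps its coarse ~15.6ms granularity:

    #include <stdio.h>
    #include <windows.h>
    #include <mmsystem.h>  /* link with winmm.lib / -lwinmm */

    int main(void)
    {
        DWORD t0, g0;

        timeBeginPeriod(1);  /* request 1ms resolution */
        t0 = timeGetTime();
        g0 = GetTickCount();
        Sleep(3);
        /* expected: timeGetTime advances by ~3ms, GetTickCount by 0 or ~16ms */
        printf("timeGetTime: +%lu ms, GetTickCount: +%lu ms\n",
               (unsigned long)(timeGetTime() - t0),
               (unsigned long)(GetTickCount() - g0));
        timeEndPeriod(1);
        return 0;
    }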