Hi Folks,
I ran into a problem with PowerPoint and winmm timer events that has led me to radically alter this file.
However, I'd appreciate a gut check from anyone more experienced with it than I.
The fundamental problem is that PowerPoint does a timeBeginPeriod(1), trying to set the timer resolution to 1 ms; we steadfastly discard that and continue generating time updates and events on a roughly 10 ms interval. This can, in rare situations, lead to some awkward delays.
Some investigation with a test program shows that most Windows systems start with a default resolution of 1 ms. This test program is attached; the 'average' reported at the top is the default resolution. Further, Windows seems to steadfastly ignore any timeBeginPeriod(x) where x is greater than the current period. I surveyed about six systems, a mix of Win98, Win2k, and WinXP; all but one rogue Win2k system had a 1 ms timer interval (the rogue system had a 10 ms interval; why, I do not know). The MSDN pages suggest (but don't state clearly) that this is the expected behavior.
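For illustration, a minimal sketch of how such a measurement can be made; this is not the attached program, it just shows the idea: run a 1 ms periodic timer without touching timeBeginPeriod, record timeGetTime() at each callback, and report the average interval.

#include <stdio.h>
#include <windows.h>
#include <mmsystem.h>       /* timeSetEvent, timeGetTime; link with winmm */

/* Sketch only, not the attached test program: the average interval
 * between 1 ms periodic callbacks approximates the default resolution. */

#define SAMPLES 1000

static DWORD ticks[SAMPLES];
static volatile LONG count;

static void CALLBACK tick_proc(UINT id, UINT msg, DWORD_PTR user,
                               DWORD_PTR dw1, DWORD_PTR dw2)
{
    LONG i = InterlockedIncrement(&count) - 1;
    if (i < SAMPLES) ticks[i] = timeGetTime();
}

int main(void)
{
    UINT id = timeSetEvent(1, 0, tick_proc, 0,
                           TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    if (!id) return 1;
    while (count < SAMPLES) Sleep(100);
    timeKillEvent(id);
    printf("average: %.2f ms\n",
           (ticks[SAMPLES - 1] - ticks[0]) / (double)(SAMPLES - 1));
    return 0;
}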
This behavior essentially means that on most Windows systems the winmm timer resolution is 1 ms and cannot be changed.
So, question #1: anyone object if I fix the timer resolution at 1 ms?
Next, if I make this change, the code in time.c, which is fairly awkward IMHO, becomes even less efficient: we'd basically be scanning a loop every millisecond for no particularly good reason.
I couldn't stomach this, so I rewrote it. Question #2: anyone object to my change? I have tested it with my attached test program, and I appear to have gotten it right, but...
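The shape of the rewrite, in sketch form only; the timer list, lock, and names below are simplifications, not the actual time.c code. The worker computes the shortest remaining delta and waits on a wakeup event with that timeout, instead of waking on every tick.

#include <windows.h>
#include <mmsystem.h>       /* timeGetTime; link with winmm */

typedef struct timer_entry
{
    struct timer_entry *next;
    DWORD expires;               /* absolute expiry time in ms */
    void (*callback)(void);
} timer_entry;

static timer_entry     *timer_list;   /* protected by time_cs */
static CRITICAL_SECTION time_cs;
static HANDLE           wakeup;       /* auto-reset; set when a timer is added */

static DWORD WINAPI timer_thread(LPVOID arg)
{
    for (;;)
    {
        DWORD now, delta = INFINITE;
        timer_entry *t;

        EnterCriticalSection(&time_cs);
        now = timeGetTime();
        for (t = timer_list; t; t = t->next)
        {
            if ((INT)(t->expires - now) <= 0)
                t->callback();            /* fire; removal/reschedule omitted */
            else if (t->expires - now < delta)
                delta = t->expires - now; /* shortest remaining wait */
        }
        LeaveCriticalSection(&time_cs);

        /* Sleep until the next deadline, or forever if no timer is due.
           Creating a timer sets 'wakeup', so a new, shorter delta is
           picked up immediately. */
        WaitForSingleObject(wakeup, delta);
    }
    return 0;
}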
Thoughts? Comments? Flames? Suggestions I go back to my day job? <grin>
Cheers,
Jeremy
Jeremy White wrote:
> The fundamental problem is that PowerPoint does a timeBeginPeriod(1), trying to set the timer resolution to 1 ms; we steadfastly discard that and continue generating time updates and events on a roughly 10 ms interval. This can, in rare situations, lead to some awkward delays.
> Some investigation with a test program shows that most Windows systems start with a default resolution of 1 ms. This test program is attached; the 'average' reported at the top is the default resolution. Further, Windows seems to steadfastly ignore any timeBeginPeriod(x) where x is greater than the current period. I surveyed about six systems, a mix of Win98, Win2k, and WinXP; all but one rogue Win2k system had a 1 ms timer interval (the rogue system had a 10 ms interval; why, I do not know). The MSDN pages suggest (but don't state clearly) that this is the expected behavior.
MM timers were created to give better resolution than the old DOS ticker (55 ms, 18.2 ticks per second), but this requires specific hardware support, hence the various results you get. I would assume that on recent boxes you get 1 ms as the lowest resolution.
> This behavior essentially means that on most Windows systems the winmm timer resolution is 1 ms and cannot be changed.
> So, question #1: anyone object if I fix the timer resolution at 1 ms?
> Next, if I make this change, the code in time.c, which is fairly awkward IMHO, becomes even less efficient: we'd basically be scanning a loop every millisecond for no particularly good reason.
> I couldn't stomach this, so I rewrote it. Question #2: anyone object to my change? I have tested it with my attached test program, and I appear to have gotten it right, but...
The current code had been written (hmmm) with the assumption that GetTickCount() could have a resolution of 55 ms, hence you couldn't rely on it and had to do the job yourself to get better resolution.
Wine provides 1 ms resolution in GetTickCount, and ReactOS appears to as well (from a quick browse of the code); given how few cases there are where this winmm will be run on Windows itself, the basic proposition sounds fine.
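For illustration, GetTickCount() granularity is easy to observe directly; a sketch (illustrative code, not anything from the patch): spin until the value changes and average the step size.

#include <stdio.h>
#include <windows.h>

/* On a system with a 55 ms tick this reports roughly 55;
 * on Wine it reports roughly 1. */
int main(void)
{
    DWORD first, prev, now;
    int steps = 0;

    first = prev = GetTickCount();
    while (steps < 100)
    {
        now = GetTickCount();
        if (now != prev) { prev = now; steps++; }
    }
    printf("average step: %.2f ms\n", (prev - first) / 100.0);
    return 0;
}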
From an implementation point of view, there's something wrong with it: the time returned from TIME_MMSysTimeCallback (the time to wait for the next timer to elapse) only takes into account the timers that existed when the function was entered, not those created or deleted while the callbacks were running.
Another point: with your proposal we could use the fact that the global time no longer needs to be maintained, and hence destroy the worker thread when no more timers exist.
A+
Hi Eric,
Thanks for responding.
> From an implementation point of view, there's something wrong with it: the time returned from TIME_MMSysTimeCallback (the time to wait for the next timer to elapse) only takes into account the timers that existed when the function was entered, not those created or deleted while the callbacks were running.
I think this is safe; the wakeup event would be signalled if any timers were added while we were in this function, so we would immediately return to it and recompute.
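In sketch form, reusing the assumed names from the earlier sketch; the creation path links the timer in under the lock and then sets the wakeup event, so the worker's wait returns and the delta is recomputed.

/* Sketch of the creation path, with the same assumed names as above. */
static void add_timer(timer_entry *t)
{
    EnterCriticalSection(&time_cs);
    t->next = timer_list;
    timer_list = t;
    LeaveCriticalSection(&time_cs);
    SetEvent(wakeup);   /* worker re-enters its loop immediately */
}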
> Another point: with your proposal we could use the fact that the global time no longer needs to be maintained, and hence destroy the worker thread when no more timers exist.
I hadn't considered that; I had thought that having the thread sleep for INFINITE was a pretty nice improvement, but your proposal is arguably even better. The only case where it would be a poor choice is a situation where timers are being created and destroyed continually.
But maybe we should start with this patch (and deal with the bugs it introduces <grin>), and I can mark a FIXME to make that improvement in the future.
Cheers,
Jeremy
Jeremy White wrote:
>> From an implementation point of view, there's something wrong with it: the time returned from TIME_MMSysTimeCallback (the time to wait for the next timer to elapse) only takes into account the timers that existed when the function was entered, not those created or deleted while the callbacks were running.
> I think this is safe; the wakeup event would be signalled if any timers were added while we were in this function, so we would immediately return to it and recompute.
Yup; I mixed that up with the fact that you don't set the event when destroying a timer, but that shouldn't hurt too much: in the worst case, you'll re-enter the loop and recompute the correct value. On rereading the patch: for a one-shot timer, you could set the delta_value to INFINITE (since one-shot timers are destroyed when they expire). In terms of other optimizations, you can remove the call to TIME_MMTimeStart in most places, except at timer creation; the other places rely on it to keep WINMM_SysTimeMS updated, but since you killed it...
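A sketch of that expiry handling, extending the earlier sketch's timer_entry with assumed 'flags' and 'period' fields (illustrative only; note that TIME_ONESHOT is zero, so the test is for the absence of TIME_PERIODIC). A fired one-shot timer is unlinked and freed, so it never contributes to the next delta.

/* Also assumes <stdlib.h> for free(). */
static DWORD fire_expired(timer_entry **list, DWORD now)
{
    DWORD delta = INFINITE;
    timer_entry **pt = list;

    while (*pt)
    {
        timer_entry *t = *pt;
        if ((INT)(t->expires - now) <= 0)
        {
            t->callback();
            if (!(t->flags & TIME_PERIODIC))  /* one-shot: destroyed on expiry */
            {
                *pt = t->next;
                free(t);
                continue;
            }
            t->expires = now + t->period;     /* periodic: reschedule */
        }
        if (t->expires - now < delta)
            delta = t->expires - now;
        pt = &t->next;
    }
    return delta;    /* timeout for the worker's next wait */
}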
>> Another point: with your proposal we could use the fact that the global time no longer needs to be maintained, and hence destroy the worker thread when no more timers exist.
> I hadn't considered that; I had thought that having the thread sleep for INFINITE was a pretty nice improvement, but your proposal is arguably even better.
I'd mainly like to get rid of the poor anti-race scheme that's implemented (we can do better than that).
> The only case where it would be a poor choice is a situation where timers are being created and destroyed continually.
And it's application-dependent :-(
> But maybe we should start with this patch (and deal with the bugs it introduces <grin>), and I can mark a FIXME to make that improvement in the future.
Sure. A+
> In terms of other optimizations, you can remove the call to TIME_MMTimeStart in most places, except at timer creation; the other places rely on it to keep WINMM_SysTimeMS updated, but since you killed it...
Good point.
> I'd mainly like to get rid of the poor anti-race scheme that's implemented (we can do better than that).
Done.
> But maybe we should start with this patch (and deal with the bugs it introduces <grin>), and I can mark a FIXME to make that improvement in the future.
Done.
Full patch attached; change from patch2 attached as well; if this looks clean, I'll submit it formally.
Cheers,
Jeremy
I'm using mm timers on Win95 in one of my programs to create a more precise Sleep function; I'm using one-shot timers for this, so any extra overhead will hurt performance. I already have interthread suspending/resuming with thread handle duplication.
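The basic pattern looks something like this; a minimal sketch, where PreciseSleep is an assumed name rather than the actual code from the program.

#include <windows.h>
#include <mmsystem.h>       /* timeSetEvent, timeKillEvent; link with winmm */

/* Sketch of a higher-resolution Sleep built on a one-shot multimedia
 * timer.  With TIME_CALLBACK_EVENT_SET, the callback parameter is
 * interpreted as an event handle that the timer sets when it expires. */
static void PreciseSleep(DWORD ms)
{
    HANDLE event = CreateEvent(NULL, FALSE, FALSE, NULL);
    UINT id;

    timeBeginPeriod(1);                 /* ask for 1 ms resolution */
    id = timeSetEvent(ms, 1, (LPTIMECALLBACK)event, 0,
                      TIME_ONESHOT | TIME_CALLBACK_EVENT_SET);
    if (id)
    {
        WaitForSingleObject(event, ms + 100);   /* wait for expiry */
        timeKillEvent(id);
    }
    timeEndPeriod(1);                   /* MSDN: match every Begin */
    CloseHandle(event);
}

(Note the matched timeEndPeriod call, per the MSDN requirement mentioned below.)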
As I recall, all the systems I was using at the time had a default resolution of 10 ms.
MSDN says that each timeBeginPeriod call should be matched with a timeEndPeriod call. When I stopped the program in the middle of debugging it, I had a resolution of 1 ms thereafter; it seems that native Windows uses the same resolution for the whole system.
Vitaliy Margolen