On my machine, one call to QueryPerformanceCounter() seems to take about 80 ns.
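For reference, that kind of overhead can be measured with a trivial loop; a minimal sketch, where the iteration count and the averaging are arbitrary choices of mine:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end, dummy;
    unsigned int i, count = 10000000;

    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    for (i = 0; i < count; ++i)
        QueryPerformanceCounter(&dummy);
    QueryPerformanceCounter(&end);

    /* Average wall-clock cost of a single QueryPerformanceCounter() call. */
    printf("~%.1f ns per call\n",
            (end.QuadPart - start.QuadPart) * 1e9 / freq.QuadPart / count);
    return 0;
}
```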
How does that compare to e.g. QueryInterruptTime()?
Currently, in Wine, Query(Unbiased)InterruptTime() has the same resolution as GetTickCount(), which is "16 ms, or whenever someone does a server call". That's not good enough here.
On Windows it has 1 ms resolution, or at least it does if someone calls timeBeginPeriod(). But I think Alexandre has resisted turning up the resolution in the server due to performance concerns.
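For what it's worth, the granularity is easy to probe: busy-wait until QueryInterruptTime() changes and print the step size. A sketch, assuming a Windows 10 SDK (QueryInterruptTime() lives in realtimeapiset.h) and linking against winmm for timeBeginPeriod():

```c
#include <windows.h>
#include <realtimeapiset.h>
#include <stdio.h>

int main(void)
{
    ULONGLONG prev, cur;
    unsigned int i;

    timeBeginPeriod(1); /* request 1 ms timer resolution; comment out to compare */

    QueryInterruptTime(&prev);
    for (i = 0; i < 10; )
    {
        QueryInterruptTime(&cur);
        if (cur == prev)
            continue;
        printf("step: %.3f ms\n", (cur - prev) / 10000.0); /* 100 ns units */
        prev = cur;
        ++i;
    }

    timeEndPeriod(1);
    return 0;
}
```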
If we are worried about performance—and I'm not really sure whether to be worried—then I'm open to trying the aforementioned approach with a separate submit thread.
How about something like CreateTimerQueueTimer() or SetWaitableTimer()?
Yes, that's what I was planning to use. Actually it'd even be enough to use a regular sleep or wait, at least on Wine, but I'm not sure that's quite reliable on Windows.
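Roughly, I'd expect the timer-queue variant to look something like this; the event, the callback, and the 1 ms period here are illustrative assumptions, not taken from the patch:

```c
#include <windows.h>

static HANDLE submit_event;
static HANDLE submit_timer;

/* Fires every period; just wakes the submit thread, which then decides
 * whether a flush is actually due. */
static void CALLBACK periodic_submit_cb(void *param, BOOLEAN fired)
{
    (void)param;
    (void)fired;
    SetEvent(submit_event);
}

static BOOL start_periodic_submit(void)
{
    if (!(submit_event = CreateEventW(NULL, FALSE, FALSE, NULL)))
        return FALSE;
    /* Default timer queue, 1 ms due time, 1 ms period. */
    return CreateTimerQueueTimer(&submit_timer, NULL, periodic_submit_cb,
            NULL, 1, 1, WT_EXECUTEDEFAULT);
}
```

The submit thread would then WaitForSingleObject() on submit_event instead of sleeping, which sidesteps the question of how reliable a plain Sleep(1) is on Windows.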
I constructed a simple benchmark that just makes a bunch of draw calls per frame (and nothing else). The benchmark is artificial, but it shows a consistent performance hit from this commit:
- 10000 draws: 217 vs 183 fps
- 2000 draws: 860 vs 772 fps
- 500 draws: 1900 vs 1840 fps
Is that just the `QueryPerformanceCounter()` overhead? I.e., what does the benchmark say if you always return false from `should_periodic_submit()`?
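I.e., a stub along these lines; the real function's signature presumably takes the queue or context, which isn't shown here:

```c
#include <stdbool.h>

/* Hypothetical stub for the experiment: never submit periodically, so the
 * QueryPerformanceCounter() call inside the real check is never made. */
static bool should_periodic_submit(void)
{
    return false;
}
```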
So... I don't know what I was testing originally, but I can't reproduce that huge difference anymore. Skipping `should_periodic_submit()` entirely, I see 217 vs 213 fps for 10000 draws, and no consistent measurable difference for 2000 draws.
I'm pretty sure my original tests were at least missing patch 1/2, so maybe that made the difference.