http://bugs.winehq.org/show_bug.cgi?id=28056
--- Comment #29 from Jörg Höhle hoehle@users.sourceforge.net 2011-09-26 13:06:33 CDT ---
Here's my analysis of the logs. The result is not pretty.
+ Position deltas are related to deltas in time.
- What Wine computes as the stream latency looks more like what ALSA returns as snd_pcm_delay, i.e. how long it will take for the next written sample (incl. buffering) to hit the speaker. Indeed it's based on SNDCTL_DSP_GETODELAY, which should be used by GetPosition, not GetStreamLatency.
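Roughly, a GetPosition-style computation could use that ioctl like this (illustration only, not Wine's actual code; the fd, frame size and written-frames counter are made-up names):

/* Sketch: derive the play position from SNDCTL_DSP_GETODELAY instead of
 * reporting that delay as the stream latency. */
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <stdint.h>

static int get_play_position(int oss_fd, uint64_t frames_written,
                             unsigned frame_size, uint64_t *position)
{
    int queued_bytes = 0;

    /* Bytes written to the device but not yet played. */
    if (ioctl(oss_fd, SNDCTL_DSP_GETODELAY, &queued_bytes) < 0)
        return -1;

    uint64_t queued_frames = (uint64_t)queued_bytes / frame_size;

    /* Frames that have actually reached the speaker so far. */
    *position = frames_written > queued_frames
              ? frames_written - queued_frames : 0;
    return 0;
}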
- Each iteration takes too long: 256ms instead of slightly more than 100ms. Either that, or the HighPerformanceTimer is not trustworthy.
- Mateusz' Ubuntu OSS system seems to suffer particularly badly from this, typically taking 550ms per iteration. Meanwhile, "GetBuffer large (22500)" at every iteration signals that this many frames really do get drained in that period (so the HP timer is probably right); see the cross-check sketch below.
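That cross-check could be coded like this (illustration only; the names and the 10% tolerance are my own):

/* If the frames drained per iteration, converted to time at the stream's
 * sample rate, roughly match the wall-clock delta, the high-performance
 * timer is not the culprit. */
#include <math.h>
#include <stdint.h>

static int timer_agrees_with_drain(uint64_t frames_drained,
                                   unsigned sample_rate_hz,
                                   double elapsed_seconds)
{
    double drained_seconds = (double)frames_drained / sample_rate_hz;

    /* E.g. 22500 frames at 44100 Hz is ~510 ms, close to the ~550 ms
     * iterations seen in the log. */
    return fabs(drained_seconds - elapsed_seconds) < 0.10 * elapsed_seconds;
}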
What this means is that apps will spend a significant amount of time waiting, leaving less time for other things. That might explain:
err:ntdll:RtlpWaitForCriticalSection section 0x12ba8c "?" wait timed out in thread 002c, blocked by 0009, retrying (60 sec)
Cause: certainly blocking mode, http://www.winehq.org/pipermail/wine-devel/2011-September/092519.html
- Mateusz' system without vmix uses 192000Hz as the default frequency. Obviously the card supports that, but IMHO it doesn't make sense in the context of Wine. The code simply uses audioinfo's max_rate.
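Picking a sensible default instead could look like this (illustration only, assuming an OSS4 sys/soundcard.h that defines oss_audioinfo; the 48000Hz preference is my assumption, not necessarily what Wine should use):

#include <sys/soundcard.h>

/* Prefer a common rate and only fall back to the card's limits. */
static int choose_default_rate(const oss_audioinfo *ai)
{
    const int preferred = 48000;

    if (preferred >= ai->min_rate && preferred <= ai->max_rate)
        return preferred;

    return ai->max_rate < preferred ? ai->max_rate : ai->min_rate;
}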
- Incorrect underrun handling. Even after a 1s sleep, 500ms worth of samples was not drained entirely. Or perhaps remaining samples smaller than some fragment size are left unplayed? Perhaps that's an artefact of the ugly workaround in GetCurrentPadding?
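For illustration only (not Wine's GetCurrentPadding; names are made up): SNDCTL_DSP_GETOSPACE accounts in fragments, so a queued tail smaller than one fragment can easily be misreported.

#include <sys/ioctl.h>
#include <sys/soundcard.h>

/* Bytes still queued for playback, derived from the free-space report. */
static int queued_bytes_from_getospace(int oss_fd, int *queued)
{
    audio_buf_info bi;

    if (ioctl(oss_fd, SNDCTL_DSP_GETOSPACE, &bi) < 0)
        return -1;

    /* bi.bytes is the space writable without blocking; the difference to
     * the full buffer is what remains queued. */
    *queued = bi.fragstotal * bi.fragsize - bi.bytes;
    return 0;
}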
Why do Alex and Mateusz get very different logs? I suspect that, due to the bugs, differences in OSS buffer and fragment sizes have a tremendous impact. They shouldn't.