http://bugs.winehq.org/show_bug.cgi?id=28723
--- Comment #23 from Alexey Loukianov <mooroon2@mail.ru> 2011-11-03 12:23:37 CDT ---
(In reply to comment #21)
> I've never seen native tests report anything else besides 10.0000 and 10.1587ms (alignment for Intel HDA) shared mode default periods. So reporting anything else is out of the question.
IMHO it would be interesting to test the reported values on hardware other than Intel HDA. What I've got here and would be able to test on is a laptop with Win7 Starter installed. This laptop is equipped with a Conexant HDA codec, so it would be interesting to take a look at the GetDefaultPeriod values Core Audio reports on it. Actually, it might be reasonable to ask users on the Wine forums to run some tests on Seven and Vista with different types of audio hardware, so we'd get the full picture and would be able to determine the range that is sane to use for the default and minimum periods, and also for the duration.
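A minimal test along these lines (a sketch in plain C; the actual API call is IAudioClient::GetDevicePeriod; COM boilerplate included, error handling omitted for brevity) could be:

#define COBJMACROS
#include <stdio.h>
#include <windows.h>
#include <initguid.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

int main(void)
{
    IMMDeviceEnumerator *enu;
    IMMDevice *dev;
    IAudioClient *ac;
    REFERENCE_TIME def_period, min_period; /* both in 100ns units */

    CoInitialize(NULL);
    CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                     &IID_IMMDeviceEnumerator, (void**)&enu);
    IMMDeviceEnumerator_GetDefaultAudioEndpoint(enu, eRender, eConsole, &dev);
    IMMDevice_Activate(dev, &IID_IAudioClient, CLSCTX_ALL, NULL, (void**)&ac);
    IAudioClient_GetDevicePeriod(ac, &def_period, &min_period);
    printf("default period: %.4f ms, minimum period: %.4f ms\n",
           def_period / 10000.0, min_period / 10000.0);
    return 0;
}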
> Alexey, how does that app react if you internally set duration to 30ms even when it asks for 20ms? From your description, it would initially fill half of it, i.e. 15ms, then receive an event and fill 10ms more, which would not be that bad.
Haven't tested that yet; going to try it tonight.
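The kind of hack I'd try is roughly this (an illustrative sketch only, not actual winealsa.drv code):

#include <windows.h>  /* REFERENCE_TIME */

#define MS_TO_REFTIME(ms) ((REFERENCE_TIME)(ms) * 10000) /* ms -> 100ns units */

/* Hypothetical helper for IAudioClient::Initialize: silently enlarge the
 * requested shared-mode duration to at least 30ms before sizing the buffer. */
static REFERENCE_TIME clamp_duration(REFERENCE_TIME requested)
{
    if (requested < MS_TO_REFTIME(30))  /* e.g. XA2 asking for 20ms */
        requested = MS_TO_REFTIME(30);
    return requested;
}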
> Does GetStreamLatency influence XA2?
It looks like XA2 doesn't call it at all. At least I wasn't able to grep any invocation of it in all the logs I've collected during my experiments.
> Of course, what would really be nice is to signal events at the earliest possible time when timing is tight, i.e. ALSA -> mmdevapi -> app. The cascade of periodic timers obviously causes latency.
Agreed. I've tried to quickly hack usage of async ALSA PCM callbacks into winealsa.drv but failed to do it properly. Looks like I have to read more and improve my knowledge of the ALSA side to be able to do it right (or come to the conclusion that async ALSA I/O can't be used as a solution for this case).
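For the record, the rough shape of what I was attempting on the ALSA side (a simplified sketch under my current understanding, not the actual failed patch):

#include <alsa/asoundlib.h>

/* Fires (via SIGIO) whenever the hardware crosses a period boundary;
 * from here one would SetEvent() the handle the app registered with
 * IAudioClient::SetEventHandle, instead of relying on a periodic timer. */
static void period_elapsed_cb(snd_async_handler_t *handler)
{
    void *event_handle = snd_async_handler_get_callback_private(handler);
    (void)event_handle; /* signal the mmdevapi event here */
}

static int install_async_handler(snd_pcm_t *pcm, void *event_handle)
{
    snd_async_handler_t *handler;
    return snd_async_add_pcm_handler(&handler, pcm, period_elapsed_cb,
                                     event_handle);
}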
> I believe native can get away with a buffer size slightly > 11ms. Presumably, when the periodic 10ms event fires, the mixer has mixed 10ms of data that was immediately fed to the HW and the app has ~9ms to provide the next chunk.
Emm, I'm a bit confused and can't understand what you're writing about here. A buffer size of 11ms - do you mean the "duration" requested by the app? Actually, I can easily write an app that uses 10ms (and maybe even smaller) buffers with the current winealsa mmdevdrv implementation without hitting underruns. What is needed is to poll GetCurrentPadding frequently enough and feed in data as soon as the buffer has some free space available. XA2 hits the bug because it only pumps out data in 10ms chunks, and only checks whether there's enough free buffer space at event fire times. Pair that with a small buffer size (2x DefaultPeriod) and events firing only roughly once per period, and we've got a perfect way to produce underruns.
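To illustrate, such an app's feeder loop would look roughly like this (a sketch assuming already-initialized client/render interfaces, error handling omitted):

#define COBJMACROS
#include <windows.h>
#include <audioclient.h>

static void feeder_loop(IAudioClient *client, IAudioRenderClient *render)
{
    UINT32 buffer_frames, padding, free_frames;
    BYTE *buf;

    IAudioClient_GetBufferSize(client, &buffer_frames);
    for (;;)
    {
        IAudioClient_GetCurrentPadding(client, &padding);
        free_frames = buffer_frames - padding;
        if (free_frames)
        {
            IAudioRenderClient_GetBuffer(render, free_frames, &buf);
            /* ... render free_frames worth of audio into buf ... */
            IAudioRenderClient_ReleaseBuffer(render, free_frames, 0);
        }
        Sleep(1); /* poll at ~1ms granularity, well below the 10ms period */
    }
}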
> Wine needs more latency because mixer and event and HW are not synchronised, e.g. winmm has a periodic feeder thread, mmdevapi adds another one, etc.
winmm is unrelated to this bug, but you're generally right: the various mixing threads and buffers in the APIs downstream from the app to the OS sound driver add up to a lot of latency. But the increased output latency isn't a show-stopper for most use cases (one would use something like wineasio->jack in cases where latency really matters), while the underruns caused by the race conditions this desync introduces are a real pain.
IMO, what winealsa.drv mmdevdrv is actually doing is emulating "DMA ring buffer with period boundary crossing interrupts" behavior in software, while the reported "padding" value is fetched from the OS driver (ALSA) and is not closely in sync with the emulated ring buffer's "padding". As the emulated period size and the real ALSA period size are not the same (and it is not ensured/guaranteed that the emulated HW period size is a multiple of the ALSA hw period size), and the timer events are not synchronized with the real hardware events, the padding reported by winealsa.drv mmdevdrv will always lag a bit behind the real HW padding value (by around one ALSA hw period at most).

What I'm going to test tonight is reporting padding based on the amount of data that has been uploaded into ALSA, rather than on the amount of free buffer space ALSA reports. While obviously wrong (it would tell an app that some samples have already been played when they actually might not have been), it might be yet another working workaround for this bug.
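In rough pseudo-C the change boils down to this (hypothetical helper names, not the actual driver code):

#include <windows.h>
#include <alsa/asoundlib.h>

/* Current behavior, simplified: padding is derived from ALSA's free-space
 * report, which lags the emulated ring buffer by up to one ALSA hw period. */
static UINT32 padding_from_alsa(snd_pcm_t *pcm, snd_pcm_uframes_t alsa_bufsize)
{
    snd_pcm_sframes_t avail = snd_pcm_avail_update(pcm);
    return (UINT32)(alsa_bufsize - avail);
}

/* Workaround to test: do the bookkeeping ourselves. written_frames would be
 * bumped on every snd_pcm_writei(); played_frames advanced by one emulated
 * period per timer tick. May claim frames were played slightly early. */
static UINT64 written_frames, played_frames;

static UINT32 padding_from_bookkeeping(void)
{
    return (UINT32)(written_frames - played_frames);
}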