http://bugs.winehq.org/show_bug.cgi?id=28723
--- Comment #53 from Alexey Loukianov <mooroon2@mail.ru> 2011-11-24 20:35:46 CST ---
(In reply to comment #51)
> Instead the audio driver should really be written in such a way that if it's fed more audio data, e.g. 333ms, it won't depend on a wake-up every 10ms. ... IOW, I'm opposed to a design that depends on 10ms feeder threads just because that would make Rage (or any 20ms XA2 client) work. XA2 ought to work, yet other apps with large buffer sizes must not stress the system ^H^H produce reliable audio output even if a 10ms event is missed.
I have already posted this and want to point it out again: it seems that the native system (W7 at least; I haven't had a chance to test on Vista yet) behaves such that latency is controlled purely by the amount of data an app feeds into the buffer. This means that de facto the native infrastructure is designed in a way that allows "dynamic latency control". If I'm content with 0.5s latency, no problem: I just feed 0.5s of audio data into the audio subsystem and make sure to wake up, say, every 0.25s and top up the buffer so it contains 0.5s of data again. As soon as I need lower latency, I simply wait until the buffer has only the desired amount of data left (roughly equal to the target latency) and start feeding the audio core more frequently. It is perfectly OK to do this from the same thread of the same app.

This makes me believe that to satisfy these concerns the driver has to use the smallest period available on the system. OTOH I'm also against abusing the timer API to produce hi-res events (10ms or even less), and I'm against stressing the system hard to achieve small latency in cases where it isn't strictly required. Our goal is to find a code design for the Wine mmdevapi audio driver that is "less abusive but still does the trick". Experimenting with patches vs. latency and testing on native systems is fun, but to come up with a final patch we will eventually have to discuss and agree on how the audio driver should be designed. By "we" here I mostly mean Jörg and Andrew, as the main developers working on the Wine audio subsystem codebase.
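To make the pattern concrete, here is a minimal sketch of that feeding strategy using the public WASAPI shared-mode API. This is my illustration, not code from the native driver or from any Wine patch; the FillBuffer() helper and the target_frames/wake_period_ms parameters are hypothetical names:

    #define COBJMACROS
    #include <windows.h>
    #include <audioclient.h>

    /* Hypothetical audio generator: writes 'frames' frames into buf. */
    extern void FillBuffer(BYTE *buf, UINT32 frames);

    /* Keep the stream topped up to target_frames, waking every
     * wake_period_ms.  target_frames must fit within the stream's
     * buffer size as returned by GetBufferSize(). */
    static void feed_loop(IAudioClient *client, IAudioRenderClient *render,
                          UINT32 target_frames, DWORD wake_period_ms)
    {
        for (;;)
        {
            UINT32 padding;  /* frames still queued, i.e. not yet played */
            IAudioClient_GetCurrentPadding(client, &padding);

            if (padding < target_frames)
            {
                UINT32 todo = target_frames - padding;
                BYTE *data;
                if (SUCCEEDED(IAudioRenderClient_GetBuffer(render, todo, &data)))
                {
                    FillBuffer(data, todo);
                    IAudioRenderClient_ReleaseBuffer(render, todo, 0);
                }
            }
            /* 0.5s target with 250ms wakes, or a small target with 10ms
             * wakes: same loop, different parameters - that is the
             * "dynamic latency control" described above. */
            Sleep(wake_period_ms);
        }
    }

The point is that the loop itself never changes; the app trades latency against wake-up frequency just by picking the two parameters, which matches the behavior I observed on native.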
> The general lesson is that I can't expect a Linux system to give repeated 10ms "interrupt" response. There will be random (=rare) cases with higher latency (up to 100 ms with user processes).
Sure, but it can easily be shown experimentally that high-latency cases are really rare on a moderately loaded system, and sizing for the worst-case latency just to get rid of underruns entirely doesn't seem reasonable to me. IMO it'd be much better to target a 30-50ms latency level and accept that there will be occasional cases where a userland scheduling latency spike causes an underrun.
> I also think that it would be very revealing to write the following test case: on every (10ms) event, feed 5ms of data. ... Obviously, such a test should be performed by somebody with access to both systems. I have neither.
Hmm, I'll try to hack up such a test this weekend. I've got Win7 installed on my netbook, and I have a relative who owns a laptop with Vista installed. Another way to test on Vista might be to use the IE-testing virtual appliances offered for download by MS - as far as I recall, one of them was Vista-based and was capable of running other Win32 binaries besides the pre-installed IE stuff.
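For reference, here is a rough sketch of how I read the proposed experiment (my code, not an actual patch; the client/render objects are assumed to be initialized elsewhere, and frames_per_5ms would be sample_rate / 200 for the chosen mix format): wake nominally every 10ms, submit only 5ms worth of frames, and log whenever the render queue has drained:

    #define COBJMACROS
    #include <windows.h>
    #include <audioclient.h>

    static void starve_test(IAudioClient *client, IAudioRenderClient *render,
                            UINT32 frames_per_5ms)
    {
        for (;;)
        {
            UINT32 padding;
            BYTE *data;

            IAudioClient_GetCurrentPadding(client, &padding);
            if (padding == 0)
                OutputDebugStringA("underrun: render queue drained\n");

            if (SUCCEEDED(IAudioRenderClient_GetBuffer(render,
                                                       frames_per_5ms, &data)))
                /* SILENT flag: the engine plays silence for these frames,
                 * so there is no need to fill the buffer for this test. */
                IAudioRenderClient_ReleaseBuffer(render, frames_per_5ms,
                                                 AUDCLNT_BUFFERFLAGS_SILENT);

            /* Nominal 10ms period; the scheduling jitter of this Sleep()
             * is exactly what the test probes on each system. */
            Sleep(10);
        }
    }

Running this on native W7/Vista and under Wine should show how each system behaves when an app feeds less data than it consumes per period.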