Hi,
Maarten Lankhorst wrote:
only oss4 and (maybe, just guessing?) coreaudio don't support float
MacOS' CoreAudio supports floats. Actually, given that it's an entire audio pipeline like Quartz, I bet it even prefers floats.
I also refer you to GetStreamLatency where I was getting the period from,
Please take it from me: IT'S OK TO FAIL THOSE MMDEVAPI PERIOD TESTS,
I agree that some tests are extremely strict and that a "good enough" driver could work extremely well in practice despite failing some tests. However, please specify exactly which tests you have in mind so that we all talk about the same set.
BUT PLEASE DON'T LIE ABOUT THE HARDWARE CAPABILITIES REGARDING MINIMUM PERIOD, DEFAULT PERIOD AND STREAM LATENCY.
The reasoning is as follows: First, tests show that native exhibits a 10ms period, hence we can assume that all PC SW has been tested with that period only. Using another period in Wine might reveal bugs in the apps, but users would blame Wine.
This is why I argued, as recently as last week in bug #28622, comment #58, that Wine should fully decouple ALSA/OSS periods from the published 10ms mmdevapi periods.
OTOH, winecoreaudio has been using 20ms periods since day 1 in 2011. I'm not aware of bugs in bugzilla caused by that deviation.
Second, the shared mode audio engine IMHO presents an abstraction over any particular HW, providing the same behaviour on all machines and eliminating the need for app X to talk to or work around bugs in sound driver Y. Thus I wonder why you talk about hardware capabilities at all. IOW the shared mode caps are: fixed period=10ms and unlimited buffer (MSDN says 2s).
a) Minimum Period: Why do you care? It's only relevant in exclusive mode. I'd say that minimum = shared mode default period is a reasonable value. And if default=10ms, minimum=5ms seems ok if it has been tested and works well. OTOH, 200 periods and wakeups per second may be unrealistic as we already struggle to sustain 100.
Do you know an app that uses mmdevapi in exclusive mode?
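For what it's worth, querying both values is trivial; here's a minimal C sketch, assuming an already-activated IAudioClient *client (COBJMACROS style):

#define COBJMACROS
#include <stdio.h>
#include <windows.h>
#include <audioclient.h>

/* Query the periods mmdevapi publishes; both are REFERENCE_TIME values
 * in 100-ns units, so 100000 means 10 ms. */
static void dump_periods(IAudioClient *client)
{
    REFERENCE_TIME def_period, min_period;
    if (FAILED(IAudioClient_GetDevicePeriod(client, &def_period, &min_period)))
        return;
    printf("default %.4f ms, minimum %.4f ms\n",
           def_period / 10000.0, min_period / 10000.0);
}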
b) Default Period == shared mode audio engine period: The target is 10ms, see above.
Yet we could make exceptions: for instance, we might observe that all apps using mmdevapi use it via the XAudio2 libraries and that XA2 is well debugged and would bear a larger period, e.g. 50ms.
There should be a registry setting for users to play with.
I'm not happy with Wine's DSound+WinMM->mmdevapi. That may be a good move for MS, but it's a bad move for Wine. Wine's audio drivers don't bang the HW, they have huge SW stacks like PA or Jack behind them.
c) Stream Latency
Bear in mind that the mmdevapi stream latency is a static property of an audio SW transformation graph. It does not vary if some buffer is full or empty. It is unlike snd_pcm_delay.
I believe winealsa outputs a reasonable stream latency. One may argue that it is too low: as winealsa always buffers 3 periods internally, the latency should count those. I kept it low because native's mmdevapi is reported to introduce 30-40ms delays, yet its GetStreamLatency ignores that and reports values like 10.6666ms.
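For clarity, here's a minimal sketch of the query, assuming an initialized shared-mode IAudioClient *client; the value is in 100-ns units and stays constant for the life of the stream, unlike snd_pcm_delay:

#define COBJMACROS
#include <stdio.h>
#include <windows.h>
#include <audioclient.h>

/* GetStreamLatency returns one fixed value for the stream's lifetime;
 * it does not track the current buffer fill the way snd_pcm_delay does. */
static void dump_stream_latency(IAudioClient *client)
{
    REFERENCE_TIME latency;  /* 100-ns units */
    if (SUCCEEDED(IAudioClient_GetStreamLatency(client, &latency)))
        printf("stream latency %.4f ms\n", latency / 10000.0);
}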
Perhaps we are wrong about GetStreamLatency. Perhaps it's always a local call to one element of the audio transformation graph and should never sum other elements itself. Perhaps the app should use some enumeration facility to walk the audio graph and sum each element's latency? Perhaps there's some native graph element that would report said 30-40ms latency?
Regards, Jörg Höhle
Hi,
Maarten Lankhorst wrote:
For example, Skyrim with a 36 ms stream latency will just buffer more data to compensate instead. But it can't do that if you report that it's fine to feed data every 5 ms.
Please elaborate. Every now and then I wish you used more words to explain your observations.
Where do the 5ms come from? What's the relationship with the 36ms latency?
You patch DSound, so I assume that Skyrim uses DSound, not mmdevapi directly. Shouldn't DSound export capabilities that make sense given its abstraction? IIRC, there's an earlier wine-devel mail or bug report where I talked about the alternating period-sized buffer abstraction underlying DSound's API, and how all positions being reported modulo the buffer size become problematic if someone misses the period intervals. Of course, that abstraction cannot explain a 36ms latency with two alternating period-sized "direct hardware" buffers of 5ms.
Wine's DSound must maintain a set of mutually consistent variables { play position, write position, buffer size, period size } that in addition need to make sense w.r.t. audio output. I'm not convinced that it does. There should be tests to verify that.
In particular, wine's DSound must not let play position leave the virtual DSound buffer, even if mmdevapi reports that it's 2 seconds behind with PulseAudio. (IMHO, should that happen, then DSound must cap the reported position).
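Here's a minimal sketch of the kind of invariant such a test could assert; buf_bytes and period_bytes are made-up parameters, not existing test code:

#define COBJMACROS
#include <windows.h>
#include <dsound.h>

/* Both cursors must stay inside the buffer, and the write cursor should
 * lead the play cursor by no more than a couple of periods (modulo wrap). */
static BOOL positions_consistent(IDirectSoundBuffer *buf,
                                 DWORD buf_bytes, DWORD period_bytes)
{
    DWORD play, write, lead;
    if (FAILED(IDirectSoundBuffer_GetCurrentPosition(buf, &play, &write)))
        return FALSE;
    if (play >= buf_bytes || write >= buf_bytes)
        return FALSE;
    lead = (write + buf_bytes - play) % buf_bytes;
    return lead <= 2 * period_bytes;
}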
(BTW, I'd find it easier for Wine if DSound's primary were equal to mmdevapi's buffer, except that with winecoreaudio, there's no such mmdevapi buffer...)
Regards, Jörg Höhle
On 24-10-12 12:52, Joerg-Cyril.Hoehle@t-systems.com wrote:
Hi,
Maarten Lankhorst wrote:
For example, Skyrim with a 36 ms stream latency will just buffer more data to compensate instead. But it can't do that if you report that it's fine to feed data every 5 ms.
Please elaborate. Every now and then I wish you used more words to explain your observations.
Where do the 5ms come from? What's the relationship with the 36ms latency?
Well, Skyrim is pretty aggressive about low latency. It seems to want to buffer only 20 ms when the default period is 10 ms, but when I did a bunch of tests with tons of underruns, my minimum and default periods ended up at 36 ms. Skyrim still ran correctly, and looking at 'pacmd list-sink-inputs' it had simply increased its buffer length in that case.
You patch DSound, so I assume that Skyrim uses DSound, not mmdevapi directly. Shouldn't DSound export capabilities that make sense given its abstraction? IIRC, there's an earlier wine-devel mail or bug report where I talked about the alternating period-sized buffer abstraction underlying DSound's API, and how all positions being reported modulo the buffer size become problematic if someone misses the period intervals.
Nah, when I originally started hacking on dsound I assumed Skyrim was using dsound, but it's only used for the intro. I'm running the Steam version of Skyrim with the Windows version set to win7.
I'm not sure yet how dsound should be fixed for bigger periods. I believe using IAudioClock for reporting the position *might* be the correct answer, but its behaviour isn't properly tested yet, so I chose the simple solution of directly reporting the next mix position as the play position, which is what the current code does.
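To illustrate the IAudioClock idea, a rough sketch (untested; bytes_per_sec and buf_bytes are made-up parameters):

#define COBJMACROS
#include <windows.h>
#include <audioclient.h>

/* Map the device position reported by IAudioClock onto a byte offset
 * inside the circular dsound buffer. */
static DWORD play_pos_from_audioclock(IAudioClock *clock,
                                      DWORD bytes_per_sec, DWORD buf_bytes)
{
    UINT64 freq, pos;
    if (FAILED(IAudioClock_GetFrequency(clock, &freq)) || !freq ||
        FAILED(IAudioClock_GetPosition(clock, &pos, NULL)))
        return 0;
    return (DWORD)((pos * bytes_per_sec / freq) % buf_bytes);
}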
Of course, that abstraction cannot explain a 36ms latency with two alternating period-sized "direct hardware" buffers of 5ms.
Indeed, Skyrim was creating the 20 ms buffer directly, and allocating a bigger buffer didn't help. So instead, when an application requests 2 default periods, I try to set the latency to half the requested period (5 ms instead of 10 ms) and I'm more aggressive about firing events when data can be written. PulseAudio seems to behave differently at lower latencies, though: pointer updates are no longer done at a fixed size, presumably because low latency puts PulseAudio in real danger of underruns, so it mixes as much as it can on each pass.
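Roughly, the heuristic is something like this (just a sketch of what I described above, with made-up names):

#include <windows.h>
#include <audioclient.h>

/* If an app asks for only two default periods (e.g. Skyrim's 20 ms buffer),
 * wake it at half the default period (5 ms instead of 10 ms) so it gets a
 * chance to refill in time; otherwise keep the normal period. */
static REFERENCE_TIME pick_wakeup_period(REFERENCE_TIME requested_buffer,
                                         REFERENCE_TIME def_period)
{
    if (requested_buffer <= 2 * def_period)
        return def_period / 2;
    return def_period;
}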
Wine's DSound must maintain a set of mutually consistent variables { play position, write position, buffer size, period size } that in addition need to make sense w.r.t. audio output. I'm not convinced that it does. There should be tests to verify that.
Run the dsound tests with WINETEST_INTERACTIVE=1.
In particular, wine's DSound must not let play position leave the virtual DSound buffer, even if mmdevapi reports that it's 2 seconds behind with PulseAudio. (IMHO, should that happen, then DSound must cap the reported position).
Once dsound has queued the data to mmdevapi it can no longer do anything about it, so instead of trying something more complicated, the best way to deal with it is to accept that fact and keep the queued latency as fixed as possible, and ideally small.
I create an 80 ms buffer. Please read MSDN's documentation of IAudioClient::Initialize: the requirement for shared mode is that it creates a buffer at least that large to prevent underruns, so if a 2-second buffer is created as a result, then somehow the audio system believes we should buffer 2 seconds. There's not much we can do about that, and according to the docs it looks like we should keep the buffer as filled as possible in that case. However, I believe this is unlikely to happen, and more likely we end up only having to buffer a fixed 80 ms.
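In other words, something like this minimal sketch, assuming a mix format *fmt obtained from IAudioClient_GetMixFormat:

#define COBJMACROS
#include <windows.h>
#include <audioclient.h>

/* Request an 80 ms shared-mode buffer; GetBufferSize then reports how many
 * frames the engine actually allocated, which may be (much) more. */
static UINT32 init_80ms(IAudioClient *client, const WAVEFORMATEX *fmt)
{
    UINT32 frames = 0;
    REFERENCE_TIME duration = 80 * 10000; /* 80 ms in 100-ns units */
    if (SUCCEEDED(IAudioClient_Initialize(client, AUDCLNT_SHAREMODE_SHARED, 0,
                                          duration, 0, fmt, NULL)))
        IAudioClient_GetBufferSize(client, &frames);
    return frames;
}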
80 ms is generous, by the way. If you're feeling lucky you can set the dsound latency much lower; 40 ms of buffering would probably be the lowest safe value. And if you're feeling adventurous, initialize the buffer length to 100 ns and let the driver come up with the minimal period and buffer length it supports. But things do seem to break down when you go that low: PulseAudio got angry at me for those underruns and increased its minimum latency to 8 ms after I did some testing with sub-1 ms buffers.
:-(
(BTW, I'd find it easier for Wine if DSound's primary were equal to mmdevapi's buffer, except that with winecoreaudio, there's no such mmdevapi buffer...)
I've been doing some cleanups and there's really no need to do this. I handle primary buffer mixing as a special case of a secondary buffer and it works just fine. The idea behind my patches has been to remove the primary buffer altogether, as far as possible. device->buffer is equal to the primary buffer size in writeprimary mode, before being copied to the audio client. In secondary mode it's used for mixing to float when IAudioClient uses a non-float format, and it's NULL for a float format; in the float case, the data gets mixed into the audio buffer directly, saving a memcpy and a conversion.
~Maarten
On 10/24/2012 05:16 AM, Maarten Lankhorst wrote:
I'm not sure yet how dsound should be fixed for bigger periods. I believe using IAudioClock for reporting the position *might* be the correct answer, but its behaviour isn't properly tested yet, so I chose the simple solution of directly reporting the next mix position as the play position, which is what the current code does.
...
Once dsound has queued the data to mmdevapi it can no longer do anything about it, so instead of trying something more complicated, the best way to deal with it is to accept that fact and keep the queued latency as fixed as possible, and ideally small.
FWIW, I agree. With OpenAL Soft, the "read" position of a sound is reported by simply using the next update position, and the "write" position is one period size ahead of that. The underlying output mix buffer is set to a fixed size with fixed update lengths (and, where PulseAudio is concerned, it makes sure to use PA_STREAM_ADJUST_LATENCY to keep its own latency in check). This seems to me to be more than adequate, even when using a custom dsound->openal wrapper.
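Schematically, it's just this (generic arithmetic, not OpenAL Soft's actual code; the names are made up):

#include <windows.h>

/* Report the next update boundary as the read cursor and one period ahead
 * of it as the write cursor, both wrapped to the buffer size. */
static void report_cursors(DWORD next_update_pos, DWORD period_bytes,
                           DWORD buf_bytes, DWORD *read_pos, DWORD *write_pos)
{
    *read_pos  = next_update_pos % buf_bytes;
    *write_pos = (next_update_pos + period_bytes) % buf_bytes;
}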
Hello Jörg,
On 10/24/2012 12:36 PM, Joerg-Cyril.Hoehle@t-systems.com wrote:
I'm not happy with Wine's DSound+WinMM->mmdevapi. That may be a good move for MS, but it's a bad move for Wine. Wine's audio drivers don't
What would you do instead? Go back to the old way and have dsound and winmm drivers too? That road leads to madness, as it would triple the number of sound drivers in Wine.
bang the HW, they have huge SW stacks like PA or Jack behind them.
bye michael