Hi,
Maarten Lankhorst queried:
Bump, anything wrong with this patch?
Here's my 0.0$ ... (standard DSound disclaimer here...)
Using mmdevapi's events in DSound is basically TRT (The Right Thing). Then, the DSound mixer will be synchronized with mmdevapi writes.
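For illustration, a minimal sketch of that pattern (the mixer_loop name and structure here are made up, not DSound's actual code):

#define COBJMACROS
#include <windows.h>
#include <audioclient.h>

static DWORD WINAPI mixer_loop(void *arg)
{
    IAudioClient *client = arg;
    HANDLE event = CreateEventW(NULL, FALSE, FALSE, NULL); /* auto-reset */

    /* Only works if the stream was initialized in shared mode with
     * AUDCLNT_STREAMFLAGS_EVENTCALLBACK. */
    IAudioClient_SetEventHandle(client, event);
    IAudioClient_Start(client);

    for (;;)
    {
        WaitForSingleObject(event, INFINITE); /* one signal per device period */
        /* mix and write one period worth of data here */
    }
}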
/* ALSA is retarded, add a timeout.. */
ret = WaitForSingleObject(dev->sleepev, dev->sleeptime);
How does that comment help the reader of the code?
- if (period_ms <= 15)
device->sleeptime = period_ms * 5 / 2;
- else
device->sleeptime = period_ms * 3 / 2;
I expect such threshold functions to be continuous or at least monotonic. A sawtooth is certainly unexpected. What is its purpose?
What I would understand is a comment in the code saying that as of 2012, all 3 wine*.drv mmdevapi drivers have a bug that they stop sending events after IAudioClient_Stop. Therefore add a short timeout to the wait.
If that's the reason for the timeout, then period_ms * 1.5 is perfectly fine. There's no reason for a particular threshold. Note that neither 1.5 nor 2.5 give you regular spacing around the time of transition from playing to stopped.
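A quick check of the two formulas around the threshold shows the jump (integer arithmetic as in the patch):

#include <stdio.h>

int main(void)
{
    for (int period_ms = 10; period_ms <= 20; period_ms++)
    {
        int sleeptime = period_ms <= 15 ? period_ms * 5 / 2
                                        : period_ms * 3 / 2;
        printf("period %2d ms -> sleeptime %2d ms\n", period_ms, sleeptime);
    }
    /* sleeptime drops from 37 ms at a 15 ms period to 24 ms at 16 ms,
     * which is the sawtooth in question. */
    return 0;
}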
+ IAudioClient_GetStreamLatency(device->client, &period);
+ period_ms = (period + 9999) / 10000;
If IAC_Stop is the reason for the sleep timeout, then GetStreamLatency obviously has no business here; the device period should be used instead.
I could understand a use of GetStreamLatency when it comes to computing a reasonable size of the primary buffer.
+ ret = WaitForSingleObject(dev->sleepev, dev->sleeptime);
+ ... if (ret)
WaitFor* return values are defined in terms of WAIT_OBJECT_0, WAIT_TIMEOUT and WAIT_FAILED etc., not zero or non-zero.
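I.e. something along these lines (only a sketch; the helper name is mine):

#include <windows.h>

static BOOL wait_for_mix_event(HANDLE sleepev, DWORD sleeptime)
{
    switch (WaitForSingleObject(sleepev, sleeptime))
    {
    case WAIT_OBJECT_0: return TRUE;   /* mmdevapi signalled the event */
    case WAIT_TIMEOUT:  return TRUE;   /* no event (stream stopped?), mix anyway */
    case WAIT_FAILED:
    default:            return FALSE;  /* wait error, leave the loop */
    }
}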
- device->sleepev = CreateEventW(0, 0, 0, 0);
- device->thread = CreateThread(0, 0, DSOUND_mixthread, device, 0, 0);
I haven't checked the overall style of DSound, but I prefer FALSE and TRUE be used for things advertised as BOOL. (I tend to write NULL for null pointer initialisation. Note that I easily read "if (foo) ..." for boolean and pointer types; I don't expect "if (foo == TRUE) or (foo != NULL) ...")
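For the two calls above that would read roughly as follows (the struct is only a stand-in for the patch's device type):

#include <windows.h>

struct dev { HANDLE sleepev, thread; };
static DWORD WINAPI DSOUND_mixthread(void *arg) { (void)arg; return 0; }

static BOOL start_mixer(struct dev *device)
{
    /* no security attributes, auto-reset, initially non-signalled, unnamed */
    device->sleepev = CreateEventW(NULL, FALSE, FALSE, NULL);
    device->thread  = CreateThread(NULL, 0, DSOUND_mixthread, device, 0, NULL);
    return device->sleepev != NULL && device->thread != NULL;
}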
BTW, I still believe that mixing and resampling would find their best place in mmdevapi, not DSound.
Regards, Jörg Höhle
On Mon, Dec 03, 2012 at 02:59:07PM +0100, Joerg-Cyril.Hoehle@t-systems.com wrote:
BTW, I still believe that mixing and resampling would find their best place in mmdevapi, not DSound.
Perhaps, but that's provably not what Windows does. Given a format with:
dwChannelMask = (CHANNEL_BACK_LEFT | CHANNEL_BACK_RIGHT)
and a physical speaker setup containing only:
(CHANNEL_FRONT_LEFT | CHANNEL_FRONT_RIGHT)
dsound and mmdevapi behave differently. mmdevapi maps the BACK_X channel entirely out of the respective FRONT_X speaker. dsound splits about 2/3 of the matching channel, and 1/3 out of the other FRONT channel, so you hear both signals out of both speakers.
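Roughly, as downmix matrices from (BACK_LEFT, BACK_RIGHT) to (FRONT_LEFT, FRONT_RIGHT) — the dsound coefficients here are only my approximation of the "about 2/3 and 1/3" above, and the actual dwChannelMask flags in the headers are the SPEAKER_* ones:

/* rows: FRONT_LEFT, FRONT_RIGHT output; columns: BACK_LEFT, BACK_RIGHT input */
static const float mmdevapi_map[2][2] = {
    { 1.00f, 0.00f },   /* BACK_LEFT  plays only on FRONT_LEFT  */
    { 0.00f, 1.00f },   /* BACK_RIGHT plays only on FRONT_RIGHT */
};
static const float dsound_map[2][2] = {
    { 0.66f, 0.33f },   /* both signals audible on both speakers */
    { 0.33f, 0.66f },
};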
I think the best solution is to have dsound call into the system openal to perform multichannel mixing and 3D processing. We can start by getting dsound to have rudimentary support for those features, then plug openal in as the last step.
Andrew
Op 03-12-12 17:59, Andrew Eikum schreef:
(...) I think the best solution is to have dsound call into the system openal to perform multichannel mixing and 3D processing. (...)
Andrew
... Must print this out in poster format, and frame this above my bed..
~Maarten
Op 03-12-12 14:59, Joerg-Cyril.Hoehle@t-systems.com schreef:
Hi,
Maarten Lankhorst queried:
Bump, anything wrong with this patch?
Here's my 0.0$ ... (standard DSound disclaimer here...)
Using mmdevapi's events in DSound is basically TRT (The Right Thing). Then, the DSound mixer will be synchronized with mmdevapi writes.
/* ALSA is retarded, add a timeout.. */
ret = WaitForSingleObject(dev->sleepev, dev->sleeptime);
How does that comment help the reader of the code?
- if (period_ms <= 15)
device->sleeptime = period_ms * 5 / 2;
- else
device->sleeptime = period_ms * 3 / 2;
I expect such threshold functions to be continuous or at least monotonic. A sawtooth is certainly unexpected. What is its purpose?
What I would understand is a comment in the code saying that as of 2012, all 3 wine*.drv mmdevapi drivers have a bug that they stop sending events after IAudioClient_Stop. Therefore add a short timeout to the wait.
Yeah, but in a followup commit I'm going to stop calling IAudioClient_Stop entirely. :-)
If that's the reason for the timeout, then period_ms * 1.5 is perfectly fine. There's no reason for a particular threshold. Note that neither 1.5 nor 2.5 give you regular spacing around the time of transition from playing to stopped.
I just didn't want to make the timeout too short in case no processing is done yet. Alsa's native period is ~ 22ms (1024 samples / 44100 or 48000) with dmix despite claiming it to be otherwise.. I thought about doing something more complicated to make it smoother, but I'm really just increasing it to >2 buffers to compensate for jitter at lower latencies, so if the timeout would be beyond 20 ms I don't care any more..
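As a quick check of those figures (assuming dmix's usual 1024-frame period):

#include <stdio.h>

int main(void)
{
    printf("dmix period: %.1f ms at 44100, %.1f ms at 48000\n",
           1024.0 * 1000 / 44100, 1024.0 * 1000 / 48000);   /* ~23.2 / ~21.3 ms */
    /* with a claimed 10 ms mmdevapi period:
     *   10 * 5 / 2 = 25 ms  (just over one real dmix period)
     *   10 * 3 / 2 = 15 ms  (shorter than the real dmix period) */
    return 0;
}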
- IAudioClient_GetStreamLatency(device->client, &period);
- period_ms = (period + 9999) / 10000;
If IAC_Stop is the reason for the sleep timeout, then GetStreamLatency obviously has no business here; the device period should be used instead.
I could understand a use of GetStreamLatency when it comes to computing a reasonable size of the primary buffer.
Got it right.. and this is a perfectly valid use of IAudioClient_GetStreamLatency here: the device could also be dead (AUDCLNT_E_DEVICE_INVALIDATED), in which case events are probably not fired any more.
GetStreamLatency is also used for calculating the period size, see http://msdn.microsoft.com/en-us/library/windows/desktop/dd370874%28v=vs.85%2...
I don't think it's used as such in this commit yet, but in the mixer rework it's used to calculate fragment length.
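For reference, the conversion and the liveness angle look roughly like this (helper name is made up):

#define COBJMACROS
#include <windows.h>
#include <audioclient.h>

/* REFERENCE_TIME is in 100-ns units, hence the round-up to milliseconds;
 * a failing call doubles as a check that the device is still there. */
static DWORD latency_ms(IAudioClient *client, DWORD fallback_ms)
{
    REFERENCE_TIME latency;
    HRESULT hr = IAudioClient_GetStreamLatency(client, &latency);

    if (FAILED(hr))           /* e.g. AUDCLNT_E_DEVICE_INVALIDATED */
        return fallback_ms;

    return (DWORD)((latency + 9999) / 10000);
}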
ret = WaitForSingleObject(dev->sleepev, dev->sleeptime);
... if (ret)
WaitFor* return values are defined in terms of WAIT_OBJECT_0, WAIT_TIMEOUT and WAIT_FAILED etc., not zero or non-zero.
Except WAIT_OBJECT_0 is defined as 0.
- device->sleepev = CreateEventW(0, 0, 0, 0);
- device->thread = CreateThread(0, 0, DSOUND_mixthread, device, 0, 0);
I haven't checked the overall style of DSound, but I prefer FALSE and TRUE be used for things advertised as BOOL. (I tend to write NULL for null pointer initialisation. Note that I easily read "if (foo) ..." for boolean and pointer types; I don't expect "if (foo == TRUE) or (foo != NULL) ...")
BTW, I still believe that mixing and resampling would find their best place in mmdevapi, not DSound.
Well, there'd need to be some way for both dsound and winmm to use them, so that winmm can do resampling internally.
Hi,
Maarten Lankhorst wrote:
Alsa's native period is ~ 22ms (1024 samples / 44100 or 48000) with dmix despite claiming it to be otherwise..
What I don't understand is why you talk about ALSA at this level. DSound talks to mmdevapi and that's DSound's underlying API, not ALSA.
So if Wine's mmdevapi claims a 10ms period, it ought to behave like a native shared mode driver with a 10ms period.
IOW, I expect 10ms decreases of padding from winealsa when it claims a 10ms period, even if dmix/ALSA uses 21.333ms, PA-ALSA uses 50ms or Jack-ALSA uses 150ms.
That means winealsa needs more polish...
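Roughly the kind of check I have in mind (a sketch only; frames_per_period would be rate/100 for a 10ms period):

#define COBJMACROS
#include <windows.h>
#include <audioclient.h>
#include <stdio.h>

/* With a claimed 10 ms period, GetCurrentPadding should step down in
 * ~10 ms chunks while playing, whatever period dmix/PA/Jack use below. */
static void watch_padding(IAudioClient *client, UINT32 frames_per_period)
{
    UINT32 prev = 0, pad;
    int i;

    for (i = 0; i < 200; i++)
    {
        if (SUCCEEDED(IAudioClient_GetCurrentPadding(client, &pad)) && pad != prev)
        {
            printf("padding %u (step %d, expected ~%u frames)\n",
                   pad, (int)prev - (int)pad, frames_per_period);
            prev = pad;
        }
        Sleep(1);
    }
}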
I talked about this decoupling between the app <-> mmdevapi <-> ALSA/OSS/XY some time ago in wine-devel. I still believe this is less prone to app bugs than to have mmdevapi publish the period of 20, 50 or 150ms of its back-end, because no native app was ever tested with such periods. MS apps are not prepared for them.
If you think that's not a good path to go, please raise your voice.
(Some may argue that winecoreaudio has been using 20ms since day 1, not 10ms. Maybe that's close enough. OTOH, I've mentioned today that my Mac machine stutters at the render test, so wineCoreAudio is not yet in its best shape).
GetStreamLatency is also used for calculating the period size, see
http://msdn.microsoft.com/en-us/library/windows/desktop/dd370874%28v=vs.85%2...
"clients can use this latency value to compute the minimum amount of data that they can write during any single processing pass." That one paragraph is indeed interesting. I have no explanation of it yet. What's true is that 10ms won't get you far when the back-end's period is 20, 50 or 150ms. However, I was talking about a continuous supply of 10ms chunks.
(We are already past the ancient bug that "ALSA/dmix won't start if not fed at least one ALSA period full of samples" - imagine the period were 150ms.)
I don't think it's used as such in this commit yet, but in the mixer rework it's used to calculate fragment length.
I'm not familiar with DSound. For sure, GetStreamLatency should be involved in buffer size calculations, e.g. as a lower limit.
have dsound call into the system openal to perform multichannel mixing and 3D processing. (...)
... Must print this out in poster format, and frame this above my bed..
This must sound like coming full circle to you. Wasn't some earlier version of DSound and/or mmdevapi based on OpenAL until some devs concluded that it was not a viable path?
Except WAIT_OBJECT_0 is defined as 0.
I don't want to remember that.
Regards, Jörg Höhle
Op 04-12-12 17:26, Joerg-Cyril.Hoehle@t-systems.com schreef:
Hi,
Maarten Lankhorst wrote:
Alsa's native period is ~ 22ms (1024 samples / 44100 or 48000) with dmix despite claiming it to be otherwise..
What I don't understand is why you talk about ALSA at this level. DSound talks to mmdevapi and that's DSound's underlying API, not ALSA.
So if Wine's mmdevapi claims a 10ms period, it ought to behave like a native shared mode driver with a 10ms period.
IOW, I expect 10ms decreases of padding from winealsa when it claims a 10ms period, even if dmix/ALSA uses 21.333ms, PA-ALSA uses 50ms or Jack-ALSA uses 150ms.
That means winealsa needs more polish...
I talked about this decoupling between the app <-> mmdevapi <-> ALSA/OSS/XY some time ago in wine-devel. I still believe this is less prone to app bugs than to have mmdevapi publish the period of 20, 50 or 150ms of its back-end, because no native app was ever tested with such periods. MS apps are not prepared for them.
[Citation needed]
Now if you were talking about the IAudioClock, and preferably had some good tests that fail with current winepulse, I would believe you.
If you think that's not a good path to go, please raise your voice.
VERY LOUDLY!
(Some may argue that winecoreaudio has been using 20ms since day 1, not 10ms. Maybe that's close enough. OTOH, I've mentioned today that my Mac machine stutters at the render test, so wineCoreAudio is not yet in its best shape).
I haven't looked at winecoreaudio at all.
GetStreamLatency is also used for calculating the period size, see
http://msdn.microsoft.com/en-us/library/windows/desktop/dd370874%28v=vs.85%2...
"clients can use this latency value to compute the minimum amount of data that they can write during any single processing pass." That one paragraph is indeed interesting. I have no explanation of it yet. What's true is that 10ms won't get you far when the back-end's period is 20, 50 or 150ms. However, I was talking about a continuous supply of 10ms chunks.
(We are already past the ancient bug that "ALSA/dmix won't start if not fed at least one ALSA period full of samples" - imagine the period were 150ms.)
Or it sometimes refuses to flush the last remainder for writing.. if it's less than 1 period.
I don't think it's used as such in this commit yet, but in the mixer rework it's used to calculate fragment length.
I'm not familiar with DSound. For sure, GetStreamLatency should be involved in buffer size calculations, e.g. as a lower limit.
Indeed! :D
have dsound call into the system openal to perform multichannel mixing and 3D processing. (...)
... Must print this out in poster format, and frame this above my bed..
This must sound like coming full circle to you. Wasn't some earlier version of DSound and/or mmdevapi based on OpenAL until some devs concluded that it was not a viable path?
Except WAIT_OBJECT_0 is defined as 0.
I don't want to remember that.
You're now never going to forget that WaitForSingleObject and WaitForMultipleObjects with any=true return the index of the first signalled event. ;-)
~Maarten