http://bugs.winehq.org/show_bug.cgi?id=29585
--- Comment #7 from Jörg Höhle hoehle@users.sourceforge.net 2012-01-23 15:45:00 CST ---
> if osspadding < 0 then bad formula...
If (osspadding < 0), then update This->oss_buffer_size (a.k.a. bi_initial.bytes) to bi.bytes and emit WARN("buffer size increased, now %u\n", bi.bytes). The design doesn't care about the exact size; consistency is what matters. PA's bug, where avail grows and grows, would defeat this.
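A minimal C sketch of that repair step, assuming a /dev/dsp descriptor fd and a cached buffer size as in wineoss.drv (the helper name and error convention are mine, not existing code):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    /* If GETOSPACE reports more free bytes than the size cached at init
     * (i.e. osspadding would go negative), adopt the larger value so the
     * padding formula stays consistent. */
    static int maybe_grow_oss_buffer(int fd, unsigned int *oss_buffer_size)
    {
        audio_buf_info bi;
        if (ioctl(fd, SNDCTL_DSP_GETOSPACE, &bi) < 0)
            return -errno;
        if ((unsigned int)bi.bytes > *oss_buffer_size) {
            *oss_buffer_size = bi.bytes;
            fprintf(stderr, "buffer size increased, now %u\n", bi.bytes);
        }
        return 0;
    }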
> How do we implement Stop() then? [plan:] During Stop(), we use DSP_HALT
IAC_Stop is conceptually a pause in shared mode. I don't know whether that best maps to a) letting OSS enter underrun, b) SNDCTL_DSP_SILENCE or c) SNDCTL_DSP_HALT. GetPosition is the sole judge:
1. GetPosition must eventually reach the total of Release'd frames.
2. IAC_Stop + IAC_Start must behave w.r.t. GetPosition as if no stop ever occurred.
3. As an approximation, even GCP should nearly freeze. But I don't think that's important. After all, there are no guarantees about when audio resumes after IAC_Start, hence a decrease of padding is not objectionable. Note that padding jumps in exclusive mode.
Here you'll notice that unless cached, my design will see GCP continue to decrease past Stop. So I recommend a cache, or rather updating the data for GCP within the callback only. If that happens only while running, GCP will appear frozen when stopped.
My first approximation would be: Stop with _SILENCE or underrun (I prefer underrun, so the data is heard once), Start with SNDCTL_DSP_SKIP in shared mode. _HALT may be ok in exclusive mode, but most apps use shared mode; see the sketch below.
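As a sketch of that first approximation (the exclusive flag, the split into two helpers, and the error convention are my assumptions; error handling is minimal):

    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    /* Shared mode: overwrite the queue with silence (or simply stop
     * feeding and let OSS underrun, so pending data is heard once);
     * exclusive mode may halt outright. */
    static int iac_stop(int fd, int exclusive)
    {
        if (exclusive)
            return ioctl(fd, SNDCTL_DSP_HALT, NULL);  /* drop queued data */
        return ioctl(fd, SNDCTL_DSP_SILENCE, NULL);   /* queue becomes silence */
    }

    /* On resume, skip past whatever silence is still queued so new audio
     * starts promptly; in exclusive mode the next write simply restarts. */
    static int iac_start(int fd, int exclusive)
    {
        if (!exclusive)
            return ioctl(fd, SNDCTL_DSP_SKIP, NULL);
        return 0;
    }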
I don't consider stopping audible output as fast as possible important: let it play some remaining frames and, ideally, fade out. However, IAC_Reset is important, as shown by bug #28413. PlaySound should use waveOutReset, which must invoke IAC_Reset, which should kill audio ASAP, i.e. use SNDCTL_DSP_HALT, not _SKIP, and not simply set held_frames to 0. BTW, http://manuals.opensound.com/developer/callorder.html says SKIP has no effect when the device is not running, yet IAC_Reset currently uses it.
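Roughly (the helper name and the held_frames pointer are illustrative, not the current wineoss code):

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    /* Reset must kill audio ASAP: HALT the device (SKIP would be a no-op
     * when the device is not running) and drop mmdevapi's queued frames. */
    static int iac_reset(int fd, unsigned int *held_frames)
    {
        if (ioctl(fd, SNDCTL_DSP_HALT, NULL) < 0)
            return -errno;
        *held_frames = 0;
        return 0;
    }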
> Here's a complicated plan.
If I understand you right, this would require enlarging mmdevapi's buffer so as to ensure GetBuffer(GetBufferSize - GetCurrentPadding) always works. Getting this right sounds more complicated than entering underrun -- until the day comes when an OSS back-end eats 2s of data like PA.
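For reference, this is the client pattern that must keep working (the standard mmdevapi render loop in C; client and render are assumed to be initialized elsewhere, and error handling is elided):

    #define COBJMACROS
    #include <audioclient.h>

    UINT32 bufsize, padding;
    BYTE *data;

    IAudioClient_GetBufferSize(client, &bufsize);
    IAudioClient_GetCurrentPadding(client, &padding);
    IAudioRenderClient_GetBuffer(render, bufsize - padding, &data);
    /* ... fill bufsize - padding frames ... */
    IAudioRenderClient_ReleaseBuffer(render, bufsize - padding, 0);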
> _only_after_ the first write
Why is a single write important? If you want to replay N frames, you need to buffer them. If each callback writes only period-sized chunks, would you buffer just one period? That's not enough to worry about. Really, the problem would appear only with back-ends that buffer >1s, like PA.
The issue I see is: we should tell OSS that we're interested in a buffer of ~50-500ms, not 10, not 1000, despite all the caveats in the SNDCTL_DSP_SETFRAGMENT manpage.
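A sketch of asking for roughly 250 ms via SNDCTL_DSP_SETFRAGMENT (the fragment size, target latency and helper name are my choices; OSS treats the request as a hint only, and it must be issued before setting the format):

    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    /* The argument packs the max fragment count in the high 16 bits and
     * log2(fragment size in bytes) in the low 16 bits. */
    static int request_oss_buffer(int fd, unsigned int bytes_per_sec)
    {
        unsigned int frag_log2 = 11;                            /* 2048-byte fragments */
        unsigned int count = (bytes_per_sec / 4) >> frag_log2;  /* ~250 ms total */
        if (count < 2) count = 2;
        unsigned int arg = (count << 16) | frag_log2;
        return ioctl(fd, SNDCTL_DSP_SETFRAGMENT, &arg);
    }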