Thanks for reviewing.
On Mon, Jun 30, 2014 at 06:45:29PM -0500, Ken Thomases wrote:
> I'm concerned about the use of the AudioConverter API within the capture callback (ca_capture_cb). I have it embedded in my memory that it's not permitted to call a converter on the device thread. I can't find a hard and fast declaration of that in the documentation, although Technical Note TN2091, "Device input using the HAL Output Audio Unit", does say this:
>
> https://developer.apple.com/library/mac/technotes/tn2091/_index.html
> "If sample rate conversion is needed, it can be accomplished by buffering the input and converting the data on a separate thread with another AudioConverter."
>
> If what you're doing were allowed, it would seem they would have mentioned that and not suggested the use of "a separate thread". It also seems that the HAL Output Audio Unit would just do this itself and not have the restriction on the sample rate in the first place.
Yeah, I read the same thing and decided to just try it out and see what happened. It works fine in all the cases I've tried, and for the small handful of people who tested it for me. Given the wimpy doc language and the fact that it works, I decided it wasn't worth the complication of moving the resampling to the application thread.
> Besides my specific recollection that sample rate conversion is not allowed, there's the general requirement to avoid slow operations in those callbacks. The call to AudioConverterFillComplexBuffer() may be such a slow operation. So could the calls to malloc() in your ca_capture_cb() and feed_cb().
>
> Could you not move the sample rate conversion to the application thread, at the point where the application calls in to get the captured data (or metadata about it)?
You're right, and I suppose it wouldn't be drastically more complicated. It would require more careful handling of buffer overflow and of reporting the buffer fill level. I wish the docs were more clear-cut on this issue; hiding the resampling from the rest of the code by doing it at capture time just feels so much cleaner.
> In AudioClient_IsFormatSupported(), is it really the case that any format for which a stream description can be constructed is supported? Is there a need to try setting the format on an AudioConverter to see if it will accept it?
Well, we do have tests showing that IsFormatSupported() is accurate, and we pass them. I figured it wasn't worth the effort unless we actually find problems. Very few applications (none that I can think of, actually) ask for unusual formats without going through winmm's ACM.
I'm always wary of actually opening and setting up devices. It tends to be a slow operation, depending on the hardware, and we make lots of IsFormatSupported() calls in winmm and dsound. That said, I'll take another look.
> There are still two mentions of AudioQueue in lines like:
>
>     WARN("AudioQueue doesn't support per-channel volume control\n");
Good catch, I'll fix those.
Andrew