There is something that seems very wrong in OSS_WaveOutInit. We do:
    smplrate = 44100;
    if (ioctl(ossdev->fd, SNDCTL_DSP_SPEED, &smplrate) == 0)
    { /* ... this format is supported ... */ }
But according to the OSS spec this ioctl is not guaranteed to give you the sampling frequency you asked for: it adjusts the rate to the nearest one the device supports and still returns success. So if the device only supports 11025Hz, the ioctl will succeed but you will find that after the call smplrate==11025.
So it seems it would be more correct to do:
    smplrate = 44100;
    if (ioctl(ossdev->fd, SNDCTL_DSP_SPEED, &smplrate) == 0 && smplrate == 44100)
    { /* ... this format is supported ... */ }
There are other issues too: you cannot assume that because a sound card supports 44100Hz in mono it also supports it in stereo; it might only support stereo up to 22050Hz. The same goes for 8-bit vs. 16-bit samples. That's for old cards though; with modern cards it's the opposite. Some cards, e.g. ones based on the i810 chipset, support 48000Hz, 16-bit, stereo and nothing else, not even 48000Hz, 16-bit, mono for instance. So each rate/channels/bits combination has to be probed separately.
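A minimal sketch of such a per-combination probe (not the actual Wine code; probe_format is a made-up helper, and since OSS parameters are sticky a real driver would reopen or reset the device between probes):

    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    /* Returns non-zero if the device accepts exactly this
     * rate/channels/bits combination. */
    static int probe_format(int fd, int rate, int channels, int bits)
    {
        int fmt = (bits == 16) ? AFMT_S16_LE : AFMT_U8;
        int arg = fmt;

        if (ioctl(fd, SNDCTL_DSP_SETFMT, &arg) < 0 || arg != fmt)
            return 0;
        arg = channels;
        if (ioctl(fd, SNDCTL_DSP_CHANNELS, &arg) < 0 || arg != channels)
            return 0;
        arg = rate;
        if (ioctl(fd, SNDCTL_DSP_SPEED, &arg) < 0 || arg != rate)
            return 0;
        return 1;
    }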
So assuming we do strict checks (I have code for that), for an i810 card waveOutGetDevCaps should return:
0: "CS4236/37/38" 1.0 (255:1): channels=2 formats=0000 support=006c
I.e. none of the formats at 11025, 22050 or 44100 are supported. But testing this on Windows I get:
0: "YAMAHA AC-XG WDM Audio" 5.10 <1:100>: channels=65535 formats=bfff support=002c
Which leads me to the following questions:
- should waveOutGetDevCaps consider that a format is supported as long as we can up-sample to a format that is?
- what does the b mean in 'formats=bfff'? I could not find any documentation...
(the support=006c vs. 002c is not significant, it's just WAVECAPS_DIRECTSOUND)
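For reference, output lines like the ones above can be produced with a dump along these lines (a sketch, not the exact test program used here; I'm assuming the "maj.min (mid:pid)" fields are the driver version and manufacturer/product IDs from WAVEOUTCAPS):

    #include <stdio.h>
    #include <windows.h>
    #include <mmsystem.h>

    /* Link with winmm.lib. */
    int main(void)
    {
        UINT i, ndev = waveOutGetNumDevs();

        for (i = 0; i < ndev; i++)
        {
            WAVEOUTCAPS caps;

            if (waveOutGetDevCaps(i, &caps, sizeof(caps)) != MMSYSERR_NOERROR)
                continue;
            printf("%u: \"%s\" %u.%u (%u:%u): channels=%u formats=%04x support=%04x\n",
                   i, caps.szPname,
                   caps.vDriverVersion >> 8, caps.vDriverVersion & 0xff,
                   caps.wMid, caps.wPid, caps.wChannels,
                   (unsigned)caps.dwFormats, (unsigned)caps.dwSupport);
        }
        return 0;
    }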
> But according to the OSS spec this ioctl is not guaranteed to give you
> the sampling frequency you asked for. So if the device only supports
> 11025Hz, the ioctl will succeed but you will find that after the call
> smplrate==11025.
> So it seems it would be more correct to do:
>     smplrate = 44100;
>     if (ioctl(ossdev->fd, SNDCTL_DSP_SPEED, &smplrate) == 0 && smplrate == 44100)
>     { /* ... this format is supported ... */ }
In fact the real test would be even more complicated and should use the NEAR_MATCH macro (some cards report for example 11000 when asked for 11025). However, this check is only used in waveXXXGetDevCaps, and I'm not sure lots of apps use this API to get the supported formats. In most cases, returning 0xFFFF as the format set should be sufficient (or 0xFFFFF if the 96kHz formats are included).
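Such a tolerance check could look like this (a sketch; the actual NEAR_MATCH definition in the Wine tree may differ):

    #include <stdlib.h>  /* for abs() */

    /* Roughly 1% tolerance: 11000 matches a requested 11025,
     * but 22050 does not. */
    #define NEAR_MATCH(got, asked) \
        ((100 * abs((int)(got) - (int)(asked))) / (int)(asked) == 0)

    /* Usage: */
    smplrate = 44100;
    if (ioctl(ossdev->fd, SNDCTL_DSP_SPEED, &smplrate) == 0 &&
        NEAR_MATCH(smplrate, 44100))
    { /* ... this format is supported ... */ }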
> Which leads me to the following questions:
> - should waveOutGetDevCaps consider that a format is supported as long
>   as we can up-sample to a format that is?
I think we shouldn't care for now. As long as we support the correct mapping in the upper layer, the best behavior would be:
- return as many formats as we can in waveOutGetDevCaps
- while opening, if the requested format isn't supported, report an error (as your previous patch does)
- the handling of WAVE_FORMAT_DIRECT is already done at the winmm level
Correctly returning the supported formats wouldn't change the behavior from a functional point of view; only performance would be better. But to my knowledge capability introspection is mostly done using the WAVE_FORMAT_QUERY flag in waveXXXOpen, not waveXXXGetDevCaps.
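That is, an app typically checks a format like this (a sketch; the helper name is made up):

    #include <windows.h>
    #include <mmsystem.h>

    /* Ask device 'dev' whether it supports 44.1kHz 16-bit stereo PCM,
     * without actually opening it. Link with winmm.lib. */
    static BOOL supports_44k_16_stereo(UINT dev)
    {
        WAVEFORMATEX wfx = {0};

        wfx.wFormatTag      = WAVE_FORMAT_PCM;
        wfx.nChannels       = 2;
        wfx.nSamplesPerSec  = 44100;
        wfx.wBitsPerSample  = 16;
        wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

        return waveOutOpen(NULL, dev, &wfx, 0, 0, WAVE_FORMAT_QUERY)
               == MMSYSERR_NOERROR;
    }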
> - what does the b mean in 'formats=bfff'? I could not find any
>   documentation...
Those are from the latest XP SDK:

    #define WAVE_FORMAT_96M08 0x00010000 /* 96 kHz, Mono,   8-bit  */
    #define WAVE_FORMAT_96S08 0x00020000 /* 96 kHz, Stereo, 8-bit  */
    #define WAVE_FORMAT_96M16 0x00040000 /* 96 kHz, Mono,   16-bit */
    #define WAVE_FORMAT_96S16 0x00080000 /* 96 kHz, Stereo, 16-bit */
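The 'b' nibble itself comes from the 48kHz bits just below those. Assuming the usual values from the same header, formats=bfff would mean everything up to 48kHz is supported except WAVE_FORMAT_48M16 (48kHz, mono, 16-bit):

    #define WAVE_FORMAT_48M08 0x00001000 /* 48 kHz, Mono,   8-bit  */
    #define WAVE_FORMAT_48S08 0x00002000 /* 48 kHz, Stereo, 8-bit  */
    #define WAVE_FORMAT_48M16 0x00004000 /* 48 kHz, Mono,   16-bit */
    #define WAVE_FORMAT_48S16 0x00008000 /* 48 kHz, Stereo, 16-bit */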
A+