Hi all,
I believe most concerns about OpenAL have been addressed. One of the open questions was whether midi and wave would be synced, and I think the most likely answer is that they aren't, even on Windows. On Windows, winmm midi drivers use the winmm timeGetTime timer, or QueryPerformanceCounter in the case of DirectMusic. Syncing wave with midi sample-accurately is done by using exclusive mode or the ASIO model.
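To illustrate the two clocks involved, here is a minimal sketch of reading both timers on Windows (link against winmm.lib; the timeBeginPeriod call is only for the illustration, not something the drivers necessarily do). The point is that neither clock is derived from the audio device's sample clock, so midi scheduled against them can drift relative to wave output:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, now;
    DWORD ms;

    timeBeginPeriod(1);               /* ask winmm for 1 ms timer resolution */
    QueryPerformanceFrequency(&freq); /* QPC ticks per second */

    ms = timeGetTime();               /* millisecond winmm timer */
    QueryPerformanceCounter(&now);    /* high-resolution counter, DirectMusic-style timing */

    printf("timeGetTime: %lu ms, QPC: %.3f ms\n",
           (unsigned long)ms, now.QuadPart * 1000.0 / freq.QuadPart);

    timeEndPeriod(1);
    return 0;
}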
If there are any other open concerns, apart from packaging, let me know, since I really want to get dsound openal merged. :) I would especially like to hear about any remaining concerns that affect wasapi, which is defined in include/audioclient.idl and audiopolicy.idl.
Cheers, Maarten.
On 11.12.2009 at 19:07, Maarten Lankhorst wrote:
Hi all,
I believe most concerns about OpenAL have been addressed. One of the open questions was whether midi and wave would be synced, and I think the most likely answer is that they aren't, even on Windows. On Windows, winmm midi drivers use the winmm timeGetTime timer, or QueryPerformanceCounter in the case of DirectMusic. Syncing wave with midi sample-accurately is done by using exclusive mode or the ASIO model.
Sounds ok.
If there are any other open concerns, apart from packaging, let me know, since I really want to get dsound openal merged. :) I would especially like to hear about any remaining concerns that affect wasapi, which is defined in include/audioclient.idl and audiopolicy.idl.
Well, there are still two (comparatively minor) issues:
1) OpenAL's future, since its development is fragmented and little is going on. That said, there probably isn't much going on in the sound world in general (or it's rather going backwards, see dsound3d), so it's probably OK that there isn't much development.
2) How maintaining separate winmm drivers for midi and using openal for wasapi and dsound is any better than using separate wasapi drivers that do midi+wave.
Since you're writing the code, if you're not concerned about (2) it's probably not an issue. Concerning (1), if I were writing this stuff I'd probably hedge my bets and avoid making openal an integral part of our sound design (like dsound via openal does). But again, you're writing the code, so you're the boss. And as far as I understand it, using openal for dsound is one of your main goals and not an unintended side effect.
If there are any other open concerns, apart from packaging, let me know, since I really want to get dsound openal merged. :) I would especially like to hear about any remaining concerns that affect wasapi, which is defined in include/audioclient.idl and audiopolicy.idl.
One of the things that worries me, and which you also mentioned on IRC, is whether openal is the right library to implement wasapi. You mentioned that some tasks require a 'server' (for the session guid stuff). Further, there are other features like per-stream volume which sound servers support but openal doesn't (it would be the task of a real sound server). I have the feeling that 'classic' sound libraries (openal, alsa, oss, ..) are not the right approach for implementing a 'sound server'. In my opinion they should be implemented on top of CoreAudio/PulseAudio/..
Roderick
Hi Roderick,
Roderick Colenbrander wrote:
If there are any other open concerns, apart from packaging, let me know, since I really want to get dsound openal merged. :) I would especially like to hear about any remaining concerns that affect wasapi, which is defined in include/audioclient.idl and audiopolicy.idl.
One of the things that worries me, and which you also mentioned on IRC, is whether openal is the right library to implement wasapi. You mentioned that some tasks require a 'server' (for the session guid stuff). Further, there are other features like per-stream volume which sound servers support but openal doesn't (it would be the task of a real sound server). I have the feeling that 'classic' sound libraries (openal, alsa, oss, ..) are not the right approach for implementing a 'sound server'. In my opinion they should be implemented on top of CoreAudio/PulseAudio/..
OpenAL supports per-stream volume. The service is basically needed if an application wants 'audio groups' that all use the same sound card across processes, and to be able to switch them to a different card with one command. Since I don't think the sessions are persistent, this requires an audio service no matter what audio api is used. :)
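For reference, per-stream volume in OpenAL is just per-source gain; a minimal sketch (the helper names here are only for illustration):

#include <AL/al.h>

/* Per-stream volume: each stream has its own OpenAL source. */
void set_stream_volume(ALuint source, float volume)
{
    alSourcef(source, AL_GAIN, volume);   /* scales only this stream */
}

/* Whole-context ("master") volume via the listener gain. */
void set_master_volume(float volume)
{
    alListenerf(AL_GAIN, volume);
}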
Cheers, Maarten.
OpenAL supports per-stream volume. The service is basically needed if an application wants 'audio groups' that all use the same sound card across processes, and to be able to switch them to a different card with one command. Since I don't think the sessions are persistent, this requires an audio service no matter what audio api is used. :)
Speaking about cross-process:
Does wasapi offer any way to manipulate (e.g. volume control) or access another process's streams? With a sound server this might be possible; I don't know if it makes sense though. I guess openal cannot do this because it's just a per-process library.
Hi Stefan,
2009/12/12 Stefan Dösinger stefandoesinger@gmx.at:
OpenAL supports per-stream volume. The service is basically needed if an application wants 'audio groups' that all use the same sound card across processes, and to be able to switch them to a different card with one command. Since I don't think the sessions are persistent, this requires an audio service no matter what audio api is used. :)
Speaking about cross-process:
Does wasapi offer any way to manipulate (e.g. volume control) or access another process's streams? With a sound server this might be possible; I don't know if it makes sense though. I guess openal cannot do this because it's just a per-process library.
It would still be easy if all streams and contexts were registered with wasapi: I could just send a request to mute it, and the service would marshal the request to the other client.
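For comparison, on native Windows the volume mixer reaches other processes' streams by enumerating the audio sessions on an endpoint and talking to each session's ISimpleAudioVolume. A rough sketch of what a "mute everything" request looks like there (this assumes the Windows 7 style IAudioSessionManager2/IAudioSessionEnumerator interfaces from audiopolicy.idl; error handling omitted):

#define COBJMACROS
#include <initguid.h>
#include <windows.h>
#include <mmdeviceapi.h>
#include <audiopolicy.h>

/* Mute every audio session on the default render endpoint,
 * including sessions that belong to other processes. */
static void mute_all_sessions(void)
{
    IMMDeviceEnumerator *devenum = NULL;
    IMMDevice *dev = NULL;
    IAudioSessionManager2 *mgr = NULL;
    IAudioSessionEnumerator *sessions = NULL;
    int i, count = 0;

    CoInitialize(NULL);
    CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                     &IID_IMMDeviceEnumerator, (void**)&devenum);
    IMMDeviceEnumerator_GetDefaultAudioEndpoint(devenum, eRender, eConsole, &dev);
    IMMDevice_Activate(dev, &IID_IAudioSessionManager2, CLSCTX_ALL, NULL, (void**)&mgr);
    IAudioSessionManager2_GetSessionEnumerator(mgr, &sessions);
    IAudioSessionEnumerator_GetCount(sessions, &count);

    for (i = 0; i < count; i++)
    {
        IAudioSessionControl *ctl = NULL;
        ISimpleAudioVolume *vol = NULL;

        IAudioSessionEnumerator_GetSession(sessions, i, &ctl);
        /* The session control object also implements ISimpleAudioVolume. */
        IAudioSessionControl_QueryInterface(ctl, &IID_ISimpleAudioVolume, (void**)&vol);
        ISimpleAudioVolume_SetMute(vol, TRUE, NULL);

        ISimpleAudioVolume_Release(vol);
        IAudioSessionControl_Release(ctl);
    }

    IAudioSessionEnumerator_Release(sessions);
    IAudioSessionManager2_Release(mgr);
    IMMDevice_Release(dev);
    IMMDeviceEnumerator_Release(devenum);
    CoUninitialize();
}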