Though, how would MIDI be handled? I think that OpenAL doesn't handle MIDI...
Good question.
As far as I can see PulseAudio doesn't support MIDI either, but it is certainly available in ALSA, CoreAudio and on Windows Vista, even though it's more tricky to activate HW MIDI on Vista.
Emulating MIDI through a software synthesizer won't fly, as this will make it impossible to use hardware instruments.
A related topic: how do joysticks connected to the gameport work (which is the same hardware connector as MIDI)? I think that a joystick already takes an entirely different path in the Linux kernel and doesn't get anywhere near a sound system, so we don't have to bother about it. Is that correct?
On Sun, Dec 6, 2009 at 10:42 PM, Stefan Dösinger stefandoesinger@gmx.at wrote:
Though, how would MIDI be handled? I think that OpenAL doesn't handle MIDI...
Good question.
As far as I can see PulseAudio doesn't support MIDI either, but it is certainly available in ALSA, CoreAudio and on Windows Vista, even though it's more tricky to activate HW MIDI on Vista.
Emulating MIDI through a software synthesizer won't fly, as this will make it impossible to use hardware instruments.
A related topic: how do joysticks connected to the gameport work (which is the same hardware connector as MIDI)? I think that a joystick already takes an entirely different path in the Linux kernel and doesn't get anywhere near a sound system, so we don't have to bother about it. Is that correct?
Joysticks work using /dev/js or evdev these days. It was hard to find some info on MIDI for Vista but I believe it still works using WinMM. You really need oss or alsa for midi I think.
Roderick
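For reference, the joystick path Roderick mentions never touches the sound stack at all; a minimal sketch of the /dev/input/js interface (the device path /dev/input/js0 is an assumption; the evdev path would read struct input_event from /dev/input/event* instead):

/* Sketch: read events from the kernel joystick interface.
 * The device path is an assumption; no audio API is involved anywhere. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/joystick.h>

int main(void)
{
    struct js_event ev;
    int fd = open("/dev/input/js0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type & JS_EVENT_BUTTON)
            printf("button %d -> %d\n", ev.number, ev.value);
        else if (ev.type & JS_EVENT_AXIS)
            printf("axis %d -> %d\n", ev.number, ev.value);
    }
    close(fd);
    return 0;
}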
On 06.12.2009 at 22:57, Roderick Colenbrander wrote:
Joysticks work using /dev/js or evdev these days. It was hard to find some info on MIDI for Vista but I believe it still works using WinMM. You really need oss or alsa for midi I think.
There's this:
http://www.jcabs-rumblings.com/Programming/OpenALFeatures.html
But it seems to be an early design prototype. All other Google hits say that MIDI is not supported by OpenAL. If you want to play MIDI files with OpenAL you have to synthesize them first (if you want OpenAL to apply 3D effects on top of that) or use a different API.
On Sun, Dec 6, 2009 at 11:19 PM, Stefan Dösinger stefandoesinger@gmx.at wrote:
On 06.12.2009 at 22:57, Roderick Colenbrander wrote:
Joysticks work using /dev/js or evdev these days. It was hard to find some info on MIDI for Vista but I believe it still works using WinMM. You really need oss or alsa for midi I think.
There's this:
http://www.jcabs-rumblings.com/Programming/OpenALFeatures.html
But it seems to be an early design prototype. All other Google hits say that MIDI is not supported by OpenAL. If you want to play MIDI files with OpenAL you have to synthesize them first (if you want OpenAL to apply 3D effects on top of that) or use a different API.
MIDI playback isn't that important; the main feature is MIDI access to hardware like keyboards, drums, light equipment and so on. It is basically the 'RS-232' of the audio/video world.
Roderick
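For reference, the kind of raw hardware access Roderick is talking about is what ALSA's rawmidi interface provides; a minimal sketch (the "hw:1,0,0" device name is an assumption and depends on where the card's MIDI port sits):

/* Sketch: send a note-on/note-off pair to an external MIDI device
 * through ALSA rawmidi. The device name is card-specific. */
#include <unistd.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_rawmidi_t *out = NULL;
    unsigned char note_on[]  = { 0x90, 60, 100 }; /* channel 0, middle C, velocity 100 */
    unsigned char note_off[] = { 0x80, 60, 0 };

    if (snd_rawmidi_open(NULL, &out, "hw:1,0,0", 0) < 0)
        return 1;

    snd_rawmidi_write(out, note_on, sizeof(note_on));
    sleep(1);
    snd_rawmidi_write(out, note_off, sizeof(note_off));
    snd_rawmidi_drain(out);
    snd_rawmidi_close(out);
    return 0;
}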
Hello,
Roderick Colenbrander wrote:
On Sun, Dec 6, 2009 at 10:42 PM, Stefan Dösinger stefandoesinger@gmx.at wrote:
Though, how would MIDI be handled? I think that OpenAL doesn't handle MIDI...
Good question.
As far as I can see PulseAudio doesn't support MIDI either, but it is certainly available in ALSA, CoreAudio and on Windows Vista, even though it's more tricky to activate HW MIDI on Vista.
Emulating MIDI through a software synthesizer won't fly, as this will make it impossible to use hardware instruments.
A related topic: how do joysticks connected to the gameport work (which is the same hardware connector as MIDI)? I think that a joystick already takes an entirely different path in the Linux kernel and doesn't get anywhere near a sound system, so we don't have to bother about it. Is that correct?
Joysticks work using /dev/js or evdev these days. It was hard to find some info on MIDI for Vista but I believe it still works using WinMM. You really need oss or alsa for midi I think.
I wouldn't be surprised if this is still the case; we could keep the MIDI interfaces around and just report 0 wave in/out devices for oss/coreaudio/alsa once we complete wine7audio.drv.
Cheers, Maarten.
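A rough sketch of what that could look like inside one of those drivers, using the wodMessage/modMessage entry points and the WODM_*/MODM_* messages from mmddk.h (the bodies here are purely illustrative, not actual Wine code):

/* Sketch: stop advertising wave devices while keeping the MIDI side alive.
 * Only the device-count messages are shown; everything else stays as it is today. */
#include <windows.h>
#include <mmddk.h>

DWORD WINAPI wodMessage(UINT wDevID, UINT wMsg, DWORD_PTR dwUser,
                        DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
    switch (wMsg) {
    case WODM_GETNUMDEVS:
        return 0;                     /* wave out is handled elsewhere now */
    default:
        return MMSYSERR_NOTSUPPORTED;
    }
}

DWORD WINAPI modMessage(UINT wDevID, UINT wMsg, DWORD_PTR dwUser,
                        DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
    switch (wMsg) {
    case MODM_GETNUMDEVS:
        return 1;                     /* keep reporting the hardware MIDI port */
    /* ... MODM_OPEN, MODM_DATA, MODM_LONGDATA, etc. unchanged ... */
    default:
        return MMSYSERR_NOTSUPPORTED;
    }
}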
On 06.12.2009 at 23:21, Maarten Lankhorst wrote:
I wouldn't be surprised if this is still the case; we could keep the MIDI interfaces around and just report 0 wave in/out devices for oss/coreaudio/alsa once we complete wine7audio.drv.
So you mean keeping the existing infrastructure around just for MIDI and having something entirely different for the rest of the audio features?
Apparently winmm is still the only way to do MIDI on Windows Vista, and if winmm uses wasapi and friends there must be some way to tell the devices to talk to MIDI instruments.
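For reference, the winmm MIDI path on the Windows side is the midiOut* family; a minimal sketch that pushes one note through the default MIDI mapper (link against winmm):

/* Sketch: play a single note via winmm's MIDI out API. */
#include <windows.h>
#include <mmsystem.h>

int main(void)
{
    HMIDIOUT midi;
    if (midiOutOpen(&midi, MIDI_MAPPER, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return 1;

    /* low byte 0x90 = note on, channel 0; 0x3C = middle C; 0x7F = velocity */
    midiOutShortMsg(midi, 0x007F3C90);
    Sleep(1000);
    midiOutShortMsg(midi, 0x00003C80);   /* matching note off */

    midiOutClose(midi);
    return 0;
}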
On Sun, Dec 6, 2009 at 11:34 PM, Stefan Dösinger stefandoesinger@gmx.at wrote:
On 06.12.2009 at 23:21, Maarten Lankhorst wrote:
I wouldn't be surprised if this is still the case; we could keep the MIDI interfaces around and just report 0 wave in/out devices for oss/coreaudio/alsa once we complete wine7audio.drv.
So you mean keeping the existing infrastructure around just for MIDI and having something entirely different for the rest of the audio features?
Apparently winmm is still the only way to do MIDI on Windows Vista, and if winmm uses wasapi and friends there must be some way to tell the devices to talk to MIDI instruments.
I mainly wonder how Vista is able to sync MIDI with audio playback. I would say that's an important feature, e.g. being able to enable light effects on a stage in sync with the music you play from your computer.
Roderick
On Sun, Dec 6, 2009 at 4:37 PM, Roderick Colenbrander <thunderbird2k@gmail.com> wrote:
On Sun, Dec 6, 2009 at 11:34 PM, Stefan Dösinger stefandoesinger@gmx.at wrote:
On 06.12.2009 at 23:21, Maarten Lankhorst wrote:
I wouldn't be surprised if this is still the case; we could keep the MIDI interfaces around and just report 0 wave in/out devices for oss/coreaudio/alsa once we complete wine7audio.drv.
So you mean keeping the existing infrastructure around just for MIDI and having something entirely different for the rest of the audio features?
Apparently winmm is still the only way to do MIDI on Windows Vista, and
if winmm uses wasapi and friends there must be some way to tell the devices to talk to MIDI instruments.
I mainly wonder how Vista is able to sync MIDI with audio playback. I would say that's an important feature, e.g. being able to enable light effects on a stage in sync with the music you play from your computer.
Roderick
DirectMusic is the other way to work with MIDI I think.
On Sun, 6 Dec 2009, Stefan Dösinger wrote: [...]
So you mean keeping the existing infrastructure around just for midi and have something entirely different for the rest of the audio features?
That might not be too bad in that we could still drop all the current winmm audio backends that don't support MIDI. At first glance: wineaudioio, wineesd, winejack, and winenas. So we'd just keep the MIDI portion of winealsa, winecoreaudio and wineoss.
Francois Gouget wrote:
On Sun, 6 Dec 2009, Stefan Dösinger wrote: [...]
So you mean keeping the existing infrastructure around just for midi and have something entirely different for the rest of the audio features?
That might not be too bad in that we could still drop all the current winmm audio backends that don't support MIDI. At first glance: wineaudioio, wineesd, winejack, and winenas. So we'd just keep the MIDI portion of winealsa, winecoreaudio and wineoss.
Can we drop all current audio drivers now that don't have midi?
Any driver that doesn't implement the complete winmm API and direct sound shouldn't be in the current tree anyway.
Hello,
2009/12/8 Robert Reif reif@earthlink.net: ...
Can we drop all current audio drivers now that don't have midi? Any driver that doesn't implement the complete winmm API and direct sound shouldn't be in the current tree anyway.
I wish, but AJ wouldn't even let me delete winenas.drv, which has been obviously broken for many years, even after I pointed out that it failed 'play test sound' in winecfg after you set it up. I guess it has to wait until wine7audio hits. The lack of an accelerated dsound interface isn't that big a deal; it just means one more copy of the primary buffer is kept around. Wine's dsound should handle it correctly; I tested that on Windows some time back too.
Our dsound won't run on Windows though: MinGW gets horribly confused by DirectSoundCreate8@24 and DirectSoundCreate@24, and no matter what I tried, either dynamic linking or static linking would fail (or both...).
Cheers, Maarten.
Maarten Lankhorst wrote:
The lack of an accelerated dsound interface isn't that big a deal; it just means
I wish accelerated direct sound had been a big issue. Both OSS and ALSA could theoretically support it under very limited circumstances, but there was no guarantee that it could be supported in all circumstances.
With full direct sound hardware acceleration in the low-level driver, all software mixing would be bypassed in the current direct sound implementation; direct sound would just forward calls from the application to the driver. That's where the 'direct' in direct sound originally came from, back in the old days.
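Concretely, hardware mixing is something the application has to ask for per buffer; a rough sketch of that request (error handling trimmed, and whether it succeeds depends entirely on the driver reporting free hardware buffers):

/* Sketch: ask DirectSound for a hardware-located secondary buffer.
 * If the driver exposes hardware buffers the calls are just forwarded;
 * otherwise the request fails and the app falls back to software mixing. */
#define COBJMACROS
#include <windows.h>
#include <dsound.h>

IDirectSoundBuffer *create_hw_buffer(IDirectSound8 *ds, HWND hwnd, WAVEFORMATEX *fmt)
{
    DSBUFFERDESC desc;
    IDirectSoundBuffer *buf = NULL;

    IDirectSound8_SetCooperativeLevel(ds, hwnd, DSSCL_PRIORITY);

    ZeroMemory(&desc, sizeof(desc));
    desc.dwSize = sizeof(desc);
    desc.dwFlags = DSBCAPS_LOCHARDWARE;        /* demand a hardware buffer */
    desc.dwBufferBytes = fmt->nAvgBytesPerSec; /* one second of audio */
    desc.lpwfxFormat = fmt;

    if (FAILED(IDirectSound8_CreateSoundBuffer(ds, &desc, &buf, NULL)))
        return NULL;                           /* no free hardware buffer */
    return buf;
}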
On Mon, 7 Dec 2009, Robert Reif wrote: [...]
Can we drop all current audio drivers now that don't have midi? Any driver that doesn't implement the complete winmm API and direct sound shouldn't be in the current tree anyway.
I don't see why drivers that don't support MIDI should be dropped. Lack of MIDI support does not hamper games, media players and email arrival notifiers. So a driver without MIDI support is still useful for a good 90% of the applications out there.
As for not implementing the 'complete winmm API' I'm not sure what you mean. Also, as far as I know, DirectSound works on top of all our backend drivers.
Francois Gouget wrote:
Also, as far as I know, DirectSound works on top of all our backend drivers.
It works through the WAVE API on most drivers which requires software mixing and format conversions. Even the direct sound drivers only implement a single hardware buffer which means that even direct sound goes through the software mixer and format conversions.
If any direct sound driver implemented multiple buffers of any format, there would be no software mixing done in the direct sound dll. Everything would just pass through the direct sound dll directly to the driver untouched. We would also get multiple open support since it wouldn't matter which application the buffers came from.
OpenAL and PulseAudio probably have the capability to implement a direct sound driver that supports multiple opens and mixing of multiple streams of different formats, which would bypass the software mixing and format conversion path in the current direct sound dll implementation, but I guess we will never know. Unfortunately OSS and ALSA don't, except under very limited conditions (and only if the drivers implemented it, which they don't).
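For comparison, with OpenAL several streams of different formats are just several sources, and the mixing happens behind the API; a rough sketch, assuming an ALC device and context are already current and the buffers were filled elsewhere:

/* Sketch: two OpenAL sources playing at once; the OpenAL implementation
 * does the mixing and any sample-rate/format conversion underneath. */
#include <AL/al.h>

void play_two(ALuint pcm_buf_a, ALuint pcm_buf_b)
{
    ALuint src[2];

    alGenSources(2, src);
    alSourcei(src[0], AL_BUFFER, (ALint)pcm_buf_a);  /* e.g. 22 kHz mono */
    alSourcei(src[1], AL_BUFFER, (ALint)pcm_buf_b);  /* e.g. 44.1 kHz stereo */
    alSourcef(src[0], AL_GAIN, 0.5f);                /* per-stream volume for free */

    alSourcePlay(src[0]);
    alSourcePlay(src[1]);
}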
Hi Reif,
2009/12/8 Robert Reif reif@earthlink.net:
Francois Gouget wrote:
Also, as far as I know, DirectSound works on top of all our backend drivers.
It works through the WAVE API on most drivers which requires software mixing and format conversions. Even the direct sound drivers only implement a single hardware buffer which means that even direct sound goes through the software mixer and format conversions.
GASP, that's not as big a problem as you make it out to be. Hell, winealsa even emulates a ring buffer with read calls; see f27d88e16fe.
If any direct sound driver implemented multiple buffers of any format, there would be no software mixing done in the direct sound dll. Everything would just pass through the direct sound dll directly to the driver untouched. We would also get multiple open support since it wouldn't matter which application the buffers came from.
The dsound timer would still tick, and most of the time the app would still use the crappy remixer in dsound since games use DSBCAPS_LOCSOFTWARE these days. Even more fundamentally, our winmm drivers are crap: full of literally copied wineoss code, just renamed and sedded, but never maintained. Spot the similar #if 0's...
The wave in/out drivers will be thrown out once we can forward to WASAPI; since the MIDI code is still handled through winmm AFAICT, a few of them will continue to live for that purpose.
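A rough sketch of the WASAPI call sequence such a forwarding backend would end up making (shared-mode render on the default endpoint; error handling, cleanup and the actual write loop are left out):

/* Sketch: open the default render endpoint through WASAPI in shared mode. */
#define COBJMACROS
#include <windows.h>
#include <initguid.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

IAudioClient *open_default_render(void)
{
    IMMDeviceEnumerator *devenum = NULL;
    IMMDevice *dev = NULL;
    IAudioClient *client = NULL;
    WAVEFORMATEX *fmt = NULL;

    CoInitialize(NULL);
    CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                     &IID_IMMDeviceEnumerator, (void **)&devenum);
    IMMDeviceEnumerator_GetDefaultAudioEndpoint(devenum, eRender, eConsole, &dev);
    IMMDevice_Activate(dev, &IID_IAudioClient, CLSCTX_ALL, NULL, (void **)&client);

    IAudioClient_GetMixFormat(client, &fmt);
    /* 500 ms shared-mode stream in the engine's mix format */
    IAudioClient_Initialize(client, AUDCLNT_SHAREMODE_SHARED, 0,
                            5000000, 0, fmt, NULL);

    IMMDevice_Release(dev);
    IMMDeviceEnumerator_Release(devenum);
    return client;
}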
Can I please have some new discussion point instead of you bringing up the same things over and over? I'm growing tired of having to repeat myself so much.
Maarten Lankhorst wrote:
Hi Reif,
2009/12/8 Robert Reif reif@earthlink.net:
Francois Gouget wrote:
Also, as far as I know, DirectSound works on top of all our backend drivers.
It works through the WAVE API on most drivers which requires software mixing and format conversions. Even the direct sound drivers only implement a single hardware buffer which means that even direct sound goes through the software mixer and format conversions.
GASP, that's not as big a problem as you make it out to be. Hell, winealsa even emulates a ring buffer with read calls; see f27d88e16fe.
Yes, a single ring buffer for all the software-mixed direct sound buffers. A good driver implementation would have a ring buffer for every direct sound buffer, so there would be no need for any software mixing. That path is in the direct sound dll now but is not used because no driver supports it. In fact, when you rewrite the dll, you should be removing all existing code paths except this one.
If any direct sound driver implemented multiple buffers of any format, there would be no software mixing done in the direct sound dll. Everything would just pass through the direct sound dll directly to the driver untouched. We would also get multiple open support since it wouldn't matter which application the buffers came from.
The dsound timer would still tick, and most of the time the app would still use the crappy remixer in dsound since games use DSBCAPS_LOCSOFTWARE these days. Even more fundamentally, our winmm drivers are crap: full of literally copied wineoss code, just renamed and sedded, but never maintained. Spot the similar #if 0's...
The wave in/out drivers will be thrown out once we can forward to WASAPI; since the MIDI code is still handled through winmm AFAICT, a few of them will continue to live for that purpose.
Can I please have some new discussion point instead of you bringing up the same things over and over? I'm growing tired of having to repeat myself so much.
I know this discussion is academic at this point, but sound in Wine is poor not because of a bad model but because of really poor and incomplete driver implementations. With the right driver, the current model would have had no issues for pre-Vista applications. This is the same problem Microsoft had, and it is one of the reasons they changed their audio system model.
With the introduction of Vista, a new model is required, but just like on Vista, audio capabilities will be significantly reduced and backwards compatibility will become a big issue requiring workarounds. Putting in new incomplete drivers that don't support everything required will just repeat the same mistakes we already made. I don't want to see a new implementation start out with the same problems the old system had because people didn't learn any lessons. We are already talking about hacks to fix things before the new implementation is even fully conceptualized. Not a good start.
On 08.12.2009 at 13:06, Robert Reif wrote:
Yes, a single ring buffer for all the software-mixed direct sound buffers. A good driver implementation would have a ring buffer for every direct sound buffer, so there would be no need for any software mixing. That path is in the direct sound dll now but is not used because no driver supports it. In fact, when you rewrite the dll, you should be removing all existing code paths except this one.
Maarten, please correct me if I am wrong, but I think the reason why the ALSA driver does its own mixing is that ALSA does sample rate and similar conversions only when NOT using the mmap API. So you either get ALSA-side mixing or fast access using mmap. Back in the day it was considered that mixing ourselves and then using mmap was faster than not using mmap and letting ALSA mix things (which happens in software as well in most cases).
Hi Stefan,
Stefan Dösinger wrote:
On 08.12.2009 at 13:06, Robert Reif wrote:
Yes, a single ring buffer for all the software-mixed direct sound buffers. A good driver implementation would have a ring buffer for every direct sound buffer, so there would be no need for any software mixing. That path is in the direct sound dll now but is not used because no driver supports it. In fact, when you rewrite the dll, you should be removing all existing code paths except this one.
Maarten, please correct me if I am wrong, but I think the reason why the ALSA driver does its own mixing is that ALSA does sample rate and similar conversions only when NOT using the mmap API. So you either get ALSA-side mixing or fast access using mmap. Back in the day it was considered that mixing ourselves and then using mmap was faster than not using mmap and letting ALSA mix things (which happens in software as well in most cases).
Wrong :)
ALSA doesn't allow you to specify buffer sizes or granularity. Furthermore, rate resampling in ALSA results in horrible things happening, you cannot reliably set an ALSA buffer in looping mode, and even if you could, you cannot randomly access its memory; on top of that, buffer notifications don't work, and you cannot set per-stream volume... I probably missed a few other reasons, but those are the biggest ones.
Cheers, Maarten.
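To make the buffer size point concrete: with ALSA you can only ask for something close to what you want and then take whatever comes back, roughly like this:

/* Sketch: the *_near calls round the requested sizes to whatever the
 * hardware (or plugin chain) supports; you cannot demand exact values. */
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    snd_pcm_uframes_t buffer = 4410, period = 441;   /* what dsound would like */
    unsigned int rate = 44100;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, NULL);
    snd_pcm_hw_params_set_buffer_size_near(pcm, hw, &buffer);
    snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, NULL);
    snd_pcm_hw_params(pcm, hw);

    printf("got rate %u, buffer %lu frames, period %lu frames\n",
           rate, (unsigned long)buffer, (unsigned long)period);
    snd_pcm_close(pcm);
    return 0;
}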
Stefan Dösinger wrote:
A related topic: how do joysticks connected to the gameport work (which is the same hardware connector as MIDI)? I think that a joystick already takes an entirely different path in the Linux kernel and doesn't get anywhere near a sound system, so we don't have to bother about it. Is that correct?
Well, if you're interested... no, the gameport is *not* the same hardware connector as MIDI. A MIDI cable has a 5-pin circular DIN connector. A game port has a 15-pin D-sub connector. They're not the same thing.
However, some of the pins of the original 15-pin game port were redundant. Early soundcard manufacturers decided that they could repurpose those pins, and let MIDI users use a breakout cable (from the soundcard's "custom" 15-pin connector to a standard 15-pin gameport connector and two 5-pin MIDI connectors). It was cheap, and a full-size MIDI connector probably wouldn't fit on a regular soundcard without using breakout cables anyway. And not many needed it.
When gameports were provided by soundcards, the joystick interface was usually provided by the soundcard's drivers, although it could be in a different kernel module. After all, the joystick and the MIDI port were driven by separate pieces of hardware; they just got "multiplexed" onto the same 15-pin connector through appropriate circuitry, with the assumption that the breakout cable would separate the signals again.
By the time this stuff got integrated into motherboards, the multiplexed 15-pin gameport had become a standard. Yet still, two separate functional units drive the two separate sets of pins on the game port, so the kernel does see two separate hardware interfaces, controlled by separate pieces of software.
So while I'm not sure you've gotten the facts completely straight, it's correct that you don't have to worry about it.
And I suspect you will need ALSA to do real MIDI... I guess it's like on Windows, where the wave APIs were redesigned, but if you want to do MIDI, you still need to use the exact same multimedia interface as in Windows 3.x. Likewise, on Linux you can do wave with OpenAL and PulseAudio, but for MIDI you still need to use OSS/ALSA. I don't necessarily consider that a problem, actually. (If you play MIDI through ALSA and route it into a softsynth like TiMidity, then TiMidity could still be routed into PulseAudio, etc...)
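A rough sketch of that routing through the ALSA sequencer (the 128:0 destination is an assumption; it is simply where TiMidity usually registers when started with 'timidity -iA'):

/* Sketch: send one note through the ALSA sequencer to a softsynth port,
 * which can then render it and play the result through PulseAudio etc. */
#include <alsa/asoundlib.h>

int main(void)
{
    snd_seq_t *seq;
    snd_seq_event_t ev;
    int port;

    if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0) < 0)
        return 1;
    snd_seq_set_client_name(seq, "midi-sketch");
    port = snd_seq_create_simple_port(seq, "out",
                                      SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
                                      SND_SEQ_PORT_TYPE_MIDI_GENERIC);
    snd_seq_connect_to(seq, port, 128, 0);    /* assumed TiMidity port */

    snd_seq_ev_clear(&ev);
    snd_seq_ev_set_source(&ev, port);
    snd_seq_ev_set_subs(&ev);
    snd_seq_ev_set_direct(&ev);
    snd_seq_ev_set_noteon(&ev, 0, 60, 100);   /* channel 0, middle C */
    snd_seq_event_output(seq, &ev);
    snd_seq_drain_output(seq);

    snd_seq_close(seq);
    return 0;
}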