This seemed like a useful overview of how the various layers relate to each other: http://tuxradar.com/content/how-it-works-linux-audio-explained
On 9 April 2010 20:04, Dan Kegel dank@kegel.com wrote:
> This seemed like a useful overview of how the various layers relate to each other: http://tuxradar.com/content/how-it-works-linux-audio-explained
A cursory glance does not reveal any mention of libalsa as a valid "sound input", and I'm not sure, but I think it falsely implies that JACK can't use an OSS backend.
On 9 April 2010 11:04, Dan Kegel dank@kegel.com wrote:
> This seemed like a useful overview of how the various layers relate to each other: http://tuxradar.com/content/how-it-works-linux-audio-explained
This is somewhat confusing:
* PulseAudio is an audio mixer that provides finer-grained control over volume (being able to set it per application), etc. (a minimal client sketch follows this list).
* GStreamer is not solving the same problem as PulseAudio (PulseAudio is not a multimedia framework). GStreamer adds support for playing, synchronising and encoding audio and video media, so it fulfils a different role in the audio landscape -- you cannot decode Ogg files directly through PulseAudio, for example (see the GStreamer sketch after the list).
* The Xine framework is like GStreamer and FFmpeg/MPlayer. AFAICS, this is independent of GStreamer and thus sits on top of PulseAudio, ALSA, JACK, etc. I can't see anything that says that xine-lib calls into GStreamer.
* Phonon is an API that abstracts the multimedia frameworks and sits on top of either GStreamer or Xine (or other supported backends), with Xine being the default.
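To make that split concrete, here is a minimal PulseAudio client sketch using the simple API (the application name "demo" and the stream name are arbitrary placeholders). It only illustrates that PulseAudio accepts already-decoded PCM and exposes the stream per application, which is where the per-application volume control lives:

#include <pulse/simple.h>
#include <pulse/error.h>

int main(void)
{
    /* Raw signed 16-bit little-endian PCM, 44.1 kHz stereo. */
    pa_sample_spec ss = { .format = PA_SAMPLE_S16LE, .rate = 44100, .channels = 2 };
    int err;

    /* The application name ("demo") is what shows up as a separate stream
       in the mixer, where its volume can be set independently. */
    pa_simple *s = pa_simple_new(NULL, "demo", PA_STREAM_PLAYBACK, NULL,
                                 "playback", &ss, NULL, NULL, &err);
    if (!s)
        return 1;

    /* PulseAudio only takes decoded samples; an Ogg file would have to be
       decoded first (by GStreamer, libvorbis, ...) before being written here. */
    static short silence[44100 * 2];          /* one second of stereo silence */
    pa_simple_write(s, silence, sizeof(silence), &err);
    pa_simple_drain(s, &err);
    pa_simple_free(s);
    return 0;
}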
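And a sketch of the framework side, assuming GStreamer with its playbin element (called playbin2 in the 0.10 series) and the Ogg/Vorbis plugins installed; the file URI is just a placeholder. playbin demuxes and decodes, then hands the PCM to an audio sink such as pulsesink or alsasink, which is exactly the layering being described:

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* playbin does the demuxing/decoding with whatever plugins are installed
       and pushes the decoded audio to an audio sink (pulsesink, alsasink, ...). */
    GstElement *play = gst_element_factory_make("playbin", "play");
    if (!play)
        return 1;
    g_object_set(play, "uri", "file:///tmp/example.ogg", NULL);  /* placeholder path */

    gst_element_set_state(play, GST_STATE_PLAYING);

    /* Wait until playback ends or an error is posted on the bus. */
    GstBus *bus = gst_element_get_bus(play);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                                 GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(play, GST_STATE_NULL);
    gst_object_unref(play);
    return 0;
}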
- Reece
On Fri, Apr 9, 2010 at 12:04 PM, Dan Kegel dank@kegel.com wrote:
> This seemed like a useful overview of how the various layers relate to each other: http://tuxradar.com/content/how-it-works-linux-audio-explained
http://insanecoding.blogspot.com/2009/06/state-of-sound-in-linux-not-so-sorr... is another good one.
On 9 April 2010 20:30, Damjan Jovanovic damjan.jov@gmail.com wrote:
> http://insanecoding.blogspot.com/2009/06/state-of-sound-in-linux-not-so-sorr... is another good one.
The diagrams there are generally much more representative of the reality of audio APIs in Linux systems. A few gems: "As should be obvious, these sound servers today do nothing except add latency, and should be done away with." "Compare the insanity that is PulseAudio ..."
:)
On Fri, Apr 9, 2010 at 12:40 PM, Ben Klein shacklein@gmail.com wrote:
> The diagrams there are generally much more representative of the reality of audio APIs in Linux systems. A few gems: "As should be obvious, these sound servers today do nothing except add latency, and should be done away with." "Compare the insanity that is PulseAudio ..."
> :)
Whether we like it or not, audio servers are the future. They make things a lot easier for users: users don't care about the card, they only see a microphone, speakers and other input/output devices, and don't need to know how it is all wired up. The audio server (or whatever thing you use as a 'router') takes care of all of that.
Windows Vista/Win7 also use a sound server quite similar to PulseAudio. In Wine, Maarten is busy implementing these new Windows APIs, and the older winmm (and perhaps dsound) will be layered on top of them, which is also what Windows does. The design will use OpenAL; whether that's a good choice is another discussion.
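For anyone unfamiliar with it, here is a minimal OpenAL playback sketch -- just the generic shape of the API, not Wine's actual code -- assuming an implementation such as openal-soft is installed. It plays one second of a 440 Hz tone on the default device:

#include <AL/al.h>
#include <AL/alc.h>
#include <math.h>

int main(void)
{
    /* Open the default output device and make a context current. */
    ALCdevice *dev = alcOpenDevice(NULL);
    ALCcontext *ctx = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(ctx);

    /* One second of a 440 Hz tone, 16-bit mono at 44.1 kHz. */
    static short pcm[44100];
    for (int i = 0; i < 44100; i++)
        pcm[i] = (short)(32000 * sin(2.0 * 3.14159265 * 440.0 * i / 44100.0));

    ALuint buf, src;
    alGenBuffers(1, &buf);
    alBufferData(buf, AL_FORMAT_MONO16, pcm, sizeof(pcm), 44100);
    alGenSources(1, &src);
    alSourcei(src, AL_BUFFER, (ALint)buf);
    alSourcePlay(src);

    /* Busy-wait until the source has finished playing. */
    ALint state = AL_PLAYING;
    while (state == AL_PLAYING)
        alGetSourcei(src, AL_SOURCE_STATE, &state);

    alDeleteSources(1, &src);
    alDeleteBuffers(1, &buf);
    alcMakeContextCurrent(NULL);
    alcDestroyContext(ctx);
    alcCloseDevice(dev);
    return 0;
}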
Roderick