On 11/8/21 11:41 AM, Rémi Bernon wrote:
On 11/8/21 17:36, Rémi Bernon wrote:
On 10/27/21 22:34, Zebediah Figura (she/her) wrote:
On 10/27/21 10:25, Rémi Bernon wrote:
Planet Coaster requests an output format with a 44100 Hz sample rate for user-provided music, which may not match what the files decode to.
Signed-off-by: Rémi Bernon <rbernon@codeweavers.com>
This one looks good, although it'd be nice for this description to be in the code itself.
It could also be split into front and backend parts.
So it ends up being a little more complicated than that, and I don't think hardcoding a list of supported audio formats is the right way to do it. In order to support all possible user music formats, we would have to hardcode every possible variation of media types.
As far as I could see (and with tests from https://source.winehq.org/patches/data/219024), native streams only enumerate their native media types. Then, more media types are supported by the IMFSourceReader, but probably using dynamically allocated decoder / converter MF transforms, or MF topology elements, which I believe do more advanced logic than what src_reader_SetCurrentMediaType does.
Doing it this way would mean instantiating an audio_converter MF transform, for instance, and would then make the audioresampler element useless. This seems to be the direction the existing code is generally going, but it kind of defeats the idea of using GStreamer and its dynamically created pipelines.
Another way I can see to make this dynamic, with an always-present audioresampler element, would instead be to change how we match IMFSourceReader stream media types: delegate the type matching to winegstreamer, and ultimately to GStreamer through gst_pad_query_caps calls. I'm not sure how we can do that; maybe with a custom IMFMediaTypeHandler, though that's not what it's supposed to do.
Actually, it should be possible to do something not too ugly with an audioresampler element, allowing audio conversion by accepting media types directly in source_reader_set_compatible_media_type, as MSDN seems to suggest [1] has been done since Win8 (and as the tests confirm).
[1] https://docs.microsoft.com/en-us/windows/win32/api/mfreadwrite/nf-mfreadwrit...
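From the application side, what the MSDN page describes amounts to something like this sketch (error handling trimmed; this is illustrative application code, not Wine test code):

```c
/* Sketch of what an application like Planet Coaster does: request a
 * PCM output type at 44100 Hz on the source reader, and rely on the
 * reader (since Win8) to insert a decoder/resampler when the stream's
 * native type differs. */
#define COBJMACROS
#include <mfapi.h>
#include <mfreadwrite.h>

static HRESULT request_44100_output(IMFSourceReader *reader)
{
    IMFMediaType *type;
    HRESULT hr;

    hr = MFCreateMediaType(&type);
    if (FAILED(hr)) return hr;

    IMFMediaType_SetGUID(type, &MF_MT_MAJOR_TYPE, &MFMediaType_Audio);
    IMFMediaType_SetGUID(type, &MF_MT_SUBTYPE, &MFAudioFormat_PCM);
    IMFMediaType_SetUINT32(type, &MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);

    /* The requested type need not be one the stream natively enumerates. */
    hr = IMFSourceReader_SetCurrentMediaType(reader,
            MF_SOURCE_READER_FIRST_AUDIO_STREAM, NULL, type);

    IMFMediaType_Release(type);
    return hr;
}
```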
I'm not entirely sure I understand what you're proposing?
The patch as-is looks fine and is perfectly in line with what we do already. We basically have the frontend (mfplat, quartz, wmvcore) request a specific media type from the backend, and leave the frontend wholly responsible for deciding *which* media type to request.
In terms of mfplat separating its demuxers and decoders (quartz and wmvcore really do this too, but mfplat's design makes that fact harder to ignore), I've thus far been inclined to keep demuxing and decoding within the same GStreamer pipeline as much as we can. In particular I'm worried about the latency of marshalling buffers between two or three threads (after all, we already have latency problems even on high-end processors), and the added code complexity is not insignificant either.