On the client side, the frame aperture may be included in or omitted from the media type. This can be used to include or drop the frame padding, and the VideoProcessor MFT supports both configurations.
On the GStreamer side, we can only use the unpadded frame size in the caps, because that is how decoders work, and padding instead needs to be carried as a property of the input/output video buffers. The frame size then needs to be adjusted to be consistent between input and output, with any difference treated as extra padding to be added on one side or the other.
-- v3:
winegstreamer: Respect video format padding for input buffers too.
winegstreamer: Exclude padding from wg_format video frame size.
winegstreamer: Use video info stride in buffer meta rather than videoflip.
winegstreamer: Use a new wg_video_buffer_pool class to add buffer meta.
winegstreamer: Update the output_format when stream format changes.
winegstreamer: Pass optional GstVideoInfo to read_transform_output_data.
winegstreamer: Split read_transform_output_data in two helpers.
winegstreamer: Introduce a new sample_needs_copy_buffer helper.
winegstreamer: Introduce a new sample_set_flags_from_buffer helper.
mf/tests: Add more video processor tests with aperture.
From: Rémi Bernon <rbernon@codeweavers.com>
--- dlls/mf/tests/resource.rc | 4 ++ dlls/mf/tests/rgb32frame-bogus.bmp | Bin 0 -> 36918 bytes dlls/mf/tests/transform.c | 66 +++++++++++++++++++++++++++-- 3 files changed, 67 insertions(+), 3 deletions(-) create mode 100644 dlls/mf/tests/rgb32frame-bogus.bmp
diff --git a/dlls/mf/tests/resource.rc b/dlls/mf/tests/resource.rc index d9595c68980..a8fe550b475 100644 --- a/dlls/mf/tests/resource.rc +++ b/dlls/mf/tests/resource.rc @@ -109,6 +109,10 @@ rgb32frame-flip.bmp RCDATA rgb32frame-flip.bmp /* @makedep: rgb32frame-grabber.bmp */ rgb32frame-grabber.bmp RCDATA rgb32frame-grabber.bmp
+/* Generated from running the tests on Windows */ +/* @makedep: rgb32frame-bogus.bmp */ +rgb32frame-bogus.bmp RCDATA rgb32frame-bogus.bmp + /* Generated from running the tests on Windows */ /* @makedep: rgb555frame.bmp */ rgb555frame.bmp RCDATA rgb555frame.bmp diff --git a/dlls/mf/tests/rgb32frame-bogus.bmp b/dlls/mf/tests/rgb32frame-bogus.bmp new file mode 100644 index 0000000000000000000000000000000000000000..d0c5c4e4d64172ada2dc20b4a06f3e359a5d9f2f GIT binary patch literal 36918 zcmeI4KW-H<5XOxn8mK7wQ&51!3332>Xd-%U07u{ih+_avPLcyqMx;kVf+A7gvLTs} z8P@SPv-aA%o+nM5@qV7Mf8XY9ltg*?YX4LC``RBrz8K@}Ron?r$Ir&s)%DBd^5c=e zoPZN>0#3jQH~}Z%1e|~qZ~{)i2{-{K-~={JV7J=^kNFI&H_7l}2$pcqxMx=WR*n|W zF4kE4wZ3j?{#G7a^=&Qux9Z#0`M2(y^XJb)t6#KE`#x#qPkpGbk2W8z{E0XDpdD#_ z@8ewlR(@K)NN?@e>bhF|tvtvF?P%>+dfnRiTX|^xBK?<P2qnYHzvP};s!cv<$K%mA zB&__A+C6JfzpVUa{l{mg!9&l$%3rdUexUimub-jku=3Bv4E0Is!f+n85LW)R&o2Ey z^V<8C{nyH0o&(KMpQH}0uhsR&U(1K|(hoGJ^|8_Umwk?;e6sS7)s6Y=^V2?C`Nw$1 zd`Zp6>TP{KH-9Uyls~cS)_$$5YlXj+N6eShytQBMb!+Eu<&p9ycK!Wu7;=V{f6hIu zsaDLF)co_|Bus^s|5PoXn`V5$CFYNnf2<zOe>-~^JkkuT{8Mbn2krRr$Lr8jSoxP? zr1k4sKa4*PU13=HH+NR!-(%N4$7c9j@BPZYb<e-DZ_a<==Z5%tGxHa9uAhT)-g<v^ zvd+vO<s<z-bF}`-b;bEx`Jg^YowZ-3b<Og(@{oR@xwYTSb(`aF<$?MnbYF-2VNO{2 z&*{0PwbAc8giliUeb@^ZVdYQjeZ1tT<v@N|`O|uId@}y}+2el(R{mO+s83S&=648N z4lDnqm`Fd+JdDrbHH4La>t{oKlDehOwN?IBj?xb_xAt@XdETc`pQO(D=i$Hf{V#n) zbL&30x^C(FAN5J<oIiJJX&%xKG+%n3+J2lWo@x8k(xW~}-BSC-iF4&r`>WMQ`hn)! z{c@^!rtQ;<9`%WIIB~9|^}TQ{bxuwd&uBm7gLXU*yWp`i|NMV0JFQ=&pWC;!_4pmn zm9@@6K4`~U`%U%bRPiiTx3%k9zer#D95`{VEWNK*>XQ%J5#2whif3s3UUjHnIB~9& z^?T(!_uQN+p2@z_4>Xr`ddqR*T$zg#>XX#v>R-DKr;2B__bvTE^V<8C{c+-4DeJ5y zM}3mIQs?7T@vPK-TCFSnK=aS{hHzWp#JTdenttsc^OR4VDxSsa9`B9+chm!BfD`A+ z7~_~Psrg6YH1rspDxT$LmGUQc{UD70KXk=`6X(iS&K&b4H6O;`KC}>=DxS5l_c?n= z`4hW7r{~mKoH$ob)#_m`=1Xcm#)ebHvsgW!pMGD9`GXVZ%2+*`cORyGkX$%bJd<@~ zPCjTyS!e4xPMj-Canbrk`cnHTU6)hEv(o$0)+Zmdqqa|OJx-h}bMK>8J*{7)&$VAp z70+_@UuzxmK|8*Ca2Rd|PMj-m*6`0c)7yARzc^JqlXYVGv%4q3BhCOP&Xs=vp`gdt
literal 0 HcmV?d00001
diff --git a/dlls/mf/tests/transform.c b/dlls/mf/tests/transform.c index 468239b0fd4..3bd13384620 100644 --- a/dlls/mf/tests/transform.c +++ b/dlls/mf/tests/transform.c @@ -7368,9 +7368,9 @@ static void test_video_processor(void) }; const struct attribute_desc rgb32_no_aperture[] = { - ATTR_GUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), - ATTR_GUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32), - ATTR_RATIO(MF_MT_FRAME_SIZE, 82, 84), + ATTR_GUID(MF_MT_MAJOR_TYPE, MFMediaType_Video, .required = TRUE), + ATTR_GUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32, .required = TRUE), + ATTR_RATIO(MF_MT_FRAME_SIZE, 82, 84, .required = TRUE), {0}, }; const MFT_OUTPUT_STREAM_INFO initial_output_info = {0}; @@ -7395,6 +7395,18 @@ static void test_video_processor(void) .buffer_count = 1, .buffers = &rgb32_buffer_desc, };
+ const struct buffer_desc rgb32_cropped_buffer_desc = + { + .length = 82 * 84 * 4, + .compare = compare_rgb32, .dump = dump_rgb32, + }; + const struct sample_desc rgb32_cropped_sample_desc = + { + .attributes = output_sample_attributes, + .sample_time = 0, .sample_duration = 10000000, + .buffer_count = 1, .buffers = &rgb32_cropped_buffer_desc, + }; + const struct buffer_desc rgb555_buffer_desc = { .length = actual_width * actual_height * 2, @@ -7501,6 +7513,14 @@ static void test_video_processor(void) .output_sample_desc = &rgb555_sample_desc, .result_bitmap = L"rgb555frame-flip.bmp", .delta = 4, /* Windows returns 0, Wine needs 4 */ }, + { + .input_type_desc = rgb32_no_aperture, .output_type_desc = rgb32_with_aperture, + .output_sample_desc = &rgb32_sample_desc, .result_bitmap = L"rgb32frame-bogus.bmp", + }, + { + .input_type_desc = rgb32_with_aperture, .output_type_desc = rgb32_no_aperture, + .output_sample_desc = &rgb32_cropped_sample_desc, .result_bitmap = L"rgb32frame.bmp", + }, };
MFT_REGISTER_TYPE_INFO output_type = {MFMediaType_Video, MFVideoFormat_NV12}; @@ -7823,6 +7843,23 @@ static void test_video_processor(void) check_mft_set_input_type(transform, test->input_type_desc); check_mft_get_input_current_type(transform, test->input_type_desc);
+ if (i >= 15) + { + IMFMediaType *media_type; + HRESULT hr; + + hr = MFCreateMediaType(&media_type); + ok(hr == S_OK, "MFCreateMediaType returned hr %#lx.\n", hr); + init_media_type(media_type, test->output_type_desc, -1); + hr = IMFTransform_SetOutputType(transform, 0, media_type, 0); + todo_wine + ok(hr == S_OK, "SetOutputType returned %#lx.\n", hr); + IMFMediaType_Release(media_type); + + winetest_pop_context(); + continue; + } + check_mft_set_output_type_required(transform, test->output_type_desc); check_mft_set_output_type(transform, test->output_type_desc, S_OK); check_mft_get_output_current_type(transform, test->output_type_desc); @@ -7837,6 +7874,11 @@ static void test_video_processor(void) output_info.cbSize = actual_width * actual_height * 2; check_mft_get_output_stream_info(transform, S_OK, &output_info); } + else if (test->output_type_desc == rgb32_no_aperture) + { + output_info.cbSize = 82 * 84 * 4; + check_mft_get_output_stream_info(transform, S_OK, &output_info); + } else { output_info.cbSize = actual_width * actual_height * 4; @@ -7867,6 +7909,18 @@ static void test_video_processor(void) ok(input_data_len == 18432, "got length %lu\n", input_data_len); input_data += length; } + else if (test->input_type_desc == rgb32_no_aperture) + { + input_info.cbSize = 82 * 84 * 4; + check_mft_get_input_stream_info(transform, S_OK, &input_info); + + load_resource(L"rgb32frame.bmp", &input_data, &input_data_len); + /* skip BMP header and RGB data from the dump */ + length = *(DWORD *)(input_data + 2 + 2 * sizeof(DWORD)); + input_data_len -= length; + ok(input_data_len == 36864, "got length %lu\n", input_data_len); + input_data += length; + } else { input_info.cbSize = actual_width * actual_height * 4; @@ -7929,6 +7983,12 @@ static void test_video_processor(void) ret = IMFSample_Release(output_sample); ok(ret == 0, "Release returned %lu\n", ret); winetest_pop_context(); + + ret = IMFTransform_Release(transform); + ok(ret == 0, "Release returned %ld\n", ret); + hr = CoCreateInstance(class_id, NULL, CLSCTX_INPROC_SERVER, + &IID_IMFTransform, (void **)&transform); + ok(hr == S_OK, "got hr %#lx\n", hr); }
ret = IMFTransform_Release(transform);
From: Rémi Bernon <rbernon@codeweavers.com>
--- dlls/winegstreamer/wg_transform.c | 47 +++++++++++++++++-------------- 1 file changed, 26 insertions(+), 21 deletions(-)
diff --git a/dlls/winegstreamer/wg_transform.c b/dlls/winegstreamer/wg_transform.c index 516b28e82e2..51706a6e6a6 100644 --- a/dlls/winegstreamer/wg_transform.c +++ b/dlls/winegstreamer/wg_transform.c @@ -730,6 +730,31 @@ static NTSTATUS copy_buffer(GstBuffer *buffer, GstCaps *caps, struct wg_sample * return STATUS_SUCCESS; }
+static void sample_set_flags_from_buffer(struct wg_sample *sample, GstBuffer *buffer, gsize total_size) +{ + if (GST_BUFFER_PTS_IS_VALID(buffer)) + { + sample->flags |= WG_SAMPLE_FLAG_HAS_PTS; + sample->pts = GST_BUFFER_PTS(buffer) / 100; + } + if (GST_BUFFER_DURATION_IS_VALID(buffer)) + { + GstClockTime duration = GST_BUFFER_DURATION(buffer) / 100; + + duration = (duration * sample->size) / total_size; + GST_BUFFER_DURATION(buffer) -= duration * 100; + if (GST_BUFFER_PTS_IS_VALID(buffer)) + GST_BUFFER_PTS(buffer) += duration * 100; + + sample->flags |= WG_SAMPLE_FLAG_HAS_DURATION; + sample->duration = duration; + } + if (!GST_BUFFER_FLAG_IS_SET(buffer, GST_BUFFER_FLAG_DELTA_UNIT)) + sample->flags |= WG_SAMPLE_FLAG_SYNC_POINT; + if (GST_BUFFER_FLAG_IS_SET(buffer, GST_BUFFER_FLAG_DISCONT)) + sample->flags |= WG_SAMPLE_FLAG_DISCONTINUITY; +} + static NTSTATUS read_transform_output_data(GstBuffer *buffer, GstCaps *caps, gsize plane_align, struct wg_sample *sample) { @@ -762,27 +787,7 @@ static NTSTATUS read_transform_output_data(GstBuffer *buffer, GstCaps *caps, gsi return status; }
- if (GST_BUFFER_PTS_IS_VALID(buffer)) - { - sample->flags |= WG_SAMPLE_FLAG_HAS_PTS; - sample->pts = GST_BUFFER_PTS(buffer) / 100; - } - if (GST_BUFFER_DURATION_IS_VALID(buffer)) - { - GstClockTime duration = GST_BUFFER_DURATION(buffer) / 100; - - duration = (duration * sample->size) / total_size; - GST_BUFFER_DURATION(buffer) -= duration * 100; - if (GST_BUFFER_PTS_IS_VALID(buffer)) - GST_BUFFER_PTS(buffer) += duration * 100; - - sample->flags |= WG_SAMPLE_FLAG_HAS_DURATION; - sample->duration = duration; - } - if (!GST_BUFFER_FLAG_IS_SET(buffer, GST_BUFFER_FLAG_DELTA_UNIT)) - sample->flags |= WG_SAMPLE_FLAG_SYNC_POINT; - if (GST_BUFFER_FLAG_IS_SET(buffer, GST_BUFFER_FLAG_DISCONT)) - sample->flags |= WG_SAMPLE_FLAG_DISCONTINUITY; + sample_set_flags_from_buffer(sample, buffer, total_size);
if (needs_copy) {
From: Rémi Bernon <rbernon@codeweavers.com>
--- dlls/winegstreamer/wg_transform.c | 21 ++++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/dlls/winegstreamer/wg_transform.c b/dlls/winegstreamer/wg_transform.c index 51706a6e6a6..4152856ec5a 100644 --- a/dlls/winegstreamer/wg_transform.c +++ b/dlls/winegstreamer/wg_transform.c @@ -755,13 +755,10 @@ static void sample_set_flags_from_buffer(struct wg_sample *sample, GstBuffer *bu sample->flags |= WG_SAMPLE_FLAG_DISCONTINUITY; }
-static NTSTATUS read_transform_output_data(GstBuffer *buffer, GstCaps *caps, gsize plane_align, - struct wg_sample *sample) +static bool sample_needs_copy_buffer(struct wg_sample *sample, GstBuffer *buffer, gsize *total_size) { - gsize total_size; - bool needs_copy; - NTSTATUS status; GstMapInfo info; + bool needs_copy;
if (!gst_buffer_map(buffer, &info, GST_MAP_READ)) { @@ -770,10 +767,20 @@ static NTSTATUS read_transform_output_data(GstBuffer *buffer, GstCaps *caps, gsi return STATUS_UNSUCCESSFUL; } needs_copy = info.data != wg_sample_data(sample); - total_size = sample->size = info.size; + *total_size = sample->size = info.size; gst_buffer_unmap(buffer, &info);
- if (!needs_copy) + return needs_copy; +} + +static NTSTATUS read_transform_output_data(GstBuffer *buffer, GstCaps *caps, gsize plane_align, + struct wg_sample *sample) +{ + gsize total_size; + NTSTATUS status; + bool needs_copy; + + if (!(needs_copy = sample_needs_copy_buffer(sample, buffer, &total_size))) status = STATUS_SUCCESS; else if (stream_type_from_caps(caps) == GST_STREAM_TYPE_VIDEO) status = copy_video_buffer(buffer, caps, plane_align, sample, &total_size);
From: Rémi Bernon <rbernon@codeweavers.com>
--- dlls/winegstreamer/wg_transform.c | 52 ++++++++++++++++++++++++------- 1 file changed, 40 insertions(+), 12 deletions(-)
diff --git a/dlls/winegstreamer/wg_transform.c b/dlls/winegstreamer/wg_transform.c index 4152856ec5a..fc7dd3363ca 100644 --- a/dlls/winegstreamer/wg_transform.c +++ b/dlls/winegstreamer/wg_transform.c @@ -704,7 +704,7 @@ static NTSTATUS copy_video_buffer(GstBuffer *buffer, GstCaps *caps, gsize plane_ return status; }
-static NTSTATUS copy_buffer(GstBuffer *buffer, GstCaps *caps, struct wg_sample *sample, +static NTSTATUS copy_buffer(GstBuffer *buffer, struct wg_sample *sample, gsize *total_size) { GstMapInfo info; @@ -773,8 +773,8 @@ static bool sample_needs_copy_buffer(struct wg_sample *sample, GstBuffer *buffer return needs_copy; }
-static NTSTATUS read_transform_output_data(GstBuffer *buffer, GstCaps *caps, gsize plane_align, - struct wg_sample *sample) +static NTSTATUS sample_read_video_buffer(struct wg_sample *sample, GstBuffer *buffer, + GstCaps *caps, gsize plane_align) { gsize total_size; NTSTATUS status; @@ -782,10 +782,8 @@ static NTSTATUS read_transform_output_data(GstBuffer *buffer, GstCaps *caps, gsi
if (!(needs_copy = sample_needs_copy_buffer(sample, buffer, &total_size))) status = STATUS_SUCCESS; - else if (stream_type_from_caps(caps) == GST_STREAM_TYPE_VIDEO) - status = copy_video_buffer(buffer, caps, plane_align, sample, &total_size); else - status = copy_buffer(buffer, caps, sample, &total_size); + status = copy_video_buffer(buffer, caps, plane_align, sample, &total_size);
if (status) { @@ -797,12 +795,37 @@ static NTSTATUS read_transform_output_data(GstBuffer *buffer, GstCaps *caps, gsi sample_set_flags_from_buffer(sample, buffer, total_size);
if (needs_copy) + GST_WARNING("Copied %u bytes, sample %p, flags %#x", sample->size, sample, sample->flags); + else if (sample->flags & WG_SAMPLE_FLAG_INCOMPLETE) + GST_ERROR("Partial read %u bytes, sample %p, flags %#x", sample->size, sample, sample->flags); + else + GST_INFO("Read %u bytes, sample %p, flags %#x", sample->size, sample, sample->flags); + + return STATUS_SUCCESS; +} + +static NTSTATUS sample_read_other_buffer(struct wg_sample *sample, GstBuffer *buffer) +{ + gsize total_size; + NTSTATUS status; + bool needs_copy; + + if (!(needs_copy = sample_needs_copy_buffer(sample, buffer, &total_size))) + status = STATUS_SUCCESS; + else + status = copy_buffer(buffer, sample, &total_size); + + if (status) { - if (stream_type_from_caps(caps) == GST_STREAM_TYPE_VIDEO) - GST_WARNING("Copied %u bytes, sample %p, flags %#x", sample->size, sample, sample->flags); - else - GST_INFO("Copied %u bytes, sample %p, flags %#x", sample->size, sample, sample->flags); + GST_ERROR("Failed to copy buffer %"GST_PTR_FORMAT, buffer); + sample->size = 0; + return status; } + + sample_set_flags_from_buffer(sample, buffer, total_size); + + if (needs_copy) + GST_INFO("Copied %u bytes, sample %p, flags %#x", sample->size, sample, sample->flags); else if (sample->flags & WG_SAMPLE_FLAG_INCOMPLETE) GST_ERROR("Partial read %u bytes, sample %p, flags %#x", sample->size, sample, sample->flags); else @@ -893,8 +916,13 @@ NTSTATUS wg_transform_read_data(void *args) return STATUS_SUCCESS; }
- if ((status = read_transform_output_data(output_buffer, output_caps, - transform->attrs.output_plane_align, sample))) + if (stream_type_from_caps(output_caps) == GST_STREAM_TYPE_VIDEO) + status = sample_read_video_buffer(sample, output_buffer, output_caps, + transform->attrs.output_plane_align); + else + status = sample_read_other_buffer(sample, output_buffer); + + if (status) { wg_allocator_release_sample(transform->allocator, sample, false); return status;
From: Rémi Bernon <rbernon@codeweavers.com>
Instead of computing them in copy_video_buffer. --- dlls/winegstreamer/wg_transform.c | 59 ++++++++++++++----------------- 1 file changed, 27 insertions(+), 32 deletions(-)
diff --git a/dlls/winegstreamer/wg_transform.c b/dlls/winegstreamer/wg_transform.c index fc7dd3363ca..08ba442366a 100644 --- a/dlls/winegstreamer/wg_transform.c +++ b/dlls/winegstreamer/wg_transform.c @@ -650,25 +650,14 @@ NTSTATUS wg_transform_push_data(void *args) return STATUS_SUCCESS; }
-static NTSTATUS copy_video_buffer(GstBuffer *buffer, GstCaps *caps, gsize plane_align, - struct wg_sample *sample, gsize *total_size) +static NTSTATUS copy_video_buffer(GstBuffer *buffer, const GstVideoInfo *src_video_info, + const GstVideoInfo *dst_video_info, struct wg_sample *sample, gsize *total_size) { NTSTATUS status = STATUS_UNSUCCESSFUL; GstVideoFrame src_frame, dst_frame; - GstVideoInfo src_info, dst_info; - GstVideoAlignment align; GstBuffer *dst_buffer;
- if (!gst_video_info_from_caps(&src_info, caps)) - { - GST_ERROR("Failed to get video info from caps."); - return STATUS_UNSUCCESSFUL; - } - - dst_info = src_info; - align_video_info_planes(plane_align, &dst_info, &align); - - if (sample->max_size < dst_info.size) + if (sample->max_size < dst_video_info->size) { GST_ERROR("Output buffer is too small."); return STATUS_BUFFER_TOO_SMALL; @@ -680,14 +669,14 @@ static NTSTATUS copy_video_buffer(GstBuffer *buffer, GstCaps *caps, gsize plane_ GST_ERROR("Failed to wrap wg_sample into GstBuffer"); return STATUS_UNSUCCESSFUL; } - gst_buffer_set_size(dst_buffer, dst_info.size); - *total_size = sample->size = dst_info.size; + gst_buffer_set_size(dst_buffer, dst_video_info->size); + *total_size = sample->size = dst_video_info->size;
- if (!gst_video_frame_map(&src_frame, &src_info, buffer, GST_MAP_READ)) + if (!gst_video_frame_map(&src_frame, src_video_info, buffer, GST_MAP_READ)) GST_ERROR("Failed to map source frame."); else { - if (!gst_video_frame_map(&dst_frame, &dst_info, dst_buffer, GST_MAP_WRITE)) + if (!gst_video_frame_map(&dst_frame, dst_video_info, dst_buffer, GST_MAP_WRITE)) GST_ERROR("Failed to map destination frame."); else { @@ -704,8 +693,7 @@ static NTSTATUS copy_video_buffer(GstBuffer *buffer, GstCaps *caps, gsize plane_ return status; }
-static NTSTATUS copy_buffer(GstBuffer *buffer, struct wg_sample *sample, - gsize *total_size) +static NTSTATUS copy_buffer(GstBuffer *buffer, struct wg_sample *sample, gsize *total_size) { GstMapInfo info;
@@ -774,7 +762,7 @@ static bool sample_needs_copy_buffer(struct wg_sample *sample, GstBuffer *buffer }
static NTSTATUS sample_read_video_buffer(struct wg_sample *sample, GstBuffer *buffer, - GstCaps *caps, gsize plane_align) + const GstVideoInfo *src_video_info, const GstVideoInfo *dst_video_info) { gsize total_size; NTSTATUS status; @@ -783,7 +771,7 @@ static NTSTATUS sample_read_video_buffer(struct wg_sample *sample, GstBuffer *bu if (!(needs_copy = sample_needs_copy_buffer(sample, buffer, &total_size))) status = STATUS_SUCCESS; else - status = copy_video_buffer(buffer, caps, plane_align, sample, &total_size); + status = copy_video_buffer(buffer, src_video_info, dst_video_info, sample, &total_size);
if (status) { @@ -858,8 +846,10 @@ NTSTATUS wg_transform_read_data(void *args) { struct wg_transform_read_data_params *params = args; struct wg_transform *transform = get_transform(params->transform); + GstVideoInfo src_video_info, dst_video_info; struct wg_sample *sample = params->sample; struct wg_format *format = params->format; + GstVideoAlignment align = {0}; GstBuffer *output_buffer; GstCaps *output_caps; bool discard_data; @@ -877,6 +867,18 @@ NTSTATUS wg_transform_read_data(void *args) output_buffer = gst_sample_get_buffer(transform->output_sample); output_caps = gst_sample_get_caps(transform->output_sample);
+ if (stream_type_from_caps(output_caps) == GST_STREAM_TYPE_VIDEO) + { + gsize plane_align = transform->attrs.output_plane_align; + + if (!gst_video_info_from_caps(&src_video_info, output_caps)) + GST_ERROR("Failed to get video info from %"GST_PTR_FORMAT, output_caps); + dst_video_info = src_video_info; + + /* set the desired output buffer alignment on the dest video info */ + align_video_info_planes(plane_align, &dst_video_info, &align); + } + if (GST_MINI_OBJECT_FLAG_IS_SET(transform->output_sample, GST_SAMPLE_FLAG_WG_CAPS_CHANGED)) { GST_MINI_OBJECT_FLAG_UNSET(transform->output_sample, GST_SAMPLE_FLAG_WG_CAPS_CHANGED); @@ -885,17 +887,10 @@ NTSTATUS wg_transform_read_data(void *args)
if (format) { - gsize plane_align = transform->attrs.output_plane_align; - GstVideoAlignment align; - GstVideoInfo info; - wg_format_from_caps(format, output_caps);
- if (format->major_type == WG_MAJOR_TYPE_VIDEO - && gst_video_info_from_caps(&info, output_caps)) + if (format->major_type == WG_MAJOR_TYPE_VIDEO) { - align_video_info_planes(plane_align, &info, &align); - GST_INFO("Returning video alignment left %u, top %u, right %u, bottom %u.", align.padding_left, align.padding_top, align.padding_right, align.padding_bottom);
@@ -917,8 +912,8 @@ NTSTATUS wg_transform_read_data(void *args) }
if (stream_type_from_caps(output_caps) == GST_STREAM_TYPE_VIDEO) - status = sample_read_video_buffer(sample, output_buffer, output_caps, - transform->attrs.output_plane_align); + status = sample_read_video_buffer(sample, output_buffer, + &src_video_info, &dst_video_info); else status = sample_read_other_buffer(sample, output_buffer);
From: Rémi Bernon <rbernon@codeweavers.com>
This makes sure transform->output_format always reflects the internal stream format, regardless of whether the client calls set_output_format, as it isn't actually required to. --- dlls/winegstreamer/wg_transform.c | 48 +++++++++++++++++-------------- 1 file changed, 26 insertions(+), 22 deletions(-)
diff --git a/dlls/winegstreamer/wg_transform.c b/dlls/winegstreamer/wg_transform.c index 08ba442366a..06be9d048d5 100644 --- a/dlls/winegstreamer/wg_transform.c +++ b/dlls/winegstreamer/wg_transform.c @@ -38,6 +38,7 @@ #include "mferror.h"
#include "unix_private.h" +#include "wine/debug.h"
#define GST_SAMPLE_FLAG_WG_CAPS_CHANGED (GST_MINI_OBJECT_FLAG_LAST << 0)
@@ -543,16 +544,16 @@ NTSTATUS wg_transform_set_output_format(void *args) { struct wg_transform_set_output_format_params *params = args; struct wg_transform *transform = get_transform(params->transform); - const struct wg_format *format = params->format; + struct wg_format output_format = *params->format; GstSample *sample; GstCaps *caps;
- if (!(caps = transform_format_to_caps(transform, format))) + if (!(caps = transform_format_to_caps(transform, &output_format))) { - GST_ERROR("Failed to convert format %p to caps.", format); + GST_ERROR("Failed to convert format to caps."); return STATUS_UNSUCCESSFUL; } - transform->output_format = *format; + transform->output_format = output_format;
GST_INFO("transform %p output caps %"GST_PTR_FORMAT, transform, caps);
@@ -574,7 +575,7 @@ NTSTATUS wg_transform_set_output_format(void *args) if (transform->video_flip) { const char *value; - if (transform->input_is_flipped != wg_format_video_is_flipped(format)) + if (transform->input_is_flipped != wg_format_video_is_flipped(&output_format)) value = "vertical-flip"; else value = "none"; @@ -881,30 +882,33 @@ NTSTATUS wg_transform_read_data(void *args)
if (GST_MINI_OBJECT_FLAG_IS_SET(transform->output_sample, GST_SAMPLE_FLAG_WG_CAPS_CHANGED)) { + struct wg_format output_format; + GST_MINI_OBJECT_FLAG_UNSET(transform->output_sample, GST_SAMPLE_FLAG_WG_CAPS_CHANGED);
GST_INFO("transform %p output caps %"GST_PTR_FORMAT, transform, output_caps);
- if (format) + wg_format_from_caps(&output_format, output_caps); + if (output_format.major_type == WG_MAJOR_TYPE_VIDEO) { - wg_format_from_caps(format, output_caps); - - if (format->major_type == WG_MAJOR_TYPE_VIDEO) - { - GST_INFO("Returning video alignment left %u, top %u, right %u, bottom %u.", align.padding_left, - align.padding_top, align.padding_right, align.padding_bottom); - - format->u.video.padding.left = align.padding_left; - format->u.video.width += format->u.video.padding.left; - format->u.video.padding.right = align.padding_right; - format->u.video.width += format->u.video.padding.right; - format->u.video.padding.top = align.padding_top; - format->u.video.height += format->u.video.padding.top; - format->u.video.padding.bottom = align.padding_bottom; - format->u.video.height += format->u.video.padding.bottom; - } + output_format.u.video.padding.left = align.padding_left; + output_format.u.video.width += output_format.u.video.padding.left; + output_format.u.video.padding.right = align.padding_right; + output_format.u.video.width += output_format.u.video.padding.right; + output_format.u.video.padding.top = align.padding_top; + output_format.u.video.height += output_format.u.video.padding.top; + output_format.u.video.padding.bottom = align.padding_bottom; + output_format.u.video.height += output_format.u.video.padding.bottom; + GST_INFO("new video padding rect %s", wine_dbgstr_rect(&output_format.u.video.padding)); + + if (transform->output_format.u.video.height < 0) + output_format.u.video.height *= -1; }
+ if (format) + *format = output_format; + transform->output_format = output_format; + params->result = MF_E_TRANSFORM_STREAM_CHANGE; GST_INFO("Format changed detected, returning no output"); wg_allocator_release_sample(transform->allocator, sample, false);
From: Rémi Bernon <rbernon@codeweavers.com>
--- dlls/winegstreamer/wg_transform.c | 75 +++++++++++++++++++++---------- 1 file changed, 51 insertions(+), 24 deletions(-)
diff --git a/dlls/winegstreamer/wg_transform.c b/dlls/winegstreamer/wg_transform.c index 06be9d048d5..10d4b205ef0 100644 --- a/dlls/winegstreamer/wg_transform.c +++ b/dlls/winegstreamer/wg_transform.c @@ -83,6 +83,51 @@ static void align_video_info_planes(gsize plane_align, GstVideoInfo *info, GstVi gst_video_info_align(info, align); }
+typedef struct +{ + GstVideoBufferPool parent; +} WgVideoBufferPool; + +typedef struct +{ + GstVideoBufferPoolClass parent_class; +} WgVideoBufferPoolClass; + +G_DEFINE_TYPE(WgVideoBufferPool, wg_video_buffer_pool, GST_TYPE_VIDEO_BUFFER_POOL); + +static void wg_video_buffer_pool_init(WgVideoBufferPool * pool) {} +static void wg_video_buffer_pool_class_init(WgVideoBufferPoolClass *klass) {} + +static WgVideoBufferPool *wg_video_buffer_pool_create(GstCaps *caps, gsize plane_align, + GstAllocator *allocator, GstVideoInfo *info, GstVideoAlignment *align) +{ + WgVideoBufferPool *pool; + GstStructure *config; + + if (!(pool = g_object_new(wg_video_buffer_pool_get_type(), NULL))) + return NULL; + + gst_video_info_from_caps(info, caps); + align_video_info_planes(plane_align, info, align); + + if (!(config = gst_buffer_pool_get_config(GST_BUFFER_POOL(pool)))) + GST_ERROR("Failed to get %"GST_PTR_FORMAT" config.", pool); + else + { + gst_buffer_pool_config_add_option(config, GST_BUFFER_POOL_OPTION_VIDEO_META); + gst_buffer_pool_config_add_option(config, GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT); + gst_buffer_pool_config_set_video_alignment(config, align); + + gst_buffer_pool_config_set_params(config, caps, info->size, 0, 0); + gst_buffer_pool_config_set_allocator(config, allocator, NULL); + if (!gst_buffer_pool_set_config(GST_BUFFER_POOL(pool), config)) + GST_ERROR("Failed to set %"GST_PTR_FORMAT" config.", pool); + } + + GST_INFO("Created %"GST_PTR_FORMAT, pool); + return pool; +} + static GstFlowReturn transform_sink_chain_cb(GstPad *pad, GstObject *parent, GstBuffer *buffer) { struct wg_transform *transform = gst_pad_get_element_private(pad); @@ -129,11 +174,10 @@ static gboolean transform_src_query_cb(GstPad *pad, GstObject *parent, GstQuery
static gboolean transform_sink_query_allocation(struct wg_transform *transform, GstQuery *query) { - gsize plane_align = transform->attrs.output_plane_align; - GstStructure *config, *params; + WgVideoBufferPool *pool; GstVideoAlignment align; + GstStructure *params; gboolean needs_pool; - GstBufferPool *pool; GstVideoInfo info; GstCaps *caps;
@@ -143,12 +187,10 @@ static gboolean transform_sink_query_allocation(struct wg_transform *transform, if (stream_type_from_caps(caps) != GST_STREAM_TYPE_VIDEO || !needs_pool) return false;
- if (!gst_video_info_from_caps(&info, caps) - || !(pool = gst_video_buffer_pool_new())) + if (!(pool = wg_video_buffer_pool_create(caps, transform->attrs.output_plane_align, + transform->allocator, &info, &align))) return false;
- align_video_info_planes(plane_align, &info, &align); - if ((params = gst_structure_new("video-meta", "padding-top", G_TYPE_UINT, align.padding_top, "padding-bottom", G_TYPE_UINT, align.padding_bottom, @@ -160,26 +202,11 @@ static gboolean transform_sink_query_allocation(struct wg_transform *transform, gst_structure_free(params); }
- if (!(config = gst_buffer_pool_get_config(pool))) - GST_ERROR("Failed to get %"GST_PTR_FORMAT" config.", pool); - else - { - gst_buffer_pool_config_add_option(config, GST_BUFFER_POOL_OPTION_VIDEO_META); - gst_buffer_pool_config_add_option(config, GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT); - gst_buffer_pool_config_set_video_alignment(config, &align); - - gst_buffer_pool_config_set_params(config, caps, - info.size, 0, 0); - gst_buffer_pool_config_set_allocator(config, transform->allocator, NULL); - if (!gst_buffer_pool_set_config(pool, config)) - GST_ERROR("Failed to set %"GST_PTR_FORMAT" config.", pool); - } - /* Prevent pool reconfiguration, we don't want another alignment. */ - if (!gst_buffer_pool_set_active(pool, true)) + if (!gst_buffer_pool_set_active(GST_BUFFER_POOL(pool), true)) GST_ERROR("%"GST_PTR_FORMAT" failed to activate.", pool);
- gst_query_add_allocation_pool(query, pool, info.size, 0, 0); + gst_query_add_allocation_pool(query, GST_BUFFER_POOL(pool), info.size, 0, 0); gst_query_add_allocation_param(query, transform->allocator, NULL);
GST_INFO("Proposing %"GST_PTR_FORMAT", buffer size %#zx, %"GST_PTR_FORMAT", for %"GST_PTR_FORMAT,
From: Rémi Bernon <rbernon@codeweavers.com>
This removes the need for an extra videoconvert, and any ambiguity that comes with it. It also reduces our dependency on GStreamer elements behaving as we assume they do, and reduces the chance of unnecessary copies, which can be costly with large buffers. Some GStreamer versions, for instance, ship a bogus videoflip that copies buffers when it is supposed to be pass-through. --- dlls/winegstreamer/wg_transform.c | 142 ++++++++++++++++++++---------- 1 file changed, 94 insertions(+), 48 deletions(-)
diff --git a/dlls/winegstreamer/wg_transform.c b/dlls/winegstreamer/wg_transform.c index 10d4b205ef0..6882e16bf1c 100644 --- a/dlls/winegstreamer/wg_transform.c +++ b/dlls/winegstreamer/wg_transform.c @@ -54,14 +54,13 @@ struct wg_transform
GstAtomicQueue *input_queue;
- bool input_is_flipped; - GstElement *video_flip; - + struct wg_format input_format; struct wg_format output_format; GstAtomicQueue *output_queue; GstSample *output_sample; bool output_caps_changed; GstCaps *output_caps; + GstCaps *input_caps; };
static struct wg_transform *get_transform(wg_transform_t trans) @@ -69,7 +68,8 @@ static struct wg_transform *get_transform(wg_transform_t trans) return (struct wg_transform *)(ULONG_PTR)trans; }
-static void align_video_info_planes(gsize plane_align, GstVideoInfo *info, GstVideoAlignment *align) +static void align_video_info_planes(struct wg_format *format, gsize plane_align, + GstVideoInfo *info, GstVideoAlignment *align) { gst_video_alignment_reset(align);
@@ -81,11 +81,21 @@ static void align_video_info_planes(gsize plane_align, GstVideoInfo *info, GstVi align->stride_align[3] = plane_align;
gst_video_info_align(info, align); + + if (format->u.video.height < 0) + { + for (guint i = 0; i < ARRAY_SIZE(info->offset); ++i) + { + info->offset[i] += (info->height - 1) * info->stride[i]; + info->stride[i] = -info->stride[i]; + } + } }
typedef struct { GstVideoBufferPool parent; + GstVideoInfo info; } WgVideoBufferPool;
typedef struct @@ -95,11 +105,51 @@ typedef struct
G_DEFINE_TYPE(WgVideoBufferPool, wg_video_buffer_pool, GST_TYPE_VIDEO_BUFFER_POOL);
+static void buffer_add_video_meta(GstBuffer *buffer, GstVideoInfo *info) +{ + GstVideoMeta *meta; + + if (!(meta = gst_buffer_get_video_meta(buffer))) + meta = gst_buffer_add_video_meta(buffer, GST_VIDEO_FRAME_FLAG_NONE, + info->finfo->format, info->width, info->height); + + if (!meta) + GST_ERROR("Failed to add video meta to buffer %"GST_PTR_FORMAT, buffer); + else + { + memcpy(meta->offset, info->offset, sizeof(info->offset)); + memcpy(meta->stride, info->stride, sizeof(info->stride)); + } +} + +static GstFlowReturn wg_video_buffer_pool_alloc_buffer(GstBufferPool *gst_pool, GstBuffer **buffer, + GstBufferPoolAcquireParams *params) +{ + GstBufferPoolClass *parent_class = GST_BUFFER_POOL_CLASS(wg_video_buffer_pool_parent_class); + WgVideoBufferPool *pool = (WgVideoBufferPool *)gst_pool; + GstFlowReturn ret; + + GST_LOG("%"GST_PTR_FORMAT", buffer %p, params %p", pool, buffer, params); + + if (!(ret = parent_class->alloc_buffer(gst_pool, buffer, params))) + { + buffer_add_video_meta(*buffer, &pool->info); + GST_INFO("%"GST_PTR_FORMAT" allocated buffer %"GST_PTR_FORMAT, pool, *buffer); + } + + return ret; +} + static void wg_video_buffer_pool_init(WgVideoBufferPool * pool) {} -static void wg_video_buffer_pool_class_init(WgVideoBufferPoolClass *klass) {} + +static void wg_video_buffer_pool_class_init(WgVideoBufferPoolClass *klass) +{ + GstBufferPoolClass *pool_class = GST_BUFFER_POOL_CLASS(klass); + pool_class->alloc_buffer = wg_video_buffer_pool_alloc_buffer; +}
static WgVideoBufferPool *wg_video_buffer_pool_create(GstCaps *caps, gsize plane_align, - GstAllocator *allocator, GstVideoInfo *info, GstVideoAlignment *align) + GstAllocator *allocator, struct wg_format *format, GstVideoAlignment *align) { WgVideoBufferPool *pool; GstStructure *config; @@ -107,8 +157,10 @@ static WgVideoBufferPool *wg_video_buffer_pool_create(GstCaps *caps, gsize plane if (!(pool = g_object_new(wg_video_buffer_pool_get_type(), NULL))) return NULL;
- gst_video_info_from_caps(info, caps); - align_video_info_planes(plane_align, info, align); + if (!gst_video_info_from_caps(&pool->info, caps)) + GST_ERROR("Failed to create video info from caps %" GST_PTR_FORMAT, caps); + else + align_video_info_planes(format, plane_align, &pool->info, align);
if (!(config = gst_buffer_pool_get_config(GST_BUFFER_POOL(pool)))) GST_ERROR("Failed to get %"GST_PTR_FORMAT" config.", pool); @@ -118,7 +170,7 @@ static WgVideoBufferPool *wg_video_buffer_pool_create(GstCaps *caps, gsize plane gst_buffer_pool_config_add_option(config, GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT); gst_buffer_pool_config_set_video_alignment(config, align);
- gst_buffer_pool_config_set_params(config, caps, info->size, 0, 0); + gst_buffer_pool_config_set_params(config, caps, pool->info.size, 0, 0); gst_buffer_pool_config_set_allocator(config, allocator, NULL); if (!gst_buffer_pool_set_config(GST_BUFFER_POOL(pool), config)) GST_ERROR("Failed to set %"GST_PTR_FORMAT" config.", pool); @@ -178,7 +230,6 @@ static gboolean transform_sink_query_allocation(struct wg_transform *transform, GstVideoAlignment align; GstStructure *params; gboolean needs_pool; - GstVideoInfo info; GstCaps *caps;
GST_LOG("transform %p, %"GST_PTR_FORMAT, transform, query); @@ -188,7 +239,7 @@ static gboolean transform_sink_query_allocation(struct wg_transform *transform, return false;
if (!(pool = wg_video_buffer_pool_create(caps, transform->attrs.output_plane_align, - transform->allocator, &info, &align))) + transform->allocator, &transform->output_format, &align))) return false;
if ((params = gst_structure_new("video-meta", @@ -206,11 +257,11 @@ static gboolean transform_sink_query_allocation(struct wg_transform *transform, if (!gst_buffer_pool_set_active(GST_BUFFER_POOL(pool), true)) GST_ERROR("%"GST_PTR_FORMAT" failed to activate.", pool);
- gst_query_add_allocation_pool(query, GST_BUFFER_POOL(pool), info.size, 0, 0); + gst_query_add_allocation_pool(query, GST_BUFFER_POOL(pool), pool->info.size, 0, 0); gst_query_add_allocation_param(query, transform->allocator, NULL);
GST_INFO("Proposing %"GST_PTR_FORMAT", buffer size %#zx, %"GST_PTR_FORMAT", for %"GST_PTR_FORMAT, - pool, info.size, transform->allocator, query); + pool, pool->info.size, transform->allocator, query);
g_object_unref(pool); return true; @@ -349,27 +400,23 @@ NTSTATUS wg_transform_destroy(void *args) g_object_unref(transform->my_src); gst_query_unref(transform->drain_query); gst_caps_unref(transform->output_caps); + gst_caps_unref(transform->input_caps); gst_atomic_queue_unref(transform->output_queue); free(transform);
return STATUS_SUCCESS; }
-static bool wg_format_video_is_flipped(const struct wg_format *format) -{ - return format->major_type == WG_MAJOR_TYPE_VIDEO && (format->u.video.height < 0); -} - NTSTATUS wg_transform_create(void *args) { struct wg_transform_create_params *params = args; struct wg_format output_format = *params->output_format; struct wg_format input_format = *params->input_format; GstElement *first = NULL, *last = NULL, *element; - GstCaps *raw_caps = NULL, *src_caps = NULL; NTSTATUS status = STATUS_UNSUCCESSFUL; GstPadTemplate *template = NULL; struct wg_transform *transform; + GstCaps *raw_caps = NULL; const gchar *media_type; GstEvent *event;
@@ -386,18 +433,19 @@ NTSTATUS wg_transform_create(void *args) if (!(transform->allocator = wg_allocator_create())) goto out; transform->attrs = *params->attrs; + transform->input_format = input_format; transform->output_format = output_format;
- if (!(src_caps = transform_format_to_caps(transform, &input_format))) + if (!(transform->input_caps = transform_format_to_caps(transform, &input_format))) goto out; - if (!(template = gst_pad_template_new("src", GST_PAD_SRC, GST_PAD_ALWAYS, src_caps))) + if (!(template = gst_pad_template_new("src", GST_PAD_SRC, GST_PAD_ALWAYS, transform->input_caps))) goto out; transform->my_src = gst_pad_new_from_template(template, "src"); g_object_unref(template); if (!transform->my_src) goto out;
- GST_INFO("transform %p input caps %"GST_PTR_FORMAT, transform, src_caps); + GST_INFO("transform %p input caps %"GST_PTR_FORMAT, transform, transform->input_caps);
gst_pad_set_element_private(transform->my_src, transform); gst_pad_set_query_function(transform->my_src, transform_src_query_cb); @@ -436,7 +484,7 @@ NTSTATUS wg_transform_create(void *args) case WG_MAJOR_TYPE_VIDEO_INDEO: case WG_MAJOR_TYPE_VIDEO_WMV: case WG_MAJOR_TYPE_VIDEO_MPEG1: - if (!(element = find_element(GST_ELEMENT_FACTORY_TYPE_DECODER, src_caps, raw_caps)) + if (!(element = find_element(GST_ELEMENT_FACTORY_TYPE_DECODER, transform->input_caps, raw_caps)) || !append_element(transform->container, element, &first, &last)) { gst_caps_unref(raw_caps); @@ -476,15 +524,6 @@ NTSTATUS wg_transform_create(void *args) break;
case WG_MAJOR_TYPE_VIDEO: - if (!(element = create_element("videoconvert", "base")) - || !append_element(transform->container, element, &first, &last)) - goto out; - if (!(transform->video_flip = create_element("videoflip", "base")) - || !append_element(transform->container, transform->video_flip, &first, &last)) - goto out; - transform->input_is_flipped = wg_format_video_is_flipped(&input_format); - if (transform->input_is_flipped != wg_format_video_is_flipped(&output_format)) - gst_util_set_object_arg(G_OBJECT(transform->video_flip), "method", "vertical-flip"); if (!(element = create_element("videoconvert", "base")) || !append_element(transform->container, element, &first, &last)) goto out; @@ -521,7 +560,7 @@ NTSTATUS wg_transform_create(void *args) if (!(event = gst_event_new_stream_start("stream")) || !push_event(transform->my_src, event)) goto out; - if (!(event = gst_event_new_caps(src_caps)) + if (!(event = gst_event_new_caps(transform->input_caps)) || !push_event(transform->my_src, event)) goto out;
@@ -534,8 +573,6 @@ NTSTATUS wg_transform_create(void *args) || !push_event(transform->my_src, event)) goto out;
- gst_caps_unref(src_caps); - GST_INFO("Created winegstreamer transform %p.", transform); params->transform = (wg_transform_t)(ULONG_PTR)transform; return STATUS_SUCCESS; @@ -547,8 +584,8 @@ out: gst_caps_unref(transform->output_caps); if (transform->my_src) gst_object_unref(transform->my_src); - if (src_caps) - gst_caps_unref(src_caps); + if (transform->input_caps) + gst_caps_unref(transform->input_caps); if (transform->allocator) wg_allocator_destroy(transform->allocator); if (transform->drain_query) @@ -599,15 +636,6 @@ NTSTATUS wg_transform_set_output_format(void *args) gst_caps_unref(transform->output_caps); transform->output_caps = caps;
- if (transform->video_flip) - { - const char *value; - if (transform->input_is_flipped != wg_format_video_is_flipped(&output_format)) - value = "vertical-flip"; - else - value = "none"; - gst_util_set_object_arg(G_OBJECT(transform->video_flip), "method", value); - } if (!push_event(transform->my_sink, gst_event_new_reconfigure())) { GST_ERROR("Failed to reconfigure transform %p.", transform); @@ -641,6 +669,7 @@ NTSTATUS wg_transform_push_data(void *args) struct wg_transform_push_data_params *params = args; struct wg_transform *transform = get_transform(params->transform); struct wg_sample *sample = params->sample; + GstVideoInfo video_info; GstBuffer *buffer; guint length;
@@ -664,6 +693,14 @@ NTSTATUS wg_transform_push_data(void *args) GST_INFO("Wrapped %u/%u bytes from sample %p to %"GST_PTR_FORMAT, sample->size, sample->max_size, sample, buffer); }
+ if (stream_type_from_caps(transform->input_caps) == GST_STREAM_TYPE_VIDEO + && gst_video_info_from_caps(&video_info, transform->input_caps)) + { + GstVideoAlignment align; + align_video_info_planes(&transform->input_format, 0, &video_info, &align); + buffer_add_video_meta(buffer, &video_info); + } + if (sample->flags & WG_SAMPLE_FLAG_HAS_PTS) GST_BUFFER_PTS(buffer) = sample->pts * 100; if (sample->flags & WG_SAMPLE_FLAG_HAS_DURATION) @@ -898,13 +935,22 @@ NTSTATUS wg_transform_read_data(void *args) if (stream_type_from_caps(output_caps) == GST_STREAM_TYPE_VIDEO) { gsize plane_align = transform->attrs.output_plane_align; + GstVideoMeta *meta;
if (!gst_video_info_from_caps(&src_video_info, output_caps)) GST_ERROR("Failed to get video info from %"GST_PTR_FORMAT, output_caps); dst_video_info = src_video_info;
- /* set the desired output buffer alignment on the dest video info */ - align_video_info_planes(plane_align, &dst_video_info, &align); + /* set the desired output buffer alignment and stride on the dest video info */ + align_video_info_planes(&transform->output_format, plane_align, + &dst_video_info, &align); + + /* copy the actual output buffer alignment and stride to the src video info */ + if ((meta = gst_buffer_get_video_meta(output_buffer))) + { + memcpy(src_video_info.offset, meta->offset, sizeof(meta->offset)); + memcpy(src_video_info.stride, meta->stride, sizeof(meta->stride)); + } }
if (GST_MINI_OBJECT_FLAG_IS_SET(transform->output_sample, GST_SAMPLE_FLAG_WG_CAPS_CHANGED))
From: Rémi Bernon rbernon@codeweavers.com
Because this was previously done inconsistently, we need to adjust the frame size temporarily on the wg_transform side. This fixes one test but breaks a different one; the next change will fix it properly. --- dlls/mf/tests/transform.c | 41 ++++++++++++------------ dlls/winegstreamer/mfplat.c | 33 ++++++++++--------- dlls/winegstreamer/wg_transform.c | 53 +++++++++++++++++++++++++++---- 3 files changed, 84 insertions(+), 43 deletions(-)
diff --git a/dlls/mf/tests/transform.c b/dlls/mf/tests/transform.c index 3bd13384620..a7c659c7cbf 100644 --- a/dlls/mf/tests/transform.c +++ b/dlls/mf/tests/transform.c @@ -7394,17 +7394,32 @@ static void test_video_processor(void) .sample_time = 0, .sample_duration = 10000000, .buffer_count = 1, .buffers = &rgb32_buffer_desc, }; + const struct buffer_desc rgb32_buffer_desc_todo = + { + .length = actual_width * actual_height * 4, + .compare = compare_rgb32, .dump = dump_rgb32, .rect = {.top = 12, .right = 82, .bottom = 96}, + .todo_length = TRUE, + }; + const struct sample_desc rgb32_sample_desc_todo = + { + .attributes = output_sample_attributes, + .sample_time = 0, .sample_duration = 10000000, + .buffer_count = 1, .buffers = &rgb32_buffer_desc_todo, + .todo_length = TRUE, + };
const struct buffer_desc rgb32_cropped_buffer_desc = { .length = 82 * 84 * 4, .compare = compare_rgb32, .dump = dump_rgb32, + .todo_length = TRUE }; const struct sample_desc rgb32_cropped_sample_desc = { .attributes = output_sample_attributes, .sample_time = 0, .sample_duration = 10000000, .buffer_count = 1, .buffers = &rgb32_cropped_buffer_desc, + .todo_length = TRUE };
const struct buffer_desc rgb555_buffer_desc = @@ -7492,7 +7507,7 @@ static void test_video_processor(void) }, { .input_type_desc = rgb32_with_aperture, .output_type_desc = rgb32_with_aperture, - .output_sample_desc = &rgb32_sample_desc, .result_bitmap = L"rgb32frame.bmp", + .output_sample_desc = &rgb32_sample_desc_todo, .result_bitmap = L"rgb32frame.bmp", .broken = TRUE /* old Windows version incorrectly rescale */ }, { @@ -7515,7 +7530,7 @@ static void test_video_processor(void) }, { .input_type_desc = rgb32_no_aperture, .output_type_desc = rgb32_with_aperture, - .output_sample_desc = &rgb32_sample_desc, .result_bitmap = L"rgb32frame-bogus.bmp", + .output_sample_desc = &rgb32_sample_desc_todo, .result_bitmap = L"rgb32frame-bogus.bmp", }, { .input_type_desc = rgb32_with_aperture, .output_type_desc = rgb32_no_aperture, @@ -7843,23 +7858,6 @@ static void test_video_processor(void) check_mft_set_input_type(transform, test->input_type_desc); check_mft_get_input_current_type(transform, test->input_type_desc);
- if (i >= 15) - { - IMFMediaType *media_type; - HRESULT hr; - - hr = MFCreateMediaType(&media_type); - ok(hr == S_OK, "MFCreateMediaType returned hr %#lx.\n", hr); - init_media_type(media_type, test->output_type_desc, -1); - hr = IMFTransform_SetOutputType(transform, 0, media_type, 0); - todo_wine - ok(hr == S_OK, "SetOutputType returned %#lx.\n", hr); - IMFMediaType_Release(media_type); - - winetest_pop_context(); - continue; - } - check_mft_set_output_type_required(transform, test->output_type_desc); check_mft_set_output_type(transform, test->output_type_desc, S_OK); check_mft_get_output_current_type(transform, test->output_type_desc); @@ -7969,6 +7967,7 @@ static void test_video_processor(void) ok(ref == 1, "Release returned %ld\n", ref);
ret = check_mf_sample_collection(output_samples, test->output_sample_desc, test->result_bitmap); + todo_wine_if(i == 10 || i == 15) ok(ret <= test->delta || broken(test->broken), "got %lu%% diff\n", ret); IMFCollection_Release(output_samples);
@@ -8007,8 +8006,8 @@ static void test_video_processor(void) check_mft_set_output_type(transform, rgb32_no_aperture, S_OK); check_mft_get_output_current_type(transform, rgb32_no_aperture);
- check_mft_set_input_type_(__LINE__, transform, nv12_with_aperture, TRUE); - check_mft_get_input_current_type_(__LINE__, transform, nv12_with_aperture, TRUE, FALSE); + check_mft_set_input_type(transform, nv12_with_aperture); + check_mft_get_input_current_type(transform, nv12_with_aperture);
/* output type is the same as before */ check_mft_get_output_current_type(transform, rgb32_no_aperture); diff --git a/dlls/winegstreamer/mfplat.c b/dlls/winegstreamer/mfplat.c index e6d9fb9fd2c..c27a2646937 100644 --- a/dlls/winegstreamer/mfplat.c +++ b/dlls/winegstreamer/mfplat.c @@ -519,6 +519,22 @@ static IMFMediaType *mf_media_type_from_wg_format_video(const struct wg_format * if (FAILED(MFCreateMediaType(&type))) return NULL;
+ if (format->u.video.padding.left || format->u.video.padding.right + || format->u.video.padding.top || format->u.video.padding.bottom) + { + MFVideoArea aperture = + { + .OffsetX = {.value = format->u.video.padding.left}, + .OffsetY = {.value = format->u.video.padding.top}, + .Area.cx = width, .Area.cy = height, + }; + width += format->u.video.padding.left + format->u.video.padding.right; + height += format->u.video.padding.top + format->u.video.padding.bottom; + + IMFMediaType_SetBlob(type, &MF_MT_MINIMUM_DISPLAY_APERTURE, + (BYTE *)&aperture, sizeof(aperture)); + } + IMFMediaType_SetGUID(type, &MF_MT_MAJOR_TYPE, &MFMediaType_Video); IMFMediaType_SetGUID(type, &MF_MT_SUBTYPE, video_formats[i].subtype); IMFMediaType_SetUINT64(type, &MF_MT_FRAME_SIZE, make_uint64(width, height)); @@ -532,21 +548,6 @@ static IMFMediaType *mf_media_type_from_wg_format_video(const struct wg_format * stride = -stride; IMFMediaType_SetUINT32(type, &MF_MT_DEFAULT_STRIDE, stride);
- if (format->u.video.padding.left || format->u.video.padding.right - || format->u.video.padding.top || format->u.video.padding.bottom) - { - MFVideoArea aperture = - { - .OffsetX = {.value = format->u.video.padding.left}, - .OffsetY = {.value = format->u.video.padding.top}, - .Area.cx = width - format->u.video.padding.right - format->u.video.padding.left, - .Area.cy = height - format->u.video.padding.bottom - format->u.video.padding.top, - }; - - IMFMediaType_SetBlob(type, &MF_MT_MINIMUM_DISPLAY_APERTURE, - (BYTE *)&aperture, sizeof(aperture)); - } - return type; } } @@ -706,6 +707,8 @@ static void mf_media_type_to_wg_format_video(IMFMediaType *type, const GUID *sub format->u.video.padding.top = aperture.OffsetY.value; format->u.video.padding.right = format->u.video.width - aperture.Area.cx - aperture.OffsetX.value; format->u.video.padding.bottom = format->u.video.height - aperture.Area.cy - aperture.OffsetY.value; + format->u.video.width -= format->u.video.padding.left + format->u.video.padding.right; + format->u.video.height -= format->u.video.padding.top + format->u.video.padding.bottom; }
if (SUCCEEDED(IMFMediaType_GetUINT64(type, &MF_MT_FRAME_RATE, &frame_rate)) && (UINT32)frame_rate) diff --git a/dlls/winegstreamer/wg_transform.c b/dlls/winegstreamer/wg_transform.c index 6882e16bf1c..65b95229096 100644 --- a/dlls/winegstreamer/wg_transform.c +++ b/dlls/winegstreamer/wg_transform.c @@ -92,6 +92,48 @@ static void align_video_info_planes(struct wg_format *format, gsize plane_align, } }
+static void update_format_width_padding(struct wg_format *max, struct wg_format *min) +{ + max->u.video.padding.right += max->u.video.width - min->u.video.width; + max->u.video.width = min->u.video.width; +} + +static void update_format_height_padding(struct wg_format *max, struct wg_format *min) +{ + max->u.video.padding.bottom += abs(max->u.video.height) - abs(min->u.video.height); + max->u.video.height = (max->u.video.height < 0 ? -1 : 1) * abs(min->u.video.height); +} + +static void update_format_padding(struct wg_format *input_format, struct wg_format *output_format) +{ + if (input_format->major_type == WG_MAJOR_TYPE_VIDEO) + { + input_format->u.video.width += input_format->u.video.padding.left + input_format->u.video.padding.right; + input_format->u.video.height += input_format->u.video.padding.top + input_format->u.video.padding.bottom; + } + + if (output_format->major_type == WG_MAJOR_TYPE_VIDEO) + { + output_format->u.video.width += output_format->u.video.padding.left + output_format->u.video.padding.right; + output_format->u.video.height += output_format->u.video.padding.top + output_format->u.video.padding.bottom; + } + + if (input_format->major_type != output_format->major_type) + return; + if (input_format->major_type != WG_MAJOR_TYPE_VIDEO) + return; + + if (input_format->u.video.width > output_format->u.video.width) + update_format_width_padding(input_format, output_format); + else + update_format_width_padding(output_format, input_format); + + if (abs(input_format->u.video.height) > abs(output_format->u.video.height)) + update_format_height_padding(input_format, output_format); + else + update_format_height_padding(output_format, input_format); +} + typedef struct { GstVideoBufferPool parent; @@ -433,6 +475,8 @@ NTSTATUS wg_transform_create(void *args) if (!(transform->allocator = wg_allocator_create())) goto out; transform->attrs = *params->attrs; + + update_format_padding(&input_format, &output_format); transform->input_format = input_format; 
transform->output_format = output_format;
@@ -964,18 +1008,13 @@ NTSTATUS wg_transform_read_data(void *args) wg_format_from_caps(&output_format, output_caps); if (output_format.major_type == WG_MAJOR_TYPE_VIDEO) { + if (transform->output_format.u.video.height < 0) + output_format.u.video.height *= -1; output_format.u.video.padding.left = align.padding_left; - output_format.u.video.width += output_format.u.video.padding.left; output_format.u.video.padding.right = align.padding_right; - output_format.u.video.width += output_format.u.video.padding.right; output_format.u.video.padding.top = align.padding_top; - output_format.u.video.height += output_format.u.video.padding.top; output_format.u.video.padding.bottom = align.padding_bottom; - output_format.u.video.height += output_format.u.video.padding.bottom; GST_INFO("new video padding rect %s", wine_dbgstr_rect(&output_format.u.video.padding)); - - if (transform->output_format.u.video.height < 0) - output_format.u.video.height *= -1; }
if (format)
From: Rémi Bernon rbernon@codeweavers.com
--- dlls/mf/tests/transform.c | 20 ++----------------- dlls/winegstreamer/wg_transform.c | 32 +++++++++++++++++++------------ 2 files changed, 22 insertions(+), 30 deletions(-)
diff --git a/dlls/mf/tests/transform.c b/dlls/mf/tests/transform.c index a7c659c7cbf..db47710bc61 100644 --- a/dlls/mf/tests/transform.c +++ b/dlls/mf/tests/transform.c @@ -7394,32 +7394,17 @@ static void test_video_processor(void) .sample_time = 0, .sample_duration = 10000000, .buffer_count = 1, .buffers = &rgb32_buffer_desc, }; - const struct buffer_desc rgb32_buffer_desc_todo = - { - .length = actual_width * actual_height * 4, - .compare = compare_rgb32, .dump = dump_rgb32, .rect = {.top = 12, .right = 82, .bottom = 96}, - .todo_length = TRUE, - }; - const struct sample_desc rgb32_sample_desc_todo = - { - .attributes = output_sample_attributes, - .sample_time = 0, .sample_duration = 10000000, - .buffer_count = 1, .buffers = &rgb32_buffer_desc_todo, - .todo_length = TRUE, - };
const struct buffer_desc rgb32_cropped_buffer_desc = { .length = 82 * 84 * 4, .compare = compare_rgb32, .dump = dump_rgb32, - .todo_length = TRUE }; const struct sample_desc rgb32_cropped_sample_desc = { .attributes = output_sample_attributes, .sample_time = 0, .sample_duration = 10000000, .buffer_count = 1, .buffers = &rgb32_cropped_buffer_desc, - .todo_length = TRUE };
const struct buffer_desc rgb555_buffer_desc = @@ -7507,7 +7492,7 @@ static void test_video_processor(void) }, { .input_type_desc = rgb32_with_aperture, .output_type_desc = rgb32_with_aperture, - .output_sample_desc = &rgb32_sample_desc_todo, .result_bitmap = L"rgb32frame.bmp", + .output_sample_desc = &rgb32_sample_desc, .result_bitmap = L"rgb32frame.bmp", .broken = TRUE /* old Windows version incorrectly rescale */ }, { @@ -7530,7 +7515,7 @@ static void test_video_processor(void) }, { .input_type_desc = rgb32_no_aperture, .output_type_desc = rgb32_with_aperture, - .output_sample_desc = &rgb32_sample_desc_todo, .result_bitmap = L"rgb32frame-bogus.bmp", + .output_sample_desc = &rgb32_sample_desc, .result_bitmap = L"rgb32frame-bogus.bmp", }, { .input_type_desc = rgb32_with_aperture, .output_type_desc = rgb32_no_aperture, @@ -7967,7 +7952,6 @@ static void test_video_processor(void) ok(ref == 1, "Release returned %ld\n", ref);
ret = check_mf_sample_collection(output_samples, test->output_sample_desc, test->result_bitmap); - todo_wine_if(i == 10 || i == 15) ok(ret <= test->delta || broken(test->broken), "got %lu%% diff\n", ret); IMFCollection_Release(output_samples);
diff --git a/dlls/winegstreamer/wg_transform.c b/dlls/winegstreamer/wg_transform.c index 65b95229096..47ddd0ff81f 100644 --- a/dlls/winegstreamer/wg_transform.c +++ b/dlls/winegstreamer/wg_transform.c @@ -75,6 +75,18 @@ static void align_video_info_planes(struct wg_format *format, gsize plane_align,
align->padding_right = ((plane_align + 1) - (info->width & plane_align)) & plane_align; align->padding_bottom = ((plane_align + 1) - (info->height & plane_align)) & plane_align; + align->padding_right = max(align->padding_right, format->u.video.padding.right); + align->padding_bottom = max(align->padding_bottom, format->u.video.padding.bottom); + align->padding_top = format->u.video.padding.top; + align->padding_left = format->u.video.padding.left; + + if (format->u.video.height < 0) + { + gsize top = align->padding_top; + align->padding_top = align->padding_bottom; + align->padding_bottom = top; + } + align->stride_align[0] = plane_align; align->stride_align[1] = plane_align; align->stride_align[2] = plane_align; @@ -106,18 +118,6 @@ static void update_format_height_padding(struct wg_format *max, struct wg_format
static void update_format_padding(struct wg_format *input_format, struct wg_format *output_format) { - if (input_format->major_type == WG_MAJOR_TYPE_VIDEO) - { - input_format->u.video.width += input_format->u.video.padding.left + input_format->u.video.padding.right; - input_format->u.video.height += input_format->u.video.padding.top + input_format->u.video.padding.bottom; - } - - if (output_format->major_type == WG_MAJOR_TYPE_VIDEO) - { - output_format->u.video.width += output_format->u.video.padding.left + output_format->u.video.padding.right; - output_format->u.video.height += output_format->u.video.padding.top + output_format->u.video.padding.bottom; - } - if (input_format->major_type != output_format->major_type) return; if (input_format->major_type != WG_MAJOR_TYPE_VIDEO) @@ -476,6 +476,7 @@ NTSTATUS wg_transform_create(void *args) goto out; transform->attrs = *params->attrs;
+ /* GStreamer cannot include the buffer padding in the frame sizes but MF does, make sure the formats have the same */ update_format_padding(&input_format, &output_format); transform->input_format = input_format; transform->output_format = output_format; @@ -656,6 +657,13 @@ NTSTATUS wg_transform_set_output_format(void *args) GstSample *sample; GstCaps *caps;
+ if (output_format.major_type == WG_MAJOR_TYPE_VIDEO) + { + /* GStreamer cannot include the buffer padding in the frame sizes but MF does, make sure the formats have the same */ + update_format_width_padding(&output_format, &transform->output_format); + update_format_height_padding(&output_format, &transform->output_format); + } + if (!(caps = transform_format_to_caps(transform, &output_format))) { GST_ERROR("Failed to convert format to caps.");
Hi,
It looks like your patch introduced the new failures shown below. Please investigate and fix them before resubmitting your patch. If they are not new, fixing them anyway would help a lot. Otherwise please ask for the known failures list to be updated.
The tests also ran into some preexisting test failures. If you know how to fix them that would be helpful. See the TestBot job for the details:
The full results can be found at: https://testbot.winehq.org/JobDetails.pl?Key=143502
Your paranoid android.
=== w8 (32 bit report) ===
mf: transform.c:7955: Test failed: videoproc: transform #15: got 38% diff
=== w8adm (32 bit report) ===
mf: transform.c:7955: Test failed: videoproc: transform #15: got 38% diff
=== w864 (32 bit report) ===
mf: transform.c:7955: Test failed: videoproc: transform #15: got 38% diff
=== w1064v1507 (32 bit report) ===
mf: transform.c:7955: Test failed: videoproc: transform #15: got 25% diff
=== w1064v1809 (32 bit report) ===
mf: transform.c:7955: Test failed: videoproc: transform #15: got 25% diff
=== w864 (64 bit report) ===
mf: transform.c:7955: Test failed: videoproc: transform #15: got 38% diff
=== w1064v1507 (64 bit report) ===
mf: transform.c:7955: Test failed: videoproc: transform #15: got 25% diff
=== w1064v1809 (64 bit report) ===
mf: transform.c:7955: Test failed: videoproc: transform #15: got 25% diff
v3: Rebased and added more tests to confirm the behavior and validate the processed output image.
That seems unnecessarily defeatist...? The behaviour of those elements is documented. If they're failing to respect parts of the GStreamer API, that's a bug in the element and it should be fixed there. If they're doing something less performantly than they should, that's also something that should be fixed in the element.
This has nothing to do with documentation, and the behavior of components when combined together isn't documented anyway. They can behave as they will; everything depends on the caps negotiation that happens between them, which in turn depends on their capabilities. The videoflip element has specific video format requirements, and can very well end up causing suboptimal negotiation in the pipeline.
"Library code will run performantly" isn't the kind of thing that's really documented, no, but it tends to be a general truth regardless. If we find that a bit of library code isn't matching our needs, our first reaction should not be to assume we can't do anything about the library and abandon it. This is especially true in the case of GStreamer, whose developers, when I've worked with them, have been very open to making changes to help consumers.
More concretely, if videoflip has poor performance because it doesn't support flipping e.g. RGB16, then we can add support for that to videoflip; it wouldn't even be hard. I don't know what the specific issue is here either, but I'd be very surprised if it's unsolvable.
Why is this any more the "right" way than using videoflip?
We are not doing any kind of frame flipping; instead we are implementing buffers with negative strides. Providing stride information is the right way to tell GStreamer about the buffer layout. Using a videoflip element is an equivalent but convoluted way to achieve the same thing.
Okay, I can see how DirectShow's negative stride convention maps conceptually to GStreamer's similar convention. At the same time, I have a hard time seeing a manual videoflip as wrong.
I also don't understand the bit about complexity. From a somewhat abstract level, code gets more complex and harder to work with when you add multiple _different_ interacting components. Having more of the _same_ component—in this case, adding more beads to a string of postprocessing elements—doesn't make anything harder to work with.
Of course it does: it increases the number of possible failures. It doesn't matter whether the components are the same; the more you add, the more complex it gets. And the worst part is that these are not components whose source we have directly at hand: they are GStreamer components, most often pre-built by the system distribution.
Take debugging of the current video processor pipeline, for instance: you have three elements where we could have only one. The two videoconvert elements and videoflip are talking to each other back and forth to negotiate their caps. Deciphering the GStreamer trace to understand what is actually going on, and figuring out in the end that somewhere in the middle of all these verbose traces videoflip has decided to drop the pool it was provided by the downstream element in favor of its own, _is_ way more complicated than it could be.
I'm sorry that you had some difficulty there, but I remain unconvinced that a string of three postprocessing elements is more complex than a single postprocessing element interacting with a custom buffer pool. Speaking from experience, I would find it more difficult to maintain and debug these two interacting components, than a string of postprocessing elements.
Even with this said, I'm not conceptually opposed to the idea of using stride, but the hoops we have to jump through to do it seem excessive. I'm not convinced it's better than videoflip.
And I must be missing something, because I don't see this situation tested in test_video_processor()? I only see tests where both the input and output have the same aperture.
There's one test which is fixed with this MR (well, now that I've split the last patch, there's another one which is broken then fixed), and it is about using an aperture on input and no aperture on output. It fails before this MR and passes after.
That test has the same actual aperture on input and output, though, i.e. the same content size, or am I misreading somehow?
And I still don't understand why the format that's passed to wg_transform_set_output_format isn't the format we store as transform->output_format.
This merge request was closed by Rémi Bernon.