[PATCH 0/3] MR9694: mmdevapi/tests: Add some extended tests.
Extended tests are only run when WINETEST_EXTENDED=1 is passed. This allows having some tests that take too long to be run in each MR pipeline, but are still valuable to run every now and then. In this MR the feature is introduced and it is used for some mmdevapi tests. -- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694
From: Giovanni Mascellani <gmascellani(a)codeweavers.com>

Extended tests are only run when WINETEST_EXTENDED=1 is passed. This
allows having some tests that take too long to be run in each MR
pipeline, but are still valuable to run every now and then.
---
 include/wine/test.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/wine/test.h b/include/wine/test.h
index 0bc361ebf0c..7f1098d3ef5 100644
--- a/include/wine/test.h
+++ b/include/wine/test.h
@@ -61,6 +61,9 @@ extern int winetest_platform_is_wine;
 /* use ANSI escape codes for output coloring */
 extern int winetest_color;
 
+/* run extended tests */
+extern int winetest_extended;
+
 extern LONG winetest_successes;      /* number of successful tests */
 extern LONG winetest_failures;       /* number of failures */
 extern LONG winetest_flaky_failures; /* number of failures inside flaky block */
@@ -616,6 +619,9 @@ int winetest_mute_threshold = 42;
 /* use ANSI escape codes for output coloring */
 int winetest_color = 0;
 
+/* run extended tests */
+int winetest_extended = 0;
+
 static HANDLE winetest_mutex;
 
 /* passing arguments around */
@@ -881,6 +887,7 @@ int main( int argc, char **argv )
     if (GetEnvironmentVariableA( "WINETEST_REPORT_FLAKY", p, sizeof(p) )) winetest_report_flaky = atoi(p);
     if (GetEnvironmentVariableA( "WINETEST_REPORT_SUCCESS", p, sizeof(p) )) winetest_report_success = atoi(p);
     if (GetEnvironmentVariableA( "WINETEST_TIME", p, sizeof(p) )) winetest_time = atoi(p);
+    if (GetEnvironmentVariableA( "WINETEST_EXTENDED", p, sizeof(p) )) winetest_extended = atoi(p);
     winetest_last_time = winetest_start_time = winetest_get_time();
 
     if (!strcmp( winetest_platform, "windows" )) SetUnhandledExceptionFilter( exc_filter );
-- 
GitLab
https://gitlab.winehq.org/wine/wine/-/merge_requests/9694
From: Giovanni Mascellani <gmascellani(a)codeweavers.com>

And trim down the non-extended tests.
---
 dlls/mmdevapi/tests/render.c | 44 ++++++++++++++++++++++++++++++------
 1 file changed, 37 insertions(+), 7 deletions(-)

diff --git a/dlls/mmdevapi/tests/render.c b/dlls/mmdevapi/tests/render.c
index 9ac82a65400..c7ecd34258c 100644
--- a/dlls/mmdevapi/tests/render.c
+++ b/dlls/mmdevapi/tests/render.c
@@ -41,10 +41,21 @@
 #include "audiopolicy.h"
 #include "endpointvolume.h"
 
-static const unsigned int sampling_rates[] = { 8000, 16000, 22050, 44100, 48000, 96000 };
-static const unsigned int channel_counts[] = { 1, 2, 8 };
-static const unsigned int sample_formats[][2] = { {WAVE_FORMAT_PCM, 8}, {WAVE_FORMAT_PCM, 16},
-                                                  {WAVE_FORMAT_PCM, 32}, {WAVE_FORMAT_IEEE_FLOAT, 32} };
+static const unsigned int *sampling_rates;
+static const unsigned int *channel_counts;
+static const unsigned int (*sample_formats)[2];
+static size_t sampling_rate_count;
+static size_t channel_count_count;
+static size_t sample_format_count;
+
+static const unsigned int sampling_rates_regular[] = { 8000, 44100, 96000 };
+static const unsigned int channel_counts_regular[] = { 1, 2, 8 };
+static const unsigned int sample_formats_regular[][2] = { {WAVE_FORMAT_PCM, 16}, {WAVE_FORMAT_IEEE_FLOAT, 32} };
+
+static const unsigned int sampling_rates_extended[] = { 8000, 11025, 16000, 22050, 44100, 48000, 96000 };
+static const unsigned int channel_counts_extended[] = { 1, 2, 4, 6, 8 };
+static const unsigned int sample_formats_extended[][2] = { {WAVE_FORMAT_PCM, 8}, {WAVE_FORMAT_PCM, 16},
+                                                           {WAVE_FORMAT_PCM, 32}, {WAVE_FORMAT_IEEE_FLOAT, 32} };
 
 #define NULL_PTR_ERR MAKE_HRESULT(SEVERITY_ERROR, FACILITY_WIN32, RPC_X_NULL_REF_POINTER)
 
@@ -518,9 +529,9 @@ static void test_formats(AUDCLNT_SHAREMODE mode, BOOL extensible)
     fmt.Format.cbSize = extensible ?
         sizeof(WAVEFORMATEXTENSIBLE) - sizeof(WAVEFORMATEX) : 0;
 
-    for (i = 0; i < ARRAY_SIZE(sampling_rates); i++) {
-        for (j = 0; j < ARRAY_SIZE(channel_counts); j++) {
-            for (k = 0; k < ARRAY_SIZE(sample_formats); k++) {
+    for (i = 0; i < sampling_rate_count; i++) {
+        for (j = 0; j < channel_count_count; j++) {
+            for (k = 0; k < sample_format_count; k++) {
                 char format_chr[3];
 
                 hr = IMMDevice_Activate(dev, &IID_IAudioClient, CLSCTX_INPROC_SERVER,
@@ -2832,6 +2843,25 @@ START_TEST(render)
     HRESULT hr;
     DWORD mode;
 
+    if (winetest_extended)
+    {
+        sampling_rates = sampling_rates_extended;
+        channel_counts = channel_counts_extended;
+        sample_formats = sample_formats_extended;
+        sampling_rate_count = ARRAY_SIZE(sampling_rates_extended);
+        channel_count_count = ARRAY_SIZE(channel_counts_extended);
+        sample_format_count = ARRAY_SIZE(sample_formats_extended);
+    }
+    else
+    {
+        sampling_rates = sampling_rates_regular;
+        channel_counts = channel_counts_regular;
+        sample_formats = sample_formats_regular;
+        sampling_rate_count = ARRAY_SIZE(sampling_rates_regular);
+        channel_count_count = ARRAY_SIZE(channel_counts_regular);
+        sample_format_count = ARRAY_SIZE(sample_formats_regular);
+    }
+
     CoInitializeEx(NULL, COINIT_MULTITHREADED);
     hr = CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_INPROC_SERVER, &IID_IMMDeviceEnumerator, (void**)&mme);
     if (FAILED(hr))
-- 
GitLab
https://gitlab.winehq.org/wine/wine/-/merge_requests/9694
From: Giovanni Mascellani <gmascellani(a)codeweavers.com>

And trim down the non-extended tests.
---
 dlls/mmdevapi/tests/capture.c | 44 +++++++++++++++++++++++++++++------
 1 file changed, 37 insertions(+), 7 deletions(-)

diff --git a/dlls/mmdevapi/tests/capture.c b/dlls/mmdevapi/tests/capture.c
index 918a6d0c2f6..357ebdbd7bc 100644
--- a/dlls/mmdevapi/tests/capture.c
+++ b/dlls/mmdevapi/tests/capture.c
@@ -37,10 +37,21 @@
 #include "mmdeviceapi.h"
 #include "audioclient.h"
 
-static const unsigned int sampling_rates[] = { 8000, 16000, 22050, 44100, 48000, 96000 };
-static const unsigned int channel_counts[] = { 1, 2, 8 };
-static const unsigned int sample_formats[][2] = { {WAVE_FORMAT_PCM, 8}, {WAVE_FORMAT_PCM, 16},
-                                                  {WAVE_FORMAT_PCM, 32}, {WAVE_FORMAT_IEEE_FLOAT, 32} };
+static const unsigned int *sampling_rates;
+static const unsigned int *channel_counts;
+static const unsigned int (*sample_formats)[2];
+static size_t sampling_rate_count;
+static size_t channel_count_count;
+static size_t sample_format_count;
+
+static const unsigned int sampling_rates_regular[] = { 8000, 44100, 96000 };
+static const unsigned int channel_counts_regular[] = { 1, 2, 8 };
+static const unsigned int sample_formats_regular[][2] = { {WAVE_FORMAT_PCM, 16}, {WAVE_FORMAT_IEEE_FLOAT, 32} };
+
+static const unsigned int sampling_rates_extended[] = { 8000, 11025, 16000, 22050, 44100, 48000, 96000 };
+static const unsigned int channel_counts_extended[] = { 1, 2, 4, 6, 8 };
+static const unsigned int sample_formats_extended[][2] = { {WAVE_FORMAT_PCM, 8}, {WAVE_FORMAT_PCM, 16},
+                                                           {WAVE_FORMAT_PCM, 32}, {WAVE_FORMAT_IEEE_FLOAT, 32} };
 
 #define NULL_PTR_ERR MAKE_HRESULT(SEVERITY_ERROR, FACILITY_WIN32, RPC_X_NULL_REF_POINTER)
 
@@ -570,9 +581,9 @@ static void test_formats(AUDCLNT_SHAREMODE mode, BOOL extensible)
     fmt.Format.cbSize = extensible ?
         sizeof(WAVEFORMATEXTENSIBLE) - sizeof(WAVEFORMATEX) : 0;
 
-    for (i = 0; i < ARRAY_SIZE(sampling_rates); i++) {
-        for (j = 0; j < ARRAY_SIZE(channel_counts); j++) {
-            for (k = 0; k < ARRAY_SIZE(sample_formats); k++) {
+    for (i = 0; i < sampling_rate_count; i++) {
+        for (j = 0; j < channel_count_count; j++) {
+            for (k = 0; k < sample_format_count; k++) {
                 char format_chr[3];
 
                 hr = IMMDevice_Activate(dev, &IID_IAudioClient, CLSCTX_INPROC_SERVER,
@@ -1245,6 +1256,25 @@ START_TEST(capture)
 {
     HRESULT hr;
 
+    if (winetest_extended)
+    {
+        sampling_rates = sampling_rates_extended;
+        channel_counts = channel_counts_extended;
+        sample_formats = sample_formats_extended;
+        sampling_rate_count = ARRAY_SIZE(sampling_rates_extended);
+        channel_count_count = ARRAY_SIZE(channel_counts_extended);
+        sample_format_count = ARRAY_SIZE(sample_formats_extended);
+    }
+    else
+    {
+        sampling_rates = sampling_rates_regular;
+        channel_counts = channel_counts_regular;
+        sample_formats = sample_formats_regular;
+        sampling_rate_count = ARRAY_SIZE(sampling_rates_regular);
+        channel_count_count = ARRAY_SIZE(channel_counts_regular);
+        sample_format_count = ARRAY_SIZE(sample_formats_regular);
+    }
+
     CoInitializeEx(NULL, COINIT_MULTITHREADED);
     hr = CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_INPROC_SERVER, &IID_IMMDeviceEnumerator, (void**)&mme);
    if (FAILED(hr))
-- 
GitLab
https://gitlab.winehq.org/wine/wine/-/merge_requests/9694
The `build-mac` failure seems to be a temporary hiccup, and the `test-linux-32` failures do not seem related to my changes. -- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694#note_125142
Some audio tests already use `WINETEST_INTERACTIVE`. That option is supposed to mean something different, of course, but it might be worth considering if (ab)using the existing option could be preferable. Also, kinda related, do the existing mmdevapi `WINETEST_INTERACTIVE == 1` tests work properly? Are they still useful? -- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694#note_128265
On Tue Feb 3 11:47:04 2026 +0000, Matteo Bruni wrote:
> Some audio tests already use `WINETEST_INTERACTIVE`. That option is supposed to mean something different, of course, but it might be worth considering if (ab)using the existing option could be preferable. Also, kinda related, do the existing mmdevapi `WINETEST_INTERACTIVE == 1` tests work properly? Are they still useful?

I suppose that "interactive" tests are supposed to require user interaction, so they are not suited for an automated run, and I would like my extended tests to still be run in an automated setting, even if not for each single MR. So I would consider it inconvenient to reuse `WINETEST_INTERACTIVE` for that. OTOH what benefit would we have from conflating the two concepts? It doesn't look like checking another environment variable is particularly expensive.
I don't know much about the interactive tests, and as I argued, that seems something independent from this MR, so I don't have much to comment about that. -- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694#note_128622
On Tue Feb 3 11:50:02 2026 +0000, Giovanni Mascellani wrote:
> I suppose that "interactive" tests are supposed to require user interaction, so are not suited for an automated run, and I would like my extended tests to still be run in an automated setting, even if not for each single MR. So I would consider it inconvenient to reuse `WINETEST_INTERACTIVE` for that. OTOH what benefit would we have from conflating the two concepts? It doesn't look like checking another environment variable is particularly expensive. I don't know much about the interactive tests, and as I argued, that seems something independent from this MR, so I don't have much to comment about that.

Fair enough. I think I was more interested in knowing if anyone ran the interactive tests somewhat recently :sweat_smile:
-- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694#note_128623
On Tue Feb 3 19:13:43 2026 +0000, Matteo Bruni wrote:
> Fair enough. I think I was more interested in knowing if anyone ran the interactive tests somewhat recently :sweat_smile:

I do always run interactive tests for modules which I'm working on or reviewing, although I haven't run a full test run. I think I've also hidden a couple of tests behind INTERACTIVE which are valuable but impossible to write without being a little flaky (the blocking send tests in ws2_32 in particular come to mind).
-- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694#note_128692
On Tue Feb 3 19:13:43 2026 +0000, Elizabeth Figura wrote:
> I do always run interactive tests for modules which I'm working on or reviewing, although I haven't run a full test run. I think I've also hidden a couple of tests behind INTERACTIVE which are valuable but impossible to write without being a little flaky (the blocking send tests in ws2_32 in particular come to mind).

IMO tests are meant to be run regularly or are otherwise meaningless and doomed to bitrot and fail without anybody noticing. As our testing policy is generally designed around testing MRs and nightly runs of the test suite in non-interactive mode, I don't see much value in interactive tests. People may run a couple of tests in interactive mode but nobody will run the entire test suite when reviewing.
I could find an "extended" test suite useful, but only if it's run regularly. If it is so much more expensive that we can't afford running it in MRs and nightly runs, I kind of doubt we can do that? If necessary I think we could perhaps consider increasing the test timeouts on a case-by-case basis, but it's usually better to try to find some interesting test subset rather than being exhaustive. Looking at the change here I would say that testing the entire parameter matrix seems a bit overkill, and only varying over one dimension at a time would be enough? -- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694#note_128697
> IMO tests are meant to be run regularly or are otherwise meaningless and doomed to bitrot and fail without anybody noticing. As our testing policy is generally designed around testing MRs and nightly runs of the test suite in non-interactive mode, I don't see much value in interactive tests. People may run a couple of tests in interactive mode but nobody will run the entire test suite when reviewing.
I always run interactive tests for modules which I maintain. I also think that tests don't have to be run regularly to have meaning. They can also act as documentation. Our tests have always served the purpose of being both conformance tests and regression tests. Only the latter demands regular running, and frankly, there is a lot of wiggle room on what constitutes "regular". I think it would be better if someone™ was regularly running the whole test suite with interactive tests included. But even in the absence of that I would rather have our interactive tests than not.
> If necessary I think we could perhaps consider increasing the test timeouts on a case-by-case basis, but it's usually better to try to find some interesting test subset rather than being exhaustive. Looking at the change here I would say that testing the entire parameter matrix seems a bit overkill, and only varying over one dimension at a time would be enough?
Yeah, this seems overkill. Even if I was running interactive tests it's too much. -- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694#note_128705
> IMO tests are meant to be run regularly or are otherwise meaningless and doomed to bitrot and fail without anybody noticing. As our testing policy is generally designed around testing MRs and nightly runs of the test suite in non-interactive mode, I don't see much value in interactive tests. People may run a couple of tests in interactive mode but nobody will run the entire test suite when reviewing.
I am not particularly interested in arguing for interactive tests, since what I'm pushing for here is something else; and I agree that, in general, tests that are run regularly and frequently are better than tests that are run irregularly and rarely; but tests that are run irregularly and rarely are better than tests that are never run because they do not exist. Even if you discover a problem after a year, that's better than never realizing it, or having to debug it from scratch because an application fails. If nothing else because you don't have to write the test again.
> I could find an "extended" test suite useful, but only if it's run regularly. If it is so much more expensive that we can't afford running it in MRs and nightly runs, I kind of doubt we can do that?
What would you think about running it daily? Not on every MR, so that MR pipelines keep giving quick feedback; an extended run each day still gives relatively good feedback without starving the MR pipeline queue.
> If necessary I think we could perhaps consider increasing the test timeouts on a case-by-case basis, but it's usually better to try to find some interesting test subset rather than being exhaustive. Looking at the change here I would say that testing the entire parameter matrix seems a bit overkill, and only varying over one dimension at a time would be enough?
I'm not sure. I have tried different approaches, and in many cases I left some test cases out and later discovered that they were meaningful and that I was making incorrect assumptions because of that. Running the whole matrix isn't terribly slow after all: it still takes around a minute (give or take, depending on the hardware and OS). So I see a good reason for not doing that on each MR, but still doing it on an extended run. Getting smart about what to include or not is likely going to result in not being smart enough and not catching a regression when there is one. -- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694#note_128757
Well, it seems I have to give up. -- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694#note_129528
This merge request was closed by Giovanni Mascellani. -- https://gitlab.winehq.org/wine/wine/-/merge_requests/9694
participants (6)
- Elizabeth Figura (@zfigura)
- Giovanni Mascellani
- Giovanni Mascellani (@giomasce)
- Giovanni Mascellani (@giomasce)
- Matteo Bruni (@Mystral)
- Rémi Bernon (@rbernon)