From: Michael Stefaniuc <mstefani@winehq.org>
Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=55637
---
 dlls/dmime/tests/dmime.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/dlls/dmime/tests/dmime.c b/dlls/dmime/tests/dmime.c
index e390f86f1bf..6c5dec6a0e6 100644
--- a/dlls/dmime/tests/dmime.c
+++ b/dlls/dmime/tests/dmime.c
@@ -2853,9 +2853,9 @@ static void test_performance_pmsg(void)
     hr = IDirectMusicPerformance_SendPMsg(performance, msg);
     ok(hr == S_OK, "got %#lx\n", hr);
     hr = IDirectMusicPerformance_SendPMsg(performance, msg);
-    ok(hr == DMUS_E_ALREADY_SENT, "got %#lx\n", hr);
+    flaky_wine ok(hr == DMUS_E_ALREADY_SENT, "got %#lx\n", hr);
     hr = IDirectMusicPerformance_FreePMsg(performance, msg);
-    ok(hr == DMUS_E_CANNOT_FREE, "got %#lx\n", hr);
+    flaky_wine ok(hr == DMUS_E_CANNOT_FREE, "got %#lx\n", hr);
 
     hr = IDirectMusicPerformance_FreePMsg(performance, clone);
     ok(hr == S_OK, "got %#lx\n", hr);
@@ -2991,7 +2991,7 @@ static void test_performance_pmsg(void)
         switch (delivery_flags[i])
         {
         case DMUS_PMSGF_TOOL_IMMEDIATE: ok(duration <= 50, "got %lu\n", duration); break;
-        case DMUS_PMSGF_TOOL_QUEUE: ok(duration >= 50 && duration <= 125, "got %lu\n", duration); break;
+        case DMUS_PMSGF_TOOL_QUEUE: flaky ok(duration >= 50 && duration <= 125, "got %lu\n", duration); break;
         case DMUS_PMSGF_TOOL_ATTIME: ok(duration >= 125 && duration <= 500, "got %lu\n", duration); break;
         }
     }
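For reference, a minimal sketch of how the flaky and flaky_wine prefixes from Wine's test framework are typically used; the test body below is made up for illustration and is not taken from dmime.c. As I understand it, a failure inside a flaky-marked check is reported as flaky instead of counting as a hard failure, while flaky_wine only applies when the tests run on Wine, so the same failure on native Windows still fails the test.

    #include <windows.h>
    #include "wine/test.h"

    static void test_flaky_example(void)
    {
        DWORD start = GetTickCount(), duration;

        Sleep(100);                  /* stand-in for waiting on a PMsg to be delivered */
        duration = GetTickCount() - start;

        /* unconditional check: a failure here fails the run everywhere */
        ok(duration >= 50, "got %lu\n", duration);
        /* timing-sensitive check: failures are reported as flaky on any platform */
        flaky ok(duration <= 125, "got %lu\n", duration);
        /* only tolerated when running on Wine; still a hard failure on Windows */
        flaky_wine ok(duration >= 50 && duration <= 125, "got %lu\n", duration);
    }

    START_TEST(flaky_example)
    {
        test_flaky_example();
    }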
This merge request was approved by Michael Stefaniuc.
I think the DMUS_E_ALREADY_SENT failures should not happen anymore after d0c3a0e03d557498195a70693003deae5b3e4588 (or rather much less frequently as it's always timing dependent).
Regarding the timing test I'm not completely sure making it flaky is the best option. Maybe we should just get rid of the test entirely if there's too much variation.
On Wed Oct 11 21:11:31 2023 +0000, Michael Stefaniuc wrote:
> I think the DMUS_E_ALREADY_SENT failures should not happen anymore
> after d0c3a0e03d557498195a70693003deae5b3e4588 (or rather much less
> frequently as it's always timing dependent).

Right, forgot about that and just looked at the top of the MR where I saw those failures.
So those can be dropped then.

> Regarding the timing test I'm not completely sure making it flaky is
> the best option. Maybe we should just get rid of the test entirely if
> there's too much variation.

You're not sure and I'm deferring the decision to you.
So going with flaky is the simplest workaround for now to stop that noise. And then later on we can remove the whole test if it isn't that useful.
Do we know why we are getting these timing failures?
Getting 150 ms instead of 125 ms I can understand: just a scheduling delay because the system is busy. But how can we get 31 ms, less than the expected 50 ms minimum?
What is the reasoning behind the 50 and 100 ms thresholds?
On Thu Oct 12 15:32:20 2023 +0000, Francois Gouget wrote:
> Do we know why we are getting these timing failures?
> Getting 150 ms instead of 125 ms I can understand: just a scheduling
> delay because the system is busy. But how can we get 31 ms, less than
> the expected 50 ms minimum?
> What is the reasoning behind the 50 and 100 ms thresholds?
No very clever reasoning: just some values that were supposedly different enough, compatible with the performance default read-ahead, which is apparently ~70ms (the difference between QUEUE and ATTIME delivery), and not too large, to avoid slowing down the test.
Fwiw, in the recent runs I only see spurious failures with unexpectedly high times (~265ms) for the QUEUE delivery mode, so maybe it's a genuine bug rather than a timing failure.
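To put rough numbers on that reasoning (assumed values picked for illustration, not taken from dmime.c): with a message stamped about 150 ms ahead and a queue read-ahead of about 70 ms, the three delivery modes land roughly where the test's windows expect them.

    #include <stdio.h>

    int main(void)
    {
        /* assumed values, for illustration only */
        unsigned int readahead = 70;   /* queue read-ahead mentioned above, in ms */
        unsigned int attime = 150;     /* message timestamp offset, in ms */

        printf("IMMEDIATE: ~0 ms    (test expects <= 50)\n");
        printf("QUEUE:     ~%u ms   (test expects 50..125)\n", attime - readahead);
        printf("ATTIME:    ~%u ms   (test expects 125..500)\n", attime);
        return 0;
    }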