On Wed Jul 26 17:25:19 2023 +0000, Santino Mazza wrote:
> Mmm, what about adding a timeout to WaitForSingleObject and doing the
> LeaveCriticalSection if the timeout passes? Not sure how long it should be.
Oh, I just saw that we have the `PRESENTER_MIXER_HAS_INPUT` flag, which is set when the presenter receives the `MFVP_MESSAGE_PROCESSINPUTNOTIFY` message. But when the streaming thread receives the `EVRM_PROCESS_INPUT` message it doesn't set that flag; shouldn't we set it there too and check for it before waiting for the thread? Something along the lines of the sketch below.
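(A sketch only of the idea; the message-loop shape and the `flags` field
name are assumptions for illustration, not actual code.)

    /* In the streaming thread's message loop: mirror what
     * MFVP_MESSAGE_PROCESSINPUTNOTIFY does and record that input arrived.
     * "presenter->flags" is an assumed field name for this sketch. */
    case EVRM_PROCESS_INPUT:
        EnterCriticalSection(&presenter->cs);
        presenter->flags |= PRESENTER_MIXER_HAS_INPUT;
        LeaveCriticalSection(&presenter->cs);
        /* ... process the mixer input ... */
        break;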
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/3319#note_40361
This is meant to simplify testing conditions that generally hold true
but may occasionally fail due to interference from external factors
(such as processes that start / stop, network connections being
opened / closed, etc).
The trick is to loop a few times on the set of flaky conditions until
they succeed. On the last attempt all failures are recorded as usual,
while on earlier attempts tryok() failures are ignored but cause one
more attempt to be made.
The simplest case looks like this:
    LOOP_ON_FLAKY_TESTS(3)
    {
        // ok() failures are never ignored and never retried
        ok(..., "check 1", ...);
        // tryok() failures are ignored except on the last attempt
        tryok(..., "check 2", ...);
    }
There is also:
* attempt_retry() which marks the current attempt as failed as if
calling tryok(0), and returns true if another attempt can be made.
* attempt_failed() which returns true if an ok() call failed.
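To make the semantics concrete, here is a minimal sketch of how such a
loop could work. Everything below (the start_attempt() helper and the
attempt_no / attempt_max / attempt_bad variables) is a hypothetical
illustration, not the actual test.h code:

    /* Sketch only: hypothetical implementation, not the actual test.h code. */
    static int attempt_no, attempt_max, attempt_bad;

    static int start_attempt(int max)
    {
        if (!attempt_no)                         /* entering the loop */
            attempt_max = max;
        else if (!attempt_bad || attempt_no == attempt_max)
        {
            attempt_no = 0;                      /* success, or out of attempts */
            return 0;
        }
        attempt_no++;                            /* run one (more) attempt */
        attempt_bad = 0;
        return 1;
    }

    #define LOOP_ON_FLAKY_TESTS(max) while (start_attempt(max))

    /* Record successes normally; on failure either report it for real
     * (last attempt) or silently request another attempt. */
    #define tryok(cond, ...) do { \
            int ok_ = (cond); \
            if (ok_ || attempt_no == attempt_max) ok(ok_, __VA_ARGS__); \
            else attempt_bad = 1; \
        } while (0)

    /* Mark the attempt as failed; true if another attempt can be made. */
    #define attempt_retry() (attempt_bad = 1, attempt_no < attempt_max)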
---
This is independent from the 'flaky' mechanism which adds some naming
constraints. The loop macro is still called LOOP_ON_FLAKY_TESTS()
despite being unrelated to the flaky mechanism. The attempt_retry()
and attempt_failed() macro names also don't make it obvious that they
are related to tryok().
I think this mechanism is better than the flaky one because a flaky test
can go bad without anyone noticing, whereas if a tryok() starts failing
systematically it will cause a real failure.
The other side of that coin is that, unlike flaky, the tryok()
mechanism does not entirely eliminate the possibility of getting a
failure; it just reduces it. But by adjusting the maximum number of
attempts one can achieve an arbitrarily low failure rate: for instance,
if a tryok() check fails 10% of the time and one wants at most a one in
a million failure rate, use LOOP_ON_FLAKY_TESTS(6) (see the calculation
sketched below). The cost is an increased run time in the worst case.
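As a back-of-the-envelope check (assuming the attempts fail
independently), N attempts all fail with probability p^N, so the
smallest sufficient N is ceil(log(target) / log(p)):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double p = 0.10;       /* per-attempt failure rate */
        const double target = 1e-6;  /* acceptable overall failure rate */
        /* The small epsilon guards against floating-point rounding when
         * log(target) / log(p) lands exactly on an integer. */
        int attempts = (int)ceil(log(target) / log(p) - 1e-9);
        printf("attempts needed: %d\n", attempts);  /* prints 6 */
        return 0;
    }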
This also limits the use of this mechanism to tests that have a
reasonably low failure rate to start with (otherwise one has to loop
too many times). Also note that there are cases where looping
essentially reduces the failure rate to zero. For instance
ieframe:webbrowser fails if IE creates a net session while the test is
counting them. But IE only creates that one net session on startup, so
trying even one more time should guarantee that the test will succeed.
Other cases, like scheduling delays and the creation of network
connections, are more probabilistic in nature. Maybe a comment in
test.h should offer some guidance on the target failure rate.
Eventually this may replace the flaky mechanism but that depends on
how well it works in practice and how practical it is to loop on flaky
tests. It seems to be going well in the few cases I looked at. But I
think this mechanism has value even if the two end up coexisting
indefinitely.
This MR uses tryok() in some actual tests for illustration and testing purposes. The final MR will probably split most of those off to separate MRs.
--
v2: mmdevapi/tests: Replace flaky with tryok() in the capture tests.
mmdevapi/tests: Replace flaky with tryok() in the render tests.
quartz/tests: Replace flaky() with tryok() to work around scheduling delays.
DEBUG ieframe/tests: tryok() framework testing ground.
ieframe/tests: Work around a network session race condition.
advapi32/tests: Replace the custom loop with tryok() mechanism.
https://gitlab.winehq.org/wine/wine/-/merge_requests/3418
On Wed Jul 26 15:17:52 2023 +0000, Nikolay Sivov wrote:
> This should be arranged differently, you can't generally assume that
> you've entered this lock when calling the helper.
Mmm, what about adding a timeout to WaitForSingleObject and doing the LeaveCriticalSection if the timeout passes? Not sure how long it should be. Something like the sketch below, maybe.
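(For illustration, the suggestion might look something like this sketch;
the 100 ms value is an arbitrary guess.)

    /* Try a bounded wait first, and only drop the lock if the streaming
     * thread appears to be blocked on it. */
    if (WaitForSingleObject(presenter->thread.hthread, 100) == WAIT_TIMEOUT)
    {
        LeaveCriticalSection(&presenter->cs);
        WaitForSingleObject(presenter->thread.hthread, INFINITE);
        EnterCriticalSection(&presenter->cs);
    }
    CloseHandle(presenter->thread.hthread);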
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/3319#note_40359
On Wed Jul 26 15:17:52 2023 +0000, Nikolay Sivov wrote:
> This does hang on my system, but is it reliable or is it depending on
> calls made close one after another in time?
It's definitely not a reliable way to test for this behavior, but I'm not sure how else I could do it.
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/3319#note_40353
Nikolay Sivov (@nsivov) commented about dlls/evr/presenter.c:
>
> PostThreadMessageW(presenter->thread.tid, EVRM_STOP, 0, 0);
>
> + /* Make sure streaming thread is not blocked by critical section before waiting for it. */
> + LeaveCriticalSection(&presenter->cs);
> WaitForSingleObject(presenter->thread.hthread, INFINITE);
> CloseHandle(presenter->thread.hthread);
> + EnterCriticalSection(&presenter->cs);
>
This should be arranged differently, you can't generally assume that you've entered this lock when calling the helper.
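(One possible rearrangement, sketched with a hypothetical helper split:
the helper stays lock-free so it makes no assumption about presenter->cs,
and the caller that actually holds the lock releases it around the join.)

    /* Hypothetical helper: never touches presenter->cs. */
    static void join_streaming_thread(struct video_presenter *presenter)
    {
        PostThreadMessageW(presenter->thread.tid, EVRM_STOP, 0, 0);
        WaitForSingleObject(presenter->thread.hthread, INFINITE);
        CloseHandle(presenter->thread.hthread);
    }

    /* In a caller that is known to hold the lock: */
    LeaveCriticalSection(&presenter->cs);
    join_streaming_thread(presenter);
    EnterCriticalSection(&presenter->cs);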
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/3319#note_40348
Made sure this works with both the current `klist` implementation and one modified to use KerbQueryTicketCacheExMessage. The patch for `klist` will be sent separately.
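(For context, a klist-style query through this message looks roughly
like the sketch below; error handling is omitted and it assumes linking
against Secur32.)

    #include <windows.h>
    #include <ntsecapi.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE lsa;
        LSA_STRING package = { sizeof("Kerberos") - 1, sizeof("Kerberos"), (char *)"Kerberos" };
        ULONG package_id, size, i;
        NTSTATUS status, sub_status;
        /* A zeroed LogonId means "the caller's own logon session". */
        KERB_QUERY_TKT_CACHE_REQUEST req = { KerbQueryTicketCacheExMessage };
        KERB_QUERY_TKT_CACHE_EX_RESPONSE *resp;

        LsaConnectUntrusted(&lsa);
        LsaLookupAuthenticationPackage(lsa, &package, &package_id);
        status = LsaCallAuthenticationPackage(lsa, package_id, &req, sizeof(req),
                                              (void **)&resp, &size, &sub_status);
        if (!status && !sub_status)
        {
            for (i = 0; i < resp->CountOfTickets; i++)
            {
                /* The Ex variant adds the client name/realm to each entry,
                 * which the plain KerbQueryTicketCacheMessage response lacks. */
                KERB_TICKET_CACHE_INFO_EX *t = &resp->Tickets[i];
                printf("%.*S@%.*S -> %.*S\n",
                       t->ClientName.Length / 2, t->ClientName.Buffer,
                       t->ClientRealm.Length / 2, t->ClientRealm.Buffer,
                       t->ServerName.Length / 2, t->ServerName.Buffer);
            }
            LsaFreeReturnBuffer(resp);
        }
        LsaDeregisterLogonProcess(lsa);
        return 0;
    }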
--
v5: dlls/kerberos: Implement KerbQueryTicketCacheExMessage.
https://gitlab.winehq.org/wine/wine/-/merge_requests/3393