This is meant to simplify testing conditions that generally hold true but may occasionally fail due to interference from external factors (such as processes starting or stopping, network connections being opened or closed, etc). The trick is to loop a few times on the set of flaky conditions until they succeed. On the last attempt all failures are recorded as usual, while on the previous attempts the tryok() failures are ignored but cause one more attempt to be made.
The simplest case looks like this:

```c
LOOP_ON_FLAKY_TESTS(3)
{
    // ok() failures are never ignored and not retried
    ok(..., "check 1", ...);
    // tryok() failures are ignored except on the last attempt
    tryok(..., "check 2", ...);
}
```
There is also:
* attempt_retry(), which marks the current attempt as failed as if calling tryok(0), and returns true if another attempt can be made.
* attempt_failed(), which returns true if an ok() call failed.
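For instance, here is a sketch of how attempt_retry() could be used when the failure manifests as a timeout rather than a directly testable value (the event setup is elided, and this assumes a continue inside the loop body proceeds to the next attempt):

```c
LOOP_ON_FLAKY_TESTS(3)
{
    DWORD ret = WaitForSingleObject(event, 100);
    // A timeout may just be a scheduling delay: mark this attempt as
    // failed and, if attempts remain, skip straight to the next one.
    if (ret == WAIT_TIMEOUT && attempt_retry()) continue;
    ok(ret == WAIT_OBJECT_0, "wait returned %lu\n", ret);
}
```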
---

This mechanism is independent from the 'flaky' one, which leads to some naming awkwardness: the loop macro is still called LOOP_ON_FLAKY_TESTS() despite being unrelated to the flaky mechanism, and the attempt_retry() and attempt_failed() macro names don't make it obvious that they are related to tryok().
I think this mechanism is better than the flaky one because a flaky test can go bad without anyone noticing, whereas if a tryok() starts failing systematically it will cause a real failure.
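To illustrate the contrast (a sketch; is_ready() stands in for a real check, and the statement-prefix form of flaky matches how the existing mechanism is used):

```c
// flaky: the failure is downgraded on every run, so if the condition starts
// failing systematically the test stays green and nobody notices.
flaky
ok(is_ready(), "not ready\n");

// tryok(): transient failures are retried, but a systematic failure still
// fails on the last attempt and shows up as a real test failure.
LOOP_ON_FLAKY_TESTS(3)
{
    tryok(is_ready(), "not ready\n");
}
```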
The other side of that coin is that, unlike flaky, the tryok() mechanism does not entirely eliminate the possibility of getting a failure; it only reduces it. However, by adjusting the maximum number of attempts one can achieve an arbitrarily low failure rate. For instance, if an ok() call fails 10% of the time and one wants at most a 1 in a million failure rate, use LOOP_ON_FLAKY_TESTS(6). The cost is an increased run time in the worst case.
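For the record, the arithmetic behind that number, assuming the attempts fail independently with per-attempt failure rate $p$ and a target overall failure rate $t$:

$$p^n \le t \iff n \ge \frac{\log t}{\log p}, \qquad \text{e.g. } p = 0.1,\ t = 10^{-6} \implies n = 6.$$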
This also limits the use of this mechanism to tests that have a reasonably low failure rate to start with (otherwise one has to loop too many times). Also note that there are cases where looping essentially reduces the failure rate to zero. For instance, ieframe:webbrowser fails if IE creates a net session while the test is counting them. But IE only creates that one net session on startup, so trying even one more time should guarantee that the test succeeds. Other cases, like scheduling delays and the creation of network connections, are more probabilistic in nature. Maybe a comment in test.h should offer some guideline as to the target failure rate.
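A sketch of what that ieframe case could look like (get_session_count() is a hypothetical stand-in for the test's actual counting code):

```c
LOOP_ON_FLAKY_TESTS(2)
{
    ULONG count = get_session_count();  // hypothetical helper
    // IE creates its single net session on startup, so at most one attempt
    // can race with it; a second attempt should always see a stable count.
    tryok(count == expected, "got %lu sessions, expected %lu\n", count, expected);
}
```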
Eventually this may replace the flaky mechanism but that depends on how well it works in practice and how practical it is to loop on flaky tests. It seems to be going well in the few cases I looked at. But I think this mechanism has value even if the two end up coexisting indefinitely.
This MR uses tryok() in some actual tests for illustration and testing purposes. The final MR will probably split most of those off into separate MRs.
--
v2:
* mmdevapi/tests: Replace flaky with tryok() in the capture tests.
* mmdevapi/tests: Replace flaky with tryok() in the render tests.
* quartz/tests: Replace flaky() with tryok() to work around scheduling delays.
* DEBUG ieframe/tests: tryok() framework testing ground.
* ieframe/tests: Work around a network session race condition.
* advapi32/tests: Replace the custom loop with tryok() mechanism.