I'm concerned by some of the recent events regarding WineTest:
* A couple of times, compilation of the official binaries broke. This got caught within a couple of days, which is okay-ish.
* But the winehq.org upgrade completely broke test.winehq.org and it took 10 days for anyone to notice!
* Then there's the wininet:ftp, wininet:http & co breakage, also caused by the winehq.org upgrade. This time it took a whopping 24 days for anyone to notice! https://www.winehq.org/pipermail/wine-devel/2019-October/151834.html
* There are also TestBot VMs that have almost completely stopped succeeding at posting WineTest results: vista, vistau64*, w8*, w864. I'm probably more to blame for this than anyone else, but I would have expected someone to publicly wonder what's going on with these VMs.
* Finally there's the gdi32:bitmap not-quite regression. I may have missed something but I did not see it mentioned before. https://www.winehq.org/pipermail/wine-devel/2019-October/152082.html
My sinking feeling is:
* Nobody looks at test.winehq.org.
* Major WineTest regressions can sail through without anyone noticing.
My feeling is that nobody feels responsible for making sure test.winehq.org works and for spotting regressions.
This is why I think we may need someone whose responsibility would be to monitor test.winehq.org, analyze the WineTest results, diagnose new issues and report them.
I did something like that a few times but it takes quite a bit of time so I had to stop. Essentially it went something like this:
* Every couple of weeks, open the results for the latest build: https://test.winehq.org/data/7d954f23356f0aaf49b1ef0c4bed83041cb41c08/
* For each line that has a failure on Windows, open the corresponding test page. For instance: https://test.winehq.org/data/tests/advapi32:service.html
* If the tests start failing consistently on a specific date on at least one platform, dig in to figure out why. For instance: https://test.winehq.org/data/tests/gdi32:bitmap.html https://test.winehq.org/data/tests/wininet:http.html
* Likewise, if a test has been consistently getting 13 failures on a platform and now consistently gets 19 it may be because a commit introduced new failures. For instance: https://test.winehq.org/data/tests/advapi32:service.html
This requires regular checks; otherwise failure is the new normal and there is nothing to see.
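The comparison step above (a test that consistently got 13 failures now consistently gets 19) could be mechanized. Here is a rough Python sketch; this is not an existing WineHQ tool, and the function name and data shapes are made up for illustration. It assumes you have already scraped per-test failure counts for a platform from two runs into plain dictionaries:

```python
# Hypothetical helper: compare per-test failure counts between two
# WineTest runs on one platform and flag tests whose count grew or
# that newly started failing.
def find_regressions(old_counts, new_counts):
    """old_counts/new_counts map 'dll:test' -> failure count."""
    regressions = {}
    for test, new in new_counts.items():
        old = old_counts.get(test, 0)  # absent means it used to pass
        if new > old:
            regressions[test] = (old, new)
    return regressions

# Example mirroring the cases from this thread: advapi32:service
# going from 13 to 19 failures, and gdi32:bitmap newly failing.
old = {"advapi32:service": 13, "gdi32:bitmap": 0}
new = {"advapi32:service": 19, "gdi32:bitmap": 4, "wininet:http": 7}
print(find_regressions(old, new))
```

Hooking something like this up to the test.winehq.org pages would still require scraping them, but the diffing itself is trivial, and running it on every build would catch the "24 days for anyone to notice" cases much sooner.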
Hi Francois,
One of the issues I see is that it's not easy to access; there's no quick link to it.
Can it be added alongside the "WineHQ", "Wiki", "AppDB"... links we already have?
Every time I remember to check it out, I have to find the link in the wiki. (I don't have it bookmarked.)
Having an email sent to the list, once per release/week, would also help in making the website more visible to more people.
Regards
Alistair.
On 05-10-19 02:02, Francois Gouget wrote:
[...]
For instance, there are also the winstation tests, which started failing quite recently:
https://test.winehq.org/data/tests/user32:winstation.html
without any changes to the tests:
https://source.winehq.org/git/wine.git/history/HEAD:/dlls/user32/tests/winst...
They (at least sometimes) succeeded at the start of the year:
https://web.archive.org/web/20190124112119/http://test.winehq.org/data/
so it must be due to some testbot change. I was actually under the assumption that you were checking the test log after testbot changes to see if the changes caused any failures, but I guess not.
I actually noticed the failures as soon as they started happening since I check the test page quite often (because I get excited when I see green stuff), but again assumed someone was monitoring that already. Since I already look at it quite often, I can just file bug reports when I see consistent new failures if you like.
Best, Sven
On Sat, 5 Oct 2019, Sven Baars wrote: [...]
so it must be due to some testbot change. I was actually under the assumption that you were checking the test log after testbot changes to see if the changes caused any failures, but I guess not.
That depends on the type of change. Most of the TestBot changes (commits to tool/testbot) have no impact on the test results. Only the VM changes would make a difference. So I normally check the results after changing the VMs. But it's not actually very practical because when VMs already have a significant number of failures, some of which are random, it's hard to say if a new run is really better or worse :-(
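The random-failure problem described above (hard to tell whether a new run is really better or worse) can be partly worked around by only counting a failure as real when it repeats. A minimal sketch of one such heuristic, again purely illustrative and not TestBot code: treat a test as consistently failing only if it failed in each of the last N runs, so one-off flaky failures drop out of the comparison.

```python
# Illustrative heuristic for filtering out random failures: a test is
# "consistently failing" only if it appears in every one of the last
# n runs' failure sets.
def consistent_failures(runs, n=3):
    """runs: list of sets of failing 'dll:test' names, most recent last."""
    recent = runs[-n:]
    if len(recent) < n:
        return set()  # not enough data yet to call anything consistent
    return set.intersection(*recent)

runs = [
    {"wininet:http", "user32:winstation"},
    {"user32:winstation", "d3d9:device"},
    {"user32:winstation"},
]
# Only user32:winstation failed in all three runs.
print(consistent_failures(runs))
```

Comparing the consistent sets before and after a VM change would then give a less noisy answer than comparing raw failure counts of single runs.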
[...]
Since I already look at it quite often, I can just file bug reports when I see consistent new failures if you like.
Yes, I think that could be useful. In particular if the new failures are caused by changes in the tests it's useful to identify the bad commit, or at least the range of bad commits.
If the new failure is not caused by a change in the tests and there's no other obvious explanation, then maybe it's better to ask what's up on wine-devel.
On 07-10-19 04:16, Francois Gouget wrote:
[...]
Since I already look at it quite often, I can just file bug reports when I see consistent new failures if you like.
Yes, I think that could be useful. In particular if the new failures are caused by changes in the tests it's useful to identify the bad commit, or at least the range of bad commits.
If the new failure is not caused by a change in the tests and there's no other obvious explanation, then maybe it's better to ask what's up on wine-devel.
I ended up submitting some patches over the weekend to fix some of the test failures. If the ftp server gets fixed, and someone has a look at the gdi32:bitmap and gdi32:clipping tests, I think we should be seeing green again.
On 05.10.19 at 02:02, Francois Gouget wrote:
[...]
I did something like that a few times but it takes quite a bit of time so I had to stop. Essentially it went something like this:
- Every couple of weeks, open the results for the latest build: https://test.winehq.org/data/7d954f23356f0aaf49b1ef0c4bed83041cb41c08/
There is https://www.winehq.org/~jwhite/latest.html, which has meanwhile grown very long; maybe someone can use those scripts to make bad things more obvious.
On Sat, 5 Oct 2019 at 02:03, Francois Gouget <fgouget@free.fr> wrote:
Hence why I think we may need someone whose responsibility would be to monitor test.winehq.org, analyze the WineTest results, diagnose new issues and report them.
I think ideally, yes. More generally, I'm usually in favour of making specific people responsible for specific things. It's worth pointing out that ideally we'd have someone skilled enough to investigate and write fixes for a good chunk of the failures we have, or that are going to come up. Bringing failures to the attention of people familiar with the relevant areas helps, but often enough other commitments are the reason those people didn't notice the failures in the first place.