https://bugs.winehq.org/show_bug.cgi?id=51751
Bug ID: 51751
Summary: Winetest fails on w10 Pro 20H2
Product: Wine
Version: 6.16
Hardware: x86-64
OS: Windows
Status: UNCONFIRMED
Severity: normal
Priority: P2
Component: -unknown
Assignee: wine-bugs@winehq.org
Reporter: saulius2@gmail.com
(It seems there is no component dedicated to the client part of the Winetest / WRT suite, so I leave it as -unknown)
I ran the precompiled Winetest on w10, and it finished with this error:
... Running: xmllite:writer (701 of 701)
Running tests - 69 failures
Running: Done (701 of 701)
Cleaning up - 69 failures
Warning: 69 tests failed. There is probably something broken with your setup. You need to address this before submitting results.
Finished - 69 failures
Thus the results weren't submitted.
This is a clean install of w10 (just reinstalled) on a physical machine (HP EliteBook 840 G3).
How should I proceed further?
Saulius K. saulius2@gmail.com changed:
What          |Removed |Added
----------------------------------------------------------------------------
CC            |        |saulius2@gmail.com
--- Comment #1 from Saulius K. saulius2@gmail.com ---
Created attachment 70643
  --> https://bugs.winehq.org/attachment.cgi?id=70643
WRT report from build 1cddd8d5715d on Win10 20H2
The build is 1cddd8d5715dcbba618425c20bfabf19f9a20422:
C:\Users\hp>Downloads\winetest-latest.exe --version
1cddd8d5715d
The OS is w10 Pro version 20H2 (to be exact, version 2009):
C:\Users\hp>Reg Query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ReleaseId
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion
    ReleaseId    REG_SZ    2009
C:\Users\hp>ver
Microsoft Windows [Version 10.0.19042.1165]
Saulius K. saulius2@gmail.com changed:
What          |Removed                        |Added
----------------------------------------------------------------------------
Summary       |Winetest fails on w10 Pro 20H2 |Too many Winetest failures on w10 Pro 20H2 (on physical machine)
--- Comment #2 from Saulius K. saulius2@gmail.com ---
Created attachment 70644
  --> https://bugs.winehq.org/attachment.cgi?id=70644
WRT output from build 1cddd8d5715d on Win10 20H2
--- Comment #3 from Nikolay Sivov bunglehead@gmail.com ---
If you want to understand and fix some of these failures, you'll have to look module by module.

Is it possible you're running with a DPI above 96? If so, you could set it back to the traditional default and see if that improves the numbers.
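For reference, the effective DPI can be checked from the registry. This is only a minimal sketch of the usual locations, not something taken from Winetest documentation, and the per-user value may be absent if scaling was never changed from the default:

rem Per-user DPI override; 96 means 100% scaling
reg query "HKCU\Control Panel\Desktop" /v LogPixels

rem System-wide font DPI
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontDPI" /v LogPixels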
--- Comment #4 from Saulius K. saulius2@gmail.com ---
(In reply to Nikolay Sivov from comment #3)

> If you want to understand and fix some of these failures, you'll have to look module by module.
That's what I want to do (or have done by someone else) in the end.
For now I want to report that the Winetest reporting logic contains a serious flaw.
Results generated on a clean install of a supported OS on a branded machine _should not_ be rejected in such a terse (non-verbose) manner, IMO.
Results should be either:
(1) rejected with an explanation of where exactly the testing operator should look (i.e. how the large-scale metrics differ from the accepted results), or
(2) accepted by the server in the usual manner, so the testing operator can inspect the results on https://test.winehq.org/data by themselves.
Which one is more appropriate for the current workflow? Or is there an argument against both proposals?
> Is it possible you're running with a DPI above 96? If so, you could set it back to the traditional default and see if that improves the numbers.
Thanks for the hint. I will test that ASAP.
Even if that helps in my case, it will not help in general (for other, uninformed test operators / testing contributors).
A change in reporting logic is still needed, IMO.
--- Comment #5 from Nikolay Sivov bunglehead@gmail.com ---
The motivation for rejecting reports with a large number of failures is to avoid results that we know are broken. You can argue that the tests themselves are broken, and that might be the case, but figuring out first whether there is a major misconfiguration (even if only from the point of view of test expectations) is preferred.
--- Comment #6 from Saulius K. saulius2@gmail.com ---
(In reply to Nikolay Sivov from comment #5)

> The motivation for rejecting reports with a large number of failures is to avoid results that we know are broken.

Except that in this case the developers don't know whether the results are broken. They can only guess.
My aim is to improve the situation both for developers and test operators.
> figuring out first whether there is a major misconfiguration (even if only from the point of view of test expectations) is preferred.

And where are the public instructions for that process?
A comment in Bugzilla doesn't count as documentation (meaning that would be another bug, this time against WineHQ.org, I guess).
--- Comment #7 from Nikolay Sivov bunglehead@gmail.com ---
Instructions for what? Winetest gives you a warning explaining why the results were not submitted. If you want to make the tests run cleaner on that configuration, take a look at the log to see what fails the most and start from there.
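As a rough first pass one could simply grep the saved report for the framework's failure messages. This is only a sketch: "winetest.report" below is a placeholder for wherever the report was actually written, and it assumes the standard "Test failed:" prefix that the test framework prints for failed checks:

rem List the individual failed checks
findstr /c:"Test failed" winetest.report

rem Count them
find /c "Test failed" winetest.report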
--- Comment #8 from Saulius K. saulius2@gmail.com ---
(In reply to Nikolay Sivov from comment #7)

> Instructions for what?
I am referring to this process:
> figuring out first whether there is a major misconfiguration
Detecting a major misconfiguration means checking a specific list of configuration items/settings on the system (like the DPI setting you mentioned).
My understanding is this: if a developer says "misconfiguration", then to be constructive they should have a fairly specific definition of it at hand.
That is exactly what I am missing (a configuration checkup list/tool). Does this mean the project hasn't documented any such measure?
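For illustration, the kind of checkup I have in mind might look like this minimal sketch (the items and registry paths below are my own guesses at the usual suspects, not an official list):

rem DPI scaling (96 = 100%); the value may be absent on a default install
reg query "HKCU\Control Panel\Desktop" /v LogPixels

rem User locale
reg query "HKCU\Control Panel\International" /v LocaleName

rem Desktop resolution
wmic path Win32_VideoController get CurrentHorizontalResolution,CurrentVerticalResolution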
> Winetest gives you a warning explaining why the results were not submitted.

I may be repeating myself, but for a non-developer that explanation is too loose/broad to be a starting point for looking at the results.

> If you want to make the tests run cleaner on that configuration, take a look at the log to see what fails the most and start from there.

OK, but what exactly do you mean by "what" here:
* per-DLL failures,
* per-function-group failures,
* per-specific-function failures,
* per-specific-line failures?
As a former contributor, I am asking for something specific and particular, not for generalizations. A documentation URL would do too. Thanks :)
Sveinar Søpler cybermax@dexter.no changed:
What          |Removed |Added
----------------------------------------------------------------------------
CC            |        |cybermax@dexter.no
--- Comment #9 from Sveinar Søpler cybermax@dexter.no ---
I am not entirely sure the tests are supposed to be run "by normal people". It is (I suppose) a mostly undocumented test tool that you probably need to know intimately to figure out :)
You could try asking on the wine-devel mailing list, but I am not sure what help would be given :)
I am also not sure this "bug" will be picked up as a "winetests" bug, since it is filed under Product: Wine. If the right person sees it, you might get lucky, though.
--- Comment #10 from Saulius K. saulius2@gmail.com ---
(In reply to Sveinar Søpler from comment #9)

> I am not entirely sure the tests are supposed to be run "by normal people".
My impression was that it doesn't matter who runs the tests.
> It is (I suppose) a mostly undocumented test tool that you probably need to know intimately to figure out :)

There is some kind of documentation [2]:

--- quote ---
To simplify running the tests, a single Windows executable containing all the tests (named WineTest) is updated daily and can be downloaded from the bottom of WineHQ's testing page. This helps collect results from a broader set of Windows, Unix and Mac environments than the Wine TestBot farm can cover.

Running WineTest requires no programming skills whatsoever. Simply download the executable, answer any prompts that come up, and let the test suite do the rest. This makes running the test suite one of the simplest ways that users can help improve Wine.
--- quote ---

> I am also not sure this "bug" will be picked up as a "winetests" bug, since it is filed under Product: Wine. If the right person sees it, you might get lucky, though.

I guess that's a perfect chance for a new bug report under Product: WineHQ Bugzilla (probably missing a component).
Thanks for the support.
[2]: https://wiki.winehq.org/Conformance_Tests#Wine_conformance_tests
Saulius K. saulius2@gmail.com changed:
What          |Removed  |Added
----------------------------------------------------------------------------
Component     |-unknown |testcases