So, in a radical break from tradition, we're trying to accomplish something useful at Wineconf.
Specifically, we're making 'make test' work for everyone, not just Alexandre.
Maarten Lankhorst is maintaining a tree of all of our test related patches.
So, for those that want to play, the thing to do is:

    git fetch git://repo.or.cz/wine/testsucceed.git master:testsucceed
    git rebase testsucceed

Then you can do a make test (with a clean .wine directory) and share your results. It's useful to use Dan's filter:

    egrep '__test|make.*ok|Backtrace' <your-log-file-here>
Up through the end of day tomorrow, you can send me the results of that egrep and I'll tally and triage.
If you have a patch, send it in, and CC Maarten so he can update it.
The goal is to have the number of failures as close to zero as possible, and, for the non-zero cases, to at least understand each failure.
Cheers,
Jeremy
Jeremy White wrote:
[...]
Hi,
It's nice to fix all the test failures when running them in Wine. We do, however, have several tests that still fail on Windows, so how can we be sure the test results in Wine are correct? Some that fail on a lot of Windows boxes:
advpack:advpack advpack:files comctl32:monthcal crypt32:encode gdi32:font kernel32:actctx kernel32:process usp10:usp10 wintrust:softpub
I've just picked some that fail on different versions of Windows. There are also several tests that consistently fail on a particular version of Windows.
On Saturday, 6 October 2007 20:28:36, Paul Vriens wrote:
[...]
It's nice to fix all the test failures when running them in Wine. We do, however, have several tests that still fail on Windows, so how can we be sure the test results in Wine are correct? Some that fail on a lot of Windows boxes:
I brought that up at WineConf, but I want to add it here too: it is a bit tricky to draw the line between a broken test and a badly set up Windows installation. There are situations where applications are broken on a specific Windows installation. As an example, d3d tests may fail when run in VMware. VMware has a D3D-to-GL wrapper similar to Wine's; it is a work-in-progress piece of code that fails to run many apps, so if the d3d tests fail in VMware, it is likely VMware's bug. The problem can occur in other areas too, like an application overwriting system-global libraries (aka DLL hell), or malware, copy protection rootkits, security apps, system tuners, etc. For example, there is a problem with Steam on Wine that occurs on Windows too, triggered by various tuning apps.
So a failure on Windows doesn't necessarily mean that the test is wrong.
Stefan Dösinger wrote:
So a failure on Windows doesn't necessarily mean that the test is wrong.
Stefan, I know this has been hard for you, and I think video tests are a worst case, but I think we have to push you on these tests.
I think a test that fails, or crashes a system because of a driver bug is a broken test. It's admittedly hard to write a test that will detect a vmware situation or driver bug and work around it, but I think that is what we should do.
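To make that concrete, here is a minimal sketch (not code from the tree) of what such a guard could look like in a d3d9 test: query the adapter identifier and skip when a known-broken implementation is detected. skip() and D3DADAPTER_IDENTIFIER9 are real, but the test name and the "VMware" check are just hypothetical examples of a blacklist entry:

    #define COBJMACROS
    #include <string.h>
    #include <d3d9.h>
    #include "wine/test.h"

    /* hypothetical check for a D3D implementation we know is too broken to test */
    static BOOL d3d_known_broken(IDirect3D9 *d3d)
    {
        D3DADAPTER_IDENTIFIER9 id;

        if (FAILED(IDirect3D9_GetAdapterIdentifier(d3d, D3DADAPTER_DEFAULT, 0, &id)))
            return TRUE; /* can't even identify the adapter, don't trust it */
        /* example blacklist entry: VMware's D3D-to-GL wrapper */
        return strstr(id.Description, "VMware") != NULL;
    }

    START_TEST(visual)
    {
        IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);

        if (!d3d)
        {
            skip("could not create a D3D9 object\n");
            return;
        }
        if (d3d_known_broken(d3d))
        {
            skip("broken D3D implementation detected, skipping the visual tests\n");
            IDirect3D9_Release(d3d);
            return;
        }
        /* ... the actual tests would run here ... */
        IDirect3D9_Release(d3d);
    }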
It's just like compiling without warnings. Once there are no warnings, it's easy to make sure none creep in. But once just a few are excused as being 'understandable', then a lot more quickly creep in, and no one pays attention to warnings.
Just my Swiss Francs .02
Cheers,
Jeremy
On 10/10/2007, Jeremy White jwhite@codeweavers.com wrote:
I think a test that fails, or crashes a system because of a driver bug is a broken test. It's admittedly hard to write a test that will detect a vmware situation or driver bug and work around it, but I think that is what we should do.
At that point the only real option is keeping a list of blacklisted video drivers (which some applications actually sort of do), but I'm not sure we really want to go there. I do think we should investigate native failures, and fix them if possible, but if tests fail because e.g. vmware has a broken d3d implementation, I think the test is essentially doing what it should do.
On 10/10/07, H. Verbeet hverbeet@gmail.com wrote:
At that point the only real option is keeping a list of blacklisted video drivers (which some applications actually sort of do), but I'm not sure we really want to go there. I do think we should investigate native failures, and fix them if possible, but if tests fail because e.g. vmware has a broken d3d implementation, I think the test is essentially doing what it should do.
Conversely, couldn't you write a test that detects known good drivers via the loaded kernel module and make the test dependent on their presence? Rather than running the test and not knowing if it will pass or fail, it makes more sense to only run the test when we know it SHOULD pass; then, if it fails, we have identified a regression.
On 10/10/07, Steven Edwards winehacker@gmail.com wrote:
[...]
Conversely, couldn't you write a test that detects known good drivers via the loaded kernel module and make the test dependent on their presence? Rather than running the test and not knowing if it will pass or fail, it makes more sense to only run the test when we know it SHOULD pass; then, if it fails, we have identified a regression.
Just to be clear, what I mean is a whitelist rather than a blacklist. I am not sure it would really work in practice, given how often the vendors update their drivers, but it provides a stable reference point to work from. The overall goal is stability when trying to identify what is really a regression. It seems obvious some sort of detection will have to be done or else we will never have a stable framework.
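As a purely hypothetical sketch of what such a whitelist could look like (much simplified: it keys on the adapter's PCI vendor id rather than the actual loaded kernel module, and none of this is code from the tree):

    #include <d3d9.h>

    /* hypothetical whitelist of vendors whose drivers we consider testable */
    static const DWORD whitelisted_vendors[] =
    {
        0x10de, /* NVIDIA */
        0x1002, /* ATI/AMD */
        0x8086, /* Intel */
    };

    static BOOL vendor_is_whitelisted(const D3DADAPTER_IDENTIFIER9 *id)
    {
        unsigned int i;

        for (i = 0; i < sizeof(whitelisted_vendors) / sizeof(whitelisted_vendors[0]); i++)
            if (id->VendorId == whitelisted_vendors[i]) return TRUE;
        return FALSE; /* unknown driver: don't run the fragile tests at all */
    }

A test would then only run when the whitelist check passes, so any failure that does show up is much more likely to be a real regression.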
On Wed, 10 Oct 2007, Steven Edwards wrote: [...]
Just to be clear, what I mean is a whitelist rather than a blacklist.
The problem with whitelists is that they will stop us from finding where there are issues. They may be OK when running the tests on Windows, as we don't really care if our tests cannot run on a Windows system due to a Windows driver bug. However, we want to know about the Linux systems (Mac OS X, FreeBSD, etc.) where Wine runs into driver bugs, because we want to work around these bugs where possible. So on Linux we should use blacklists.
Steven Edwards wrote:
[...]
Conversely, couldn't you write a test that detects known good drivers via the loaded kernel module and make the test dependent on their presence? Rather than running the test and not knowing if it will pass or fail, it makes more sense to only run the test when we know it SHOULD pass; then, if it fails, we have identified a regression.
But isn't introducing bad or good lists the same as using the Windows version? I thought the general idea is to have tests that act on behavior?
On 10/10/07, Paul Vriens paul.vriens.wine@gmail.com wrote:
But isn't introducing bad or good lists the same as using the Windows version? I thought the general idea is to have tests that act on behavior?
I don't think so; a buggy driver is outside the scope of something we can fix or care to work around in Wine. We already do some detection of certain devices via PCI ids somewhere in the Wine DirectX code for certain features, and I think this should be extended somehow to the drivers, to present a warning to the user or some such. The more I think about it, blacklisting really seems to be the only way given the timing of the driver releases.
This issue goes beyond just the testing framework. Let's say a user has a buggy ATI or NVIDIA driver installed, and it works for OpenGL demos and the like under Linux, but when the user runs Wine it hangs the system because we are stressing the driver in ways beyond the normal eye-candy the WM pushes. The user is going to think it's a problem in Wine and not in the driver. I see this day in and day out at CodeWeavers, and every single time the users blame us, even when it's fairly obvious that the hang or crash only affects 3D applications.
On Wednesday, 10 October 2007 10:07:58, Steven Edwards wrote:
[...]
We already do some detection of certain devices via PCI ids somewhere in the Wine DirectX code for certain features, and I think this should be extended somehow to the drivers, to present a warning to the user or some such.
Just FYI, the PCI id stuff in WineD3D works the other way around: it derives a PCI id from the features the card reports, in order to hand a proper id to the game. It doesn't read the real PCI id to tell which features the card has.
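Roughly, the direction Stefan describes could be sketched like this (heavily simplified, with made-up names and example ids, not the actual WineD3D code): map the GL renderer string / feature level to a PCI device id that is then reported to the application:

    #include <string.h>
    #include <windef.h>

    /* illustrative only: GL renderer string -> PCI device id reported to the game */
    static DWORD guess_device_id(const char *gl_renderer)
    {
        if (strstr(gl_renderer, "GeForce 8800")) return 0x0191; /* example id */
        if (strstr(gl_renderer, "Radeon X1600")) return 0x71c2; /* example id */
        return 0; /* unknown renderer: fall back to some default card */
    }

So the flow is capabilities to id, not id to capabilities, which is why it can't be reused as-is to detect a buggy driver.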
"H. Verbeet" hverbeet@gmail.com writes:
On 10/10/2007, Jeremy White jwhite@codeweavers.com wrote:
I think a test that fails, or crashes a system because of a driver bug is a broken test. It's admittedly hard to write a test that will detect a vmware situation or driver bug and work around it, but I think that is what we should do.
At that point the only real option is keeping a list of blacklisted video drivers (which some applications actually sort of do), but I'm not sure we really want to go there. I do think we should investigate native failures, and fix them if possible, but if tests fail because e.g. vmware has a broken d3d implementation, I think the test is essentially doing what it should do.
Yes, if the code is going to break in normal use too, then it's OK for the test to fail. It simply means the platform is useless both for regression testing and for normal apps that use the functionality. If we really have to add workarounds for broken drivers, this has to be done in the code, not in the tests.
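As an illustration of putting the workaround in the code rather than the tests (purely hypothetical structure and names, not actual Wine code), a driver quirk detected once in the implementation benefits both applications and the tests:

    #include <string.h>
    #include <windef.h>

    /* hypothetical quirk table filled in by the implementation at startup */
    struct driver_quirks
    {
        BOOL broken_render_to_texture;
    };

    static void detect_quirks(const char *gl_renderer, struct driver_quirks *quirks)
    {
        memset(quirks, 0, sizeof(*quirks));
        if (strstr(gl_renderer, "Some Broken Driver"))
            quirks->broken_render_to_texture = TRUE; /* take a fallback code path instead */
    }

The tests then exercise the same fallback path that real applications would hit, instead of each test carrying its own driver check.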
On Tue, 9 Oct 2007, Jeremy White wrote: [...]
I think a test that fails, or crashes a system because of a driver bug is a broken test. It's admittedly hard to write a test that will detect a vmware situation or driver bug and work around it, but I think that is what we should do.
It's just like compiling without warnings. Once there are no warnings, it's easy to make sure none creep in. But once just a few are excused as being 'understandable', then a lot more quickly creep in, and no one pays attention to warnings.
I would add that if 'make test' crashes a user's X server or the whole system, then that user will just stop running 'make test' altogether (or will run it in VNC if he is very, very persistent), and will certainly not set the tests up to run automatically and report the results to tests.winehq.org. With the people who run 'make test' and report their results already being so few and far between, this is pretty bad.
So it's important that these issues be investigated and either worked around in Wine, or that a clear way of fixing them by upgrading the drivers is found and documented.
On Fri, Oct 12, 2007 at 12:22:43PM +0200, Francois Gouget wrote:
[...]
So it's important that these issues be investigated and either worked around in Wine, or that a clear way of fixing them by upgrading the drivers is found and documented.
Or do a bugreport for X / your distro! :)
I just tried bothering our X developers, but mentioning the Xorg "ati" driver (for my Radeon Mobility) just causes laughs. :/
Ciao, Marcus
On Fri, 12 Oct 2007, Marcus Meissner wrote: [...]
Or do a bugreport for X / your distro! :)
I just tried bothering our X developers, but mentioning the Xorg "ati" driver (for my Radeon Mobility) just causes laughs. :/
Yep.
For a while my X server was crashing whenever I would exit MythTV or run the Wine tests. It turns out the ATI drivers (or X itself, I'm not sure) have a bug which is triggered when DRI is disabled, but the X developers did not seem interested in fixing it. For me DRI got disabled because the radeon_dri.so file got moved to another Debian package during an upgrade and thus was missing. Hopefully this bit of info can help others who get X crashes when running the Wine tests.