Patchwatcher is online and giving reasonably good feedback on the patch stream. The bug that caused every patch to be marked 'failed tests' is fixed, and the blacklist is expanded enough that false regressions seem to be rare.
There are still bugs:
1. The dashboard shows no status column for http://kegel.com/wine/patchwatcher/results/32.txt even though http://kegel.com/wine/patchwatcher/results/32.log exists and shows that there was some strange problem in applying the patch.
2. The most recent patch shows up as 'queued', but the link to the patch doesn't work.
Results are at http://kegel.com/wine/patchwatcher/results/ and source code is at http://winezeug.googlecode.com
Next steps:
- add timeout to handle hanging tests (every day or two I have to kill some test or other). There's no portable way to do this from wine/test.h that works in Win9x, so I'll probably set WINETEST_WRAPPER to run the tests via a wrapper that implements the timeout (see the sketch after this list).
- distribute across multiple machines by splitting into master (which watches the patch stream) and slaves (which execute tests). I will probably use http and ftp for this so the slaves can be remote.
- support multiple architectures (anybody want to run the slaves on MacOSX for me?)
- improve the web page to have error counts and perhaps separate links to the build and test logs
- merge the chroot support (though this will need porting to run on MacOSX)
- add valgrind as the next step after running tests
- improve the web page generator to show current status (e.g. show the results from "make test" before starting valgrind)
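For the timeout item above, here's a minimal sketch of the kind of wrapper I have in mind; the 120-second limit, and the assumption that WINETEST_WRAPPER hands the wrapper the real test command as its arguments, are both guesses:

    #!/usr/bin/env python
    # timeout_wrapper.py: run the wrapped test command given on the command
    # line and kill it if it runs longer than TIMEOUT seconds.  Unix-only
    # (uses os.kill), which is fine for the Linux build machines.
    import os, signal, subprocess, sys, time

    TIMEOUT = 120  # seconds; purely a guess at a sane per-test limit

    def main():
        proc = subprocess.Popen(sys.argv[1:])
        deadline = time.time() + TIMEOUT
        while proc.poll() is None:
            if time.time() > deadline:
                os.kill(proc.pid, signal.SIGKILL)
                proc.wait()
                sys.stderr.write("timeout_wrapper: killed %r after %d seconds\n"
                                 % (sys.argv[1:], TIMEOUT))
                return 124
            time.sleep(1)
        return proc.returncode

    if __name__ == "__main__":
        sys.exit(main())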
Have you considered using some of the existing tools for automated builds and looking to integrate patchwatcher with them, extending them to suit your purpose?
A number of the features you suggest below are most likely already implemented in existing automated build systems.
I currently use buildbot at work for managing builds, and it looks like it could handle many of your tasks if patchwatcher could be integrated into it.
On Wed, Aug 13, 2008 at 10:42:18AM -0700, Dan Kegel wrote:
> Next steps:
> - add timeout to handle hanging tests (every day or two
> I have to kill some test or other). There's no portable way to do this from wine/test.h that works in Win9x, so I'll probably set WINETEST_WRAPPER to run the tests via a wrapper that implements the timeout.
Buildbot supports configurable timeouts.
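For reference, buildbot's timeout is set per step and fires when the step produces no output for that long; a tiny master.cfg fragment (0.7-era API assumed, command and value purely illustrative):

    from buildbot.steps.shell import ShellCommand

    run_tests = ShellCommand(command=["make", "test"],
                             timeout=1200)  # kill the step after 20 minutes of silence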
> - distribute across multiple machines by splitting into master
> (which watches the patch stream) and slaves (which execute tests). I will probably use http and ftp for this so the slaves can be remote.
Buildbot also has a master/slave architecture, and they are looking to support load balancing in the future.
> - support multiple architectures (anybody want to run
> the slaves on MacOSX for me?)
Buildbot uses Python for communications, so only the individual steps need to be cross-platform: think configure, make depend and make.
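So a wine builder's build factory could be little more than those commands, one step each; a sketch (again 0.7-era API; none of this comes from a real config):

    from buildbot.process import factory
    from buildbot.steps.shell import Configure, Compile, ShellCommand

    # Only the commands themselves are platform-specific; the master/slave
    # protocol is the same everywhere.
    f = factory.BuildFactory()
    f.addStep(Configure(command=["./configure"]))
    f.addStep(Compile(command=["make", "depend"]))
    f.addStep(Compile(command=["make"]))
    f.addStep(ShellCommand(command=["make", "test"], description="testing"))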
> - improve the web page to have error counts and perhaps
> separate links to the build and test logs
That seems more project-specific than buildbot, but who knows, it might be implementable in a generic way to suit buildbot.
> - merge the chroot support (though this will need porting to
> run on MacOSX)
That would have to be added as a custom step.
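For what it's worth, a custom step can be quite small; something along these lines could wrap an ordinary command in a chroot (the schroot invocation and chroot name are only placeholders):

    from buildbot.steps.shell import ShellCommand

    class ChrootCommand(ShellCommand):
        """Run an ordinary shell command inside a prepared chroot."""
        def __init__(self, chroot="wine-build", command=None, **kwargs):
            ShellCommand.__init__(self,
                command=["schroot", "-c", chroot, "--"] + list(command),
                **kwargs)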
> - add valgrind as the next step after running tests
Buildbot supports adding individual steps.
> - improve the web page generator to show current status
> (e.g. show the results from "make test" before starting valgrind)
Buildbot generates separate logs for each step.
The website is: http://buildbot.net/trac
On Thu, Aug 14, 2008 at 2:49 AM, Darragh Bailey <felix@compsoc.nuigalway.ie> wrote:
> I currently use buildbot at work for managing builds, and it looks like it could handle many of your tasks if patchwatcher could be integrated into it.
Good idea. Even Mozilla, home of http://www.mozilla.org/tinderbox.html, uses buildbot for everything but SeaMonkey: https://wiki.mozilla.org/Buildbot
It looks like buildbot has a new feature (try --diff) that does almost what we want. It doesn't handle patch series, so for now we would have to concatenate each patch series into one big patch to make it fit (see the sketch below).
They even have mailwatcher thingies, but they're for watching commit messages rather than potential patches. So we'd have to clone and mutate one of those a bit and hook it into try --diff (well, we could keep using my mailwatcher, but that wouldn't be the buildbot way).
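For the patch-series problem, the brute-force version is straightforward; a sketch only, where the master address, the credentials, the builder name and the Try_Userpass scheduler it presumes on the master side are all made up:

    # Concatenate a patch series into one big diff and hand it to "buildbot try".
    import glob, subprocess

    out = open("series.diff", "w")
    for p in sorted(glob.glob("series/*.patch")):  # hypothetical directory layout
        out.write(open(p).read())
    out.close()

    subprocess.check_call([
        "buildbot", "try", "--connect=pb",
        "--master=buildmaster.example.com:8031",   # placeholder host:port
        "--username=patchwatcher", "--passwd=try-secret",
        "--diff=series.diff", "--patchlevel=1",
        "--builder=wine-linux",                    # placeholder builder name
    ])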
Also, its timeout is a coarse-grained one, and we need a timeout for individual tests as well. That's OK; it's not hard to add to runtests.
On the downside, they don't seem to have anything like the report page I have, so we'd need to add that to buildbot.
What to do, what to do... how about this: are there any python users on the list who would be willing to help adapt buildbot to our needs? I don't think I can handle it alone; I'm pretty busy. Maybe I should focus on getting valgrind hooked into my existing patchwatcher while somebody else looks at buildbot. I'll join the buildbot mailing list and chat with those guys a bit to see where they think the patchwatcher functionality could fit in.
- Dan
On Thursday 14 August 2008 16:38:40 Dan Kegel wrote:
> What to do, what to do... how about this: are there any python users on the list who would be willing to help adapt buildbot to our needs?
I've subscribed to the buildbot list as well, and I'll start looking over the code once I've figured out which VCS to install to get the latest and greatest buildbot code. I wish people out there could just agree on which VCS is best and use git. ;)
If I read the API docs right, the best option seems to be to subclass their mailwatcher to trigger the try --diff. The hardest part will be getting the patch series logic to work, or possibly keeping track of the regression tests. On the plus side, it should be possible to run buildslave instances on win32, so perhaps there's a decent way to build and run the tests natively on win32 right away as well.
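Just to make that concrete, here's a very rough Python 2 sketch of the mail-watching half, done outside buildbot for now; the maildir path, the patch heuristic and the try arguments are all placeholders, and the real thing would subclass the change sources in buildbot.changes.mail rather than shelling out like this:

    import mailbox, os, subprocess, tempfile

    def extract_patch(msg):
        """Return the first MIME part that looks like a unified diff, or None."""
        for part in msg.walk():
            body = part.get_payload(decode=True)
            if body and "\n--- " in body and "\n+++ " in body:  # crude heuristic
                return body
        return None

    def process_maildir(path="/home/patchwatcher/Maildir"):
        # factory=None makes Maildir hand back email.Message objects we can walk().
        # No bookkeeping here, so a real version would have to remember what it
        # has already submitted.
        for msg in mailbox.Maildir(path, factory=None):
            patch = extract_patch(msg)
            if patch is None:
                continue
            fd, name = tempfile.mkstemp(suffix=".diff")
            os.write(fd, patch)
            os.close(fd)
            subprocess.call(["buildbot", "try", "--connect=pb",
                             "--master=buildmaster.example.com:8031",
                             "--username=patchwatcher", "--passwd=try-secret",
                             "--diff=" + name, "--patchlevel=1"])

    if __name__ == "__main__":
        process_maildir()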
Cheers, Kai
> Patchwatcher is online and giving reasonably good feedback on the patch stream. The bug that caused every patch to be marked 'failed tests' is fixed, and the blacklist is expanded enough that false regressions seem to be rare.
You mentioned sporadic test failures in the d3d9:visual and ddraw:visual tests earlier, but I did not have time to look at it back then. Can you send me logs of the failures and successes?
I am afraid that the d3d9 failure is a driver bug; I don't know yet what is up with the ddraw test failure.
On Thu, Aug 14, 2008 at 7:37 AM, Stefan Dösinger <stefan@codeweavers.com> wrote:
> You mentioned sporadic test failures in the d3d9:visual and ddraw:visual tests earlier, but I did not have time to look at it back then. Can you send me logs of the failures and successes?
> I am afraid that the d3d9 failure is a driver bug; I don't know yet what is up with the ddraw test failure.
Sure, I just replied in another thread about that. See also http://bugs.winehq.org/show_bug.cgi?id=10221