Hi,
I've abandoned my chroot approach to improving security in patchwatcher. Instead, I've implemented the ability to run untrusted code as a different user than the one running patchwatcher. This is because creating a chroot where Wine could be compiled and tested proved too difficult and platform-dependent.
I've also added external time limits for running untrusted code. Together, these changes should help prevent individual patches from stalling the patch-watching process.
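As a rough sketch of what such an external limit can look like in shell (the wrapper name run_sandboxed and the 30-minute limit here are illustrative, not the actual patchwatcher code):

    # Abort the untrusted step if it exceeds 30 minutes; timeout(1)
    # from coreutils exits with status 124 when the limit is hit.
    timeout 1800 ./run_sandboxed make test
    if [ $? -eq 124 ]; then
        echo "patch timed out, marking as failed" >> build.log
    fi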
It is very easy to set up. All you need is a low-privileged user (with just enough privileges to run the tests, e.g. membership in the audio and video groups) and an empty folder that you can write to but this user can only read (not your home folder; the user shouldn't have access there anyway).
To use it, start with a clean patchwatcher, adjust the variables in patchwatcher.sh, then run "patchwatcher.sh initialize". It will instruct you to run some commands as root (to setuid the wrapper). Run initialize again and it should build Wine and run the baseline tests. Then you can test it by putting a patch in patches/ and issuing the try_one_patch command. To start watching, use the continuous_build command.
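For concreteness, here is a rough sketch of that setup; the user name "pwtest" and the directory "/srv/patchwatcher" are made up for the example:

    # Create the low-privileged test user with just enough access
    # (audio/video groups) to run the tests.
    sudo useradd -m pwtest
    sudo usermod -aG audio,video pwtest

    # An empty folder you can write to but pwtest can only read.
    sudo mkdir /srv/patchwatcher
    sudo chown "$USER" /srv/patchwatcher
    sudo chmod 755 /srv/patchwatcher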
Patch is attached.
On Mon, 8 Sep 2008, Ambroz Bizjak wrote:
Hi,
I've abandoned my chroot approach to improving security in patchwatcher. Instead, I've implemented the ability to run untrusted code as a different user than the one running patchwatcher. This is because creating a chroot where Wine could be compiled and tested proved too difficult and platform-dependent.
This seems like an almost perfect task for a virtual machine:
- set up your virtual machine to taste
- take a snapshot
- to test a patch, fire up the virtual machine
- have it test the patch
- after the test or when it times out, revert it to the snapshot
- rinse (done in the step above), repeat
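A rough sketch of that loop with a recent VirtualBox CLI (the VM name "pw-slave", the snapshot name "clean", and "run_tests_with_timeout" are invented for the example):

    VBoxManage snapshot pw-slave take clean            # one-time baseline snapshot
    while true; do
        VBoxManage startvm pw-slave --type headless    # fire up the VM
        run_tests_with_timeout                         # test the patch (placeholder)
        VBoxManage controlvm pw-slave poweroff         # stop it (also on timeout)
        VBoxManage snapshot pw-slave restore clean     # revert to the snapshot
    done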
This could be done with VirtualBox, but maybe other alternatives based on Xen or KVM or some such would be better. The main issue I see with this is that the OpenGL / DirectSound tests will not run on the real hardware (as usual), but maybe a Xen-like approach could help there.
It would also make it easy to test on FreeBSD / Solaris, at least if based on something like VirtualBox (not sure about the Xen-like approaches).
On Wed, Sep 10, 2008 at 4:37 AM, Francois Gouget fgouget@free.fr wrote:
This seems like an almost perfect task for a virtual machine: ... The main issue I see with this is that the OpenGL / DirectSound tests will not run on the real hardware (as usual)
I just came off a project (Zumastor) which used a virtual machine in its test loop as you suggest. We ended up using qemu because kvm was too broken at the time. The whole experience left a bad taste in my mouth, so I'm pushing ahead with a more lightweight approach to be able to make progress on the key problem.
If somebody wants to run patchwatcher inside VMs, great, that's definitely a safe way to go, and events might push us in that direction someday.
On Wed, Sep 10, 2008 at 5:02 AM, Dan Kegel dank@kegel.com wrote:
On Wed, Sep 10, 2008 at 4:37 AM, Francois Gouget fgouget@free.fr wrote:
This seems like an almost perfect task for a virtual machine:
Incidentally, I documented how to produce a really small vmware image for Ubuntu at http://kegel.com/linux/jeos-vmware-player/ (I used this as a demo platform for Zumastor, and wanted the demo to be as easy to download as possible.)
Ambroz wrote:
I think I'll try getting a small Gentoo system to run in UML with a read-only root fs and make it boot as fast as possible. To try a patch, I would give it read access to the master Wine tree on the host; it would copy the tree to a writable temp folder and try it out. After it's finished, or if the external timeout elapses, the UML process will be terminated and all of its writable storage will be reverted.
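A sketch of how the revert could work with UML's copy-on-write block devices (file names invented for the example):

    # Boot UML with a throwaway COW overlay on top of the shared,
    # read-only root image; all writes land in cow.img.
    ./linux ubd0=cow.img,rootfs.img mem=256M

    # After the run (or after the external timeout kills the UML
    # process), discard the overlay to revert all writes.
    rm -f cow.img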
Right. That's how the refactored patchwatcher is designed. There's a shared directory containing one subdirectory for each build slave. Each slave is expected to somehow get a read/write mount to its own subdirectory of the shared directory. The master watches the mailing list and puts incoming patches into an inbox directory. Each patch series is called a job, and gets its own subdirectory of inbox. The master dispatches a job to a build slave by moving the job directory into one of the build slave's directories.
The build slaves watch for jobs to appear in their directory. When one appears, they do all the builds it implies, then create a log file. The master notices the log file and moves that job out of the slave's subdirectory.
So the slave can be on another real machine, in another virtual machine, or running as another user; anything works as long as it can get read/write access to its subdirectory of the shared directory. - Dan
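A minimal sketch of a slave's side of that protocol in shell (the directory layout variables and the "log.txt" completion marker are assumptions based on the description above):

    while true; do
        for job in "$SHARED_DIR/$SLAVE_NAME"/*; do
            [ -d "$job" ] || continue                  # jobs are directories
            [ -f "$job/log.txt" ] && continue          # already built, awaiting pickup
            build_job "$job" > "$job/log.txt" 2>&1     # the log file signals completion
        done
        sleep 30
    done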
Ambroz wrote:
The problem with your design right now is that you want to run the slave in some isolated environment and expect it to be secure. The build slave itself is a mission-critical process, and putting it in quarantine together with untrusted code allows malicious patches to interfere with its operation. This means an attacker can simply kill it from inside his patch (causing the whole patch-building operation to fail), corrupt the baseline tree, or send hundreds of fake emails through the slave interface.
It can't directly send fake emails, since the build slave doesn't have the email password, but it could certainly disrupt the build slave and make it give bogus and malicious results. The design isn't for security, it's for ease of prototyping and plugging in new build slaves.
So I plan to run the build slave itself in a trusted environment, but have it quarantine individual build operations (similar to my previous design with user switching). This way the impact of an attack is highly limited - all it can theoretically do is fake its own patch's results.
Yes, good, just please don't change the interface to the build master; all your changes should be encapsulated in a custom build slave. - Dan
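A minimal sketch of that per-operation quarantine (the "pwtest" user and the worktree path are illustrative, and a sudoers entry permitting this is assumed):

    # Run one untrusted build step as the low-privileged user,
    # under an external time limit; the slave itself stays trusted.
    sudo -u pwtest -H timeout 3600 make -C /srv/patchwatcher/worktree test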
Francois Gouget wrote:
On Mon, 8 Sep 2008, Ambroz Bizjak wrote:
Hi,
I've abandoned my chroot approach to improving security in patchwatcher. Instead, I've implemented the ability to run untrusted code as a different user than the one running patchwatcher. This is because creating a chroot where Wine could be compiled and tested proved too difficult and platform-dependent.
This seems like an almost perfect task for a virtual machine:
- set up your virtual machine to taste
- take a snapshot
- to test a patch, fire up the virtual machine
- have it test the patch
- after the test or when it times out, revert it to the snapshot
- rinse (done in the step above), repeat
This could be done with VirtualBox, but maybe other alternatives based on Xen or KVM or some such would be better. The main issue I see with this is that the OpenGL / DirectSound tests will not run on the real hardware (as usual), but maybe a Xen-like approach could help there.
It would also make it easy to test on FreeBSD / Solaris, at least if based on something like VirtualBox (not sure about the Xen-like approaches).
Yep. Virtualization has 3D shortcomings.
I can see how to use the pbuilder/pdebuild toolchain on a dedicated user account in Debian to automate this in a pretty safe and easy way.
pbuilder uses fakeroot/chroot for this, and its use is a no-brainer: very easy and effective.
But this is limited to Debian systems only. The upside is that we still have access to the 3D hardware (although not concurrently/in parallel).
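For reference, the basic pbuilder flow looks roughly like this (Debian-specific; the distribution and invocation details vary):

    sudo pbuilder create --distribution sid   # one-time: build the base chroot
    sudo pbuilder update                      # refresh it before a run
    pdebuild                                  # build the (patched) source inside the chroot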
Does anybody have experience with User-mode Linux kernels for that?
~*~
Another environment is OpenSolaris. There we can leverage the technologies of zones and ZFS for cheap pseudo-virtualization and fast filesystem recovery using FS snapshots.
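For example, the snapshot/rollback cycle is just (dataset name invented for the example):

    zfs snapshot tank/pwzone@baseline      # before running untrusted code
    # ... build and test the patch inside the zone ...
    zfs rollback -r tank/pwzone@baseline   # instant FS recovery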
~*~
IMO there is no silver bullet that solves all problems on all OSes. We can build OS-specific toolchains around patchwatcher, and I think that's the more viable alternative.
Cheers, Hark
On Wed, Sep 10, 2008 at 5:06 AM, Vit Hrachovy vit.hrachovy@sandbox.cz wrote:
I can see how to use the pbuilder/pdebuild toolchain on a dedicated user account in Debian to automate this in a pretty safe and easy way.
pbuilder uses fakeroot/chroot for this, and its use is a no-brainer: very easy and effective.
But this is limited to Debian systems only. The upside is that we still have access to the 3D hardware (although not concurrently/in parallel).
Yes. We used pbuilder in the automated test for zumastor, and were tied to Debian as a result. We obviously need to avoid requiring that for patchwatcher, which has to run on non-Debian systems. (BTW, we had some difficulty with unreliable distribution mirrors; the only way to get pbuilder to be reliable was to point to a local archive of all the packages.)
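Something along these lines, pointing pbuilder at the local archive (the mirror URL is illustrative):

    sudo pbuilder create --distribution sid --mirror http://localhost/debian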
Does anybody have experience with User-mode Linux kernels for that?
That's getting even further away from the hardware...
IMO there is no silver bullet that solves all problems on all OSes. We can build OS-specific toolchains around patchwatcher, and I think that's the more viable alternative.
Indeed. After I finish refactoring patchwatcher, the build slaves will be pretty simple, and it'll be easy to put together custom build slaves for various environments. In particular, a pbuilder-based build slave for Debian / Ubuntu seems like a good idea (as long as you use a local package archive to avoid the flakiness I mentioned above). - Dan