I have upgraded Debian on vm1 from 7 to 8.6 and the filesystem from ext3 to ext4. This gets it a non-antique version of the kernel and QEMU, which includes Huw's fix for the ntdll:exception test. So there should be one fewer unfixable failure for the VMs being run on vm1.
That said I'm not really happy with the state of things on vm1: for some reason its reverts are much slower than they should be. For instance with the wvista VM, doing 3 reverts in a row on otherwise idle machines gives me:
vm1  56.361s  46.803s  46.569s
vm2  10.206s   9.783s   9.912s
vm3   9.583s   8.137s   8.004s
One could think that vm2 has much better hardware but that's just not the case:

                        PassMark      Disk      Disk
Host  CPU               MT      ST    Type      Speed
vm1   Xeon E3-1230v2    8855    1940  HD        130 MB/s
vm2   Opteron 6128      6959    n/a   HD Raid0  144 MB/s
vm3   Xeon E3-1226v3    7462    2127  SSD       354 MB/s
All hosts have 16 GB of RAM. So vm2 has what looks like a somewhat slower processor and a marginally faster disk thanks to the Raid0. That should not be enough to make it 4 times faster, particularly given that vm3, whose disk is over twice as fast, is only marginally faster than vm2.
Indeed during the revert vm1 shows no read traffic (lots of RAM to cache that), but a steady 6 MB/s stream of writes! In contrast on vm2 writes quickly ramp up to 80 MB/s, then stop after ~5 seconds and QEMU just uses CPU for the last ~4 seconds.
vm3 also shows it's not an AMD vs Intel thing. There may be some missing CPU feature on vm1 but if that's the case I don't see which one.
So there is still a mystery there, and if anyone has an insight into it, that would be appreciated.
Despite that I have put vm1 back on active duty. But I decided to keep the wvistau64 VM on the vm2 host. This means vm1 essentially only has one VM to take care of, and since that VM is going to be idle most of the time anyway it should not slow jobs down.
I also applied all Windows updates up to 2016/07/25 to the w7u VM. This caused some new failures, which Hans has started tackling and in some cases already fixed. Any other new failure on the 3 VMs affected by the upgrade (wvista, wvistau64 and w7u) should be fixable.
The configuration of the other two VMs, wvista and wvistau64, should be essentially unchanged.
Finally I also performed a minor upgrade on vm3: from Debian 8.2 to 8.6. The goal was to see if the minor QEMU upgrade would be sufficient to get Windows 10 to stop crashing when running WineTest. It looks like that's not the case. So the next step will be to grab QEMU 2.7 from the Debian backports, but it has a dumb command line incompatibility with libvirt 1.2.9 and there's no easily installable alternative :-( Fortunately I have a qemu wrapper script that should bridge the gap.
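In its simplest form such a wrapper looks like this (sketch only: the option that actually needs rewriting is whatever libvirt 1.2.9 emits that QEMU 2.7 rejects, so the case pattern below is just a placeholder):

#!/bin/sh
# Pointed to by the <emulator> element of the domain XML instead of the real
# QEMU binary: rewrite the arguments libvirt generates into a form the newer
# QEMU accepts, then exec the real binary.
REAL_QEMU=/usr/bin/qemu-system-x86_64

for arg in "$@"; do
    case "$arg" in
        # Placeholder: substitute whichever option value QEMU 2.7 rejects
        old-option-value) arg="new-option-value" ;;
    esac
    set -- "$@" "$arg"
    shift
done
exec "$REAL_QEMU" "$@"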
Francois Gouget wrote:
Indeed during the revert vm1 shows no read traffic (lots of RAM to cache that), but a steady 6 MB/s stream of writes! In contrast on vm2 writes quickly ramp up to 80 MB/s, then stop after ~5 seconds and QEMU just uses CPU for the last ~4 seconds.
Maybe vm2 mounts its file systems with noatime? (see e.g. http://www.tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/cha... ). For vm3 that wouldn't make such a big difference because of its SSD.
Jonas
On Sun, 4 Dec 2016, Jonas Maebe wrote:
Francois Gouget wrote:
Indeed during the revert vm1 shows no read traffic (lots of RAM to cache that), but a steady 6 MB/s stream of writes! In contrast on vm2 writes quickly ramp up to 80 MB/s, then stop after ~5 seconds and QEMU just uses CPU for the last ~4 seconds.
Maybe vm2 mounts its file systems with noatime?
The VM hosts all use relatime which provides essentially all the benefits of noatime.
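(Easy to double-check by the way, e.g. with findmnt from util-linux:)

# Mount options of the filesystem holding the disk images
$ findmnt -n -o OPTIONS --target /var/lib/libvirt/images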
But in this case I expect the writes all go to the qcow2 disk image and I know vm1 is capable of sustaining more than 6 MB/s writes (e.g. when copying >100 GB around).
On 05/12/16 01:33, Francois Gouget wrote:
On Sun, 4 Dec 2016, Jonas Maebe wrote:
Francois Gouget wrote:
Indeed during the revert vm1 shows no read traffic (lots of RAM to cache that), but a steady 6 MB/s stream of writes! In contrast on vm2 writes quickly ramp up to 80 MB/s, then stop after ~5 seconds and QEMU just uses CPU for the last ~4 seconds.
Maybe vm2 mounts its file systems with noatime?
The VM hosts all use relatime which provides essentially all the benefits of noatime.
I actually meant the file system in the VM, but I guess these are Windows rather than Linux VMs? Additionally, what is the "revert" operation exactly? Is it like an "svn revert"/"git reset --hard HEAD" in the VM, or some qemu operation, or something else?
But in this case I expect the writes all go to the qcow2 disk image and I know vm1 is capable of sustaining more than 6 MB/s writes (e.g. when copying >100 GB around).
One thing you could look at is the output of iostat on the host while the operations are going on, in particular the transactions-per-second, to check whether the issue is that one is using a lot of small writes (for whatever reason) while the other uses fewer, larger writes.
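For example (with a recent enough sysstat the extended statistics directly include the average request size):

# avgrq-sz is the average request size (in 512-byte sectors),
# avgqu-sz the average queue length
$ iostat -d -x /dev/sda 1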
Jonas
On 5 December 2016 at 20:00, Jonas Maebe jonas-devlists@watlock.be wrote:
rather than Linux VMs? Additionally, what is the "revert" operation exactly? Is it like an "svn revert"/"git reset --hard HEAD" in the VM, or some qemu operation, or something else?
Actually, yes, could you give some more details about how things are set up exactly? I assume "revert" means a qemu loadvm here, but perhaps it doesn't. How is qemu being invoked? Are these the exact same images being loaded on vm1 and vm2? Or were these the same originally and have since been independently modified? I know we're using libvirt in some form, but this can be reproduced with just qemu as well, right? What do qemu-img info and qemu-img check think about the images?
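I.e., something along these lines (adjust the path to wherever the images actually live):

# Reports format, virtual/on-disk size, cluster size and internal snapshots
$ qemu-img info /var/lib/libvirt/images/wtbwvista.qcow2
# Read-only consistency check of the qcow2 metadata (no repair without -r)
$ qemu-img check /var/lib/libvirt/images/wtbwvista.qcow2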
On Mon, 5 Dec 2016, Jonas Maebe wrote:
On 05/12/16 01:33, Francois Gouget wrote:
On Sun, 4 Dec 2016, Jonas Maebe wrote:
Francois Gouget wrote:
Indeed during the revert vm1 shows no read traffic (lots of RAM to cache that), but a steady 6 MB/s stream of writes! In contrast on vm2 writes quickly ramp up to 80 MB/s, then stop after ~5 seconds and QEMU just uses CPU for the last ~4 seconds.
Maybe vm2 mounts its file systems with noatime?
The VM hosts all use relatime which provides essentially all the benefits of noatime.
I actually meant the file system in the VM, but I guess these are Windows rather than Linux VMs?
Yes, Windows VMs, Vista for the one I mentioned before.
Additionally, what is the "revert" operation exactly? Is it like an "svn revert"/"git reset --hard HEAD" in the VM, or some qemu operation, or something else?
virsh --connect qemu:///system snapshot-revert wtbwvista up2014-wtb
This reverts the wtbwvista VM to the up2014-wtb snapshot, which is a live snapshot.
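(You can see what kind of snapshot that is with:)

# A live snapshot is listed with State 'running'
$ virsh --connect qemu:///system snapshot-list wtbwvista
$ virsh --connect qemu:///system snapshot-info wtbwvista up2014-wtb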
But in this case I expect the writes all go to the qcow2 disk image and I know vm1 is capable of sustaining more than 6 MB/s writes (e.g. when copying >100 GB around).
One thing you could look at is the output of iostat on the host while the operations are going on, in particular the transactions-per-second, to check whether the issue is that one is using a lot of small writes (for whatever reason) while the other uses fewer, larger writes.
Well, the average write size seems to be the same, 32 KB, but the number of transactions sure is different.
vm1

$ iostat -d -h /dev/sda 1
Device:  tps     kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
sda      2.00    0.00       16.00      0        16
         3.00    0.00       52.00      0        52
         83.00   0.00       3268.00    0        3268
         208.00  0.00       6656.00    0        6656
         204.00  0.00       6528.00    0        6528
         193.00  0.00       6208.00    0        6208
         198.00  0.00       6344.00    0        6344
         208.00  0.00       6656.00    0        6656
         200.00  0.00       6400.00    0        6400
         192.00  0.00       6144.00    0        6144
         210.00  0.00       6720.00    0        6720
         201.00  0.00       6416.00    0        6416
         199.00  0.00       6028.00    0        6028
         203.00  0.00       6528.00    0        6528
         200.00  0.00       6400.00    0        6400
         194.00  0.00       6208.00    0        6208
         201.00  0.00       6444.00    0        6444
         207.00  0.00       6592.00    0        6592
         202.00  0.00       6464.00    0        6464
         209.00  0.00       6656.00    0        6656
         208.00  0.00       6656.00    0        6656
         194.00  0.00       6232.00    0        6232
         202.00  0.00       6404.00    0        6404
         191.00  0.00       6988.00    0        6988
         203.00  0.00       6528.00    0        6528
         200.00  0.00       6400.00    0        6400
         205.00  0.00       6588.00    0        6588
         211.00  0.00       6720.00    0        6720
         202.00  0.00       6464.00    0        6464
         196.00  0.00       6272.00    0        6272
         203.00  0.00       6464.00    0        6464
         197.00  0.00       6284.00    0        6284
         190.00  0.00       5860.00    0        5860
         200.00  0.00       6400.00    0        6400
         204.00  0.00       6528.00    0        6528
         204.00  0.00       6528.00    0        6528
         196.00  0.00       6232.00    0        6232
         194.00  0.00       6212.00    0        6212
         200.00  0.00       6400.00    0        6400
         202.00  0.00       6464.00    0        6464
         196.00  0.00       6272.00    0        6272
         192.00  0.00       6144.00    0        6144
         192.00  0.00       6092.00    0        6092
         190.00  0.00       6080.00    0        6080
         192.00  0.00       6144.00    0        6144
         201.00  0.00       6400.00    0        6400
         191.00  0.00       6144.00    0        6144
         182.00  4.00       5284.00    4        5284
         3.00    0.00       0.00       0        0
         6.00    0.00       72.00      0        72
$ filefrag /var/lib/libvirt/images/wtbwvista.qcow2
/var/lib/libvirt/images/wtbwvista.qcow2: 284 extents found
$ ls -lh /var/lib/libvirt/images/wtbwvista.qcow2
-rw-r--r-- 1 libvirt-qemu libvirt-qemu 31G Dec  7 01:48 /var/lib/libvirt/images/wtbwvista.qcow2
vm2

$ iostat -d -h /dev/sda 1
Device:  tps      kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
sda      2514.00  0.00       81220.00   0        81220
         2856.00  64.00      91048.00   64       91048
         2368.00  88.00      75592.00   88       75592
         1563.00  384.00     50128.00   384      50128
         53.00    60.00      1844.00    60       1844
         0.00     0.00       0.00       0        0
         424.00   616.00     12008.00   616      12008
         392.00   2652.00    11364.00   2652     11364
         495.00   972.00     15872.00   972      15872
         425.00   360.00     14016.00   360      14016
$ filefrag /var/lib/libvirt/images/wtbwvista.qcow2
/var/lib/libvirt/images/wtbwvista.qcow2: 79 extents found
$ ls -lh /var/lib/libvirt/images/wtbwvista.qcow2
-rw-r--r-- 1 root root 31G Dec  7 01:43 /var/lib/libvirt/images/wtbwvista.qcow2
On a spinning disk 2500+ IO/s only makes sense if the writes are contiguous, whereas 200 IO/s is what one would expect for random IO. But even on vm1 the disk image file is not that fragmented. And given that it was restored from the same backup on both machines a couple of days apart I see no reason for one to cause random IO and not the other.
On 07/12/16 09:42, Francois Gouget wrote:
On a spinning disk 2500+ IO/s only makes sense if the writes are contiguous, whereas 200 IO/s is what one would expect for random IO. But even on vm1 the disk image file is not that fragmented. And given that it was restored from the same backup on both machines a couple of days apart I see no reason for one to cause random IO and not the other.
I don't know how the snapshot-revert operation works internally, but with the slow VM you have a total of 289,844 KB written, while with the fast one it is 353,092 KB.
Perhaps with the slow one, it "optimizes" the operation by not writing a bunch of unchanged blocks, which in turn requires more seeking. You could maybe try strace'ing both to look at the operations at the file system level rather than at the block device one.
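E.g. something like this, attached to the VM's qemu process while the revert runs (the pgrep pattern is just a guess at how to find that process):

$ QEMU_PID=$(pgrep -f wtbwvista | head -n 1)
# Per-syscall counts and cumulative times; stop with Ctrl-C once the revert is done
$ strace -f -c -p "$QEMU_PID"
# Or log every file-descriptor related syscall with timestamps and durations
$ strace -f -tt -T -e trace=desc -p "$QEMU_PID"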
Jonas
On 2016-12-04 05:32, Francois Gouget wrote:
That said I'm not really happy with the state of things on vm1: for some reason its reverts are much slower than they should be. ... So there is still a mystery there and if anyone has an insight into it it would be appreciated.
Maybe you could simply replace vm1 with a copy of vm2 and then restore only the few relevant pieces (network, hostname, VM snapshot). This should rule out most configuration issues.
In general, if those machines are only used for TestBot, you could build a common root partition for all the machines. That would be easier to keep up-to-date and to copy to new machines, if necessary. You could then have another partition where you have the per-machine configuration, VM snapshots etc.
I upgraded and synchronized the configuration of all four VM hosts. So now they either all work great or are all broken ;-)
They are now all on Debian 8.6 with three pieces from backports:
* The Linux kernel is now 4.8.11-1~bpo8+1.
* The new kernel required linux-base to be upgraded to 4.3~bpo8+1.
* QEMU is now 1:2.7+dfsg-3~bpo8+2.
* libvirt is still 1.2.9-9+deb8u3 as there is nothing newer for Debian 8. This required using a small script to get it to play nice with QEMU 2.7 (see attached file).
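For reference the backports pieces were pulled in the usual way, roughly (package names approximate):

# Assumes 'deb http://ftp.debian.org/debian jessie-backports main' is in the
# apt sources
$ apt-get update
$ apt-get -t jessie-backports install linux-image-amd64 linux-base qemu-kvm qemu-utils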
I synchronized their configuration by diffing their /etc directories and editing the files to remove the differences (Cluster SSH is really nice for that). They should now be essentially identical (there are obvious differences in hostnames, ssh server keys, etc.).
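In case someone wants to do the same, something along these lines is enough for the diffing part (host name illustrative):

# Grab a copy of the other host's /etc and diff it against the local one
$ mkdir -p /tmp/vm2
$ ssh root@vm2 tar -C / -cf - etc | tar -xf - -C /tmp/vm2
$ diff -ru /etc /tmp/vm2/etc | less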
The upgrade and configuration syncing did not solve the performance issue on vm1 so that mystery is still intact.
The build VM went offline a few times. Fortunately it turns out it was my fault. What happened is that on December 16 the TestBot failed to recreate the wtb live snapshot. Given that the VM was in an unknown state I reverted it to the wtbbase8 powered-off snapshot and recreated the live snapshot from that. However wtbbase8 still had the old 3.16 kernel that was causing the build VM to regularly go offline. So this time I went back to wtbbase8, upgraded the kernel again, took a new wtbbase8.1 powered-off snapshot for the next time I need one, and then recreated the wtb live snapshot.
Now the question is why the TestBot failed to recreate the wtb snapshot in the first place. The only theory I have right now is that the network glitched somewhere between deleting the old snapshot and creating the new one. It's the first time this happened so hopefully it won't happen again any time soon. Still, at some point it would be nice to change the procedure to one that can simply be re-run if it fails. That means reverting to a different snapshot than the one we delete and recreate.
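A minimal sketch of what I mean (the 'build' domain name and the exact steps are illustrative):

# Revert to a base snapshot that never gets deleted, so this step always works
$ virsh --connect qemu:///system snapshot-revert build wtbbase8.1
$ virsh --connect qemu:///system start build
# ... update the VM as needed ...
# Delete the old live snapshot if it is still around, then recreate it
$ virsh --connect qemu:///system snapshot-delete build wtb || true
$ virsh --connect qemu:///system snapshot-create-as build wtb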
Also as you can see from the WineTest results page the 64 bit Windows 8 and Windows 10 VMs no longer crash while running WineTest.
* On w1064 the Windows crash and reboot was caused by ntdll:exception. The workaround is to tell KVM to ignore accesses to unsupported MSRs. See the links in TestBot bug 40240 for more details. Unfortunately this needs to be set again after every boot and I forgot to do so when I did the host upgrades, so there's a few days' gap. But I have now added an init script that should take care of that automatically on boot (see the snippet after the next item). http://bugs.winehq.com/show_bug.cgi?id=40240
* On w864 the Windows freeze was caused by rasapi32:rasapi. The workaround is to configure access to the VM through Spice rather than VNC. Somehow this makes a difference for a bunch of tests even though no client connects to the VM while it's running the tests. See the TestBot bug 42185. https://bugs.winehq.org/show_bug.cgi?id=42185
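Regarding the MSR workaround above, the essential part is just setting a kvm module parameter, something like this (a sketch; a persistent modprobe.d entry would work just as well):

#!/bin/sh
# Tell KVM to ignore guest accesses to MSRs it does not implement instead of
# injecting a general protection fault into the guest. This has to be redone
# after every boot unless it is made persistent with
# 'options kvm ignore_msrs=1' in /etc/modprobe.d/kvm.conf.
echo 1 > /sys/module/kvm/parameters/ignore_msrs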
I also updated the TestBot test suite and put it up on GitHub. The wtbsuite, as I call it, is a set of patches that apply on top of Wine and which can be submitted in bulk to the TestBot to verify that it works as expected. The patches strive to exercise all the situations the TestBot can run into, like patches that don't apply, build failures, timeouts, tests that crash, patch sets, etc. When appropriate the patches contain a reference to the relevant TestBot bug. You can find the test suite here: https://github.com/fgouget/wine/tree/wtbsuite
This week I tweaked the configuration of some VMs. Specifically:
* All Windows 7+ VMs now use Spice for 'display'; it's really remote access and goes further than display-only remoting like VNC: it also forwards the audio to the client and more.
But none of this should really matter since nobody connects to the VMs while they are running the tests. Still, switching the remoting method from VNC to Spice fixes a bunch of audio tests!
If I remember correctly we only had two holdouts: w7u and w8. So we'll see if their test results improve (this Friday's results may not reflect the new state 100% yet).
* w7pro64 and w864 seem to be doing reasonably well sound-wise: w864 does have one failure in the 64 bit mmdevapi:capture & render but not in the 32 bit ones. The w1064 results are not so rosy but it's Windows 10 so it's still kind of expected. So maybe removing the soundcard from 64 bit VMs won't be necessary? I'll let Hans and Andrew verify.
* All Windows 7+ VMs also got a new beta TestAgentd which is no longer run from an iconized cmd window. Instead it simply detaches from the console and thus runs without a window. This is meant to help with some Direct3D tests and I hope Stefan can remind me of the details.
* I have also created a standalone testcase for the Windows 10 64 bit crash and reported it to QEMU in the hope they can do something about it: https://bugs.winehq.org/show_bug.cgi?id=40240 https://bugs.launchpad.net/qemu/+bug/1658141
* I also created a standalone testcase for the Spice audio issue mentioned above. I added it to the bug I created 15 months ago in the hope it can help someone reproduce and fix the bug. But given that there's no indication anyone even looked at this bug that's probably a foolish hope. https://bugs.winehq.org/show_bug.cgi?id=39442 https://bugs.launchpad.net/qemu/+bug/1499908
* But I think bugs sometimes get fixed: I tried to reproduce the rasapi32:rasapi Windows 8 64 bit bug and couldn't. That probably means it got fixed by last week's QEMU and Linux kernel upgrades. Yay! https://bugs.winehq.org/show_bug.cgi?id=42185
* Tonight the build VM failed to rebuild Wine: https://testbot.winehq.org/JobDetails.pl?Key=27894
It turns out that some build processes fell victim to the OOM killer! I guess the issue is that 512 MB is not always enough for parallel builds, particularly when one process is building an ~80 MB winetest.exe (though when investigating this I also got the issue with makedep once!).
So I splurged and doubled the RAM allocated to the VM. The balloon driver will hopefully prevent that from having too much of a performance impact on snapshot reverts.
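Concretely that amounts to something like this, going from 512 MB to 1 GB (the 'build' domain name is illustrative):

# Sizes are in KiB; --config changes the persistent definition and takes
# effect the next time the domain is started
$ virsh --connect qemu:///system setmaxmem build 1048576 --config
$ virsh --connect qemu:///system setmem build 1048576 --config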