[libvirt-users] virtual drive performance

Hi,

I'm investigating a performance issue on a virtualized Windows Server host that is run on an Ubuntu machine via libvirt/qemu. While the host can easily read/write on the RAID drive with 100 MB/s, as observable with dd, the virtualized Windows Server running on that host is barely able to read/write with at most 8 MB/s and averages around 1.4 MB/s. This has grown to the extent that the virtualized host is often unresponsive and even unable to start up its services within the system default timeouts. Any help to improve the situation is greatly appreciated.

This is the configuration of the virtualized host:

~$ virsh dumpxml windows-server-2016-x64
<domain type='kvm' id='1'>
  <name>windows-server-2016-x64</name>
  <uuid>XXX</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-xenial'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>IvyBridge</model>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/data/virtuals/machines/windows-server-2016-x64/image.qcow2'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/data/virtuals/machines/windows-server-2016-x64/dvd.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:0e:f2:23'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <video>
      <model type='vga' vram='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='apparmor' relabel='yes'>
    <label>libvirt-XXX</label>
    <imagelabel>libvirt-XXX</imagelabel>
  </seclabel>
</domain>

Cheers,
Dominik

Hi Dominik,

Sure, I believe you can improve it by using:

<cpu mode='host-passthrough'/>

and by adding io='native' to the disk driver element:

<driver name='qemu' type='qcow2' cache='none' io='native'/>

After that, please try again. I can see another thing, for example: change the hda=IDE disk to virtio. A sketch of the resulting disk stanza follows below.

Cheers!
Thiago
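A minimal sketch of what the disk stanza from the posted XML could look like once switched to virtio (assuming the VirtIO storage driver is installed in the guest; 'vda' is the conventional target name for virtio disks, and libvirt regenerates the alias and address elements when the domain is defined):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/data/virtuals/machines/windows-server-2016-x64/image.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>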
2017-06-14 5:26 GMT-03:00 Dominik Psenner <dpsenner@gmail.com>:
[...]

On Wed, 2017-06-14 at 15:32 -0300, Thiago Oliveira wrote: [...]
I can see another thing, for example: change the hda=IDE disk to virtio.
I'd say switching the disk from IDE to virtio should be the very first step - and while you're at it, you might as well use virtio for the network interface too.

--
Andrea Bolognani / Red Hat / Virtualization
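A sketch of the corresponding interface element with the virtio model (MAC address and bridge taken from the posted XML; the guest needs the VirtIO network driver installed):

<interface type='bridge'>
  <mac address='52:54:00:0e:f2:23'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>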

Hi,

Thank you for your input. We already tried several tweaks, but without luck. For example, adding io='native' did not help improve the performance; it behaved exactly the same way before and after. I've read somewhere that cache='writethrough' could also help improve the performance, but we cannot do that because we take live snapshots to back up the machine while it runs. When a cache is enabled, we observed that sometimes an external live snapshot cannot be merged with blockcommit without the host being shut down.

Would you please explain what <cpu mode='host-passthrough'/> should do to improve the performance?

Switching from IDE to virtio basically means that the virtualized host then knows that it runs on virtualized hardware and can do things differently? But it also requires modifying the virtualized host with specialized drivers that even influence the boot process. That feels more like a hack than a solution.

We're astonished that the virtualized IO is so much slower. I could understand a performance penalty of 10% or even 20%, but a drop from 120 MB/s IO read to 1.4 MB/s IO read is suspicious to every one of us. We'd have expected at least a throughput of 50 MB/s while reading from disk, which is more than half the IO that the hardware can do. Please note that we do not observe the hosting machine peaking at 100% CPU or IO (using top and iotop) when the virtualized host does some IO. Is there lock contention or something else going on?

When running a virtualized host for example with VirtualBox we don't see such an impact. What does VirtualBox do differently to improve virtualized IO, and could that help libvirt/qemu/kvm?

On 2017-06-15 04:08, Andrea Bolognani wrote:
On Wed, 2017-06-14 at 15:32 -0300, Thiago Oliveira wrote: [...]
I can see another thing, for example: change the hda=IDE disk to virtio. I'd say switching the disk from IDE to virtio should be the very first step - and while you're at it, you might as well use virtio for the network interface too.
-- Andrea Bolognani / Red Hat / Virtualization

I'm in no way a performance expert, so I can't comment on most of the points you raise; hopefully someone with more experience in the area will be able to help you. That said...

On Mon, 2017-06-19 at 12:38 +0200, Dominik Psenner wrote:
Switching from IDE to virtio basically means that the virtualized host then knows that it runs on virtualized hardware and can do things differently? But it also requires modifying the virtualized host with specialized drivers that even influence the boot process. That feels more like a hack than a solution.
... I don't see why this would be a problem: installing VirtIO drivers in a guest is not unlike installing drivers that are tailored to the specific GPU in your laptop rather than relying on the generic VGA drivers shipped with the OS. Different hardware, different drivers. Moreover, recent Windows versions ship Enlightened I/O drivers, which AFAIK do for guests running on Hyper-V pretty much what VirtIO drivers do for those running on QEMU/KVM.

--
Andrea Bolognani / Red Hat / Virtualization

On Mon, Jun 19, 2017 at 12:38 PM, Dominik Psenner <dpsenner@gmail.com> wrote:
When running a virtualized host for example with VirtualBox we don't see such an impact. What does VirtualBox do differently to improve virtualized IO, and could that help libvirt/qemu/kvm?
Hello, I would jump in here to try to add some information.

Even in VirtualBox, the virtual storage chapter https://www.virtualbox.org/manual/ch05.html says:

"In general, you should avoid IDE unless it is the only controller supported by your guest. Whether you use SATA, SCSI or SAS does not make any real difference. The variety of controllers is only supplied for VirtualBox for compatibility with existing hardware and other hypervisors."

In fact, I just tried with a not-so-new version of VirtualBox (4.3.6, I don't use it very often): if I create a VM with Windows 2012 as OS, by default it puts the OS disk on top of a virtualized SATA controller, which should be more efficient. Or are you saying that you explicitly configured VirtualBox and selected IDE as the controller type for the guest?

Indeed, I verified that a new version of virt-manager, when you configure a Windows 2012 R2 qemu/kvm VM, does set IDE as the controller, hence the performance problems you see. Probably you chose the proposed default? In the past I also tried IDE on vSphere and it had the same performance problems, because it is fully virtualized and unoptimized. You should set SCSI as the controller type, in my opinion, if you have a recent version of libvirt/qemu/kvm.

That said, I don't know what the level of support for W2016 is at the moment with the virtio and virtio-scsi drivers. You can download ISO and virtual floppy images here: https://fedoraproject.org/wiki/Windows_Virtio_Drivers

The method could be to add a new disk with the desired controller (virtio or virtio-scsi) to the guest, then configure it using the ISO or VFD images. Then shut down the guest, set the boot disk to the same virtio or virtio-scsi controller too, and try to boot again. Having installed the drivers, it should reconfigure automatically (not tried with W2012 and W2016). If all goes well, shut down the guest again and remove the second disk. See the sketch below for what the temporary second disk could look like.

Alternatively, you can install a new guest, changing the controller and providing the installation process with the VFD image file. Try it on a test system to see if it works and whether it gives you the desired performance.

HIH a little,
Gianluca
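A minimal sketch of such a temporary second disk on a virtio-scsi controller (the image path is only a placeholder; model='virtio-scsi' requires the virtio-scsi driver in the guest, and libvirt assigns the PCI address when the domain is defined):

<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/driver-probe.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>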

On Tue, Jun 20, 2017 at 10:29 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
That said, I don't know what is the level of support for W2016 at time with virtio and virtio-scsi drivers. You can download iso and virtual floppy images here: https://fedoraproject.org/wiki/Windows_Virtio_Drivers
This message just posted to the ovirt-users mailing list suggests that for the drivers you can use this ISO, which seems to support W2016 (not tested myself yet): http://lists.ovirt.org/pipermail/users/2017-June/082717.html

Gianluca

Installing the virtio drivers is probably the best option, but it is going to remain our last resort because it has further implications, like a larger maintenance window. Thanks for pointing us towards the W2016 virtio drivers.

Your last email was a little unclear to me. Would you expect a performance boost from changing bus='ide' to bus='scsi'? For instance, changing this:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/data/virtuals/machines/windows-server-2016-x64/image.qcow2'/>
  <backingStore/>
  <target dev='hda' bus='ide'/>
  <alias name='ide0-0-0'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

to the following:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/data/virtuals/machines/windows-server-2016-x64/image.qcow2'/>
  <backingStore/>
  <target dev='hda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

Do you see any gotchas in this configuration that could prevent the virtualized guest from powering on and booting up?

On 2017-06-20 15:12, Gianluca Cecchi wrote:
On Tue, Jun 20, 2017 at 10:29 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
[...]

On Tue, Jun 20, 2017 at 3:38 PM, Dominik Psenner <dpsenner@gmail.com> wrote:
[...]
Do you see any gotchas in this configuration that could prevent the virtualized guest from powering on and booting up?
When I configure it like this, from a Linux guest's point of view I get this Symbios Logic SCSI controller:

00:08.0 SCSI storage controller: LSI Logic / Symbios Logic 53c895a

But this is true only if you add the SCSI controller too, not only the disk definition. In my case:

<controller type='scsi' index='0'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</controller>

Note the slot='0x08' that is reflected in the first field of lspci inside my Linux guest. So among your controllers you have to add the SCSI one.

In my case (Fedora 25 with virt-manager-1.4.1-2.fc25.noarch, qemu-kvm-2.7.1-6.fc25.x86_64, libvirt-2.2.1-2.fc25.x86_64), with "Disk bus" set to SCSI in virt-manager, the XML definition of the guest is automatically updated with the controller if it does not exist yet. And the disk definition section looks like this:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/slaxsmall.qcow2'/>
  <target dev='sda' bus='scsi'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

So I think you should set dev='sda' and not 'hda' in your XML for it.

I don't know if W2016 contains the Symbios Logic drivers already installed, so that a "simple" reboot could imply an automatic reconfiguration of the guest... Note also that in Windows, when the hardware configuration is considered heavily changed, you could be asked to register again (I don't think the IDE --> SCSI change should imply that...).

Gianluca
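For completeness, one way to try this without hand-editing the file on disk (a sketch; the domain name is the one from this thread):

$ virsh edit windows-server-2016-x64
# change <target dev='hda' bus='ide'/> to <target dev='sda' bus='scsi'/>
# and add the <controller type='scsi' .../> element shown above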

On Tue, Jun 20, 2017 at 04:24:32PM +0200, Gianluca Cecchi wrote:
[...]
I am actually very curious to know whether that would make a difference. I don't have such a Windows VM image ready to test at present.

Dan

Hi,

a small update on this. I just migrated the VM from the site to my laptop and fired it up. The exact same XML configuration (except file paths and such) starts up and bursts with 50 MB/s to 115 MB/s in the guest. This allows only one reasonable answer: the CPU in my laptop is somehow better suited to emulate IO than the CPU built into the host on site. The host there is an HP ProLiant MicroServer Gen8 with a Xeon processor. But the processor there is also never capped at 100% when the guest copies files.

I just ran another test by copying a 3 GB file on the guest. What I can observe on my computer is that the copy process does not run at a constant rate, but rather starts with 90 MB/s, then drops down to 30 MB/s, goes up to 70 MB/s, drops down to 1 MB/s, goes up to 75 MB/s, drops to 1 MB/s, goes up to 55 MB/s, and the pattern continues. Please note that the drive is still configured as:

<driver name='qemu' type='qcow2' cache='none' io='threads'/>

and I would expect a constant rate that is either high or low, since there is no caching involved and the underlying drive is a Samsung SSD 850 EVO. To give an idea how fast that drive is on my laptop:

$ dd if=/dev/zero of=testfile bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 2.47301 s, 424 MB/s

I can further observe that the smaller the written chunks are, the slower the overall performance is:

$ dd if=/dev/zero of=testfile bs=512K count=1000 oflag=direct
1000+0 records in
1000+0 records out
524288000 bytes (524 MB, 500 MiB) copied, 1.34874 s, 389 MB/s

$ dd if=/dev/zero of=testfile bs=5K count=1000 oflag=direct
1000+0 records in
1000+0 records out
5120000 bytes (5.1 MB, 4.9 MiB) copied, 0.105109 s, 48.7 MB/s

$ dd if=/dev/zero of=testfile bs=1K count=10000 oflag=direct
10000+0 records in
10000+0 records out
10240000 bytes (10 MB, 9.8 MiB) copied, 0.668438 s, 15.3 MB/s

$ dd if=/dev/zero of=testfile bs=512 count=20000 oflag=direct
20000+0 records in
20000+0 records out
10240000 bytes (10 MB, 9.8 MiB) copied, 1.10964 s, 9.2 MB/s

Could this be a limiting factor? Does qemu/kvm do many, many writes of just a few bytes?

Ideas, anyone?

Cheers
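One way to probe the small-write hypothesis on the host would be a small-block direct-write run with fio instead of dd (a sketch; it assumes fio with the libaio engine is installed, and 'testfile' is just a scratch file):

$ fio --name=smallwrites --filename=testfile --size=256M --bs=4k --rw=randwrite --direct=1 --ioengine=libaio --iodepth=32

Unlike dd, this keeps 32 requests in flight, so it would show whether the slowdown comes from the small request size itself or from issuing the requests one at a time.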
2017-06-21 20:46 GMT+02:00 Dan <srwx4096@gmail.com>:
[...]

--
Dominik Psenner

Hi again,

just today an issue I thought had been resolved popped up again. We back up the machine by doing:

virsh snapshot-create-as --domain domain --name backup --no-metadata --atomic --disk-only --diskspec hda,snapshot=external
# back up hda.qcow2
virsh blockcommit domain hda --active --pivot

Every now and then this process fails with the following error message:

error: failed to pivot job for disk hda
error: block copy still active: disk 'hda' not ready for pivot yet
Could not merge changes for disk hda of domain. VM may be in invalid state.

Live backups are a great asset and I expect them to work. Is this a bug that may also relate to the virtual drive performance issues we observe?

Cheers
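A sketch of a more defensive variant of that cronjob, which starts the commit without --pivot and only pivots once the job reports completion (the 'grep 100 %' check is an assumption about the progress line that virsh blockjob prints on this libvirt version):

virsh snapshot-create-as --domain domain --name backup --no-metadata --atomic --disk-only --diskspec hda,snapshot=external
# back up hda.qcow2, then:
virsh blockcommit domain hda --active
until virsh blockjob domain hda --info | grep -q '100 %'; do
  sleep 5
done
virsh blockjob domain hda --pivot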
2017-07-02 10:10 GMT+02:00 Dominik Psenner <dpsenner@gmail.com>:
[...]

--
Dominik Psenner

Maybe this is because your physical host memory is too small; that can cause instability of the virtual machine. But I'm just guessing. You can try to increase your memory.

Wang Liming

From: libvirt-users-bounces@redhat.com [mailto:libvirt-users-bounces@redhat.com] On behalf of Dominik Psenner
Sent: July 2, 2017 16:22
To: libvirt-users@redhat.com
Subject: Re: [libvirt-users] virtual drive performance

[...]

Just a little catch-up. This time I was able to resolve the issue by doing:

virsh blockjob domain hda --abort
virsh blockcommit domain hda --active --pivot

Last time I had to shut down the virtual machine and do this while it was offline.

Thanks, Wang, for your valuable input. As far as the memory goes, there's plenty of headroom:

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.8G        1.8G        407M        9.7M        5.5G        5.5G
Swap:          8.0G        619M        7.4G

2017-07-02 10:26 GMT+02:00 王李明 <wanglm@certusnet.com.cn>:
[...]

--
Dominik Psenner

Hi,

different day, same issue. The cronjob runs and fails:

$ virsh snapshot-create-as --domain domain --name backup --no-metadata --atomic --disk-only --diskspec hda,snapshot=external
error: failed to pivot job for disk hda
error: block copy still active: disk 'hda' not ready for pivot yet
Could not merge changes for disk hda of domain. VM may be in invalid state.

Then running the following in the morning succeeds and successfully pivots the snapshot into the base image while the VM is live:

$ virsh blockjob domain hda --abort
$ virsh blockcommit domain hda --active --pivot
Successfully pivoted

This need for manual intervention is becoming a tiring job. Is someone else seeing the same issue, or does anyone have an idea what the cause could be? Can I trust the output, and is the base image really up to the latest state?

Cheers

2017-07-02 10:30 GMT+02:00 Dominik Psenner <dpsenner@gmail.com>:
[...]

--
Dominik Psenner

Of course the cronjob fails when trying to run virsh blockcommit, not when creating the snapshot; sorry for the noise.

2017-07-07 9:15 GMT+02:00 Dominik Psenner <dpsenner@gmail.com>:
Hi,
different day, same issue.. cronjob runs and fails:
$ virsh snapshot-create-as --domain domain --name backup --no-metadata --atomic --disk-only --diskspec hda,snapshot=external error: failed to pivot job for disk hda error: block copy still active: disk 'hda' not ready for pivot yet Could not merge changes for disk hda of domain. VM may be in invalid state.
Then running the following in the morning succeeds and successfully pivotes the snapshot into the base image while the vm is live:
$ virsh blockjob domain hda --abort $ virsh blockcommit domain hda --active --pivot Successfully pivoted
This need of manual interventions is becoming a tiring job..
I someone else seeing the same issue or has an idea what the cause could be? Can I trust the output and is the base image really up to the latest state?
Cheers
2017-07-02 10:30 GMT+02:00 Dominik Psenner <dpsenner@gmail.com>:
Just a little catch-up. This time I was able to resolve the issue by doing:
virsh blockjob domain hda --abort virsh blockcommit domain hda --active --pivot
Last time I had to shut down the virtual machine and do this while being offline.
Thanks Wang for your valuable input. As far as the memory goes, there's plenty of head room:
$ free -h total used free shared buff/cache available Mem: 7.8G 1.8G 407M 9.7M 5.5G 5.5G Swap: 8.0G 619M 7.4G
2017-07-02 10:26 GMT+02:00 王李明 <wanglm@certusnet.com.cn>:
mybe this is because you physic host memory is small
then this will Causing instability of the virtual machine
But I'm just guessing
You can try to increase your memory
Wang Liming
*发件人:* libvirt-users-bounces@redhat.com [mailto:libvirt-users-bounces@ redhat.com] *代表 *Dominik Psenner *发送时间:* 2017年7月2日 16:22 *收件人:* libvirt-users@redhat.com *主题:* Re: [libvirt-users] virtual drive performance
Hi again,
just today an issue I've thought to be resolved popped up again. We backup the machine by doing:
virsh snapshot-create-as --domain domain --name backup --no-metadata --atomic --disk-only --diskspec hda,snapshot=external
# backup hda.qcow2
virsh blockcommit domain hda --active --pivot
Every now and then this process fails with the following error message:
error: failed to pivot job for disk hda error: block copy still active: disk 'hda' not ready for pivot yet Could not merge changes for disk hda of domain. VM may be in invalid state.
I expect live backups are a great asset and should work. Is this a bug that may relates also to the virtual drive performance issues we observe?
Cheers
2017-07-02 10:10 GMT+02:00 Dominik Psenner <dpsenner@gmail.com>:
Hi
a small update on this. I just migrated the vm from the site to my laptop and fired it up. The exact same xml configuration (except file paths and such) starts up and bursts with 50Mb/s to 115Mb/s in the guest. This allows only one reasonable answer: the cpu on my laptop is somehow better suited to emulate IO than the CPU built into the host on site. The host there is a HP proliant microserver gen8 with xeon processor. But the processor there is also never capped at 100% when the guest copies files.
I just ran another test by copying a 3Gb large file on the guest. What I can observe on my computer is that the copy process is not at a constant rate but rather starts with 90Mb/s, then drops down to 30Mb/s, goes up to 70Mb/s, drops down to 1Mb/s, goes up to 75Mb/s, drops to 1Mb/s, goes up to 55Mb/s and the pattern continues. Please note that the drive is still configured as:
<driver name='qemu' type='qcow2' cache='none' io='threads'/>
and I would expect a constant rate that is either high or low since there is no caching involved and the underlying hard drive is a samsung ssd evo 850. To have an idea how fast that drive is on my laptop:
$ dd if=/dev/zero of=testfile bs=1M count=1000 oflag=direct 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB, 1000 MiB) copied, 2.47301 s, 424 MB/s
I can further observe that the smaller the saved chunks are the slower the overall performance is:
dd if=/dev/zero of=testfile bs=512K count=1000 oflag=direct 1000+0 records in 1000+0 records out 524288000 bytes (524 MB, 500 MiB) copied, 1.34874 s, 389 MB/s
$ dd if=/dev/zero of=testfile bs=5K count=1000 oflag=direct 1000+0 records in 1000+0 records out 5120000 bytes (5.1 MB, 4.9 MiB) copied, 0.105109 s, 48.7 MB/s
$ dd if=/dev/zero of=testfile bs=1K count=10000 oflag=direct 10000+0 records in 10000+0 records out 10240000 bytes (10 MB, 9.8 MiB) copied, 0.668438 s, 15.3 MB/s
$ dd if=/dev/zero of=testfile bs=512 count=20000 oflag=direct 20000+0 records in 20000+0 records out 10240000 bytes (10 MB, 9.8 MiB) copied, 1.10964 s, 9.2 MB/s
Could this be a limiting factor? Does qemu/kvm do many many writes of just a few bytes?
Ideas, anyone?
Cheers
2017-06-21 20:46 GMT+02:00 Dan <srwx4096@gmail.com>:
On Tue, Jun 20, 2017 at 04:24:32PM +0200, Gianluca Cecchi wrote:
On Tue, Jun 20, 2017 at 3:38 PM, Dominik Psenner <dpsenner@gmail.com> wrote:
to the following:
<disk type='file' device='disk'> <driver name='qemu' type='qcow2' cache='none'/> <source file='/var/data/virtuals/machines/windows-server-2016- x64/image.qcow2'/> <backingStore/> <target dev='hda' bus='scsi'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk>
Do you see any gotchas in this configuration that could prevent the virtualized guest to power on and boot up?
When I configure like this, from a linux guest point of view I get this Symbios Logic SCSI Controller: 00:08.0 SCSI storage controller: LSI Logic / Symbios Logic 53c895a
But htis is true only if you add the SCSI controller too, not only the disk definition. In my case
<controller type='scsi' index='0'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</controller>
Note the slot='0x08', which is reflected in the first field of lspci inside my Linux guest. So you have to add the SCSI controller alongside your other controllers.
In my case (Fedora 25 with virt-manager-1.4.1-2.fc25.noarch, qemu-kvm-2.7.1-6.fc25.x86_64, libvirt-2.2.1-2.fc25.x86_64), with "Disk bus" set to SCSI in virt-manager, the XML definition of the guest is automatically updated with the controller if it does not exist yet. The disk definition section then looks like this:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/slaxsmall.qcow2'/>
  <target dev='sda' bus='scsi'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
So I think you should set dev='sda' and not 'hda' in your XML for it.
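Putting these suggestions together, the disk and controller stanzas would look roughly like this (a sketch only; the file path is taken from the earlier example and the PCI slot from the controller snippet above):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/data/virtuals/machines/windows-server-2016-x64/image.qcow2'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='scsi' index='0'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</controller>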
I am actually very curious to know if that would make a difference. I don't have such a Windows VM image ready to test at present.
Dan
I don't know if W2016 already contains the Symbios Logic drivers, so that a "simple" reboot could imply an automatic reconfiguration of the guest... Note also that in Windows, when the hardware configuration is considered heavily changed, you could be asked to register again (I don't think that the IDE --> SCSI change should imply it...).
Gianluca
--
Dominik Psenner

Hi all: I used OpenStack + Ceph to create a virtual machine running Windows 2008 Enterprise Edition SP2. Now I find that almost every day, the first time I copy a large 60 GB file between two different folders on the same disk, the copy rate is only 2 MB/s. But once this first copy has finished, copying again is normal and can reach 70 MB/s. The next day the same first-copy slowness appears again, though not necessarily every day. Any help will be appreciated.

Wang Liming

On Wed, Jun 14, 2017 at 10:26:09AM +0200, Dominik Psenner wrote:
Hi,
I'm investigating a performance issue on a virtualized windows server host that is run on a ubuntu machine via libvirt/qemu. While the host can easily read/write on the raid drive with 100Mmb/s as observable with dd, the virtualized windows server running on that host is barely able to read/write with at most 8Mb/s and averages around 1.4Mb/s. This has grown to the extent that the virtualized host is often unresponsive and even unable to start up its services with system default timeouts. Any help to improve the situation is greatly appreciated.
Just to provide some even weirder numbers: with Debian on bare metal hosting a Debian VM via qemu/kvm, comparing I/O models, I found about a 40% performance drop (read) in pbench, from 26K transactions/s to 15K transactions/s, when switching from IDE to virtio, while bare metal does 18K. There was a 0% to 70% improvement in apache bench (requests/s), depending on the concurrency level, when switching from IDE+e1000 to virtio+virtio. Very likely something was seriously wrong, as I expected virtio to be the winner when I/O bound. Dan
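For reference, switching the Windows guest discussed in this thread to virtio would mean a disk stanza roughly like the sketch below (an illustration only; the PCI address is left out so libvirt assigns one, and a Windows guest needs the virtio-win drivers installed before the bus is switched, otherwise it will fail to find its boot disk):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/data/virtuals/machines/windows-server-2016-x64/image.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>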
participants (6)
- Andrea Bolognani
- Dan
- Dominik Psenner
- Gianluca Cecchi
- Thiago Oliveira
- 王李明