[libvirt-users] Detach disk from VM - virsh (working) vs. PHP (not working)
by Jan Horak
Hi all,
I created a PHP script that creates a virtual server with two QCOW2 disks: one is our installation system and the second is the target system.
After a successful installation (creating a blank Debian system and preparing all files and GRUB partitions), I need to restart the virtual machine without the installation disk.
If I use virsh:
detach-disk --domain debian-test2 --persistent --target vda
reset debian-test2
everything works well.
If I use PHP, there is a "complicated" way and a "simple" way:
1) Complicated:
libvirt_domain_destroy($domain);
libvirt_domain_undefine($domain);
$xml = domain_create_xml($name,$uuid,$memory,$cpu,$vnc,$mac);
$domain = libvirt_domain_define_xml($server->conn, $xml);
libvirt_domain_disk_add($domain, "/users/".$name.".img", "vdb", "virtio", "qcow2", NULL);
libvirt_domain_create($domain);
(or, instead of libvirt_domain_disk_add, I can define the disk directly in the XML)
But in this case, the server will not boot (GRUB error).
2) Simple:
libvirt_domain_disk_remove($domain, "vda");
libvirt_domain_reboot($domain);
The problem with this solution is that it doesn't work. The disk removal fails with the error "Unable attach disk". I looked at the source code and, yes, there is a mix-up between "attach" and "detach", but the main problem I see is in the libvirt log:
Aug 1 02:57:05 ry libvirtd[19051]: missing source information for device vda
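If I read that right, libvirt refuses to parse a file-backed disk element that has no <source>, so the XML passed for the detach probably needs to look more like this (the path below is only a placeholder, not my real image):
<disk type='file' device='disk'>
  <source file='/path/to/installer-disk.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
As far as I understand, libvirt then matches the disk to detach by its target dev.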
I tried to add the source details to the XML in the source of the PHP module,
libvirt-domain.c:
if (asprintf(&newXml,
             " <disk type='file' device='disk'>\n"
             "   <target dev='%s'/>\n"
             " </disk>", dev) < 0) {
    set_error("Out of memory" TSRMLS_CC);
    goto error;
}
but my attempts were unsuccessful (I'm not a C programmer).
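What I think it would need to end up as is roughly the following. This is only a sketch of the idea, not working code: "path" and "dom" are names I made up ("path" would have to be read from the domain XML first, e.g. via virDomainGetXMLDesc, and "dom" stands for the virDomainPtr the module already holds), and as I said, I could not get my own edit to work.
/* Sketch only: assumes `path` already holds the disk's source file,
   looked up from the domain XML, and `dom` is the domain handle. */
if (asprintf(&newXml,
             " <disk type='file' device='disk'>\n"
             "   <source file='%s'/>\n"
             "   <target dev='%s'/>\n"
             " </disk>", path, dev) < 0) {
    set_error("Out of memory" TSRMLS_CC);
    goto error;
}

/* Both flags together should match what "virsh detach-disk --persistent"
   does on a running guest: drop the disk from the live domain and from
   the saved config. */
if (virDomainDetachDeviceFlags(dom, newXml,
                               VIR_DOMAIN_AFFECT_LIVE |
                               VIR_DOMAIN_AFFECT_CONFIG) < 0)
    goto error;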
Questions:
A) Why does the complicated way not work, leaving the system unable to boot (GRUB error), when virsh works fine?
B) Why does libvirt_domain_disk_remove not work? I am using the latest libvirt and libvirt-php from git.
Thank you,
Jan
[libvirt-users] Researching why different cache modes result in 'some' guest filesystem corruption..
by vincent@cojot.name
Hi All,
I've been chasing down an issue in recent weeks (my own lab, so no prod
here) and I'm reaching out in case someone might have some guidance to
share.
I'm running fairly large VMs (RHOSP underclouds: 8 vCPUs, 32 GB RAM, about
200 GB on a single growable qcow2 disk) on some RHEL 7.6 hypervisors (kernel
3.10.0-927.2x.y, libvirt 4.5.0, qemu-kvm 1.5.3) on top of SSD/NVMe drives
with various filesystems (vxfs, zfs, etc.) and using ECC RAM.
The issue can be described as follows:
- the guest VMs work fine for a while (days, weeks) but after a kernel
update (z-stream) comes in, I am often greeted by the following message
immediately after rebooting (or attempting to reboot into the new
kernel):
"error: not a correct xfs inode"
- booting the previous kernel works fine and re-generating the initramfs
for the new kernel (from the n-1 kernel) does not solve anything.
- if booted from an ISO, xfs_repair does not find errors.
- on ext4, there seems to be some kind of corruption there too.
I'm building the initial guest image qcow2 for those guest VMs this way:
1) start with a rhel-guest image (currently
rhel-server-7.6-update-5-x86_64-kvm.qcow2)
2) convert to LVM by doing this:
qemu-img create -f qcow2 -o preallocation=metadata,cluster_size=1048576,lazy_refcounts=off final_guest.qcow2 512G
virt-format -a final_guest.qcow2 --partition=mbr --lvm=/dev/rootdg/lv_root --filesystem=xfs
guestfish --ro -a rhel_guest.qcow2 -m /dev/sda1 -- tar-out / - | \
guestfish --rw -a final_guest.qcow2 -m /dev/rootdg/lv_root -- tar-in - /
3) use "final_guest.qcow2" as the basis for my guests with LVM.
After chasing this issue down some more and attempting various things
(building the image on Fedora 29, building a real XFS filesystem inside a
VM and using the generated qcow2 as a basis instead of virt-format),
I noticed that the SATA disk of each of those guests was using
'directsync' (instead of 'Hypervisor Default'). As soon as I switched to
'None', the XFS issues disappeared, and I've now applied several
consecutive kernel updates without issues. Also, 'directsync' and
'writethrough', while providing decent performance, both exhibited the XFS
'corruption' behaviour; only 'none' seems to have solved it.
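For reference, the cache mode I'm talking about is the cache attribute on the disk's <driver> element in the domain XML (as far as I can tell, the 'Hypervisor Default' setting in the UI just means the attribute is omitted). What finally worked for me corresponds to something like this, the path being only an example:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/undercloud.qcow2'/>
  <target dev='sda' bus='sata'/>
</disk>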
I've read the docs, but I thought it was OK to use those modes (UPS,
battery-backed RAID, etc.).
Does anyone have any idea what's going on or what I may be doing wrong?
Thanks for reading,
Vincent
[libvirt-users] Why does librbd disallow VM live migration if the disk cache mode is not none or directsync
by Ming-Hung Tsai
I'm curious why librbd sets this limitation. The rule first
appeared in librbd.git commit d57485f73ab. Theoretically, a
write-through cache is also safe for VM migration, if the cache
implementation guarantees that cache invalidation and disk write are
synchronous operations.
For example, I'm using Ceph RBD images as the VM storage backend. Ceph's
librbd supports a synchronous write-through cache, by setting
rbd_cache_max_dirty to zero and rbd_cache_block_writes_upfront to true
(roughly the ceph.conf settings sketched below), so it should be safe for
VM migration. Is that true? Any suggestions would be appreciated. Thanks.
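In ceph.conf terms, the configuration I mean would be roughly the following (placing it under [client] is my assumption; per-client or per-image overrides would work as well):
[client]
rbd cache = true
rbd cache max dirty = 0
rbd cache block writes upfront = true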
Ming-Hung Tsai