[libvirt-users] Managing VMs across multiple physical hosts
by Daniel Corbe
Hello List,
We’re currently managing VMs across approximately 26 physical hosts. We migrate VMs around a lot, to spread load and to keep VMs running while physical hosts receive package updates.
Suffice it to say, finding out which physical host a VM is currently running on can be a pain.
Does anyone know of an open-source tool that will give me a snapshot of what’s running where across a list of physical hosts? It doesn’t have to be anything fancy; a command-line utility will do.
Best,
Daniel
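Absent a dedicated tool, the kind of one-liner being asked for can be sketched in shell: loop over the hosts and query each one's libvirtd remotely over the qemu+ssh transport. The HOSTS list and the URI scheme below are assumptions to adapt to your own site:

```shell
#!/bin/sh
# find-vm.sh -- report which physical host is running a given domain.
HOSTS="kvm1 kvm2 kvm3"   # replace with your ~26 hosts

# match_domain NAME LIST: exact whole-line match against `virsh list --name`
# output, so "web" does not accidentally match "web2"
match_domain() {
    printf '%s\n' "$2" | grep -qx "$1"
}

find_vm() {
    vm="$1"
    for h in $HOSTS; do
        # --name prints one running domain name per line, no table decoration
        names=$(virsh -c "qemu+ssh://$h/system" list --name 2>/dev/null)
        if match_domain "$vm" "$names"; then
            echo "$vm is running on $h"
            return 0
        fi
    done
    echo "$vm not found on any host" >&2
    return 1
}

# usage: find_vm mydomain
```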
8 years, 7 months
[libvirt-users] VM crash : Failed to terminate process X with SIGKILL: Device or resource busy
by Michel Villeneuve
Hello
since I changed my hypervisor from CentOS 6.3 to Fedora 23, I have had many
problems with different VMs.
Very often, about once a day (I have about 150 VMs),
some VMs crash or freeze at random, and I get
messages like this on the console:
[<fffffffff8000a08bd>] wake_bit_function+0x0/0x23
[<fffffffff8800a0ead>] :jbd:journal_get_write_access+0x22/0x33
...
[<fffffffff800013ccd>] :ext3:ext3_dirty_inode+0x63/0x7b
The VM is then crashed and cannot be accessed, although it often still
responds to ping.
Before my migration I never hit these problems. These are strictly the same
VMs on CentOS 6.3 and Fedora 23; I only changed the parameter
<type arch='x86_64' machine='rhel-6.0.0'>hvm</type>
to
<type arch='x86_64' machine='pc-i440fx-2.4'>hvm</type>
and ran virsh define.
I tried some other values for this parameter without success.
I also added a lockd manager on Fedora 23; before, on CentOS 6.3, I used
libvirt 0.9.5 or 1.0.2 without lockd.
A major problem with these crashes is that the VMs cannot be destroyed
with the virsh command; ps reports the qemu process as defunct.
If I try a virsh destroy, I get this in the log file:
2016-04-20 20:32:47.318+0000: 5541: info : virEventPollRunOnce:641 :
EVENT_POLL_RUN: nhandles=11 timeout=-1
2016-04-20 20:32:55.028+0000: 5567: debug : virProcessKillPainfully:368 :
Timed out waiting after SIGTERM to process 8720, sending SIGKILL
2016-04-20 20:33:00.032+0000: 5567: error : virProcessKillPainfully:398 :
Failed to terminate process 8720 with SIGKILL: Périphérique ou ressource
occupé (Device or resource busy)
or on the console:
Failed to terminate process xxx with SIGTERM: Device or resource busy
and the VM remains in the list in a "Stopping" state.
Result of ps on the qemu process attached to the VM:
qemu      8720     1  0 Apr20 ?  00:07:16 [qemu-system-x86] <defunct>
root      8733     2  0 Apr20 ?  00:00:01 [vhost-8720]
root      8735     2  0 Apr20 ?  00:00:00 [kvm-pit/8720]
libvirtd seems to be in an abnormal state. If I restart libvirtd,
the virsh command just hangs and never removes the VM from the list.
The only option seems to be rebooting the hypervisor, but that takes down
all the production VMs too.
Is there a way to remove the defunct qemu process without rebooting
the hypervisor?
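As general background (not from this thread): a <defunct> entry is a zombie that has already exited, so no signal, SIGKILL included, can remove it; it only disappears when its parent reaps it. A zombie that persists even with pid 1 as parent usually points at something kernel-side (such as the vhost-8720 and kvm-pit/8720 kernel threads shown above) still referencing it. A small helper to confirm the state from ps, as a sketch:

```shell
# check_defunct PID: succeed if ps reports the process as a zombie (state Z)
check_defunct() {
    st=$(ps -o stat= -p "$1" 2>/dev/null)
    case "$st" in
        *Z*) return 0 ;;   # defunct/zombie: already dead, unkillable
        *)   return 1 ;;
    esac
}
# usage: check_defunct 8720 && echo "qemu 8720 is defunct; only reaping (or reboot) clears it"
```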
Perhaps the problem comes from the VM parameters, which were created on
CentOS 6.3 with a libvirt version < 1.0. Do I need to convert some other
parameters?
I am trying to set up a new hypervisor at a version level lower than
Fedora 23, perhaps CentOS 7.2, to
see what happens and whether there is a problem like mine.
Thanks
PS:
I set the log_level to 1
----------------------information in logfile
[root@kvmserver6 ~]# ls -al /var/lib/libvirt/qemu/domain-1-TEST-VM-A
total 8
drwxr-x---   2 qemu qemu 4096 20 Apr 22:01 .
drwxr-x--x. 18 qemu qemu 4096 20 Apr 22:01 ..
srwxrwxr-x   1 qemu qemu    0 20 Apr 22:01 monitor.sock
/var/lib/libvirt/qemu/channel/target/domain-1-TEST-VM-A/
[root@kvmserver6 ~]# cat /var/log/libvirt/qemu/TEST-VM-A.log
2016-04-20 20:01:36.714+0000: starting up libvirt version: 1.3.3, package:
1.fc23 (Unknown, 2016-04-06-15:17:39, thinkpad2), qemu version: 2.4.1
(qemu-2.4.1-8.fc23), hostname: kvmserver6.univ-brest.fr
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name TEST-VM-A,debug-threads=on -S
-machine pc-i440fx-2.4,accel=kvm,usb=off -m 1024 -realtime mlock=off -smp
1,sockets=1,cores=1,threads=1 -uuid 1e4c27e4-123e-719a-9fdf-f783d34cbb40
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-TEST-VM-A/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/libvirt/images/POOL_PROD4/TEST-VM-A.img,format=raw,if=none,id=drive-virtio-disk0
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:77:11:11,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -vnc 0.0.0.0:0,password -k fr
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
ES1370,id=sound0,bus=pci.0,addr=0x4 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
char device redirected to /dev/pts/1 (label charserial0)
qemu: terminating on signal 15 from pid 5541
--
Michel Villeneuve
Tel 02 98 01 71 61
[libvirt-users] UEFI built from TianoCore via EDK2 can't persist boot changes
by jsl6uy js16uy
Apologies if this has been gone over before, but I believe I have checked
the intertubes more than a bit...
I am using libvirt and have VMs booting under an OVMF.fd to use EFI
firmware. I can create VMs (Ubuntu Linux) and they boot up. However,
every time I reboot I am dropped into the default EFI shell provided by
the TianoCore build.
I then have to walk the filesystem to the booting EFI application, in this
case grubx64.efi, and run it to actually finish booting the host.
I tried adding boot entries with efibootmgr from within the OS and also
with bcfg from the EFI shell. I get no errors when adding an entry; the new
entry shows up and I can manipulate it, e.g. set it as the next boot. But
as soon as I reboot, I get dumped back to the EFI shell.
Am I missing something? Are the variable changes not being stored? Is
there somewhere to look for an error, perhaps?
I can also use the same EFI bootloader under libvirt to boot a hybrid ISO;
the location of the EFI application on the hybrid ISO is
/EFI/boot/bootx64.efi. It drops me right into my grub menu, after falling
through from EFI FLOPPY1 and EFI FLOPPY2 to EFI DVD.
Since that was/is working, I tried adding that to my EFI partition so it
has /EFI/boot/bootx64.efi, the thinking being that since I can't add an
entry, I would set up the "known" EFI boot path. Of course, that didn't
work either.
I am on arch linux:
Linux X 4.4.5-1-ARCH #1 SMP PREEMPT Thu Mar 10 07:38:19 CET 2016 x86_64
GNU/Linux
local/libvirt 1.3.2-3
API for controlling virtualization engines
(openvz,kvm,qemu,virtualbox,xen,etc)
local/libvirt-python 1.3.1-1
libvirt python binding
UEFI:
Shell> ver
UEFI Interactive Shell v2.1
EDK II
UEFI v2.50 (EDK II, 0x00010000)
For this host, my nvram settings look like:
<os>
  <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
  <loader readonly='yes' type='pflash'>/home/xyz/OVMF.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/X_VARS.fd</nvram>
  <boot dev='hd'/>
</os>
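One thing worth checking (an assumption, not something confirmed in the thread): with a monolithic OVMF.fd mapped read-only as pflash, variable writes may have nowhere to land. The split CODE/VARS build keeps the firmware image read-only while giving each domain its own writable vars file; the paths below follow a common edk2 package layout and may differ on Arch:

```xml
<os>
  <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
  <!-- firmware code: read-only pflash -->
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <!-- per-domain variable store: writable copy of the VARS template -->
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/X_VARS.fd</nvram>
  <boot dev='hd'/>
</os>
```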
[libvirt-users] ssh-askpass in libvirt
by Alex Roithman
It seems that libvirt uses environment variables to find the askpass
application. Can you tell me which variables these are? Or does libvirt use
another mechanism to invoke the ssh-askpass application?
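For what it's worth, the following reflects standard OpenSSH behaviour rather than anything libvirt-specific (the assumption being that libvirt's qemu+ssh:// transport simply spawns the regular ssh client): ssh runs the program named by $SSH_ASKPASS only when it has no controlling terminal and $DISPLAY is set.

```shell
# Environment the ssh client consults for its askpass fallback; the
# ssh-askpass path is an example and varies by distribution.
export DISPLAY=${DISPLAY:-:0}            # ssh only checks that DISPLAY is set
export SSH_ASKPASS=/usr/bin/ssh-askpass
# Detach from the terminal so ssh cannot prompt on a tty and falls back
# to SSH_ASKPASS, e.g.:
#   setsid virsh -c qemu+ssh://host/system list
```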
[libvirt-users] Create multiple domains from single saved domain state (is UUID/name fixed?)
by Jonas Finnemann Jensen
Hi,
I would like to save a running domain (i.e. disk + memory) and be able to
restore it multiple times, creating duplicates of the original domain that
all start from the same state.
Use case:
I'm building a task-processing system for use in a CI flow.
I want to run multiple VMs in parallel using the same image (always
starting from the same state).
To avoid needlessly booting between tasks, I would like to save (and
distribute) the domain state, so that I can just restore from memory.
However, I can't seem to change the UUID or the name of a domain once it
is saved, nor do I seem able to rename a domain while it is running.
I can obviously duplicate both the disks and the file to which I saved the
domain state using "virsh save", but I seem unable to rename it before I
restore. Any ideas?
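For a cold clone (not the saved-RAM restore being asked about, where the ABI check blocks exactly this), the usual trick is to rewrite the name and UUID in the dumped XML before defining the copy. A sketch, where the domain names and paths are placeholders:

```shell
# new_uuid: fresh random UUID from the Linux kernel (avoids needing uuidgen)
new_uuid() {
    cat /proc/sys/kernel/random/uuid
}

# clone_xml OLDNAME NEWNAME: rewrite <name> and <uuid> in domain XML on stdin
clone_xml() {
    sed -e "s|<name>$1</name>|<name>$2</name>|" \
        -e "s|<uuid>.*</uuid>|<uuid>$(new_uuid)</uuid>|"
}

# usage: virsh dumpxml original | clone_xml original clone > clone.xml
#        virsh define clone.xml     # plus duplicated disk images
```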
Could I do this with snapshots? I suspect not, since I see
virDomainSnapshotRedefinePrep() calling
virDomainDefCheckABIStability, which raises the error here:
https://fossies.org/dox/libvirt-1.3.3/domain__conf_8c_source.html#l17991
Out of curiosity, does anyone know what horrors might befall me if I were
to remove the lines protecting against name and UUID changes, then compile
my own libvirt?
The comment in the code says the name can be changed, but I'm guessing I
would have to change the UUID too. Does anyone see how that would create
issues? I'm not sure how libvirt uses the UUID internally.
--
Regards Jonas Finnemann Jensen.
[libvirt-users] Make disk "unaffected by snapshots"
by Basin Ilya
Hi.
Is it possible to make a virtual disk unaffected by snapshots, just like write-through disks in VirtualBox?
I tried a raw disk with the shareable flag, but snapshot creation fails with: "internal snapshot for disk vdb unsupported for storage type raw".
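One avenue worth trying (an assumption, not something confirmed here): the raw format only rules out internal snapshots. With external, disk-only snapshots, `virsh snapshot-create-as` accepts a per-disk `--diskspec`, and `snapshot=no` excludes a disk from the snapshot entirely. A sketch with hypothetical domain and disk names:

```shell
# snapshot_skip_vdb DOMAIN: external disk-only snapshot that leaves vdb
# untouched; vda gets an external qcow2 overlay instead of an internal one.
snapshot_skip_vdb() {
    virsh snapshot-create-as "$1" snap1 --disk-only \
        --diskspec vda,snapshot=external \
        --diskspec vdb,snapshot=no
}
# usage: snapshot_skip_vdb mydomain
```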