[libvirt-users] Mounting VM filesystem on host while VM running
by Patrick PICHON
Hello,
All my VMs are using LVs created on the host side.
I'm using collectd to monitor some of the resources of my host, as well
as the libvirt plugin to monitor my VMs.
Collectd has an interesting plugin (df) which can monitor filesystem usage.
I would like to use it to monitor the VMs' filesystem usage.
Since it is obviously risky to mount a VM's filesystem on the host while the
VM is running, I'm wondering whether I could mount it read-only without
expecting any panic/corruption problems.
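What I have in mind is something along these lines (the LV path, mount point
and ext4 filesystem type are only placeholders, not my real setup):
# read-only mount of a guest LV on the host while the guest keeps running
mkdir -p /mnt/vm-root
mount -o ro,noload -t ext4 /dev/vg0/vm-root /mnt/vm-root
df -h /mnt/vm-root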
Any advice?
Patrick
[libvirt-users] Libvirt: dynamic ownership did not work
by Jonatan Schlag
Hi,
I have a very strange problem with libvirt. I work on several machines
with libvirt (Debian / Arch Linux), and libvirt sets the ownership of
image files automatically to the qemu user/group, for example on Arch
Linux to nobody:kvm.
So when I copy an image file as root and then use it with qemu, libvirt
changes the owner/group to nobody:kvm.
But I also compiled libvirt for another machine (gcc 4.9.4, glibc 2.12), and
on this machine libvirt does not change the ownership of the image files,
which results in this error:
libvirtError: internal error: process exited while connecting to
monitor: able-ticketing,seamless-migration=on -device
qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
-device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device
hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev
spicevmc,id=charredir0,name=usbredir -device
usb-redir,chardev=charredir0,id=redir0 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
2016-08-03T18:19:47.494512Z qemu-system-x86_64: -drive
file=/data/hdd1/libvirt/images/test.img,format=raw,if=none,id=drive-virtio-disk0:
Could not open '/data/hdd1/libvirt/images/test.img': Permission denied
When I set the ownership manually to nobody:kvm everything is fine, but
I could not work out why libvirt is unable to set the ownership
automatically.
Can anybody give me a hint where I could search further to work out the
problem?
My libvirt version is 1.3.3.2, and dynamic_ownership = 1 is set in
/etc/libvirt/qemu.conf.
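For reference, this is the setting as it stands, plus the manual workaround
that makes things work (the image path is the one from the error above):
# /etc/libvirt/qemu.conf on the affected machine
dynamic_ownership = 1
# manual workaround until dynamic ownership works again
chown nobody:kvm /data/hdd1/libvirt/images/test.img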
I also created a bug report where I described the problem in a little
more detail:
https://bugzilla.redhat.com/show_bug.cgi?id=1363864
Thanks for any help.
Regards, Jonatan
[libvirt-users] NPIV storage pools do not map to same LUN units across hosts.
by Nitesh Konkar
Link: http://wiki.libvirt.org/page/NPIV_in_libvirt
Topic: Virtual machine configuration change to use vHBA LUN
There is an NPIV storage pool defined on two hosts, and the pool contains a
total of eight volumes allocated from a storage device.
Source:
# virsh vol-list poolvhba0
Name Path
------------------------------------------------------------------------------
unit:0:0:0 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000366
unit:0:0:1 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000367
unit:0:0:2 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000368
unit:0:0:3 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000369
unit:0:0:4 /dev/disk/by-id/wwn-0x6005076802818bda300000000000036a
unit:0:0:5 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000380
unit:0:0:6 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000381
unit:0:0:7 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000382
--------------------------------------------------------------------
Destination:
--------------------------------------------------------------------
# virsh vol-list poolvhba0
Name Path
------------------------------------------------------------------------------
unit:0:0:0 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000380
unit:0:0:1 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000381
unit:0:0:2 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000382
unit:0:0:3 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000367
unit:0:0:4 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000368
unit:0:0:5 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000366
unit:0:0:6 /dev/disk/by-id/wwn-0x6005076802818bda300000000000036a
unit:0:0:7 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000369
--------------------------------------------------------------------
As you can see in the above output, the same set of eight LUNs from the
storage server has been mapped on both hosts, but the order in which the
LUNs are probed on each host is different, resulting in different unit
names on the two hosts.
If a guest's XML references its storage by "unit" number, is it safe to
migrate such a guest? The "unit" number is assigned by the driver according
to the specific order in which it probes the storage, so migration can
result in different unit names on the destination host. The migrated guest
then gets mapped to the wrong LUNs and is given the wrong disks. The problem
is that the LUN numbers on the source and destination hosts do not agree.
For example, LUN 0 on source_host may be LUN 5 on destination_host.
When the guest is given the wrong disk, it suffers a fatal I/O error. (This
manifests as fatal I/O errors because the guest has no idea that its disks
were just swapped out from under it.) The migration does not take into
account that the unit numbers may not match on the source and destination
sides.
So, should libvirt make sure that guest domains reference NPIV pool volumes
by their globally-unique WWN instead of by "unit" numbers?
Currently the guest XML references its storage by "unit" number, e.g.:
<disk type='volume' device='lun'>
  <driver name='qemu' type='raw' cache='none'/>
  <source pool='poolvhba0' volume='unit:0:0:0'/>
  <backingStore/>
  <target dev='vdb' bus='virtio'/>
  <alias name='virtio-disk1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
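For comparison, each volume can already be addressed by its stable,
globally-unique by-id path, which refers to the same device on both hosts,
e.g. (the WWN is taken from the listing above):
# resolve a volume by its WWN-based path instead of its probe-order unit name
virsh vol-name /dev/disk/by-id/wwn-0x6005076802818bda3000000000000366
virsh vol-info /dev/disk/by-id/wwn-0x6005076802818bda3000000000000366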
I am planning to write a patch for it. Any comments on the above
observation/approach would be appreciated.
Thanks,
Nitesh.
[libvirt-users] Error with OpenStack starting an instance
by Silvia Fichera
Hi all,
I have installed OpenStack using devstack. When I try to launch an instance,
I get an error related to libvirt:
libvirtError: Cannot get interface MTU on 'br-int'
and this prevents the VM from being created.
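On the host I can at least check whether the bridge libvirt complains about
actually exists (br-int is normally the Open vSwitch integration bridge
created by Neutron, so these commands assume OVS is in use):
# check that the integration bridge exists and is up
ip link show br-int
ovs-vsctl list-ports br-int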
Any hints on how to solve it?
Thanks
--
Silvia Fichera
[libvirt-users] Crash after connection close when callback is in progress
by Vincent Bernat
Hey!
It seems that if I close a connection while a domain event callback is
in progress, I can easily trigger a crash. Here is a backtrace:
#v+
#0 virFree (ptrptr=0x0) at ../../../src/util/viralloc.c:582
save_errno = <optimized out>
#1 0x00007fc8328a4ad2 in virObjectEventCallbackListPurgeMarked (cbList=0xadfc30) at ../../../src/conf/object_event.c:282
freecb = <optimized out>
n = 0
#2 virObjectEventStateFlush (state=0xaf5380) at ../../../src/conf/object_event.c:819
tempQueue = {
count = 0,
events = 0x0
}
#3 virObjectEventTimer (timer=<optimized out>, opaque=0xaf5380) at ../../../src/conf/object_event.c:560
state = 0xaf5380
#4 0x00007fc83280b7aa in virEventPollDispatchTimeouts () at ../../../src/util/vireventpoll.c:457
cb = 0x7fc8328a48d0 <virObjectEventTimer>
timer = 1
opaque = 0xaf5380
now = 1470212691501
i = 0
ntimeouts = 1
#5 virEventPollRunOnce () at ../../../src/util/vireventpoll.c:653
fds = 0x7fc824000920
ret = <optimized out>
timeout = <optimized out>
nfds = 1
__func__ = "virEventPollRunOnce"
__FUNCTION__ = "virEventPollRunOnce"
#6 0x00007fc83280a141 in virEventRunDefaultImpl () at ../../../src/util/virevent.c:314
__func__ = "virEventRunDefaultImpl"
#7 0x0000000000400b37 in loop (arg=0x0) at crash.c:8
__PRETTY_FUNCTION__ = "loop"
#v-
And the state of cbList:
#v+
>>> print *cbList
$2 = {
nextID = 11419456,
count = 1,
callbacks = 0x0
}
#v-
I have another thread, but it is just sleeping when the crash happens.
Here is the source code:
#+begin_src c
#include <stdio.h>
#include <unistd.h>
#include <assert.h>
#include <pthread.h>
#include <libvirt/libvirt.h>

void* loop(void *arg) {
    while (1) {
        assert(virEventRunDefaultImpl() >= 0);
    }
    return NULL;
}

void callback(virConnectPtr conn, virDomainPtr dom, void *opaque) {
    // Do nothing.
}

void freecb(void *opaque) {
    // Do nothing.
}

int main() {
    assert(virInitialize() >= 0);
    assert(virEventRegisterDefaultImpl() >= 0);

    pthread_t event_loop;
    assert(pthread_create(&event_loop, NULL, loop, NULL) == 0);

    virConnectPtr conn = virConnectOpen("test:///default");
    assert(conn != NULL);

    int cbid = virConnectDomainEventRegisterAny(conn, NULL,
                                                VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                                callback, NULL,
                                                freecb);
    assert(cbid != -1);

    virDomainPtr dom = virDomainDefineXML(conn,
                                          "<domain type=\"test\">"
                                          "<name>new VM</name>"
                                          "<memory unit=\"KiB\">8192</memory>"
                                          "<os><type>hvm</type> </os>"
                                          "</domain>");
    assert(dom != NULL);
    assert(virDomainCreate(dom) != -1);
    virDomainFree(dom);

    assert(virConnectDomainEventDeregisterAny(conn, cbid) != -1);
    if (virConnectClose(conn) > 0) {
        printf("leak...\n");
    }
    usleep(100000);
    return 0;
}
#+end_src
Running this program in an infinite loop triggers the bug in less than 1
second (most of the time, just after displaying "leak...").
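For completeness, this is roughly how I compile and loop it (the compile
line is from memory, adjust as needed):
gcc -o crash crash.c -lvirt -pthread
while ./crash; do :; done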
I am using libvirt 2.0.0 (in Debian). I have also filed the following
bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1363628
--
Write clearly - don't sacrifice clarity for "efficiency".
- The Elements of Programming Style (Kernighan & Plauger)
[libvirt-users] systemd restarts libvirt
by Ishmael Tsoaela
Hi All,
Can anyone please assist with this issue I am facing?
libvirtd keeps restarting continuously, which causes virt-manager to
disconnect.
syslogs:
systemd[1]: libvirt-bin.service: Start operation timed out. Terminating.
systemd[1]: Failed to start Virtualization daemon.
systemd[1]: libvirt-bin.service: Unit entered failed state.
systemd[1]: libvirt-bin.service: Failed with result 'timeout'.
systemd[1]: libvirt-bin.service: Service hold-off time over, scheduling
restart.
systemd[1]: Stopped Virtualization daemon.
systemd[1]: Starting Virtualization daemon...
dnsmasq[1296]: read /etc/hosts - 7 addresses
libvirtd (libvirt) 1.3.4
QEMU emulator version 2.3.0
virt-manager 1.4.0
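If more detail is needed, I can collect it with something like the following
(service name taken from the syslog above):
systemctl status libvirt-bin.service
journalctl -u libvirt-bin.service -b --no-pager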
[libvirt-users] Some question about code of xenapi
by netcas
Hello, guys! I am doing some experiments controlling XenServer with libvirt. When I define a domain, it always fails and gives the following message:
error: Failed to define domain from /root/new.xml
error: internal error: Couldn't get VM information from XML
So I dug into the source and found that it always fails at network creation. The code in libvirt-1.3.5/src/xenapi/xenapi_utils.c, starting at line 437, is the following:
xen_vif_create(session, &vif, vif_record);
if (!vif) {
    xen_vif_free(vif);
    xen_vif_record_free(vif_record);
    xen_network_record_free(net_rec);
    xen_network_set_free(net_set);
    return 0;
}
xen_vif_record_free(vif_record);
xen_network_record_free(net_rec);
... ...
return -1;
Should this if be if (vif) { ... }? If vif is NULL, why does it need to be
freed?
[libvirt-users] Live Disk Backup
by Prof. Dr. Michael Schefczyk
Dear All,
Using CentOS 7.2.1511 and libvirt from the oVirt repositories (currently 1.2.17-13.el7_2.5, but without otherwise using oVirt), I am regularly backing up my VMs, which are on qcow2 files. In general, I am trying to follow http://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit
A typical backup script would be:
#!/bin/bash
dt=`date +%y%m%d`
if virsh dominfo dockers10a | grep -q -E '^Status: *laufend|^State: *running'
then
    virsh snapshot-create-as --domain dockers10a dockers10a --diskspec vda,file=/home/dockers10asnap.qcow2 --disk-only --no-metadata --atomic
    cp /kvm01/dockers10a.qcow2 /backup/dockers10a$dt.qcow2
    virsh blockcommit dockers10a vda --active --verbose --pivot
    virsh snapshot-delete dockers10a dockers10a
    rm /home/dockers10asnap.qcow2
fi
I am fully aware that the third line from the end "virsh snapshot-delete ..." will fail under regular circumstances. It is just there as a precaution to delete unnecessary snapshots should a previous backup have failed.
For some time I have been noticing that the backup occasionally fails in such a way that the XML definition of the backed-up VM keeps the temporary file (in the example above, /home/dockers10asnap.qcow2) as the source file. Then, at least after rebooting the host, the VM cannot be restarted. In addition, lots of other trouble can arise (subsequent backups failing, storage issues).
I am using a similar setup on four hosts. It seems that the better the resources of the host are, the lower the likelihood of the problem occurring - but that cannot be an acceptable state.
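As a stopgap, I could imagine guarding the cleanup with a check that the pivot actually completed before removing the overlay, along these lines (domain name and paths as in the script above; untested sketch):
# only remove the temporary overlay if vda points back at the original image
if virsh domblklist dockers10a | grep -q '/kvm01/dockers10a.qcow2'
then
    rm /home/dockers10asnap.qcow2
else
    echo "blockcommit pivot did not complete for dockers10a; keeping overlay" >&2
fi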
Can someone please point me to how to avoid this?
Regards,
Michael Schefczyk