[libvirt-users] Live migration of instance using KVM hypervisor fails
by Gurjar, Unmesh
Hi,
I am trying to migrate a running instance, but it fails with the following error:
$ virsh migrate --live instance-00000008 qemu+tcp://10.2.3.150/system --verbose
error: operation failed: migration job: unexpectedly failed
I can see following in the instance specific qemu log directory (/var/log/libvirt/qemu/instance-00000008.log) on the destination host:
2012-04-12 03:57:26.211: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name instance-00000008 -uuid 86b357d0-b76d-4cde-a0d1-90fb65508ff2 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000008.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c -kernel /opt/stack/nova/mnt_instances/instance-00000008/kernel -initrd /opt/stack/nova/mnt_instances/instance-00000008/ramdisk -append root=/dev/vda console=ttyS0 -drive file=/opt/stack/nova/mnt_instances/instance-00000008/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,ifname=tap57b16714-9a,script=,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=fa:16:3e:45:b3:c6,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/opt/stack/nova/mnt_instances/instance-00000008/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -vga cirrus -incoming tcp:0.0.0.0:49169 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
Domain id=20 is tainted: high-privileges
Domain id=20 is tainted: shell-scripts
char device redirected to /dev/pts/17
2012-04-12 03:57:26.850: shutting down
can't delete tap57b16714-9a from eth1: Operation not supported
SIOCSIFADDR: Permission denied
SIOCSIFFLAGS: Permission denied
SIOCSIFFLAGS: Permission denied
/etc/qemu-ifdown: could not launch network script
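The SIOCSIFADDR/SIOCSIFFLAGS failures look like a privilege problem while
tearing down the tap device, so one thing I am double-checking on the
destination is which user QEMU runs as (a rough check; both keys are
commented out in the stock qemu.conf, in which case the distro default applies):
$ grep -E '^#?(user|group)' /etc/libvirt/qemu.conf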
Libvirt version on both hosts:
$ libvirtd --version
libvirtd (libvirt) 0.9.2
$ virsh --version
0.9.2
Here are my libvirtd.conf details:
listen_tls = 0
listen_tcp = 1
unix_sock_group = "libvirtd"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"
auth_tcp = "none"
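For completeness, this is how I verified the daemon side of the TCP setup
(the init defaults path is a guess for a Debian/Ubuntu-style install; adjust
for your distro):
$ grep -i opts /etc/default/libvirt-bin    # expecting the -l/--listen flag here
$ netstat -lntp | grep 16509               # 16509 is libvirt's plain TCP port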
It would be great if someone could point out the issue here.
Note: I have disabled the apparmor profile for libvirtd and am able to list
the instances running on the remote host (from both servers).
Thanks & Regards,
Unmesh Gurjar | Lead Engineer | Vertex Software Private Ltd. | w. +91.20.6604.1500 x 379 | m. +91.982.324.7631 | unmesh.gurjar(a)nttdata.com | Follow us on Twitter @NTTDATAAmericas
[libvirt-users] No way to obtain guest's cpu and mem usage?
by RaSca
Hi everybody,
I'm using the PHP API to make a web interface that interacts with the
virtual machines installed on some hypervisors.
Everything is fine, but I would like to find a way to get each guest's
CPU and memory usage, so that it would be possible to build some RRD
graphs. I haven't found anything, and from looking around it seems
there is no way to obtain this data.
What is strange to me is that programs like virt-top show exactly what
I'm looking for. Can anyone help me find a way to retrieve those
statistics?
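For what it's worth, the closest I have come so far is sampling the
cumulative CPU time that virsh dominfo reports and computing the delta
myself (a rough shell sketch, assuming a domain named myvm):
# sample the guest's cumulative CPU time twice, 5 seconds apart
t1=$(virsh dominfo myvm | awk '/CPU time/ {print $3}' | tr -d 's')
sleep 5
t2=$(virsh dominfo myvm | awk '/CPU time/ {print $3}' | tr -d 's')
# approximate load as a percentage of one host CPU over the interval
echo "scale=1; ($t2 - $t1) / 5 * 100" | bc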
Another related question: do you think I can obtain the same data by
using this:
ps aux | egrep "[/]usr/libexec/qemu-kvm.*libvirt.*" | awk '{print $3"/"$4}'
on the hypervisor? These are related to the kvm process, and the kvm
process IS the virtual machine... Or not?
Thanks to everybody,
--
RaSca
Mia Mamma Usa Linux: Nothing is impossible to understand, if you explain it well!
rasca(a)miamammausalinux.org
http://www.miamammausalinux.org
[libvirt-users] Libvirtd not starting on reboot
by Shawn Davis
Hello everyone,
I have libvirt and qemu/kvm running on Ubuntu 11.10 desktop. Everything is
working great while it is running. The problem is that after a reboot, virsh
no longer sees the hypervisor and I can't manually start libvirtd. Libvirtd is
currently located in /usr/sbin/libvirtd, and I originally started it with
sudo ./libvirtd -d from that directory. How do I ensure that libvirtd
starts on boot of the host machine?
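My next guess is to let the packaged init job handle this rather than
launching the binary by hand (a sketch, assuming the Ubuntu package
registered the job as libvirt-bin):
$ sudo service libvirt-bin start         # start the daemon now
$ sudo update-rc.d libvirt-bin defaults  # register it for startup at boot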
Thanks,
Shawn
[libvirt-users] vol-wipe and delete
by Shawn Davis
Hello all,
I know libvirt has the following two commands:
vol-wipe
vol-delete
Can someone please explain how these function? Do they only remove
previous session data so that a new virtual instance cannot see it, or do
they also remove it at the physical level, so that the data cannot be
found anywhere on the physical drive? I know Eucalyptus and KVM use sparse
files upon instance creation, and the instance thinks the ephemeral space is
all zeroes even though it is just sparse. I would appreciate any insight into
how the two virsh commands work and whether they use zeroes, sparse files, or
some other method.
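For concreteness, the invocations I am experimenting with look like this
(assuming a storage pool named default and a volume named guest1.img):
$ virsh vol-wipe --pool default guest1.img    # overwrite the volume's data
$ virsh vol-delete --pool default guest1.img  # then remove the volume itself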
Thanks,
Shawn
[libvirt-users] virsh attach-disk with cache=none and io=native for raw devices (online)
by Frido Roose
Hello,
I see there is an option with virsh attach-disk to set the cache to "none" for raw devices, but I can't find how to attach the disk with io=native (needed for performance reasons).
The goal is to attach a disk online, directly with the proper performance settings.
I know I can set this in virt-manager, but then I have to restart the VM to apply the change, so this is an offline operation.
With virsh edit, I guess I also would have to restart the VM...
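The workaround I am considering is virsh attach-device with a hand-written
XML snippet, since the driver element takes an io attribute there (an
untested sketch; the source device and target name are placeholders):
$ cat disk.xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/mapper/lun0'/>
  <target dev='vdb' bus='virtio'/>
</disk>
$ virsh attach-device <domain> disk.xml
Does that sound right, or is the io attribute ignored on hotplug?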
--
Frido Roose
[libvirt-users] PCI passthrough error
by Jaap Winius
Hi folks,
Has anyone encountered the following PCI passthrough error?
error: internal error Process exited while reading console \
log output: char device redirected to /dev/pts/1
assigned_dev_pci_read: pread failed, ret = 0 errno = 2
It's produced after I've detached the PCI device from the base OS and
have tried to start up the guest domain.
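For reference, the detach itself was done through virsh along these lines
(illustrative; the nodedev name corresponds to 00:1a.0, and virsh of this
vintage spells the subcommand with a double "t"):
$ virsh nodedev-dettach pci_0000_00_1a_0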
To get to this point, I mostly followed these instructions:
http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/c...
The distro I'm using is Debian squeeze, which by default comes with
libvirt 0.8.3 and qemu-kvm 0.12.5, although to avoid a different PCI
passthrough error, I used Debian backports for squeeze to upgrade them
to libvirt 0.9.8 and qemu-kvm 1.0.
The motherboard involved has VT-d support, which I've enabled with the
"intel_iommu=on" kernel option (dmesg shows "Intel-IOMMU: enabled"). I
did not bother with setsebool because SELinux is disabled.
According to lspci, the device I want to pass through to the guest
domain, a USB controller, has bus/slot/function 00:1a.0, so I added
the following stanza to the <devices> section of my guest domain:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1a' function='0x0'/>
  </source>
</hostdev>
Actually, every time I save this configuration, libvirt changes it to:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1a' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
Huh, slot 5? I don't have any PCI devices that use slot 5. Well, at
least the system doesn't complain, but I worry that this might be a
symptom of something more serious.
Anyway, I'd be very grateful if anyone has any tips on how I might
avoid the aforementioned error and get PCI passthrough to work.
Thanks,
Jaap
[libvirt-users] Libvirt-0.9.11 compilation on Centos 5.6
by Rajat Mehrotra
Hi All,
I am trying to compile libvirt-0.9.11 on CentOS 5.6 (32-bit) for Xen 4.1.
I have followed this procedure to compile the libvirt library:
1. ./autogen.sh --system --with-xen
2. make
While running "make", I receive the following errors:
-------------------------------------------------------------------------------------------------------------
CC libvirt_iohelper-iohelper.o
CCLD libvirt_iohelper
CC libvirt_parthelper-parthelper.o
CCLD libvirt_parthelper
libvirt_parthelper-parthelper.o: In function `main':
/root/Downloads/libvirt-0.9.11/src/storage/parthelper.c:103: undefined
reference to `ped_device_get'
/root/Downloads/libvirt-0.9.11/src/storage/parthelper.c:117: undefined
reference to `ped_disk_new'
/root/Downloads/libvirt-0.9.11/src/storage/parthelper.c:123: undefined
reference to `ped_disk_next_partition'
/root/Downloads/libvirt-0.9.11/src/storage/parthelper.c:173: undefined
reference to `ped_disk_next_partition'
collect2: ld returned 1 exit status
make[3]: *** [libvirt_parthelper] Error 1
make[3]: Leaving directory `/root/Downloads/libvirt-0.9.11/src'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/root/Downloads/libvirt-0.9.11/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/Downloads/libvirt-0.9.11'
make: *** [all] Error 2
------------------------------------------------------------------------------------------------------------
I have installed parted (GNU parted) 3.0 (previously it was 1.8.1).
I have seen that a lot of people have reported this error but no one has
shared the solution.
Does anyone remember the solution?
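One thing I still plan to try is reinstalling the libparted development
files and rebuilding from scratch, in case the linker is simply not finding
the library (assuming the CentOS package name parted-devel):
$ yum install parted-devel   # headers plus libparted for the linker
$ ./autogen.sh --system --with-xen
$ make clean && make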
Thanks in advance.
Rajat
[libvirt-users] virsh works but libvirt-java throws a 'No route to host'
by Jon Drews
I'm able to connect to my XenServer using virsh like so, and I can see the VMs:
[jdrews@flynx ~]$ virsh -c xenapi://root@192.168.1.23?no_verify=1 list
Enter root's password for 192.168.1.23:
Id Name State
----------------------------------------------------
Struct did not contain expected field memory_overhead.
Struct did not contain expected field snapshot_info.
Struct did not contain expected field snapshot_metadata.
Struct did not contain expected field parent.
Struct did not contain expected field children.
Struct did not contain expected field bios_strings.
Struct did not contain expected field protection_policy.
Struct did not contain expected field is_snapshot_from_vmpp.
Struct did not contain expected field appliance.
Struct did not contain expected field start_delay.
Struct did not contain expected field shutdown_delay.
Struct did not contain expected field order.
Struct did not contain expected field VGPUs.
Struct did not contain expected field attached_PCIs.
Struct did not contain expected field suspend_SR.
Struct did not contain expected field version.
0 Control domain on host: xenserver-4 running
[the same sixteen "Struct did not contain expected field ..." warnings are
printed again before each of the remaining domains; trimmed here for brevity]
23 VM1 (RH5.2 10.4.1.69 eth0) running
25 DeviceSimulator (192.168.1.150) running
28 VM3 (RH 5.2 10.4.1.15 eth0) running
30 Router 10.4.1.x (192.168.1.254) running
I'm using a compiled version of libvirt in which I enabled the xenapi protocol.
[jdrews@flynx ~]$ virsh -v
0.9.10
I can also start and stop VMs via virsh.
So the next move for me was to get this working with the java bindings,
libvirt-java. I grabbed libvirt-0.4.7.jar and built it. Eclipse is set up
and I can see everything correctly. I built a small runnable test jar and
ran it.
[jdrews@flynx ~]$ java -jar VirtAPITesting.jar
connecting on: xenapi://root@192.168.1.23?no_verify=1
Enter root's password for 192.168.1.23
WARNING: THE ENTERED PASSWORD WILL NOT BE MASKED!
mytestpass
exception caught:org.libvirt.LibvirtException: unable to connect to server
at '192.168.1.23:16514': No route to host
level:VIR_ERR_ERROR
code:VIR_ERR_SYSTEM_ERROR
domain:VIR_FROM_RPC
hasConn:false
hasDom:false
hasNet:false
message:unable to connect to server at '192.168.1.23:16514': No route to
host
str1:%s
str2:unable to connect to server at '192.168.1.23:16514': No route to host
str3:null
int1:-1
int2:-1
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at
org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
Caused by: java.lang.NullPointerException
at
com.codemettle.VirtAPI.testing.VirtAPITesting.main(VirtAPITesting.java:29)
... 5 more
But I know there is a route to the host as virsh could get there. Sanity
check: ping works fine too.
[jdrews@flynx ~]$ ping 192.168.1.23
PING 192.168.1.23 (192.168.1.23) 56(84) bytes of data.
64 bytes from 192.168.1.23: icmp_req=1 ttl=64 time=1.58 ms
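One more check I want to run: 16514 is libvirt's TLS port, which makes me
suspect the Java binding is opening a remote-driver TLS connection instead
of talking xenapi directly (a quick probe, assuming nc is installed):
$ nc -zv 192.168.1.23 16514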
Here is the code I'm running in the VirtAPITesting.jar. For the most part
it's a direct copy of the example at the end of the libvirt java page:
http://libvirt.org/java.html
package com.jdrews.VirtAPI.testing;

import org.libvirt.*;

public class VirtAPITesting {
    public static void main(String[] args) throws InterruptedException {
        // libvirt is installed in /usr/local, so point JNA at it
        System.setProperty("jna.library.path", "/usr/local/lib/");
        Connect conn = null;
        System.out.println("connecting on: xenapi://root@192.168.1.23?no_verify=1");
        ConnectAuth defaultAuth = new ConnectAuthDefault();
        try {
            conn = new Connect("xenapi://root@192.168.1.23?no_verify=1",
                    defaultAuth, 0);
        } catch (LibvirtException e) {
            System.out.println("exception caught:" + e);
            System.out.println(e.getError());
        }
        try {
            Domain testDomain = conn.domainLookupByName("DeviceSimulator (192.168.1.150)");
            System.out.println("Domain:" + testDomain.getName() + " id " +
                    testDomain.getID() + " running " +
                    testDomain.getOSType());
        } catch (LibvirtException e) {
            System.out.println("exception caught:" + e);
            System.out.println(e.getError());
        }
    }
}
Does anyone know what's going on here? Any help or advice would be greatly
appreciated. Thanks!
--
Jon Drews
jondrews.com
[libvirt-users] NFS + sanlock problems
by Kyrre Begnum
Hello list,
In short: I am experiencing a problem when trying to start a VM that is
already running on a different libvirt host. Starting it fails, thanks to
sanlock, but the disk dies on the original hypervisor and the VM's
filesystem is rendered read-only.
Long version:
I have the following setup:
Two machines (H1 and H2) using CentOS 6.2 with the packages:
qemu-kvm-0.12.1.2-2.209.el6_2.4.x86_64
libvirt-client-0.9.4-23.el6_2.7.x86_64
libvirt-0.9.4-23.el6_2.7.x86_64
libvirt-lock-sanlock-0.9.4-23.el6_2.7.x86_64
They share two NFS folders: one for filesystems (set up as a libvirt
storage pool) and one for sanlock.
sanlock is started without a watchdog.
/etc/libvirt/qemu-sanlock.conf:
auto_disk_leases = 1
disk_lease_dir = "/opt/mln/sanlock"
host_id = 51
( host_id is 52 on H2 )
cat /etc/libvirt/qemu.conf | grep sanlock
lock_manager = "sanlock"
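As a sanity check on the shared lockspace, the daemon state can be inspected
on both hosts with the stock sanlock client tooling:
$ sanlock client status   # lists the lockspaces and resources the daemon holds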
A virtual machine is created (with virsh create) on H1 using the
following xml:
<domain type='kvm' >
<name>rhel1.locktest</name>
<memory>524288</memory>
<vcpu></vcpu>
<os>
<type arch='x86_64' >hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/opt/mln/projects/root/locktest/images/rhel1'/>
<target dev='hda' bus='ide'/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' unit='0'/>
</disk>
<disk type='block' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' unit='0'/>
</disk>
<controller type='ide' index='0'>
<alias name='ide0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<interface type='bridge'>
<source bridge='br0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/8'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/8'>
<source path='/dev/pts/8'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='mouse' bus='ps2'/>
<sound model='ich6'>
<alias name='sound0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</sound>
<graphics type='spice' />
<video>
<model type='qxl' heads='1'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c141,c961</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c141,c961</imagelabel>
</seclabel>
</domain>
Here is the qemu log (with debugging enabled) from H1:
2012-04-08 22:03:13.346: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice
/usr/bin/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp
1,sockets=1,cores=1,threads=1 -name rhel1.locktest -uuid
e3bc07fe-8b01-5f90-7c91-ad154630379e -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel1.locktest.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -drive
file=/opt/mln/projects/root/locktest/images/rhel1,if=none,id=drive-ide0-0-0,format=raw
-device
ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-device rtl8139,vlan=0,id=net0,mac=52:54:00:b6:8d:14,bus=pci.0,addr=0x3
-net tap,fd=25,vlan=0,name=hostnet0 -chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -usb -spice
port=5900,addr=127.0.0.1,disable-ticketing -vga qxl -global
qxl-vga.vram_size=67108864 -device
intel-hda,id=sound0,bus=pci.0,addr=0x4 -device
hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
2012-04-08 20:03:13.351+0000: 3485: info : libvirt version: 0.9.4,
package: 23.el6_2.7 (CentOS BuildSystem <http://bugs.centos.org>,
2012-03-26-14:12:59, c6b5.bsys.dev.centos.org)
2012-04-08 20:03:13.351+0000: 3485: debug : virCommandHook:1920 : Run
hook 0x484a70 0x7faa81356610
2012-04-08 20:03:13.351+0000: 3485: debug : qemuProcessHook:2147 :
Obtaining domain lock
2012-04-08 20:03:13.351+0000: 3485: debug : virDomainLockManagerNew:123
: plugin=0x7faa74018450 dom=0x7faa680008c0 withResources=1
2012-04-08 20:03:13.351+0000: 3485: debug : virLockManagerNew:291 :
plugin=0x7faa74018450 type=0 nparams=4 params=0x7faa81355a60 flags=0
2012-04-08 20:03:13.351+0000: 3485: debug : virLockManagerLogParams:98
: key=uuid type=uuid value=e3bc07fe-8b01-5f90-7c91-ad154630379e
2012-04-08 20:03:13.351+0000: 3485: debug : virLockManagerLogParams:94
: key=name type=string value=rhel1.locktest
2012-04-08 20:03:13.351+0000: 3485: debug : virLockManagerLogParams:82
: key=id type=uint value=2
2012-04-08 20:03:13.351+0000: 3485: debug : virLockManagerLogParams:82
: key=pid type=uint value=3485
2012-04-08 20:03:13.351+0000: 3485: debug : virDomainLockManagerNew:135
: Adding leases
2012-04-08 20:03:13.351+0000: 3485: debug : virDomainLockManagerNew:140
: Adding disks
2012-04-08 20:03:13.351+0000: 3485: debug :
virDomainLockManagerAddDisk:86 : Add disk
/opt/mln/projects/root/locktest/images/rhel1
2012-04-08 20:03:13.351+0000: 3485: debug :
virLockManagerAddResource:320 : lock=0x7faa680021d0 type=0
name=/opt/mln/projects/root/locktest/images/rhel1 nparams=0 params=(nil)
flags=0
2012-04-08 20:03:13.352+0000: 3485: debug : virLockManagerAcquire:337 :
lock=0x7faa680021d0 state='(null)' flags=3 fd=0x7faa81355bcc
2012-04-08 20:03:13.352+0000: 3485: debug :
virLockManagerSanlockAcquire:725 : Register sanlock 3
2012-04-08 20:03:13.353+0000: 3485: debug :
virLockManagerSanlockAcquire:783 : Acquire completed fd=3
2012-04-08 20:03:13.353+0000: 3485: debug : virLockManagerFree:374 :
lock=0x7faa680021d0
2012-04-08 20:03:13.353+0000: 3485: debug : qemuProcessHook:2172 :
Moving procss to cgroup
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupNew:602 : New group
/libvirt/qemu/rhel1.locktest
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupDetect:261 :
Detected mount/mapping 0:cpu at /cgroup/cpu in
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupDetect:261 :
Detected mount/mapping 1:cpuacct at /cgroup/cpuacct in
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupDetect:261 :
Detected mount/mapping 2:cpuset at /cgroup/cpuset in
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupDetect:261 :
Detected mount/mapping 3:memory at /cgroup/memory in
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupDetect:261 :
Detected mount/mapping 4:devices at /cgroup/devices in
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupDetect:261 :
Detected mount/mapping 5:freezer at /cgroup/freezer in
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupDetect:261 :
Detected mount/mapping 6:blkio at /cgroup/blkio in
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupMakeGroup:523 :
Make group /libvirt/qemu/rhel1.locktest
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/cpu/libvirt/qemu/rhel1.locktest/
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/cpuacct/libvirt/qemu/rhel1.locktest/
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/cpuset/libvirt/qemu/rhel1.locktest/
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/memory/libvirt/qemu/rhel1.locktest/
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/devices/libvirt/qemu/rhel1.locktest/
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/freezer/libvirt/qemu/rhel1.locktest/
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/blkio/libvirt/qemu/rhel1.locktest/
2012-04-08 20:03:13.353+0000: 3485: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/cpu/libvirt/qemu/rhel1.locktest/tasks' to '3485'
2012-04-08 20:03:13.359+0000: 3485: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/cpuacct/libvirt/qemu/rhel1.locktest/tasks' to '3485'
2012-04-08 20:03:13.367+0000: 3485: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/cpuset/libvirt/qemu/rhel1.locktest/tasks' to '3485'
2012-04-08 20:03:13.375+0000: 3485: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/memory/libvirt/qemu/rhel1.locktest/tasks' to '3485'
2012-04-08 20:03:13.383+0000: 3485: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/devices/libvirt/qemu/rhel1.locktest/tasks' to '3485'
2012-04-08 20:03:13.391+0000: 3485: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/freezer/libvirt/qemu/rhel1.locktest/tasks' to '3485'
2012-04-08 20:03:13.399+0000: 3485: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/blkio/libvirt/qemu/rhel1.locktest/tasks' to '3485'
2012-04-08 20:03:13.407+0000: 3485: debug : qemuProcessHook:2178 : Setup
CPU affinity
2012-04-08 20:03:13.407+0000: 3485: debug :
qemuProcessInitCpuAffinity:1464 : Setting CPU affinity
2012-04-08 20:03:13.407+0000: 3485: debug : qemuProcessHook:2185 :
Setting up security labelling
2012-04-08 20:03:13.407+0000: 3485: debug :
virSecurityDACSetProcessLabel:630 : Dropping privileges of VM to 107:107
2012-04-08 20:03:13.408+0000: 3485: debug : qemuProcessHook:2192 : Hook
complete ret=0
2012-04-08 20:03:13.408+0000: 3485: debug : virCommandHook:1922 : Done
hook 0
2012-04-08 20:03:13.408+0000: 3485: debug : virCommandHook:1935 :
Notifying parent for handshake start on 27
2012-04-08 20:03:13.408+0000: 3485: debug : virCommandHook:1956 :
Waiting on parent for handshake complete on 28
2012-04-08 20:03:13.409+0000: 3485: debug : virCommandHook:1972 : Hook
is done 0
char device redirected to /dev/pts/1
do_spice_init: starting 0.8.3
spice_server_add_interface: SPICE_INTERFACE_MIGRATION
spice_server_add_interface: SPICE_INTERFACE_KEYBOARD
spice_server_add_interface: SPICE_INTERFACE_MOUSE
spice_server_add_interface: SPICE_INTERFACE_QXL
red_worker_main: begin
spice_server_add_interface: SPICE_INTERFACE_PLAYBACK
spice_server_add_interface: SPICE_INTERFACE_RECORD
handle_dev_input: start
I can log in to the VM and everything is fine. Next, I try to start the
VM on H2 and I get the following error:
error: Failed to create domain from libvirt/rhel1.locktest.xml
error: internal error Failed to acquire lock: error -243
All seems well, but now the VM's filesystem is in a read-only state...
This is the qemu log from H2:
2012-04-08 22:05:27.833: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice
/usr/bin/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp
1,sockets=1,cores=1,threads=1 -name rhel1.locktest -uuid
b1aa8f3f-4391-1014-3759-79d47a4dffdd -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel1.locktest.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -drive
file=/opt/mln/projects/root/locktest/images/rhel1,if=none,id=drive-ide0-0-0,format=raw
-device
ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-device rtl8139,vlan=0,id=net0,mac=52:54:00:d7:a4:4d,bus=pci.0,addr=0x3
-net tap,fd=25,vlan=0,name=hostnet0 -chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -usb -spice
port=5900,addr=127.0.0.1,disable-ticketing -vga qxl -global
qxl-vga.vram_size=67108864 -device
intel-hda,id=sound0,bus=pci.0,addr=0x4 -device
hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
2012-04-08 20:05:27.847+0000: 5141: info : libvirt version: 0.9.4,
package: 23.el6_2.7 (CentOS BuildSystem <http://bugs.centos.org>,
2012-03-26-14:12:59, c6b5.bsys.dev.centos.org)
2012-04-08 20:05:27.847+0000: 5141: debug : virCommandHook:1920 : Run
hook 0x484a70 0x7f9b3c673610
2012-04-08 20:05:27.848+0000: 5141: debug : qemuProcessHook:2147 :
Obtaining domain lock
2012-04-08 20:05:27.848+0000: 5141: debug : virDomainLockManagerNew:123
: plugin=0x7f9b30016aa0 dom=0x7f9b2000b6a0 withResources=1
2012-04-08 20:05:27.848+0000: 5141: debug : virLockManagerNew:291 :
plugin=0x7f9b30016aa0 type=0 nparams=4 params=0x7f9b3c672a60 flags=0
2012-04-08 20:05:27.848+0000: 5141: debug : virLockManagerLogParams:98
: key=uuid type=uuid value=b1aa8f3f-4391-1014-3759-79d47a4dffdd
2012-04-08 20:05:27.848+0000: 5141: debug : virLockManagerLogParams:94
: key=name type=string value=rhel1.locktest
2012-04-08 20:05:27.848+0000: 5141: debug : virLockManagerLogParams:82
: key=id type=uint value=9
2012-04-08 20:05:27.848+0000: 5141: debug : virLockManagerLogParams:82
: key=pid type=uint value=5141
2012-04-08 20:05:27.848+0000: 5141: debug : virDomainLockManagerNew:135
: Adding leases
2012-04-08 20:05:27.848+0000: 5141: debug : virDomainLockManagerNew:140
: Adding disks
2012-04-08 20:05:27.848+0000: 5141: debug :
virDomainLockManagerAddDisk:86 : Add disk
/opt/mln/projects/root/locktest/images/rhel1
2012-04-08 20:05:27.848+0000: 5141: debug :
virLockManagerAddResource:320 : lock=0x7f9b2000dd90 type=0
name=/opt/mln/projects/root/locktest/images/rhel1 nparams=0 params=(nil)
flags=0
2012-04-08 20:05:27.849+0000: 5141: debug : virLockManagerAcquire:337 :
lock=0x7f9b2000dd90 state='(null)' flags=3 fd=0x7f9b3c672bcc
2012-04-08 20:05:27.850+0000: 5141: debug :
virLockManagerSanlockAcquire:725 : Register sanlock 3
2012-04-08 20:05:27.850+0000: 5141: debug :
virLockManagerSanlockAcquire:783 : Acquire completed fd=3
2012-04-08 20:05:27.850+0000: 5141: debug : virLockManagerFree:374 :
lock=0x7f9b2000dd90
2012-04-08 20:05:27.850+0000: 5141: debug : qemuProcessHook:2172 :
Moving procss to cgroup
2012-04-08 20:05:27.850+0000: 5141: debug : virCgroupNew:602 : New group
/libvirt/qemu/rhel1.locktest
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupDetect:261 :
Detected mount/mapping 0:cpu at /cgroup/cpu in
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupDetect:261 :
Detected mount/mapping 1:cpuacct at /cgroup/cpuacct in
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupDetect:261 :
Detected mount/mapping 2:cpuset at /cgroup/cpuset in
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupDetect:261 :
Detected mount/mapping 3:memory at /cgroup/memory in
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupDetect:261 :
Detected mount/mapping 4:devices at /cgroup/devices in
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupDetect:261 :
Detected mount/mapping 5:freezer at /cgroup/freezer in
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupDetect:261 :
Detected mount/mapping 6:blkio at /cgroup/blkio in
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupMakeGroup:523 :
Make group /libvirt/qemu/rhel1.locktest
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/cpu/libvirt/qemu/rhel1.locktest/
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/cpuacct/libvirt/qemu/rhel1.locktest/
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/cpuset/libvirt/qemu/rhel1.locktest/
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/memory/libvirt/qemu/rhel1.locktest/
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/devices/libvirt/qemu/rhel1.locktest/
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/freezer/libvirt/qemu/rhel1.locktest/
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupMakeGroup:545 :
Make controller /cgroup/blkio/libvirt/qemu/rhel1.locktest/
2012-04-08 20:05:27.851+0000: 5141: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/cpu/libvirt/qemu/rhel1.locktest/tasks' to '5141'
2012-04-08 20:05:27.865+0000: 5141: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/cpuacct/libvirt/qemu/rhel1.locktest/tasks' to '5141'
2012-04-08 20:05:27.873+0000: 5141: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/cpuset/libvirt/qemu/rhel1.locktest/tasks' to '5141'
2012-04-08 20:05:27.881+0000: 5141: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/memory/libvirt/qemu/rhel1.locktest/tasks' to '5141'
2012-04-08 20:05:27.893+0000: 5141: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/devices/libvirt/qemu/rhel1.locktest/tasks' to '5141'
2012-04-08 20:05:27.901+0000: 5141: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/freezer/libvirt/qemu/rhel1.locktest/tasks' to '5141'
2012-04-08 20:05:27.909+0000: 5141: debug : virCgroupSetValueStr:319 :
Set value '/cgroup/blkio/libvirt/qemu/rhel1.locktest/tasks' to '5141'
2012-04-08 20:05:27.917+0000: 5141: debug : qemuProcessHook:2178 : Setup
CPU affinity
2012-04-08 20:05:27.917+0000: 5141: debug :
qemuProcessInitCpuAffinity:1464 : Setting CPU affinity
2012-04-08 20:05:27.918+0000: 5141: debug : qemuProcessHook:2185 :
Setting up security labelling
2012-04-08 20:05:27.918+0000: 5141: debug :
virSecurityDACSetProcessLabel:630 : Dropping privileges of VM to 107:107
2012-04-08 20:05:27.918+0000: 5141: debug : qemuProcessHook:2192 : Hook
complete ret=0
2012-04-08 20:05:27.918+0000: 5141: debug : virCommandHook:1922 : Done
hook 0
2012-04-08 20:05:27.918+0000: 5141: debug : virCommandHook:1935 :
Notifying parent for handshake start on 27
2012-04-08 20:05:27.919+0000: 5141: debug : virCommandHook:1956 :
Waiting on parent for handshake complete on 28
2012-04-08 20:05:27.920+0000: 5141: debug : virCommandHook:1972 : Hook
is done 0
char device redirected to /dev/pts/1
do_spice_init: starting 0.8.3
spice_server_add_interface: SPICE_INTERFACE_MIGRATION
spice_server_add_interface: SPICE_INTERFACE_KEYBOARD
spice_server_add_interface: SPICE_INTERFACE_MOUSE
spice_server_add_interface: SPICE_INTERFACE_QXL
red_worker_main: begin
spice_server_add_interface: SPICE_INTERFACE_PLAYBACK
spice_server_add_interface: SPICE_INTERFACE_RECORD
2012-04-08 22:05:28.141: shutting down
qemu: terminating on signal 15 from pid 3138
Back on H1 there are multiple lines in the qemu log for the VM:
block I/O error in device 'drive-ide0-0-0': Input/output error (5)
block I/O error in device 'drive-ide0-0-0': Input/output error (5)
block I/O error in device 'drive-ide0-0-0': Input/output error (5)
block I/O error in device 'drive-ide0-0-0': Input/output error (5)
block I/O error in device 'drive-ide0-0-0': Input/output error (5)
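The only other breadcrumb I have found so far is the sanlock daemon log on
both hosts (path assuming the stock packaging):
$ tail -n 20 /var/log/sanlock.log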
Can someone please help me understand why the filesystem dies on the
original host? Is this an NFS problem? What should I do differently?
Many thanks in advance,
K