[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an RBD pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually.
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my RBD disks?
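I would expect something along these lines to work (a sketch, assuming the
libvirt build includes the RBD storage backend; the volume name, size and
VM name below are only illustrative), but I may be missing a step:

# create a raw volume inside the pool (RBD only supports raw)
virsh vol-create-as myrbdpool kvm01-storage 20G --format raw
# point virt-install at the pool volume via --disk vol=pool/volume
virt-install --name kvm01 --ram 1024 --import \
  --disk vol=myrbdpool/kvm01-storage,bus=virtio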
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests were assigned duplicate
MAC addresses. These guests were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate MAC addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
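For illustration, the collision is easy to reproduce from a shell ($$ stands
in for libvirtd's PID here):

# libvirt's seed is time(NULL) ^ getpid(); hosts rebooting together
# share the timestamp, and parallel boots land the daemon PIDs in a
# narrow band, so the XOR values differ only in a few low bits
echo $(( $(date +%s) ^ $$ ))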
[libvirt-users] [RFC] per-device metadata
by Francesco Romani
Hi,
Currently libvirt supports metadata in the domain XML. This is very
convenient for data related to the VM, but it is a little awkward for
devices. Let's pretend I want to have extradata (say, a specific port
for a virtual switch) related to a device (say, a NIC). Nowadays I can
store that data in the metadata section, but I need some kind of mapping
to correlate this piece of information to the specific device.
I can use the device alias, but this is not available when the device is
created. This is also more complex when doing hotplug/hotunplug, because
I need to do update device and update metadata; if either fails, the
entire operation must be considered failed.
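To make the mapping concrete, today the best I can do is something like the
following in the domain <metadata> element (the namespace and attributes
here are made up for the example):

<metadata>
  <app:devices xmlns:app="http://example.org/app/1">
    <!-- keyed by device alias; fragile, since the alias does not
         exist before the device is created -->
    <app:nic alias="net0" switchport="42"/>
  </app:devices>
</metadata>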
It would be nice to be able to attach metadata to the device, and this
is what I'm asking for/proposing in this mail.
Would it be possible in a future libvirt release?
If this is not possible, what's the best way to do the aforementioned
mapping? If it's the alias (or device address), how can I be sure that I'm
addressing (no pun intended) the right device when I don't have them yet
(e.g. a newly hotplugged device, or the first time the VM is created)?
Thanks,
--
Francesco Romani
Red Hat Engineering Virtualization R & D
IRC: fromani
[libvirt-users] Bluetooth Device Support
by Max Ehrlich
Hi,
I want to create a virtual HCI device on my virtual machine. I have seen
that qemu has options supporting this:
https://qemu.weilnetz.de/doc/qemu-doc.html#Bluetooth_0028R_0029-options
and
https://qemu.weilnetz.de/doc/qemu-doc.html#pcsys_005fusb
Is there any support for these options in libvirt? I was not able to find
anything in the documentation, so I added the qemu command line XML as follows:
<qemu:commandline>
  <qemu:arg value='-usbdevice'/>
  <qemu:arg value='bt:hci'/>
  <qemu:arg value='-bt'/>
  <qemu:arg value='hci,host'/>
</qemu:commandline>
but when I try to run my machine from virt-manager I get the following
error:
Error starting domain: internal error: process exited while connecting to
monitor: 2017-02-28T21:25:51.735987Z qemu-system-x86_64: -device
ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x1d.0x7: PCI: slot 29 function 7 not
available for ich9-usb-ehci1, in use by ich9-usb-ehci1
This error does not appear if I remove the commandline XML nodes.
Does anyone have any guidance for troubleshooting this?
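One way to narrow it down might be to run qemu by hand with only the
Bluetooth options (the flags are copied from the XML above; the memory size
is arbitrary) and see whether the USB controller conflict appears without
libvirt's auto-added controllers:

qemu-system-x86_64 -m 512 -usbdevice bt:hci -bt hci,host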
If it helps at all, virsh -v gives 2.1.0, the virt-manager about page lists
1.3.2, and qemu-system-x86_64 --help gives 2.6.1. I am running Ubuntu 16.10.
thanks,
Max Ehrlich
[libvirt-users] Redhat 7: cgroup CPUACCT controller is not mounted
by youssef.elfathi@orange.com
Hi,
With a non-root user account, I am launching virtual machines and would like to get CPU stats for each core (using the python API or not), but I face the following problem:
- When I issue the command "virsh --readonly cpu-stats MY_DOMAIN", I get the following error:
error: Failed to retrieve CPU statistics for domain 'MY_DOMAIN'
error: Requested operation is not valid: cgroup CPUACCT controller is not mounted
- I checked that the cgroup controllers are properly mounted:
$ cat /proc/mounts | grep cgroup
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
$ cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
cpuset 6 1 1
cpu 7 1 1
cpuacct 7 1 1
memory 11 1 1
devices 2 1 1
freezer 5 1 1
net_cls 8 1 1
blkio 10 1 1
perf_event 9 1 1
hugetlb 3 1 1
pids 4 1 1
net_prio 8 1 1
- I checked systemd-cgtop, but there is no CPU info for my VMs (only the first line, /, shows CPU):
Path Tasks %CPU Memory Input/s Output/s
/ 332 808.0 21.3G - -
/system.slice/auditd.service 1 - - - -
/system.slice/crond.service 1 - - - -
/system.slice/dbus.service 1 - - - -
/system.slice/gssproxy.service 1 - - - -
/system.slice/irqbalance.service 1 - - - -
/system.slice/ksmtuned.service 2 - - - -
/system.slice/libvirtd.service 1 - - - -
/system.slice/lvm2-lvmetad.service 1 - - - -
/system.slice/polkit.service 1 - - - -
/system.slice/rhnsd.service 1 - - - -
/system.slice/rhsmcertd.service 1 - - - -
/system.slice/rsyslog.service 1 - - - -
/system.slice/sshd.service 1 - - - -
/system.slice/system-getty.slice/getty@tty1.service 1 - - - -
/system.slice/systemd-journald.service 1 - - - -
/system.slice/systemd-logind.service 1 - - - -
/system.slice/systemd-udevd.service 1 - - - -
/system.slice/tuned.service 1 - - - -
/user.slice/user-3972.slice/session-15191.scope 3 - - - -
/user.slice/user-3972.slice/session-16005.scope 4 - - - -
/user.slice/user-3972.slice/session-16019.scope 10 - - - -
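Since I start the VMs from a non-root account: domains run through the
unprivileged qemu:///session driver are not placed in cgroups at all, which
can produce exactly this error. Whether the qemu process is in a cpuacct
group can be checked directly (a sketch; MY_DOMAIN as above):

pid=$(pgrep -f "qemu.*MY_DOMAIN" | head -1)
grep cpuacct /proc/$pid/cgroup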
Thanks in advance for your help!
Regards,
Youssef
[libvirt-users] shutdown -r now hangs in qemu-/kvm-vm
by Dan Johansson
Since updating app-emulation/libvirt and/or app-emulation/qemu (both
were updated at the same time) I have a problem executing "shutdown -r
now" in the VM ("shutdown -h now" works fine).
When I execute "shutdown -r now" in the VM, the shutdown process runs
perfectly until "Remounting root-filesystem readonly", and then it hangs
and I have to "Force Power Off" to reboot.
Any suggestions as to what could be wrong and what I can do to solve it?
--
Dan Johansson,
***************************************************
This message is printed on 100% recycled electrons!
***************************************************
[libvirt-users] error : Failed to switch root mount into slave mode: Permission denied
by Kyle Peterson
libvirt-3.0.0
When attempting to create a virtual machine I receive the error "error : Failed to switch root mount into slave mode: Permission denied".
I'm attempting to run qemu/libvirt/virt-manager in an Arch Linux lxc container on an Ubuntu 16.04 host. The host uses zfs for its containers. The arch container is set up as a privileged container. I already have kvm/qemu/libvirt working in an Ubuntu container. The reason for the arch container is that I want to try a newer version of qemu/libvirt.
I’m not finding anything on google about this error message. Any way to get around it?
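If the failure comes from libvirt 3.0's new QEMU mount-namespace feature
(it remounts / as a slave mount before starting a guest, which the
container's confinement may forbid), disabling that feature is worth a
try; a sketch:

# in /etc/libvirt/qemu.conf, turn off the per-VM mount namespace,
# then restart libvirtd
namespaces = [ ]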
[root@arch ~]# uname -a
Linux arch 4.8.0-39-generic #42~16.04.1-Ubuntu SMP Mon Feb 20 15:06:07 UTC 2017 x86_64 GNU/Linux
[root@arch ~]# cat /proc/mounts
storage/lxd_root/containers/arch / zfs rw,noatime,xattr,posixacl 0 0
none /dev tmpfs rw,relatime,size=492k,mode=755 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
proc /proc/sys/net proc rw,nosuid,nodev,noexec,relatime 0 0
proc /proc/sys proc ro,nosuid,nodev,noexec,relatime 0 0
proc /proc/sysrq-trigger proc ro,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs ro,nosuid,nodev,noexec,relatime 0 0
sysfs /sys/devices/virtual/net sysfs rw,relatime 0 0
sysfs /sys/devices/virtual/net sysfs rw,nosuid,nodev,noexec,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
udev /dev/fuse devtmpfs rw,nosuid,relatime,size=8179548k,nr_inodes=2044887,mode=755 0 0
udev /dev/net/tun devtmpfs rw,nosuid,relatime,size=8179548k,nr_inodes=2044887,mode=755 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
rpool/ROOT/ubuntu /dev/lxd zfs rw,relatime,xattr,noacl 0 0
rpool/ROOT/ubuntu /dev/kvm zfs rw,relatime,xattr,noacl 0 0
rpool/ROOT/ubuntu /dev/mem zfs rw,relatime,xattr,noacl 0 0
storage/downloads /mnt/downloads zfs rw,noatime,xattr,posixacl 0 0
storage/kvm_root/iso /mnt/iso zfs rw,noatime,xattr,posixacl 0 0
rpool/ROOT/ubuntu /dev/.lxd-mounts zfs rw,relatime,xattr,noacl 0 0
lxcfs /proc/cpuinfo fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /proc/diskstats fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /proc/meminfo fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /proc/stat fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /proc/swaps fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
lxcfs /proc/uptime fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
devpts /dev/console devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=666 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
tmpfs /tmp tmpfs rw,nosuid,nodev 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=1646852k,mode=700 0 0
[libvirt-users] "virsh list" hangs
by Yunchih Chen
`virsh list` hangs on my server that hosts a bunch of VMs.
This might be due to the Debian upgrade I did on Feb 15, which upgraded
`libvirt` from 2.4.0-1 to 3.0.0-2.
I have tried restarting libvirtd a few times, without luck.
Attached below are some relevant logs; let me know if you need some more
for debugging.
Thanks for your help!!
root@vm-host:~# uname -a
Linux vm-host 4.6.0-1-amd64 #1 SMP Debian 4.6.4-1 (2016-07-18) x86_64 GNU/Linux
root@vm-host:~# apt-cache policy libvirt-daemon
libvirt-daemon:
  Installed: 3.0.0-2
  Candidate: 3.0.0-2
  Version table:
 *** 3.0.0-2 500
        500 http://debian.csie.ntu.edu.tw/debian testing/main amd64 Packages
     100 /var/lib/dpkg/status
root@vm-host:~# strace -o /tmp/trace -e trace=network,file,poll virsh list   # hangs forever .....
^C
root@vm-host:~# tail -10 /tmp/trace
access("/etc/libvirt/libvirt.conf", F_OK) = 0
open("/etc/libvirt/libvirt.conf", O_RDONLY) = 5
access("/proc/vz", F_OK) = -1 ENOENT (No such file or
directory)
socket(AF_UNIX, SOCK_STREAM, 0) = 5
connect(5, {sa_family=AF_UNIX,
sun_path="/var/run/libvirt/libvirt-sock"}, 110) = 0
getsockname(5, {sa_family=AF_UNIX}, [128->2]) = 0
poll([{fd=5, events=POLLOUT}, {fd=6, events=POLLIN}], 2, -1) = 1
([{fd=5, revents=POLLOUT}])
poll([{fd=5, events=POLLIN}, {fd=6, events=POLLIN}], 2, -1) = ?
ERESTART_RESTARTBLOCK (Interrupted by signal)
--- SIGINT {si_signo=SIGINT, si_code=SI_KERNEL} ---
+++ killed by SIGINT +++
root@vm-host:~# lsof /var/run/libvirt/libvirt-sock # hangs too ...
^C
root@vm-host:~# LIBVIRT_DEBUG=1 virsh list
2017-02-17 15:58:36.126+0000: 18505: info : libvirt version: 3.0.0,
package: 2 (Guido Günther <agx@sigxcpu.org> Wed, 25 Jan 2017 07:04:08 +0100)
2017-02-17 15:58:36.126+0000: 18505: info : hostname: vm-host
2017-02-17 15:58:36.126+0000: 18505: debug : virGlobalInit:386 :
register drivers
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:684 : driver=0x7f1e5aca2c40 name=Test
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:695 : registering Test as driver 0
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:684 : driver=0x7f1e5aca4ac0 name=OPENVZ
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:695 : registering OPENVZ as driver 1
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:684 : driver=0x7f1e5aca5260 name=VMWARE
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:695 : registering VMWARE as driver 2
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:684 : driver=0x7f1e5aca3720 name=remote
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:695 : registering remote as driver 3
2017-02-17 15:58:36.127+0000: 18505: debug :
virEventRegisterDefaultImpl:267 : registering default event implementation
2017-02-17 15:58:36.127+0000: 18505: debug : virEventPollAddHandle:115 :
Used 0 handle slots, adding at least 10 more
2017-02-17 15:58:36.127+0000: 18505: debug :
virEventPollInterruptLocked:722 : Skip interrupt, 0 0
2017-02-17 15:58:36.127+0000: 18505: info : virEventPollAddHandle:140 :
EVENT_POLL_ADD_HANDLE: watch=1 fd=3 events=1 cb=0x7f1e5a7fc140
opaque=(nil) ff=(nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virEventRegisterImpl:234 :
addHandle=0x7f1e5a7fc860 updateHandle=0x7f1e5a7fcb90
removeHandle=0x7f1e5a7fc1a0 addTimeout=0x7f1e5a7fc310
updateTimeout=0x7f1e5a7fc510 removeTimeout=0x7f1e5a7fc6e0
2017-02-17 15:58:36.127+0000: 18505: debug : virEventPollAddTimeout:230
: Used 0 timeout slots, adding at least 10 more
2017-02-17 15:58:36.127+0000: 18505: debug :
virEventPollInterruptLocked:722 : Skip interrupt, 0 0
2017-02-17 15:58:36.127+0000: 18505: info : virEventPollAddTimeout:253 :
EVENT_POLL_ADD_TIMEOUT: timer=1 frequency=-1 cb=0x563a29758360
opaque=0x7fff70941380 ff=(nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenAuth:1245 :
name=<null>, auth=0x7f1e5aca2a00, flags=0
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f5f50 classname=virConnect
2017-02-17 15:58:36.127+0000: 18505: debug : virConfLoadConfig:1604 :
Loading config file '/etc/libvirt/libvirt.conf'
2017-02-17 15:58:36.127+0000: 18505: debug : virConfReadFile:778 :
filename=/etc/libvirt/libvirt.conf
2017-02-17 15:58:36.127+0000: 18506: debug : virThreadJobSet:99 : Thread
18506 is now running job vshEventLoop
2017-02-17 15:58:36.127+0000: 18506: debug : virEventRunDefaultImpl:311
: running default event implementation
2017-02-17 15:58:36.127+0000: 18505: debug : virFileClose:108 : Closed fd 5
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupTimeouts:525 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupHandles:574 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollMakePollFDs:401
: Prepare n=0 w=1, f=3 e=1 d=0
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCalculateTimeout:338 : Calculate expiry of 1 timers
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCalculateTimeout:371 : No timeout is pending
2017-02-17 15:58:36.127+0000: 18506: info : virEventPollRunOnce:640 :
EVENT_POLL_RUN: nhandles=1 timeout=-1
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfGetValueString:932 :
Get value string (nil) 0
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1040
: no name, allowing driver auto-select
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1083
: trying driver 0 (Test) ...
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1098
: driver 0 Test returned DECLINED
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1083
: trying driver 1 (OPENVZ) ...
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1098
: driver 1 OPENVZ returned DECLINED
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1083
: trying driver 2 (VMWARE) ...
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1098
: driver 2 VMWARE returned DECLINED
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1083
: trying driver 3 (remote) ...
2017-02-17 15:58:36.127+0000: 18505: debug : remoteConnectOpen:1343 :
Auto-probe remote URI
2017-02-17 15:58:36.127+0000: 18505: debug : doRemoteOpen:907 :
proceeding with name =
2017-02-17 15:58:36.127+0000: 18505: debug : doRemoteOpen:916 :
Connecting with transport 1
2017-02-17 15:58:36.127+0000: 18505: debug : doRemoteOpen:1051 :
Proceeding with sockname /var/run/libvirt/libvirt-sock
2017-02-17 15:58:36.127+0000: 18505: debug :
virNetSocketNewConnectUNIX:639 : path=/var/run/libvirt/libvirt-sock
spawnDaemon=0 binary=<null>
2017-02-17 15:58:36.127+0000: 18505: debug :
virNetSocketNewConnectUNIX:703 : connect() succeeded
2017-02-17 15:58:36.127+0000: 18505: debug : virNetSocketNew:235 :
localAddr=0x7fff70940d00 remoteAddr=0x7fff70940d90 fd=5 errfd=-1 pid=0
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f7980 classname=virNetSocket
2017-02-17 15:58:36.127+0000: 18505: info : virNetSocketNew:291 :
RPC_SOCKET_NEW: sock=0x563a2a7f7980 fd=5 errfd=-1 pid=0
localAddr=127.0.0.1;0, remoteAddr=127.0.0.1;0
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f7d80 classname=virNetClient
2017-02-17 15:58:36.127+0000: 18505: info : virNetClientNew:328 :
RPC_CLIENT_NEW: client=0x563a2a7f7d80 sock=0x563a2a7f7980
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7d80
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7980
2017-02-17 15:58:36.127+0000: 18505: debug :
virEventPollInterruptLocked:726 : Interrupting
2017-02-17 15:58:36.127+0000: 18505: info : virEventPollAddHandle:140 :
EVENT_POLL_ADD_HANDLE: watch=2 fd=5 events=1 cb=0x7f1e5a96cd10
opaque=0x563a2a7f7980 ff=0x7f1e5a96ccc0
2017-02-17 15:58:36.127+0000: 18505: debug : virKeepAliveNew:199 :
client=0x563a2a7f7d80, interval=-1, count=0
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f8080 classname=virKeepAlive
2017-02-17 15:58:36.127+0000: 18505: info : virKeepAliveNew:218 :
RPC_KEEPALIVE_NEW: ka=0x563a2a7f8080 client=0x563a2a7f7d80
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7d80
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f6740 classname=virConnectCloseCallbackData
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollRunOnce:650 :
Poll got 1 event(s)
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchTimeouts:432 : Dispatch 1
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f6740
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchHandles:478 : Dispatch 1
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f7fa0 classname=virNetClientProgram
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f7b60 classname=virNetClientProgram
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f7910 classname=virNetClientProgram
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchHandles:492 : i=0 w=1
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7fa0
2017-02-17 15:58:36.127+0000: 18506: info :
virEventPollDispatchHandles:506 : EVENT_POLL_DISPATCH_HANDLE: watch=1
events=1
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7b60
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupTimeouts:525 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7910
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupHandles:574 : Cleanup 2
2017-02-17 15:58:36.127+0000: 18505: debug : doRemoteOpen:1170 : Trying
authentication
2017-02-17 15:58:36.127+0000: 18506: debug : virEventRunDefaultImpl:311
: running default event implementation
2017-02-17 15:58:36.127+0000: 18505: debug : virNetMessageNew:46 :
msg=0x563a2a7fa470 tracked=0
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupTimeouts:525 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupHandles:574 : Cleanup 2
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollMakePollFDs:401
: Prepare n=0 w=1, f=3 e=1 d=0
2017-02-17 15:58:36.127+0000: 18505: debug :
virNetMessageEncodePayload:386 : Encode length as 28
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollMakePollFDs:401
: Prepare n=1 w=2, f=5 e=1 d=0
2017-02-17 15:58:36.127+0000: 18505: info :
virNetClientSendInternal:2104 : RPC_CLIENT_MSG_TX_QUEUE:
client=0x563a2a7f7d80 len=28 prog=536903814 vers=1 proc=66 type=0
status=0 serial=0
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCalculateTimeout:338 : Calculate expiry of 1 timers
2017-02-17 15:58:36.127+0000: 18505: debug : virNetClientCallNew:2057 :
New call 0x563a2a7f7340: msg=0x563a2a7fa470, expectReply=1, nonBlock=0
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCalculateTimeout:371 : No timeout is pending
2017-02-17 15:58:36.127+0000: 18505: debug : virNetClientIO:1866 :
Outgoing message prog=536903814 version=1 serial=0 proc=66 type=0
length=28 dispatch=(nil)
2017-02-17 15:58:36.127+0000: 18506: info : virEventPollRunOnce:640 :
EVENT_POLL_RUN: nhandles=2 timeout=-1
2017-02-17 15:58:36.127+0000: 18505: debug : virNetClientIO:1925 : We
have the buck head=0x563a2a7f7340 call=0x563a2a7f7340
2017-02-17 15:58:36.127+0000: 18505: info : virEventPollUpdateHandle:152
: EVENT_POLL_UPDATE_HANDLE: watch=2 events=0
2017-02-17 15:58:36.127+0000: 18505: debug :
virEventPollInterruptLocked:726 : Interrupting
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollRunOnce:650 :
Poll got 1 event(s)
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchTimeouts:432 : Dispatch 1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchHandles:478 : Dispatch 2
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchHandles:492 : i=0 w=1
2017-02-17 15:58:36.127+0000: 18506: info :
virEventPollDispatchHandles:506 : EVENT_POLL_DISPATCH_HANDLE: watch=1
events=1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupTimeouts:525 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupHandles:574 : Cleanup 2
2017-02-17 15:58:36.127+0000: 18506: debug : virEventRunDefaultImpl:311
: running default event implementation
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupTimeouts:525 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupHandles:574 : Cleanup 2
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollMakePollFDs:401
: Prepare n=0 w=1, f=3 e=1 d=0
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollMakePollFDs:401
: Prepare n=1 w=2, f=5 e=0 d=0
2017-02-17 15:58:36.128+0000: 18506: debug :
virEventPollCalculateTimeout:338 : Calculate expiry of 1 timers
2017-02-17 15:58:36.128+0000: 18506: debug :
virEventPollCalculateTimeout:371 : No timeout is pending
2017-02-17 15:58:36.128+0000: 18506: info : virEventPollRunOnce:640 :
EVENT_POLL_RUN: nhandles=1 timeout=-1
^C
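Since lsof on the socket hangs as well, the daemon side is probably stuck
rather than virsh; a thread dump of libvirtd should show where (a sketch,
assuming gdb and libvirt debug symbols are installed):

root@vm-host:~# gdb -batch -p $(pidof libvirtd) -ex "thread apply all bt" > /tmp/libvirtd-bt.txt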
--
Yun-Chih Chen 陳耘志
Network/Workstation Assistant
Dept. of Computer Science and Information Engineering
National Taiwan University
Tel: +886-2-33664888 ext. 217/204
Email: ta217@csie.ntu.edu.tw
Website: http://wslab.csie.ntu.edu.tw/
[libvirt-users] Determining domain job kind from job stats?
by Milan Zamazal
Hi, is there a reliable way to find out what kind of job the information
returned from virDomainGetJobStats, or provided in a
VIR_DOMAIN_EVENT_ID_JOB_COMPLETED event callback, belongs to?
I'm specifically interested in distinguishing host-to-host migration
jobs (e.g. those started by the virDomainMigrateToUri* functions) from other
jobs. If there is no better way, I'm thinking about examining the presence
or values of certain fields in the stats. I'd be fine with that as long
as I can be sure it's a reliable way to identify the job kind.
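For example, I can dump the fields of the most recent completed job with
virsh to see what is available (the domain name is a placeholder):

virsh domjobinfo MY_DOMAIN --completed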
Thanks,
Milan
[libvirt-users] Virsh command hanging
by abhishek jain
Hi,
I started the VMs with libvirt 3 days ago (17th Feb). Now when I am trying to shut down the domain, every virsh command hangs; even virt-manager stays in "connecting..." mode and does not show active domains. When I set the libvirt debug env and call "virsh list", it hangs in poll. Here is the log:
setenv LIBVIRT_DEBUG 1
virsh list
2017-02-21 05:31:09.241+0000: 125812: debug : virEventPollCalculateTimeout:320 : Calculate expiry of 0 timers
2017-02-21 05:31:09.241+0000: 125811: debug : virEventPollUpdateHandle:146 : EVENT_POLL_UPDATE_HANDLE: watch=2 events=0
2017-02-21 05:31:09.241+0000: 125812: debug : virEventPollCalculateTimeout:346 : Timeout at 0 due in -1 ms
2017-02-21 05:31:09.241+0000: 125812: debug : virEventPollRunOnce:614 : EVENT_POLL_RUN: nhandles=2 timeout=-1
2017-02-21 05:31:09.241+0000: 125811: debug : virEventPollInterruptLocked:701 : Interrupting
2017-02-21 05:31:09.241+0000: 125812: debug : virEventPollRunOnce:625 : Poll got 1 event(s)
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollDispatchTimeouts:410 : Dispatch 0
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollDispatchHandles:455 : Dispatch 2
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollDispatchHandles:469 : i=0 w=1
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollDispatchHandles:483 : EVENT_POLL_DISPATCH_HANDLE: watch=1 events=1
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollCleanupTimeouts:501 : Cleanup 0
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollCleanupTimeouts:537 : Found 0 out of 0 timeout slots used, releasing 0
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollCleanupHandles:549 : Cleanup 2
2017-02-21 05:31:09.242+0000: 125812: debug : virEventRunDefaultImpl:244 : running default event implementation
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollCleanupTimeouts:501 : Cleanup 0
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollCleanupTimeouts:537 : Found 0 out of 0 timeout slots used, releasing 0
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollCleanupHandles:549 : Cleanup 2
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollMakePollFDs:378 : Prepare n=0 w=1, f=4 e=1 d=0
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollMakePollFDs:378 : Prepare n=1 w=2, f=6 e=0 d=0
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollCalculateTimeout:320 : Calculate expiry of 0 timers
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollCalculateTimeout:346 : Timeout at 0 due in -1 ms
2017-02-21 05:31:09.242+0000: 125812: debug : virEventPollRunOnce:614 : EVENT_POLL_RUN: nhandles=1 timeout=-1
<<hang>>
Here is a log snippet of /var/log/libvirt/libvirtd.log:
2017-02-17 06:18:26.198+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:18:41.022+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:18:56.017+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:19:11.019+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:19:26.016+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:19:41.021+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:19:56.014+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:20:11.021+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:20:26.014+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:20:41.014+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:20:56.016+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:21:11.019+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:21:26.018+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:21:41.022+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:21:56.019+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:22:11.020+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-17 06:22:26.012+0000: 5052: error : daemonStreamHandleAbort:609 : stream aborted at client request
2017-02-18 01:53:36.308+0000: 5052: warning : virKeepAliveTimerInternal:156 : No response from client 0xe14160 after 5 keepalive messages in 30 seconds
2017-02-18 01:53:47.346+0000: 5052: warning : virKeepAliveTimerInternal:156 : No response from client 0xe14c70 after 5 keepalive messages in 30 seconds
What could be the problem? I am running virsh (0.10.2) on RHEL6.
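Given the keepalive timeouts in the daemon log, libvirtd itself is probably
blocked on something (a stuck QEMU monitor is a common culprit); attaching
strace would show which syscall and file descriptor its threads are waiting
on (a sketch, run as root):

strace -f -p $(pidof libvirtd) -o /tmp/libvirtd.strace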
Thanks for your help in advance.
Regards,
Abhishek