[libvirt-users] Authenticating virsh with username and password
by Petr Kotas
Hi,
I am new to libvirt and still learning, so thanks for any help.
I am using libvirt configured with username and password.
I need the "libvirt-guests.service" service to shut down all
my VMs on system shutdown.
Unfortunately, it fails due to failed authentication:
Jun 26 15:56:55 localhost virsh[13326]: All-whitespace username.
Jun 26 15:56:55 localhost libvirt-guests.sh[13114]: Unable to connect to
libvirt currently. Retrying .. 2Please enter your authenti
Jun 26 15:56:55 localhost libvirtd[1155]: End of file while reading data:
Input/output error
Is it possible to specify a user and password somehow? I cannot run Kerberos
or other token-based authentication.
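One possible approach (an assumption on my part, not something confirmed to work with libvirt-guests specifically): libvirt clients such as virsh can read SASL credentials from a client auth file, e.g. /etc/libvirt/auth.conf. A minimal sketch with placeholder credentials:

```ini
; hypothetical credentials - replace with your own username/password
[credentials-defgrp]
authname=fred
password=123456

; use the credential group above for libvirt connections to this host
[auth-libvirt-localhost]
credentials=defgrp
```

Since libvirt-guests runs as a system service, the file would need to be readable in the environment the service runs under; storing a plaintext password this way is a trade-off to weigh.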
Thanks for any help.
Kind regards,
Petr Kotas
7 years, 5 months
[libvirt-users] Configuring VMM for USB/Serial
by Brianna Moczynski
Hello,
I keep receiving errors such as the following:
Error rebooting domain: Timed out during operation: cannot acquire state change
lock (held by qemuProcessReconnect)
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 90, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 126, in tmpcb
callback(*args, **kwargs)
File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 83, in newfn
ret = fn(self, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/domain.py", line 1378, in reboot
self._backend.reboot(0)
File "/usr/lib/python2.7/dist-packages/libvirt.py", line 2024, in reboot
if ret == -1: raise libvirtError ('virDomainReboot() failed', dom=self)
libvirtError: Timed out during operation: cannot acquire state change lock (held
by qemuProcessReconnect)
VMM then loads, showing my VM in the "Paused" state, and it will not start, restart, shut down, etc.
Restarting libvirt did not seem to have an effect on the issue.
I ultimately solved it by removing and reinstalling the VM itself.
Additionally (and I believe the issues may be somewhat related), I am having the following issues:
libvirt not supporting forwarding ttys as USB serial devices (for PCI serial cards with multiple serial ports)
virt-manager not supporting redirecting USB devices by vendor:device ID
I was wondering if you might have any suggestions on how to configure libvirt so that I do not have these issues.
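On the vendor:device point: libvirt's domain XML does have a <redirfilter> element that filters USB redirection by vendor/product ID (the IDs below are placeholders, and a SPICE usbredir channel is assumed to be configured already). A sketch:

```xml
<!-- existing SPICE usbredir channel (assumed) -->
<redirdev bus='usb' type='spicevmc'/>
<!-- allow only the one device with these (placeholder) IDs, deny all others -->
<redirfilter>
  <usbdev vendor='0x1234' product='0xbeef' allow='yes'/>
  <usbdev allow='no'/>
</redirfilter>
```

Whether virt-manager exposes this in its UI is a separate question; editing the domain XML directly (virsh edit) may be needed.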
Thanks,
BJM
7 years, 5 months
[libvirt-users] About libvirt concurrency
by 马金舟
Hi all,
I would like to ask a question about concurrency: can libvirt start 200
virtual machines at the same time?
I tried to start 200 virtual machines simultaneously, but when I
connected to one of them I had to wait a long time. I suspect
libvirt cannot handle this level of concurrency. I would appreciate a
definitive answer as soon as possible. Thank you.
7 years, 5 months
[libvirt-users] GPIO support in libvirt
by 陶 缘
Hi, dear libvirt experts:
I am deploying a virtual machine on my hardware running Linux.
My question concerns an LED that is controlled via GPIO.
I do not know whether my current qemu or libvirt can virtualize GPIO operations.
If so, where can I find the relevant options for qemu or (preferably) libvirt?
I am using qemu 2.4.0 and libvirt 1.3.2.
I would appreciate hearing from you.
Thanks
eddy
7 years, 5 months
[libvirt-users] guest A from virbr0 can talk to guest B in virbr1 but not vice versa
by Travis S. Johnson
Hello,
I came across an interesting problem in my home lab a few weeks ago while
prepping for my RHCE exam with the Michael Jang study guide. I've been at this
for days now, and I still can't understand how two or more virtual
networks in the default NAT configuration are even allowed to communicate with
each other, despite what the libvirt documentation says.
Here's the excerpt I'm referring to, from the wiki:
http://wiki.libvirt.org/page/Networking#Forwarding_Incoming_Connections:
> By default, guests that are connected via a virtual network with <forward
> mode='nat'/> can make any outgoing network connection they like. Incoming
> connections are allowed from the host, and from other guests connected to
> the same libvirt network, but all other incoming connections are blocked by
> iptables rules.
Also here's another assertion from 'The virtual network driver' section in
http://libvirt.org/firewall.html:
> type=nat
>
> Allow inbound related to an established connection. Allow outbound, but
> only from our expected subnet. Allow traffic between guests. Deny all other
> inbound. Deny all other outbound.
I have three virtual networks with the following configs:
-----------------------------------------------------------------------------
<network connections='1'>
<name>default</name>
<uuid>9c6796be-d54e-42bc-bcbe-2e4feee7154a</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:5a:5d:0e'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
<network connections='1'>
<name>outsider</name>
<uuid>247e380a-8795-466a-b94a-5be2d05267bb</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:7f:a1:fb'/>
<domain name='outsider'/>
<ip address='192.168.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.100.2' end='192.168.100.254'/>
</dhcp>
</ip>
</network>
<network connections='1'>
<name>besider</name>
<uuid>cc714cce-dbba-452d-b2bf-d36084dcb723</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0'/>
<mac address='52:54:00:59:67:7f'/>
<domain name='besider'/>
<ip address='192.168.110.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.110.2' end='192.168.110.254'/>
</dhcp>
</ip>
</network>
----------------------------------------------------------------------------
Here is the output of the 'FORWARD' iptables chain rules on my host (still
using firewall-cmd):
------------------------------------------------------------------------
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 8967   14M ACCEPT     all  --  *      virbr2  0.0.0.0/0            192.168.110.0/24     ctstate RELATED,ESTABLISHED
 5262  279K ACCEPT     all  --  virbr2 *       192.168.110.0/24     0.0.0.0/0
    0     0 ACCEPT     all  --  virbr2 virbr2  0.0.0.0/0            0.0.0.0/0
   70  5832 REJECT     all  --  *      virbr2  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
    0     0 REJECT     all  --  virbr2 *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
 8510   13M ACCEPT     all  --  *      virbr0  0.0.0.0/0            192.168.122.0/24     ctstate RELATED,ESTABLISHED
 5177  275K ACCEPT     all  --  virbr0 *       192.168.122.0/24     0.0.0.0/0
    0     0 ACCEPT     all  --  virbr0 virbr0  0.0.0.0/0            0.0.0.0/0
   61  5100 REJECT     all  --  *      virbr0  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
    0     0 REJECT     all  --  virbr0 *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
 8612   13M ACCEPT     all  --  *      virbr1  0.0.0.0/0            192.168.100.0/24     ctstate RELATED,ESTABLISHED
 5172  273K ACCEPT     all  --  virbr1 *       192.168.100.0/24     0.0.0.0/0
    0     0 ACCEPT     all  --  virbr1 virbr1  0.0.0.0/0            0.0.0.0/0
    0     0 REJECT     all  --  *      virbr1  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
    0     0 REJECT     all  --  virbr1 *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
    0     0 FORWARD_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 FORWARD_IN_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 FORWARD_IN_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 FORWARD_OUT_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 FORWARD_OUT_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
--------------------------------------------------------------------------
I have a VM in each network:
nest1.example.com (virbr0) - 192.168.122.50
nest2.example.org (virbr1) - 192.168.100.100
nest3.example.net (virbr2) - 192.168.110.25
I'm quite aware the above iptables rules were added by libvirt, but I'm
still managing the firewall primarily through the *firewall-cmd* command.
From what I gathered...
----------------------------------------------------------
nest3 can ping nest1 and nest2.
nest3 pings nest1 and hits here:
 8967   14M ACCEPT     all  --  *      virbr2  0.0.0.0/0            192.168.110.0/24     ctstate RELATED,ESTABLISHED
 5262  279K ACCEPT     all  --  virbr2 *       192.168.110.0/24     0.0.0.0/0
nest3 pings nest2 and hits here:
 8967   14M ACCEPT     all  --  *      virbr2  0.0.0.0/0            192.168.110.0/24     ctstate RELATED,ESTABLISHED
 5262  279K ACCEPT     all  --  virbr2 *       192.168.110.0/24     0.0.0.0/0
----------------------------------------------------------
----------------------------------------------------------
nest1 can ping nest2, but cannot ping nest3.
nest1 pings nest2 and hits here:
 8510   13M ACCEPT     all  --  *      virbr0  0.0.0.0/0            192.168.122.0/24     ctstate RELATED,ESTABLISHED
 5177  275K ACCEPT     all  --  virbr0 *       192.168.122.0/24     0.0.0.0/0
nest1 pings nest3 and hits here:
   70  5832 REJECT     all  --  *      virbr2  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
----------------------------------------------------------
----------------------------------------------------------
nest2 cannot ping nest1 or nest3.
nest2 pings nest1 and hits here:
   61  5100 REJECT     all  --  *      virbr0  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
nest2 pings nest3 and hits here:
   70  5832 REJECT     all  --  *      virbr2  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
----------------------------------------------------------
From my observations, the order of the virtual networks' chunks in the
iptables FORWARD chain makes a difference. Each per-network chunk in the
chain, consisting of five rules, is exactly as described in the
aforementioned link (http://libvirt.org/firewall.html). However, the
virtual network whose chunk sits at the top of the chain can communicate
with virtually all networks, while only the network in the last chunk
behaves in a way consistent with the intended design.
I'm using CentOS 7.3 with libvirt 2.0. I even tried reproducing this on
CentOS 6.9, since I thought firewalld might have influenced the behavior,
but I still got similar results.
Right now, I'm not certain whether this is an already-reported known bug,
but I strongly suspect this configuration has been unofficially unsupported
for quite a while. Can someone confirm this? This is my very first mailing
list submission ever, and I apologize in advance if I failed to find a
similar discussion in the archive. If inter-network isolation is in fact
nearly impossible to implement in accordance with the intended design, then
I'd like to have that confirmed publicly in the docs.
Thanks,
Travis Johnson
7 years, 5 months
[libvirt-users] VM fails to start on boot-up
by Andy Gibbs
Hi,
I have created a VM with the virt-manager option "start virtual machine on host boot up" enabled. It is the only VM on the host configured this way. However, I am getting this error in the log file when it attempts to start on boot:
2017-06-19 07:15:18.491+0000: starting up libvirt version: 2.0.0, package: 10.el7_3.9 (CentOS BuildSystem <http://bugs.centos.org>, 2017-05-25-20:52:28, c1bm.rdu2.centos.org), qemu version: 2.6.0 (qemu-kvm-ev-2.6.0-28.el7_3.9.1), hostname: server.mydomain.lan
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name guest=MyVM,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-MyVM/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,vmport=off -cpu Broadwell,+rtm,+hle -m 8192 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid cb6f4d45-091d-47f8-931a-8daf6b0cf2b8 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-MyVM/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x6.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/var/lib/libvirt/images/MyVM.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0e:2e:75,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-1-MyVM/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel1,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -spice 
port=5900,tls-port=5901,addr=127.0.0.1,disable-ticketing,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on
char device redirected to /dev/pts/0 (label charserial0)
((null):3344): Spice-ERROR **: reds.c:4043:do_spice_init: statistics shm_open failed, Permission denied
2017-06-19 07:15:18.824+0000: shutting down
However, when I start the VM manually through virt-manager, it starts up fine.
Why might it fail during boot-up? And is there anything I can do to solve the problem?
Many thanks,
Andy
7 years, 5 months
[libvirt-users] vendor_id state='off' not working as expected
by Sean Whalen
Hi All,
I'm trying to counter VM detection methods for a malware sandbox, using
Pafish as a benchmark tool.
https://github.com/a0rtega/pafish
I have <hyperv><vendor_id state='off'/></hyperv> defined in a Windows 7
domain on Ubuntu 17.10, but for some reason this just changes the
hypervisor vendor to "Microsoft Hv" instead of the default "KVMKVMKVM"
vendor.
How can I remove the hypervisor vendor completely?
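For what it's worth, the hyperv vendor_id element only affects the Hyper-V enlightenment side; hiding the KVM hypervisor signature itself is a separate knob, the KVM "hidden" feature. A minimal domain-XML sketch (whether this alone defeats Pafish's checks is not confirmed here):

```xml
<features>
  <kvm>
    <!-- hide the KVM hypervisor signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```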
7 years, 5 months
[libvirt-users] [qemu-kvm] Network bandwidth limits via libvirt
by 卓浩凡
Hi all,
I am trying to understand why network bandwidth limits are not applied to my Ubuntu 16.04.2 VM (qemu-kvm driver), and I would appreciate some insights.
I created my VM network with virsh, and when I issue a dumpxml on the network, I can see that bandwidth limits are set:
virsh # net-dumpxml nat_limit
<network>
<name>nat_limit</name>
<uuid>4b5e128d-9ad0-4ccc-9424-dee60b71861a</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:af:99:73'/>
<bandwidth>
<inbound average='10' peak='50' burst='10'/>
<outbound average='128' peak='256' burst='256'/>
</bandwidth>
<ip address='192.168.123.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.123.2' end='192.168.123.254'/>
</dhcp>
</ip>
</network>
And the net info is:
virsh # net-info nat_limit
Name: nat_limit
UUID: 4b5e128d-9ad0-4ccc-9424-dee60b71861a
Active: yes
Persistent: yes
Autostart: yes
Bridge: virbr1
Then, I create my VM, and the source of interface is the network, like this:
virsh # dumpxml virt20
...
<interface type='network'>
<mac address='52:54:00:a2:5b:10'/>
<source network='nat_limit' bridge='virbr1'/>
<target dev='vnet2'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
...
However, when I send a big file with netperf, I don't see any limit applied to the transfer rate.
I've checked the domain log, and I didn't see any qemu command-line argument related to bandwidth limits.
Could you tell me why the network bandwidth limits don't work? Are network limits managed in another way?
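One thing worth trying (a sketch, not a confirmed diagnosis): set <bandwidth> on the guest's <interface> in the domain XML instead of on the network. Interface-level limits are enforced by libvirt with tc on the tap device, so no qemu command-line argument would appear in the domain log either way. Values below mirror the network config and are illustrative:

```xml
<interface type='network'>
  <source network='nat_limit'/>
  <model type='rtl8139'/>
  <!-- per-interface shaping, applied via tc on the host tap device -->
  <bandwidth>
    <inbound average='128' peak='256' burst='256'/>
    <outbound average='128' peak='256' burst='256'/>
  </bandwidth>
</interface>
```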
Thanks.
Below some information about my hypervisor:
root@ubuntu-04:~# virsh -V
Virsh command line tool of libvirt 1.3.1
See web site at http://libvirt.org/
Compiled with support for:
Hypervisors: QEMU/KVM LXC UML Xen LibXL OpenVZ VMWare VirtualBox ESX Test
Networking: Remote Network Bridging Interface netcf Nwfilter VirtualPort
Storage: Dir Disk Filesystem SCSI Multipath iSCSI LVM RBD Sheepdog ZFS
Miscellaneous: Daemon Nodedev AppArmor Secrets Debug Readline Modular
qemu version: 2.5.0
Thanks,
Netsurfed
7 years, 5 months
[libvirt-users] Isolate VMs' network
by Chris
All,
I'm trying to set up a network for some virtual machines that can connect
to each other and to the internet, but neither to the host nor to other
VMs.
Is there a preconfigured network filter or a best practice for this setup?
Of course, I could set up iptables rules on the host, but I'd prefer
libvirt to handle them.
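One possible starting point (not a preconfigured filter; the filter name is hypothetical and the host's bridge IP is assumed to be the default 192.168.122.1) is a custom nwfilter that drops guest traffic to the host:

```xml
<!-- hypothetical filter; define with: virsh nwfilter-define block-host.xml -->
<filter name='block-host' chain='root'>
  <!-- drop anything the guest sends to the host's bridge address -->
  <rule action='drop' direction='out' priority='500'>
    <ip dstipaddr='192.168.122.1'/>
  </rule>
</filter>
```

It would then be referenced from each guest's <interface> with <filterref filter='block-host'/>; blocking guest-to-guest traffic would need analogous rules for the other guests' addresses.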
- Chris
7 years, 5 months
Re: [libvirt-users] libvirtd not accepting connections
by Martin Kletzander
[adding back the ML, you probably hit reply instead of reply-all, this
way other people might help if they know more]
On Fri, Jun 02, 2017 at 08:10:01AM -0400, Michael C. Cambria wrote:
>
>Hi,
>
>libvirtd never seems to get notified that there is work to do. journalct
>-f indicated that nothing was logged when connections were attempted via
>virsh.
>
>I also tried 'LIBVIRT_DEBUG=1 libvirtd --verbose' and once startup
>finished, there were no more log entries even though virsh attempts were
>made.
>
That's because it gets overridden by the configuration files. This
might be a bug, but it's not related to what's happening.
>"ps ax" shows about a dozen "qemu-system-alpha" processes. I don't know
>if it matters but I didn't expect to see this. I didn't intentionally
>configure alpha emulations (assuming that's what it is) and certainly
>don't want to waste resources having it running.
>
Libvirt caches the capabilities of the emulators it finds on your
system in order not to waste resources. These processes are expected to
go away once they have replied to everything libvirt asks them for.
However, it seems the initialization cannot complete, precisely because
these processes don't communicate.
There might be details of qemu-system-alpha that differ from, e.g.,
qemu-system-x86, to which libvirt is not (yet) adapted, but I installed
that emulator and the libvirt daemon runs as usual. It looks like a
problem in QEMU. Could you, as a workaround, try uninstalling that qemu
binary from your system and restarting the service?
Also, what versions of libvirt and qemu do you have installed?
>Here is gdb output:
>
>$ sudo gdb -batch -p $(pidof libvirtd) -ex "t a a bt full" > batch.out
>[mcc@eastie-fid4-com triage]$ cat batch.out
>[New LWP 17587]
>[New LWP 17588]
>[New LWP 17589]
>[New LWP 17590]
>[New LWP 17591]
>[New LWP 17592]
>[New LWP 17593]
>[New LWP 17594]
>[New LWP 17595]
>[New LWP 17596]
>[New LWP 17597]
>[New LWP 17598]
>[New LWP 17599]
>[New LWP 17600]
>[New LWP 17601]
>[New LWP 17602]
>[Thread debugging using libthread_db enabled]
>Using host libthread_db library "/lib64/libthread_db.so.1".
>0x00007fcd6b4a501d in poll () at ../sysdeps/unix/syscall-template.S:84
>84 T_PSEUDO (SYSCALL_SYMBOL, SYSCALL_NAME, SYSCALL_NARGS)
>
>Thread 17 (Thread 0x7fcd3bf18700 (LWP 17602)):
>#0 0x00007fcd6b4a501d in poll () at ../sysdeps/unix/syscall-template.S:84
>No locals.
>#1 0x00007fcd6b4c310e in __poll_chk (fds=<optimized out>,
>nfds=<optimized out>, timeout=<optimized out>, fdslen=<optimized out>)
>at poll_chk.c:27
>No locals.
>#2 0x00007fcd6f07bf41 in poll (__timeout=-1, __nfds=<optimized out>,
>__fds=0x7fcd3bf16ec0) at /usr/include/bits/poll2.h:41
>No locals.
>#3 virCommandProcessIO (cmd=cmd@entry=0x7fcd344228f0) at
>util/vircommand.c:2049
> i = <optimized out>
> fds = {{fd = 22, events = 1, revents = 0}, {fd = 24, events =
>1, revents = 0}, {fd = 1802946632, events = 32717, revents = 0}}
> nfds = <optimized out>
> outfd = <optimized out>
> errfd = 24
> inlen = 0
> outlen = 0
> errlen = 0
> inoff = 0
> ret = 0
> __func__ = "virCommandProcessIO"
> __FUNCTION__ = "virCommandProcessIO"
>#4 0x00007fcd6f08025a in virCommandRun (cmd=cmd@entry=0x7fcd344228f0,
>exitstatus=exitstatus@entry=0x7fcd3bf1749c) at util/vircommand.c:2274
> ret = 0
> outbuf = 0x7fcd341c1850 "`\030\034\064\315\177"
> errbuf = 0x0
> st = {st_dev = 140519450702688, st_ino = 5007254661694877440,
>st_nlink = 140519321774352, st_mode = 1862726288, st_uid = 32717, st_gid
>= 1865325018, __pad0 = 32717, st_rdev = 140732281970554, st_size = 0,
>st_blksize = 11, st_blocks = 8, st_atim = {tv_sec = 140519321774368,
>tv_nsec = 140519450703056}, st_mtim = {tv_sec = 140519321774352, tv_nsec
>= 140519450703024}, st_ctim = {tv_sec = 140520244259750, tv_nsec =
>140520310044608}, __glibc_reserved = {140520244259750, 140520310041179,
>140519321774320}}
> string_io = <optimized out>
> async_io = <optimized out>
> str = 0x7fcd3bf17420 "\260t\361;\315\177"
> tmpfd = <optimized out>
> __FUNCTION__ = "virCommandRun"
> __func__ = "virCommandRun"
>#5 0x00007fcd404a27cf in virQEMUCapsInitQMP (qmperr=0x7fcd3bf174a0,
>runGid=107, runUid=107, libDir=<optimized out>, qemuCaps=0x7fcd340fd3e0)
>at qemu/qemu_capabilities.c:3700
> cmd = 0x7fcd344228f0
> pid = 0
> ret = -1
> mon = 0x0
> status = 0
> monarg = 0x7fcd343a2570
>"unix:/var/lib/libvirt/qemu/capabilities.monitor.sock,server,nowait"
> vm = 0x0
> config = {type = 9, data = {file = {path = 0x7fcd34151d90
>"/var/lib/libvirt/qemu/capabilities.monitor.sock", append = 0}, nmdm =
>{master = 0x7fcd34151d90
>"/var/lib/libvirt/qemu/capabilities.monitor.sock", slave = 0x0}, tcp =
>{host = 0x7fcd34151d90
>"/var/lib/libvirt/qemu/capabilities.monitor.sock", service = 0x0, listen
>= false, protocol = 0}, udp = {bindHost = 0x7fcd34151d90
>"/var/lib/libvirt/qemu/capabilities.monitor.sock", bindService = 0x0,
>connectHost = 0x0, connectService = 0x0}, nix = {path = 0x7fcd34151d90
>"/var/lib/libvirt/qemu/capabilities.monitor.sock", listen = false},
>spicevmc = 873799056, spiceport = {channel = 0x7fcd34151d90
>"/var/lib/libvirt/qemu/capabilities.monitor.sock"}}, logfile = 0x0,
>logappend = 0}
> monpath = 0x7fcd34151d90
>"/var/lib/libvirt/qemu/capabilities.monitor.sock"
> pidfile = 0x7fcd341ad8b0
>"/var/lib/libvirt/qemu/capabilities.pidfile"
> xmlopt = 0x0
>#6 virQEMUCapsNewForBinaryInternal (binary=binary@entry=0x7fcd34016cb0
>"/usr/bin/qemu-system-alpha", libDir=<optimized out>,
>cacheDir=0x7fcd343be860 "/var/cache/libvirt/qemu", runUid=107,
>runGid=107, qmpOnly=qmpOnly@entry=false) at qemu/qemu_capabilities.c:3830
> qemuCaps = 0x7fcd340fd3e0
> sb = {st_dev = 64768, st_ino = 1838294, st_nlink = 1, st_mode =
>33261, st_uid = 0, st_gid = 0, __pad0 = 0, st_rdev = 0, st_size =
>8829680, st_blksize = 4096, st_blocks = 17248, st_atim = {tv_sec =
>1496358589, tv_nsec = 77994286}, st_mtim = {tv_sec = 1492132244, tv_nsec
>= 0}, st_ctim = {tv_sec = 1494196699, tv_nsec = 451929606},
>__glibc_reserved = {0, 0, 0}}
> rv = <optimized out>
> qmperr = 0x7fcd341c1870 ""
> __FUNCTION__ = "virQEMUCapsNewForBinaryInternal"
>#7 0x00007fcd404a3a73 in virQEMUCapsNewForBinary (runGid=<optimized
>out>, runUid=<optimized out>, cacheDir=<optimized out>,
>libDir=<optimized out>, binary=0x7fcd34016cb0
>"/usr/bin/qemu-system-alpha") at qemu/qemu_capabilities.c:3871
>No locals.
>#8 virQEMUCapsCacheLookup (cache=cache@entry=0x7fcd341c9000,
>binary=0x7fcd34016cb0 "/usr/bin/qemu-system-alpha") at
>qemu/qemu_capabilities.c:3986
> ret = 0x0
> __func__ = "virQEMUCapsCacheLookup"
>#9 0x00007fcd404a3d22 in virQEMUCapsInitGuest
>(guestarch=VIR_ARCH_ALPHA, hostarch=VIR_ARCH_X86_64,
>cache=0x7fcd341c9000, caps=0x7fcd341a9980) at qemu/qemu_capabilities.c:824
> qemubinCaps = 0x0
> x86_32on64_kvm = <optimized out>
> ppc64_kvm = <optimized out>
> kvmbin = 0x0
> ret = -1
> i = <optimized out>
> binary = 0x7fcd34016cb0 "/usr/bin/qemu-system-alpha"
> kvmbinCaps = 0x0
> native_kvm = <optimized out>
> arm_32on64_kvm = <optimized out>
>#10 virQEMUCapsInit (cache=0x7fcd341c9000) at qemu/qemu_capabilities.c:1109
> caps = 0x7fcd341a9980
> i = 1
> hostarch = VIR_ARCH_X86_64
> __func__ = "virQEMUCapsInit"
>#11 0x00007fcd404def20 in virQEMUDriverCreateCapabilities
>(driver=driver@entry=0x7fcd34342370) at qemu/qemu_conf.c:766
> i = <optimized out>
> j = <optimized out>
> caps = <optimized out>
> sec_managers = 0x0
> doi = <optimized out>
> model = <optimized out>
> lbl = <optimized out>
> type = <optimized out>
> cfg = 0x7fcd3448cbb0
> virtTypes = {3, 1}
> __FUNCTION__ = "virQEMUDriverCreateCapabilities"
> __func__ = "virQEMUDriverCreateCapabilities"
>#12 0x00007fcd4051fef3 in qemuStateInitialize (privileged=true,
>callback=<optimized out>, opaque=<optimized out>) at qemu/qemu_driver.c:844
> driverConf = 0x0
> conn = 0x0
> cfg = 0x7fcd3448cbb0
> run_uid = <optimized out>
> run_gid = <optimized out>
> hugepagePath = 0x0
> i = <optimized out>
> __FUNCTION__ = "qemuStateInitialize"
>#13 0x00007fcd6f1789af in virStateInitialize (privileged=<optimized
>out>, callback=0x55f56a9b3180 <daemonInhibitCallback>,
>opaque=0x55f56be1cf00) at libvirt.c:770
> i = 9
> __func__ = "virStateInitialize"
>#14 0x000055f56a9b31db in daemonRunStateInit (opaque=0x55f56be1cf00) at
>libvirtd.c:959
> dmn = 0x55f56be1cf00
> sysident = 0x7fcd34000910
> __func__ = "daemonRunStateInit"
>#15 0x00007fcd6f0d98f2 in virThreadHelper (data=<optimized out>) at
>util/virthread.c:206
> args = 0x0
> local = {func = 0x55f56a9b31a0 <daemonRunStateInit>, funcName =
>0x55f56a9f28d3 "daemonRunStateInit", worker = false, opaque =
>0x55f56be1cf00}
>#16 0x00007fcd6b7766ca in start_thread (arg=0x7fcd3bf18700) at
>pthread_create.c:333
> __res = <optimized out>
> pd = 0x7fcd3bf18700
> now = <optimized out>
> unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140519450707712,
>-3574063505887647860, 0, 140732281962543, 140519450708416,
>140519450707712, 3601779982174594956, 3601954753778231180},
>mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev =
>0x0, cleanup = 0x0, canceltype = 0}}}
> not_first_call = <optimized out>
> pagesize_m1 = <optimized out>
> sp = <optimized out>
> freesize = <optimized out>
> __PRETTY_FUNCTION__ = "start_thread"
>#17 0x00007fcd6b4b0f7f in clone () at
>.../sysdeps/unix/sysv/linux/x86_64/clone.S:105
>No locals.
>
[...]
7 years, 5 months