On Sat, Jun 03, 2017 at 08:27:08PM -0400, Michael C Cambria wrote:
>
>
> On 06/03/2017 05:47 PM, Martin Kletzander wrote:
>> On Sat, Jun 03, 2017 at 05:20:47PM -0400, Michael C Cambria wrote:
>>> I also tried stopping libvirtd, renaming both qemu-system-i386 and
>>> qemu-system-x86_64, start libvirtd. Things get further along; dnsmasq
>>> log messages show up.
>>>
>>> $ sudo systemctl status libvirtd.service
>>> ● libvirtd.service - Virtualization daemon
>>> Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
>>> vendor preset: enabled)
>>> Active: active (running) since Sat 2017-06-03 17:14:25 EDT; 7s ago
>>> Docs: man:libvirtd(8)
>>>
>>> http://libvirt.org
>>> Main PID: 3246 (libvirtd)
>>> Tasks: 21 (limit: 4915)
>>> Memory: 107.4M
>>> CPU: 300ms
>>> CGroup: /system.slice/libvirtd.service
>>> ├─3158 /usr/bin/qemu-system-i386 -S -no-user-config
>>> -nodefaults -nographic -M none -qmp
>>> unix:/var/lib/libvirt/qemu/capabilities.monitor.sock,server,nowait
>>> -pidfile /var/lib/libvirt/qemu/capabilities.
>>> ├─3160 /usr/bin/qemu-system-i386 -S -no-user-config
>>> -nodefaults -nographic -M none -qmp
>>> unix:/var/lib/libvirt/qemu/capabilities.monitor.sock,server,nowait
>>> -pidfile /var/lib/libvirt/qemu/capabilities.
>>> ├─3246 /usr/sbin/libvirtd
>>> ├─3457 /sbin/dnsmasq
>>> --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro
>>> --dhcp-script=/usr/libexec/libvirt_leaseshelper
>>> └─3458 /sbin/dnsmasq
>>> --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro
>>> --dhcp-script=/usr/libexec/libvirt_leaseshelper
>>>
>>> Jun 03 17:14:25 example.com libvirtd[3246]: Failed to probe
>>> capabilities
>>> for /usr/bin/qemu-kvm: internal error: Child process (LC_ALL=C
>>> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /usr/bin/qemu-k
>>> Jun 03 17:14:26 example.com dnsmasq[3457]: started, version 2.76
>>> cachesize 150
>>> Jun 03 17:14:26 example.com dnsmasq[3457]: compile time options: IPv6
>>> GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset
>>> auth DNSSEC loop-detect inotify
>>> Jun 03 17:14:26 example.com dnsmasq-dhcp[3457]: DHCP, IP range
>>> 192.168.122.2 -- 192.168.122.254, lease time 1h
>>> Jun 03 17:14:26 example.com dnsmasq-dhcp[3457]: DHCP, sockets bound
>>> exclusively to interface virbr0
>>> Jun 03 17:14:26 example.com dnsmasq[3457]: reading /etc/resolv.conf
>>> Jun 03 17:14:26 example.com dnsmasq[3457]: using nameserver
>>> 172.16.8.1#53
>>> Jun 03 17:14:26 example.com dnsmasq[3457]: read /etc/hosts - 2
>>> addresses
>>> Jun 03 17:14:26 example.com dnsmasq[3457]: read
>>> /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
>>> Jun 03 17:14:26 example.com dnsmasq-dhcp[3457]: read
>>> /var/lib/libvirt/dnsmasq/default.hostsfile
>>>
>>>
>>> $ sudo virsh list
>>> Id Name State
>>> ----------------------------------------------------
>>>
>>> $
>>>
>>> As you can see, none of my VMs show up. I'm guessing the "qemu-kvm:
>>> internal error" results from renaming qemu-system-x86_64?
>>>
>>
>> Yeah, feel free to move them back.
>>
>>> Could something else (selinux?) be involved?
>>
>> It shouldn't be. Now we know it is QEMU, but because it disrupts
>> libvirt during its initialization phase, it looks like nothing works.
>>
>> You could grab a backtrace of the qemu process and ask on the
>> qemu-devel ML. Also, if you want to make sure (or see which command it
>> gets stuck on), you can enable libvirt debug logging to a file and
>> then grep it for QEMU_MONITOR_ [2] and you'll see.
>>
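To expand on the backtrace part, since I was terse there: something
like this should do, assuming gdb is installed (<PID> stands for the
stuck qemu-system-* probe you find via ps):

$ ps aux | grep [q]emu-system
$ sudo gdb -batch -p <PID> -ex 'thread apply all bt'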
> What should I be looking for? I see QEMU_MONITOR_ in the source
> (https://github.com/libvirt/libvirt/tree/v2.2-maint) but never see it in
> the logs.
>
> # pwd
> /etc/libvirt
> # diff libvirtd.conf libvirtd.conf.hold
> 1,4d0
> < log_level = 1
> < log_filters="3:remote 4:event 3:json 3:rpc"
> < log_outputs="1:file:/var/log/libvirt/libvirtd.log"
> < #
> # ls -al /var/log/libvirt/libvirtd.log
> ls: cannot access '/var/log/libvirt/libvirtd.log': No such file or
> directory
> # systemctl restart libvirtd.service
> # ls -al /var/log/libvirt/libvirtd.log
> -rw-------. 1 root root 801113 Jun 3 19:56
> /var/log/libvirt/libvirtd.log
> # grep QEMU_MONITOR_ /var/log/libvirt/libvirtd.log
> # grep QEMU_MONITOR_ /var/log/libvirt/libvirtd.log
> # grep QEMU_ /var/log/libvirt/libvirtd.log
> # grep QEMU /var/log/libvirt/libvirtd.log
> 2017-06-04 00:04:13.825+0000: 3363: debug : virRegisterConnectDriver:684
> : driver=0x7fa3f6c2e0e0 name=QEMU
> 2017-06-04 00:04:13.825+0000: 3363: debug : virRegisterConnectDriver:695
> : registering QEMU as driver 9
> 2017-06-04 00:04:14.302+0000: 3379: debug : virStateInitialize:769 :
> Running global init for QEMU state driver
> 2017-06-04 00:04:14.303+0000: 3379: info : virObjectNew:202 :
> OBJECT_NEW: obj=0x7fa3d0122250 classname=virQEMUDriverConfig
> 2017-06-04 00:04:14.305+0000: 3379: debug :
> virSecurityManagerNewDriver:86 : drv=0x7fa44d580f40 (selinux)
> virtDriver=QEMU flags=a
> 2017-06-04 00:04:14.305+0000: 3379: debug :
> virSecuritySELinuxInitialize:561 : SELinuxInitialize QEMU
> 2017-06-04 00:04:14.307+0000: 3379: debug :
> virSecuritySELinuxQEMUInitialize:516 : Loaded domain context
> 'system_u:system_r:svirt_t:s0', alt domain context
> 'system_u:system_r:svirt_tcg_t:s0'
> 2017-06-04 00:04:14.307+0000: 3379: debug :
> virSecuritySELinuxQEMUInitialize:537 : Loaded file context
> 'system_u:object_r:svirt_image_t:s0', content context
> 'system_u:object_r:virt_content_t:s0'
> 2017-06-04 00:04:14.307+0000: 3379: debug :
> virSecurityManagerNewDriver:86 : drv=0x7fa44d580cc0 (stack)
> virtDriver=QEMU flags=a
> 2017-06-04 00:04:14.307+0000: 3379: debug :
> virSecurityManagerNewDriver:86 : drv=0x7fa44d580de0 (dac)
> virtDriver=QEMU flags=a
> 2017-06-04 00:04:14.313+0000: 3379: debug : virQEMUCapsCacheLookup:3985
> : Creating capabilities for /usr/bin/qemu-system-alpha
> 2017-06-04 00:04:14.313+0000: 3379: info : virObjectNew:202 :
> OBJECT_NEW: obj=0x7fa3d010bb00 classname=virQEMUCaps
> 2017-06-04 00:04:14.313+0000: 3379: debug : virQEMUCapsInitCached:3292 :
> No cached capabilities
> '/var/cache/libvirt/qemu/capabilities/4b21437223cbdecdb4e776fe0bb40302a421554fc350bfbb986908518557721d.xml'
> for '/usr/bin/qemu-system-alpha'
> 2017-06-04 00:04:14.313+0000: 3379: debug : virQEMUCapsInitQMP:3673 :
> Try to get caps via QMP qemuCaps=0x7fa3d010bb00
> [root@eastie libvirt]#
>
> Here are the last few lines of that log file:
>
> 2017-06-04 00:04:14.313+0000: 3379: debug : virQEMUCapsCacheLookup:3985
> : Creating capabilities for /usr/bin/qemu-system-alpha
> 2017-06-04 00:04:14.313+0000: 3379: info : virObjectNew:202 :
> OBJECT_NEW: obj=0x7fa3d010bb00 classname=virQEMUCaps
> 2017-06-04 00:04:14.313+0000: 3379: debug : virFileMakePathHelper:2837 :
> path=/var/cache/libvirt/qemu/capabilities mode=0777
> 2017-06-04 00:04:14.313+0000: 3379: debug : virQEMUCapsInitCached:3292 :
> No cached capabilities
> '/var/cache/libvirt/qemu/capabilities/4b21437223cbdecdb4e776fe0bb40302a421554fc350bfbb986908518557721d.xml'
> for '/usr/bin/qemu-system-alpha'
> 2017-06-04 00:04:14.313+0000: 3379: debug : virQEMUCapsInitQMP:3673 :
> Try to get caps via QMP qemuCaps=0x7fa3d010bb00
> 2017-06-04 00:04:14.313+0000: 3379: debug : virCommandRunAsync:2434 :
> About to run LC_ALL=C
> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> /usr/bin/qemu-system-alpha -S -no-user-config -nodefaults -nographic -M
> none -qmp
> unix:/var/lib/libvirt/qemu/capabilities.monitor.sock,server,nowait
> -pidfile /var/lib/libvirt/qemu/capabilities.pidfile -daemonize
> 2017-06-04 00:04:14.314+0000: 3379: debug : virFileClose:102 : Closed
> fd 21
> 2017-06-04 00:04:14.314+0000: 3379: debug : virFileClose:102 : Closed
> fd 23
> 2017-06-04 00:04:14.314+0000: 3379: debug : virFileClose:102 : Closed
> fd 25
> 2017-06-04 00:04:14.314+0000: 3379: debug : virCommandRunAsync:2437 :
> Command result 0, with PID 3405
> #
>
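That tail is actually telling: virCommandRunAsync reports "Command
result 0, with PID 3405", so the probe gets spawned, yet no
QEMU_MONITOR_ lines ever follow, i.e. libvirt never sees the QMP
greeting. You can run the same probe by hand and see whether it ever
daemonizes; the /tmp paths below are just stand-ins for libvirt's own:

$ LC_ALL=C /usr/bin/qemu-system-alpha -S -no-user-config -nodefaults \
    -nographic -M none \
    -qmp unix:/tmp/caps.sock,server,nowait \
    -pidfile /tmp/caps.pid -daemonize

If that never returns, the hang is purely on the QEMU side.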
> I changed the log filters to:
>
> log_filters="1:remote 1:event 1:json 1:rpc"
>
You can remove this line completely; it only limits messages from those
categories whose filter numbers are higher than the log_level.
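For reference, the only two bits that really matter there are the level
and the output, i.e. something like:

log_level = 1
log_outputs="1:file:/var/log/libvirt/libvirtd.log"

followed by a restart of libvirtd.service.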
> Tried again, QEMU_MONITOR_ isn't in the log.
>
Now this is weird. I would try a few more things, but explaining them
in mail would be lengthy. Plus we would be dealing with QEMU issues at
this point. But what baffles me most is: how come there aren't more
people having the same problem?
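One concrete thing you could still try is attaching strace to the stuck
probe and seeing which syscall it blocks on (<PID> again being the
hanging qemu-system-* process):

$ sudo strace -f -p <PID> -o /tmp/qemu-probe.strace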
> I've upgraded from Fedora 20, so I probably missed merging an rpmnew
> with an existing .conf, hit a permission problem, or made some other
> mistake along the way to Fedora 25.
> Is there a "how to" similar to [1] that lets one make qemu log that it
> was invoked and how far it got?
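Not that I know of. But you can get the same effect with a logging
wrapper around the binary; a rough sketch (the log path is arbitrary,
and SELinux may want a restorecon on the new file):

$ sudo mv /usr/bin/qemu-system-x86_64 /usr/bin/qemu-system-x86_64.real
$ sudo tee /usr/bin/qemu-system-x86_64 >/dev/null <<'EOF'
#!/bin/sh
# log every invocation, then hand off to the real binary
echo "$(date): $0 $*" >> /tmp/qemu-invocations.log
exec /usr/bin/qemu-system-x86_64.real "$@"
EOF
$ sudo chmod 755 /usr/bin/qemu-system-x86_64
$ sudo restorecon /usr/bin/qemu-system-x86_64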
> I removed qemu (dnf remove qemu; sudo dnf remove qemu-common), built
> qemu 2.2-maint (assuming this relates to 2:2.7.1-6.fc25) from the
> github sources, and installed it from source (into /usr/local/bin).
> Things are a bit better. Where something like "sudo virsh pool-list"
> would just hang before, my storage pools are now actually listed. No
> luck listing my VMs, but "virsh list" and "virsh list --all" no longer
> hang like before:
> # virsh list
>  Id    Name                           State
> ----------------------------------------------------
> # virsh pool-list
>  Name                 State      Autostart
> -------------------------------------------
>  default              active     yes
>  Downloads            active     yes
>  guest_images_lvm     active     yes
> #
> The VM XML files are in /etc/libvirt/qemu. The network, virbr0, is in
> /etc/libvirt/qemu/networks, and that gets created just fine. All have
> root:root owner:group:
> $ sudo systemctl stop libvirtd.service
> $ sudo ip addr show virbr0
> 7: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
> state DOWN group default qlen 1000
>     link/ether 52:54:00:5b:f7:0b brd ff:ff:ff:ff:ff:ff
>     inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>        valid_lft forever preferred_lft forever
> $ sudo ip link del virbr0
> $ sudo ip addr show virbr0
> Device "virbr0" does not exist.
> $ sudo systemctl start libvirtd.service
> $ sudo ip addr show virbr0
> 9: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
> state DOWN group default qlen 1000
>     link/ether 52:54:00:5b:f7:0b brd ff:ff:ff:ff:ff:ff
>     inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>        valid_lft forever preferred_lft forever
> $
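One thing worth double-checking now that your rebuilt qemu lives in
/usr/local/bin: each domain XML records an <emulator> path, and if
those still point at the old /usr/bin binaries, that alone could
explain why the definitions don't show up. A quick check:

$ sudo grep '<emulator>' /etc/libvirt/qemu/*.xml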
>> Anyway, if you'd like to try some more things, feel free to catch me
>> on IRC, although I probably won't be available during this weekend.
> Not a problem. I appreciate your help.
> Thanks again for your help.
>
Not at all. I didn't help much, did I? :)
>> [1] http://wiki.libvirt.org/page/DebugLogs
>> [2] (I'm not sure about the whole thing to look for, but this is enough)
>
>
> _______________________________________________
> libvirt-users mailing list
> libvirt-users@redhat.com
> https://www.redhat.com/mailman/listinfo/libvirt-users