[libvirt-users] Ubuntu 16.04 libvirt-guests.sh [6917] - running guests under URI address default: no running guests
by Jędrek Domański
Hi
I have recently upgraded from Ubuntu 15.04 to 16.04 and now every time I
shut down my PC I get a black screen that says "libvirt-guests.sh [6917]
- running guests under URI address default: no running guests", and the PC
just hangs forever and will not shut down. I have to kill it manually to
shut it down, which is a pain in the ass. I have done some investigation
and here is what I have found out so far.
I have uninstalled libvirt, kvm and qemu:
sudo apt-get purge libvirt* kvm qemu*
The problem was gone; however, I noticed some error messages at boot
saying something like "FAILED Qemu ...". It flashes by so quickly that I
am not able to read exactly what it says. I installed libvirt again:
sudo apt install qemu-kvm libvirt-bin
and the problem is back!
So is this script trying to shut down my virtual machines? It looks like
it does, but since it does not find any, it hangs? I don't have any
machines, though. I used to have one, but I removed it. What is
interesting, my PC hangs at shutdown randomly: sometimes it hangs for
about 15 seconds and then shuts down, but in most cases it hangs forever.
So, this investigation has brought me to you guys, so please help me. I
have attached a screenshot (sorry for the Polish error message, but the
translation is in the email title).
I have also posted a thread about this on AskUbuntu:
https://askubuntu.com/questions/925696/ubuntu-16-04-libvirt-guests-sh-6917-running-guests-under-uri-address-default
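(Not from the thread, just a hedged sketch of the usual workaround: the wait at shutdown comes from the libvirt-guests service itself, and on Ubuntu its settings normally live in /etc/default/libvirt-guests — verify the path and variable names on your system before running this.)

```
# Cap how long libvirt-guests may wait for guests at shutdown
sudo sed -i 's/^#\?SHUTDOWN_TIMEOUT=.*/SHUTDOWN_TIMEOUT=30/' /etc/default/libvirt-guests

# Or, if you run no guests at all, disable the service entirely
sudo systemctl disable libvirt-guests.service
```

Either change should stop the shutdown from blocking indefinitely while the service looks for guests to suspend.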
Thank you,
Jędrzej
7 years, 4 months
[libvirt-users] Broken br0
by virt.persik@9ox.net
Hello,
I wanted to get an extra IP on my local NIC, so I ran `sudo ip addr add
192.168.1.130/24 dev enp4s0`. This didn't work as intended, so I thought
I'd restart the Ubuntu system to have things back to how they were.
Alas, this didn't happen. While the host still has network as usual, none
of the VMs are able to get a DHCP lease from the router, or any
connectivity at all.
I can still see br0 with `ip addr`, but my guess is that it's misconfigured
now.
I tried adding a br1 with virt-manager (Edit > Connection details > Network
interfaces > Add interface) and using that instead, but the app crashed
halfway through, and now I can't disable or remove that connection with
the app.
I wonder if this isn't a routing issue, but I am not proficient enough in
that area to troubleshoot and fix it.
Where should I start? I'd like to use this as an opportunity to learn more
about networking and VMs, so bear with me if I ask stupid/basic questions.
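As a starting point (my suggestion, not from the post): compare the bridge with its member ports and remove the leftover address. This assumes the bridge is br0 and the NIC is enp4s0, as described above; the extra address is the one from the original command.

```
# Check whether enp4s0 is still enslaved to br0
ip link show master br0

# Remove the leftover address if it is still present on the NIC
sudo ip addr del 192.168.1.130/24 dev enp4s0

# The IP configuration should live on br0, not on the enslaved NIC
ip addr show br0
ip addr show enp4s0
```

If enp4s0 no longer shows br0 as its master, re-enslaving it (or restarting the network configuration that creates the bridge) is the next thing to try.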
Thanks!
[libvirt-users] Activation of org.freedesktop.machine1 timed out
by Masoud
Hi
Recently I have been getting this error on our server.
How can I fix it?
# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
vendor preset: enabled)
Active: active (running) since Sat 2017-07-01 11:08:50 EDT; 2min 54s ago
Docs: man:libvirtd(8)
http://libvirt.org
Main PID: 5233 (libvirtd)
CGroup: /system.slice/libvirtd.service
└─5233 /usr/sbin/libvirtd
Jul 01 11:10:51 localhost.localdomain libvirtd[5233]: error from service:
GetMachineByPID: Activation of org.
Jul 01 11:11:16 localhost.localdomain libvirtd[5233]: error from service:
GetMachineByPID: Activation of org.
Jul 01 11:11:16 localhost.localdomain libvirtd[5233]: error from service:
GetMachineByPID: Activation of org.
Jul 01 11:11:16 localhost.localdomain libvirtd[5233]: error from service:
GetMachineByPID: Activation of org.
Jul 01 11:11:16 localhost.localdomain libvirtd[5233]: error from service:
GetMachineByPID: Activation of org.
Jul 01 11:11:16 localhost.localdomain libvirtd[5233]: error from service:
GetMachineByPID: Activation of org.
Jul 01 11:11:16 localhost.localdomain libvirtd[5233]: error from service:
GetMachineByPID: Activation of org.
Jul 01 11:11:16 localhost.localdomain libvirtd[5233]: error from service:
GetMachineByPID: Activation of org.
Jul 01 11:11:42 localhost.localdomain libvirtd[5233]: Activation of
org.freedesktop.machine1 timed out
Jul 01 11:11:42 localhost.localdomain libvirtd[5233]: internal error:
Failed to autostart VM 'v4375': Activat
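(A hedged first step, not from the post: the GetMachineByPID errors point at systemd-machined rather than at libvirt itself — libvirtd registers each guest with machined over D-Bus, and that activation is what is timing out. Service names below are the stock systemd ones; verify them locally.)

```
# Can systemd-machined start at all?
systemctl status systemd-machined.service

# It is D-Bus activated, so look for masking and for recent failures
systemctl is-enabled systemd-machined.service
journalctl -u systemd-machined.service -b

# If it turns out to be masked, unmask it and retry the guest autostart
sudo systemctl unmask systemd-machined.service
```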
--
Chief Technology Officer
Masoud
[libvirt-users] virtual drive performance
by Dominik Psenner
Hi,
I'm investigating a performance issue with a virtualized Windows Server
guest that runs on an Ubuntu machine via libvirt/qemu. While the host
can easily read/write on the RAID drive at 100 MB/s, as observable with
dd, the virtualized Windows Server running on that host barely reaches
8 MB/s and averages around 1.4 MB/s. This has grown to the extent that
the virtualized guest is often unresponsive and even unable to start its
services within the default system timeouts. Any help to improve the
situation is greatly appreciated.
This is the configuration of the virtualized guest:
~$ virsh dumpxml windows-server-2016-x64
<domain type='kvm' id='1'>
<name>windows-server-2016-x64</name>
<uuid>XXX</uuid>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>2</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-xenial'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
</hyperv>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>IvyBridge</model>
<topology sockets='1' cores='2' threads='1'/>
</cpu>
<clock offset='localtime'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
<timer name='hypervclock' present='yes'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/kvm-spice</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/var/data/virtuals/machines/windows-server-2016-x64/image.qcow2'/>
<backingStore/>
<target dev='hda' bus='ide'/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/var/data/virtuals/machines/windows-server-2016-x64/dvd.iso'/>
<backingStore/>
<target dev='hdb' bus='ide'/>
<readonly/>
<alias name='ide0-0-1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<alias name='usb'/>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<alias name='usb'/>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<alias name='usb'/>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:0e:f2:23'/>
<source bridge='br0'/>
<target dev='vnet0'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/1'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/1'>
<source path='/dev/pts/1'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='tablet' bus='usb'>
<alias name='input0'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
<listen type='address' address='127.0.0.1'/>
</graphics>
<video>
<model type='vga' vram='16384' heads='1'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='apparmor' relabel='yes'>
<label>libvirt-XXX</label>
<imagelabel>libvirt-XXX</imagelabel>
</seclabel>
</domain>
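(Not part of the original mail: in the XML above the disk sits on an emulated IDE controller and the NIC is the emulated rtl8139, both common causes of exactly this kind of throughput collapse in Windows guests. A hedged sketch of the usual remedy is switching both to virtio — this assumes the virtio-win drivers are already installed inside the Windows guest, and the device names below are illustrative.)

```
# Shut the guest down, then edit the domain XML
virsh shutdown windows-server-2016-x64
virsh edit windows-server-2016-x64

# In the editor, change the disk target from
#   <target dev='hda' bus='ide'/>
# to
#   <target dev='vda' bus='virtio'/>
# and the NIC model from
#   <model type='rtl8139'/>
# to
#   <model type='virtio'/>
```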
Cheers,
Dominik
[libvirt-users] Question about disabling '3dnowprefetch' CPU feature in Xen Guest using libvirt
by Charles Shih
Dear All,
I'm reaching out to this mailing list to ask a small question about
disabling the '3dnowprefetch' CPU feature in a Xen guest using libvirt.
This is my environment:
Fedora release 26 (Twenty Six)
4.11.0-0.rc3.git0.2.fc26.x86_64
xen-4.8.1-2.fc26.x86_64
libvirt-3.2.1-1.fc26.x86_64
I can disable the '3dnowprefetch' CPU feature in the guest via the 'xl'
command by adding `cpuid='host,3dnowprefetch=0'` to the CFG file.
However, following the instructions
(https://libvirt.org/formatdomain.html#elementsCPU), I added the
following block to my XML file:
```
<cpu mode='host-passthrough' check='none'>
<feature policy='disable' name='3dnowprefetch'/>
</cpu>
```
After creating the instance, it seemed this feature was not disabled in
my guest (I was still able to see '3dnowprefetch' in the `lscpu` output).
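(A hedged check, not from the mail: to see whether the <cpu> element reaches Xen at all, the domain XML can be converted to a native xl config and inspected for a cpuid line. This assumes the libxl driver and a virsh recent enough that 'xen-xl' is the native format name; mydomain.xml stands in for your own XML file.)

```
virsh -c xen:/// domxml-to-native xen-xl mydomain.xml | grep -i cpuid
```

If no cpuid line shows up in the generated config, the feature policy is being dropped before it ever gets to xl.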
Does anybody have an idea about this? Does libvirt support this for Xen?
Any help would be appreciated. Thank you in advance.
Regards,
Charles
Charles Shih (史晨)
Quality Engineer
Red Hat, Platform QE, Virt QE, Section 1
Email: cheshi(a)redhat.com
IRC: cheshi @ #eng-china, #hyperv ,#qa, #virt
T: +86 10 65627484 - IP: 8387484
M: +86 18611268098
[libvirt-users] node device lifecycle callback can't receive events
by netsurfed
Hi all,
I register a node device lifecycle event callback function after connecting to QEMU. In the callback function, I print the event and detail.
When I plug a USB device into the host and then pull it out, nothing happens. The callback is not called.
Could you tell me why? And what does "node device" mean?
I want to auto-hotplug USB devices using libvirt.
Below is the libvirt code:
// node device lifecycle callback function
void nodeDeviceEventLifecycleCallback(virConnectPtr conn, virNodeDevicePtr dev,
int event, int detail, void *opaque)
{
printf("event = %d, detail = %d\n", event, detail);
}
// register the node device lifecycle event
int registerNodeDevEvent(void)
{
// note: virEventRegisterDefaultImpl() must be called BEFORE the
// connection (vmh_conn) is opened, or events will not be delivered
int ret = virEventRegisterDefaultImpl();
if (ret) { return -1; }
ret = virConnectNodeDeviceEventRegisterAny(vmh_conn, NULL,
VIR_NODE_DEVICE_EVENT_ID_LIFECYCLE,
VIR_NODE_DEVICE_EVENT_CALLBACK(nodeDeviceEventLifecycleCallback),
NULL, NULL);
if (ret < 0) { return ret; }
// callbacks only fire while an event loop is running, e.g.
// while (1) virEventRunDefaultImpl(); in some thread
return 0;
}
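(A hedged sanity check, not from the mail: if libvirtd's node-device driver does not see USB devices at all — for instance because the udev backend is missing — no lifecycle events will ever be delivered, regardless of the registration code. The command below is standard virsh.)

```
# List the USB node devices libvirtd currently tracks
virsh nodedev-list --cap usb_device
```

If this list stays empty while USB devices are plugged in, the problem is on the daemon side, not in the callback registration.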
Below is some information about my hypervisor:
root@ubuntu-05:/datapool/zhuohf# virsh -v
3.4.0
root@ubuntu-05:/datapool/zhuohf# qemu-x86_64 -version
qemu-x86_64 version 2.9.0
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers