Re: [libvirt-users] Compiling Libvirt on Windows for Hyper V support
by Daniel P. Berrangé
Re-adding the mailing list CC - please don't drop it.
On Tue, Aug 20, 2019 at 11:54:34AM -0400, reza shahriari wrote:
> Hi,
>
> I am using msys2. When I run `./configure` without any parameters, I still see the missing pthreads impl.
>
> I have installed mingw64 cross compiler toolchain. Here is the output of pacman I see when searching for pthreads (they are all installed).
>
> mingw64/mingw-w64-x86_64-libwinpthread-git 7.0.0.5480.e14d23be-1 (mingw-w64-x86_64-toolchain) [installed]
> MinGW-w64 winpthreads library
> mingw64/mingw-w64-x86_64-winpthreads-git 7.0.0.5480.e14d23be-1 (mingw-w64-x86_64-toolchain) [installed]
> MinGW-w64 winpthreads library
> msys/mingw-w64-cross-winpthreads-git 7.0.0.5480.b627284b-1 (mingw-w64-cross-toolchain mingw-w64-cross) [installed]
> MinGW-w64 winpthreads library for cross-compiler
>
>
> Also, is there any other way to enable Hyper-V domains on Libvirt?
The pthreads library is a mandatory part of libvirt, so we can't
skip that.
Can you provide your "config.log" file - if it is larger than
200 KB, please compress it or upload it somewhere.
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
[libvirt-users] Starting VM fails with: "Setting different DAC user or group on /path... which is already in use" after upgrading to libvirt 5.6.0-1
by Nir Soffer
Hi,
I upgraded a Fedora 29 host, using the virt-preview repo, to
libvirt-daemon-5.6.0-1.fc29.x86_64.
The host was using plain Fedora 29 without virt-preview before that.
After the upgrade, starting some VMs that were running fine now fails with
this error:
Error starting domain: internal error: child reported (status=125):
Requested operation is not valid: Setting different DAC user or group on
/home/libvirt/images/voodoo4-os.img which is already in use
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in
cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb
callback(*args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line
66, in newfn
ret = fn(self, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/domain.py", line 1279,
in startup
self._backend.create()
File "/usr/lib64/python3.7/site-packages/libvirt.py", line 1089, in create
if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirt.libvirtError: internal error: child reported (status=125):
Requested operation is not valid: Setting different DAC user or group on
/home/libvirt/images/voodoo4-os.img which is already in use
These VMs were created by creating one VM and then cloning it.
I tried to delete the disks and add them back in one of the VMs, but the VM
still fails with the same error.
I hope that someone has a clue what the issue is and how it can be fixed.
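A minimal sketch (libvirt Python bindings, illustrative only, not part of
the original report) to list every defined domain that references the image
path from the error above, since the error claims the file is already in
use by another domain:

import libvirt
import xml.etree.ElementTree as ET

PATH = '/home/libvirt/images/voodoo4-os.img'

conn = libvirt.open('qemu:///system')
for dom in conn.listAllDomains():
    root = ET.fromstring(dom.XMLDesc(0))
    for src in root.findall('./devices/disk/source'):
        if src.get('file') == PATH:
            # This domain's definition references the same image file.
            print(dom.name(), 'references', PATH)
conn.close()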
Here are some details about the setup:
vm1:
<domain type='kvm'>
<name>voodoo4</name>
<uuid>0b3aa57a-00b6-4e99-81f9-8f216f85ccaf</uuid>
<title>voodoo4 (fedora 29, gluster)</title>
<metadata>
<libosinfo:libosinfo xmlns:libosinfo="
http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://fedoraproject.org/fedora/29"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<vmport state='off'/>
</features>
<cpu mode='host-model' check='partial'>
<model fallback='allow'/>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='sda' bus='sata'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/home/libvirt/images/voodoo4-os.img'/>
<target dev='vda' bus='virtio'/>
<serial>os</serial>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00'
function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/home/libvirt/images/voodoo4-gv0.img'/>
<target dev='vdb' bus='virtio'/>
<serial>gv0</serial>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00'
function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/home/libvirt/images/voodoo4-gv1.img'/>
<target dev='vdc' bus='virtio'/>
<serial>gv1</serial>
<address type='pci' domain='0x0000' bus='0x08' slot='0x00'
function='0x0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00'
function='0x0'/>
</controller>
<controller type='sata' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f'
function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0x15'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0x16'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x6'/>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='8' port='0x17'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x7'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00'
function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:dd:a4:5c'/>
<source bridge='ovirt'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00'
function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'/>
<address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='spice' autoport='yes'>
<listen type='address'/>
</graphics>
<sound model='ich9'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1b'
function='0x0'/>
</sound>
<video>
<model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'
primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x0'/>
</video>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='2'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='3'/>
</redirdev>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00'
function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00'
function='0x0'/>
</rng>
</devices>
</domain>
vm 2:
<domain type='kvm'>
<name>voodoo5</name>
<uuid>8ded8ea2-6524-4fc0-94f6-31667338a5f2</uuid>
<title>voodoo5 (fedora 29, gluster)</title>
<metadata>
<libosinfo:libosinfo xmlns:libosinfo="
http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://fedoraproject.org/fedora/29"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<vmport state='off'/>
</features>
<cpu mode='host-model' check='partial'>
<model fallback='allow'/>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/home/libvirt/images/voodoo5-os.img'/>
<target dev='vda' bus='virtio'/>
<serial>os</serial>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00'
function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/home/libvirt/images/voodoo5-gv0.img'/>
<target dev='vdb' bus='virtio'/>
<serial>gv0</serial>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00'
function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/home/libvirt/images/voodoo5-gv1.img'/>
<target dev='vdc' bus='virtio'/>
<serial>gv1</serial>
<address type='pci' domain='0x0000' bus='0x08' slot='0x00'
function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='sda' bus='sata'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00'
function='0x0'/>
</controller>
<controller type='sata' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f'
function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0x15'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0x16'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x6'/>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='8' port='0x17'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x7'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00'
function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:53:6a:b4'/>
<source bridge='ovirt'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00'
function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'/>
<address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='spice' autoport='yes'>
<listen type='address'/>
</graphics>
<sound model='ich9'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1b'
function='0x0'/>
</sound>
<video>
<model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'
primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x0'/>
</video>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='2'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='3'/>
</redirdev>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00'
function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00'
function='0x0'/>
</rng>
</devices>
</domain>
# ls -lhZ /home/libvirt/images/voodoo4*
-rw-------. 1 root root system_u:object_r:virt_image_t:s0 20G Aug 17 03:55
/home/libvirt/images/voodoo4-gv0.img
-rw-------. 1 root root system_u:object_r:virt_image_t:s0 20G Aug 17 03:55
/home/libvirt/images/voodoo4-gv1.img
-rw-------. 1 root root system_u:object_r:virt_image_t:s0 50G Aug 17 03:52
/home/libvirt/images/voodoo4-os.img
cat /etc/libvirt/storage/images.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made
using:
virsh pool-edit images
or other application using the libvirt API.
-->
The related pool:
<pool type='dir'>
<name>images</name>
<uuid>f7190095-947d-442b-b94b-4a99790795bc</uuid>
<capacity unit='bytes'>0</capacity>
<allocation unit='bytes'>0</allocation>
<available unit='bytes'>0</available>
<source>
</source>
<target>
<path>/home/libvirt/images</path>
<permissions>
<mode>0755</mode>
<owner>-1</owner>
<group>-1</group>
</permissions>
</target>
</pool>
Nir
[libvirt-users] Compiling Libvirt on Windows for Hyper V support
by reza shahriari
Hi,
I am trying to compile Libvirt from the source code on Windows using msys2, but I keep hitting issues while running `./configure`.
...
> checking whether C compiler handles -Wno-suggest-attribute=pure... yes
> checking whether C compiler handles -Wno-suggest-attribute=const... yes
> checking for how to force completely read-only GOT table...
> checking for how to avoid indirect lib deps... -Wl,--no-copy-dt-needed-entries
> checking for how to stop undefined symbols at link time...
> checking sys/acl.h usability... no
> checking sys/acl.h presence... no
> checking for sys/acl.h... no
> checking for aa_change_profile in -lapparmor... no
> checking for pthread_mutexattr_init... no
> checking for pthread.h... (cached) no
> configure: error: A pthreads impl is required for building libvirt
...
Does anyone have any experience compiling Libvirt on a Windows machine? Specifically, I am trying to enable Hyper-V domains in Libvirt.
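For what it's worth, libvirt's Hyper-V driver is a client-side driver that
talks to the Hyper-V host over WS-Management, so a libvirt client built
with openwsman support (e.g. on Linux) can manage Hyper-V remotely. A
minimal sketch with the Python bindings; the host name and credentials are
hypothetical placeholders:

import libvirt

def request_cred(credentials, user_data):
    # Fill in whatever credentials libvirt asks for; placeholder values.
    for cred in credentials:
        if cred[0] == libvirt.VIR_CRED_AUTHNAME:
            cred[4] = 'Administrator'
        elif cred[0] == libvirt.VIR_CRED_PASSPHRASE:
            cred[4] = 'secret'
    return 0

auth = [[libvirt.VIR_CRED_AUTHNAME, libvirt.VIR_CRED_PASSPHRASE],
        request_cred, None]
conn = libvirt.openAuth('hyperv://hyperv-host.example.com/', auth, 0)
print([dom.name() for dom in conn.listAllDomains()])
conn.close()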
Thanks,
Reza Shahriari
[libvirt-users] Libvirt API for attaching volume to domain
by Varsha Verma
Hi,
I wanted to know whether there is any Python API exposed for libvirt with
which I can attach a libvirt volume to a libvirt domain/VM.
I intend to do something similar to the `attach-disk` command of virsh
using Python.
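A minimal sketch of the virsh attach-disk equivalent, using
virDomain.attachDeviceFlags() from the libvirt Python bindings and a
hand-written <disk> element (the domain name and image path are
hypothetical placeholders):

import libvirt

DISK_XML = """
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/extra.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('example-vm')
# AFFECT_LIVE hot-plugs the disk into the running guest;
# AFFECT_CONFIG also persists it in the domain definition.
dom.attachDeviceFlags(DISK_XML,
                      libvirt.VIR_DOMAIN_AFFECT_LIVE |
                      libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()

If the volume lives in a libvirt storage pool, its path can be obtained
with virStorageVol.path() and substituted into the <source> element.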
--
*Regards,*
*Varsha Verma*
*Final Year Undergraduate*
*Department of Electrical Engineering*
*IIT-BHU, Varanasi*
[libvirt-users] does virsh have a history with timestamps ?
by Lentes, Bernd
Hi,
I know that virsh has its own history, but are the respective timestamps logged somewhere?
Bernd
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.lentes@helmholtz-muenchen.de
phone: +49 89 3187 1241
phone: +49 89 3187 3827
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg
Perfect is he who makes no mistakes
So the dead are perfect
Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Chair of the Supervisory Board: MinDir'in Prof. Dr. Veronika von Messling
Management: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, Kerstin Guenther
Registration court: Amtsgericht Muenchen HRB 6466
VAT ID: DE 129521671
[libvirt-users] OVS / KVM / libvirt / MTU
by Sven Vogel
Hey there,
I hope someone can shed some light on the following problem.
<interface type='bridge'>
<source bridge='cloudbr0'/>
<mtu size='2000'/>
<mac address='02:00:74:76:00:01'/>
<model type='virtio'/>
<virtualport type='openvswitch'>
</virtualport>
<vlan trunk='no'>
<tag id='2097'/>
</vlan><link state='up'/>
</interface>
We have a base bridge, for example cloudbr0. After we add an MTU to the VM interface here, it seems the base bridge gets the same MTU as the vnet adapter.
Is this normal behaviour of libvirt together with OVS?
-- ovs-vsctl show
5b154321-534d-413e-9761-60476ae06640
Bridge "cloudbr0"
Port "cloudbr0"
Interface "cloudbr0"
type: internal
-- MTU of the bridge after setting an MTU in the XML file (before, we had 9000 here)
mtu : 2000
mtu_request : []
name : "cloudbr0"
ofport : 65534
-- MTU of the vnet interface
mac_in_use : "fe:00:74:76:00:01"
mtu : 1450
mtu_request : 1450
name : "vnet2"
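To double-check the MTU libvirt actually recorded for the guest interface,
it can be read back from the live domain XML. A minimal sketch with the
libvirt Python bindings; the domain name is a hypothetical placeholder:

import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('example-vm')
root = ET.fromstring(dom.XMLDesc(0))
for iface in root.findall('./devices/interface'):
    target = iface.find('target')   # e.g. the vnet2 tap device
    mtu = iface.find('mtu')
    print(target.get('dev') if target is not None else '?',
          mtu.get('size') if mtu is not None else 'no explicit mtu')
conn.close()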
Thanks for the help...
__
Sven Vogel
Teamlead Platform
EWERK RZ GmbH
Brühl 24, D-04109 Leipzig
P +49 341 42649 - 11
F +49 341 42649 - 18
S.Vogel@ewerk.com
www.ewerk.com
Managing Directors:
Dr. Erik Wende, Hendrik Schubert, Frank Richter
Registration court: Leipzig HRB 17023
Certified according to:
ISO/IEC 27001:2013
DIN EN ISO 9001:2015
DIN ISO/IEC 20000-1:2011
EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
Information and offers sent by e-mail are subject to change and non-binding.
Disclaimer Privacy:
The contents of this e-mail (including any attachments) are confidential and may be legally privileged. If you are not the intended recipient of this e-mail, any disclosure, copying, distribution or use of its contents is strictly prohibited, and you should please notify the sender immediately and then delete it (including any attachments) from your system. Thank you.
[libvirt-users] Vm in state "in shutdown"
by 马昊骢 Ianmalcolm Ma
Description of problem:
libvirt 3.9 on CentOS Linux release 7.4.1708 (kernel 3.10.0-693.21.1.el7.x86_64) with QEMU version 2.10.0.
I’m currently facing a strange situation. Sometimes my VM is shown by ‘virsh list’ as in state “in shutdown”, but there is no qemu-kvm process linked to it.
The libvirt log when the “in shutdown” state occurs is as follows.
“d470c3b284425b9bacb34d3b5f3845fe” is the VM’s name; the remoteDispatchDomainMemoryStats API is called by ‘collectd’, which collects some VM running state and host information once every 30 s.
2019-07-25 14:23:58.706+0000: 15818: warning : qemuMonitorJSONIOProcessEvent:235 : type: POWERDOWN, vm: d470c3b284425b9bacb34d3b5f3845fe, cost 1.413 secs
2019-07-25 14:23:59.601+0000: 15818: warning : qemuMonitorJSONIOProcessEvent:235 : type: SHUTDOWN, vm: d470c3b284425b9bacb34d3b5f3845fe, cost 1.202 secs
2019-07-25 14:23:59.601+0000: 15818: warning : qemuMonitorJSONIOProcessEvent:235 : type: STOP, vm: d470c3b284425b9bacb34d3b5f3845fe, cost 1.203 secs
2019-07-25 14:23:59.601+0000: 15818: warning : qemuMonitorJSONIOProcessEvent:235 : type: SHUTDOWN, vm: d470c3b284425b9bacb34d3b5f3845fe, cost 1.203 secs
2019-07-25 14:23:59.629+0000: 15818: error : qemuMonitorIORead:597 : Unable to read from monitor: Connection reset by peer
2019-07-25 14:23:59.629+0000: 121081: warning : qemuProcessEventHandler:4840 : vm: d470c3b284425b9bacb34d3b5f3845fe, event: 6 locked
2019-07-25 14:23:59.629+0000: 15822: error : qemuMonitorJSONCommandWithFd:364 : internal error: Missing monitor reply object
2019-07-25 14:24:29.483+0000: 15821: warning : qemuGetProcessInfo:1468 : cannot parse process status data
2019-07-25 14:24:29.829+0000: 15823: warning : qemuDomainObjBeginJobInternal:4391 : Cannot start job (modify, none) for domain d470c3b284425b9bacb34d3b5f3845fe; current job is (query, none) owned by (15822 remoteDispatchDomainMemoryStats, 0 <null>) for (30s, 0s)
2019-07-25 14:24:29.829+0000: 15823: error : qemuDomainObjBeginJobInternal:4403 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMemoryStats)
2019-07-25 14:24:29.829+0000: 121081: warning : qemuDomainObjBeginJobInternal:4391 : Cannot start job (destroy, none) for domain d470c3b284425b9bacb34d3b5f3845fe; current job is (query, none) owned by (15822 remoteDispatchDomainMemoryStats, 0 <null>) for (30s, 0s)
2019-07-25 14:24:29.829+0000: 121081: error : qemuDomainObjBeginJobInternal:4403 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMemoryStats)
2019-07-25 14:24:29.829+0000: 121081: warning : qemuProcessEventHandler:4875 : vm: d470c3b284425b9bacb34d3b5f3845fe, event: 6, cost 31.459 secs
I’ve tried to find out how this problem happened. I analyzed the execution process of the job and speculated that the problem occurred as follows:
step one: libvirt sends the command 'system_powerdown' to qemu.
step two: libvirt receives the qemu monitor close event, and then handles the EOF event.
step three: a remoteDispatchDomainMemoryStats job starts on the same VM.
step four: the worker thread handling the stop job waits on job.cond; the timeout is 30 s.
It seems that the remoteDispatchDomainMemoryStats job is so slow that the stop job's wait times out.
Then I tried to reproduce this process. The steps are as follows:
First step: add a sleep in 'qemuProcessEventHandler' using pthread_cond_timedwait, so that the 'virsh dommemstat active' command can be executed in this time window.
Second step: start a VM to test.
Third step: execute 'virsh shutdown active' to shut down the VM.
Fourth step: execute 'virsh dommemstat active' while the stop job is sleeping (the Python equivalent is sketched below).
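For reference, 'virsh dommemstat' corresponds to virDomain.memoryStats()
in the libvirt Python bindings. A minimal sketch, assuming the test domain
is named 'active' as in the steps above:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('active')
# memoryStats() returns a dict, e.g. {'actual': ..., 'rss': ...}
for key, value in dom.memoryStats().items():
    print(key, value)
conn.close()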
Then it works. The test VM's state became 'in shutdown', and the libvirt log is as follows.
“active” is my test VM's name.
2019-08-05 08:39:57.001+0000: 25889: warning : qemuDomainObjBeginJobInternal:4308 : Starting job: modify (vm=0x7f7bbc145fe0 name=active, current job=none async=none)
2019-08-05 08:39:57.003+0000: 25889: warning : qemuDomainObjEndJob:4522 : Stopping job: modify (async=none vm=0x7f7bbc145fe0 name=active)
2019-08-05 08:39:57.003+0000: 25881: warning : qemuMonitorJSONIOProcessEvent:235 : type: POWERDOWN, vm: active, cost 0.008 secs
2019-08-05 08:39:57.854+0000: 25881: warning : qemuMonitorJSONIOProcessEvent:235 : type: SHUTDOWN, vm: active, cost 1.709 secs
2019-08-05 08:39:57.875+0000: 25881: warning : qemuMonitorJSONIOProcessEvent:235 : type: STOP, vm: active, cost 1.751 secs
2019-08-05 08:39:57.875+0000: 25881: warning : qemuMonitorJSONIOProcessEvent:235 : type: SHUTDOWN, vm: active, cost 1.751 secs
2019-08-05 08:39:57.915+0000: 25881: warning : qemuMonitorIO:756 : Error on monitor <null>
2019-08-05 08:39:57.915+0000: 25881: warning : qemuMonitorIO:777 : Triggering EOF callback
2019-08-05 08:39:57.915+0000: 26915: warning : qemuProcessEventHandler:4822 : usleep 20s
2019-08-05 08:40:01.004+0000: 25886: warning : qemuDomainObjBeginJobInternal:4308 : Starting job: query (vm=0x7f7bbc145fe0 name=active, current job=none async=none)
^@2019-08-05 08:40:17.915+0000: 26915: warning : qemuProcessEventHandler:4845 : vm=0x7f7bbc145fe0, event=6
2019-08-05 08:40:17.915+0000: 26915: warning : qemuProcessEventHandler:4851 : vm: active, event: 6 locked
2019-08-05 08:40:17.915+0000: 26915: warning : qemuDomainObjBeginJobInternal:4308 : Starting job: destroy (vm=0x7f7bbc145fe0 name=active, current job=query async=none)
2019-08-05 08:40:17.915+0000: 26915: warning : qemuDomainObjBeginJobInternal:4344 : Waiting for job (vm=0x7f7bbc145fe0 name=active), job 26915 qemuProcessEventHandler, owned by 25886 remoteDispatchDomainMemoryStats
^@^@^@2019-08-05 08:40:47.915+0000: 26915: warning : qemuDomainObjBeginJobInternal:4405 : Cannot start job (destroy, none) for domain active; current job is (query, none) owned by (25886 remoteDispatchDomainMemoryStats, 0 <null>) for (46s, 0s)
2019-08-05 08:40:47.915+0000: 26915: error : qemuDomainObjBeginJobInternal:4417 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMemoryStats)
2019-08-05 08:40:47.915+0000: 26915: warning : qemuProcessEventHandler:4886 : vm: active, event: 6, cost 31.830 secs
The thread backtraces are as follows:
[Switching to thread 17 (Thread 0x7f12fddf5700 (LWP 32580))]
#2 0x00007f12eb26df21 in qemuMonitorSend (mon=mon@entry=0x7f12d0013af0,
msg=msg@entry=0x7f12fddf46e0) at qemu/qemu_monitor.c:1075
1075 if (virCondWait(&mon->notify, &mon->parent.lock) < 0) {
(gdb) l
1070 PROBE(QEMU_MONITOR_SEND_MSG,
1071 "mon=%p msg=%s fd=%d",
1072 mon, mon->msg->txBuffer, mon->msg->txFD);
1073
1074 while (mon && mon->msg && !mon->msg->finished) {
1075 if (virCondWait(&mon->notify, &mon->parent.lock) < 0) {
1076 virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
1077 _("Unable to wait on monitor condition"));
1078 goto cleanup;
1079 }
(gdb) p *msg
$2 = {txFD = -1,
txBuffer = 0x7f12b8000d50 "{\"execute\":\"qom-list\",\"arguments\":{\"path\":\"/machine/peripheral\"},\"id\":\"libvirt-17\"}\r\n", txOffset = 0,
txLength = 85, rxBuffer = 0x0, rxLength = 0, rxObject = 0x0,
finished = false, passwordHandler = 0x0, passwordOpaque = 0x0}
(gdb) bt
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007f130d738316 in virCondWait (c=c@entry=0x7f12d0013b28,
m=m@entry=0x7f12d0013b00) at util/virthread.c:154
#2 0x00007f12eb26df21 in qemuMonitorSend (mon=mon@entry=0x7f12d0013af0,
msg=msg@entry=0x7f12fddf46e0) at qemu/qemu_monitor.c:1075
#3 0x00007f12eb283b40 in qemuMonitorJSONCommandWithFd (
mon=mon@entry=0x7f12d0013af0, cmd=cmd@entry=0x7f12b80008c0,
scm_fd=scm_fd@entry=-1, reply=reply@entry=0x7f12fddf4780)
at qemu/qemu_monitor_json.c:355
#4 0x00007f12eb2904b9 in qemuMonitorJSONCommand (reply=0x7f12fddf4780,
cmd=0x7f12b80008c0, mon=0x7f12d0013af0) at qemu/qemu_monitor_json.c:385
...
I found that in the function ‘qemuMonitorUpdateWatch’, if mon->watch is zero, the events are not updated. But 'qemuMonitorSend'
would still wait on mon->notify. So the remoteDispatchDomainMemoryStats job is blocked, and then the stop job is blocked too.
618 static void
619 qemuMonitorUpdateWatch(qemuMonitorPtr mon)
620 {
621 int events =
622 VIR_EVENT_HANDLE_HANGUP |
623 VIR_EVENT_HANDLE_ERROR;
624
625 if (!mon->watch)
626 return;
627
628 if (mon->lastError.code == VIR_ERR_OK) {
629 events |= VIR_EVENT_HANDLE_READABLE;
630
631 if ((mon->msg && mon->msg->txOffset < mon->msg->txLength) &&
632 !mon->waitGreeting)
633 events |= VIR_EVENT_HANDLE_WRITABLE;
634 }
635
636 virEventUpdateHandle(mon->watch, events);
637 }
I tried to fix this bug by adding a check before qemuMonitorUpdateWatch, and it seems to work.
1046 int
1047 qemuMonitorSend(qemuMonitorPtr mon,
1048 qemuMonitorMessagePtr msg)
1049 {
1050 int ret = -1;
1051
1052 /* Check whether qemu quit unexpectedly */
1053 if (mon->lastError.code != VIR_ERR_OK) {
1054 VIR_DEBUG("Attempt to send command while error is set %s",
1055 NULLSTR(mon->lastError.message));
1056 virSetError(&mon->lastError);
1057 return -1;
1058 }
1059
1060 if (!mon->watch) {
1061 VIR_WARN("Attempt to send command while mon->watch is zero");
1062 virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
1063 _("attempt to send command when the monitor is closing"));
1064 return -1;
1065 }
1066
1067 mon->msg = msg;
1068 qemuMonitorUpdateWatch(mon);
1069
1070 PROBE(QEMU_MONITOR_SEND_MSG,
1071 "mon=%p msg=%s fd=%d",
1072 mon, mon->msg->txBuffer, mon->msg->txFD);
1073
1074 while (mon && mon->msg && !mon->msg->finished) {
1075 if (virCondWait(&mon->notify, &mon->parent.lock) < 0) {
1076 virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
1077 _("Unable to wait on monitor condition"));
1078 goto cleanup;
1079 }
1080 }
1081
1082 if (mon->lastError.code != VIR_ERR_OK) {
1083 VIR_DEBUG("Send command resulted in error %s",
1084 NULLSTR(mon->lastError.message));
1085 virSetError(&mon->lastError);
1086 goto cleanup;
1087 }
1088
1089 ret = 0;
1090
1091 cleanup:
1092 mon->msg = NULL;
1093 qemuMonitorUpdateWatch(mon);
1094
1095 return ret;
1096 }
I’m not really sure whether this change will affect other processes.
Thanks!
Ma haocong
[libvirt-users] libvirt/dnsmasq is not adhering to static DHCP assignments
by Christian Kujau
This is basically a continuation of an older posting[0] I found, but
apparently no solution has been posted. So, I'm trying to set up static
DHCP leases with the dnsmasq instance that is being started by libvirtd:
-----------------------------------------------------------
$ sudo virsh net-dumpxml --network default
<network>
[...]
<dns enable='no'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.130' end='192.168.122.250'/>
<host mac='08:00:27:e2:81:39' name='f30' ip='192.168.56.139'/>
-----------------------------------------------------------
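For reference, such a <host> entry can also be added or changed at runtime
through the network update API instead of editing the XML. A minimal
sketch with the libvirt Python bindings; MAC, name and IP are purely
illustrative:

import libvirt

conn = libvirt.open('qemu:///system')
net = conn.networkLookupByName('default')
net.update(libvirt.VIR_NETWORK_UPDATE_COMMAND_ADD_LAST,
           libvirt.VIR_NETWORK_SECTION_IP_DHCP_HOST,
           -1,  # parentIndex: -1 means the first applicable <ip> element
           "<host mac='52:54:00:00:00:01' name='examplehost' ip='192.168.122.140'/>",
           libvirt.VIR_NETWORK_UPDATE_AFFECT_LIVE |
           libvirt.VIR_NETWORK_UPDATE_AFFECT_CONFIG)
conn.close()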
And the domain is indeed being started with that hardware address:
-----------------------------------------------------------
$ virsh dumpxml --domain f30 | grep -B1 -A3 mac\ a
<interface type='bridge'>
<mac address='08:00:27:e2:81:39'/>
<source bridge='virbr0'/>
<target dev='tap0'/>
<model type='virtio'/>
-----------------------------------------------------------
But for some reason the domain gets a different address assigned, albeit
from the correct DHCP pool:
-----------------------------------------------------------
$ ssh 192.168.122.233 "ip addr show"
[...]
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 08:00:27:e2:81:39 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.233/24 brd 192.168.122.255 scope global dynamic enp1s0
-----------------------------------------------------------
See below for even more details. I've removed the virbr0.status file and
restarted the network; still it gets this .233 address instead of the
configured .139. Also, I'm not able to add a debug log to the
libvirtd/dnsmasq instance, although it should be possible[1]; the
xmlns:dnsmasq stanza is overwritten as soon as the configuration is saved.
Short of a debug (queries) log, strace on the libvirtd/dnsmasq process
reveals that the client seems to request this strange IP address:
23002 19:01:59.053558 write(11, "<30>Jul 31 19:01:59 dnsmasq-dhcp[23002]: DHCPREQUEST(virbr0) 192.168.122.233 08:00:27:e2:81:39 ", 95) = 95
23002 19:01:59.054024 write(11, "<30>Jul 31 19:01:59 dnsmasq-dhcp[23002]: DHCPACK(virbr0) 192.168.122.233 08:00:27:e2:81:39 f30", 94) = 94
So I restarted the dhclient process in the domain (a freshly installed
Fedora 30) and removed all state files with "192.168.122.233" in them, but
still the domain gets assigned the .233 instead of the .139 address.
The next step would be to disable libvirtd/dnsmasq altogether and run my
own dnsmasq instance, but I wanted to avoid that. Any ideas on where to
look next?
Thanks,
Christian.
[0] https://www.redhat.com/archives/libvirt-users/2017-October/msg00070.html
[1] https://libvirt.org/formatnetwork.html#elementsNamespaces
# cd /var/lib/libvirt/dnsmasq/
# grep -r . . | fgrep -v \#
./default.conf:strict-order
./default.conf:port=0
./default.conf:pid-file=/var/run/libvirt/network/default.pid
./default.conf:except-interface=lo
./default.conf:bind-dynamic
./default.conf:interface=virbr0
./default.conf:dhcp-range=192.168.122.130,192.168.122.250,255.255.255.0
./default.conf:dhcp-no-override
./default.conf:dhcp-authoritative
./default.conf:dhcp-lease-max=121
./default.conf:dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
./default.hostsfile:08:00:27:e2:81:39,192.168.56.139,f30
./virbr0.status:[
./virbr0.status: {
./virbr0.status: "ip-address": "192.168.122.233",
./virbr0.status: "mac-address": "08:00:27:e2:81:39",
./virbr0.status: "hostname": "f30",
./virbr0.status: "client-id": "ff:27:e2:81:39:00:04:8c:ad:c4:7d:04:e8:4b:de:93:4b:76:d8:75:82:86:c8",
./virbr0.status: "expiry-time": 1564628519
./virbr0.status: }
./virbr0.status:]
--
BOFH excuse #40:
not enough memory, go get system upgrade
[libvirt-users] Reg: content of disk is not reflecting in host.
by bharath paulraj
Hi Team,
I am doing a small test and I don't know whether my expectation is correct.
Pardon me if I am ignorant.
I created a VM and the VM is running. In the hypervisor I created a
".img" file and attached this .img file to the VM.
My expectation is that if the VM writes files to the attached disk, they
should be reflected in the .img file created in the hypervisor. But it is
not working as I expected.
Please correct me if my expectation is wrong.
Steps:
1. Created disk.img in the hypervisor using the command: dd if=/dev/zero
of=disk.img bs=1M count=50; mkfs ext3 -F disk.img
2. Attached the disk to the running VM using the command: virsh attach-disk
<Domain-Name> --source disk.img --target vdb --live
3. In the VM, I mounted the disk and created few files.
4. In the hypervisor, I mounted the disk.img to check if the file created
in the VM exists in the .img file.
>> I am not able to see those files.
Regards,
Bharath