[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has
finished booting), it always fails: virsh reports success, but the interface
is still present in the domain XML. I'm not sure if there is an existing bug
for this. I have confirmed with someone that disks show similar behavior; is
this also considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 2; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting, expanding the sleep time to 10 seconds, it succeeds.
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 10; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
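Something like the following might work as a workaround instead of a fixed
sleep (untested sketch; it assumes qemu-guest-agent is running in the guest
and simply waits until the guest is far enough into boot to answer before
detaching):
# wait for the guest agent to respond, then detach
until virsh qemu-agent-command rhel7.2 '{"execute":"guest-ping"}' >/dev/null 2>&1; do
    sleep 1
done
virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0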
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
2 years, 2 months
Set hostname of guest during installation time
by john doe
Hi,
I would like to set the hostname when installing a guest. With the command
below, the hostname in the guest is not set to 'try06':
virt-install --name=try06 --graphic none --pxe --network bridge=virbr0
How can I set the hostname of the guest during installation time?
I really appreciate the support I'm getting here; I'm fairly new to
libvirt.
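One approach that might work (a sketch only, assuming the guest is
anaconda-based and a network install tree is used instead of --pxe; the
mirror URL and file names are placeholders) is to put the hostname into a
kickstart file and inject it:
# ks.cfg (fragment)
network --bootproto=dhcp --hostname=try06
# install using the injected kickstart
virt-install --name=try06 --graphics none --network bridge=virbr0 \
  --location http://mirror.example.com/centos/7/os/x86_64/ \
  --initrd-inject=ks.cfg --extra-args "inst.ks=file:/ks.cfg"
Is something along these lines the intended way, or is there a
libvirt/virt-install-native option I'm missing?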
--
John Doe
4 years
Two questions about NVDIMM devices
by Milan Zamazal
Hi,
I've run into two situations with NVDIMM support in libvirt where I'm not
sure all the parties (libvirt and I) are doing things correctly.
The first problem is with memory alignment and size changes. In
addition to the size changes QEMU applies to NVDIMMs, libvirt also
adjusts NVDIMM sizes for better alignment, in
qemuDomainMemoryDeviceAlignSize. This can lead to the size being
rounded up past the size of the backing device, and QEMU then fails to
start the VM for that reason (I've actually experienced this). I work
with emulated NVDIMM devices, not bare metal hardware, so one might
argue that in practice the device sizes should already be aligned, but
I'm not sure that must always be the case, considering labels or whatever
else the user decides to set up. And I still don't feel comfortable
having to account for two internal size adjustments (libvirt's and
QEMU's) to the `size' value I specify, with the ultimate goal of
getting the VM started and having the NVDIMM aligned properly so that
(non-NVDIMM) memory hot plug keeps working. Is the size alignment
performed by libvirt, especially the rounding up, completely correct
for NVDIMMs?
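To make the concern concrete (numbers purely illustrative, assuming a 2 MiB
alignment step): a backing device of 1048704 KiB (1 GiB plus a 128 KiB label
area), requested at its full size and rounded up to the next 2 MiB boundary,
becomes 1050624 KiB, i.e. 1920 KiB more than the device actually holds, and
QEMU then refuses to start the VM.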
The second problem is that a VM fails to start with a backing NVDIMM in
devdax mode due to SELinux preventing access to the /dev/dax* device (it
doesn't happen with any other NVDIMM modes). Who should be responsible
for handling the SELinux label appropriately in that case? libvirt, the
system administrator, anybody else? Using <seclabel> in NVDIMM's source
doesn't seem to be accepted by the domain XML schema.
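For reference, the kind of device definition I'm talking about is roughly
this (values illustrative); the <source> element here is also where a
<seclabel> does not seem to be allowed:
<memory model='nvdimm'>
  <source>
    <path>/dev/dax0.0</path>
  </source>
  <target>
    <size unit='KiB'>1048576</size>
    <node>0</node>
    <label>
      <size unit='KiB'>128</size>
    </label>
  </target>
</memory>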
Thanks,
Milan
4 years, 2 months
Problem with xen config
by Christoph
Hi all,
I have the following config on Xen (4.14):
name = "marax.chao5.int"
uuid = "e0de3cb7-3937-417d-8d63-b0993b377b6a"
maxmem = 16384
memory = 16384
kernel = '/usr/lib64/xen/boot/hvmloader'
vcpus = 16
rtc_timeoffset = 0
localtime = 1
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vif = [
"mac=00:16:3e:05:01:10,bridge=xenbr5,script=vif-bridge,model=e1000" ]
parallel = "none"
serial = "none"
type = "hvm"
loader = "/usr/lib64/xen/boot/hvmloader"
disk = [ "phy:/dev/mapper/marax_c,hda,rw",
"phy:/dev/vg_lilith/lv_marax_d,hdb,rw" ]
max_grant_frames = "128"
pci = [ "01:00.0", "01:00.1", "01:00.2", "01:00.3", "00:1f.3", "05:00.0"
]
pci_permissive = 1
keymap = "de"
vnclisten="0.0.0.0"
pci_power_mgmt=1
xen_platform_pci=1
pci_msitranslate=1
viridian=1
hpet=1
acpi=1
apic=1
pae=1
I want to convert it to XML format for use with libvirt/virsh, but I get an
error:
"error: An error occurred, but the cause is unknown"
If I comment out the line with
pci = [ "01:00.0", "01:00.1", "01:00.2", "01:00.3", "00:1f.3", "05:00.0" ]
then it works, but I don't see settings like
xen_platform_pci=1
pci_msitranslate=1
pci_permissive = 1
max_grant_frames = "128"
in the converted XML, and the PCI passthrough devices are not there...
Does libvirt not support such a config?
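For reference, the conversion path I mean is roughly this (file names are
placeholders; this assumes libvirt's xen-xl converter), and in the converted
XML I would expect each passed-through PCI device to show up as a <hostdev>
element:
# convert the xl config and define the domain
virsh domxml-from-native xen-xl /etc/xen/marax.cfg > marax.xml
virsh define marax.xml
# expected form of one passthrough device in the XML (sketch)
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>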
--
------
Greetz
4 years, 3 months
guest-fsfreeze-freeze freezes all mounted block devices
by Marc Roos
I wondered if anyone here can confirm that
virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze"}'
freezes the filesystems on all mounted block devices. So if I use 4 block
devices, are they all frozen for snapshotting, or just the root fs?
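For comparison, a sketch of freezing only selected mountpoints via the list
variant of the command (assumes a guest agent recent enough to support
guest-fsfreeze-freeze-list; the paths are examples):
virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze-list","arguments":{"mountpoints":["/","/data"]}}'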
4 years, 3 months
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in
my guest. I have read the document at https://libvirt.org/formatdomain.html
host
  The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on
  and off can be used to turn off host offloading options. By default, the
  supported offloads are enabled by QEMU. Since 1.2.9 (QEMU only)
  The mrg_rxbuf attribute can be used to control mergeable rx buffers on the
  host side. Possible values are on (default) and off. Since 1.2.13 (QEMU only)
guest
  The csum, tso4, tso6, ecn and ufo attributes with possible values on and
  off can be used to turn off guest offloading options. By default, the
  supported offloads are enabled by QEMU. Since 1.2.9 (QEMU only)
Then I disabled UFO on my vNIC in the guest with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However,
can I disable UFO without touching the host OS, or does it always have to be
disabled on both host and guest like that?
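For what it's worth, a sketch of checking and toggling the offload from
inside the guest only (the interface name is an example; whether the virtio
NIC exposes this knob depends on the driver/kernel):
# inside the guest
ethtool -k eth0 | grep udp-fragmentation-offload
ethtool -K eth0 ufo off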
Thanks,
Brs,
Natsu
4 years, 3 months
Cannot pass secret id for backing file after taking external snapshot on encrypted qcow2 file
by yaohua.wu@zstack.io
Hi,
I used 'virsh snapshot-create' to create an encrypted external snapshot. When I try to run 'qemu-img check' on the top file, I found no way to pass the backing file's secret id.
1. Version
centos-release-8.2-2.2004.0.1.el8.x86_64
libvirt.x86_64 6.0.0-17.el8
qemu-kvm.x86_64 15:4.2.0-19.el8
2. Reproduce Steps
1)Create an encrypted qcow2
qemu-img create --object secret,id=sec0,data=123456 -f qcow2 -o encrypt.format=luks,encrypt.key-secret=sec0 first.qcow2 1G
2)Create external snapshot with 'encrypted' xml
# cat snap.xml
<domainsnapshot>
  <disks>
    <disk name='hdc' snapshot='no'/>
    <disk name='vdb' snapshot='external'>
      <source file='/root/first-snapshot.qcow2'>
        <encryption format='luks'>
          <secret type='passphrase' uuid='f52a81b2-424e-490c-823d-6bd4235bc572'/>
        </encryption>
      </source>
    </disk>
  </disks>
</domainsnapshot>
# virsh dumpxml test-vm | awk '/<disk/,/<\/disk/'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/root/first-snapshot.qcow2' index='5'/>
  <backingStore type='file' index='2'>
    <format type='qcow2'/>
    <source file='/root/first.qcow2'>
      <encryption format='luks'>
        <secret type='passphrase' uuid='f981dd17-143f-45bc-88e6-222222222222'/>
      </encryption>
    </source>
    <backingStore/>
  </backingStore>
  <target dev='vdb' bus='virtio'/>
  <encryption format='luks'>
    <secret type='passphrase' uuid='f52a81b2-424e-490c-823d-6bd4235bc572'/>
  </encryption>
  <alias name='virtio-disk1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</disk>
3) Try to run qemu-img check on the top qcow2 file
Note: the secret id of the backing file is not recorded, so when I use qemu-img check etc., how can I pass the backing file's secret to qemu?
# qemu-img info -U first-snapshot.qcow2
image: first-snapshot.qcow2
file format: qcow2
virtual size: 1 GiB (1073741824 bytes)
disk size: 544 KiB
encrypted: yes
cluster_size: 65536
backing file: /root/first.qcow2 ### backing file: json:{"encrypt.format": "luks", "encrypt.key-secret": "secrete-id"}
backing file format: luks
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
encrypt:
ivgen alg: plain64
hash alg: sha256
cipher alg: aes-256
uuid: e4158089-26e4-433f-990e-1d1d0723feee
format: luks
cipher mode: xts
slots:
[0]:
active: true
iters: 1257888
key offset: 4096
stripes: 4000
[1]:
active: false
key offset: 262144
[2]:
active: false
key offset: 520192
[3]:
active: false
key offset: 778240
[4]:
active: false
key offset: 1036288
[5]:
active: false
key offset: 1294336
[6]:
active: false
key offset: 1552384
[7]:
active: false
key offset: 1810432
payload offset: 2068480
master key iters: 300073
corrupt: false
# qemu-img check -U --object secret,id=sec_1,file=/etc/libvirt/secrets/f52a81b2-424e-490c-823d-6bd4235bc572.base64,format=base64 --image-opts encrypt.format=luks,encrypt.key-secret=sec_1,file.filename=first-snapshot.qcow2 --object secret,id=sec_2,file=/etc/libvirt/secrets/f981dd17-143f-45bc-88e6-222222222222.base64,format=base64
qemu-img: Could not open 'encrypt.format=luks,encrypt.key-secret=sec_1,file.filename=first-snapshot.qcow2': Could not open backing file: Parameter 'key-secret' is required for cipher
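A minimal sketch of the direction I'm asking about (untested; it spells out
the whole backing chain via a json: filename so the backing file's secret can
be named explicitly; the secret ids are made up):
# qemu-img check -U \
  --object secret,id=sec_top,file=/etc/libvirt/secrets/f52a81b2-424e-490c-823d-6bd4235bc572.base64,format=base64 \
  --object secret,id=sec_base,file=/etc/libvirt/secrets/f981dd17-143f-45bc-88e6-222222222222.base64,format=base64 \
  'json:{"driver":"qcow2",
         "encrypt.format":"luks","encrypt.key-secret":"sec_top",
         "file":{"driver":"file","filename":"first-snapshot.qcow2"},
         "backing":{"driver":"qcow2",
                    "encrypt.format":"luks","encrypt.key-secret":"sec_base",
                    "file":{"driver":"file","filename":"/root/first.qcow2"}}}'
Is something like this supposed to work, or is there a supported way to
record the backing file's secret?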
yaohua.wu(a)zstack.io
4 years, 3 months
libvirt segfaults with "internal,error: Missing monitor reply object", during block live-migration
by Alex Walender
Dear libvirt community,
Using recent Ubuntu Stein cloud packages, we are observing random
libvirtd crashes on the target host during live migration.
libvirtd segfaults in the QEMU driver. Transferring block
devices usually works without issues.
However, the subsequent memory transfer randomly causes the target libvirtd
to close its socket, resulting in a rolled-back migration
process. I can reproduce this with large VMs, which have a large memory pool.
The last error message we see in libvirt logs is:
error : qemuMonitorJSONCommandWithFd:315 : internal error: Missing
monitor reply object
With this, libvirt segfaults and restarts.
Before we encountered this issue, we used an older nova-compute package
(19.0.3); I'm not sure whether that made a difference in how the libvirt
API is used.
After upgrade, we also see a lot of recurring errors during migration:
warning : qemuDomainObjBeginJobInternal:7044 : Cannot start job (query,
none, none) for domain instance-00008f56; current job is (none, none,
migration in) owned by (0 <null>, 0 <null>, 0
remoteDispatchDomainMigratePrepare3Params (flags=0x809b)) for (0s, 0s,
14834s)
error : qemuDomainObjBeginJobInternal:7066 : Timed out during operation:
cannot acquire state change lock (held by
monitor=remoteDispatchDomainMigratePrepare3Params)
They don't abort the running migration process, but they spam the systemd
journal every minute.
Source and destination run the same packages:
Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-99-generic x86_64)
OpenStack Stein (Ubuntu Cloud Archive)
Libvirt+QEMU_x86
keystone-common 2:15.0.1-0ubuntu1~cloud0
libvirt-daemon 5.0.0-1ubuntu2.6~cloud0
qemu-system-x86 1:3.1+dfsg-2ubuntu3.7~cloud0
neutron-linuxbridge-agent 2:14.2.0-0ubuntu1~cloud0
neutron-plugin-ml2 2:14.2.0-0ubuntu1~cloud0
nova-compute 2:19.2.0-0ubuntu1~cloud0
nova-compute-libvirt 2:19.2.0-0ubuntu1~cloud0
I have attached source/destination debug logs from libvirtd and
nova-compute here:
https://denzelx.ddns.net/index.php/s/KPJ7vv4aTcb69XD
Any help would be nice!
Best Regards
--
M.Sc Alex Walender
de.NBI Cloud Bielefeld Administrator
Center for Biotechnology (CeBiTec)
University of Bielefeld
33594 Bielefeld
Germany
room: M3-118
phone: +49 (521) 106 2907
4 years, 3 months