[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has finished booting), the command reports success but nothing actually changes. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting (increasing the sleep to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
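For what it's worth, a rough way to verify the result instead of just trusting the "Interface detached successfully" message would be a small polling loop (only a sketch, reusing the guest name and MAC from the example above):

    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0
    # poll the live XML for up to ~60s to see whether the interface is really gone
    for i in $(seq 1 30); do
        virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0' || { echo "interface is gone"; break; }
        sleep 2
    done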
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
2 years, 2 months
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
*host*
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off host offloading options. By default, the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)* The mrg_rxbuf attribute can be used to control mergeable rx buffers on the host side. Possible values are on (default) and off. *Since 1.2.13 (QEMU only)*
*guest*
The csum, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off guest offloading options. By default, the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
Then I disabled UFO on the guest's vNIC with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However, can I disable UFO without touching the host side, or does it always have to be disabled on both host and guest like this?
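In other words, would a guest-only configuration like this rough sketch be enough on its own (I'm not sure whether the host element can simply be left out)?

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost'>
    <guest ufo='off'/>
  </driver>
</interface>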
Thanks,
Brs,
Natsu
4 years, 3 months
[libvirt-users] Libvirt access control drivers
by Anastasiya Ruzhanskaya
Hello!
According to the documentation, the access control drivers are not really in good condition. There is polkit, but it can distinguish users only by PID. However, I have come across some articles about more fine-grained control and about SELinux drivers for libvirt. So, what is the status now? If I want access control based on login, should I implement something myself? Are there instructions for writing such drivers, or does something already exist?
6 years
[libvirt-users] live migration via unix socket
by David Vossel
Hey,
Over in KubeVirt we're investigating a use case where we'd like to perform
a live migration within a network namespace that does not provide libvirtd
with network access. In this scenario we would like to perform a live
migration by proxying the migration through a unix socket to a process in
another network namespace that does have network access. That external
process would live on every node in the cluster and know how to correctly
route connections between libvirtds.
Here is a virsh example of an attempted migration via a unix socket:
virsh migrate --copy-storage-all --p2p --live --xml domain.xml my-vm \
    qemu+unix:///system?socket=destination-host-proxy-sock
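For context, the external proxy process could be something as simple as this sketch; socat, the hostname and the plain-TCP libvirtd port 16509 are my assumptions. It listens on the unix socket that virsh connects to and forwards the traffic to the destination libvirtd over the network:

    # assumes libvirtd on the destination is reachable over plain TCP
    socat UNIX-LISTEN:destination-host-proxy-sock,fork \
          TCP:destination-host:16509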
In this example, the source libvirtd is able to establish a connection to the destination libvirtd via the unix socket proxy. However, the migration-uri appears to require either a tcp or an rdma network connection. If I force the migration-uri to be a unix socket, I receive an error [1] indicating that qemu+unix is not a valid transport.
Technically, with qemu+kvm I believe what we're attempting should be possible (even though it is inefficient). Please correct me if I'm wrong.
Is there a way to achieve this unix-socket migration with libvirt? Also, is there a reason why the migration URI is limited to tcp/rdma?
Thanks!
- David
[1]
https://github.com/libvirt/libvirt/blob/master/src/qemu/qemu_migration.c#...
6 years, 1 month
[libvirt-users] how "safe" is blockcommit ?
by Lentes, Bernd
Hi,
currently I'm following https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit. I'm playing around with it and it seems quite nice.
What I want is a daily consistent backup of the guest's image file.
I have the following procedure in mind:
- Shut down the guest (I can live with a downtime of a few minutes; it will happen at night). I think that's the only way to get a really clean snapshot.
- Create a snapshot with snapshot-create-as: snapshot-create-as guest testsn --disk-only
- Start the guest again. Changes will now go into the overlay, e.g. inserts into a database.
- rsync the base file to a CIFS server. With rsync, not the complete (likely big) file is transferred but just the delta.
- Blockcommit the overlay: blockcommit guest /path/to/testsn --active --wait --verbose --pivot
- Delete the snapshot: snapshot-delete guest --snapshotname testsn --metadata
- Remove the overlay (see the rough command sketch below).
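Put as commands, the whole thing would look roughly like this (only a sketch; the disk target vda, the image paths and the CIFS mount point are assumptions on my side):

    virsh shutdown guest                    # clean shutdown for a clean base image
    # ... wait until the guest is really shut off ...
    virsh snapshot-create-as guest testsn --disk-only
    virsh start guest                       # new writes now go into the testsn overlay
    rsync -av /path/to/base.img /mnt/cifs-backup/   # only the delta gets transferred
    virsh blockcommit guest vda --active --wait --verbose --pivot
    virsh snapshot-delete guest --snapshotname testsn --metadata
    rm /path/to/base.testsn                 # overlay file, no longer referenced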
Is that OK? How "safe" is blockcommit on a running guest? During the rsync, while the guest is running, some inserts may be done in a database.
Is it safe to copy the new sectors (I assume that's what blockcommit does) underneath a running database, or is it only safe to do a blockcommit on a stopped guest?
Thanks for any answer.
Bernd
P.S. Is the same procedure possible when the guest disk(s) reside directly on a plain logical volume, without a file system in between?
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 89 3187 1241
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg
those who make mistakes can learn something
those who do nothing cannot learn anything either
6 years, 1 month
[libvirt-users] live migration and config
by Dmitry Melekhov
Hello!
After some mistakes yesterday, we (my colleague and I) think it would be wise for libvirt to check that the config file exists on the remote side before migrating, and throw an error if it does not; otherwise the migration fails and the VM filesystem can be damaged, because it is a bit like pulling the power plug...
We got bitten by this twice yesterday :-(
Could you tell me whether such an option already exists, or whether there are any plans to implement this?
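For illustration, the kind of check I have in mind, done by hand before a migration, would look roughly like this (the hostname and domain name are just placeholders):

    # refuse to migrate unless the domain is already defined on the destination
    if virsh -c qemu+ssh://desthost/system dominfo guest >/dev/null 2>&1; then
        virsh migrate --live guest qemu+ssh://desthost/system
    else
        echo "guest is not defined on desthost, aborting" >&2
    fi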
Thank you!
6 years, 1 month
[libvirt-users] This QEMU doesn't support the LSI 53C895A SCSI controller
by Gionatan Danti
Hi all,
trying to edit a domain XML to enable an LSI SCSI controller, I get the following error:
error: unsupported configuration: This QEMU doesn't support the LSI
53C895A SCSI controller
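For reference, the kind of controller element involved is roughly this (a sketch; to my understanding the lsilogic model is the one that maps to the LSI 53C895A in QEMU):

<controller type='scsi' index='0' model='lsilogic'/>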
It is my understanding that the error is raised because Red Hat disables this controller in its own qemu-kvm builds. This seems an unfortunate decision, as it makes it harder to migrate from VMware (which uses LSI SCSI and SAS adapters) to RH+KVM.
Can anyone elaborate on this decision? Is it possible to enable LSI controller support in RHEL 7.x? Should I open a BZ against it?
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
6 years, 1 month
[libvirt-users] libvirt reported capabilities don't match /proc/cpuinfo while the model does match
by Braiam Peguero
Hi,
According to virsh capabilities, I only have the following CPU features:
<cpu>
<arch>x86_64</arch>
<model>IvyBridge-IBRS</model>
<vendor>Intel</vendor>
<microcode version='32'/>
<topology sockets='1' cores='4' threads='1'/>
<feature name='ds'/>
<feature name='acpi'/>
<feature name='ss'/>
<feature name='ht'/>
<feature name='tm'/>
<feature name='pbe'/>
<feature name='dtes64'/>
<feature name='monitor'/>
<feature name='ds_cpl'/>
<feature name='vmx'/>
<feature name='smx'/>
<feature name='est'/>
<feature name='tm2'/>
<feature name='xtpr'/>
<feature name='pdcm'/>
<feature name='pcid'/>
<feature name='osxsave'/>
<feature name='arat'/>
<feature name='ssbd'/>
<feature name='xsaveopt'/>
<feature name='invtsc'/>
<pages unit='KiB' size='4'/>
<pages unit='KiB' size='2048'/>
</cpu>
Meanwhile, cpuinfo reports the following:
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe
syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor
ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic
popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm
cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority
ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
This results in my CPU being detected as an AMD chip if I copy the host CPU configuration, and the guest becomes unbearably slow. The model reported for the host CPU is correct.
I'm using Debian testing/unstable.
Compiled against library: libvirt 4.7.0
Using library: libvirt 4.7.0
Using API: QEMU 4.7.0
Running hypervisor: QEMU 2.12.0
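For comparison, a rough way to dump the fully expanded host-model CPU definition (a sketch; I'm assuming virsh domcapabilities is usable on this setup):

    virsh domcapabilities | sed -n '/<mode name=.host-model./,/<\/mode>/p'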
--
Braiam
6 years, 1 month
[libvirt-users] Libvirt TLS with Short Lived Certificates
by Charles Urquiola
I want to use short-lived certificates with libvirtd to provide TLS access to the daemon. New certificates are generated on a daily basis and delivered to the host. Does libvirtd re-read its TLS certificates on a reload of the service (systemctl reload libvirtd) or on a SIGHUP, or is a full restart of the daemon required?
--charlie
6 years, 1 month