[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has finished booting), it always fails silently: virsh reports success, but the interface is still present in the domain XML. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
  virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
  virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</controller>
<interface type='network'>
  <mac address='52:54:00:98:c4:a0'/>
  <source network='default' bridge='virbr0'/>
  <target dev='vnet0'/>
  <model type='rtl8139'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
When I detach after the VM has booted (increasing the sleep to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
  virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
  virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
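For now I work around this by retrying until the interface is really gone from the live XML; a minimal sketch (assuming the same domain and MAC address as above, and that the guest eventually processes the hot-unplug):
#!/bin/sh
# Retry the detach until the interface disappears from the live XML.
MAC=52:54:00:98:c4:a0
for i in $(seq 1 30); do
    virsh detach-interface rhel7.2 network "$MAC" >/dev/null 2>&1
    if ! virsh dumpxml rhel7.2 | grep -q "$MAC"; then
        echo "interface gone after $i attempt(s)"
        exit 0
    fi
    sleep 2
done
echo "interface still present after 30 attempts" >&2
exit 1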
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
host:
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off host offloading options. By default, the supported offloads are enabled by QEMU. (Since 1.2.9, QEMU only.) The mrg_rxbuf attribute can be used to control mergeable rx buffers on the host side. Possible values are on (default) and off. (Since 1.2.13, QEMU only.)
guest:
The csum, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off guest offloading options. By default, the supported offloads are enabled by QEMU. (Since 1.2.9, QEMU only.)
Then I disabled UFO on the guest's vNIC with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
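To check that the setting actually took effect, I verify the offload flags from inside the guest with ethtool (a sketch; eth0 is assumed to be the virtio NIC):
# Inside the guest: list the offload flags of the virtio NIC (assumed eth0).
ethtool -k eth0 | grep -E 'udp-fragmentation-offload|generic-segmentation-offload'
# Expected after the change: udp-fragmentation-offload: off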
Then I rebooted my node for the change to take effect, and it works. However, can I disable UFO without touching the host side, or does it always have to be disabled on both host and guest like that?
Thanks,
Brs,
Natsu
[libvirt-users] OVS / KVM / libvirt / MTU
by Sven Vogel
Hey there,
I hope someone can shed some light on the following problem.
<interface type='bridge'>
<source bridge='cloudbr0'/>
<mtu size='2000'/>
<mac address='02:00:74:76:00:01'/>
<model type='virtio'/>
<virtualport type='openvswitch'>
</virtualport>
<vlan trunk='no'>
<tag id='2097'/>
</vlan><link state='up'/>
</interface>
We have a base bridge, for example cloudbr0. After we add an MTU to the VM interface here, it seems the base bridge gets the same MTU as the vnet adapter.
Is this normal behaviour of libvirt together with OVS?
ovs-vsctl show:
5b154321-534d-413e-9761-60476ae06640
Bridge "cloudbr0"
Port "cloudbr0"
Interface "cloudbr0"
type: internal
MTU of the bridge after setting an MTU in the XML file (before, it was 9000 here):
mtu : 2000
mtu_request : []
name : "cloudbr0"
ofport : 65534
MTU of the vnet interface:
mac_in_use : "fe:00:74:76:00:01"
mtu : 1450
mtu_request : 1450
name : "vnet2"
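In case it helps, what I am experimenting with is pinning the bridge MTU explicitly on the OVS side (a sketch; 9000 is simply our previous value):
# Pin the bridge MTU so it no longer follows the attached port MTUs.
ovs-vsctl set Interface cloudbr0 mtu_request=9000
# Verify the result:
ovs-vsctl --columns=name,mtu,mtu_request list Interface cloudbr0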
Thanks for the help…
--
Sven Vogel
Teamlead Platform
EWERK RZ GmbH
Brühl 24, D-04109 Leipzig
P +49 341 42649 - 11
F +49 341 42649 - 18
S.Vogel(a)ewerk.com
www.ewerk.com
[libvirt-users] Researching why different cache modes result in 'some' guest filesystem corruption..
by vincent@cojot.name
Hi All,
I've been chasing down an issue in recent weeks (my own lab, so no production here) and I'm reaching out in case someone has some guidance to share.
I'm running fairly large VMs (RHOSP underclouds: 8 vCPUs, 32 GB RAM, about 200 GB single disk as a growable qcow2) on some RHEL 7.6 hypervisors (kernel 3.10.0-927.2x.y, libvirt 4.5.0, qemu-kvm-1.5.3) on top of SSD/NVMe drives with various filesystems (vxfs, zfs, etc.) and using ECC RAM.
The issue can be described as follows:
- The guest VMs work fine for a while (days, weeks), but after a kernel update (z-stream) comes in, I am often greeted by the following message immediately after rebooting (or attempting to reboot into the new kernel):
  "error: not a correct xfs inode"
- Booting the previous kernel works fine, and re-generating the initramfs for the new kernel (from the n-1 kernel) does not solve anything.
- If booted from an ISO, xfs_repair does not find errors.
- On ext4, there seems to be some kind of corruption there too.
I'm building the initial guest qcow2 image for those guest VMs this way:
1) start with a rhel-guest image (currently
rhel-server-7.6-update-5-x86_64-kvm.qcow2)
2) convert to LVM by doing this:
qemu-img create -f qcow2 -o preallocation=metadata,cluster_size=1048576,lazy_refcounts=off final_guest.qcow2 512G
virt-format -a final_guest.qcow2 --partition=mbr --lvm=/dev/rootdg/lv_root --filesystem=xfs
guestfish --ro -a rhel_guest.qcow2 -m /dev/sda1 -- tar-out / - | \
guestfish --rw -a final_guest.qcow2 -m /dev/rootdg/lv_root -- tar-in - /
3) use "final_guest.qcow2" as the basis for my guests with LVM.
After chasing this issue down some more and attempting various things (building the image on Fedora 29, building a real XFS filesystem inside a VM and using the generated qcow2 as a basis instead of virt-format), I noticed that the SATA disk of each of those guests was using 'directsync' (instead of 'Hypervisor Default'). As soon as I switched to 'none', the XFS issues disappeared, and I've now applied several consecutive kernel updates without issues. Both 'directsync' and 'writethrough', while providing decent performance, exhibited the XFS 'corruption' behaviour; only 'none' seems to have solved it.
I've read the docs, but I thought it was OK to use those modes with this setup (UPS, battery-backed RAID, etc.).
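For reference, this is how I am now forcing the cache mode on the guest disks (a sketch; 'final_guest' as the domain name is an assumption, and the disk layout may differ):
# Check the current cache mode of the guest's disks:
virsh dumpxml final_guest | grep "<driver name='qemu'"
# Then 'virsh edit final_guest' and set the disk driver line to, e.g.:
#   <driver name='qemu' type='qcow2' cache='none'/>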
Does anyone have any idea what's going on or what I may be doing wrong?
Thanks for reading,
Vincent
[libvirt-users] Why librbd disallows VM live migration if the disk cache mode is not none or directsync
by Ming-Hung Tsai
I'm curious why librbd sets this limitation. The rule first appeared in librbd.git commit d57485f73ab. Theoretically, a write-through cache is also safe for VM migration, if the cache implementation guarantees that cache invalidation and disk writes are synchronous operations.
For example, I'm using Ceph RBD images as the VM storage backend. Ceph librbd supports a synchronous write-through cache, obtained by setting rbd_cache_max_dirty to zero and rbd_cache_block_writes_upfront to true, so it should be safe for VM migration. Is that true? Any suggestions would be appreciated. Thanks.
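For concreteness, the client-side settings I mean (a sketch in standard ceph.conf syntax; the path is the usual default):
# Append the write-through cache settings to the client section of ceph.conf:
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
rbd cache = true
rbd cache max dirty = 0
rbd cache block writes upfront = true
EOF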
Ming-Hung Tsai
[libvirt-users] Libvirt Virtualization Problem
by 王金磊
Dear Sir/Madam:
I used virt-manager to create a new virtual machine (its name is 'generic'), then edited generic to set its CPU mode to 'host-model', and started it. But when I dumpxml generic, its CPU mode has changed to 'custom'.
I want to know why this is.
And what is the principle behind dumpxml?
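For reference, this is how I compare the persistent and live XML (a sketch; 'generic' is my domain name):
# Persistent (inactive) configuration, as saved at define time:
virsh dumpxml --inactive generic | grep -A3 '<cpu'
# Live configuration of the running domain; this is where I see mode='custom':
virsh dumpxml generic | grep -A3 '<cpu'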
Thanks!
A user of libvirt
[libvirt-users] virsh command to list CPU, memory and storage
by Kaushal Shriyan
Hi,
Is there a way to find out the total CPUs, memory and storage using virsh
commands? For example, virsh list --all lists all VMs:
11 dockerregistry01 running
12 gitlab running
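The closest I have found so far is combining several subcommands (a sketch; the domain and disk target names are examples from my setup):
virsh nodeinfo                          # host CPUs and total memory
virsh dominfo dockerregistry01          # per-VM vCPUs and memory
virsh domblklist dockerregistry01       # per-VM disk targets and sources
virsh domblkinfo dockerregistry01 vda   # capacity/allocation of one disk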
Any help will be highly appreciated. Thanks in advance.
Best Regards,
Kaushal
[libvirt-users] <VM LIVE Migration> <Sync conntrack entries>
by bharath paulraj
Hi Team,
I am using QEMU/KVM to launch VMs and libvirt to govern them. I would like to synchronise the connection-tracking entries specific to a VM during live migration. This is required when the firewall is implemented at the host level, as with libvirt's "network filters": if a stateful firewall is enabled, then unless these connection-tracking entries are synchronised, all connections to the VM are lost and all TCP connections have to be re-established. Is there any option already available for this? I don't think the current libvirt hooks are helpful, as pausing the VM on the source hypervisor and turning it on on the destination hypervisor is done by QEMU, which does not wait for any application that needs to sync up metadata; in my case, the conntrack entries.
I also tried the existing hooks (stop, release, startcpus) and nothing worked well.
Has anybody come across a similar scenario? If yes, how did you overcome it?
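For the record, the manual transfer I have been experimenting with looks roughly like this (a sketch; GUEST_IP and DEST are placeholders, and conntrack-tools must be installed on both hypervisors):
#!/bin/sh
GUEST_IP=192.0.2.10     # placeholder: the VM's address
DEST=dest-hypervisor    # placeholder: the migration target
# Dump the guest's conntrack entries on the source hypervisor:
conntrack -L -s "$GUEST_IP" >  /tmp/ct-guest.txt
conntrack -L -d "$GUEST_IP" >> /tmp/ct-guest.txt
scp /tmp/ct-guest.txt "$DEST":/tmp/
# The entries then have to be recreated on $DEST (conntrack -I takes one
# entry at a time); conntrackd is the tool meant to automate this kind of sync.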
--
Regards,
Bharath
[libvirt-users] Define libvirt network portgroup with native untagged.
by Tanmoy Sinha
Hi,
I have created a libvirt network on top of an OVS bridge (named vlan-br) which receives all VLAN-tagged packets, i.e. it is connected to a trunk port. The definition XML is below.
What I want to achieve in the portgroup definition 'trunk-native-1221' is to allow VLAN 1221 as untagged/native but all other VLANs as tagged. The following portgroup definition works, but I don't want to enumerate all the tagged VLANs in it.
I understand that what libvirt does on the underlying OVS bridge, once a guest interface (say vnetX) is attached to the portgroup, is to set vnetX with tag=1221 and vlan_mode=native-untagged, and to set trunk=[1222,1221,1223,1224]. Now, if I go and clear the trunk setting on the OVS bridge for that interface, I am able to see both tagged and untagged (1221) packets on the guest.
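The manual change on the OVS side looks like this (a sketch; vnet2 is an example interface name):
# What libvirt sets for the attached interface:
ovs-vsctl list Port vnet2                  # shows tag, trunks, vlan_mode
# Clearing the enumerated trunk list lets the port pass all other VLANs:
ovs-vsctl clear Port vnet2 trunks
ovs-vsctl set Port vnet2 vlan_mode=native-untagged tag=1221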
This is exactly what I want to achieve in the libvirt network definition, i.e. have one untagged VLAN and allow all other VLANs without having to enumerate them in the portgroup definition, as that is hard to maintain.
<network>
  <name>kvm-core-net</name>
  <bridge name='vlan-br'/>
  <forward mode='bridge'/>
  <virtualport type='openvswitch'/>
  .......
  <portgroup name='trunk-native-1221'>
    <vlan trunk='yes'>
      <tag id='1222'/>
      <tag id='1221' nativeMode='untagged'/>
      <tag id='1223'/>
      <tag id='1224'/>
    </vlan>
  </portgroup>
  .....
</network>
Regards
Tanmoy Sinha