[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
when I detach an interface from a VM during boot (before the guest has finished booting), the command reports success but the interface is never actually removed. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this also acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has booted (expanding the sleep to 10 seconds), it succeeds and the interface is really removed:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
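If the underlying issue is that hot-unplug needs the guest OS to be up before it can acknowledge the unplug request, then a fixed sleep will always be racy. A workaround sketch I have been considering (just an illustration, not a vetted fix) is to poll until the detach actually lands:

```
# Keep requesting the detach until the interface really disappears from
# the live XML, instead of guessing a sleep value (repeated requests may
# print errors while an earlier one is still pending):
virsh start rhel7.2
while virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0'; do
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0
    sleep 5
done
```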
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
frequent network collapse possibly due to bridging
by Hakan E. Duran
Hi,
I would like some help troubleshooting a problem I have been having lately with my VM host, which runs 5 VMs, one of which provides pi-hole and unbound services. In the last few weeks it has been a relatively common occurrence to find, when I get back home from work, that the host machine has lost its network. Restarting the VM(s) does not fix the problem; the host needs to be rebooted. Until then there is loss of both name resolution and internet connectivity; I cannot even ping IPs such as 8.8.8.8. Since I use the pi-hole VM as the DNS server for my LAN, my whole LAN is cut off from the internet until the host machine is rebooted. The host machine has a somewhat complicated network setup: two gigabit connections are bonded and bridged to the VMs. However, this setup has served me well for several years, whereas the problem appeared only a few weeks ago. It doesn't happen every day, but often enough to be annoying and disruptive for my family.
My question is: how can I troubleshoot this problem and figure out whether it is truly due to the network bridging somehow collapsing? I tried to find some log files, but all I could find were the /var/log/libvirt/qemu/$VM files, and the log file for the pi-hole VM reported the lines below. However, I am not sure whether they indicate a real crash or are just due to shutting down and restarting the host (please excuse the word-wrapping):
char device redirected to /dev/pts/2 (label charserial0)
qxl_send_events: spice-server bug: guest stopped, ignoring
2022-01-20T23:41:17.012445Z qemu-system-x86_64: terminating on signal 15 from pid 1 (/sbin/init)
2022-01-20 23:41:17.716+0000: shutting down, reason=crashed
2022-01-20 23:42:46.059+0000: starting up libvirt version: 7.10.0, qemu version: 6.2.0, kernel: 5.10.89-1-MANJARO, hostname: -redacted-
Please excuse my ignorance, but is there a way to restart the networking without rebooting the host machine? Even that would not fully solve my problem, since I won't be able to reach the host remotely while its networking is down. The real solution would be preventing these network crashes, and the first step toward that, in my opinion, is effective troubleshooting. Any input/guidance will be greatly appreciated.
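In the meantime, here is the kind of snapshot I plan to capture from the local console the next time it happens, in case it helps (bond0 and br0 are just the names I am assuming for my bond and bridge):

```
# Run from the local console while the outage is ongoing:
ip -s link show                    # carrier flaps, error/drop counters
bridge link show                   # which ports are still attached to the bridge
bridge fdb show br br0             # forwarding entries (bridge name assumed)
journalctl -k -b | grep -iE 'bond0|br0|link'   # kernel link-state events
```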
I can provide more info about my host/VM(s) if the above is not adequate.
Thanks,
Hakan Duran
Backend memory object creation - query
by M, Shivakumar
Hello,
For our use case with libvirt, we want to create a memory backend object. The expected QEMU arguments would be:
-object memory-backend-memfd,id=mem1,size=4096M
Could you please help us specify this argument in the libvirt XML?
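Our best guess so far, based on the <memoryBacking> element, is something like the sketch below, but we are not sure it is the right mapping for memory-backend-memfd:

```xml
<domain type='kvm'>
  <memory unit='MiB'>4096</memory>
  <!-- request memfd-backed guest RAM, shared so vhost-user/virtiofs can map it -->
  <memoryBacking>
    <source type='memfd'/>
    <access mode='shared'/>
  </memoryBacking>
  ...
</domain>
```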
Thanks,
Shiv
cpuset.mems - Device or resource busy
by lejeczek
Hi guys
On Centos 9 I get these in journal:
...
libvirt version: 8.0.0, package: 1.el9 (builder@centos.org, 2022-01-14-15:00:06, )
hostname: dzien.mine.private
Unable to write to
'/sys/fs/cgroup/machine.slice/machine-qemu\x2d4\x2dubusrv2.scope/libvirt/cpuset.mems':
Device or resource busy
...
each time I do:
-> virt-admin daemon-log-outputs
There are more VMs running on the host, but it is only these Ubuntu VMs that libvirt is not happy about.
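For reference, this is how I have been comparing what libvirt wants with what the kernel has (the domain name ubusrv2 is inferred from the scope name):

```
# Per-domain NUMA tuning as libvirt sees it:
virsh numatune ubusrv2
# What is currently in the cgroup it fails to write to:
cat '/sys/fs/cgroup/machine.slice/machine-qemu\x2d4\x2dubusrv2.scope/libvirt/cpuset.mems'
```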
Any and all suggestions very appreciated.
many thanks, L
Best practices for retrying QEMU P2P migrations
by Raphael Norwitz
Heya,
We’ve recently hit a number of failures in QEMU P2P live migrations which appear to be caused by transient network disconnects at different points in the migration process. We would like to implement smarter retry logic in our control plane to ensure such issues don’t stall critical workflows. On the other hand, we cannot blindly retry every failed migration, because doing so greatly lengthens the time to fail high-level automation when there is a real problem.
Are there currently any generally understood best practices for retrying migrations from a control-plane perspective? Ideally we would decide whether or not to retry based on error codes, but the QEMU P2P migration path in particular returns many generic codes. For example, see [1], where we attempted to improve an error code for a likely retryable set of failure cases.
[1] https://listman.redhat.com/archives/libvir-list/2022-January/msg00217.html
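To make the question concrete: what we have today is roughly the wrapper below, where the stderr patterns are illustrative placeholders; matching on strings like this is exactly the fragility we would like to replace with error codes:

```
# Illustrative retry wrapper around a P2P live migration ($DOM and $DEST
# are placeholders); string-matching stderr is the part we want to avoid.
for attempt in 1 2 3; do
    if err=$(virsh migrate --p2p --live "$DOM" "qemu+ssh://$DEST/system" 2>&1); then
        break                                 # migration succeeded
    fi
    case $err in
        *"unable to connect"*|*"reset by peer"*)
            sleep $((attempt * 10)) ;;        # looks transient: back off and retry
        *)
            echo "non-retryable: $err" >&2
            break ;;
    esac
done
```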
Thanks,
Raphael
What does the positional parameters of "virsh backup-begin" actually do?
by Ahmad Ismail
Normally, when I back up a KVM machine, I shut it down and then run:
virsh list --all
virsh shutdown Ubuntu18
virsh dumpxml Ubuntu18 > /MyBackup/Ubuntu18.xml
cp /var/lib/libvirt/images/Ubuntu18.qcow2 /MyBackup/Ubuntu18.qcow2
However, I found a new sub-command. The help text says:
% virsh backup-begin --help
  NAME
    backup-begin - Start a disk backup of a live domain

  SYNOPSIS
    backup-begin <domain> [--backupxml <string>] [--checkpointxml <string>] [--reuse-external]

  DESCRIPTION
    Use XML to start a full or incremental disk backup of a live domain,
    optionally creating a checkpoint

  OPTIONS
    [--domain] <string>  domain name, id or uuid
    --backupxml <string>  domain backup XML
    --checkpointxml <string>  domain checkpoint XML
    --reuse-external  reuse files provided by caller
The problem with this help text is that it is not clear enough.
I understand that I should use virsh backup-begin vm1 to back up a live KVM machine. However, this command only creates .qcow2 files. What about the .xml file?
What do --backupxml, --checkpointxml, and --reuse-external actually do? When should I use them?
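My reading of the documentation is that --backupxml takes a <domainbackup> document controlling where the backup image goes; a sketch of what I think it would look like is below (the disk name 'vda' and the path are guesses for my setup), while the domain configuration itself would still have to be saved separately with virsh dumpxml:

```xml
<!-- backup.xml, passed as: virsh backup-begin Ubuntu18 --backupxml backup.xml -->
<domainbackup>
  <disks>
    <disk name='vda' type='file'>
      <target file='/MyBackup/Ubuntu18-backup.qcow2'/>
    </disk>
  </disks>
</domainbackup>
```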
interface hotplug q35 machine type
by Miguel Duarte de Mora Barroso
Hello,
I see in libvirt's documentation [0] that the q35 machine type supports at most one hotplugged PCIe device by default, and that users must prepare in advance for however many interfaces they expect to hotplug:
"""
Slots on the pcie-root controller do not support hotplug, so the device
will be hotplugged into the pcie-root-port controller. If you plan to
hotplug more than a single PCI Express device, you should add a suitable
number of pcie-root-port controllers when defining the guest: for example,
add
```xml
<controller type='pci' model='pcie-root'/>
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>
```
if you expect to hotplug up to three PCI Express devices, either emulated
or assigned from the host.
"""
Is there any alternative to this?
For our use case, I'm considering mimicking OpenStack's implementation [1] and exposing a knob that indicates the number of PCIe root ports to be used when defining the domain.
I wonder how open the community would be to a machine-type alias that provides a "better" default, in the sense of having more root-port controllers.
[0] - https://libvirt.org/pci-hotplug.html#x86_64-q35
[1] -
https://blueprints.launchpad.net/nova/+spec/configure-amount-of-pcie-ports
DMARC & similar @admins
by lejeczek
Hi guys
Is my memory fooling me? I remember this list had DMARC and other bits sorted out, so that users like myself on Yahoo did not lose their own emails.
regards, L
'migrate' says it worked but in reality it did not - centOS 9
by lejeczek
Hi guys.
I have a peculiar situation between two boxes.
C->A:
-> $ virsh migrate --unsafe --live c8kubermaster1 qemu+ssh://10.1.1.99/system
-> $ echo $?
0
but the live migration does _not_ actually happen: instead the VM was stopped and started, _not_ migrated LIVE.
A->C:
-> $ virsh migrate --unsafe --live c8kubermaster1 qemu+ssh://10.1.1.100/system
-> $ echo $?
0
and here the VM indeed migrates live.
Boxes A & C have a virtually identical OS stack; the HW difference is:
C = Ryzen 5 5600G
A = Ryzen 5 3600
Here is the domain XML snippet where I think it matters:
...
  </metadata>
  <memory unit='GiB'>4</memory>
  <currentMemory unit='GiB'>4</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>EPYC-IBPB</model>
    <feature policy='require' name='ibpb'/>
    <feature policy='require' name='ssbd'/>
    <feature policy='require' name='virt-ssbd'/>
    <feature policy='disable' name='monitor'/>
    <feature policy='require' name='x2apic'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='disable' name='svm'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
...
Initially I submitted a BZ against 'PCS', but having continued to fiddle with it, I now find that 'libvirt' might (also?) be the culprit here.
There is not much in the logs; certainly nothing (at the default verbosity) from virtqemud.service.
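One thing I have not tried yet is raising the daemon's log verbosity around migration, along the lines of:

```
# Raise verbosity for the qemu driver, then send daemon logs to a file
# (the categories and path are just my first guess at what would be useful):
virt-admin daemon-log-filters "1:qemu 1:libvirt"
virt-admin daemon-log-outputs "1:file:/var/log/libvirt/virtqemud-debug.log"
```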
Is it that the VM gets migrated but is then restarted on the 'migrate_to' host? If so, why?
How do I start troubleshooting such a 'monstrosity'? All suggestions appreciated.
many thanks, L.
unsubscribe
by FRANK, Michael
unsubscribe