Host CPU usage high on guest network load (macvtap).
by André Malm
Hello,
Is it reasonable for one CPU on the host to be loaded 20-30% during a 100 Mbps
download in a guest?
There is barely any (1-2%) load in the guest. The host is running Ubuntu
18.04 on an Intel i7-7700K; virtualization etc. is enabled. Everything
except the host load on guest network activity is fine.
This is how I set up the network with virt-install:
--network
type=direct,source=enp1s0,source_mode=bridge,model=virtio,mac=00:11:22:33:44:55
In the guest (Ubuntu 18.04 cloud minimal) I run a simple wget. Setting
source_mode to vepa still works, but the CPU load is the same.
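For reference, that virt-install option should expand to a domain XML interface element roughly like this (a sketch; attribute details may vary with the libvirt version):

```xml
<interface type='direct'>
  <source dev='enp1s0' mode='bridge'/>
  <model type='virtio'/>
  <mac address='00:11:22:33:44:55'/>
</interface>
```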
4 years, 9 months
Emulation packages for Centos8
by Mauricio Tavares
So I was looking for the qemu-system-ppc package so I could emulate that
CPU on my Intel i7 CentOS 8 box. So far I have found it available for
Fedora (and Ubuntu) but not CentOS. Does anyone have a suggestion for
where (which repo) I can get said package?
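If no prebuilt package turns up in a repo, one fallback is to build just the PPC system emulators from the upstream QEMU source. A rough sketch (the dependency package names are assumptions for a minimal CentOS 8 build):

```shell
# Build only the PPC system emulators from an upstream QEMU release.
sudo dnf groupinstall -y "Development Tools"
sudo dnf install -y glib2-devel pixman-devel zlib-devel  # assumed minimal deps
curl -LO https://download.qemu.org/qemu-4.2.0.tar.xz
tar xf qemu-4.2.0.tar.xz && cd qemu-4.2.0
./configure --target-list=ppc-softmmu,ppc64-softmmu
make -j"$(nproc)" && sudo make install   # installs qemu-system-ppc{,64}
```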
4 years, 9 months
NFS and unsafe migration
by Gionatan Danti
Hi all,
I have a question about NFS datastore and unsafe migration.
When migrating a virtual machine that has a virtio disk in writeback cache
mode between two hosts sharing a single NFS datastore, I get the following error:
"Unsafe migration: Migration may lead to data corruption if disks use
cache != none or cache != directsync"
I understand why libvirt warns about unsafe migration in cases where no
coherency is enforced by the underlying system; however, is that really the
case for NFS?
From what I know (and from the man page), NFS by default has
close-to-open consistency, which seems quite right for migrating a
virtual machine between hosts (as only one host at a time reads/writes/locks
the virtual disk files).
I know that I can simply use cache=none to make the problem go away;
however, this significantly impairs performance on NFS.
Am I missing something?
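For what it's worth, if you decide the NFS consistency semantics are good enough for your setup, virsh lets you override the check explicitly and accept the risk yourself (a sketch; the domain and host names are placeholders):

```shell
# Skip libvirt's cache-mode safety check for this one migration.
virsh migrate --live --unsafe --persistent myvm qemu+ssh://desthost/system
```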
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
4 years, 10 months
KVM not available on system bus
by Kasper Laudrup
Hi libvirt-users,
Hope this is the right place to ask, otherwise please point me in the
right direction.
I have a libvirt virtual machine running on the session bus that I would
like to access through SSH. I have previously done so using X11
forwarding and while it works, it is very sluggish with the connection I
have.
I recently learned that you can access the virtual machine with
virt-viewer through SSH, which should perform much better. Unfortunately,
my virtual machine is currently running on the session bus, and that
doesn't seem to be supported (please do correct me if I'm wrong here).
It doesn't really matter too much to me whether the VM is running on the
system bus or the session bus; I just prefer running things as a non-privileged
user for all the obvious reasons, so that's where I created the VM initially.
I tried to follow this guide to move my VM to the system bus.
Running `sudo virsh define vm.xml` fails with:
error: invalid argument: could not find capabilities for arch=x86_64
domaintype=kvm
Digging a bit further into it, I figured out that the cause of the error
message is that, for some reason, I do not have KVM acceleration support
when running VMs on the system bus (as root). Running my VM on the
session bus as a normal user (with the correct group membership) works fine.
Trying to launch virt-manager as root confirms that: creating a new
VM warns me that I do not have KVM support.
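A couple of checks that may narrow this down, assuming the usual tooling is present (virt-host-validate ships with libvirt on Buster, as far as I know):

```shell
# Does /dev/kvm exist, and which users may open it?
ls -l /dev/kvm
# Sanity-check the host from root's point of view: /dev/kvm access,
# hardware virtualization, cgroup controllers, etc.
sudo virt-host-validate qemu
# Ask the system-bus daemon directly which domain types it supports.
sudo virsh -c qemu:///system capabilities | grep "domain type"
```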
I'm fairly lost as to what to do from here. I must admit I remember
struggling a bit to get the virtual machine to run with KVM support on
the session bus in the first place, but I have completely forgotten what
the problem and its resolution were.
I'm using Debian stable (Buster) with standard package versions:
QEMU 3.1.0
libvirt 5.0.0
Any kind of help or input would be greatly appreciated.
Thanks a lot and kind regards,
Kasper Laudrup
4 years, 10 months
VM crash, partitiontable deleted CPU 1/KVM[2867866]: segfault at 2 ip 0000560cffe03757 sp 00007f0e8babaeb0 error 4 in qemu-system-x86_64
by Oliver Dzombic
Hi,
Got from a running machine suddenly:
[859069.022739] CPU 1/KVM[2867866]: segfault at 2 ip 0000560cffe03757 sp
00007f0e8babaeb0 error 4 in qemu-system-x86_64[560cffbd4000+45d000]
[859069.022749] Code: 00 00 00 41 57 41 56 41 55 41 54 49 89 cc b9 a2 03
00 00 55 48 89 f5 48 8d 35 1c 35 29 00 53 48 89 fb 48 83 c7 38 48 83 ec
38 <44> 0f b7 52 02 4c 8b 7a 08 4c 89 44 24 08 4c 8d 05 b4 08 29 00 45
[859087.661975] device k1806-YcXLF left promiscuous mode
# qemu-system-x86_64 --version
QEMU emulator version 4.2.0 (qemu-4.2.0-2.fc31)
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
# virsh -V
Virsh command line tool of libvirt 6.0.0
See web site at https://libvirt.org/
Compiled with support for:
Hypervisors: QEMU/KVM LXC LibXL OpenVZ VMware VirtualBox ESX Hyper-V Test
Networking: Remote Network Bridging Interface netcf Nwfilter VirtualPort
Storage: Dir Disk Filesystem SCSI Multipath iSCSI LVM RBD Sheepdog
Gluster ZFS
Miscellaneous: Daemon Nodedev SELinux Secrets Debug DTrace Readline
After that, the partition table of the ZFS volume was cleared.
I also opened a bug report with all the information I could gather so far:
https://bugzilla.redhat.com/show_bug.cgi?id=1795082
I am thankful for any idea / suggestion.
Thank you
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
Layer7 Networks
mailto:info@layer7.net
Anschrift:
Layer7 Networks GmbH
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 96293 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic
UST ID: DE259845632
4 years, 10 months
Prevent the firewall from being compromised through libvirtd
by Thomas Luening
Hello all,
The libvirt daemon compromises the packet-filtering rules at daemon startup,
before any VM is started. To prevent this, I first created a hook script that
deletes the existing rules, but apparently these rules are set after the hook
runs. Removing the defined networks was no solution either. Worst of all, a
service restart of the daemon may even completely neutralize the firewall.
Is there a solution to prevent this undesirable behavior? No matter how or by
whom a VM is started, or with what network configuration, the daemon must not
compromise the firewall by altering it. The firewall is untouchable and taboo.
What can I do to disable that? Thank you!
Best Regards
Tom
$ dpkg -l libvirt-daemon
||/ Name Version Architektur Beschreibung
+++-=========================-============-============-==================================
ii libvirt-daemon 5.0.0-4 amd64 Virtualization daemon
$ lsb_release -a
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
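If the rules you are seeing come from libvirt's default NAT network, one common mitigation (a sketch, not a guarantee that libvirtd will never touch the firewall) is to remove that network entirely, so the daemon has nothing to insert rules for at startup:

```shell
# Keep libvirtd from bringing up the default NAT network, which is
# what triggers most of its iptables rule insertion at startup.
virsh net-autostart default --disable
virsh net-destroy default     # stop it if currently running
virsh net-undefine default    # optional: remove the definition entirely
```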
4 years, 10 months
Repetitive RBD disk definitions
by wferi@niif.hu
Hi,
I'm using libvirt 5.0.0 on a Ceph cluster. The VM disks are all from
the same Ceph pool, so all my <disk> elements are basically the same:
<disk type="network" device="disk">
<driver name="qemu" type="raw" cache="none"/>
<source protocol="rbd" name="vmdisks/VOLNAME">
<host name="x.y.z.1" port="6789"/>
<host name="x.y.z.2" port="6789"/>
<host name="x.y.z.3" port="6789"/>
</source>
<auth username="libvirt">
<secret type="ceph" usage="libvirt_key"/>
</auth>
<target dev="vdX"/>
</disk>
... apart from the VOLNAME and vdX parts. Is there a way to factor out
(at least part of) the common stuff? This repetition isn't totally
unmanageable, just not pretty. Creating a libvirt storage pool seemed
like a good solution at first, but I couldn't find a way to
"instantiate" volumes in the domain XML (and defining them cluster-wide
would mean duplicating what Ceph does, which I'd rather avoid).
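Lacking a libvirt-level mechanism, one workable approach is to template the <disk> element in a small script and splice it into each domain XML. A sketch (pool name, monitor addresses and secret usage copied from the example above; the helper name is mine):

```python
import xml.etree.ElementTree as ET

# Monitor addresses from the example <disk> element above.
MONITORS = ["x.y.z.1", "x.y.z.2", "x.y.z.3"]

def rbd_disk(volname: str, dev: str) -> ET.Element:
    """Build the repetitive RBD <disk> element for one volume."""
    disk = ET.Element("disk", type="network", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="raw", cache="none")
    source = ET.SubElement(disk, "source", protocol="rbd",
                           name=f"vmdisks/{volname}")
    for mon in MONITORS:
        ET.SubElement(source, "host", name=mon, port="6789")
    auth = ET.SubElement(disk, "auth", username="libvirt")
    ET.SubElement(auth, "secret", type="ceph", usage="libvirt_key")
    ET.SubElement(disk, "target", dev=dev)
    return disk

# Only VOLNAME and the target dev vary per disk.
xml = ET.tostring(rbd_disk("myvol", "vdb"), encoding="unicode")
print(xml)
```

The generated string can then be inserted with virsh attach-device or pasted into the domain definition.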
--
I'd be grateful for any ideas,
Feri.
4 years, 10 months
virsh vol-download uses a lot of memory
by R. Diez
Hi all:
I am using the libvirt version that comes with Ubuntu 18.04.3 LTS.
I have written a script that backs up my virtual machines every night. I want to limit the amount of memory that this backup operation
consumes, mainly to prevent page cache thrashing. I have described the Linux page cache thrashing issue in detail here:
http://rdiez.shoutwiki.com/wiki/Today%27s_Operating_Systems_are_still_inc...
The VM's virtual disk weighs 140 GB at the moment. I thought 500 MiB of RAM should be more than enough to back it up, so I added the following
option to the systemd service file associated with the systemd timer I am using:
MemoryLimit=500M
However, the OOM killer is killing "virsh vol-download":
Jan 21 23:40:00 GS-CEL-L kernel: [55535.913525] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Jan 21 23:40:00 GS-CEL-L kernel: [55535.913527] [ 13232] 1000 13232 5030 786 77824 103 0 BackupWindows10
Jan 21 23:40:00 GS-CEL-L kernel: [55535.913528] [ 13267] 1000 13267 5063 567 73728 132 0 BackupWindows10
Jan 21 23:40:00 GS-CEL-L kernel: [55535.913529] [ 13421] 1000 13421 5063 458 73728 132 0 BackupWindows10
Jan 21 23:40:00 GS-CEL-L kernel: [55535.913530] [ 13428] 1000 13428 712847 124686 5586944 523997 0 virsh
Jan 21 23:40:00 GS-CEL-L kernel: [55535.913532]
oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0,oom_memcg=/system.slice/VmBackup.service,task_memcg=/system.slice/VmBackup.service,task=virsh,pid=13428,uid=1000
Jan 21 23:40:00 GS-CEL-L kernel: [55535.913538] Memory cgroup out of memory: Killed process 13428 (virsh) total-vm:2851388kB,
anon-rss:486180kB, file-rss:12564kB, shmem-rss:0kB
I wonder why "virsh vol-download" needs so much RAM. It does not get killed straight away; it takes a few minutes. It starts
with a VMSIZE of around 295 MiB, which is not exactly frugal for a file download operation, but then it grows and grows.
Note that the virtual machine is not running (shut off) while doing the backup.
Last time I tried with an increased memory limit of 5 GB; "virsh vol-download" was killed when using 7.4 GB, and the partially-downloaded
volume file weighed 60 GB. Therefore, it looks like "virsh vol-download" keeps a percentage of the downloaded size in RAM.
Is there a way to make "virsh vol-download" use less memory?
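The underlying API (virStorageVolDownload) delivers the volume as a stream of chunks, so bounded memory is achievable in principle; the growth is presumably buffering on the virsh side. Driving the stream yourself keeps the peak at one chunk. The loop below is a sketch of that pattern, with plain file objects standing in for the libvirt stream's recv()/send():

```python
import io

def copy_bounded(read, write, chunk_size=256 * 1024):
    """Copy from read() to write() in fixed-size chunks, so peak
    memory stays near chunk_size regardless of the volume size."""
    total = 0
    while True:
        chunk = read(chunk_size)
        if not chunk:
            break
        write(chunk)
        total += len(chunk)
    return total

src = io.BytesIO(b"x" * (1024 * 1024))  # stands in for the volume stream
dst = io.BytesIO()                      # stands in for the backup file
copied = copy_bounded(src.read, dst.write)
```

With the Python libvirt bindings the same loop would wrap vol.download() on a stream object instead of file objects.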
Thanks in advance,
rdiez
4 years, 10 months
How to detect completion of a paused VM migration on the destination?
by Milan Zamazal
Hi,
when a normally running VM is migrated, libvirt sends
VIR_DOMAIN_EVENT_RESUMED_MIGRATED event on the destination once the
migration completes. I can see that when a paused VM is migrated,
libvirt sends VIR_DOMAIN_EVENT_SUSPENDED_PAUSED instead.
Since there seems to be nothing migration specific about
VIR_DOMAIN_EVENT_SUSPENDED_PAUSED event, my question is: Is it safe to
assume on the destination that this event signals completion of the
incoming migration (unless VIR_DOMAIN_EVENT_RESUMED_MIGRATED is received
before)?
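The dispatch I have in mind looks roughly like the sketch below. The numeric values mirror libvirt's virDomainEventType/detail enums as I understand them; treat them as assumptions and prefer the libvirt.VIR_DOMAIN_EVENT_* constants in real code:

```python
# Assumed to mirror libvirt's enums; use libvirt.VIR_DOMAIN_EVENT_*
# constants in real code instead of literals.
EVENT_SUSPENDED, EVENT_RESUMED = 3, 4
DETAIL_SUSPENDED_PAUSED = 0
DETAIL_RESUMED_MIGRATED = 1

def migration_finished(event: int, detail: int, incoming: bool) -> bool:
    """On the destination, treat RESUMED_MIGRATED (running VM) as
    completion, and SUSPENDED_PAUSED as completion only while an
    incoming migration is known to be pending (paused VM)."""
    if event == EVENT_RESUMED and detail == DETAIL_RESUMED_MIGRATED:
        return True
    if event == EVENT_SUSPENDED and detail == DETAIL_SUSPENDED_PAUSED:
        return incoming  # ambiguous otherwise: any pause looks the same
    return False
```

Tracking the "incoming migration pending" flag yourself sidesteps the ambiguity, since SUSPENDED_PAUSED alone carries no migration context.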
Thanks,
Milan
4 years, 10 months