Hi Bernd,
If you want to operate on many guests concurrently (e.g. migrate 60 of them),
you need to adjust the related settings and then restart the libvirtd service.
1) Edit the /etc/libvirt/libvirtd.conf:
max_clients = 5000
max_queued_clients = 1000
min_workers = 500
max_workers = 1000
max_client_requests = 1000
keepalive_interval = -1
2) Edit /etc/libvirt/qemu.conf:
lock_manager = "lockd"
max_processes = 65535
max_files = 65535
keepalive_interval = -1
3) # service libvirtd restart
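For example, to double-check that the values are really uncommented before the
restart (a quick sketch, assuming the stock config paths; adjust for your distro):
# grep -E '^(max_clients|max_queued_clients|min_workers|max_workers|max_client_requests|keepalive_interval)' /etc/libvirt/libvirtd.conf
# grep -E '^(lock_manager|max_processes|max_files|keepalive_interval)' /etc/libvirt/qemu.conf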
Regards,
Chenli Hu
----- Original Message -----
From: "Bernd Lentes" <bernd.lentes(a)helmholtz-muenchen.de>
To: "libvirt-ML" <libvirt-users(a)redhat.com>
Sent: Monday, December 10, 2018 8:46:22 PM
Subject: Re: [libvirt-users] concurrent migration of several domains rarely fails
Jim wrote:
>
> What is meant by the "admin interface" ? virsh ?
virt-admin, which you can use to change some admin settings of libvirtd, e.g.
log_level. You are interested in the keepalive settings just above those in
libvirtd.conf, specifically
#keepalive_interval = 5
#keepalive_count = 5
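For illustration (assuming a libvirt new enough to ship virt-admin, i.e. 1.3.3
or later; the available subcommands differ between versions):
# virt-admin srv-list
# virt-admin srv-threadpool-info libvirtd
# virt-admin srv-clients-list libvirtd
The threadpool and client output corresponds to the min_workers/max_workers and
max_clients settings mentioned above.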
> What is meant by "client" in libvirtd.conf ? virsh ?
Yes, virsh is a client, as is virt-manager or any application connecting to
libvirtd.
> Why do I have regular timeouts although my two hosts are very performant? 128GB
> RAM, 16 cores, two 1GBit/s network adapters on each host in bonding.
> During migration I don't see much load, and nearly no waiting for IO.
I'd think concurrently migrating 3 VMs on a 1G network might cause some
congestion :-).
> Should i set admin_keepalive_interval to -1 ?
You should try 'keepalive_interval = -1'. You can also avoid sending keepalive
messages from virsh with the '-k' option, e.g. 'virsh -k 0 migrate ...'.
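Applied to the full migrate command quoted further down in this mail, that
would be (untested; note that -k is a global virsh option, so it goes before
the subcommand):
virsh -k 0 --connect=qemu:///system migrate --verbose --live domain qemu+ssh://ha-idg-1/system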
If this doesn't help, are you in a position to test a newer libvirt, preferably
master or the recent 4.10.0 release?
Hi Jim,
Unfortunately not.
I have some more questions; maybe you can help me a bit.
I found
http://epic-alfa.kavli.tudelft.nl/share/doc/libvirt-devel-0.10.2/migratio...,
which is quite interesting.
When I migrate with virsh, I use:
virsh --connect=qemu:///system migrate --verbose --live domain qemu+ssh://ha-idg-1/system
When pacemaker migrates, it runs this command:
virsh --connect=qemu:///system --quiet migrate --live domain qemu+ssh://ha-idg-1/system
which is nearly the same.
Do I understand the webpage correctly that this is a "Native migration, client to two
libvirtd servers"?
Furthermore the document says:
"To force migration over an alternate network interface the optional hypervisor
specific URI must be provided".
I have both hosts also connected directly to each other with a bonding device using
round-robin, and an internal IP (192.168.100.xx).
When I want to use this device, which is perhaps a bit faster and more secure (directly
connected), how do I have to specify that?
virsh --connect=qemu:///system --quiet migrate --live domain qemu+ssh://ha-idg-1/system tcp://192.168.100.xx
Does it have to be the IP of the source or the destination? Does the source then
automatically use its own device with 192.168.100.xx as well?
Thanks.
Bernd