[libvirt-users] [virtual interface] detach interface during boot succeed with no changes
by Yalan Zhang
Hi guys,
when I detach an interface from a VM during boot (before the guest has finished booting), it always fails: virsh reports success, but the interface is still present in the domain XML. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </controller>
  <interface type='network'>
    <mac address='52:54:00:98:c4:a0'/>
    <source network='default' bridge='virbr0'/>
    <target dev='vnet0'/>
    <model type='rtl8139'/>
    <alias name='net0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </interface>
When I detach after the VM has finished booting (increasing the sleep to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
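If the goal is simply a reliable detach, a retry loop works as a sketch (my assumption, not a confirmed fix: the guest merely needs time to bring up its PCI hotplug machinery before it can acknowledge the detach):
# retry the detach until the MAC really disappears from the live XML
vm=rhel7.2
mac=52:54:00:98:c4:a0
for i in $(seq 1 10); do
    virsh detach-interface "$vm" network "$mac" 2>/dev/null
    sleep 2
    if ! virsh dumpxml "$vm" | grep -q "$mac"; then
        echo "interface detached"
        break
    fi
done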
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
host
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off host offloading options. By default, the supported offloads are enabled by QEMU. Since 1.2.9 (QEMU only). The mrg_rxbuf attribute can be used to control mergeable rx buffers on the host side. Possible values are on (default) and off. Since 1.2.13 (QEMU only).
guest
The csum, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off guest offloading options. By default, the supported offloads are enabled by QEMU. Since 1.2.9 (QEMU only).
Then I disabled UFO on the vNIC of my guest with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off' queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However, can I disable UFO without touching the host side, or does it always have to be disabled on both host and guest like this?
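For reference, whether the guest-side setting took effect can be checked from inside the guest with ethtool (a sketch; eth0 is an assumed interface name):
# inside the guest: show the offload state of the virtio NIC
ethtool -k eth0 | grep udp-fragmentation-offload
# expected when disabled: udp-fragmentation-offload: off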
Thanks,
Brs,
Natsu
[libvirt-users] libvirt 5.0.0 - LXC container still in "virsh list" output after shutdown
by mxs kolo
Hello.
Centos 7.6 with libvirt build from base "virt" repository:
libvirt-daemon-driver-lxc-5.0.0-1.el7.x86_64
libvirt-client-5.0.0-1.el7.x86_64
libvirt-daemon-5.0.0-1.el7.x86_64
libvirt-daemon-driver-network-5.0.0-1.el7.x86_64
libvirt-libs-5.0.0-1.el7.x86_64
+
systemd-219-62.el7_6.2.x86_64
With this combination, LXC containers with type='direct' interfaces can be started, but can't be stopped :)
Before shutdown.
# virsh list --all
Id Name State
------------------------------------------
16312 test.lxc running
# machinectl | grep test
lxc-16312-test.lxc container libvirt-lxc
# pstree -apA 16312
libvirt_lxc,16312 --name test.lxc --console 42 --security=none --handshake 45 --veth macvlan0
|-systemd,16315
| |-agetty,16690 --noclear --keep-baud console 115200 38400 9600 vt220
| |-dbus-daemon,16659 --system --address=systemd: --nofork --nopidfile --systemd-activation
| |-rsyslogd,17614 -n
| | |-{rsyslogd},17616
| | `-{rsyslogd},17626
| |-sshd,17613 -D
| |-systemd-journal,16613
| `-systemd-logind,16657
`-{libvirt_lxc},16319
Shutdown.
# virsh shutdown 16312
Domain 16312 is being shutdown
# echo $?
0
After shutdown, the container is still present in the list output:
# virsh list --all
Id Name State
------------------------------------------
16312 test.lxc running
# machinectl | grep test | wc -l
0
No main process:
# ps axuwwf | grep libvirt_lxc | grep test | wc -l
0
The LXC container has really stopped, but virsh still shows it in the list. The blkio, cpuacct, memory, cpu, cpu,cpuacct, devices, hugetlb and pids cgroups were deleted, but other cgroups are still present in the filesystem:
drwxr-xr-x 2 root root 0 Jan 21 13:07 /sys/fs/cgroup/cpuset/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
drwxr-xr-x 2 root root 0 Jan 21 13:07 /sys/fs/cgroup/freezer/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
drwxr-xr-x 2 root root 0 Jan 21 13:07 /sys/fs/cgroup/net_cls/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
drwxr-xr-x 2 root root 0 Jan 21 13:07 /sys/fs/cgroup/net_cls,net_prio/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
drwxr-xr-x 2 root root 0 Jan 21 13:07 /sys/fs/cgroup/net_prio/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
drwxr-xr-x 2 root root 0 Jan 21 13:07 /sys/fs/cgroup/perf_event/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
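For reference, while debugging, the leftover (empty) scope directories can be removed by hand (a sketch, an assumption on my side rather than a proper fix; empty cgroup directories are removed with rmdir, never rm):
# remove the stale, empty cgroup scope directories left behind for the container
find /sys/fs/cgroup/*/machine.slice -depth -type d \
    -name 'machine-lxc*16312*test.lxc.scope' -exec rmdir {} \;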
Restarting libvirtd temporarily puts the container into the correct state:
# systemctl restart libvirtd
# virsh list --all | grep test
- test.lxc shut off
libvirt 4.5.0 and 4.10.0 perform the LXC shutdown correctly.
b.r.
Maxim Kozin
[libvirt-users] virsh confirmed edits do not persist
by Josh Mcneil
This may be user error, I am new to libvirt.
I am using libvirt 5.0.0. I have created a VM (win10) in the system URI
using virt-manager. I stopped the VM to edit it.
I set my LIBVIRT_DEFAULT_URI="qemu:///system" and am able to read the
domain configuration with `EDITOR=nano virsh edit win10`. When I save
(ctrl+o) and exit (ctrl+x) I see the message "Domain win10 XML
configuration edited." as expected. However, when I run the edit command
again the changes do not appear to have persisted.
I have tried starting and stopping the VM. I read that I may need to define
the domain with virsh so I did that. None of these things seem to work.
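For reference, one way to rule out the usual suspect (an assumption, not a confirmed diagnosis: the edit landing on a different connection, e.g. qemu:///session instead of qemu:///system) is to pin the URI explicitly:
# confirm which connection virsh actually uses
virsh uri
# edit and verify against the system URI explicitly
virsh -c qemu:///system edit win10
virsh -c qemu:///system dumpxml --inactive win10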
I would appreciate help solving this issue so that I can modify domains from virsh!
Thanks in advance for any help!
Josh
[libvirt-users] HELP!
by Shashwat shagun
I'm getting this error (below)
[root@localhost test]# ./test
virError(Code=6, Domain=20, Message='invalid connection pointer in
virConnectListAllDomains')
0 running domains:
when running this program (below):
package main

import (
    "fmt"

    libvirt "github.com/libvirt/libvirt-go"
)

type DomService struct {
    Conn *libvirt.Connect
}

func (d *DomService) Connect() error {
    var err error
    d.Conn, err = libvirt.NewConnect("qemu:///system")
    if err != nil {
        fmt.Println(err)
    }
    defer d.Conn.Close()
    return nil
}

func (d *DomService) ListDoms() error {
    doms, err := d.Conn.ListAllDomains(libvirt.CONNECT_LIST_DOMAINS_ACTIVE)
    if err != nil {
        fmt.Println(err)
    }
    //fmt.Println(doms)
    fmt.Printf("%d running domains:\n", len(doms))
    for _, dom := range doms {
        name, err := dom.GetName()
        if err == nil {
            fmt.Printf(" %s\n", name)
        }
        dom.Free()
    }
    return nil
}

func main() {
    doms := DomService{}
    doms.Connect()
    doms.ListDoms()
}
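The "invalid connection pointer" is consistent with the connection already being closed by the time ListDoms runs: Connect() defers d.Conn.Close(), so the connection is torn down the moment Connect() returns. A minimal corrected sketch moves the Close() to the caller:

func main() {
    doms := DomService{}
    if err := doms.Connect(); err != nil { // Connect() must NOT defer Close()
        fmt.Println(err)
        return
    }
    // close the connection only after all libvirt calls are finished
    defer doms.Conn.Close()
    doms.ListDoms()
}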
--
Regards,
Shashwat Shagun
[libvirt-users] virsh migrate --copy-storage-inc
by Paul van der Vlis
Hello,
I use libvirt on machines without shared storage. My VMs each have a single qcow2 disk, named after the VM.
When I want to migrate a VM, I check whether a qcow2 image with that name exists on the other host. If it doesn't, I first copy the image over with rsync. If the image exists, I skip that step and expect "--copy-storage-inc" to take care of it.
But I don't know how intelligent "--copy-storage-inc" is. I use LVM inside the VMs, and I may, for example, have resized a volume.
My questions:
Is "--copy-storage-inc" intelligent enough for such tasks?
Is there documentation what is done during migration? Is it using rsync?
Is it possible to make the migration process more verbose?
Is what I do a good way?
With regards,
Paul van der Vlis
BTW: this is the script I use:
-----------
# rsync the image when it is not yet on the other host:
if test `ssh $other "if test -e /data/$vm.qcow2; then echo yes; else echo no; fi"` = "no"; then
    echo "rsync..."
    rsync /data/$vm.qcow2 $other:/data/$vm.qcow2
    if test $? = 0; then echo "succeeded"; else echo "failed"; exit; fi
fi

# migrate
echo "migrate..."
virsh migrate --live --p2p --tunnelled --copy-storage-inc --persistent \
    --undefinesource --verbose $vm qemu+ssh://$other/system
if test $? = 0; then echo "succeeded"; else echo "failed"; exit; fi
-----------
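For reference, one way to see how much data a given migration actually copies (and hence whether the incremental copy kicked in) is to poll the job statistics on the source host while it runs; a sketch:
# on the source host, while the migration is in progress:
watch -n 1 virsh domjobinfo $vm
# "Data processed" / "Data total" include the disk blocks being copied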
--
Paul van der Vlis Linux systeembeheer Groningen
https://www.vandervlis.nl/
[libvirt-users] libvirtd (4.9) version takes a long time to start
by Sharath Kurudi
Hi,
I have installed libvirt 4.9, and libvirtd takes a long time to come up. I enabled debug logging and also added prints of my own; the logs are below.
2019-02-06 05:55:49.082+0000: 377: info : libvirt version: 4.9.0
2019-02-06 05:55:49.082+0000: 377: info : hostname: draco
2019-02-06 05:55:49.082+0000: 377: info : virObjectNew:248 : OBJECT_NEW: obj=0x558e782d8bb0 classname=virAccessManager
2019-02-06 05:55:49.082+0000: 377: debug : virAccessManagerNewDriver:105 : Initialized with stack
2019-02-06 05:55:49.082+0000: 377: info : virObjectNew:248 : OBJECT_NEW: obj=0x558e782cd870 classname=virAccessManager
2019-02-06 05:55:49.082+0000: 377: debug : virAccessManagerNewDriver:105 : Initialized with none
2019-02-06 05:55:49.082+0000: 377: info : virObjectRef:382 : OBJECT_REF: obj=0x558e782d8bb0
2019-02-06 05:55:49.082+0000: 377: info : virObjectUnref:344 : OBJECT_UNREF: obj=0x558e782d8bb0
2019-02-06 05:55:49.082+0000: 377: debug : main:1200 : Decided on pid file path '/var/run/libvirtd.pid'
2019-02-06 05:55:49.082+0000: 377: debug : main:1213 : Decided on socket paths '/var/run/libvirt/libvirt-sock', '/var/run/libvirt/libvirt-sock-ro' and '/var/run/libvirt/libvirt-admin-sock'
2019-02-06 05:55:49.082+0000: 377: debug : virFileClose:109 : Closed fd 5
2019-02-06 05:55:49.082+0000: 378: debug : virFileClose:109 : Closed fd 4
2019-02-06 05:55:49.082+0000: 378: debug : virFileClose:109 : Closed fd 4
2019-02-06 05:55:49.082+0000: 378: debug : virFileClose:109 : Closed fd 6
2019-02-06 05:55:49.083+0000: 379: debug : main:1255 : Ensuring run dir '/var/run/libvirt' exists
2019-02-06 05:55:49.083+0000: 379: debug : virFileMakePathHelper:3034 : path=/var/run/libvirt mode=0777
2019-02-06 05:55:49.083+0000: 379: debug : virNetlinkStartup:137 : Running global netlink initialization
2019-02-06 05:55:49.083+0000: 379: debug : virNetlinkStartup:144 : Sharath Returning from global netlink initialization
2019-02-06 05:55:49.083+0000: 379: debug : main:1270 : Sharath virNetlinkStartup done
2019-02-06 05:55:49.083+0000: 379: debug : virNetDaemonNew:127 : Sharath Running virNetDaemonNew
2019-02-06 05:55:49.083+0000: 379: debug : virNetDaemonNew:131 : Sharath virNetDaemonInitalize done
2019-02-06 05:55:49.083+0000: 379: info : virObjectNew:248 : OBJECT_NEW: obj=0x558e782ce1b0 classname=virNetDaemon
2019-02-06 05:55:49.083+0000: 379: debug : virNetDaemonNew:136 : Sharath virNetDaemonInitalize done
The logs above repeat until libvirtd finally comes up. I also see more than one libvirtd instance running simultaneously during the delay. Any ideas on how to resolve this?
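For reference, a couple of diagnostics that may narrow this down (a sketch; the multiple-instances symptom is what I would chase first):
# list every libvirtd process with its start time and parent
ps -eo pid,ppid,lstart,cmd | grep [l]ibvirtd
# in /etc/libvirt/libvirtd.conf, persist timestamped debug logs to a file:
#   log_filters="1:daemon 1:rpc"
#   log_outputs="1:file:/var/log/libvirt/libvirtd.log"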
Thanks
[libvirt-users] concurrent migration of several domains rarely fails
by Lentes, Bernd
Hi,
I have a two-node cluster with several domains as resources. During testing I tried several times to migrate some domains concurrently.
Usually it succeeded, but rarely it failed. I found one clue in the log:
Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+0000: 3252: error : virKeepAliveTimerInternal:143 : internal error: connection closed due to keepalive timeout
The domains are configured similarly:
primitive vm_geneious VirtualDomain \
params config="/mnt/san/share/config.xml" \
params hypervisor="qemu:///system" \
params migration_transport=ssh \
op start interval=0 timeout=120 trace_ra=1 \
op stop interval=0 timeout=130 trace_ra=1 \
op monitor interval=30 timeout=25 trace_ra=1 \
op migrate_from interval=0 timeout=300 trace_ra=1 \
op migrate_to interval=0 timeout=300 trace_ra=1 \
meta allow-migrate=true target-role=Started is-managed=true \
utilization cpu=2 hv_memory=8000
What is the algorithm for discovering the port used for live migration?
I have the impression that "params migration_transport=ssh" is worthless; port 22 isn't involved in the live migration.
My experience is that TCP ports > 49151 are used for the migration, but the exact procedure isn't clear to me.
Does live migration use TCP port 49152 first, and one port higher for each subsequent domain? E.g. 49152, 49153 and 49154 for the concurrent live migration of three domains.
Why does the live migration of three domains usually succeed, although only 49152 and 49153 are open on both hosts?
Is the migration not really concurrent, but sometimes sequential?
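For reference, the QEMU driver allocates migration ports from a fixed, configurable range on the destination host, and the keepalive timeout above is tunable as well (a sketch of the relevant settings, defaults shown):
# /etc/libvirt/qemu.conf — port range for incoming (non-tunnelled) migration
migration_port_min = 49152
migration_port_max = 49215
# /etc/libvirt/libvirtd.conf — keepalive tuning related to the timeout above
keepalive_interval = 5
keepalive_count = 5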
Bernd
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 89 3187 1241
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg
he who makes mistakes can learn something
he who does nothing can learn nothing