[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has finished booting), it always fails silently: virsh reports success, but the interface is still present in the domain XML. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behaviour; is this considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting (extending the sleep to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
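A workaround sketch I am considering (untested, and assuming the detach only takes effect once the guest is far enough into boot to handle the unplug request): retry the detach and check the live XML instead of relying on a fixed sleep. The MAC address is the one from the example above.
    # retry until the interface is really gone (or give up after ~60s)
    for i in $(seq 1 30); do
        virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0 2>/dev/null
        if ! virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0'; then
            echo "interface removed from live XML"
            break
        fi
        sleep 2
    done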
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
host
    The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on and off
    can be used to turn off host offloading options. By default, the supported offloads
    are enabled by QEMU. Since 1.2.9 (QEMU only). The mrg_rxbuf attribute can be used
    to control mergeable rx buffers on the host side. Possible values are on (default)
    and off. Since 1.2.13 (QEMU only).
guest
    The csum, tso4, tso6, ecn and ufo attributes with possible values on and off can
    be used to turn off guest offloading options. By default, the supported offloads
    are enabled by QEMU. Since 1.2.9 (QEMU only).
Then I disabled UFO on the vNIC in the guest with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However, can I disable UFO without touching the host side, or does it always have to be disabled on both host and guest like this?
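By the way, if the guest side cannot be controlled from the XML alone, I assume the same effect can be had from inside the guest OS with ethtool (untested sketch, assuming the interface is named eth0 inside the guest):
    # run inside the guest, not on the hypervisor
    ethtool -k eth0 | grep udp-fragmentation-offload   # show current state
    ethtool -K eth0 ufo off                            # turn UFO off for this NIC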
Thanks,
Brs,
Natsu
[libvirt-users] UDP broadcasts vs. nat Masquerading issue
by Nikolai Zhubr
Hi all,
I'm observing an issue where, as soon as libvirt starts, UDP broadcasts
going through the physical network (unrelated to any virtualization) get
broken. Specifically, Windows network neighbourhood browsing through Samba's
nmbd starts suffering badly (Samba is running on this same box).
At the moment I'm running a quite outdated libvirt version, 1.2.9, but
apart from this issue it does its job pretty well, so I'd first
consider some patching/backporting rather than replacing it entirely with
a new one. In any case, I first need to understand better what is going on
and what is wrong with it.
This could also be related somewhat to
https://www.redhat.com/archives/libvir-list/2013-September/msg01311.html
but I suppose it is not exactly that thing.
I've already figured out that the source of the trouble is related to these
rules being added:
-A POSTROUTING -o br0 -j MASQUERADE
-A POSTROUTING -o enp0s25 -j MASQUERADE
-A POSTROUTING -o virbr2_nic -j MASQUERADE
-A POSTROUTING -o vnet0 -j MASQUERADE
Here, virbr2_nic and vnet0 are used by libvirt for the VMs' network
configurations, which is fine. However, br0 is the main interface of this
host, carrying its primary IP address (enp0s25 is the physical NIC beneath
it), and it is used for all sorts of regular communications unrelated to
virtualization. br0 is also used for attaching bridged (as opposed to NATed)
VMs managed by libvirt.
Clearly, libvirt somehow chooses to set up masquerading for literally
every existing network interface here (except lo), but I can't see a real
reason for the first two rules in the list above. Furthermore, they
corrupt UDP broadcasts coming from outside and reaching this host
(through enp0s25/br0), in that the source address gets replaced by this
host's primary address (as per masquerading). I've verified this with a
hand-crafted UDP listener that prints the source addresses as seen by
normal userspace.
Now I've discovered that I can "eliminate" the problem by either:
1. Removing "-A POSTROUTING -o br0 -j MASQUERADE" (manually)
2. Inserting "-A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.255/32 -j
ACCEPT"
(Of course correcting rules by hand is not a solution, just a test)
So the question is: what should the correct rules ideally look like? And is
this issue known/fixed in the most recent libvirt?
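For comparison, the rules I would expect a NAT network to need are scoped to the guest subnet rather than matched on every outgoing interface; for libvirt's default 192.168.122.0/24 network the generated rules look roughly like this (my understanding only, not copied from any particular version):
    -A POSTROUTING -s 192.168.122.0/24 -d 224.0.0.0/24 -j RETURN
    -A POSTROUTING -s 192.168.122.0/24 -d 255.255.255.255/32 -j RETURN
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
Rules that only match traffic sourced from the guest subnet should not be able to rewrite broadcasts arriving on br0 from the physical LAN.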
Thank you,
Regards,
Nikolai
[libvirt-users] macvtap vlan and tcp header overhead (and mtu size)
by Marc Roos
I have a host set up for libvirt KVM/QEMU VMs, and I wonder a bit about
the overhead of macvtap and how to configure the MTUs properly. To be
able to communicate with the host, I have moved the host's IP address
from the adapter to a macvtap interface.
I have the below setup on hosts.
eth0 (mtu 9000)
 +-- eth0.v100 (no ip, mtu 1500)
 |    +-- macvtap0 (ip, mtu 1500)
 |    +-- macvtap1 (ip, mtu 1500)
 +-- eth0.v101 (ip, mtu 9000)
https://pastebin.com/9jJrMCTD
I can do a ping -M do -s 9000 between hosts via the VLAN interface
eth0.v101. That is as expected.
However, a ping -M do -s 1500 between macvtap0 and another host (or
macvtap1) fails; the maximum size that does not fragment is 1472.
That is 28 bytes short. Where have they gone? I am only using macvtap;
could this be the combination of the parent interface being a VLAN and
macvtap not handling that properly? Has anyone experienced something
similar, or can anyone explain where these 28 bytes go?
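One calculation I did wonder about: ping -s sets the ICMP payload size, not the frame size, and an IPv4 header is 20 bytes plus 8 bytes for the ICMP echo header, so
    1472 (payload) + 8 (ICMP header) + 20 (IPv4 header) = 1500 bytes,
which is exactly the macvtap MTU. So the 28 bytes may simply be header overhead rather than anything macvtap or the VLAN parent is eating, but I would like to confirm that.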
[libvirt-users] mkfs fails on qemu-nbd device
by Tanmoy Sinha
Hi All,
I am unable to figure out the issue here: when I try to create a filesystem
(ext4) on a virtual disk exported with qemu-nbd, it intermittently fails.
The sequence of commands is as follows:
$> qemu-img create -f qcow2 test.qcow2 30G
$> qemu-nbd --connect=/dev/nbd0 test.qcow2
$> mkfs.ext4 /dev/nbd0
mkfs.ext4: Device size reported to be zero. Invalid partition specified, or
    partition table wasn't reread after running fdisk, due to
    a modified partition being busy and in use. You may need to reboot
    to re-read your partition table.
These are the version details:
root@localhost:~# qemu-img --version
qemu-img version 2.8.1(Debian 1:2.8+dfsg-6+deb9u7)
Copyright (c) 2003-2016 Fabrice Bellard and the QEMU Project developers
root@localhost:~# qemu-nbd --version
qemu-nbd version 0.0.1
Written by Anthony Liguori.
Copyright (C) 2006 Anthony Liguori <anthony(a)codemonkey.ws>.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
root@localhost:~# uname -a
Linux localhost 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13)
x86_64 GNU/Linux
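One thing I plan to check next (a sketch only, not yet verified): whether this is a race between qemu-nbd attaching the export and mkfs opening the device, by confirming the kernel sees the full size before running mkfs:
$> modprobe nbd max_part=8
$> qemu-nbd --connect=/dev/nbd0 test.qcow2
$> udevadm settle
$> blockdev --getsize64 /dev/nbd0    # expect 32212254720 for a 30G image, not 0
$> mkfs.ext4 /dev/nbd0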
Regards
Tanmoy Sinha
[libvirt-users] Intermittent live migration hang with ceph RBD attached volume
by Scott Sullivan
Software in use:
Source hypervisor: QEMU stable-2.12 branch; libvirt v3.2-maint branch; OS: CentOS 6
Destination hypervisor: QEMU stable-2.12 branch; libvirt v4.9-maint branch; OS: CentOS 7
I'm experiencing an intermittent live migration hang of a virtual machine
(KVM) with a ceph RBD volume attached.
At the high level what I see is that when this does happen, the virtual
machine is left in a paused state (per virsh list) on both source and
destination hypervisors indefinitely.
Here's the virsh command I am running on the source (where 10.30.76.66 is
the destination hypervisor):
virsh migrate --live --copy-storage-all --verbose --xml /root/live_migration.cfg test_vm qemu+ssh://10.30.76.66/system tcp://10.30.76.66
Here it is in "ps faux" while its in the hung state:
root 10997 0.3 0.0 380632 6156 ? Sl 12:24 0:26
> \_ virsh migrate --live --copy-storage-all --verbose --xml
> /root/live_migration.cfg test_vm qemu+ssh://10.30.76.66/sys
> root 10999 0.0 0.0 60024 4044 ? S 12:24 0:00
> \_ ssh 10.30.76.66 sh -c 'if 'nc' -q 2>&1 | grep "requires an argument"
> >/dev/null 2>&1; then ARG=-q0;else ARG=;fi;'nc' $ARG -U
The only reason I'm using the `--xml` arg is so the auth information can be
updated for the new hypervisor (I set up a cephx user for each hypervisor).
Below is a diff between my normal XML config and the one I passed via --xml,
to illustrate:
60,61c60,61
> < <auth username="source">
> < <secret type="ceph"
> uuid="d4a47178-ab90-404e-8f25-058148da8446"/>
> ---
> > <auth username="destination">
> > <secret type="ceph"
> uuid="72e9373d-7101-4a93-a7d2-6cce5ec1e6f1"/>
The libvirt secret as shown above is properly setup with good credentials
on both source and destination hypervisors.
When this happens, I don't see anything logged in the libvirt log on the
destination hypervisor. However, in the source hypervisor's log I do
see this:
2019-06-21 12:38:21.004+0000: 28400: warning :
> qemuDomainObjEnterMonitorInternal:3764 : This thread seems to be the async
> job owner; entering monitor without asking for a nested job is dangerous
But nothing else logged in the libvirt log on either source or destination.
The actual `virsh migrate --live` command pasted above still runs while
stuck in this state, and it just outputs "Migration: [100 %]" over and
over. If I strace the qemu process on the source, I see this over and over:
ppoll([{fd=9, events=POLLIN}, {fd=8, events=POLLIN}, {fd=4, events=POLLIN},
> {fd=6, events=POLLIN}, {fd=15, events=POLLIN}, {fd=18, events=POLLIN},
> {fd=19, events=POLLIN}, {fd=35, events=0}, {fd=35, events=POLLIN}], 9, {0,
> 14960491}, NULL, 8) = 0 (Timeout)
Here's those fds:
[root@source ~]# ll /proc/31804/fd/{8,4,6,15,18,19,35}
> lrwx------ 1 qemu qemu 64 Jun 21 13:18 /proc/31804/fd/15 -> socket:[931291]
> lrwx------ 1 qemu qemu 64 Jun 21 13:18 /proc/31804/fd/18 -> socket:[931295]
> lrwx------ 1 qemu qemu 64 Jun 21 13:18 /proc/31804/fd/19 -> socket:[931297]
> lrwx------ 1 qemu qemu 64 Jun 21 13:18 /proc/31804/fd/35 -> socket:[931306]
> lrwx------ 1 qemu qemu 64 Jun 21 13:18 /proc/31804/fd/4 -> [signalfd]
> lrwx------ 1 qemu qemu 64 Jun 21 13:18 /proc/31804/fd/6 -> [eventfd]
> lrwx------ 1 qemu qemu 64 Jun 21 13:18 /proc/31804/fd/8 -> [eventfd]
> [root@source ~]#
>
> [root@source ~]# grep -E '(931291|931295|931297|931306)' /proc/net/tcp
> 3: 00000000:170C 00000000:0000 0A 00000000:00000000 00:00000000
> 00000000 107 0 931295 1 ffff88043a27f840 99 0 0 10 -1
>
> 4: 00000000:170D 00000000:0000 0A 00000000:00000000 00:00000000
> 00000000 107 0 931297 1 ffff88043a27f140 99 0 0 10 -1
>
> [root@source ~]#
Further, on the source, if I query the block job status, it says no
block job is running:
[root@source ~]# virsh list
> Id Name State
> ----------------------------------------------------
> 11 test_vm paused
> [root@source ~]# virsh blockjob 11 vda
> No current block job for vda
> [root@source ~]#
and the nc/ssh connection is still OK in the hung state:
[root@source~]# netstat -tuapn|grep \.66
> tcp 0 0 10.30.76.48:48876 10.30.76.66:22
> ESTABLISHED 10999/ssh
> [root@source ~]#
> root 10999 0.0 0.0 60024 4044 ? S 12:24 0:00
> \_ ssh 10.30.76.66 sh -c 'if 'nc' -q 2>&1 | grep "requires an argument"
> >/dev/null 2>&1; then ARG=-q0;else ARG=;fi;'nc' $ARG -U
> /var/run/libvirt/libvirt-sock'
Here's the state of the migration on the source while it's stuck like this:
[root@source ~]# virsh qemu-monitor-command 11 '{"execute":"query-migrate"}'
>
> {"return":{"status":"completed","setup-time":2,"downtime":2451,"total-time":3753,"ram":{"total":2114785280,"postcopy-requests":0,"dirty-sync-count":3,"page-size":4096,"remaining":0,"mbps":898.199209,"transferred":421345514,"duplicate":414940,"dirty-pages-rate":0,"skipped":0,"normal-bytes":416796672,"normal":101757}},"id":"libvirt-317"}
> [root@source ~]#
I'm unable to run the above command on the destination while it's in this
state, however, and get a lock error (which is perhaps to be expected, since
the cutover never completed):
[root@destination ~]# virsh list
> Id Name State
> -----------------------
> 4 test_vm paused
> [root@destination ~]# virsh qemu-monitor-command 4
> '{"execute":"query-migrate"}'
> error: Timed out during operation: cannot acquire state change lock (held
> by monitor=remoteDispatchDomainMigratePrepare3Params)
> [root@destination ~]#
Does anyone have any pointers to other things I should check? Or is/was this
a known bug in the old stable-3.2, perhaps?
I haven't seen this when migrating on hosts with libvirt 4.9 on both
source and destination. However, the ones I have with the older 3.2 are
CentOS 6 based and aren't as easily upgraded to 4.9. If anyone has
ideas for patches I could potentially port to 3.2 to mitigate this,
that would also be welcome. I would also be interested in forcing the cutover
in this state if possible, though I suspect that isn't safe, since the
block job isn't running while in this bad state.
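Other things I was considering checking while it is stuck, in case they are useful to anyone looking at this (a sketch only, using the same domain IDs as above):
# on the source: does QEMU itself still report a drive-mirror block job?
virsh qemu-monitor-command 11 '{"execute":"query-block-jobs"}'
# on the source: QEMU's own view of the run state
virsh qemu-monitor-command 11 '{"execute":"query-status"}'
# the same query on the destination (domain ID 4) may hit the state change
# lock mentioned above
virsh qemu-monitor-command 4 '{"execute":"query-status"}'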
Thanks in advance
[libvirt-users] libvirtd does not update VM .xml configuration on filesystem after virsh blockcommit
by Saso Tavcar
Hi,
Recently we upgraded some KVM hosts from Fedora 29 to Fedora 30 and
now experience broken VM configurations on the filesystem after virsh blockcommit.
The commands "virsh dumpxml ..." and "virsh dumpxml --inactive ..." show a different configuration from the one on the filesystem.
If libvirtd is restarted or the system rebooted, the broken VM XML configurations are what remains on the filesystem.
Everything is OK on the Fedora 29 KVM hosts!
0. XML configurations before snapshot is taken (all good, nothing found)
[root@server1 ~]# cat /etc/libvirt/qemu/somedomain.com.ncloud.xml| grep BACK
[root@server1 ~]# cat /etc/libvirt/qemu/somedomain.com.ncloud.xml| grep backingStore
[root@server1 ~]# /usr/bin/virsh --quiet dumpxml somedomain.com.ncloud|grep BACK
[root@server1 ~]# /usr/bin/virsh --quiet dumpxml somedomain.com.ncloud|grep backingStore
[root@server1 ~]# /usr/bin/virsh --quiet dumpxml --inactive somedomain.com.ncloud|grep BACK
[root@server1 ~]# /usr/bin/virsh --quiet dumpxml --inactive somedomain.com.ncloud|grep backingStore
1. When a VM snapshot is taken (all OK) with
/usr/bin/virsh --quiet snapshot-create-as --domain somedomain.com.ncloud ....
the active, inactive and on-disk configurations all change:
- active (virsh dumpxml)
[root@server1 ~]# /usr/bin/virsh --quiet dumpxml somedomain.com.ncloud|grep BACK
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud.qcow2-BACKUPING_NOW'/>
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud-swap.qcow2-BACKUPING_NOW'/>
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud-data1.qcow2-BACKUPING_NOW'/>
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud-data2.qcow2-BACKUPING_NOW'/>
[root@server1 ~]# /usr/bin/virsh --quiet dumpxml somedomain.com.ncloud|grep backingStore
<backingStore type='file'>
</backingStore>
<backingStore type='file'>
</backingStore>
<backingStore type='file'>
</backingStore>
<backingStore type='file'>
</backingStore>
- inactive (virsh dumpxml --inactive)
[root@server1 ~]# /usr/bin/virsh --quiet dumpxml --inactive somedomain.com.ncloud|grep BACK
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud.qcow2-BACKUPING_NOW'/>
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud-swap.qcow2-BACKUPING_NOW'/>
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud-data1.qcow2-BACKUPING_NOW'/>
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud-data2.qcow2-BACKUPING_NOW'/>
[root@server1 ~]# /usr/bin/virsh --quiet dumpxml --inactive somedomain.com.ncloud|grep backingStore
<backingStore type='file'>
</backingStore>
<backingStore type='file'>
</backingStore>
<backingStore type='file'>
</backingStore>
<backingStore type='file'>
</backingStore>
- XML configuration on filesystem has changed
[root@server1 ~]# ls -al /etc/libvirt/qemu/somedomain.com.ncloud.xml
-rw-------. 1 root root 6260 Jun 18 23:00 /etc/libvirt/qemu/somedomain.com.ncloud.xml
[root@server1 ~]# cat /etc/libvirt/qemu/somedomain.com.ncloud.xml |grep BACK
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud.qcow2-BACKUPING_NOW'/>
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud-swap.qcow2-BACKUPING_NOW'/>
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud-data1.qcow2-BACKUPING_NOW'/>
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ncloud-data2.qcow2-BACKUPING_NOW'/>
[root@server1 ~]# cat /etc/libvirt/qemu/somedomain.com.ncloud.xml |grep backingStore
<backingStore type='file'>
</backingStore>
<backingStore type='file'>
</backingStore>
<backingStore type='file'>
</backingStore>
<backingStore type='file'>
</backingStore>
2. When the VM backup is done, the data is merged with "virsh blockcommit ..." and the snapshot is deleted (NOT OK!!!)
...
/usr/bin/virsh --quiet blockcommit somedomain.com.ncloud sdd --active --pivot
/usr/bin/virsh --quiet snapshot-delete --domain somedomain.com.ncloud somedomain.com.ncloud-SNAPSHOT --metadata
the VM configurations end up in the following state:
- active (virsh dumpxml),
[root@server1 ~]# /usr/bin/virsh --quiet dumpxml somedomain.com.ncloud|grep BACK ;;; OK
[root@server1 ~]# /usr/bin/virsh --quiet dumpxml somedomain.com.ncloud|grep backingStore ;;; why is there empty backingStore left ???
<backingStore/>
<backingStore/>
<backingStore/>
<backingStore/>
- inactive
[root@server1 qemu]# virsh dumpxml --inactive somedomain.com.ncloud |grep BACK ;;; OK
[root@server1 qemu]# virsh dumpxml --inactive somedomain.com.ncloud |grep backingStore ;;; OK
- XML on filesystem (the .xml file on the filesystem has not changed/reverted since the snapshot was taken - NOT OK!!!! It should have been cleared of the snapshot source files and backingStore elements)
[root@server1 ~]# ls -al /etc/libvirt/qemu/somedomain.com.ncloud.xml
-rw-------. 1 root root 6260 Jun 18 23:00 /etc/libvirt/qemu/somedomain.com.ncloud.xml
[root@server1 qemu]# cat somedomain.com.ns2.xml |grep BACK
<source file='/Virtualization/linux/somedomain.com/somedomain.com.ns2.qcow2-BACKUPING_NOW'/>
[root@server1 qemu]# cat somedomain.com.ns2.xml |grep backingStore
<backingStore type='file' index='1'>
</backingStore>
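As a temporary workaround I am considering re-persisting the domain right after the blockcommit/snapshot-delete, which (if the in-memory inactive definition is the correct one, as it appears to be above) should make libvirtd rewrite the on-disk XML. Just a sketch, not yet tested:
/usr/bin/virsh --quiet dumpxml --inactive somedomain.com.ncloud > /tmp/somedomain.com.ncloud.xml
/usr/bin/virsh --quiet define /tmp/somedomain.com.ncloud.xml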
##############################################################################################################################################################################
- Fedora 29 has libvirt 4.7.0 and qemu 3.0.1:
[root@solaris1 ~]# rpm -qa |grep libvirt-daemon-kvm
libvirt-daemon-kvm-4.7.0-3.fc29.x86_64
[root@solaris1 ~]# rpm -qa |grep qemu-system-x86
qemu-system-x86-core-3.0.1-3.fc29.x86_64
qemu-system-x86-3.0.1-3.fc29.x86_64
- Fedora 30 has libvirt 5.1.0 and qemu 3.1.0:
[root@server1 ~]# rpm -qa |grep libvirt-daemon-kvm
libvirt-daemon-kvm-5.1.0-8.fc30.x86_64
[root@server1 ~]# rpm -qa |grep qemu-system-x86
qemu-system-x86-3.1.0-8.fc30.x86_64
qemu-system-x86-core-3.1.0-8.fc30.x86_64
For every VM from "virsh list" we do the following steps (in a script) for the VM backup:
/usr/bin/virsh --quiet domblklist somedomain.com.ncloud
/usr/bin/virsh --quiet dumpxml --inactive somedomain.com.ncloud > /Backuping/VMs/Daily/somedomain.com.ncloud.xml
/usr/bin/virsh --quiet snapshot-create-as --domain somedomain.com.ncloud somedomain.com.ncloud-SNAPSHOT --diskspec sda,file=/Virtualization/linux/somedomain.com/somedomain.com.ncloud.qcow2... --diskspec sdb,file=/Virtualization/linux/somedomain.com/somedomain.com.ncloud-swap.... --diskspec sdc,file=/Virtualization/linux/somedomain.com/somedomain.com.ncloud-data1... --diskspec sdd,file=/Virtualization/linux/somedomain.com/somedomain.com.ncloud-data2... --disk-only --atomic --quiesce
/usr/bin/virsh --quiet snapshot-list somedomain.com.ncloud
/usr/bin/scp -p server1.somedomain.us:/Virtualization/linux/somedomain.com/somedomain.com... somedomain.com.ncloud.qcow2
/usr/bin/scp -p server1.somedomain.us:/Virtualization/linux/somedomain.com/somedomain.com... somedomain.com.ncloud-swap.qcow2
/usr/bin/scp -p server1.somedomain.us:/Virtualization/linux/somedomain.com/somedomain.com... somedomain.com.ncloud-data1.qcow2
/usr/bin/scp -p server1.somedomain.us:/Virtualization/linux/somedomain.com/somedomain.com... somedomain.com.ncloud-data2.qcow2
/usr/bin/virsh --quiet blockcommit somedomain.com.ncloud sda --active --pivot
/usr/bin/virsh --quiet blockcommit somedomain.com.ncloud sdb --active --pivot
/usr/bin/virsh --quiet blockcommit somedomain.com.ncloud sdc --active --pivot
/usr/bin/virsh --quiet blockcommit somedomain.com.ncloud sdd --active --pivot
/usr/bin/virsh --quiet snapshot-delete --domain somedomain.com.ncloud somedomain.com.ncloud-SNAPSHOT --metadata
/usr/bin/ssh server1.somedomain.us "/usr/bin/rm /Virtualization/linux/somedomain.com/somedomain.com.ncloud.qcow2-BACKUPING_NOW"
/usr/bin/ssh server1.somedomain.us "/usr/bin/rm /Virtualization/linux/somedomain.com/somedomain.com.ncloud-swap.qcow2-BACKUPING_NOW"
/usr/bin/ssh server1.somedomain.us "/usr/bin/rm /Virtualization/linux/somedomain.com/somedomain.com.ncloud-data1.qcow2-BACKUPING_NOW"
/usr/bin/ssh server1.somedomain.us "/usr/bin/rm /Virtualization/linux/somedomain.com/somedomain.com.ncloud-data2.qcow2-BACKUPING_NOW"
/usr/bin/pigz --best --rsyncable somedomain.com.ncloud.qcow2
/usr/bin/pigz --best --rsyncable somedomain.com.ncloud-swap.qcow2
/usr/bin/pigz --best --rsyncable somedomain.com.ncloud-data1.qcow2
/usr/bin/pigz --best --rsyncable somedomain.com.ncloud-data2.qcow2
There are no errors during the script (command) execution:
[root@server1 ~]# /Backuping/bin/simple_KVM_backup.pl --depth 7 --path Daily --hosting_server server1.somedomain.us
###Backuping VM somedomain.com.ncloud:
+ Moving somedomain.com.ncloud.xml.06 to somedomain.com.ncloud.xml.07.
+ Moving somedomain.com.ncloud.qcow2.gz.06 to somedomain.com.ncloud.qcow2.gz.07.
...
+ Disk snapshot somedomain.com.ncloud-SNAPSHOT for somedomain.com.ncloud ... Domain snapshot somedomain.com.ncloud-SNAPSHOT created
Name Creation Time State
-----------------------------------------------------------------------
somedomain.com.ncloud-SNAPSHOT 2019-06-18 23:00:04 +0200 disk-snapshot
Done!
+ Time scp start : 2019-06-18 23:00:05
somedomain.com.ncloud.qcow2 100% 20GB 401.3MB/s 00:51
Done!
somedomain.com.ncloud-swap.qcow2 100% 4097MB 483.5MB/s 00:08
Done!
somedomain.com.ncloud-data1.qcow2 100% 100GB 400.7MB/s 04:15
Done!
somedomain.com.ncloud-data2.qcow2 100% 100GB 440.6MB/s 03:52
Done!
+ Time scp end : 2019-06-18 23:09:14
+ Time blockcommit start : 2019-06-18 23:09:14
Successfully pivoted
Successfully pivoted
Successfully pivoted
Successfully pivoted
+ Time blockcommit end : 2019-06-18 23:09:16
+ Snapshot delete ... Domain snapshot somedomain.com.ncloud-SNAPSHOT deleted
Done!
+ Delete snapshot files ... Done!
+ Time compress start : 2019-06-18 23:09:17
Regards,
saso
[libvirt-users] Libvirt API for getting disk capacity from VM XML
by Varsha Verma
Hello everyone,
I am doing an Outreachy internship at OpenStack Ironic. In the sushy-tools
project, we are using libvirt VMs to simulate bare-metal machines for
testing purposes.
In the XML description of a domain, there are a bunch of disk elements
giving information about the various storage devices attached to the
domain. Is there some way to get the size/capacity of those disks using the
libvirt API?
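For context, on the command line virsh domblkinfo seems to report what I need (Capacity/Allocation/Physical per disk target), and as far as I can tell it is backed by the virDomainGetBlockInfo API call, so I am hoping the same information is reachable from the bindings. A placeholder example:
virsh domblkinfo some-domain vda    # prints Capacity, Allocation and Physical in bytes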
--
Regards,
Varsha Verma
Fourth Year Undergraduate
Department of Electrical Engineering
IIT-BHU, Varanasi
[libvirt-users] blockcommit of domain not successful
by Lentes, Bernd
Hi,
I have several domains running on a 2-node HA cluster.
Each night I create snapshots of the domains; after copying the consistent raw file to a CIFS server, I blockcommit the changes back into the raw files.
That has been running quite well.
But recently the blockcommit didn't work for one domain.
I create a logfile of the whole procedure:
===============================================================
...
Sat Jun 1 03:05:24 CEST 2019
Target Source
------------------------------------------------
vdb /mnt/snap/severin.sn
hdc -
/usr/bin/virsh blockcommit severin /mnt/snap/severin.sn --verbose --active --pivot
Block commit: [ 0 %]Block commit: [ 15 %]Block commit: [ 28 %]Block commit: [ 35 %]Block commit: [ 43 %]Block commit: [ 53 %]Block commit: [ 63 %]Block commit: [ 73 %]Block commit: [ 82 %]Block commit: [ 89 %]Block commit: [ 98 %]Block commit: [100 %]
Target Source
------------------------------------------------
vdb /mnt/snap/severin.sn
...
==============================================================
The libvirtd-log says (it's UTC IIRC):
=============================================================
...
2019-05-31 20:31:34.481+0000: 4170: error : qemuMonitorIO:719 : internal error: End of file from qemu monitor
2019-06-01 01:05:32.233+0000: 4170: error : qemuMonitorIO:719 : internal error: End of file from qemu monitor
2019-06-01 01:05:43.804+0000: 22605: warning : qemuGetProcessInfo:1461 : cannot parse process status data
2019-06-01 01:05:43.848+0000: 22596: warning : qemuGetProcessInfo:1461 : cannot parse process status data
2019-06-01 01:06:11.438+0000: 26112: warning : qemuDomainObjBeginJobInternal:4865 : Cannot start job (destroy, none) for domain severin; current job is (modify, none) owned by (5372 remoteDispatchDomainBlockJobAbort, 0 <null>) for (39s, 0s)
2019-06-01 01:06:11.438+0000: 26112: error : qemuDomainObjBeginJobInternal:4877 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainBlockJobAbort)
2019-06-01 01:06:13.976+0000: 5369: warning : qemuGetProcessInfo:1461 : cannot parse process status data
2019-06-01 01:06:14.028+0000: 22596: warning : qemuGetProcessInfo:1461 : cannot parse process status data
2019-06-01 01:06:44.165+0000: 5371: warning : qemuGetProcessInfo:1461 : cannot parse process status data
2019-06-01 01:06:44.218+0000: 22605: warning : qemuGetProcessInfo:1461 : cannot parse process status data
2019-06-01 01:07:14.343+0000: 5369: warning : qemuGetProcessInfo:1461 : cannot parse process status data
2019-06-01 01:07:14.387+0000: 22598: warning : qemuGetProcessInfo:1461 : cannot parse process status data
2019-06-01 01:07:44.495+0000: 22605: warning : qemuGetProcessInfo:1461 : cannot parse process status data
...
===========================================================
and "cannot parse process status data" continuously until the end of the logfile.
The syslog from the domain itself didn't reveal anything, it just continues to run.
The libvirt log from the domains just says:
qemu-system-x86_64: block/mirror.c:864: mirror_run: Assertion `((&bs->tracked_requests)->lh_first == ((void *)0))' failed.
Hosts are SLES 12 SP4 with libvirt-daemon-4.0.0-8.9.1.x86_64.
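Things I can still check on the host after such a failed pivot (just a sketch, using the paths from the log above):
/usr/bin/virsh blockjob severin vdb --info     # is a block job still listed for vdb?
/usr/bin/virsh domblklist severin              # which file backs vdb right now?
qemu-img info --backing-chain /mnt/snap/severin.sn   # may need --force-share while the guest is still running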
Bernd
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 89 3187 1241
phone: +49 89 3187 3827
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg
he who makes mistakes can learn something
he who does nothing can learn nothing
Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Stellv. Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671