[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
when I detach an interface from a VM during boot (before the guest has finished booting), the command reports success but the interface is never actually removed. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 2; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:98:c4:a0'/>
<source network='default' bridge='virbr0'/>
<target dev='vnet0'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
When I detach after the VM has finished booting (increasing the sleep to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 10; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
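As a workaround sketch (rather than guessing a safe sleep value), a polling loop that retries until the interface really disappears from the live XML seems more reliable, for example with the same domain and MAC as above:
# while virsh dumpxml rhel7.2 | grep -q 52:54:00:98:c4:a0; do
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0
    sleep 2
  done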
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
how to use external snapshots with memory state
by Riccardo Ravaioli
Hi all,
Best wishes for 2021! :)
So I've been reading and playing around with live snapshots and still
haven't figured out how to use an external memory snapshot. My goal is to
take a disk+memory snapshot of a running VM and, if possible, save it in
external files.
As far as I understand, I can run:
$ virsh snapshot-create $VM
... and that'll take an *internal* live snapshot of a given VM, consisting
of its disks and memory state, which will be stored in the qcow2 disk(s) of
the VM. In particular, the memory state will be stored in the first disk of
the VM. I can then use the full range of snapshot commands available:
revert, list, current, delete.
Now, an external snapshot can be taken with:
$ virsh snapshot-create-as --domain $VM mysnapshot --diskspec
vda,file=/home/riccardo/disk_mysnapshot.qcow2,snapshot=external --memspec
file=/home/riccardo/mem_mysnapshot.qcow2,snapshot=external
... with as many "--diskspec" options as there are disks in the VM.
I've read the virsh manual and the libvirt API documentation, but it's not
clear to me what exactly I can do then with an external snapshot, in
particular with the file containing the memory state. In articles from 7-8
years ago people state that external memory snapshots cannot be reverted...
is it still the case today? If so, what's a typical usage for such files?
If not with libvirt, is it possible to revert to an external memory + disk
state in other ways, for instance through qemu commands?
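For what it's worth, from the bits and pieces I've read so far (happy to be corrected), the memory file written by --memspec seems to use the same format as a virsh save image. So I was wondering whether a manual revert along these lines is the intended usage (just a sketch; domain-at-snapshot.xml is a hypothetical copy of the domain XML, edited so that the disks point at their snapshot-time state):
$ virsh destroy $VM
$ virsh restore /home/riccardo/mem_mysnapshot.qcow2 --xml domain-at-snapshot.xml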
Thanks!
Riccardo
Question about migrating a VM with a host-passthrough CPU
by Guoyi Tu
Hi there,
Sorry to bother you, but I have a question about live migration of a virtual machine configured with a host-passthrough CPU; it confuses me a lot.
I have two hosts with different CPUs, and the CPU feature set of the old host is a subset of the new one's (according to virsh cpu-compare). If the VM was first started on the old host, is it safe to migrate it back and forth between the two hosts while it keeps running the whole time?
I've tested this case in my environment: the migration succeeds, and the CPU family/model/stepping and features reported by lscpu in the guest OS stay the same.
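In case it matters, the cpu-compare check I mentioned can be reproduced roughly like this (a sketch; old-host-cpu.xml is just an example file name, and xmllint is only used to pull the <cpu> element out of the capabilities output).
On the old host:
virsh capabilities | xmllint --xpath '/capabilities/host/cpu' - > old-host-cpu.xml
Then on the new host, after copying old-host-cpu.xml over:
virsh cpu-compare old-host-cpu.xml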
--
Best Regards,
Guoyi Tu
Some confusion about lsilogic controller
by xingchaochao
Hello,
I have recently been confused by the following behavior.
Libvirt is built from the master branch, and the VM runs CentOS 8.2 (kernel 4.18.0-193.el8.aarch64).
When I hot-plug a SCSI disk into a virtual machine that has no virtio-scsi controller, libvirt automatically generates an lsilogic controller for the SCSI disk.
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none' io='native'/>
<source file='/Images/xcc/tmp.img'/>
<backingStore/>
<target dev='sdt' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
linux-upcHIq:/Images/xcc # virsh list
 Id   Name   State
------------------------
 12   g1     running
linux-upcHIq:/Images/xcc # virsh attach-device g1 disk.xml
Device attached successfully
linux-upcHIq:/Images/xcc # virsh dumpxml g1 | grep scsi
<target dev='sdt' bus='scsi'/>
<alias name='scsi0-0-0'/>
<controller type='scsi' index='0' model='lsilogic'>
<alias name='scsi0'/>
But this SCSI disk cannot be seen with the lsblk command inside the virtual machine.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 600M 0 part /boot/efi
├─vda2 252:2 0 1G 0 part /boot
└─vda3 252:3 0 18.4G 0 part
├─cl-root 253:0 0 16.4G 0 lvm /
└─cl-swap 253:1 0 2G 0 lvm [SWAP]
After hot-unplugging the SCSI disk, I tried to hot-unplug the lsilogic controller as well. libvirt reports "Device detached successfully", but the lsilogic controller is in fact not removed from either the live XML or the persistent XML: in "virsh dumpxml vmname" and "virsh edit vmname" I can still see <controller type='scsi' index='0' model='lsilogic'>.
linux-upcHIq:/Images/xcc # virsh detach-device g1 disk.xml
Device detached successfully
linux-upcHIq:/Images/xcc # virsh dumpxml g1 | grep scsi
<controller type='scsi' index='0' model='lsilogic'>
<alias name='scsi0'/>
linux-upcHIq:/Images/xcc #
linux-upcHIq:/Images/xcc # cat lsi.xml
<controller type='scsi' index='0' model='lsilogic'>
<alias name='scsi0'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x05' function='0x0'/>
</controller>
linux-upcHIq:/Images/xcc # virsh detach-device g1 lsi.xml
Device detached successfully
linux-upcHIq:/Images/xcc # virsh dumpxml g1 | grep scsi
<controller type='scsi' index='0' model='lsilogic'>
<alias name='scsi0'/>
I am confused: why does libvirt choose to generate an lsilogic controller for the SCSI disk when there is no SCSI controller, instead of reporting an error and aborting the hotplug? After all, a SCSI disk behind the lsilogic controller is not visible inside the virtual machine, and the lsilogic controller then stays in the virtual machine's XML for good.
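As a workaround sketch (assuming SCSI controller hotplug is supported for this machine type, and using index 0 only because the VM has no SCSI controller yet), I could attach a virtio-scsi controller explicitly before hot-plugging the disk, so that the disk ends up on a controller the guest actually has a driver for. With a hypothetical virtio-scsi.xml containing:
<controller type='scsi' index='0' model='virtio-scsi'/>
the sequence would be:
virsh attach-device g1 virtio-scsi.xml --live
virsh attach-device g1 disk.xml --live
But I would still like to understand why the lsilogic fallback is chosen automatically instead.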
Live backups with snapshot-create-as and memspec
by David Wells
Hi all!
I've been using libvirt for some time, and until now I have treated backups of virtual machines as if they were physical computers, installing the backup client inside the guest. I am now facing the need to back up a couple of guests at the host level, so I've been trying to catch up by reading, googling, and trial and error. So far I've been able to back up a live machine with a command like the following:
> virsh snapshot-create-as --domain test --name backup --atomic
> --diskspec vda,snapshot=external --disk-only
This command creates a file test.backup, and in the meantime I can back up the original test.qcow2. But from what I saw, that disk image is in a "dirty" state, as if the machine I could restore from it had been turned off without a proper shutdown.
I know that I can later restore the machine to its original state by issuing commands like these:
> virsh blockcommit --domain test vda --active --pivot
> virsh snapshot-delete test --metadata backup
I have seen that it is possible to create the snapshot with a memspec parameter, which would make the backup of the guest as if it were in a clean state. However, I haven't found the equivalent of blockcommit for the memory file, so to speak, to be able to restore the guest to its original state.
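In case it clarifies the question, the workflow I'm imagining looks roughly like this (a sketch only; /backup/test.mem is a made-up path, and it's the last part I'm unsure about):
virsh snapshot-create-as --domain test --name backup --atomic \
  --diskspec vda,snapshot=external \
  --memspec file=/backup/test.mem,snapshot=external
(copy the original test.qcow2 and /backup/test.mem to the backup destination)
virsh blockcommit --domain test vda --active --pivot
virsh snapshot-delete test --metadata backup
Does the memory file need any cleanup step of its own here, or can it simply be archived or discarded like the disk copy?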
Thank you very much!
Best regards.
David Wells.
Upgrading the host
by Marcin Struzak
Hello--
I'm planning on upgrading the OS on my host machine by doing a manual re-installation, which will trigger an upgrade of libvirt from 1.1.3 to 6.1.0. Hardware, storage volumes & paths, etc., will be the same after the upgrade.
I was going to redefine & start everything from .xml files, starting with the network:
virsh net-define <network XML>
virsh net-autostart <network>
virsh net-start <network>
and then following with all the guests:
virsh define <domain XML>
virsh autostart <domain>
virsh start <domain>
I have the following questions:
1. Do I need to generate XML dumps manually (e.g. along the lines of the sketch below), or is it ok to use the ones from the host's /etc/libvirt?
2. Do I need to define & start storage pools & volumes explicitly, or will they be picked up from the domain definitions?
3. Anything else I should worry about or prepare ahead of time?
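In case it matters, this is the sort of thing I had in mind for generating the dumps explicitly (a sketch; it assumes bash, that the --name option is available on the old virsh, and that none of the guests are transient):
for net in $(virsh net-list --all --name); do
    virsh net-dumpxml "$net" > "net-$net.xml"
done
for pool in $(virsh pool-list --all --name); do
    virsh pool-dumpxml "$pool" > "pool-$pool.xml"
done
for dom in $(virsh list --all --name); do
    virsh dumpxml --inactive "$dom" > "dom-$dom.xml"
done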
--Marcin
Static DHCP lease never distributed
by Brooks Swinnerton
Hi there,
I have a libvirt network defined as follows:
<network>
<name>customers</name>
<bridge name='customers' macTableManager='libvirt' />
<port isolated='yes' />
<dns enable='no' />
<ip address='10.0.0.1' prefix='24'>
<dhcp>
<host mac='02:99:92:43:eb:b8' name='dhcp-test' ip='10.0.0.10' />
</dhcp>
</ip>
</network>
And that network is attached to a virtual machine:
<interface type='network'>
<mac address='02:99:92:43:eb:b8'/>
<source network='customers'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
But for some reason, when the domain starts it never gets an address. If I tcpdump the bridge created by the network, I can see the guest sending out discover packets, but dnsmasq never seems to respond:
01:26:25.039987 02:99:92:43:eb:b8 > ff:ff:ff:ff:ff:ff, ethertype IPv4
(0x0800), length 342: (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto
UDP (17), length 328)
0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from
02:99:92:43:eb:b8, length 300, xid 0xc7283f76, secs 228, Flags [none]
Client-Ethernet-Address 02:99:92:43:eb:b8
Vendor-rfc1048 Extensions
Magic Cookie 0x63825363
DHCP-Message Option 53, length 1: Discover
Client-ID Option 61, length 7: ether 02:99:92:43:eb:b8
MSZ Option 57, length 2: 576
Parameter-Request Option 55, length 7:
Subnet-Mask, Default-Gateway, Domain-Name-Server, Hostname
Domain-Name, BR, NTP
Vendor-Class Option 60, length 3: "d-i"
That appears to be the correct MAC address, yet no offer ever comes back.
Looking in /var/lib/libvirt/dnsmasq/customers.hostsfile, it contains what I would expect:
02:99:92:43:eb:b8,10.0.0.10,dhcp-test
If I add a <range> stanza to the configuration, DHCP does work, so this seems to affect only static host entries.
This is libvirtd 5.8.0.
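In case it helps narrow this down, these are the checks I plan to run next (a sketch; the paths assume the default libvirt dnsmasq locations):
ps aux | grep 'dnsmasq.*customers'
cat /var/lib/libvirt/dnsmasq/customers.conf
tcpdump -n -i customers port 67 or port 68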
Re: [ovirt-devel] Issue: Device path changed after adding disks to guest VM
by Nir Soffer
On Wed, Dec 2, 2020 at 4:57 PM Joy Li <joooy.li(a)gmail.com> wrote:
>
> Thanks a lot Nir! Good to know that oVirt cannot guarantee the disk names, so I don't need to spend more time trying to enable such a feature.
>
> I can always reproduce the problem via my application, basically, the procedure is as following:
>
> 1. create a VM
Which guest OS? Can you share the guest disk image or the ISO image used
to install it?
> 2. add disks to the VM (e.g.: disk names: disk1, disk3)
> 3. check the disk mappings via `virsh domblklist `
Please share the libvirt domain XML (virsh dumpxml vm-name).
> 4. add another disk (say disk2, with a name that sorts alphabetically before some existing disks)
Did you add the disk while the VM was running (hotplug)?
> 5. shutdown the VM via hypervisor and start it again (reboot won't work)
What do you mean by "reboot won't work"?
> 6. `virsh domblklist` again, then you might see the problem I mentioned before
Is the mapping different compared with the state before the reboot?
> There are no virtio devices inside /dev/disk/by-id/xxx of my guest VM.
Maybe you don't have systemd-udev installed?
The links in /dev/disk/... are created by udev during startup, and
when detecting a new disk.
> And I just noticed that the disk mapping information given by the hypervisor (from the VM configuration or virsh) is different from the reality inside the VM. The disk names inside the VM actually did not change.
>
> So now my issue is: given a disk name (/dev/vdb) in a VM, how can I get its wwid? Previously I got it from the hypervisor, but now the hypervisor's information is not reliable, and since the disk is unformatted, I cannot use a UUID.
You can use:
# udevadm info /dev/vda
P: /devices/pci0000:00/0000:00:02.5/0000:06:00.0/virtio4/block/vda
N: vda
L: 0
S: disk/by-path/pci-0000:06:00.0
S: disk/by-id/virtio-b97e68b2-87ea-45ca-9
S: disk/by-path/virtio-pci-0000:06:00.0
E: DEVPATH=/devices/pci0000:00/0000:00:02.5/0000:06:00.0/virtio4/block/vda
E: DEVNAME=/dev/vda
E: DEVTYPE=disk
E: MAJOR=252
E: MINOR=0
E: SUBSYSTEM=block
E: USEC_INITIALIZED=10518442
E: ID_SERIAL=b97e68b2-87ea-45ca-9
E: ID_PATH=pci-0000:06:00.0
E: ID_PATH_TAG=pci-0000_06_00_0
E: DEVLINKS=/dev/disk/by-path/pci-0000:06:00.0
/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9
/dev/disk/by-path/virtio-pci-0000:06:00.0
E: TAGS=:systemd:
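If lsblk is available in the guest, you can also list the serial for all disks at
once (assuming a reasonably recent util-linux), or query a single device:
# lsblk -o NAME,SERIAL
# udevadm info --query=property --name=/dev/vdb | grep ID_SERIAL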
I tried to reproduce your issue with a 4.4.5 development build.
Starting a VM with 2 direct LUN disks:
# virsh -r dumpxml disk-mapping
...
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none'
error_policy='stop' io='native' discard='unmap'/>
<source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5'
index='3'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore type='block' index='5'>
<format type='qcow2'/>
<source
dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/ad891316-b80a-4ab5-895b-400108bd0ca1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
</backingStore>
<target dev='sda' bus='scsi'/>
<serial>40018b33-2b11-4d10-82e4-604a5b135fb2</serial>
<boot order='1'/>
<alias name='ua-40018b33-2b11-4d10-82e4-604a5b135fb2'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/3600140594af345ed76d42058f2b1a454' index='2'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>b97e68b2-87ea-45ca-94fb-277d5b30baa2</serial>
<alias name='ua-b97e68b2-87ea-45ca-94fb-277d5b30baa2'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00'
function='0x0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5' index='1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<serial>d9a29187-f492-4a0d-aea2-7d5216c957d7</serial>
<alias name='ua-d9a29187-f492-4a0d-aea2-7d5216c957d7'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00'
function='0x0'/>
</disk>
...
# virsh -r domblklist disk-mapping
Target Source
---------------------------------------------------------------------------------------------------------------------------------------------------------------
sdc -
sda /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
In the guest:
# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
NOTE: "d9a29187-f492-4a0d-a" are the first characters of the disk id:
"d9a29187-f492-4a0d-aea2-7d5216c957d7"
seen in oVirt:
https://my-engine/ovirt-engine/webadmin/?locale=en_US#disks-general;id=d9...
Adding another disk that sorts into the middle (while the VM is running):
# virsh -r dumpxml disk-mapping
...
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none'
error_policy='stop' io='native' discard='unmap'/>
<source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5'
index='3'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore type='block' index='5'>
<format type='qcow2'/>
<source
dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/ad891316-b80a-4ab5-895b-400108bd0ca1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
</backingStore>
<target dev='sda' bus='scsi'/>
<serial>40018b33-2b11-4d10-82e4-604a5b135fb2</serial>
<boot order='1'/>
<alias name='ua-40018b33-2b11-4d10-82e4-604a5b135fb2'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/3600140594af345ed76d42058f2b1a454' index='2'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>b97e68b2-87ea-45ca-94fb-277d5b30baa2</serial>
<alias name='ua-b97e68b2-87ea-45ca-94fb-277d5b30baa2'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00'
function='0x0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5' index='1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<serial>d9a29187-f492-4a0d-aea2-7d5216c957d7</serial>
<alias name='ua-d9a29187-f492-4a0d-aea2-7d5216c957d7'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00'
function='0x0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/36001405b4d0c0b7544d47438b21296ef' index='7'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vdc' bus='virtio'/>
<serial>e801c2e4-dc2e-4c53-b17b-bf6de99f16ed</serial>
<alias name='ua-e801c2e4-dc2e-4c53-b17b-bf6de99f16ed'/>
<address type='pci' domain='0x0000' bus='0x09' slot='0x00'
function='0x0'/>
</disk>
...
# virsh -r domblklist disk-mapping
Target Source
---------------------------------------------------------------------------------------------------------------------------------------------------------------
sdc -
sda /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
vdc /dev/mapper/36001405b4d0c0b7544d47438b21296ef
In the guest:
# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
lrwxrwxrwx. 1 root root 9 Jan 6 09:51
/dev/disk/by-id/virtio-e801c2e4-dc2e-4c53-b -> ../../vdc
Shutting down the VM and starting it again:
# virsh -r dumpxml disk-mapping
...
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none'
error_policy='stop' io='native' discard='unmap'/>
<source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5'
index='4'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore type='block' index='6'>
<format type='qcow2'/>
<source
dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/ad891316-b80a-4ab5-895b-400108bd0ca1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
</backingStore>
<target dev='sda' bus='scsi'/>
<serial>40018b33-2b11-4d10-82e4-604a5b135fb2</serial>
<boot order='1'/>
<alias name='ua-40018b33-2b11-4d10-82e4-604a5b135fb2'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/3600140594af345ed76d42058f2b1a454' index='3'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>b97e68b2-87ea-45ca-94fb-277d5b30baa2</serial>
<alias name='ua-b97e68b2-87ea-45ca-94fb-277d5b30baa2'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00'
function='0x0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/36001405b4d0c0b7544d47438b21296ef' index='2'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<serial>e801c2e4-dc2e-4c53-b17b-bf6de99f16ed</serial>
<alias name='ua-e801c2e4-dc2e-4c53-b17b-bf6de99f16ed'/>
<address type='pci' domain='0x0000' bus='0x09' slot='0x00'
function='0x0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5' index='1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vdc' bus='virtio'/>
<serial>d9a29187-f492-4a0d-aea2-7d5216c957d7</serial>
<alias name='ua-d9a29187-f492-4a0d-aea2-7d5216c957d7'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00'
function='0x0'/>
</disk>
...
# virsh -r domblklist disk-mapping
Target Source
---------------------------------------------------------------------------------------------------------------------------------------------------------------
sdc -
sda /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb /dev/mapper/36001405b4d0c0b7544d47438b21296ef
vdc /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
In the guest:
# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan 6 09:55
/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan 6 09:55
/dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
lrwxrwxrwx. 1 root root 9 Jan 6 09:55
/dev/disk/by-id/virtio-e801c2e4-dc2e-4c53-b -> ../../vdc
Comparing to the state before the reboot:
# virsh -r domblklist disk-mapping
Target Source
---------------------------------------------------------------------------------------------------------------------------------------------------------------
sdc -
sda /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
vdc /dev/mapper/36001405b4d0c0b7544d47438b21296ef
# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
lrwxrwxrwx. 1 root root 9 Jan 6 09:51
/dev/disk/by-id/virtio-e801c2e4-dc2e-4c53-b -> ../../vdc
In the guest, the disks are mapped to the same device names.
It looks like libvirt's domblklist output is not correct - vdb and vdc are switched.
Peter, is this expected?
Nir
>
> Joy
>
> On Wed, Dec 2, 2020 at 1:28 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>>
>> On Wed, Dec 2, 2020 at 10:27 AM Joy Li <joooy.li(a)gmail.com> wrote:
>> >
>> > Hi All,
>> >
>> > I'm facing the problem that after adding disks to a guest VM, the device target paths changed (my oVirt version is 4.3). For example:
>> >
>> > Before adding a disk:
>> >
>> > virsh # domblklist <vmname>
>> > Target Source
>> > ---------------------------------------------------------
>> > hdc -
>> > vda /dev/mapper/3600a09803830386546244a546d494f53
>> > vdb /dev/mapper/3600a09803830386546244a546d494f54
>> > vdc /dev/mapper/3600a09803830386546244a546d494f55
>> > vdd /dev/mapper/3600a09803830386546244a546d494f56
>> > vde /dev/mapper/3600a09803830386546244a546d494f57
>> > vdf /dev/mapper/3600a09803830386546244a546d494f58
>> >
>> > After adding a disk, and then shutdown and start the VM:
>> >
>> > virsh # domblklist <vmname>
>> > Target Source
>> > ---------------------------------------------------------
>> > hdc -
>> > vda /dev/mapper/3600a09803830386546244a546d494f53
>> > vdb /dev/mapper/3600a09803830386546244a546d494f54
>> > vdc /dev/mapper/3600a09803830386546244a546d494f6c
>> > vdd /dev/mapper/3600a09803830386546244a546d494f55
>> > vde /dev/mapper/3600a09803830386546244a546d494f56
>> > vdf /dev/mapper/3600a09803830386546244a546d494f57
>> > vdg /dev/mapper/3600a09803830386546244a546d494f58
>> >
>> > The devices' multipath doesn't map to the same target path as before, so in my VM the /dev/vdc doesn't point to the old /dev/mapper/3600a09803830386546244a546d494f55 anymore.
>> >
>> > Does anybody know how I can keep the device path mapping fixed, so that it does not change after adding or removing disks?
>>
>> Device nodes are not stable, and oVirt cannot guarantee that you will
>> get the same
>> node in the guest in all runs.
>>
>> You should use /dev/disk/by-id/xxx links to locate devices, and blkid to create
>> fstab mounts that do not depend on node names.
>>
>> Regardless, oVirt tries to keep devices as stable as possible. Do you know
>> how to reproduce this issue reliably?
>>
>> Nir
>>