
Hi,

I have a fancy new Ceph cluster and have configured a storage pool
using it in libvirt. I have a bunch of disk images in a "filesystem
directory" storage pool. I want to migrate those disk images into the
RBD pool.

What is the process for doing this?

I'm using libvirt from Debian Bookworm, version 9.0.0, and QEMU
version 7.2.

I have an XML file to define my new volume:

  <volume type='file'>
    <name>volume</name>
    <capacity unit='bytes'>8589934592</capacity>
    <allocation unit='bytes'>6747271168</allocation>
    <physical unit='bytes'>6747258880</physical>
  </volume>

and I have tried:

1. virsh vol-create-from

  # virsh vol-create-from ceph volume.xml --inputpool default /var/lib/libvirt/images/volume.qcow2
  error: Failed to create vol from volume.xml
  error: failed to open the RBD image 'volume.qcow2': No such file or directory

No idea why it's trying to load "volume.qcow2" from within RBD when
I've told it that it's in the default filesystem directory pool and
provided a full path.

2. virsh blockcopy

No idea how to actually achieve what I want here as it doesn't seem
capable of copying to a non-file destination.

3. virsh vol-download and virsh vol-upload

  # mkfifo tmp
  # virsh vol-create --pool ceph volume.xml
  Vol volume created from volume.xml

  root@multimedia:~# virsh vol-list ceph
   Name            Path
  ------------------------------------------
   torrentbucket   VM_images/volume

  root@multimedia:~# virsh vol-download --pool default volume.qcow2 tmp &
  [1] 2483977
  root@multimedia:~# virsh vol-upload --pool ceph volume tmp
  error: cannot upload to volume volume
  error: this function is not supported by the connection driver: storage pool doesn't support volume upload

I cannot believe that any sane virtual machine management solution
does not have a way to achieve this, so clearly I'm missing something
painfully obvious here. I've done a lot of searching for solutions and
everything I've found either doesn't work for my specific case or is
specifically for copying volumes within a pool or between files on a
disk.

Could someone please provide me with a working example of how to
achieve this?

Thanks,

--
Julian Calaby

Email: julian.calaby@gmail.com
Profile: http://www.google.com/profiles/julian.calaby/

On Sun, Feb 04, 2024 at 20:31:33 +1100, Julian Calaby wrote:
> Hi,
>
> I have a fancy new Ceph cluster and have configured a storage pool
> using it in libvirt. I have a bunch of disk images in a "filesystem
> directory" storage pool. I want to migrate those disk images into
> the RBD pool.
>
> What is the process for doing this?
>
> [...]
>
> 2. virsh blockcopy
This is the proper way to do it for any running VM: it's the only
approach that preserves data consistency against writes from the VM,
and it also updates the domain XML.
> No idea how to actually achieve what I want here as it doesn't seem
> capable of copying to a non-file destination.
'virsh blockcopy' does in fact support non-file destinations: you
pass an XML description, identical to the <disk> element, via the
--xml argument. The XML for ceph/rbd should look like:

  <disk type='network'>
    <driver name="qemu" type="raw"/>
    <source protocol="rbd" name="image_name2">
      <host name="hostname" port="7000"/>
      <auth username='myuser'>
        <secret type='ceph' usage='mypassid'/>
      </auth>
    </source>
  </disk>

To run the blockcopy operation and then switch to the new image once
it finishes, additional arguments will be needed:

  virsh blockcopy --domain $VMNAME --path $SOURCEDISKTARGET \
      --xml /path/to/xml --transient-job --verbose --pivot

I don't recall now whether you might need to pre-create the
properly-sized rbd volume in ceph or whether qemu can do that for
you.

For a non-running VM I suggest you either start it, or start it with
paused CPUs (if you don't want to run the guest OS), for the duration
of the copy.
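As a concrete end-to-end sketch of the above (hypothetical values
throughout: the domain name "myvm", the disk target "vda", the monitor
host, the pool/image name, and the secret usage tag are all
placeholders, and it assumes a ceph auth secret has already been
registered with 'virsh secret-define' and 'virsh secret-set-value'):

  $ cat /tmp/rbd-disk.xml
  <disk type='network'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='VM_images/volume'>
      <host name='ceph-mon1' port='6789'/>
      <auth username='libvirt'>
        <secret type='ceph' usage='ceph-rbd-secret'/>
      </auth>
    </source>
  </disk>

  # Mirror the guest's 'vda' onto the RBD image and, once the copy
  # converges, pivot the domain to the new disk.
  $ virsh blockcopy --domain myvm --path vda \
        --xml /tmp/rbd-disk.xml --transient-job --verbose --pivot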
> 3. virsh vol-download and virsh vol-upload
>
>   # mkfifo tmp
>   # virsh vol-create --pool ceph volume.xml
>   Vol volume created from volume.xml
>
>   root@multimedia:~# virsh vol-list ceph
>    Name            Path
>   ------------------------------------------
>    torrentbucket   VM_images/volume
>
>   root@multimedia:~# virsh vol-download --pool default volume.qcow2 tmp &
>   [1] 2483977
>   root@multimedia:~# virsh vol-upload --pool ceph volume tmp
>   error: cannot upload to volume volume
>   error: this function is not supported by the connection driver: storage pool doesn't support volume upload
This won't work: nobody has yet volunteered to implement the data
operations for the native RBD/ceph libvirt pool type. You'd need to
make the rbd volumes available in the host OS as files to be able to
use the storage driver APIs.
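For reference, one way to expose an RBD image as a host-local block
device is the kernel rbd client (a sketch only; the pool/image name is
illustrative, and it assumes the 'rbd' CLI and kernel module are
present and configured with appropriate credentials):

  # Map the image; rbd prints the device node it created.
  $ rbd map VM_images/volume
  /dev/rbd0

  # /dev/rbd0 can now be read and written with ordinary file tools.

  # Unmap when done.
  $ rbd unmap /dev/rbd0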

Hi Peter,

On Mon, Feb 5, 2024 at 6:45 PM Peter Krempa <pkrempa@redhat.com> wrote:
> On Sun, Feb 04, 2024 at 20:31:33 +1100, Julian Calaby wrote:
> > Hi,
> >
> > I have a fancy new Ceph cluster and have configured a storage pool
> > using it in libvirt. I have a bunch of disk images in a
> > "filesystem directory" storage pool. I want to migrate those disk
> > images into the RBD pool.
> >
> > What is the process for doing this?
> >
> > [...]
> >
> > 2. virsh blockcopy
>
> This is the proper way to do it for any running VM: it's the only
> approach that preserves data consistency against writes from the VM,
> and it also updates the domain XML.
>
> > No idea how to actually achieve what I want here as it doesn't
> > seem capable of copying to a non-file destination.
>
> 'virsh blockcopy' does in fact support non-file destinations: you
> pass an XML description, identical to the <disk> element, via the
> --xml argument. The XML for ceph/rbd should look like:
>
>   <disk type='network'>
>     <driver name="qemu" type="raw"/>
>     <source protocol="rbd" name="image_name2">
>       <host name="hostname" port="7000"/>
>       <auth username='myuser'>
>         <secret type='ceph' usage='mypassid'/>
>       </auth>
>     </source>
>   </disk>
>
> To run the blockcopy operation and then switch to the new image once
> it finishes, additional arguments will be needed:
>
>   virsh blockcopy --domain $VMNAME --path $SOURCEDISKTARGET \
>       --xml /path/to/xml --transient-job --verbose --pivot
And that disk XML is the secret sauce I was looking for. This worked perfectly and was surprisingly quick too.
> I don't recall now whether you might need to pre-create the
> properly-sized rbd volume in ceph or whether qemu can do that for
> you.
It does create the image automatically for you.

Thanks so much!

--
Julian Calaby

Email: julian.calaby@gmail.com
Profile: http://www.google.com/profiles/julian.calaby/