Hey,
I know I said this already ... but it's probably worth fleshing out the
idea a bit "on paper" to see how it works out.
The idea is that perhaps we should support the concept of a "Virtual
Storage Pool" in the same way we now support Virtual Networks.
A virtual storage pool would basically be an area of storage on a
physical machine from which virtual disks can be allocated for guests.
It would be backed by e.g. an LVM Volume Group.
On each host machine, there would be a default storage pool consisting
of a large (e.g. 10G) sparse file, attached to a loopback device and
containing an LVM VG. Users can add other PVs to that VG in order to
allocate more storage to the pool.
So, what would the XML format be like for the simple case of a loopback
sparse file?
<storage_pool>
  <name>TestStorage</name>
  <uuid>1dd053b2-a068-4f6a-aaae-d7d88ecb504d</uuid>
  <pool type='lvm'>
    <physical_volume type='file'>
      <file size='20971520'>/var/lib/xen/test-storage.img</file>
    </physical_volume>
  </pool>
</storage_pool>
When you first create a pool, libvirtd would create the file, associate
a loopback device with it, initialise it as a PV using pvcreate, and
create a volume group named after the pool using vgcreate.
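Spelled out as a very rough sketch (this isn't meant to be real libvirtd
code; a real implementation would fork/exec the tools and report errors
properly, and create_loopback_pool() is just a name made up for
illustration), the create path might look something like:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

/* Rough sketch only: create the sparse backing file, attach it to a
 * free loop device, then pvcreate/vgcreate as described above. */
static int
create_loopback_pool(const char *img, off_t bytes, const char *vg)
{
    char loopdev[64], cmd[512];
    FILE *p;
    int fd;

    /* create the sparse file */
    if ((fd = open(img, O_WRONLY | O_CREAT | O_EXCL, 0600)) < 0)
        return -1;
    if (ftruncate(fd, bytes) < 0) {
        close(fd);
        return -1;
    }
    close(fd);

    /* find a free loopback device */
    if (!(p = popen("losetup -f", "r")))
        return -1;
    if (!fgets(loopdev, sizeof loopdev, p)) {
        pclose(p);
        return -1;
    }
    pclose(p);
    loopdev[strcspn(loopdev, "\n")] = '\0';

    /* attach the file, initialise it as a PV, create the VG */
    snprintf(cmd, sizeof cmd, "losetup %s %s", loopdev, img);
    if (system(cmd) != 0)
        return -1;
    snprintf(cmd, sizeof cmd, "pvcreate %s", loopdev);
    if (system(cmd) != 0)
        return -1;
    snprintf(cmd, sizeof cmd, "vgcreate %s %s", vg, loopdev);
    return system(cmd) == 0 ? 0 : -1;
}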
On subsequent boots, libvirtd would find that the file exists,
associate a loopback device with the file, run vgscan and check that the
VG exists[1].
(We could allow specifying an alternative VG name and having multiple
physical volumes. Also note that you'd only need to list physical
volumes which actually need to be "activated" ... i.e. if the VG was
just on /dev/hda3 or something, it wouldn't need to be listed.)
Of course, you then need to be able to carve out a chunk for a guest,
so perhaps:
int virStorageAllocateVolume(virStoragePtr storage,
                             const char *name,
                             unsigned long size);

int virStorageDeallocateVolume(virStoragePtr storage,
                               const char *name);
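Just to give a feel for it, here's how I imagine a client might use
those; note that virStorageLookupByName() is pure invention on my part,
modelled on the existing virNetworkLookupByName(), and none of the
virStorage* calls exist yet:

#include <libvirt/libvirt.h>

/* Hypothetical client code against the proposed API;
 * virStorageLookupByName() is an assumed lookup call that doesn't
 * exist (yet) either. */
int
allocate_test_volume(void)
{
    virConnectPtr conn;
    virStoragePtr storage;

    if (!(conn = virConnectOpen(NULL)))
        return -1;

    if (!(storage = virStorageLookupByName(conn, "TestStorage"))) {
        virConnectClose(conn);
        return -1;
    }

    /* same size value as in the volume XML below */
    if (virStorageAllocateVolume(storage, "TestVolume", 4194304) < 0) {
        virConnectClose(conn);
        return -1;
    }

    virConnectClose(conn);
    return 0;
}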
Volumes could be allocated using the XML format too:
<pool type='lvm'>
  <volume>
    <name>TestVolume</name>
    <size>4194304</size>
  </volume>
</pool>
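For the LVM case, I'd guess each allocation boils down to running
lvcreate in the pool's VG (and deallocation to lvremove); a minimal
sketch, assuming the size is in KB and with made-up helper names:

#include <stdio.h>
#include <stdlib.h>

/* Rough sketch: allocate/deallocate an LVM-backed volume by driving
 * the LVM tools; helper names and the KB unit are assumptions. */
static int
lvm_allocate_volume(const char *vg, const char *name, unsigned long size_kb)
{
    char cmd[512];

    snprintf(cmd, sizeof cmd, "lvcreate -n %s -L %luK %s", name, size_kb, vg);
    return system(cmd) == 0 ? 0 : -1;
}

static int
lvm_deallocate_volume(const char *vg, const char *name)
{
    char cmd[512];

    snprintf(cmd, sizeof cmd, "lvremove -f %s/%s", vg, name);
    return system(cmd) == 0 ? 0 : -1;
}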
And once allocated, you could create a guest with e.g.
<disk type='volume'>
  <source volume='TestVolume' />
  <target dev='hda' />
</disk>
which would cause libvirtd to first look up the device path for the
volume (e.g. /dev/TestStorage/TestVolume in the LVM case) and use that
when starting the guest.
Of course, we'd need to think about other types of physical volumes.
The simple one is just a plain block device:
<physical_volume type='device'>
  <file>/dev/hda3</file>
</physical_volume>
An iSCSI target:
<physical_volume type='iscsi'>
  <host>storage.devel.redhat.com</host>
  <port>3260</port>
  <target>iqn.1994-06.com.redhat.devel:markmc.test1</target>
  <lun>0</lun>
</physical_volume>
Or a file on an NFS mount:
<physical_volume type='nfs'>
  <remote>storage.devel.redhat.com:/mnt/storage/test</remote>
  <file>test-storage.img</file>
</physical_volume>
Of course, not all guests need to fit into this model; they can
continue using a file or device directly rather than a storage pool.
Cheers,
Mark.
[1] - One thing to think about is that the VG contains the canonical
list of physical volumes and allocated logical volumes ... so e.g.
libvirt could get confused if you renamed a volume behind its back.