Hello,
I've spent a few days googling and reading documentation, but I'm still looking
for clarification and advice on setting up KVM/libvirt with shared
storage.
I have 2 (for now) Ubuntu Karmic systems with KVM/virsh/virt-manager set
up and running.
I have a storage server that can do NFS/iSCSI/Samba/etc.
I am trying to figure out the best way to set things up so that I can run
several (eventually) production Linux guests (mostly Debian stable) on the
two KVM host systems and be able to migrate them when maintenance is
required on their hosts.
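For what it's worth, the kind of operation I'm hoping to end up with (if I
understand the tools right) is roughly:

virsh migrate --live guest1 qemu+ssh://kvmhost2/system

where guest1 and kvmhost2 are just placeholder names of mine.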
Performance is of medium importance; stability/availability is a pretty high
priority.
A common opinion seems to be that using LVM logical volumes (LVs) to hold
disk images gives possibly the best I/O performance, followed by raw files,
then qcow2.
Would you agree or disagree with this? What evidence can you provide?
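For context, the three layouts I'm comparing would be created more or less
like this (the volume group, paths and sizes are just examples of mine):

lvcreate -L 20G -n guest1-disk vg_guests
qemu-img create -f raw /var/lib/libvirt/images/guest1.img 20G
qemu-img create -f qcow2 /var/lib/libvirt/images/guest1.qcow2 20G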
I am also rather confused about shared storage in general, and which options
can be combined with it.
I have successfully made an iSCSI device available to
libvirt/virsh/virt-manager via an XML file + pool-define + pool-start.
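Concretely, what I ran was roughly this (the file name is mine; the XML
itself is shown further below):

virsh pool-define iscsi-pool.xml
virsh pool-start virtimages
virsh pool-list --all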
However the documentation states that while you can create pools through
libvirt, volumes have to be pre-allocated:
http://libvirt.org/storage.html
"Volumes must be pre-allocated on the iSCSI server, and cannot be created
via the libvirt APIs."
I'm very unclear on what this means in general, and specifically on how you
pre-allocate the volumes.
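My current guess is that "pre-allocated" means the LUNs have to be carved
out on the storage server itself, outside of libvirt, e.g. something like
this if the target happens to be Linux tgt (purely a guess on my part, and
the LV name and target id are placeholders):

# on the storage server: export an existing LV as LUN 1 of target id 1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
  -b /dev/storagevg/guest1-disk
# on the KVM host: make libvirt rescan the pool so the new LUN shows up
virsh pool-refresh virtimages

Is that the idea, or am I off track?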
When I make an iSCSI pool available via something like this:
<pool type="iscsi">
<name>virtimages</name>
<source>
<host name="iscsi.example.com"/>
<device path="demo-target"/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
In virt-manager I can see it as a pool, but when I try to select it as the
place to create an image, it seems to use the pool itself AS A volume (I think).
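What I was expecting to be able to do is point a guest's disk at one
specific LUN/volume from that pool, i.e. something like this in the domain
XML (the by-path name below is only an illustration, I haven't actually got
this far):

<disk type='block' device='disk'>
  <source dev='/dev/disk/by-path/ip-iscsi.example.com:3260-iscsi-demo-target-lun-1'/>
  <target dev='vda' bus='virtio'/>
</disk>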
That brings me to my next question/misunderstanding...
If you are using shared storage (via NFS or iSCSI), does that also mean you
MUST use a file-based image rather than an LVM LV? NFS exports directories,
not devices. You can export an unformatted "raw" LV via iSCSI, but it's not
seen on the far side as an LV; instead it appears as a SCSI disk which you
need to partition. You can make a PV > VG > LV out of that and then hand the
new LV to libvirt/KVM, but then you're stacking up a heck of a lot of
LVM/LVs/partition tables, which seems pretty dubious (see the sketch just
below for what I mean). I'm also not sure that the stack would be
visible/available to the second system using that iSCSI device. (I'm pretty
new to using iSCSI as well.)
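To make the stacking idea concrete, what I was imagining is that once the
iSCSI disk shows up on a host (say as /dev/sdc, just a guess) you would
define a "logical" pool on top of it, something like:

<pool type="logical">
  <name>guests_vg</name>
  <source>
    <device path="/dev/sdc"/>
  </source>
  <target>
    <path>/dev/guests_vg</path>
  </target>
</pool>

and then carve out LVs with something like
"virsh vol-create-as guests_vg guest1-root 10G". But whether both hosts can
safely activate that VG at the same time is exactly the part I doubt.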
Red Hat provides pretty good documentation on doing shared storage / live
migration with NFS:
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Virtua...
But unfortunately the section on Shared Storage with iSCSI is a bit
lacking:
"9.1. Using iSCSI for storing guests
This section covers using iSCSI-based devices to store virtualized
guests." and thats it.
For maybe 6-10 guests, should I simply be using NFS? Is its performance
really that much worse than iSCSI for this task?
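If NFS is the sane answer, I assume the setup would just be a "netfs" pool
that both hosts define identically, something like this (hostname and paths
are placeholders):

<pool type="netfs">
  <name>virtimages_nfs</name>
  <source>
    <host name="nfs.example.com"/>
    <dir path="/exports/virtimages"/>
    <format type="nfs"/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/virtimages_nfs</path>
  </target>
</pool>

with the guests stored as files in that directory.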
Since KVM/libvirt is getting used quite often in production now, yet some
simple things are not making any sense to me, I'm guessing that I have some
really basic misunderstandings of how to do shared storage / pools / volumes
for migration.
Would one of the kind readers of this list be willing to correct me?
Please CC replies to my email address as well, as I am not yet a regular
subscriber.
Thanks again!
David.