Hello,
I am trying to find out the best practice for a specific scenario.
First of all, I would like to know the proper way to set up multipath: should the host or the guest take care of it? Right now I have a setup where one multipath device lets my host boot from the FC SAN. I have another multipathed LUN on the host, essentially a dm device, which I attached to a guest through virtio. I added the devices through virsh with a pool type of "mpath" and a path under /dev/mapper/. So here is my first question: with such a setup, multipath and eventual fail-over are taken care of by the host OS, right? The guest will not notice it, I suppose.

But what about migration? What if I decide to migrate the guest to another host, how would that work out? With a shared directory it is easy, you just keep the images in it, but what about a setup like this? I can always add the aforementioned LUN, where the guest resides, to another Storage Group on the SAN so that the new host has access, and I can make sure that the mpath device name is persistent across all hosts, but is that the right approach?
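For reference, the disk is handed to the guest roughly like this (the name under /dev/mapper/ is just an example; the point is that the source is the host's dm-multipath device):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- host-side dm-multipath device; path fail-over happens below this
       layer, so the guest only ever sees a single virtio disk -->
  <source dev='/dev/mapper/mpathb'/>
  <target dev='vda' bus='virtio'/>
</disk>
```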
Here is another question that is bugging me. I have FC HBAs on all hosts and I would like to make an HBA visible to my guest. I stumbled upon a bug documented in Red Hat's Bugzilla where NPIV creation was attempted on pci_0000_blah_blah instead of on the scsi_hostX device, but I fixed that easily in the XML, and although virsh doesn't seem to think I completed the nodedev-create NPIV_for_my_FC.xml, the device shows up as a child in nodedev-list and, what is more important, the WWNs are seen on the SAN switch. So this seems to be working OK; however, I don't know how to attach this shiny new device that I created to a guest. Could someone give me a hint? What is the proper way to attach this new device to a guest OS, and to do that persistently? Do I do that with virsh's attach-device, and if so, what is the proper XML format? Should I dump the XML of the newly created NPIV nodedev and try to attach that?

And again about multipathing: what if I decided to create NPIV ports on my two FC cards in every host, do the zoning, and attach those newly created NPIV nodes to the guests? Would that produce the same effect that such a multipath setup does on the host? It doesn't really matter whether I would use it for the root drive or for shared GFS2 storage between guests; I just want to know whether this is possible and whether it is the right thing to do, or should I stick to the setup from my previous paragraph?

And a last question: what about migration with such NPIV FC devices? I can move a guest around, but will that move my NPIV FC with its WWNs so that I can continue using my zoning? (I think I just realized that this might be impossible, because I'm migrating the guest, not the entire libvirt setup, but maybe I'm wrong and I'm simply not aware of the proper method to do it correctly.)
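For completeness, the nodedev XML I ended up feeding to nodedev-create looks roughly like this (the parent and the WWNs below are examples, not my real ones; the fix was simply to use the scsi_hostX parent instead of the PCI device):

```xml
<device>
  <!-- parent must be the physical HBA's scsi_hostX, not pci_0000_... -->
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwnn>2001001b32a9da4e</wwnn>  <!-- example WWNN -->
      <wwpn>2101001b32a9da4e</wwpn>  <!-- example WWPN -->
    </capability>
  </capability>
</device>
```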
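As for the attach question, my current guess, and this is purely an assumption on my part, is that the vHBA itself is not attached as a nodedev at all; instead the LUNs zoned to its WWPN show up as block devices on the host (e.g. under /dev/disk/by-path/) and get handed to the guest as ordinary disks, something like:

```xml
<disk type='block' device='disk'>
  <!-- assumption: a LUN that appeared on the host via the new vHBA;
       the by-path name here is a made-up example -->
  <source dev='/dev/disk/by-path/pci-0000:04:00.1-fc-0x2101001b32a9da4e-lun-0'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Please correct me if that is not how it is meant to be done.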
These questions are probably easy to answer, but please bear with me; I've only been playing with libvirt in an enterprise setup for the last couple of days, and I would really like to get it right and kick some VMware ass.
By the way, all my hosts are Fedora 14 with kernel 2.6.37 (a rebuild of the Fedora 15 one from Koji, because there was a small glitch with QLogic's driver in the stock 2.6.35). Most of my guests will be SL6 and CentOS 5/6 (when it arrives), plus a couple of Windows XP guests (but I'm not really concerned with those).
Thanks in advance for the support.
P.S. Please keep me in CC, as I'm not on the list. Thank you.