On 7/20/11, David Ehle <ehle(a)phys.iit.edu> wrote:
Would you be willing/able to elaborate more on this idea? I've been
looking into it a bit more, and while you're right that it sounds simpler on
paper, it looks like DRBD is packaged for Ubuntu, and there is pretty good
cookbook documentation for how to do HA NFS on top of DRBD.
I'm a real novice on the multipath topic so I'm having trouble comparing
apples to apples to see what the pros and cons of the two scenarios would
be.
I'm a novice as well (if my nickname didn't make that obvious yet :D)
It's all on paper since I haven't had the time to push for it
internally, nor was the original client eager to put up the budget for
the necessary hardware.
My basic idea was this:
Physically
=======
[VM Host]
NIC1 -> Switch 1
NIC2 -> Switch 2
[Storage Node] x 2
NIC1 -> Switch 1
NIC2 -> Switch 2
Export HDD x 4 (RAID 10, or could make do with RAID 1)
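To make that concrete (only a sketch: I'm assuming the storage nodes export
each disk as an iSCSI LUN with tgt, though AoE or NBD would work just as
well, and the IQN and device names below are made up), the export on
Storage1 might look roughly like:

  # export HDD1 as an iSCSI LUN so the VM host can reach it over both NICs
  tgtadm --lld iscsi --op new --mode target --tid 1 \
         -T iqn.2011-07.example:storage1.hdd1
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL  # lock this down for real use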
So for VM1:
mp1: multipath NIC1 -> Storage1:HDD1, NIC2 -> Storage1:HDD1
mp2: multipath NIC1 -> Storage2:HDD1, NIC2 -> Storage2:HDD1
then
md mirror using mp1 and mp2
This way, if one switch fails, multipath should keep the array
working. If one node fails, the md mirror should keep it alive.
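On the host side (again just a sketch: I'm assuming iSCSI plus multipathd,
with mp1/mp2 set up as friendly names via aliases in /etc/multipath.conf,
and the addresses below are made up), it would go roughly:

  # log in to Storage1's target through both switches
  iscsiadm -m discovery -t sendtargets -p 10.0.1.10   # Storage1 via switch 1
  iscsiadm -m discovery -t sendtargets -p 10.0.2.10   # Storage1 via switch 2
  iscsiadm -m node --login
  # same again for Storage2, then check that each LUN shows two paths
  multipath -ll
  # mirror the two multipathed LUNs, one per storage node
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/mapper/mp1 /dev/mapper/mp2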
One question I haven't figured out an answer to is whether it would be
better to build the array on the host and simply show the guest a
single drive (less overhead? more flexibility, since I could change the
setup on the host as long as the guest definition points to the
correct block device), or to do it within the guest itself (faster I/O,
since the guest kernel is aware of more drives?).
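If I built it on the host, the guest definition would just point at the md
device as a single disk; with libvirt that could be something as simple as
(domain and target names hypothetical):

  # hand the host-side mirror to guest vm1 as one virtio disk
  virsh attach-disk vm1 /dev/md0 vda --persistent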