[libvirt-users] RAID1 over IP?

I asked about this in November last year but got no response. Anyone have any ideas now? Does anyone here have experience using KVM/libvirt with RAID1 over IP/DRBD or another HA solution? I'm trying to figure out the hardware configuration I would need to survive a failure or planned shutdown of any one unit in a virtualization cluster. KVM/libvirt makes moving running VMs from one host to another a no-brainer, but I'm trying to figure out the right way to take the storage backend down for maintenance without disrupting the VMs. Right now I'm thinking something like KVM + libvirt + heartbeat/corosync + pacemaker + DRBD on Ubuntu 10.04 with 3 or 4 nodes: 2 hosts + 2 storage, or 1 host, 1 host + storage, 1 storage. Any thoughts? Thanks!
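[Editor's note: for reference, a minimal DRBD resource definition for a mirrored backing store of this kind might look like the following sketch. Hostnames, addresses, and device paths are placeholders, not taken from the thread; syntax is for the DRBD 8.3 series that shipped around Ubuntu 10.04.]

```
resource vmstore {
  protocol C;               # synchronous replication: a write completes on both nodes
  device    /dev/drbd0;
  disk      /dev/sdb1;      # local backing partition (placeholder)
  meta-disk internal;
  on storage1 {
    address 192.168.10.1:7789;
  }
  on storage2 {
    address 192.168.10.2:7789;
  }
}
```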

On 7/19/11, David Ehle <ehle@agni.phys.iit.edu> wrote:
KVM/libvirt makes moving running VMs from one host to another a no-brainer, but I'm trying to figure out the right way to take the storage backend down for maintenance without disrupting the VMs.
Right now I'm thinking something like KVM + libvirt + heartbeat/corosync + pacemaker + DRBD on Ubuntu 10.04 with 3 or 4 nodes: 2 hosts + 2 storage, or 1 host, 1 host + storage, 1 storage.
Personally, I had been planning to do it with mdadm multi-path & RAID 1 over iSCSI. Seems to be a more straightforward solution than using a full HA setup if the purpose is to ensure storage remains available when a node is down (for maintenance or otherwise).
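[Editor's note: a rough sketch of that mdadm-over-iSCSI idea, assuming open-iscsi and mdadm are installed; the target IQNs, portal addresses, and device names below are hypothetical.]

```
# Log in to one LUN exported by each storage node (IQNs are placeholders)
iscsiadm -m node -T iqn.2011-07.example:storage1.vm1 -p 192.168.10.1 --login
iscsiadm -m node -T iqn.2011-07.example:storage2.vm1 -p 192.168.10.2 --login

# Mirror the two remote disks with md RAID 1; if one storage node goes
# down for maintenance, the array keeps running degraded on the other
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# When the node returns, re-add its disk and let md resync
mdadm /dev/md0 --re-add /dev/sdc
```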

That makes a lot of sense... I don't know that I really need the full suite, as I don't think I'm aiming for that level of HA for the hosted systems. I'll look into it. Can you provide any better links than what a Google search would turn up? Thanks! David.

On Tue, 19 Jul 2011, Emmanuel Noobadmin wrote:
Personally, I had been planning to do it with mdadm multi-path & RAID 1 over iSCSI. Seems to be a more straightforward solution than using a full HA setup if the purpose is to ensure storage remains available when a node is down (for maintenance or otherwise).

On Tue, 19 Jul 2011, Emmanuel Noobadmin wrote:
Personally, I had been planning to do it with mdadm multi-path & RAID 1 over iSCSI. Seems to be a more straightforward solution than using a full HA setup if the purpose is to ensure storage remains available when a node is down (for maintenance or otherwise).

Would you be willing/able to elaborate more on this idea? I've been looking into it a bit more, and while you're right that it sounds simpler on paper, DRBD is packaged for Ubuntu and there is pretty good cookbook documentation for how to do HA NFS on top of DRBD. I'm a real novice on the multipath topic, so I'm having trouble comparing apples to apples to see what the pros and cons of the two scenarios would be. Thanks! David.
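[Editor's note: for comparison with the mdadm route, the HA-NFS-on-DRBD setup mentioned above is typically wired up as a pacemaker resource group along these lines. This is a sketch using the crm shell; the resource names, mount point, and filesystem type are invented for illustration.]

```
# crm configure: DRBD master/slave plus an NFS export that follows the primary
primitive p_drbd_vmstore ocf:linbit:drbd \
  params drbd_resource="vmstore" op monitor interval="15s"
ms ms_drbd_vmstore p_drbd_vmstore \
  meta master-max="1" clone-max="2" notify="true"
primitive p_fs_vmstore ocf:heartbeat:Filesystem \
  params device="/dev/drbd0" directory="/srv/vmstore" fstype="ext4"
primitive p_nfs ocf:heartbeat:nfsserver
group g_nfs p_fs_vmstore p_nfs
colocation c_nfs_on_drbd inf: g_nfs ms_drbd_vmstore:Master
order o_drbd_before_nfs inf: ms_drbd_vmstore:promote g_nfs:start
```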

On 7/20/11, David Ehle <ehle@phys.iit.edu> wrote:
Would you be willing/able to elaborate more on this idea? I've been looking into it a bit more, and while you're right that it sounds simpler on paper, DRBD is packaged for Ubuntu and there is pretty good cookbook documentation for how to do HA NFS on top of DRBD.
I'm a real novice on the multipath topic, so I'm having trouble comparing apples to apples to see what the pros and cons of the two scenarios would be.

I'm a novice as well (if my nickname didn't make that obvious yet :D). It's all on paper, since I haven't had the time to push for it internally, nor was the original client eager to put out the budget for the necessary hardware. My basic idea was this:

Physically
=======
[VM Host]
NIC1 -> Switch 1
NIC2 -> Switch 2

[Storage Node] x 2
NIC1 -> Switch 1
NIC2 -> Switch 2
Export HDD x 4 (RAID 10, or could do with RAID 1)

So for VM1:
mp1: multipath NIC1 -> Storage1:HDD1, NIC2 -> Storage1:HDD1
mp2: multipath NIC1 -> Storage2:HDD1, NIC2 -> Storage2:HDD1
then md mirror using mp1 and mp2

This way, if one switch fails, multipath should keep the array working; if one node fails, the md mirror should keep it alive.

One question I haven't figured out an answer to is whether it would be better to build the array in the host and simply show the guest a single drive (less overhead? more flexible, since I could change the setup on the host as long as the guest definition points to the correct block device), or do it within the guest itself (faster I/O, since the kernel is aware of more drives?)
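[Editor's note: the mp1/mp2 + mirror layering described above could be realized with mdadm roughly as follows, assuming each storage node's exported LUN appears once per NIC/path. All device names are hypothetical; dm-multipath (multipathd) is the more common alternative to md's multipath level for the bottom layer.]

```
# Two paths to Storage1:HDD1 (one per NIC/switch) combined into mp1
mdadm --create /dev/md/mp1 --level=multipath --raid-devices=2 /dev/sdc /dev/sdd
# Two paths to Storage2:HDD1 combined into mp2
mdadm --create /dev/md/mp2 --level=multipath --raid-devices=2 /dev/sde /dev/sdf
# RAID 1 across the two storage nodes: survives loss of either node
mdadm --create /dev/md/vm1 --level=1 --raid-devices=2 /dev/md/mp1 /dev/md/mp2
```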

Hello, I have some hosts with a similar configuration. I chose to build the md arrays in the guests, because on our site we use live migration, and I don't know how to handle the mirror while migrating the guest. A disadvantage is the difficulty of booting the guest if the mirror is broken.

----- Original Message ----- From: "Emmanuel Noobadmin" <centos.admin@gmail.com> To: "David Ehle" <ehle@agni.phys.iit.edu> Cc: libvirt-users@redhat.com Sent: Wednesday, 20 July 2011 07:06:51 Subject: Re: [libvirt-users] RAID1 over IP?
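[Editor's note: for the in-guest approach described above, the libvirt domain definition would expose both mirror legs as separate disks, and the guest's kernel would assemble the md RAID 1 itself. A sketch of the relevant domain XML; the source paths and target names are illustrative.]

```
<!-- Two host block devices passed through; the guest builds its own md mirror -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/md/mp1'/>
  <target dev='vdb' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/md/mp2'/>
  <target dev='vdc' bus='virtio'/>
</disk>
```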
participants (3)
- David Ehle
- Emmanuel Noobadmin
- Jean Michault