On Wed, Sep 03, 2008 at 01:53:44PM +0100, Daniel P. Berrange wrote:
> On Wed, Sep 03, 2008 at 08:43:44AM -0400, Konrad Rzeszutek wrote:
> > > amount of host 'setup'. If a guest is using iSCSI as its storage, then
> > > there is a step where the host has to login to the iSCSI target and create
> > > device nodes for the LUNs before the guest can be run. You don't want
> > > every single host to be logged into all your iSCSI targets all the time.
> >
> > I am interested to know why you think this is a no-no. If you have a
> > set of hosts and you want to be able to migrate between all of them and your
> > shared storage is iSCSI, why would it make a difference whether you had logged in
> > beforehand or logged in at migration time on each host?
>
> In the general case it is a needless scalability bottleneck. If you have 50
> iSCSI targets exported on your iSCSI server, and 1000 hosts in the network,

In most cases you would end up with just four iSCSI targets (IQNs), and after
logging in you would have 50 logical units (LUNs) assigned to the nodes.

> you'll end up with 50,000 connections to your iSCSI server. If any given host

With a mid-range (or even a low-end LSI) iSCSI NAS you would get
two paths per controller, giving you four paths to one disk. With the
setup I mentioned that means 4000 connections.

> only needs 1 particular target at any time, the optimal usage would need
> only 1000 connections to your iSCSI server.
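
To make the arithmetic explicit, here is a rough back-of-the-envelope sketch in
plain Python, using the numbers from the paragraphs above, comparing the three
layouts being discussed: every host logged into every target, one multipathed
IQN, and hosts logging in only where needed:

# Back-of-the-envelope connection counts for the layouts discussed above.
hosts = 1000

# Case 1: 50 separate targets, every host logged into all of them.
targets = 50
print("all hosts, all targets:", hosts * targets)        # 50,000 connections

# Case 2: one IQN, multipathed - two paths per controller, two controllers.
paths_per_host = 4
print("one IQN, 4 paths/host:", hosts * paths_per_host)  # 4,000 connections

# Case 3: each host logged into only the one target it actually needs.
print("one target per host:", hosts * 1)                 # 1,000 connections
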
> Now in the non-general oVirt case, they have a concept of 'hardware pools'
> and only migrate VMs within the scope of a pool. So they may well be fine
> with having every machine in a pool connected to the requisite iSCSI
> targets permanently, because each pool may only ever need 1 particular
> target, rather than all 50.

Or there is one target (IQN) with 50 LUNs - which is what a lot of the entry-level
($5K, Dell MD3000i, IBM DS3300) and mid-range (MSA 1510i, AX150i, NetApp) iSCSI NASes provide.
Though the high-end EqualLogic ones end up doing what you described, at which
point you could indeed have 50,000 connections.
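
As an aside, that "one IQN, many LUNs" layout maps fairly naturally onto a
single libvirt iSCSI storage pool: once the pool is started (i.e. once the host
has logged in), each LUN shows up as a volume. A rough sketch with the Python
bindings; the pool name "iscsi-array" is made up for the example:

import libvirt

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolLookupByName("iscsi-array")  # hypothetical pool name
pool.refresh(0)                                     # rescan the target for LUNs

# Every LUN behind the single IQN appears as one volume in the pool.
for name in pool.listVolumes():
    vol = pool.storageVolLookupByName(name)
    print(name, vol.path())
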
> So in the context of oVirt the question of iSCSI connectivity may be a
> non-issue. In the context of libvirt, we cannot assume this because it's
> a policy decision of the admin / application using libvirt.

Sure. Isn't the code providing a GUID for the iSCSI pool so that before
a migrate, the nodes can compare their GUIDs to find a match?
And if there is no match, complain so that the admin can create the pool.
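
A rough sketch of what that check could look like with the libvirt Python
bindings, assuming both nodes define the pool with the same <uuid> element; the
hostnames and the UUID below are made up:

import libvirt

POOL_UUID = "aa0b1c2d-3e4f-5a6b-7c8d-9e0f1a2b3c4d"   # made-up shared pool UUID

src = libvirt.open("qemu+ssh://node1.example.com/system")
dst = libvirt.open("qemu+ssh://node2.example.com/system")

def has_pool(conn, uuid):
    """Return True if this host has a storage pool with the given UUID."""
    try:
        conn.storagePoolLookupByUUIDString(uuid)
        return True
    except libvirt.libvirtError:
        return False

if has_pool(src, POOL_UUID) and has_pool(dst, POOL_UUID):
    print("both nodes have the pool - safe to migrate")
else:
    print("pool missing on one node - ask the admin to define it first")
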
> > What about if the shared storage was Fibre Channel and it was zoned so that
> > _all_ nodes saw the disk. Should you disconnect the block device when not in
> > use?
>
> The same principle applies - libvirt cannot make the assumption that all
> nodes have all storage available. Higher level apps may be able to make
> that assumption.

I thought you meant that libvirt _would_ be making that decision. It seems that
you are thinking in the same direction: if we can't find it, let the admin set up
a pool on the other node.

And one of those options could be the admin logging in on all of the nodes
to the same iSCSI IQN/pool.
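
For example, that could be as simple as pushing the same pool definition (same
portal, same IQN, and an explicit <uuid> so the GUIDs match) to every node and
letting libvirt do the iSCSI login when the pool is started. A rough sketch with
the Python bindings; the node names, portal, IQN and UUID are all made up:

import libvirt

# The same definition is pushed to every node; the explicit <uuid> is what
# lets the nodes recognise it as "the same" pool before a migration.
POOL_XML = """
<pool type='iscsi'>
  <name>shared-iscsi</name>
  <uuid>aa0b1c2d-3e4f-5a6b-7c8d-9e0f1a2b3c4d</uuid>
  <source>
    <host name='nas.example.com'/>
    <device path='iqn.2008-09.com.example:storage.array0'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
"""

for node in ("node1.example.com", "node2.example.com"):
    conn = libvirt.open("qemu+ssh://%s/system" % node)
    pool = conn.storagePoolDefineXML(POOL_XML, 0)
    pool.setAutostart(1)   # log back in automatically after a host reboot
    pool.create(0)         # performs the iSCSI login on this node now
    conn.close()
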