Please see inline.
Date: Sun, 12 Aug 2012 16:31:29 +0200
Subject: Re: [libvirt] Proposal to add iSCSI support to esx storage driver
From: matthias.bolte(a)googlemail.com
To: ata.husain(a)hotmail.com
CC: libvir-list(a)redhat.com
2012/8/5 Ata Bohra <ata.husain(a)hotmail.com>:
>> Date: Sun, 5 Aug 2012 23:04:07 +0200
>> Subject: Re: [libvirt] Proposal to add iSCSI support to esx storage driver
>> From: matthias.bolte(a)googlemail.com
>> To: ata.husain(a)hotmail.com
>> CC: libvir-list(a)redhat.com
>
>>
>> 2012/8/2 Ata Bohra <ata.husain(a)hotmail.com>:
>> > Hi All,
>> >
>> > I just want to go over the design that I am working on to incorporate
>> > iSCSI support into the libvirt ESX storage driver. The highlights are:
>> >
>> > Current Implementation
>> > At present esx_storage_driver supports only the VMFS type of datastore
>> > and does not provide much leeway to enhance it or add other supported
>> > storage pool types such as iSCSI.
>> >
>> > Proposal
>> > My proposal is:
>> > 1. Split the current code so that esx_storage_driver becomes more like a
>> > facade; this driver will use "backend" drivers to perform the requested
>> > task (such as: esx_storage_backend_iscsi and esx_storage_backend_vmfs).
>> > 2. Based on the pool type (a lookup can determine the storage pool type),
>> > the base driver then invokes the appropriate backend driver routine to
>> > get the job done.
>> > 3. Backend drivers shall implement the same routines exposed by
>> > esx_storage_driver where needed, but the implementation will be pertinent
>> > to their specific type.
>>
>> I took a quick look at the vSphere API regarding iSCSI but I'm not
>> sure how it's supposed to work. Do you have a better understanding
>> of this? I'd like to discuss the conceptual part first. How do
>> storage pool and volume listing/creation/destruction work with iSCSI?
>> Does it differ from the current code at all? If it differs, is it so
>> different that we really need this radical split?
>>
>> --
>> Matthias Bolte
>>
>> http://photron.blogspot.com
>
> Hi Matthias,
>
> Below is my understanding as per the iSCSI operations mapping of vSphere
> APIs and libvirt.
>
> Storage Pool <---> iSCSI target (as ESX provides a set of static targets as
> well as dynamic targets, I am targeting only the list of static targets, as
> they guarantee a LUN exposed on that IQN and cover the corresponding dynamic
> target too)
>
> Volume <---> Logical Unit Number (LUN) exposed to the host on that IQN.
>
> It is really important for me to get the above mappings right, so please let
> me know if you think they do not map well. (I have based my understanding on
> the brief discussion at http://libvirt.org/storage.html)
>
> iSCSI and VMFS (encapsulating all ESX supported datastores) operations
> differ significantly, for example:
> 1. iSCSI volumes can be listed but cannot be created/destroyed.
> 2. The ESX data objects for iSCSI have no similarity to the datastore type
> storage data objects (for iSCSI they are: HostScsiTopology and ScsiLun; it
> would be useful to share the complete mapping, so if you are interested,
> please let me know).
>
> The current esx_storage_driver.c is written solely for pools/volumes that
> support VMFS datastore operations, BUT a subset of these operations can be
> provided for iSCSI storage pools/volumes. It is possible to extend the
> current code to support iSCSI operations, but I think it would clutter the
> code. With that intention I proposed to split the pool-specific
> implementation into backend drivers, so that the esx libvirt storage
> interface driver simply delegates tasks to the appropriate backend driver.
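
As a rough illustration of that split (every struct, field, and function name
below is an invented placeholder for illustration, not an existing libvirt
symbol), the facade could keep one operations table per backend and leave
unsupported operations unset, e.g. volume creation for iSCSI:

/* Illustrative sketch only: a per-backend table of storage operations that
 * the facade driver could delegate to; all names here are hypothetical. */
typedef struct _esxStorageBackendOps esxStorageBackendOps;

struct _esxStorageBackendOps {
    const char *name;

    int (*listPools)(void *priv, char **names, int maxnames);
    int (*refreshPool)(void *priv, const char *poolName);
    int (*listVolumes)(void *priv, const char *poolName,
                       char **names, int maxnames);
    int (*createVolume)(void *priv, const char *poolName,
                        const char *volumeXML);
    /* ... further pool/volume operations ... */
};

/* For the iSCSI backend, createVolume would simply stay NULL, since iSCSI
 * volumes can be listed but not created/destroyed, and the facade would
 * report the operation as unsupported. */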
This sounds good so far. Some remaining questions:
A storage pool has a name and a UUID. Do you already know where to get
this information for an iSCSI target? For example, for the existing
datastore handling I had to use the MD5 sum of its mount path as the
UUID.
HostInternetScsiHba represents the iSCSI targets (iSCSI storage pools in
libvirt terminology), and it does not have a UUID field. But iSCSI targets
are identified by IQN identifiers (for instance:
iqn.2006-01.com.openfiler:tsn.dabb5996b20f) that are globally unique per
iSCSI target detected by the ESX iSCSI initiator. I am using the IQN
("iSCSI Name" in ESX terms) to derive the storage pool's UUID.
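
A minimal sketch of that derivation, mirroring the MD5-of-mount-path approach
used for datastores (it assumes gnulib's md5_buffer() from "md5.h" is
available in the tree, as elsewhere in the driver; the function name itself
is a hypothetical placeholder):

/* Minimal sketch: derive a stable 16-byte pool UUID from the target's IQN,
 * analogous to the MD5-of-mount-path approach used for datastores. Assumes
 * gnulib's md5_buffer() ("md5.h", MD5_DIGEST_SIZE == 16); the function name
 * is a placeholder. */
#include <string.h>
#include "md5.h"

static void
esxIscsiIQNToPoolUUID(const char *iqn, unsigned char *uuid /* 16 bytes */)
{
    unsigned char md5[MD5_DIGEST_SIZE];

    /* e.g. iqn = "iqn.2006-01.com.openfiler:tsn.dabb5996b20f" */
    md5_buffer(iqn, strlen(iqn), md5);

    /* An MD5 digest happens to be exactly as large as a libvirt UUID. */
    memcpy(uuid, md5, MD5_DIGEST_SIZE);
}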
Does ESX use the same naming scheme for iSCSI and the other
datastores, '[datastore-name] path/to/volume/in/datastore.vmdk', which
maps to /path/to/datastore/path/to/volume/in/datastore.vmdk in the VMX
file entry? Or does it use a fully qualified URL that is also used in
the VMX entry? If they use different formats, it would allow some
storage driver functions to directly distinguish between different
storage pool types and call into the correct backend without prior
probing.
As I understand it, there are two cases here:
1. A raw iSCSI target (volume in libvirt terminology), i.e. a raw LUN that
does not have any datastore format created on it.
2. An iSCSI LUN on which VMFS or any other datastore recognized by ESX has
been created (for instance: a NAS datastore).
The iSCSI backend driver only needs to take care of LUNs belonging to #1;
all the LUNs which the host can mount as datastores should be covered by the
existing storage driver implementation (the VMFS backend after refactoring).
The raw iSCSI volumes are identified by their "devicePath" (for instance:
/vmfs/devices/disks/t10.F405E46494C45425F49615840723D214D476A4D2E457B416);
I would use the same terminology to match the vSphere APIs. As ESX does not
recognize raw iSCSI LUNs as datastores, I don't think they can be used
inside a VM's VMX file; one needs to first create a datastore on the LUN
before deploying VMs or storing VMDKs on it.
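
The two path formats also suggest a way for the facade to pick a backend
directly, as asked above. A rough sketch under that assumption (the ops type
refers to the table sketched earlier; the helper and the backend variables
are hypothetical, not existing libvirt code): a '[datastore] path.vmdk'
string goes to the VMFS backend, while a '/vmfs/devices/disks/...'
devicePath goes to the iSCSI backend.

/* Hypothetical sketch: pick the backend from the path format alone, without
 * probing; all names below are placeholders. */
#include <string.h>

typedef struct _esxStorageBackendOps esxStorageBackendOps;

extern esxStorageBackendOps esxVMFSBackendOps;   /* placeholder */
extern esxStorageBackendOps esxISCSIBackendOps;  /* placeholder */

static esxStorageBackendOps *
esxStorageBackendForVolumePath(const char *path)
{
    /* "[datastore-name] path/to/volume.vmdk" -> VMFS-style datastore path */
    if (path[0] == '[')
        return &esxVMFSBackendOps;

    /* "/vmfs/devices/disks/t10...." -> raw iSCSI LUN devicePath */
    if (strncmp(path, "/vmfs/devices/disks/", 20) == 0)
        return &esxISCSIBackendOps;

    return NULL; /* unknown path format */
}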
Okay, I think the next step is that you start to implement your scheme
and see how it works out.
I got to spend some time on this and have a good chunk of the work done;
I am hoping to share the design soon.

Thanks!
Ata