> +
> +    exp_len = exp_base_num
> +
> +    if 'DiskPool' in id:
> +        if virt == 'Xen' or virt == 'XenFV':
> +            # For Xen and XenFV, there is a template for PV and FV,
> +            # so you end up with double the number of templates
> +            xen_multi = 2
> +
> +            if curr_cim_rev >= libvirt_rasd_template_changes and \
> +               curr_cim_rev < libvirt_rasd_new_changes:
> +                exp_len = exp_base_num + exp_cdrom
> +
> +            elif curr_cim_rev >= libvirt_rasd_new_changes and \
> +                 curr_cim_rev < libvirt_rasd_dpool_changes:
> +                exp_len = (exp_base_num + exp_cdrom) * xen_multi
> +
> +            elif curr_cim_rev >= libvirt_rasd_dpool_changes:
> +                volumes = enum_volumes(virt, ip)
> +                exp_len = (volumes * exp_base_num) * xen_multi + \
> +                          (exp_cdrom * 2)
>
> I did not understand why we have exp_cdrom * 2.
Oops, this should have been exp_cdrom * xen_multi (4 CDROM instances
for PV guests, 4 CDROM instances for FV guests). I'll resend with a fix
for this.
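
For reference, here's roughly what the corrected branch would look like
(same names as in the patch above; only the last term changes):

    elif curr_cim_rev >= libvirt_rasd_dpool_changes:
        volumes = enum_volumes(virt, ip)
        # one set of templates per volume for PV and for FV, plus
        # one set of CDROM templates each for PV and FV
        exp_len = (volumes * exp_base_num) * xen_multi + \
                  (exp_cdrom * xen_multi)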
> Can you briefly explain what the expected CDROM records are now?
The CDROM template generation code in libvirt hasn't changed. 4
instances for CDROM are generated (max, min, incr, def). For Xen, we
generate a set of max, min, incr, def for PV and for FV. So you see 8
templates in the Xen case.
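
As a quick counting sketch (assuming exp_cdrom is 4; cdrom_insts is
just an illustrative name, not from the patch):

    exp_cdrom = 4                      # max, min, incr, def
    if virt == 'Xen' or virt == 'XenFV':
        # one set of CDROM templates for PV and one for FV
        cdrom_insts = exp_cdrom * 2    # 8 templates
    else:
        cdrom_insts = exp_cdrom        # 4 templates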
> With the new changes, what is the expected difference in the DiskPool
> information?
Are you talking about the recent provider changes?
Here are the steps (a rough code sketch follows the list):
1) For min, max, incr, def:
   a) For each storage volume in the storage pool do:
      i) Check whether we can get info on the storage volume.
      ii) If libvirt is able to get the info, generate an instance.
   b) Generate a CDROM instance (for Xen, one for PV and one for FV).
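
In rough Python-style pseudocode (the provider itself is C, so
pool_volumes, get_vol_info(), gen_rasd_inst(), and gen_cdrom_inst()
are placeholders, not the actual provider functions):

    insts = []
    for rasd_type in ('max', 'min', 'incr', 'def'):        # step 1
        for volume in pool_volumes:                        # step a
            info = get_vol_info(volume)                    # step i
            if info is not None:                           # step ii
                insts.append(gen_rasd_inst(rasd_type, info))

        # step b: CDROM instance (for Xen, one for PV and one for FV)
        insts.append(gen_cdrom_inst(rasd_type))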
> +
> +        elif virt == 'KVM':
> +            if curr_cim_rev >= libvirt_rasd_new_changes and \
> +               curr_cim_rev < libvirt_rasd_dpool_changes:
> +                exp_len = exp_base_num + exp_cdrom
> +
> +            elif curr_cim_rev >= libvirt_rasd_dpool_changes:
> +                volumes = enum_volumes(virt, ip)
> +                exp_len = (volumes * exp_base_num) + exp_cdrom
> +
> +    return exp_len
> +
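So for a KVM host at or above the dpool changes revision with, say, 3
storage volumes (and assuming exp_base_num and exp_cdrom are both 4),
the last branch works out to (3 * 4) + 4 = 16 expected templates.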
--
Kaitlin Rupert
IBM Linux Technology Center
kaitlin@linux.vnet.ibm.com