[PATCH 0 of 4] Updating 01_forward.py of EAFP.

# HG changeset patch
# User Deepti B. Kalakeri <deeptik@linux.vnet.ibm.com>
# Date 1215776529 25200
# Node ID 25cd5c474b0797d36d67e200b4f8ed4ffa9cfedc
# Parent  0f8b7f041b91761da40aee3f6574338de6c178fd
[TEST] Adding functions to verify the EAFP fields.

Added verify_common_pool_values(), verify_specific_fields() and
verify_disk_mem_proc_pool_values() to help verify the EAFP fields returned
when queried with logical devices.

Signed-off-by: Deepti B. Kalakeri <deeptik@linux.vnet.ibm.com>

diff -r 0f8b7f041b91 -r 25cd5c474b07 suites/libvirt-cim/lib/XenKvmLib/logicaldevices.py
--- a/suites/libvirt-cim/lib/XenKvmLib/logicaldevices.py	Thu Jul 10 06:07:36 2008 -0700
+++ b/suites/libvirt-cim/lib/XenKvmLib/logicaldevices.py	Fri Jul 11 04:42:09 2008 -0700
@@ -123,3 +123,34 @@
         return FAIL
     return PASS
 
+def verify_common_pool_values(assoc_info, list_values):
+    if assoc_info['InstanceID'] != list_values['InstanceID']:
+        field_err(assoc_info, list_values, fieldname = 'InstanceID')
+        return FAIL
+    if assoc_info['PoolID'] != list_values['PoolID']:
+        field_err(assoc_info, list_values, fieldname = 'PoolID')
+        return FAIL
+    if assoc_info['ResourceType'] != list_values['ResourceType']:
+        field_err(assoc_info, list_values, fieldname = 'ResourceType')
+        return FAIL
+    return PASS
+
+def verify_specific_fields(assoc_info, list_values):
+    if assoc_info['Capacity'] != list_values['Capacity']:
+        field_err(assoc_info, list_values, fieldname = 'Capacity')
+        return FAIL
+    if assoc_info['Reserved'] != list_values['Reserved']:
+        field_err(assoc_info, list_values, fieldname = 'Reserved')
+        return FAIL
+    if assoc_info['AllocationUnits'] != list_values['AllocationUnits']:
+        field_err(assoc_info, list_values, fieldname = 'AllocationUnits')
+        return FAIL
+    return PASS
+
+def verify_disk_mem_proc_pool_values(assoc_info, list_values):
+    status = verify_common_pool_values(assoc_info, list_values)
+    if status != PASS:
+        return status
+    status = verify_specific_fields(assoc_info, list_values)
+    return status
+

+def verify_common_pool_values(assoc_info, list_values):
+    if assoc_info['InstanceID'] != list_values['InstanceID']:
+        field_err(assoc_info, list_values, fieldname = 'InstanceID')
+        return FAIL
+    if assoc_info['PoolID'] != list_values['PoolID']:
+        field_err(assoc_info, list_values, fieldname = 'PoolID')
+        return FAIL
+    if assoc_info['ResourceType'] != list_values['ResourceType']:
+        field_err(assoc_info, list_values, fieldname = 'ResourceType')
+        return FAIL
+    return PASS
+
+def verify_specific_fields(assoc_info, list_values):
I'd suggest combining this functionality into the function above. The only case where you need to ignore these properties is the NetworkPool case, so I'd have verify_common_pool_values() take the class name. After checking the common values, check to see if the class name is "NetworkPool" and return from the function if it is.
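Something like this, roughly (just a sketch - verify_pool_values() is a placeholder name, and it assumes the field_err(), PASS and FAIL definitions already in logicaldevices.py):

def verify_pool_values(assoc_info, list_values, cn):
    # Fields every pool type carries.
    for field in ['InstanceID', 'PoolID', 'ResourceType']:
        if assoc_info[field] != list_values[field]:
            field_err(assoc_info, list_values, fieldname = field)
            return FAIL

    # NetworkPool instances don't carry the capacity-related fields;
    # endswith() also handles typed names like "Xen_NetworkPool".
    if cn.endswith('NetworkPool'):
        return PASS

    for field in ['Capacity', 'Reserved', 'AllocationUnits']:
        if assoc_info[field] != list_values[field]:
            field_err(assoc_info, list_values, fieldname = field)
            return FAIL
    return PASS

That also simplifies the call sites: every pool type goes through the one function.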
+    if assoc_info['Capacity'] != list_values['Capacity']:
+        field_err(assoc_info, list_values, fieldname = 'Capacity')
+        return FAIL
+    if assoc_info['Reserved'] != list_values['Reserved']:
+        field_err(assoc_info, list_values, fieldname = 'Reserved')
+        return FAIL
+    if assoc_info['AllocationUnits'] != list_values['AllocationUnits']:
+        field_err(assoc_info, list_values, fieldname = 'AllocationUnits')
+        return FAIL
+    return PASS
+
+def verify_disk_mem_proc_pool_values(assoc_info, list_values):
I don't think this function is needed. It only calls two other functions - it doesn't do any other work. Plus, if you make the changes suggested above, this function becomes obsolete.

-- 
Kaitlin Rupert
IBM Linux Technology Center
kaitlin@linux.vnet.ibm.com

# HG changeset patch
# User Deepti B. Kalakeri <deeptik@linux.vnet.ibm.com>
# Date 1215777176 25200
# Node ID 65b1865d10a0570ec785e78affc0f69877bc8189
# Parent  25cd5c474b0797d36d67e200b4f8ed4ffa9cfedc
[TEST] Adding functions to common_util.py to support the verification of EAFP fields.

Added the following functions:
1) eafp_dpool_cap_reserve_val() to get the DiskPool's Capacity and Reserved
   field values, depending on which libvirt version is present on the machine.
2) eafp_mpool_reserve_val() to get the MemoryPool's Reserved field value.
3) Also added a get_value() helper function.

Signed-off-by: Deepti B. Kalakeri <deeptik@linux.vnet.ibm.com>

diff -r 25cd5c474b07 -r 65b1865d10a0 suites/libvirt-cim/lib/XenKvmLib/common_util.py
--- a/suites/libvirt-cim/lib/XenKvmLib/common_util.py	Fri Jul 11 04:42:09 2008 -0700
+++ b/suites/libvirt-cim/lib/XenKvmLib/common_util.py	Fri Jul 11 04:52:56 2008 -0700
@@ -35,7 +35,9 @@
 from XenKvmLib.classes import get_typed_class
 from CimTest.Globals import logger, log_param, CIM_ERROR_ENUMERATE
 from CimTest.ReturnCodes import PASS, FAIL, XFAIL_RC
-from VirtLib.live import diskpool_list, virsh_version, net_list
+from VirtLib.live import diskpool_list, virsh_version, net_list, \
+active_domain_list, virsh_dominfo_usedmem
+from VirtLib.utils import run_remote
 from XenKvmLib.vxml import PoolXML, NetXML
 
 test_dpath = "foo"
@@ -397,3 +399,66 @@
 
     return PASS
 
+def get_value(server, cmd, log_msg, fieldname):
+    msg = log_msg % fieldname
+    ret, value = run_remote(server, cmd)
+    if ret != 0:
+        logger.error("%s", log_msg, fieldname)
+        return FAIL, value
+    return PASS, value
+
+def eafp_dpool_cap_reserve_val(server, virt, poolname):
+    libvirt_version = virsh_version(server, virt)
+    capacity = reserved = None
+    if libvirt_version >= '0.4.1':
+        # get the value from pool-info
+        log_msg = "Failed to get the '%s' info from pool-info"
+        dp_name, pname = poolname.split("/")
+
+        cmd = "virsh pool-info %s | awk '/Capacity/ { print \$2}'" \
+              % pname
+        status, cap_val = get_value(server, cmd, log_msg, 'Capacity')
+        if status != PASS:
+            return FAIL, capacity, reserved
+        cap_val = float(cap_val)
+        capacity = int(cap_val * 1024 * 1024 * 1024) >> 20
+
+        cmd = "virsh pool-info %s | awk '/Allocation/ { print \$2}'" \
+              % pname
+        status, alloc_val = get_value(server, cmd, log_msg, 'Allocation')
+        if status != PASS:
+            return FAIL, capacity, reserved
+        alloc_val = float(alloc_val)
+        reserved = int(alloc_val * 1024 * 1024 * 1024) >> 20
+
+    else:
+        # get info from stat --file-system
+        log_msg = "Stat on the '%s' file failed"
+
+        cmd = "stat -f %s | awk '/size/ {print \$7}'" % disk_file
+        status, f_bsize = get_value(server, cmd, log_msg, disk_file)
+        if status != PASS:
+            return FAIL, capacity, reserved
+
+        cmd = "stat -f %s | awk '/Blocks/ {print \$3}'" % disk_file
+        status, b_total = get_value(server, cmd, log_msg, disk_file)
+        if status != PASS:
+            return FAIL, capacity, reserved
+        cap_val = (int(f_bsize) * int(b_total))
+        capacity = (int(f_bsize) * int(b_total)) >> 20
+
+        cmd = "stat -f %s | awk '/Blocks/ {print \$5}'" % disk_file
+        status, b_free = get_value(server, cmd, log_msg, disk_file)
+        if status != PASS:
+            return FAIL, capacity, reserved
+        reserved = (cap_val - (int(f_bsize) * int(b_free))) >> 20
+
+    return PASS, capacity, reserved
+
+def eafp_mpool_reserve_val(server, virt):
+    reserved = 0
+    doms = active_domain_list(server, virt)
+    for dom_name in doms:
+        mem = virsh_dominfo_usedmem(server, dom_name, virt)
+        reserved += mem
+    return reserved

+def get_value(server, cmd, log_msg, fieldname):
I wouldn't make a separate function out of this. It probably won't be used by anything except the eafp_dpool_cap_reserve_val() function.
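That is, just fold the run_remote() call and the error check inline at each call site, e.g. (a sketch, using the locals already present in eafp_dpool_cap_reserve_val()):

    cmd = "virsh pool-info %s | awk '/Capacity/ { print \$2}'" % pname
    ret, cap_val = run_remote(server, cmd)
    if ret != 0:
        # Same message get_value() would have logged.
        logger.error("Failed to get the 'Capacity' info from pool-info")
        return FAIL, capacity, reserved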
+    msg = log_msg % fieldname
+    ret, value = run_remote(server, cmd)
+    if ret != 0:
+        logger.error("%s", log_msg, fieldname)
+        return FAIL, value
+    return PASS, value
+
+def eafp_dpool_cap_reserve_val(server, virt, poolname):
The capacity and reserved values are properties of the resource pools - I'd drop eafp from the name, since these values aren't related to the EAFP association. Also, I'd move these functions to the logicaldevices.py file. Or, you could create a resourcepool.py file and add the verify_common_pool_values() function to it as well. common_util.py is becoming quite large, so it might be a good idea to place more specific functions like these (ones that deal with a specific provider's values) into a file specific to that provider.
+    libvirt_version = virsh_version(server, virt)
+    capacity = reserved = None
+    if libvirt_version >= '0.4.1':
+        # get the value from pool-info
+        log_msg = "Failed to get the '%s' info from pool-info"
+        dp_name, pname = poolname.split("/")
+
+        cmd = "virsh pool-info %s | awk '/Capacity/ { print \$2}'" \
+              % pname
+        status, cap_val = get_value(server, cmd, log_msg, 'Capacity')
+        if status != PASS:
+            return FAIL, capacity, reserved
+        cap_val = float(cap_val)
+        capacity = int(cap_val * 1024 * 1024 * 1024) >> 20
This calculation is odd. You're converting GB into bytes and then from bytes into megabytes. You can instead convert straight to megabytes:

    capacity = int(cap_val * 1024)
+
+        cmd = "virsh pool-info %s | awk '/Allocation/ { print \$2}'" \
+              % pname
+        status, alloc_val = get_value(server, cmd, log_msg, 'Allocation')
+        if status != PASS:
+            return FAIL, capacity, reserved
+        alloc_val = float(alloc_val)
+        reserved = int(alloc_val * 1024 * 1024 * 1024) >> 20
Same here - convert straight to megabytes:

    reserved = int(alloc_val * 1024)
+
+    else:
+        # get info from stat --file-system
+        log_msg = "Stat on the '%s' file failed"
+
+        cmd = "stat -f %s | awk '/size/ {print \$7}'" % disk_file
+        status, f_bsize = get_value(server, cmd, log_msg, disk_file)
+        if status != PASS:
+            return FAIL, capacity, reserved
+
+        cmd = "stat -f %s | awk '/Blocks/ {print \$3}'" % disk_file
+        status, b_total = get_value(server, cmd, log_msg, disk_file)
+        if status != PASS:
+            return FAIL, capacity, reserved
+        cap_val = (int(f_bsize) * int(b_total))
+        capacity = (int(f_bsize) * int(b_total)) >> 20
+
+        cmd = "stat -f %s | awk '/Blocks/ {print \$5}'" % disk_file
+        status, b_free = get_value(server, cmd, log_msg, disk_file)
+        if status != PASS:
+            return FAIL, capacity, reserved
+        reserved = (cap_val - (int(f_bsize) * int(b_free))) >> 20
+
+    return PASS, capacity, reserved
If you want to break up the size of this function some, you could do something like:

dpool_cap_reserve_val():
    libvirt_version = virsh_version(server, virt)
    capacity = reserved = None
    if libvirt_version >= '0.4.1':
        libvirt_dpool_cap_res()
    else:
        fs_dpool_cap_res()

Having sub functions for each of these breaks up the work nicely, and it's possible (although unlikely) that a test would need to call one of these functions explicitly.

-- 
Kaitlin Rupert
IBM Linux Technology Center
kaitlin@linux.vnet.ibm.com
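Fleshed out a little, that split might look like the following (a sketch only, under the same assumptions the patch already makes: run_remote() and virsh_version() from VirtLib, PASS/FAIL from CimTest.ReturnCodes, and the pool name / backing file passed in by the caller; the helper names are just the placeholders from the suggestion above):

def libvirt_dpool_cap_res(server, pname):
    # Capacity and Allocation from 'virsh pool-info' are reported in GB;
    # convert straight to megabytes as suggested above.
    vals = {}
    for field in ['Capacity', 'Allocation']:
        cmd = "virsh pool-info %s | awk '/%s/ { print \$2 }'" % (pname, field)
        ret, out = run_remote(server, cmd)
        if ret != 0:
            return FAIL, None, None
        vals[field] = int(float(out) * 1024)
    return PASS, vals['Capacity'], vals['Allocation']

def fs_dpool_cap_res(server, dfile):
    # Derive capacity/reserved (in MB) from 'stat -f' block counts.
    sizes = {}
    for name, pattern, pos in [('bsize',  'size',   7),
                               ('btotal', 'Blocks', 3),
                               ('bfree',  'Blocks', 5)]:
        cmd = "stat -f %s | awk '/%s/ { print \$%d }'" % (dfile, pattern, pos)
        ret, out = run_remote(server, cmd)
        if ret != 0:
            return FAIL, None, None
        sizes[name] = int(out)
    cap = sizes['bsize'] * sizes['btotal']
    reserved = cap - sizes['bsize'] * sizes['bfree']
    return PASS, cap >> 20, reserved >> 20

def dpool_cap_reserve_val(server, virt, poolname, dfile):
    # Dispatch on the libvirt version, as the original function does.
    if virsh_version(server, virt) >= '0.4.1':
        dp_name, pname = poolname.split("/")
        return libvirt_dpool_cap_res(server, pname)
    return fs_dpool_cap_res(server, dfile)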

# HG changeset patch
# User Deepti B. Kalakeri <deeptik@linux.vnet.ibm.com>
# Date 1215777594 25200
# Node ID 9155858d4e5aa69393e652e7fd340ab12a63a153
# Parent  65b1865d10a0570ec785e78affc0f69877bc8189
[TEST] Adding functions to live.py.

Added the following functions:
1) virsh_nodeinfo_cpucount() to get the number of processors on the machine.
2) virsh_nodeinfo_memsize() to get the memory capacity on the machine.
3) virsh_dominfo_usedmem() to get the memory used by the guest.

Signed-off-by: Deepti B. Kalakeri <deeptik@linux.vnet.ibm.com>

diff -r 65b1865d10a0 -r 9155858d4e5a lib/VirtLib/live.py
--- a/lib/VirtLib/live.py	Fri Jul 11 04:52:56 2008 -0700
+++ b/lib/VirtLib/live.py	Fri Jul 11 04:59:54 2008 -0700
@@ -347,3 +347,56 @@
         return out
 
     return None
 
+def virsh_nodeinfo_cpucount(server, virt="Xen"):
+    """
+    Returns the number of processors on the machine.
+    """
+
+    cmd = "virsh -c %s nodeinfo | awk '/CPU\(s\)/ { print \$2 }'" \
+          % (utils.virt2uri(virt))
+
+    rc, out = utils.run_remote(server, cmd)
+    if rc != 0:
+        return -1
+
+    try:
+        cpus = int(out)
+        return cpus
+    except ValueError:
+        return -1
+
+def virsh_nodeinfo_memsize(server, virt="Xen"):
+    """
+    Returns the memory capacity on the machine.
+    """
+
+    cmd = "virsh -c %s nodeinfo | awk '/Memory size/ { print \$3 }'" \
+          % (utils.virt2uri(virt))
+
+    rc, out = utils.run_remote(server, cmd)
+    if rc != 0:
+        return -1
+
+    try:
+        msize = int(out)
+        return msize
+    except ValueError:
+        return -1
+
+def virsh_dominfo_usedmem(server, vs_name, virt="Xen"):
+    """
+    Returns the memory used by the guest.
+    """
+
+    guest_cmd = "virsh -c %s dominfo %s | awk '/Used memory/ { print \$3 }'" \
+                % (utils.virt2uri(virt), vs_name)
+
+    rc, out = utils.run_remote(server, guest_cmd)
+    if rc != 0:
+        return -1
+
+    try:
+        usedmem = int(out)
+        return usedmem
+    except ValueError:
+        return -1

# HG changeset patch
# User Deepti B. Kalakeri <deeptik@linux.vnet.ibm.com>
# Date 1215778728 25200
# Node ID 2b57a9423f82420b386775bbdf884302d7338ba8
# Parent  9155858d4e5aa69393e652e7fd340ab12a63a153
[TEST] Updating 01_forward.py of EAFP.

1) Modified the tc to support XenFV and KVM.
2) Modified get_keys() to use the proper SystemCreationClassName.
3) Added a get_id() function to get the instances for the different logical
   devices, so that the DeviceID of the instances can be used in init_list().
4) Added the init_list() function to create a list of inputs for the EAFP
   association.
5) Added eafp_list() to create a list of pool values that will be used to
   verify the return values from the EAFP association.
6) Added the function verify_eafp_values() to call the association on EAFP
   and verify the return values.
7) Included cleanup_restore().

Signed-off-by: Deepti B. Kalakeri <deeptik@linux.vnet.ibm.com>

diff -r 9155858d4e5a -r 2b57a9423f82 suites/libvirt-cim/cimtest/ElementAllocatedFromPool/01_forward.py
--- a/suites/libvirt-cim/cimtest/ElementAllocatedFromPool/01_forward.py	Fri Jul 11 04:59:54 2008 -0700
+++ b/suites/libvirt-cim/cimtest/ElementAllocatedFromPool/01_forward.py	Fri Jul 11 05:18:48 2008 -0700
@@ -44,163 +44,217 @@
 import pywbem
 from XenKvmLib.test_xml import testxml, testxml_bridge
 from VirtLib import utils
-from XenKvmLib import assoc
-from XenKvmLib.test_doms import test_domain_function, destroy_and_undefine_all
+from XenKvmLib.assoc import Associators
+from XenKvmLib.test_doms import destroy_and_undefine_all
 from XenKvmLib import devices
-from CimTest import Globals
-from CimTest.Globals import do_main
-from VirtLib.live import network_by_bridge
-from CimTest.ReturnCodes import PASS, FAIL, SKIP
+from CimTest.Globals import CIM_ERROR_ASSOCIATORS, CIM_ERROR_GETINSTANCE
+from XenKvmLib.vxml import get_class
+from XenKvmLib.vsms import RASD_TYPE_DISK, RASD_TYPE_PROC, \
+RASD_TYPE_MEM, RASD_TYPE_NET_ETHER, RASD_TYPE_DISK
+from XenKvmLib.common_util import create_diskpool_conf, cleanup_restore, \
+eafp_dpool_cap_reserve_val, eafp_mpool_reserve_val
+from XenKvmLib.classes import get_typed_class
+from XenKvmLib.logicaldevices import verify_common_pool_values, \
+verify_disk_mem_proc_pool_values
+from CimTest.Globals import do_main, logger
+from VirtLib.live import network_by_bridge, virsh_nodeinfo_cpucount, \
+virsh_nodeinfo_memsize, virsh_dominfo_usedmem
+from CimTest.ReturnCodes import PASS, FAIL
 
-sup_types = ['Xen']
+sup_types = ['Xen', 'KVM', 'XenFV']
 
 test_dom = "hd_domain"
 test_mac = "00:11:22:33:44:aa"
 test_vcpus = 1
-test_disk = 'xvda'
 
 def print_error(cn, detail):
-    Globals.logger.error(Globals.CIM_ERROR_GETINSTANCE, cn)
-    Globals.logger.error("Exception: %s", detail)
+    logger.error(CIM_ERROR_GETINSTANCE, cn)
+    logger.error("Exception: %s", detail)
 
-def get_keys(cn, device_id):
+def get_keys(virt, cn, device_id):
     id = "%s/%s" % (test_dom, device_id)
+    sccn = get_typed_class(virt, "ComputerSystem")
     key_list = { 'DeviceID' : id,
                  'CreationClassName' : cn,
                  'SystemName' : test_dom,
-                 'SystemCreationClassName' : "Xen_ComputerSystem"
+                 'SystemCreationClassName' : sccn
               }
     return key_list
 
+def get_id(server, virt, cname, id):
+    dev = None
+    cn = get_typed_class(virt, cname)
+    try:
+        key_list = get_keys(virt, cn, id)
+        dev_class = devices.get_class(cn)
+        dev = dev_class(server, key_list)
+    except Exception, detail:
+        print_error(cn, detail)
+        return FAIL, dev
+    return PASS, dev
+
+def init_list(server, virt, test_disk):
+    lelist = {}
+    status, disk = get_id(server, virt, "LogicalDisk", test_disk)
+    if status != PASS:
+        return status, lelist
+
+    status, mem = get_id(server, virt, "Memory", "mem")
+    if status != PASS:
+        return status, lelist
+
+    status, net = get_id(server, virt, "NetworkPort", test_mac)
+    if status != PASS:
+        return status, lelist
+
+    status, proc = get_id(server, virt, "Processor", "0")
+    if status != PASS:
+        return status, lelist
+
+    lelist = {
+              disk.CreationClassName : disk.DeviceID, \
+              mem.CreationClassName  : mem.DeviceID, \
+              net.CreationClassName  : net.DeviceID, \
+              proc.CreationClassName : proc.DeviceID
+             }
+    return status, lelist
+
+def eafp_list(server, virt, diskid, d_cap, d_reserve, test_network):
+
+    diskpool = {
+                'CCName'          : get_typed_class(virt, 'DiskPool'),
+                'InstanceID'      : diskid,
+                'PoolID'          : diskid,
+                'ResourceType'    : RASD_TYPE_DISK,
+                'Capacity'        : d_cap,
+                'Reserved'        : d_reserve,
+                'AllocationUnits' : 'Megabytes'
+               }
+    procpool = {
+                'CCName'          : get_typed_class(virt, 'ProcessorPool'),
+                'InstanceID'      : "%s/%s" % ("ProcessorPool", "0"),
+                'PoolID'          : "%s/%s" % ("ProcessorPool", "0"),
+                'ResourceType'    : RASD_TYPE_PROC,
+                'Capacity'        : virsh_nodeinfo_cpucount(server, virt),
+                'Reserved'        : 0,
+                'AllocationUnits' : 'Processors'
+               }
+    netpool = {
+               'CCName'       : get_typed_class(virt, 'NetworkPool'),
+               'InstanceID'   : "%s/%s" % ("NetworkPool", test_network),
+               'PoolID'       : "%s/%s" % ("NetworkPool", test_network),
+               'ResourceType' : RASD_TYPE_NET_ETHER
+              }
+    mempool = {
+               'CCName'          : get_typed_class(virt, 'MemoryPool'),
+               'InstanceID'      : "%s/%s" % ("MemoryPool", "0"),
+               'PoolID'          : "%s/%s" % ("MemoryPool", "0"),
+               'ResourceType'    : RASD_TYPE_MEM,
+               'Reserved'        : eafp_mpool_reserve_val(server, virt),
+               'Capacity'        : virsh_nodeinfo_memsize(server, virt),
+               'AllocationUnits' : 'KiloBytes'
+              }
+    eafp_values = { 'procpool' : procpool,
+                    'diskpool' : diskpool,
+                    'netpool'  : netpool,
+                    'mempool'  : mempool
+                  }
+    return eafp_values
+
+def verify_eafp_values(server, virt, diskid, test_network, in_pllist):
+    # Looping through the in_pllist to get association for devices.
+    an = get_typed_class(virt, "ElementAllocatedFromPool")
+    sccn = get_typed_class(virt, "ComputerSystem")
+    status, d_cap, d_reserve = eafp_dpool_cap_reserve_val(server, virt,
+                                                          diskid)
+    if status != PASS:
+        return FAIL
+
+    eafp_values = eafp_list(server, virt, diskid, d_cap, d_reserve,
+                            test_network)
+    for cn, devid in sorted(in_pllist.items()):
+        try:
+            assoc_info = Associators(server, an, cn,
+                                     DeviceID = devid,
+                                     CreationClassName = cn,
+                                     SystemName = test_dom,
+                                     SystemCreationClassName = sccn,
+                                     virt = virt)
+            if len(assoc_info) != 1:
+                logger.error("%s returned %i ResourcePool objects for "
+                             "domain '%s'", an, len(assoc_info),
+                             test_dom)
+                status = FAIL
+                break
+            assoc_eafp_info = assoc_info[0]
+            CCName = assoc_eafp_info.classname
+            if CCName == eafp_values['procpool']['CCName']:
+                list_values = eafp_values['procpool']
+                status = verify_disk_mem_proc_pool_values(assoc_eafp_info,
+                                                          list_values)
+            elif CCName == eafp_values['netpool']['CCName']:
+                list_values = eafp_values['netpool']
+                status = verify_common_pool_values(assoc_eafp_info,
+                                                   list_values)
+            elif CCName == eafp_values['diskpool']['CCName']:
+                list_values = eafp_values['diskpool']
+                status = verify_disk_mem_proc_pool_values(assoc_eafp_info,
+                                                          list_values)
+            elif CCName == eafp_values['mempool']['CCName']:
+                list_values = eafp_values['mempool']
+                status = verify_disk_mem_proc_pool_values(assoc_eafp_info,
+                                                          list_values)
+            else:
+                status = FAIL
+            if status != PASS:
+                break
+        except Exception, detail:
+            logger.error(CIM_ERROR_ASSOCIATORS, an)
+            logger.error("Exception: %s", detail)
+            cleanup_restore(server, virt)
+            status = FAIL
+    return status
+
 
 @do_main(sup_types)
 def main():
     options = main.options
+    server = options.ip
+    virt = options.virt
     status = PASS
-    idx = 0
+    if virt == 'Xen':
+        test_disk = 'xvda'
+    else:
+        test_disk = 'hda'
+    # Getting the VS list and deleting the test_dom if it already exists.
+    destroy_and_undefine_all(server)
+    virt_type = get_class(virt)
+    vsxml = virt_type(test_dom, vcpus = test_vcpus, mac = test_mac,
+                      disk = test_disk)
 
-# Getting the VS list and deleting the test_dom if it already exists.
-    destroy_and_undefine_all(options.ip)
+    # Verify DiskPool on machine
+    status, diskid = create_diskpool_conf(server, virt)
+    if status != PASS:
+        return status
 
-    test_xml, bridge = testxml_bridge(test_dom, vcpus = test_vcpus, \
-                                      mac = test_mac, disk = test_disk, \
-                                      server = options.ip)
-    if bridge == None:
-        Globals.logger.error("Unable to find virtual bridge")
-        return SKIP
-
-    if test_xml == None:
-        Globals.logger.error("Guest xml not created properly")
-        return FAIL
-
-    virt_network = network_by_bridge(bridge, options.ip)
-    if virt_network == None:
-        Globals.logger.error("No virtual network found for bridge %s", bridge)
-        return SKIP
-
-    ret = test_domain_function(test_xml, options.ip, cmd = "create")
+    ret = vsxml.create(server)
     if not ret:
-        Globals.logger.error("Failed to Create the dom: %s", test_dom)
+        logger.error("Failed to Create the dom: '%s'", test_dom)
        return FAIL
 
-    try:
-        cn = "Xen_LogicalDisk"
-        key_list = get_keys(cn, test_disk)
-        disk = devices.Xen_LogicalDisk(options.ip, key_list)
-    except Exception,detail:
-        print_error(cn, detail)
-        return FAIL
+    status, lelist = init_list(server, virt, test_disk)
+    if status != PASS:
+        cleanup_restore(server, virt)
+        vsxml.destroy(server)
+        return status
 
-    try:
-        cn = "Xen_Memory"
-        key_list = get_keys(cn, "mem")
-        mem = devices.Xen_Memory(options.ip, key_list)
-    except Exception,detail:
-        print_error(cn, detail)
-        return FAIL
-
-    try:
-        cn = "Xen_NetworkPort"
-        key_list = get_keys(cn, test_mac)
-        net = devices.Xen_NetworkPort(options.ip, key_list)
-    except Exception,detail:
-        print_error(cn, detail)
-        return FAIL
-
-    try:
-        cn = "Xen_Processor"
-        key_list = get_keys(cn, "0")
-        proc = devices.Xen_Processor(options.ip, key_list)
-    except Exception,detail:
-        print_error(cn, detail)
-        return FAIL
-
-    netpool_id = "NetworkPool/%s" % virt_network
-
-    lelist = {
-              "Xen_LogicalDisk" : disk.DeviceID, \
-              "Xen_Memory"      : mem.DeviceID, \
-              "Xen_NetworkPort" : net.DeviceID, \
-              "Xen_Processor"   : proc.DeviceID
-             }
-    poollist = [
-                "Xen_DiskPool", \
-                "Xen_MemoryPool", \
-                "Xen_NetworkPool", \
-                "Xen_ProcessorPool"
-               ]
-    poolval = [
-               "DiskPool/foo", \
-               "MemoryPool/0", \
-               netpool_id, \
-               "ProcessorPool/0"
-              ]
-
-    sccn = "Xen_ComputerSystem"
-    for cn, devid in sorted(lelist.items()):
-        try:
-            assoc_info = assoc.Associators(options.ip, \
-                                           "Xen_ElementAllocatedFromPool",
-                                           cn,
-                                           DeviceID = devid,
-                                           CreationClassName = cn,
-                                           SystemName = test_dom,
-                                           SystemCreationClassName = sccn)
-            if len(assoc_info) != 1:
-                Globals.logger.error("Xen_ElementAllocatedFromPool returned %i\
-                ResourcePool objects for domain '%s'", len(assoc_info), test_dom)
-                status = FAIL
-                break
-
-            if assoc_info[0].classname != poollist[idx]:
-                Globals.logger.error("Classname Mismatch")
-                Globals.logger.error("Returned %s instead of %s", \
-                                     assoc_info[0].classname, \
-                                     poollist[idx])
-                status = FAIL
-
-            if assoc_info[0]['InstanceID'] != poolval[idx]:
-                Globals.logger.error("InstanceID Mismatch")
-                Globals.logger.error("Returned %s instead of %s", \
-                                     assoc_info[0]['InstanceID'], \
-                                     poolval[idx])
-                status = FAIL
-
-            if status != PASS:
-                break
-            else:
-                idx = idx + 1
-
-        except Exception, detail:
-            Globals.logger.error(Globals.CIM_ERROR_ASSOCIATORS, \
-                                 'Xen_ElementAllocatedFromPool')
-            Globals.logger.error("Exception: %s", detail)
-            status = FAIL
-
-    ret = test_domain_function(test_dom, options.ip, \
-                               cmd = "destroy")
+    virt_network = vsxml.xml_get_net_network()
+    status = verify_eafp_values(server, virt, diskid, virt_network, lelist)
+
+    cleanup_restore(server, virt)
+    vsxml.destroy(server)
     return status
 
 if __name__ == "__main__":

+def init_list(server, virt, test_disk):
+    lelist = {}
+    status, disk = get_id(server, virt, "LogicalDisk", test_disk)
+    if status != PASS:
+        return status, lelist
+
+    status, mem = get_id(server, virt, "Memory", "mem")
+    if status != PASS:
+        return status, lelist
+
+    status, net = get_id(server, virt, "NetworkPort", test_mac)
+    if status != PASS:
+        return status, lelist
+
+    status, proc = get_id(server, virt, "Processor", "0")
+    if status != PASS:
+        return status, lelist
+
+    lelist = {
+              disk.CreationClassName : disk.DeviceID, \
+              mem.CreationClassName  : mem.DeviceID, \
+              net.CreationClassName  : net.DeviceID, \
+              proc.CreationClassName : proc.DeviceID
+             }
+    return status, lelist
I'd suggest condensing the 4 get_id() calls into a loop. In actuality, you don't need this function at all: if you know the 4 values for the reference (CCN, SCCN, DeviceID, and SystemName), then you don't really need to get the individual device instances. However, I suspect most CIM clients will do a similar step, so this is a reasonable thing to do in the test. A loop-based version is sketched below.
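For example (a sketch that keeps the existing get_id() helper; the four (class name, device id) pairs simply mirror the calls above):

def init_list(server, virt, test_disk):
    lelist = {}
    devlist = [("LogicalDisk", test_disk),
               ("Memory",      "mem"),
               ("NetworkPort", test_mac),
               ("Processor",   "0")]
    for cname, devid in devlist:
        status, dev = get_id(server, virt, cname, devid)
        if status != PASS:
            return status, {}
        lelist[dev.CreationClassName] = dev.DeviceID
    return PASS, lelist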
+
+def eafp_list(server, virt, diskid, d_cap, d_reserve, test_network):
+
I'd remove this function altogether.
+
+def verify_eafp_values(server, virt, diskid, test_network, in_pllist):
+    # Looping through the in_pllist to get association for devices.
+    an = get_typed_class(virt, "ElementAllocatedFromPool")
+    sccn = get_typed_class(virt, "ComputerSystem")
+    status, d_cap, d_reserve = eafp_dpool_cap_reserve_val(server, virt,
+                                                          diskid)
+    if status != PASS:
+        return FAIL
+
+    eafp_values = eafp_list(server, virt, diskid, d_cap, d_reserve,
+                            test_network)
+    for cn, devid in sorted(in_pllist.items()):
+        try:
+            assoc_info = Associators(server, an, cn,
+                                     DeviceID = devid,
+                                     CreationClassName = cn,
+                                     SystemName = test_dom,
+                                     SystemCreationClassName = sccn,
+                                     virt = virt)
+            if len(assoc_info) != 1:
+                logger.error("%s returned %i ResourcePool objects for "
+                             "domain '%s'", an, len(assoc_info),
+                             test_dom)
+                status = FAIL
+                break
+            assoc_eafp_info = assoc_info[0]
+            CCName = assoc_eafp_info.classname
+            if CCName == eafp_values['procpool']['CCName']:
+                list_values = eafp_values['procpool']
+                status = verify_disk_mem_proc_pool_values(assoc_eafp_info,
+                                                          list_values)
Instead of verifying the values of the host pool returned, I'd determine which pool instances you expect to see (get the instance values by doing GetInstance() calls). Then verify that the pool instance returned by the Associators() call is the same instance as the pool instance you got using the GetInstance() call. I did something similar for this in the ESD patch I recently sent out. I'd move these resource pool checks to a resource pool specific test.

-- 
Kaitlin Rupert
IBM Linux Technology Center
kaitlin@linux.vnet.ibm.com
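Roughly, that approach could look like this (a hypothetical sketch, not the suite's actual helper: the 'root/virt' namespace and the CIM_USER/CIM_PASS credentials are assumptions about the test setup):

import pywbem

def get_exp_pool(server, cn, inst_id):
    # Fetch the pool instance we expect the association to return.
    conn = pywbem.WBEMConnection('http://%s' % server,
                                 (CIM_USER, CIM_PASS),
                                 default_namespace='root/virt')
    ref = pywbem.CIMInstanceName(cn, keybindings={'InstanceID': inst_id})
    return conn.GetInstance(ref)

def same_instance(exp, got):
    # The Associators() result matches if every property of the
    # expected instance is present with the same value.
    for prop in exp.properties.keys():
        if prop not in got.properties or exp[prop] != got[prop]:
            return False
    return True

verify_eafp_values() then only has to assert same_instance(exp_pool, assoc_info[0]) for the pool each device is expected to map to.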

Checking the capacity and reserved values of the resource pools is a good idea, but I think it might be a better approach to put these checks in ResourcePool/enum_01.py. The EAFP tests should be concerned with verifying that the pool returned by the association is the pool we expect for the given device. While the properties of the resource pool instance need to be valid, checking these values clutters this test and makes it difficult to read.

-- 
Kaitlin Rupert
IBM Linux Technology Center
kaitlin@linux.vnet.ibm.com