[PATCH 0/3] cimtest follow patch

This patch series follows John's 9 patches for cimtest. With it applied, cimtest should fail only 3 cases on RH6.4:

HostSystem - 01_enum.py: FAIL
HostSystem - 03_hs_to_settdefcap.py: FAIL
VirtualSystemManagementService - 19_definenetwork_ers.py: FAIL

This series is only for review and test. It may need adjustment and merging with John's patches, and the author name needs to be changed (not root :|), so please do not push it directly.

root (3):
  test: common_util, use number to check version
  test: rasd, use int as comparison condition for libvirt version
  test: RPCS, fix nfs issue

 .../12_create_netfs_storagevolume_errs.py       |  2 +-
 suites/libvirt-cim/lib/XenKvmLib/common_util.py | 32 ++++++++++++++++---
 suites/libvirt-cim/lib/XenKvmLib/pool.py        |  8 ++--
 suites/libvirt-cim/lib/XenKvmLib/rasd.py        |  7 ++--
 4 files changed, 36 insertions(+), 13 deletions(-)

[PATCH 1/3] test: common_util, use number to check version

From: root <root@RH64wenchao.(none)>

Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
---
 suites/libvirt-cim/lib/XenKvmLib/common_util.py | 28 ++++++++++++++++++++--
 1 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/suites/libvirt-cim/lib/XenKvmLib/common_util.py b/suites/libvirt-cim/lib/XenKvmLib/common_util.py
index 43e5e2c..3316c51 100644
--- a/suites/libvirt-cim/lib/XenKvmLib/common_util.py
+++ b/suites/libvirt-cim/lib/XenKvmLib/common_util.py
@@ -23,6 +23,7 @@
 import os
 import pywbem
 import random
+import string
 from time import sleep
 from tempfile import mkdtemp
 from commands import getstatusoutput
@@ -296,6 +297,17 @@ def conf_file():
         logger.error("Creation of Disk Conf file Failed")
     return status
 
+def get_version_number(version_str):
+    num = version_str.split(".")
+    l = len(num)
+    total = 0
+    multiple = 1
+    increase = 100
+    for i in range(0, l):
+        t = string.atoi(num[l - 1 - i]) * multiple
+        total = total + t
+        multiple = multiple * increase
+    return total
 
 def cleanup_restore(server, virt):
     """
@@ -308,7 +320,11 @@ def cleanup_restore(server, virt):
     # libvirt_version >= 0.4.1
     # Hence Skipping the logic to delete the new conf file
     # and just returning PASS
-    if libvirt_version >= '0.4.1':
+    libvirt_version = virsh_version(server, virt)
+    libvirt_version_req = "0.4.1"
+    a = get_version_number(libvirt_version)
+    b = get_version_number(libvirt_version_req)
+    if a >= b:
         return status
     try:
         if os.path.exists(back_disk_file):
@@ -365,7 +381,10 @@ def create_diskpool(server, virt='KVM', dpool=default_pool_name,
 def create_diskpool_conf(server, virt, dpool=default_pool_name):
     libvirt_version = virsh_version(server, virt)
-    if libvirt_version >= '0.4.1':
+    libvirt_version_req = "0.4.1"
+    a = get_version_number(libvirt_version)
+    b = get_version_number(libvirt_version_req)
+    if a >= b:
         status, dpoolname = create_diskpool(server, virt, dpool)
         diskid = "%s/%s" % ("DiskPool", dpoolname)
     else:
@@ -376,7 +395,10 @@ def create_diskpool_conf(server, virt, dpool=default_pool_name):
 def destroy_diskpool(server, virt, dpool):
     libvirt_version = virsh_version(server, virt)
-    if libvirt_version >= '0.4.1':
+    libvirt_version_req = "0.4.1"
+    a = get_version_number(libvirt_version)
+    b = get_version_number(libvirt_version_req)
+    if a >= b:
         if dpool == None:
             logger.error("No disk pool specified")
             return FAIL
-- 
1.7.1
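To see why this change is needed, here is a quick sketch of the get_version_number helper added in this patch, rewritten in modern Python for illustration (string.atoi is the deprecated Python 2 spelling of int); the logic mirrors the hunk above:

```python
def get_version_number(version_str):
    # Fold "major.minor.micro" into a single integer, weighting each
    # dot-separated component by a factor of 100, as in the patch.
    total, multiple = 0, 1
    for part in reversed(version_str.split(".")):
        total += int(part) * multiple
        multiple *= 100
    return total

# Lexical string comparison misclassifies modern libvirt versions:
print("0.10.2" >= "0.4.1")    # False, because '1' < '4' character-wise

# The numeric comparison gets it right: 1002 >= 401
print(get_version_number("0.10.2") >= get_version_number("0.4.1"))
```

This is why cimtest started skipping the post-0.4.1 code paths once virsh began reporting versions like "0.10.2".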

[PATCH 2/3] test: rasd, use int as comparison condition for libvirt version

From: root <root@RH64wenchao.(none)>

Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
---
 suites/libvirt-cim/lib/XenKvmLib/rasd.py | 7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/suites/libvirt-cim/lib/XenKvmLib/rasd.py b/suites/libvirt-cim/lib/XenKvmLib/rasd.py
index d65011e..11b0e38 100644
--- a/suites/libvirt-cim/lib/XenKvmLib/rasd.py
+++ b/suites/libvirt-cim/lib/XenKvmLib/rasd.py
@@ -32,7 +32,7 @@ from XenKvmLib.const import default_pool_name, default_network_name, \
      get_provider_version, default_net_type
 from XenKvmLib.pool import enum_volumes
 from XenKvmLib.xm_virt_util import virsh_version
-from XenKvmLib.common_util import parse_instance_id
+from XenKvmLib.common_util import parse_instance_id, get_version_number
 
 pasd_cn = 'ProcResourceAllocationSettingData'
 nasd_cn = 'NetResourceAllocationSettingData'
@@ -382,8 +382,9 @@ def get_exp_disk_rasd_len(virt, ip, rev, id):
     else:
         exp_len = (volumes * exp_base_num) + exp_cdrom
 
-
-    if virt != 'LXC' and libvirt_ver >= '0.4.1':
+    a = get_version_number(libvirt_ver)
+    b = get_version_number("0.4.1")
+    if virt != 'LXC' and a >= b:
         if rev >= libvirt_rasd_storagepool_changes:
             exp_len += exp_storagevol_rasd
-- 
1.7.1

[PATCH 3/3] test: RPCS, fix nfs issue

From: root <root@RH64wenchao.(none)>

Signed-off-by: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
---
 .../12_create_netfs_storagevolume_errs.py       | 2 +-
 suites/libvirt-cim/lib/XenKvmLib/common_util.py | 4 ++--
 suites/libvirt-cim/lib/XenKvmLib/pool.py        | 8 ++++----
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py
index 004af9f..27cb2f7 100644
--- a/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py
+++ b/suites/libvirt-cim/cimtest/ResourcePoolConfigurationService/12_create_netfs_storagevolume_errs.py
@@ -154,7 +154,7 @@ def main():
         if status != PASS :
             raise Exception("Failed to verify the Invlaid '%s' " % pool_name)
-
+ 
     except Exception, details:
         logger.error("Exception details: %s", details)
         status = FAIL
diff --git a/suites/libvirt-cim/lib/XenKvmLib/common_util.py b/suites/libvirt-cim/lib/XenKvmLib/common_util.py
index 3316c51..efcda92 100644
--- a/suites/libvirt-cim/lib/XenKvmLib/common_util.py
+++ b/suites/libvirt-cim/lib/XenKvmLib/common_util.py
@@ -536,8 +536,8 @@ def get_nfs_bin(server):
     if elems[0] == 'Fedora' or (elems[0] == 'Red' and elems[1] == 'Hat'):
         for i in range(1, len(elems)):
             if elems[i] == 'release':
-                if (elems[0] == 'Fedora' and int(elems[i+1]) >= 15) or \
-                   (elems[0] == 'Red' and int(elems[i+1]) >= 7):
+                if (elems[0] == 'Fedora' and get_version_number(elems[i+1]) >= 1500) or \
+                   (elems[0] == 'Red' and get_version_number(elems[i+1]) >= 700):
                     # Handle this differently - the command would be
                     # "systemctl {start|restart|status} nfs"
                     nfs_server_bin = "systemctl %s nfs"
diff --git a/suites/libvirt-cim/lib/XenKvmLib/pool.py b/suites/libvirt-cim/lib/XenKvmLib/pool.py
index a5ca331..86898b1 100644
--- a/suites/libvirt-cim/lib/XenKvmLib/pool.py
+++ b/suites/libvirt-cim/lib/XenKvmLib/pool.py
@@ -26,7 +26,7 @@
 from VirtLib import utils
 from CimTest.Globals import logger, CIM_NS
 from CimTest.ReturnCodes import PASS, FAIL, SKIP
 from XenKvmLib.classes import get_typed_class, inst_to_mof
-from XenKvmLib.const import get_provider_version, default_pool_name
+from XenKvmLib.const import get_provider_version, default_pool_name
 from XenKvmLib.enumclass import EnumInstances, GetInstance, EnumNames
 from XenKvmLib.assoc import Associators
 from VirtLib.utils import run_remote
@@ -37,7 +37,7 @@ from CimTest.CimExt import CIMClassMOF
 from XenKvmLib.vxml import NetXML, PoolXML
 from XenKvmLib.xm_virt_util import virsh_version
 from XenKvmLib.vsms import RASD_TYPE_STOREVOL
-from XenKvmLib.common_util import destroy_diskpool
+from XenKvmLib.common_util import destroy_diskpool, get_version_number
 
 cim_errno = pywbem.CIM_ERR_NOT_SUPPORTED
 cim_mname = "CreateChildResourcePool"
@@ -183,7 +183,7 @@ def undefine_netpool(server, virt, net_name):
 
 def undefine_diskpool(server, virt, dp_name):
     libvirt_version = virsh_version(server, virt)
-    if libvirt_version >= '0.4.1':
+    if get_version_number(libvirt_version) >= get_version_number("0.4.1"):
         if dp_name == None:
             return FAIL
 
@@ -403,7 +403,7 @@ def cleanup_pool_vol(server, virt, pool_name, vol_name,
             status = destroy_diskpool(server, virt, pool_name)
             if status != PASS:
                 raise Exception("Unable to destroy diskpool '%s'" % pool_name)
-            else:
+            else:
                 status = undefine_diskpool(server, virt, pool_name)
                 if status != PASS:
                     raise Exception("Unable to undefine diskpool '%s'" \
-- 
1.7.1
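One reviewer-style observation on the get_nfs_bin hunk: get_version_number scales a value by how many dot-separated components the string has, so the Fedora cutoff of 1500 only triggers when the release string carries a minor component. A sketch using a modern-Python equivalent of the helper (the release strings here are illustrative examples, not taken from the thread):

```python
def get_version_number(version_str):
    # Modern-Python equivalent of the helper added in patch 1:
    # weight each dot-separated component by a factor of 100.
    total, multiple = 0, 1
    for part in reversed(version_str.split(".")):
        total += int(part) * multiple
        multiple *= 100
    return total

# A two-component string is scaled: "6.4" -> 6*100 + 4 = 604 (< 700, pre-RHEL7)
print(get_version_number("6.4"))   # 604

# A bare release number is not scaled: "18" -> 18, which never
# reaches the 1500 cutoff, so the Fedora branch would not trigger.
print(get_version_number("18"))    # 18
```

If Fedora's release string is a bare number like "18", the ">= 1500" comparison may need a second look; John's float-based variant later in the thread sidesteps this for two-component strings.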

On 04/08/2013 06:16 AM, Wenchao Xia wrote:
This patch series follows John's 9 patches for cimtest. With it applied, cimtest should fail only 3 cases on RH6.4:
HostSystem - 01_enum.py: FAIL
HostSystem - 03_hs_to_settdefcap.py: FAIL
VirtualSystemManagementService - 19_definenetwork_ers.py: FAIL
This series is only for review and test. It may need adjustment and merging with John's patches, and the author name needs to be changed (not root :|), so please do not push it directly.
root (3):
  test: common_util, use number to check version
  test: rasd, use int as comparison condition for libvirt version
  test: RPCS, fix nfs issue
 .../12_create_netfs_storagevolume_errs.py       |  2 +-
 suites/libvirt-cim/lib/XenKvmLib/common_util.py | 32 ++++++++++++++++---
 suites/libvirt-cim/lib/XenKvmLib/pool.py        |  8 ++--
 suites/libvirt-cim/lib/XenKvmLib/rasd.py        |  7 ++--
 4 files changed, 36 insertions(+), 13 deletions(-)
While it seems the change resolves some issues I saw in my initial run, I think the official patch needs to describe the problem/symptom and resolution more clearly. In particular, is the change because cimtest was improperly handling the result of "virsh -v"? Was this only a rhel64 issue?

My Fedora system still gets only Indication failures - so there's no 'regressions' there. Unfortunately there are still a number of errors on my rhel64 which I'm looking into. They could be environmental, but I'd still like to get a handle on them. I get the following errors:

ElementCapabilities - 02_reverse.py: FAIL
ElementCapabilities - 04_reverse_errs.py: FAIL
ElementCapabilities - 05_hostsystem_cap.py: FAIL
ElementConforms - 01_forward.py: FAIL
ElementConforms - 02_reverse.py: FAIL
HostedService - 01_forward.py: FAIL
HostedService - 02_reverse.py: FAIL
HostedService - 04_reverse_errs.py: FAIL
HostSystem - 03_hs_to_settdefcap.py: FAIL
Profile - 01_enum.py: FAIL
Profile - 02_profile_to_elec.py: FAIL
Profile - 03_rprofile_gi_errs.py: FAIL
RedirectionService - 01_enum_crs.py: FAIL
ReferencedProfile - 01_verify_refprof.py: FAIL
ReferencedProfile - 02_refprofile_errs.py: FAIL
SettingsDefineCapabilities - 04_forward_vsmsdata.py: FAIL
SettingsDefineCapabilities - 05_reverse_vsmcap.py: FAIL
VirtualSystemManagementService - 19_definenetwork_ers.py: FAIL
VirtualSystemMigrationCapabilities - 01_enum.py: FAIL

John

On 04/09/2013 10:40 AM, John Ferlan wrote:
On 04/08/2013 06:16 AM, Wenchao Xia wrote:
This patch series follows John's 9 patches for cimtest. With it applied, cimtest should fail only 3 cases on RH6.4:
HostSystem - 01_enum.py: FAIL
HostSystem - 03_hs_to_settdefcap.py: FAIL
VirtualSystemManagementService - 19_definenetwork_ers.py: FAIL
This series is only for review and test. It may need adjustment and merging with John's patches, and the author name needs to be changed (not root :|), so please do not push it directly.
root (3):
  test: common_util, use number to check version
  test: rasd, use int as comparison condition for libvirt version
  test: RPCS, fix nfs issue
 .../12_create_netfs_storagevolume_errs.py       |  2 +-
 suites/libvirt-cim/lib/XenKvmLib/common_util.py | 32 ++++++++++++++++---
 suites/libvirt-cim/lib/XenKvmLib/pool.py        |  8 ++--
 suites/libvirt-cim/lib/XenKvmLib/rasd.py        |  7 ++--
 4 files changed, 36 insertions(+), 13 deletions(-)
While it seems the change resolves some issues I saw in my initial run, I think the official patch needs to describe the problem/symptom and resolution more clearly. In particular, is the change because cimtest was improperly handling the result of the "virsh -v"? Was this only a rhel64 issue?
Uh, duh. Should have held off hitting send for just a few minutes.

My version on rhel64 is "0.10.2" while on my f18 system it was "1.0.3", so naturally when comparing against "0.4.1" I can "see" why the change was necessary. I can also understand why this is a "new" regression since probably the last time tests were run the virsh version was "0.9.*" or "0.8.*"...

While I agree what you did resolves some issues - I think the change is incomplete. You've only changed a few places and my cscope tells me there are 15 callers to virsh_version().

Let's see what I can come up with...

John
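The systematic fix John alludes to - comparing version strings component-wise rather than lexically - is commonly done with integer tuples. A minimal sketch, not taken from any of the patches in this thread:

```python
def version_tuple(v):
    # "0.10.2" -> (0, 10, 2); tuples compare element by element,
    # so mixed-width components ("10" vs "4") sort correctly.
    return tuple(int(x) for x in v.split("."))

# The rhel64 and f18 versions from the message both compare correctly:
print(version_tuple("0.10.2") >= version_tuple("0.4.1"))   # True
print(version_tuple("1.0.3") >= version_tuple("0.10.2"))   # True
```

Any of the 15 virsh_version() callers could use such a helper instead of a raw string comparison; suffixed versions (e.g. "0.9.13-rc1") would need extra parsing.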

于 2013-4-9 23:45, John Ferlan 写道:
On 04/09/2013 10:40 AM, John Ferlan wrote:
On 04/08/2013 06:16 AM, Wenchao Xia wrote:
This patch series follows John's 9 patches for cimtest. With it applied, cimtest should fail only 3 cases on RH6.4:
HostSystem - 01_enum.py: FAIL
HostSystem - 03_hs_to_settdefcap.py: FAIL
VirtualSystemManagementService - 19_definenetwork_ers.py: FAIL
This series is only for review and test. It may need adjustment and merging with John's patches, and the author name needs to be changed (not root :|), so please do not push it directly.
root (3):
  test: common_util, use number to check version
  test: rasd, use int as comparison condition for libvirt version
  test: RPCS, fix nfs issue
 .../12_create_netfs_storagevolume_errs.py       |  2 +-
 suites/libvirt-cim/lib/XenKvmLib/common_util.py | 32 ++++++++++++++++---
 suites/libvirt-cim/lib/XenKvmLib/pool.py        |  8 ++--
 suites/libvirt-cim/lib/XenKvmLib/rasd.py        |  7 ++--
 4 files changed, 36 insertions(+), 13 deletions(-)
While it seems the change resolves some issues I saw in my initial run, I think the official patch needs to describe the problem/symptom and resolution more clearly. In particular, is the change because cimtest was improperly handling the result of the "virsh -v"? Was this only a rhel64 issue?
Uh, duh. Should have held off hitting send for just a few minutes.
My version on rhel64 is "0.10.2" while on my f18 system it was "1.0.3", so naturally when comparing against "0.4.1" I can "see" why the change was necessary. I can also understand why this is a "new" regression since probably the last time tests were run the virsh version was "0.9.*" or "0.8.*"...
While I agree what you did resolves some issues - I think the change is incomplete. You've only changed a few places and my cscope tells me there are 15 callers to virsh_version().
Let's see what I can come up with...
John
Yes, this fixes all the bugs that show up on my machine. There may be other callers, but I wanted to send this out first to get things working. I like your patches which compare the two version strings from libvirt. :>

I get only 3 failing cases now; maybe you can paste some details of your failures here.

Some things I know are needed to make it work:
1 the host must be able to resolve its own name, that is, ping [MACHINENAME] must succeed.
2 two fake images must be created in /var/lib/libvirt/images.
3 the default disk pool "default" must be removed with "virsh pool-undefine".
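The first prerequisite (the host resolving its own name) can be sanity-checked from Python before running the suite. A hypothetical helper, not part of cimtest:

```python
import socket

def host_resolves_itself():
    # Mirrors "ping <hostname>" succeeding: the local hostname must
    # resolve to an address via the normal resolver path (/etc/hosts
    # or DNS); cimtest's remote-access helpers rely on this.
    try:
        socket.gethostbyname(socket.gethostname())
        return True
    except socket.gaierror:
        return False

if not host_resolves_itself():
    print("Add this host's name to /etc/hosts before running cimtest")
```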
_______________________________________________
Libvirt-cim mailing list
Libvirt-cim@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-cim
--
Best Regards
Wenchao Xia

On 04/10/2013 04:26 AM, Wenchao Xia wrote:
于 2013-4-9 23:45, John Ferlan 写道:
On 04/09/2013 10:40 AM, John Ferlan wrote:
On 04/08/2013 06:16 AM, Wenchao Xia wrote:
Yes, this fixes all the bugs that show up on my machine. There may be other callers, but I wanted to send this out first to get things working. I like your patches which compare the two version strings from libvirt. :>
I get only 3 failing cases now; maybe you can paste some details of your failures here.
Some things I know are needed to make it work:
1 the host must be able to resolve its own name, that is, ping [MACHINENAME] must succeed.
2 two fake images must be created in /var/lib/libvirt/images.
3 the default disk pool "default" must be removed with "virsh pool-undefine".
I'm *painfully* aware of #2 & #3!!! The "default" libvirt location is also /var/lib/libvirt/images and cimtest kept failing strangely until I remembered that two storage pools cannot use the same path to storage. The cimtest results were not very "helpful" at discerning that, though, with just the following:

ElementAllocatedFromPool - 01_forward.py: FAIL
ERROR - Expected at least one KVM_DiskPool instance
ERROR - Exception details : Failed to get pool details CIM_ERR_NOT_FOUND: No such instance (cimtest-diskpool)

I knew cimtest-diskpool wasn't being created, but I had no idea why. I have it on a list of things to do to generate better errors or use a different location (either configurable or chosen).

As for the other errors - I'm running libvirt-cim in the rhel6.4 environment with libvirt 0.10.2 installed. I have no predefined domains, which differs from my f18 environment. I'm thinking there's some amount of environment differences which I don't yet have the "history" to recognize right away.

I presently have 22 failures -

* 11 are because neither the Migration Service nor the Capabilities MOF is present. Whether that's not installed by design, I haven't yet figured out.

ElementCapabilities - 02_reverse.py: FAIL
ElementCapabilities - 04_reverse_errs.py: FAIL
ElementCapabilities - 05_hostsystem_cap.py: FAIL
ElementConforms - 01_forward.py: FAIL
HostedService - 01_forward.py: FAIL
HostedService - 02_reverse.py: FAIL
HostedService - 04_reverse_errs.py: FAIL
SettingsDefineCapabilities - 04_forward_vsmsdata.py: FAIL
SettingsDefineCapabilities - 05_reverse_vsmcap.py: FAIL
VirtualSystemMigrationCapabilities - 01_enum.py: FAIL
VirtualSystemMigrationCapabilities - 02_vsmc_gi_errs.py: FAIL

* 2 are because of the bad comparison in RPCS for RHEL versions, although I used a slightly different mechanism than you did:

-                   (elems[0] == 'Red' and int(elems[i+1]) >= 7):
+                   (elems[0] == 'Red' and float(elems[i+1]) >= 7.0):

* 9 I still need to research some more:

--------------------------------------------------------------------
ElementConforms - 02_reverse.py: FAIL
ERROR - Failed to get associators information for KVM_ElementConformsToProfile
ERROR - Exception: u'KVM_ComputerSystem'
--------------------------------------------------------------------
ElementConforms - 03_ectp_fwd_errs.py: XFAIL
ERROR - 'KVM_ElementConformsToProfile' association failed to generate an exception and 'INVALID_InstID_Keyname' passed.
ERROR - ------ FAILED: INVALID_InstID_Keyname------
ERROR - 'KVM_ElementConformsToProfile' association failed to generate an exception and 'INVALID_InstID_Keyvalue' passed.
ERROR - ------ FAILED: INVALID_InstID_Keyvalue------
ERROR - 'KVM_ElementConformsToProfile' association failed to generate an exception and 'INVALID_InstID_Keyname' passed.
ERROR - ------ FAILED: INVALID_InstID_Keyname------
ERROR - 'KVM_ElementConformsToProfile' association failed to generate an exception and 'INVALID_InstID_Keyvalue' passed.
ERROR - ------ FAILED: INVALID_InstID_Keyvalue------
--------------------------------------------------------------------
HostSystem - 03_hs_to_settdefcap.py: FAIL
ERROR - Failed to get associatornames according to KVM_AllocationCapabilities
ERROR - Exception: list index out of range
--------------------------------------------------------------------
Profile - 01_enum.py: FAIL
ERROR - Profile CIM:DSP1042-SystemVirtualization-1.0.0 is not found
ERROR - Properties check for KVM_RegisteredProfile failed
--------------------------------------------------------------------
Profile - 02_profile_to_elec.py: FAIL
ERROR - KVM_RegisteredProfile with Virtual System Profile was not returned
--------------------------------------------------------------------
Profile - 03_rprofile_gi_errs.py: FAIL
ERROR - Unexpected errno 6, desc CIM_ERR_NOT_FOUND: KVM_RegisteredProfile.InstanceID="INVALID_Instid_KeyValue"
ERROR - Expected No such instance 6
ERROR - NameError : global name 'tc' is not defined
Traceback (most recent call last):
  File "/home/cimtest.work/suites/libvirt-cim/lib/XenKvmLib/const.py", line 141, in do_try
    rc = f()
  File "03_rprofile_gi_errs.py", line 85, in main
    logger.error("------ FAILED: %s %s ------", cn, tc)
NameError: global name 'tc' is not defined
ERROR - None
--------------------------------------------------------------------
RedirectionService - 01_enum_crs.py: FAIL
01_enum_crs.py:29: DeprecationWarning: the sets module is deprecated
  from sets import Set
ERROR - TypeError : __call__() takes exactly 1 argument (2 given)
Traceback (most recent call last):
  File "/home/cimtest.work/suites/libvirt-cim/lib/XenKvmLib/const.py", line 141, in do_try
    rc = f()
  File "01_enum_crs.py", line 113, in main
    if res_val != exp_val:
TypeError: __call__() takes exactly 1 argument (2 given)
ERROR - None
--------------------------------------------------------------------
ReferencedProfile - 01_verify_refprof.py: FAIL
ERROR - KVM_RegisteredProfile returned 0 Profile objects, expected atleast 5
--------------------------------------------------------------------
ReferencedProfile - 02_refprofile_errs.py: FAIL
ERROR - KVM_RegisteredProfile returned 0 Profile objects, expected atleast 5
--------------------------------------------------------------------

John
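For the RPCS comparison, both mechanisms mentioned in the thread classify a RHEL "6.4" release string the same way; they differ on strings with more than two components. A quick check (using a modern-Python equivalent of cimtest's get_version_number, for illustration only):

```python
def get_version_number(version_str):
    # Modern-Python equivalent of the cimtest helper: weight each
    # dot-separated component by a factor of 100.
    total, multiple = 0, 1
    for part in reversed(version_str.split(".")):
        total += int(part) * multiple
        multiple *= 100
    return total

# Both approaches agree for a "major.minor" release string:
print(float("6.4") >= 7.0)                # False
print(get_version_number("6.4") >= 700)   # False

# float() cannot parse a three-component string ("6.4.1" raises
# ValueError), while the numeric helper handles it:
print(get_version_number("6.4.1"))        # 60401
```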

On 04/10/2013 04:26 AM, Wenchao Xia wrote:
于 2013-4-9 23:45, John Ferlan 写道:
On 04/09/2013 10:40 AM, John Ferlan wrote:
On 04/08/2013 06:16 AM, Wenchao Xia wrote:
Yes, this fixes all the bugs that show up on my machine. There may be other callers, but I wanted to send this out first to get things working. I like your patches which compare the two version strings from libvirt. :>
I get only 3 failing cases now; maybe you can paste some details of your failures here.
Some things I know are needed to make it work:
1 the host must be able to resolve its own name, that is, ping [MACHINENAME] must succeed.
2 two fake images must be created in /var/lib/libvirt/images.
3 the default disk pool "default" must be removed with "virsh pool-undefine".
I'm *painfully* aware of #2 & #3!!! The "default" libvirt location is also /var/lib/libvirt/images and cimtest kept failing strangely until I remembered that two storage pools cannot use the same path to storage. The cimtest results were not very "helpful" at discerning that though with just the following:

Me too; maybe we can skip this for now and improve it in the future.
ElementAllocatedFromPool - 01_forward.py: FAIL ERROR - Expected at least one KVM_DiskPool instance ERROR - Exception details : Failed to get pool details CIM_ERR_NOT_FOUND: No such instance (cimtest-diskpool)
I knew cimtest-diskpool wasn't being created, but I had no idea why. I have it on a list of things to do to generate better errors or use a different location (either configurable or chosen).
As for the other errors - I'm running libvirt-cim in the rhel6.4 environment with libvirt 0.10.2 installed. I have no predefined domains which differs from my f18 environment. I'm thinking there's some amount of environment differences which I don't yet have the "history" to recognize right away.
I presently have 22 failures -
* 11 are because neither the Migration Service nor the Capabilities MOF is present. Whether that's not installed by design, I haven't yet figured out.
How did you install libvirt-cim? I guess "make install" still has problems; could you try removing the old libvirt-cim rpm, then "make rpm" in the libvirt-cim source tree, and then rpm -ivh? Also, have you installed the pywbem rpm? The softlink seems not to work properly, since pywbem has dependency libraries that need to be installed.

I suggest making the suite work normally for now, if some manual env cleanup/preparation can solve it, and adding a README in the suite to note these issues; we can fix them one by one later.
ElementCapabilities - 02_reverse.py: FAIL ElementCapabilities - 04_reverse_errs.py: FAIL ElementCapabilities - 05_hostsystem_cap.py: FAIL ElementConforms - 01_forward.py: FAIL HostedService - 01_forward.py: FAIL HostedService - 02_reverse.py: FAIL HostedService - 04_reverse_errs.py: FAIL SettingsDefineCapabilities - 04_forward_vsmsdata.py: FAIL SettingsDefineCapabilities - 05_reverse_vsmcap.py: FAIL VirtualSystemMigrationCapabilities - 01_enum.py: FAIL VirtualSystemMigrationCapabilities - 02_vsmc_gi_errs.py: FAIL
* 2 are because of the bad comparison in RPCS for RHEL versions, although I used a slightly different mechanism than you did:
-                   (elems[0] == 'Red' and int(elems[i+1]) >= 7):
+                   (elems[0] == 'Red' and float(elems[i+1]) >= 7.0):
* 9 I still need to research some more:
--------------------------------------------------------------------
ElementConforms - 02_reverse.py: FAIL ERROR - Failed to get associators information for KVM_ElementConformsToProfile ERROR - Exception: u'KVM_ComputerSystem' --------------------------------------------------------------------
ElementConforms - 03_ectp_fwd_errs.py: XFAIL ERROR - 'KVM_ElementConformsToProfile' association failed to generate an exception and 'INVALID_InstID_Keyname' passed. ERROR - ------ FAILED: INVALID_InstID_Keyname------ ERROR - 'KVM_ElementConformsToProfile' association failed to generate an exception and 'INVALID_InstID_Keyvalue' passed. ERROR - ------ FAILED: INVALID_InstID_Keyvalue------ ERROR - 'KVM_ElementConformsToProfile' association failed to generate an exception and 'INVALID_InstID_Keyname' passed. ERROR - ------ FAILED: INVALID_InstID_Keyname------ ERROR - 'KVM_ElementConformsToProfile' association failed to generate an exception and 'INVALID_InstID_Keyvalue' passed. ERROR - ------ FAILED: INVALID_InstID_Keyvalue------ --------------------------------------------------------------------
HostSystem - 03_hs_to_settdefcap.py: FAIL ERROR - Failed to get associatornames according to KVM_AllocationCapabilities ERROR - Exception: list index out of range --------------------------------------------------------------------
Profile - 01_enum.py: FAIL ERROR - Profile CIM:DSP1042-SystemVirtualization-1.0.0 is not found ERROR - Properties check for KVM_RegisteredProfile failed --------------------------------------------------------------------
Profile - 02_profile_to_elec.py: FAIL ERROR - KVM_RegisteredProfile with Virtual System Profile was not returned --------------------------------------------------------------------
Profile - 03_rprofile_gi_errs.py: FAIL ERROR - Unexpected errno 6, desc CIM_ERR_NOT_FOUND: KVM_RegisteredProfile.InstanceID="INVALID_Instid_KeyValue" ERROR - Expected No such instance 6 ERROR - NameError : global name 'tc' is not defined Traceback (most recent call last): File "/home/cimtest.work/suites/libvirt-cim/lib/XenKvmLib/const.py", line 141, in do_try rc = f() File "03_rprofile_gi_errs.py", line 85, in main logger.error("------ FAILED: %s %s ------", cn, tc) NameError: global name 'tc' is not defined ERROR - None --------------------------------------------------------------------
RedirectionService - 01_enum_crs.py: FAIL 01_enum_crs.py:29: DeprecationWarning: the sets module is deprecated from sets import Set ERROR - TypeError : __call__() takes exactly 1 argument (2 given) Traceback (most recent call last): File "/home/cimtest.work/suites/libvirt-cim/lib/XenKvmLib/const.py", line 141, in do_try rc = f() File "01_enum_crs.py", line 113, in main if res_val != exp_val: TypeError: __call__() takes exactly 1 argument (2 given) ERROR - None --------------------------------------------------------------------
ReferencedProfile - 01_verify_refprof.py: FAIL ERROR - KVM_RegisteredProfile returned 0 Profile objects, expected atleast 5 --------------------------------------------------------------------
ReferencedProfile - 02_refprofile_errs.py: FAIL ERROR - KVM_RegisteredProfile returned 0 Profile objects, expected atleast 5 --------------------------------------------------------------------
John
-- Best Regards Wenchao Xia

On 04/10/2013 10:51 PM, Wenchao Xia wrote:
On 04/10/2013 04:26 AM, Wenchao Xia wrote:
于 2013-4-9 23:45, John Ferlan 写道:
On 04/09/2013 10:40 AM, John Ferlan wrote:
On 04/08/2013 06:16 AM, Wenchao Xia wrote:
I'm *painfully* aware of #2 & #3!!! The "default" libvirt location is also /var/lib/libvirt/images and cimtest kept failing strangely until
Me too; maybe we can skip this for now and improve it in the future.
Right - I'm not going to solve the problem - just have it noted.
How did you install libvirt-cim? I guess "make install" still has problems; could you try removing the old libvirt-cim rpm, then "make rpm" in the libvirt-cim source tree, and then rpm -ivh? Also, have you installed the pywbem rpm? The softlink seems not to work properly, since pywbem has dependency libraries that need to be installed.
What's interesting about the RH64 system is I used yum/rpm install only... The rest is a bit lengthy as I've added to it during the day, but perhaps it's good to know/record the steps/process used.

I yum localinstall'd libvirt-cim... pywbem was installed... I erased and reinstalled libvirt-cim today (sometimes that helps) and yet the MigrationService was still missing, although MigrationCapabilities miraculously showed up.

A 'wbemcli ecn http://root:password@localhost/root/virt | grep Migration' did not find a CIM_VirtualSystemMigrationService. After digging through the various scripts, I believe that means the {KVM|LXC|XEN} versions of the mof won't be installed/loaded.

Why it's not there - I have no clue. Per the libvirt.org/CIM/schema webpage, I installed the v2.16 experimental schema on my RH64 box. That schema doesn't have that class/mof; however, on my F18 box it seems I have a later schema installed (v2.33) which does have it. Not sure how I installed that... According to everything I found online, the migration schema was added in v2.17.

After spending some time thinking about and investigating the differences between the two systems, I tried using the 'make {preinstall|install|postinstall}' options and had some interesting results (with a restart of tog-pegasus between each make option). Interesting as in things now work (except for one small bug in 'enum_volumes' where 'None' was returned for an empty storage pool, causing an exception - I have a fix for it). Of most interest is that somehow CIM_VirtualSystemMigrationService is now present.

What I noted as part of that installation, which went to /usr/lib64/share/libvirt-cim, is that there is a 2.21 schema zip file in that directory which gets inflated and installed as part of the 'make' processing (by one of the two scripts). Since 2.21 > 2.17, that migration mof is there, so now it's on that system. Why this doesn't happen as part of RPM install - I'm not sure.

So to summarize -

1. Now I have no test failures on my RH64 system other than the already failing Indications tests.

2. I believe there's some disconnect between what happens via the rpm install and what happens during the 'make' options, but I don't know where to look and right now I really don't have the cycles to investigate. I'm guessing that somewhere along the line 2.21 was made the default, but the web pages didn't get updated and something in the RPM install process didn't quite work right, but it didn't matter or wasn't noticed because perhaps no one went through the pain of a clean installation environment while strictly following the web pages.

John

于 2013-4-12 4:35, John Ferlan 写道:
On 04/10/2013 10:51 PM, Wenchao Xia wrote:
On 04/10/2013 04:26 AM, Wenchao Xia wrote:
于 2013-4-9 23:45, John Ferlan 写道:
On 04/09/2013 10:40 AM, John Ferlan wrote:
On 04/08/2013 06:16 AM, Wenchao Xia wrote:
I'm *painfully* aware of #2 & #3!!! The "default" libvirt location is also /var/lib/libvirt/images and cimtest kept failing strangely until
Me too; maybe we can skip this for now and improve it in the future.
Right - I'm not going to solve the problem - just have it noted.
How did you install libvirt-cim? I guess "make install" still has problems; could you try removing the old libvirt-cim rpm, then "make rpm" in the libvirt-cim source tree, and then rpm -ivh? Also, have you installed the pywbem rpm? The softlink seems not to work properly, since pywbem has dependency libraries that need to be installed.
What's interesting about the RH64 system is I used yum/rpm install only... The rest is a bit lengthy as I've added to it during the day, but perhaps it's good to know/record the steps/process used.
I yum localinstall'd libvirt-cim... pywbem was installed... I erased and reinstalled libvirt-cim today (sometimes that helps) and yet the MigrationService was still missing, although MigrationCapabilities miraculously showed up.
A 'wbemcli ecn http://root:password@localhost/root/virt | grep Migration' did not find a CIM_VirtualSystemMigrationService.
After digging through the various scripts I believe that means the {KVM|LXC|XEN} versions of the mof won't be installed/loaded
Why it's not there - I have no clue. Per the libvirt.org/CIM/schema webpage, I installed the v2.16 experimental schema on my RH64 box. That schema doesn't have that class/mof; however, on my F18 box it seems I have a later schema installed (v2.33) which does have it. Not sure how I installed that... According to everything I found online, the migration schema was added in v2.17.
After spending some time thinking about and investigating the differences between the two systems, I tried using the 'make {preinstall|install|postinstall}' options and had some interesting results (with a restart of tog-pegasus between each make option). Interesting as in things now work, except for one small bug in 'enum_volumes' where 'None' was returned for an empty storage pool, causing an exception - I have a fix for it. Of most interest is that somehow CIM_VirtualSystemMigrationService is now present. What I noted as part of that installation, which went to /usr/lib64/share/libvirt-cim, is that there is a 2.21 schema zip file in that directory which gets inflated and installed as part of the 'make' processing (by one of the two scripts). Since 2.21 > 2.17, that migration mof is there, so now it's on that system. Why this doesn't happen as part of the RPM install - I'm not sure.
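As an aside, comparisons like "2.21 > 2.17" (and cimtest's libvirt version checks, which the follow-up patches touch) only behave when done numerically; comparing dotted version strings as plain strings misorders components of different widths. A minimal sketch, equivalent in intent to the get_version_number() helper from the patch:

```python
def version_tuple(version_str):
    # Split "X.Y.Z" into a tuple of ints so each component is
    # compared numerically rather than character by character.
    return tuple(int(part) for part in version_str.split("."))

# Plain string comparison gets mixed-width components backwards:
assert "0.10.2" < "0.4.1"   # lexicographically "true", semantically wrong
# Numeric tuple comparison is correct:
assert version_tuple("0.10.2") > version_tuple("0.4.1")
assert version_tuple("2.21") > version_tuple("2.17")
```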
So to summarize - 1. Now I have no test failures on my RH64 system other than the already failing Indications tests.
Strange, I still see those cases pass; could you try upstream cimtest on it? I found they fail after applying your 9 patches but succeed before, and they succeed again after applying these 3 patches of mine.
2. I believe there's some disconnect between what happens via the rpm install and what happens during the 'make' options, but I don't know where to look and right now I really don't have the cycles to investigate. I'm guessing that somewhere along the line 2.21 was made the default, but the web pages didn't get updated and something in the RPM install process didn't quite work right, but it didn't matter or wasn't noticed because perhaps no one went through the pain of a clean installation environment while strictly following the web pages.
The libvirt-cim make process automatically downloads the 2.21 base schema and installs it; it seems the root cause is that the yum install script is missing that part. So rpm -ivh would succeed, and yum upgrade would succeed (haven't checked), but yum install fails. This is a bug that needs to be solved, since users tend to use yum when it is available.
John
_______________________________________________
Libvirt-cim mailing list
Libvirt-cim@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-cim
--
Best Regards
Wenchao Xia

On 2013-4-12 10:55, Wenchao Xia wrote:
Why it's not there - I have no clue. Per the libvirt.org/CIM/schema webpage, I installed the v216 experimental schema on my RH64 box. That schema doesn't have that class/mof; however, on my F18 box it seems I have a later schema installed (v2.33) which does have it. Not sure how I installed that... According to everything I found online the migration schema was added in v2.17.
I guess you used yum update to install RH6.4, right? Manually installing the v216 base schema is surely one cause, but I am not sure whether, on a yum install of tog-pegasus/libvirt-cim, the base schema gets a chance to be registered; one thing I am sure of is that installation from the image disk is OK.
To summarize: to fix this problem, the only thing needed is to uninstall the experimental base schema and try yum install libvirt-cim, to see if the base schema exists. If not, check the yum section in the spec file.
By the way, it is embarrassing that the web page misguides users; maybe you can share the link and we should fix it when time allows.

On 04/12/2013 02:41 AM, Wenchao Xia wrote:
Why it's not there - I have no clue. Per the libvirt.org/CIM/schema webpage, I installed the v216 experimental schema on my RH64 box. That schema doesn't have that class/mof; however, on my F18 box it seems I have a later schema installed (v2.33) which does have it. Not sure how I installed that... According to everything I found online the migration schema was added in v2.17.
I guess you used yum update to install RH6.4, right? Manually installing the v216 base schema is surely one cause, but I am not sure whether, on a yum install of tog-pegasus/libvirt-cim, the base schema gets a chance to be registered; one thing I am sure of is that installation from the image disk is OK.
Not sure of the question. I didn't install the 6.4 base system; however, that shouldn't matter. I believe the install was done via some provisioning tool like cobbler. Installation of the other packages necessary to make things work has been mostly a hunt-and-gather exercise, then using rpm -ivh to install, since by default the yum.repos.d are devoid of any way to yum update.
To summarize: to fix this problem, the only thing needed is to uninstall the experimental base schema and try yum install libvirt-cim, to see if the base schema exists. If not, check the yum section in the spec file.
Right - when/if I find more time I'll try the various options. I'd probably start from scratch and be more careful about documenting everything I had to do.
By the way, it is embarrassing that the web page misguides users; maybe you can share the link and we should fix it when time allows.
The two primary pages I've looked at are:
http://libvirt.org/CIM/schema.html
http://wiki.libvirt.org/page/Libvirt-cim_setup
Beyond that, the 'README' from the 'libvirt-cim' git repository provided some tips. Generally speaking, though, anything from the /CIM/ pages is probably a bit old. Figuring out all the steps and packages one should take would be a nice exercise; however, it doesn't seem there are that many new users out there, so it's not a "top of the list" item to undertake.
John

On 04/11/2013 10:55 PM, Wenchao Xia wrote:
On 2013-4-12 4:35, John Ferlan wrote:
So to summarize - 1. Now I have no test failures on my RH64 system other than the already failing Indications tests.
Strange, I still see those cases pass; could you try upstream cimtest on it? I found they fail after applying your 9 patches but succeed before, and they succeed again after applying these 3 patches of mine.
Took me a bit to find the previous email on this, but I think this is a "network configuration issue" on my end rather than a test issue. 'cimconfig -c -l' returns 'enableIndicationService=true'; however, the value of 'fullyQualifiedHostName=' is, at best, not correct. It's not pingable or connectable - it's an address/name that I assume is handled elsewhere in the Red Hat corporate address translation world.
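For what it's worth, whether a configured fullyQualifiedHostName resolves at all can be checked with a few lines of Python (a generic sketch, not part of cimtest):

```python
import socket

def resolvable(hostname):
    # Indication delivery needs the CIMOM's advertised host name to
    # resolve; a gaierror here means the name is bogus for this host.
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

print(resolvable("localhost"))  # → True on any sane setup
```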
2. I believe there's some disconnect between what happens via the rpm install and what happens during the 'make' options, but I don't know where to look and right now I really don't have the cycles to investigate. I'm guessing that somewhere along the line 2.21 was made the default, but the web pages didn't get updated and something in the RPM install process didn't quite work right, but it didn't matter or wasn't noticed because perhaps no one went through the pain of a clean installation environment while strictly following the web pages.
The libvirt-cim make process automatically downloads the 2.21 base schema and installs it; it seems the root cause is that the yum install script is missing that part. So rpm -ivh would succeed, and yum upgrade would succeed (haven't checked), but yum install fails. This is a bug that needs to be solved, since users tend to use yum when it is available.
Whether rpm -ivh would do the right thing - I have no idea. I used 'yum localinstall' on the result of a 'make rpm'. I'm not a yum expert - just a user - so I have no idea what the internal difference between yum install and yum update is. As a consumer/user of libvirt-cim, if the right CIM schema isn't installed, then I'd expect it to be installed regardless of which yum option I used.
John
participants (2):
- John Ferlan
- Wenchao Xia