Zhengang Li wrote:
> ComputerSystemIndication - 01_created_indication.py: FAIL
Pegasus crashed after running this test case. The log shows
'BadStatusLine :', which is what Python's httplib raises when the server
drops the connection without sending a response.
> ElementAllocatedFromPool - 03_reverse_errs.py: FAIL
exp: ERR_NOT_FOUND(6) - No such instance
ret: ERR_FAILED(1) - Invalid InstanceID or unsupported pool type
> ElementConforms - 02_reverse.py: FAIL
Binary rpm provider returns CIM_ERR_INVALID_PARAMETER:
KVM_ElementConformsToProfile on the following query:
wbemain -ac KVM_ElementConformsToProfile
'http://u:p@host:5988/root/virt:KVM_ComputerSystem.CreationClassName="KVM_ComputerSystem",Name="domgst"'
The same wbemcli command returns the correct results on another system
running the latest libvirt-cim tree (changeset 533).
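For reference, the same association call looks roughly like the pywbem
sketch below (pywbem is what the test suite uses); the host, credentials
and the 'domgst' guest name are placeholders taken from the command
above:

  import pywbem

  # Connect to the CIMOM (host/user/password are placeholders).
  conn = pywbem.WBEMConnection('http://host:5988', ('u', 'p'),
                               default_namespace='root/virt')

  # Object path of the guest's KVM_ComputerSystem instance.
  cs = pywbem.CIMInstanceName('KVM_ComputerSystem',
                              keybindings={'CreationClassName': 'KVM_ComputerSystem',
                                           'Name': 'domgst'},
                              namespace='root/virt')

  # Equivalent of 'wbemain -ac KVM_ElementConformsToProfile ...':
  # returns the KVM_RegisteredProfile instances the guest conforms to.
  for prof in conn.Associators(cs, AssocClass='KVM_ElementConformsToProfile'):
      print prof['InstanceID']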
I don't see KVM_ElementConformsToProfile.CIM_ElementConformsToProfile
registered in the root/virt namespace on the F9 machine with the
libvirt-cim rpm, while it is present in both the root/interop and
root/virt namespaces on the F9 machine running the latest libvirt-cim
sources.
I tried copying the provider manually into the root/virt namespace and
restarted the cimserver, but I still did not get any results after that.
I don't know if this is the proper way of registering the mof files in
the namespace.
However, the above wbemcli query gives me the expected output on the F9
machine with the rpm when it is issued against the root/interop
namespace, while on the F9 machine with the latest sources I get output
for the query against the root/virt namespace.
Could you please share the namespace and ECTP provider registration
details?
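A quick way to see in which namespaces the class is actually visible is
a GetClass against each one; a rough pywbem sketch (host and credentials
are placeholders):

  import pywbem

  conn = pywbem.WBEMConnection('http://host:5988', ('u', 'p'))
  for ns in ['root/virt', 'root/interop']:
      try:
          conn.GetClass('KVM_ElementConformsToProfile', namespace=ns)
          print '%s: class is registered' % ns
      except pywbem.CIMError, e:
          print '%s: %s' % (ns, e)

This only shows where the class definition exists, not whether the
provider is registered for it, but it should at least confirm the
namespace difference between the rpm and the source build.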
> ElementConforms - 04_ectp_rev_errs.py: FAIL
All negative checks result in CIM_ERR_INVALID_PARAMETER. This should
have the same root cause as ElementConforms.02.
This returned XFAIL on F9 with the rpm binary. It passes with the
latest libvirt-cim sources.
> ElementSettingData - 03_esd_assoc_with_rasd_errs.py: FAIL
This one passed in an individual run. The previous ElementConforms.04
undefine fix doesn't help here; there might be some other missing
undefine.
> NetworkPort - 03_user_netport.py: FAIL
Fails for the 'user' network type.
[Known Issue]
> ReferencedProfile - 01_verify_refprof.py: FAIL
Binary rpm provider gives 2 results on the following query:
wbemein
http://u:p@host:5988/root/interop:KVM_RegisteredProfile
"CIM:DSP1042-SystemVirtualization-1.0.0"
"CIM:DSP1057-VirtualSystem-1.0.0a"
The same wbemcli command returns 5 results with the changeset-533 tree
on another system:
"CIM:DSP1042-SystemVirtualization-1.0.0"
"CIM:DSP1057-VirtualSystem-1.0.0a"
"CIM:DSP1059-GenericDeviceResourceVirtualization-1.0.0"
"CIM:DSP1045-MemoryResourceVirtualization-1.0.0"
"CIM:DSP1081-VirtualSystemMigration-1.0"
Yes, this is correct.
This leads to ReferencedProfile's 'ain' query returning only 2
results.
I did not see the ReferencedProfile query return any results, since
ReferencedProfile is not present on an rpm libvirt-cim based F9
machine; hence the ain query fails without any results.
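For reference, the enumeration and the 'ain' query boil down to
something like this in pywbem (rough sketch; host, credentials and the
profile InstanceID are placeholders based on the output above, and the
association class is assumed to be KVM_ReferencedProfile):

  import pywbem

  conn = pywbem.WBEMConnection('http://host:5988', ('u', 'p'),
                               default_namespace='root/interop')

  # Equivalent of the 'wbemein' above: list the registered profiles.
  for name in conn.EnumerateInstanceNames('KVM_RegisteredProfile'):
      print name['InstanceID']

  # The 'ain' query: profiles associated with a given profile.
  prof = pywbem.CIMInstanceName('KVM_RegisteredProfile',
                                keybindings={'InstanceID':
                                    'CIM:DSP1042-SystemVirtualization-1.0.0'},
                                namespace='root/interop')
  for ref in conn.AssociatorNames(prof, AssocClass='KVM_ReferencedProfile'):
      print ref['InstanceID']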
> ReferencedProfile - 02_refprofile_errs.py: FAIL
Same as ReferencedProfile.01
I think ReferencedProfile was added in changeset 500 and the rpm only
contains the changes up to 393, so ReferencedProfile did not get
registered on the machine.
Should we skip the above test cases for the rpm-based F9?
> ResourceAllocationFromPool - 03_forward_errs.py: FAIL
Daisy sent a fix for this. It should pass with her fix.
> ResourcePoolConfigurationService - 03_CreateResourcePool.py: FAIL
> ResourcePoolConfigurationService - 04_CreateChildResourcePool.py: FAIL
> ResourcePoolConfigurationService -
> 06_RemoveResourcesFromResourcePool.py: FAIL
> ResourcePoolConfigurationService - 07_DeleteResourcePool.py: FAIL
CIM_ERR_NOT_SUPPORTED for 03, 04, 06, 07
Fix submitted.
> SettingsDefine - 02_reverse.py: FAIL
ProcRASD.InstanceID is 'domname/0' in the binary rpm provider, while
the test case now expects 'domname/proc'.
> VirtualSystemManagementService - 06_addresource.py: FAIL
The provider's system_to_xml() and 'virsh dumpxml' return different
network XML. The error message complains about a missing <source>
element in a bridged network device.
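To compare the two, one option is to dump the guest XML through the
libvirt Python bindings (same output as 'virsh dumpxml') and diff it
against what system_to_xml() produces; a rough sketch, with the
connection URI and guest name as placeholders:

  import libvirt

  conn = libvirt.open('qemu:///system')   # URI is an assumption
  dom = conn.lookupByName('domgst')       # guest name is a placeholder
  # A bridged interface in the dumped XML should carry <source bridge='...'/>.
  print dom.XMLDesc(0)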
> VirtualSystemSettingDataComponent - 03_vssdc_fwd_errs.py: FAIL
Daisy sent a fix this morning. It should pass once the patch is applied.