Re: Test Run Summary (Aug 11 2009): Xen on Red Hat Enterprise Linux Server release 5.3 (Tikanga) with Pegasus

Deepti B Kalakeri wrote:
Deepti B Kalakeri wrote:
Due to some problem, I am not able to send mails to the libvirt-cim mailing list.
=================================================
Test Run Summary (Aug 11 2009): Xen on Red Hat Enterprise Linux Server release 5.3 (Tikanga) with Pegasus
=================================================
Distro: Red Hat Enterprise Linux Server release 5.3 (Tikanga)
Kernel: 2.6.18-128.el5xen
libvirt: 0.3.3
Hypervisor: Xen 3.1.0
CIMOM: Pegasus 2.7.1
Libvirt-cim revision: 945
Libvirt-cim changeset: 2de7d9bdb9af
Cimtest revision:
Cimtest changeset:
=================================================
FAIL  : 34
XFAIL : 3
SKIP  : 4
PASS  : 125
-----------------
Total : 166
=================================================
FAIL Test Summary:
ComputerSystemIndication - 01_created_indication.py: FAIL
ComputerSystemMigrationJobIndication - 01_csmig_ind_for_offline_mig.py: FAIL
ElementAllocatedFromPool - 01_forward.py: FAIL
  This test case fails for Xen. Although the test case passes the network pool information while creating the domain, the domain somehow gets created with a bridge-type interface. I checked this with the rpm-based libvirt-cim on RHEL 5.4 snap5 and the test case worked fine, but the same check with the current sources on RHEL 5.4 failed with the same error as on RHEL 5.3.
ElementAllocatedFromPool - 04_forward_errs.py: FAIL
HostSystem - 03_hs_to_settdefcap.py: FAIL
  Submitted a fix for these.
RASD - 01_verify_rasd_fields.py: FAIL
  The test case does not have any problem. I am seeing peculiar behavior: the EnumerateInstances operation from the provider on the RASD is returning a None value in the NetworkName field. I commented out the destroy and undefine parts of the test case and checked whether the wbemcli ein operation on the RASD returned a proper NetworkName value, which it did. The rasd_from_vdev() function in Virt_RASD.c is called indirectly via enu_rasds -> _get_rasds(). The NetworkName value seems to be assigned properly when called from the vssd_to_rasd() function in VSSDC.c, but when called from EnumInstances() in Virt_RASD.c, the NetworkName is getting reset to null. Here is a sample of the debug messages:

Virt_VSSDComponent.c(60): From VSSDC
Virt_RASD.c(756): From RASD
misc_util.c(75): Connecting to libvirt with uri `xen'
device_parsing.c(325): No network source defined, leaving blank
Virt_RASD.c(444): DEBUG NetworkName uis testbridge
Virt_RASD.c(445): DEBUG dev->dev.net.type, bridge is bridge
Virt_RASD.c(446): DEBUG InstanceID is VSSDC_dom/00:11:22:33:44:aa
....
Virt_RASD.c(444): DEBUG DEEPTI NetworkName uis testbridge
Virt_RASD.c(445): DEBUG (STREQ(dev->dev.net.type, bridge is bridge
Virt_RASD.c(446): DEBUG InstanceID is VSSDC_dom/00:11:22:33:44:aa
misc_util.c(75): Connecting to libvirt with uri `xen'
infostore.c(88): Path is /etc/libvirt/cim/Xen_VSSDC_dom
misc_util.c(406): Type is Xen
libvir: Xen error : failed Xen syscall ioctl 3166208
infostore.c(88): Path is /etc/libvirt/cim/Xen_VSSDC_dom
misc_util.c(75): Connecting to libvirt with uri `xen'
instance_util.c(127): Number of keys: 1
instance_util.c(140): Comparing key 0: `InstanceID'
misc_util.c(75): Connecting to libvirt with uri `xen'
instance_util.c(127): Number of keys: 1
instance_util.c(140): Comparing key 0: `InstanceID'
misc_util.c(75): Connecting to libvirt with uri `xen'
device_parsing.c(273): Disk node: disk
instance_util.c(127): Number of keys: 1
instance_util.c(140): Comparing key 0: `InstanceID'
misc_util.c(75): Connecting to libvirt with uri `xen'
instance_util.c(127): Number of keys: 1
instance_util.c(140): Comparing key 0: `InstanceID'
misc_util.c(75): Connecting to libvirt with uri `xen'
device_parsing.c(325): No network source defined, leaving blank
Virt_RASD.c(444): DEBUG NetworkName uis (null)
Virt_RASD.c(445): DEBUG (STREQ(dev->dev.net.type, bridge is bridge
Virt_RASD.c(446): DEBUG InstanceID is VSSDC_dom/00:11:22:33:44:aa
instance_util.c(127): Number of keys: 1
instance_util.c(140): Comparing key 0: `InstanceID'
misc_util.c(75): Connecting to libvirt with uri `xen'
misc_util.c(75): Connecting to libvirt with uri `xen'
infostore.c(88): Path is /etc/libvirt/cim/Xen_VSSDC_dom
instance_util.c(127): Number of keys: 1
instance_util.c(140): Comparing key 0: `InstanceID'
misc_util.c(75): Connecting to libvirt with uri `xen'
misc_util.c(75): Connecting to libvirt with uri `xen'
misc_util.c(406): Type is Xen
libvir: Xen error : failed Xen syscall ioctl 3166208
infostore.c(88): Path is /etc/libvirt/cim/Xen_Domain-0
instance_util.c(127): Number of keys: 1
instance_util.c(140): Comparing key 0: `InstanceID'
misc_util.c(75): Connecting to libvirt with uri `xen'
misc_util.c(75): Connecting to libvirt with uri `xen'
misc_util.c(406): Type is Xen
libvir: Xen error : failed Xen syscall ioctl 3166208
infostore.c(88): Path is /etc/libvirt/cim/Xen_VSSDC_dom
instance_util.c(127): Number of keys: 1
instance_util.c(140): Comparing key 0: `InstanceID'
misc_util.c(75): Connecting to libvirt with uri `xen'
device_parsing.c(325): No network source defined, leaving blank
Virt_RASD.c(444): DEBUG NetworkName uis (null)
Virt_RASD.c(445): DEBUG (STREQ(dev->dev.net.type, bridge is bridge
Virt_RASD.c(446): DEBUG InstanceID is VSSDC_dom/00:11:22:33:44:aa
instance_util.c(127): Number of keys: 1
RASD - 02_enum.py: FAIL
  Submitted a fix for this.
ResourceAllocationFromPool - 01_forward.py: FAIL
  Although the network pool information is supplied when creating the Xen domain, once the domain is created its XML will have the bridge information instead of the pool. Because the provider does not have the source network pool information when queried, this test fails with the following error:
device_parsing.c(325): No network source defined, leaving blank
Virt_DevicePool.c(444): Unable to determine pool since no network source defined
ResourceAllocationFromPool - 02_reverse.py: FAIL
  Same here.
ResourcePoolConfigurationService - 07_DeleteResourcePool.py: FAIL
ResourcePoolConfigurationService - 09_DeleteDiskPool.py: FAIL
SettingsDefineCapabilities - 01_forward.py: FAIL
VSSD - 02_bootldr.py: FAIL
  Submitted a fix for this.
VSSD - 03_vssd_gi_errs.py: FAIL
  Passed when run manually.
VSSD - 04_vssd_to_rasd.py: FAIL
  Submitted a fix for this.
VSSD - 05_set_uuid.py: FAIL
  Submitted a fix for this.
VirtualSystemManagementService - 01_definesystem_name.py: FAIL
VirtualSystemManagementService - 02_destroysystem.py: FAIL
VirtualSystemManagementService - 06_addresource.py: FAIL
  Passed when run manually.
VirtualSystemManagementService - 08_modifyresource.py: FAIL
  Need to look into it.
VirtualSystemManagementService - 09_procrasd_persist.py: FAIL
VirtualSystemManagementService - 12_referenced_config.py: FAIL
  Passed when run manually.
VirtualSystemManagementService - 13_refconfig_additional_devs.py: FAIL
  The test case fails because of duplicate vdevice info. Will fix this.
VirtualSystemManagementService - 15_mod_system_settings.py: FAIL
VirtualSystemManagementService - 17_removeresource_neg.py: FAIL
VirtualSystemManagementService - 20_verify_vnc_password.py: FAIL
  All of the above passed when run manually.
VirtualSystemMigrationService - 01_migratable_host.py: FAIL
VirtualSystemMigrationService - 02_host_migrate_type.py: FAIL
VirtualSystemMigrationService - 05_migratable_host_errs.py: FAIL
VirtualSystemMigrationService - 06_remote_live_migration.py: FAIL
VirtualSystemMigrationService - 07_remote_offline_migration.py: FAIL
VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: FAIL
VirtualSystemSettingDataComponent - 04_vssdc_rev_errs.py: FAIL
VirtualSystemSnapshotService - 03_create_snapshot.py: FAIL
Need to verify the above.
The following need to be verified:
ResourcePoolConfigurationService - 07_DeleteResourcePool.py: FAIL
ResourcePoolConfigurationService - 09_DeleteDiskPool.py: FAIL
SettingsDefineCapabilities - 01_forward.py: FAIL
VirtualSystemManagementService - 13_refconfig_additional_devs.py: FAIL
VirtualSystemManagementService - 08_modifyresource.py: FAIL
VirtualSystemMigrationService - 01_migratable_host.py: FAIL
VirtualSystemMigrationService - 02_host_migrate_type.py: FAIL
VirtualSystemMigrationService - 05_migratable_host_errs.py: FAIL
VirtualSystemMigrationService - 06_remote_live_migration.py: FAIL
VirtualSystemMigrationService - 07_remote_offline_migration.py: FAIL
VirtualSystemMigrationService - 08_remote_restart_resume_migration.py: FAIL
VirtualSystemSettingDataComponent - 04_vssdc_rev_errs.py: FAIL
VirtualSystemSnapshotService - 03_create_snapshot.py: FAIL

--
Thanks and Regards,
Deepti B. Kalakeri
IBM Linux Technology Center
deeptik@linux.vnet.ibm.com

RASD - 01_verify_rasd_fields.py: FAIL
  The test case does not have any problem. I am seeing peculiar behavior: the EnumerateInstances operation from the provider on the RASD is returning a None value in the NetworkName field. I commented out the destroy and undefine parts of the test case and checked whether the wbemcli ein operation on the RASD returned a proper NetworkName value, which it did. The rasd_from_vdev() function in Virt_RASD.c is called indirectly via enu_rasds -> _get_rasds(). The NetworkName value seems to be assigned properly when called from the vssd_to_rasd() function in VSSDC.c, but when called from EnumInstances() in Virt_RASD.c, the NetworkName is getting reset to null.
Thanks, Deepti, for providing the debug output - this helped me track the issue down. The actual error is in the virt_device_dup() function in device_parsing.c. I'll follow up with a patch to fix this.
ResourceAllocationFromPool - 01_forward.py: FAIL
  Although the network pool information is supplied when creating the Xen domain, once the domain is created its XML will have the bridge information instead of the pool. Because the provider does not have the source network pool information when queried, this test fails with the following error:
device_parsing.c(325): No network source defined, leaving blank
Virt_DevicePool.c(444): Unable to determine pool since no network source defined
This is a consequence of how the libvirt Xen driver works. When the guest is defined, the XML is converted from a "network" type interface to a "bridge" type.

--
Kaitlin Rupert
IBM Linux Technology Center
kaitlin@linux.vnet.ibm.com
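[Editor's note] The conversion described above can be sketched in domain XML terms. The names below (network "testbridge", bridge "xenbr0", the MAC) are illustrative, not taken from an actual guest. The pool reference the test submits is flattened by the Xen driver to the underlying bridge, so the pool name is no longer recoverable from the stored XML:

```
<!-- What the test case submits: a "network" (pool) type interface -->
<interface type='network'>
  <source network='testbridge'/>
  <mac address='00:11:22:33:44:aa'/>
</interface>

<!-- What the Xen driver stores after the define: the pool reference
     is replaced by the bridge that backs it -->
<interface type='bridge'>
  <source bridge='xenbr0'/>
  <mac address='00:11:22:33:44:aa'/>
</interface>
```

This is why the provider logs "No network source defined" when it later tries to map the device back to a network pool.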

Kaitlin Rupert wrote:
RASD - 01_verify_rasd_fields.py: FAIL
  The test case does not have any problem. I am seeing peculiar behavior: the EnumerateInstances operation from the provider on the RASD is returning a None value in the NetworkName field. I commented out the destroy and undefine parts of the test case and checked whether the wbemcli ein operation on the RASD returned a proper NetworkName value, which it did. The rasd_from_vdev() function in Virt_RASD.c is called indirectly via enu_rasds -> _get_rasds(). The NetworkName value seems to be assigned properly when called from the vssd_to_rasd() function in VSSDC.c, but when called from EnumInstances() in Virt_RASD.c, the NetworkName is getting reset to null.
Thanks, Deepti, for providing the debug output - this helped me track the issue down. The actual error is in the virt_device_dup() function in device_parsing.c. I'll follow up with a patch to fix this.
That's good :)
ResourceAllocationFromPool - 01_forward.py: FAIL
  Although the network pool information is supplied when creating the Xen domain, once the domain is created its XML will have the bridge information instead of the pool. Because the provider does not have the source network pool information when queried, this test fails with the following error:
device_parsing.c(325): No network source defined, leaving blank
Virt_DevicePool.c(444): Unable to determine pool since no network source defined
This is a consequence of how the libvirt Xen driver works. When the guest is defined, the XML is converted from a "network" type interface to a "bridge" type.
Yeah! You had told me about this previously as well. But how should we go about fixing these tests for Xen? I am yet to decide.

--
Thanks and Regards,
Deepti B. Kalakeri
IBM Linux Technology Center
deeptik@linux.vnet.ibm.com
participants (2)
- Deepti B Kalakeri
- Kaitlin Rupert