CimTest Report for KVM on F9 23-07-2008

========================================================================
 CIM Test Report for KVM on F9 with latest libvirt-cim and libcmpiutil
========================================================================
Distro      : Fedora 9 Beta
Kernel      : 2.6.25-0.121.rc5.git4.fc9
Libvirt     : libvirt-0.4.2-1.fc9.x86_64
CIMOM       : pegasus
PyWBEM      : pywbem-0.6
CIM Schema  : cimv216Experimental
LibCMPIutil : 83
LibVirtCIM  : 640
CIMTEST     : 249
=======================================================
PASS  : 111
FAIL  : 1
XFAIL : 2
SKIP  : 16
-----------------
Total : 130
=======================================================

Here is one of the tc that failed:

ComputerSystemIndication - 01_created_indication.py: FAIL
ERROR - Waited too long for indication

Please find the complete report attached with the mail.

Thanks and Regards,
Deepti.

Starting test suite: libvirt-cim
Cleaned log files.
Testing KVM hypervisor

AllocationCapabilities - 01_enum.py: PASS
AllocationCapabilities - 02_alloccap_gi_errs.py: PASS
ComputerSystem - 01_enum.py: PASS
ComputerSystem - 02_nosystems.py: SKIP
ERROR - System has defined domains; unable to run
ComputerSystem - 03_defineVS.py: PASS
ComputerSystem - 04_defineStartVS.py: PASS
ComputerSystem - 05_activate_defined_start.py: PASS
ComputerSystem - 06_paused_active_suspend.py: PASS
ComputerSystem - 22_define_suspend.py: PASS
ComputerSystem - 23_suspend_suspend.py: SKIP
ComputerSystem - 27_define_suspend_errs.py: SKIP
ComputerSystem - 32_start_reboot.py: SKIP
ComputerSystem - 33_suspend_reboot.py: SKIP
ComputerSystem - 35_start_reset.py: SKIP
ComputerSystem - 40_RSC_start.py: XFAIL Bug: 00001
ERROR - Exception: (1, u'CIM_ERR_FAILED: Invalid state transition')
ERROR - Exception: RequestedStateChange() could not be used to start domain: 'test_domain'
InvokeMethod(RequestStateChange): CIM_ERR_FAILED: Invalid state transition
Bug:<00001>
ComputerSystem - 41_cs_to_settingdefinestate.py: SKIP
ComputerSystem - 42_cs_gi_errs.py: PASS
ComputerSystemIndication - 01_created_indication.py: FAIL
ERROR - Waited too long for indication
ElementAllocatedFromPool - 01_forward.py: PASS
ElementAllocatedFromPool - 02_reverse.py: SKIP
ElementAllocatedFromPool - 03_reverse_errs.py: PASS
ElementAllocatedFromPool - 04_forward_errs.py: PASS
ElementCapabilities - 01_forward.py: PASS
ElementCapabilities - 02_reverse.py: PASS
ElementCapabilities - 03_forward_errs.py: PASS
ElementCapabilities - 04_reverse_errs.py: PASS
ElementCapabilities - 05_hostsystem_cap.py: PASS
ElementConforms - 01_forward.py: PASS
ElementConforms - 02_reverse.py: PASS
ElementConforms - 03_ectp_fwd_errs.py: PASS
ElementConforms - 04_ectp_rev_errs.py: PASS
ElementSettingData - 01_forward.py: PASS
ElementSettingData - 03_esd_assoc_with_rasd_errs.py: PASS
EnabledLogicalElementCapabilities - 01_enum.py: PASS
EnabledLogicalElementCapabilities - 02_elecap_gi_errs.py: PASS
HostSystem - 01_enum.py: PASS
HostSystem - 02_hostsystem_to_rasd.py: PASS
HostSystem - 03_hs_to_settdefcap.py: PASS
HostSystem - 04_hs_to_EAPF.py: SKIP
HostSystem - 05_hs_gi_errs.py: PASS
HostSystem - 06_hs_to_vsms.py: PASS
HostedDependency - 01_forward.py: PASS
HostedDependency - 02_reverse.py: PASS
HostedDependency - 03_enabledstate.py: PASS
HostedDependency - 04_reverse_errs.py: PASS
HostedResourcePool - 01_forward.py: PASS
HostedResourcePool - 02_reverse.py: PASS
HostedResourcePool - 03_forward_errs.py: PASS
HostedResourcePool - 04_reverse_errs.py: PASS
HostedService - 01_forward.py: PASS
HostedService - 02_reverse.py: PASS
HostedService - 03_forward_errs.py: PASS
HostedService - 04_reverse_errs.py: PASS
LogicalDisk - 01_disk.py: PASS
LogicalDisk - 02_nodevs.py: SKIP
ERROR - System has defined domains; unable to run
LogicalDisk - 03_ld_gi_errs.py: PASS
Memory - 01_memory.py: PASS
Memory - 02_defgetmem.py: PASS
Memory - 03_mem_gi_errs.py: PASS
NetworkPort - 01_netport.py: PASS
NetworkPort - 02_np_gi_errors.py: PASS
NetworkPort - 03_user_netport.py: XFAIL Bug: 00004
ERROR - Exception: (6, u'CIM_ERR_NOT_FOUND: No such instance (test_domain/00:11:22:33:44:55)')
Bug:<00004>
Processor - 01_processor.py: PASS
Processor - 02_definesys_get_procs.py: PASS
Processor - 03_proc_gi_errs.py: PASS
Profile - 01_enum.py: PASS
Profile - 02_profile_to_elec.py: SKIP
Profile - 03_rprofile_gi_errs.py: PASS
RASD - 01_verify_rasd_fields.py: PASS
RASD - 02_enum.py: PASS
RASD - 03_rasd_errs.py: PASS
ReferencedProfile - 01_verify_refprof.py: PASS
ReferencedProfile - 02_refprofile_errs.py: PASS
ResourceAllocationFromPool - 01_forward.py: PASS
ResourceAllocationFromPool - 02_reverse.py: PASS
ResourceAllocationFromPool - 03_forward_errs.py: PASS
ResourceAllocationFromPool - 04_reverse_errs.py: PASS
ResourceAllocationFromPool - 05_RAPF_err.py: PASS
ResourcePool - 01_enum.py: PASS
ResourcePool - 02_rp_gi_errors.py: PASS
ResourcePoolConfigurationCapabilities - 01_enum.py: PASS
ResourcePoolConfigurationCapabilities - 02_rpcc_gi_errs.py: PASS
ResourcePoolConfigurationService - 01_enum.py: PASS
ResourcePoolConfigurationService - 02_rcps_gi_errors.py: PASS
ResourcePoolConfigurationService - 03_CreateResourcePool.py: PASS
ResourcePoolConfigurationService - 04_CreateChildResourcePool.py: PASS
ResourcePoolConfigurationService - 05_AddResourcesToResourcePool.py: PASS
ResourcePoolConfigurationService - 06_RemoveResourcesFromResourcePool.py: PASS
ResourcePoolConfigurationService - 07_DeleteResourcePool.py: PASS
SettingsDefine - 01_forward.py: PASS
SettingsDefine - 02_reverse.py: PASS
SettingsDefine - 03_sds_fwd_errs.py: PASS
SettingsDefine - 04_sds_rev_errs.py: PASS
SettingsDefineCapabilities - 01_forward.py: PASS
SettingsDefineCapabilities - 03_forward_errs.py: PASS
SettingsDefineCapabilities - 04_forward_vsmsdata.py: PASS
SettingsDefineCapabilities - 05_reverse_vsmcap.py: PASS
SystemDevice - 01_forward.py: PASS
SystemDevice - 02_reverse.py: PASS
SystemDevice - 03_fwderrs.py: PASS
VSSD - 01_enum.py: PASS
VSSD - 02_bootldr.py: SKIP
VSSD - 03_vssd_gi_errs.py: PASS
VSSD - 04_vssd_to_rasd.py: PASS
VirtualSystemManagementCapabilities - 01_enum.py: PASS
VirtualSystemManagementCapabilities - 02_vsmcap_gi_errs.py: PASS
VirtualSystemManagementService - 01_definesystem_name.py: PASS
VirtualSystemManagementService - 02_destroysystem.py: PASS
VirtualSystemManagementService - 03_definesystem_ess.py: PASS
VirtualSystemManagementService - 04_definesystem_ers.py: PASS
VirtualSystemManagementService - 05_destroysystem_neg.py: PASS
VirtualSystemManagementService - 06_addresource.py: PASS
VirtualSystemManagementService - 07_addresource_neg.py: PASS
VirtualSystemManagementService - 08_modifyresource.py: PASS
VirtualSystemManagementService - 09_procrasd_persist.py: PASS
VirtualSystemMigrationCapabilities - 01_enum.py: PASS
VirtualSystemMigrationCapabilities - 02_vsmc_gi_errs.py: PASS
VirtualSystemMigrationService - 01_migratable_host.py: SKIP
VirtualSystemMigrationService - 02_host_migrate_type.py: SKIP
VirtualSystemMigrationService - 05_migratable_host_errs.py: SKIP
VirtualSystemMigrationSettingData - 01_enum.py: PASS
VirtualSystemMigrationSettingData - 02_vsmsd_gi_errs.py: PASS
VirtualSystemSettingDataComponent - 01_forward.py: SKIP
VirtualSystemSettingDataComponent - 02_reverse.py: PASS
VirtualSystemSettingDataComponent - 03_vssdc_fwd_errs.py: PASS
VirtualSystemSettingDataComponent - 04_vssdc_rev_errs.py: PASS
VirtualSystemSnapshotService - 01_enum.py: PASS
VirtualSystemSnapshotService - 02_vs_sservice_gi_errs.py: PASS
VirtualSystemSnapshotServiceCapabilities - 01_enum.py: PASS
VirtualSystemSnapshotServiceCapabilities - 02_vs_sservicecap_gi_errs.py: PASS

DK> ComputerSystem - 40_RSC_start.py: XFAIL Bug: 00001
DK> ERROR - Exception: (1, u'CIM_ERR_FAILED: Invalid state transition')
DK> ERROR - Exception: RequestedStateChange() could not be used to start domain: 'test_domain'
DK> InvokeMethod(RequestStateChange): CIM_ERR_FAILED: Invalid state transition
DK> Bug:<00001>

What is this? Certainly RequestStateChange() is not broken on KVM, right? And if it is, it definitely shouldn't be an XFAIL.

--
Dan Smith
IBM Linux Technology Center
Open Hypervisor Team
email: danms@us.ibm.com

Dan Smith wrote:
DK> ComputerSystem - 40_RSC_start.py: XFAIL Bug: 00001
DK> ERROR - Exception: (1, u'CIM_ERR_FAILED: Invalid state transition')
DK> ERROR - Exception: RequestedStateChange() could not be used to start domain: 'test_domain'
DK> InvokeMethod(RequestStateChange): CIM_ERR_FAILED: Invalid state transition
DK> Bug:<00001>
What is this? Certainly RequestStateChange() is not broken on KVM, right? And if it is, it definitely shouldn't be an XFAIL.
The test is defining a guest and then starting it. The enable call in the provider checks to make sure the guest is either defined or paused.

Deepti - you could add a return after the guest is defined and then check the state of the guest.

I tried to reproduce this on my system, but I see the following error:

CIM_ERR_FAILED: ResourceSettings Error: No NetworkPool specified and no default available
ERROR - Exception: DefineSystem() failed to create domain: test_domain
InvokeMethod(DefineSystem): CIM_ERR_FAILED: ResourceSettings Error: No NetworkPool specified and no default available

This is odd, because I have a network pool defined:

Name                 State      Autostart
-----------------------------------------
default-net          active     no

The provider complains with the following:

Virt_VirtualSystemManagementService.c(298): Failed to get default network pool: No default pool found for type 10
Virt_VirtualSystemManagementService.c(573): rasd_to_vdev(KVM_NetResourceAllocationSettingData): No NetworkPool specified and no default available
Virt_VirtualSystemManagementService.c(886): Failed to classify resources: No NetworkPool specified and no default available
std_invokemethod.c(305): Method `DefineSystem' returned 1

--
Kaitlin Rupert
IBM Linux Technology Center
kaitlin@linux.vnet.ibm.com

I tried to reproduce this on my system, but I see the following error:
CIM_ERR_FAILED: ResourceSettings Error: No NetworkPool specified and no default available
ERROR - Exception: DefineSystem() failed to create domain: test_domain
InvokeMethod(DefineSystem): CIM_ERR_FAILED: ResourceSettings Error: No NetworkPool specified and no default available
This is odd, because I have a network pool defined:
Name                 State      Autostart
-----------------------------------------
default-net          active     no
The provider complains with the following:
Virt_VirtualSystemManagementService.c(298): Failed to get default network pool: No default pool found for type 10
Virt_VirtualSystemManagementService.c(573): rasd_to_vdev(KVM_NetResourceAllocationSettingData): No NetworkPool specified and no default available
Virt_VirtualSystemManagementService.c(886): Failed to classify resources: No NetworkPool specified and no default available
std_invokemethod.c(305): Method `DefineSystem' returned 1
My issue appears to be caused by a different test suite - it's inadvertently deleting the network pool.

--
Kaitlin Rupert
IBM Linux Technology Center
kaitlin@linux.vnet.ibm.com

Kaitlin Rupert wrote:
Dan Smith wrote:
DK> ComputerSystem - 40_RSC_start.py: XFAIL Bug: 00001
DK> ERROR - Exception: (1, u'CIM_ERR_FAILED: Invalid state transition')
DK> ERROR - Exception: RequestedStateChange() could not be used to start domain: 'test_domain'
DK> InvokeMethod(RequestStateChange): CIM_ERR_FAILED: Invalid state transition
DK> Bug:<00001>
What is this? Certainly RequestStateChange() is not broken on KVM, right? And if it is, it definitely shouldn't be an XFAIL.
The test is defining a guest and then starting it. The enable call in the provider checks to make sure the guest is either defined or paused.
Deepti - you could add a return after the guest is defined and then check the state of the guest.
I tried to reproduce this on my system, but I see the following error:
CIM_ERR_FAILED: ResourceSettings Error: No NetworkPool specified and no default available
ERROR - Exception: DefineSystem() failed to create domain: test_domain
InvokeMethod(DefineSystem): CIM_ERR_FAILED: ResourceSettings Error: No NetworkPool specified and no default available
This is odd, because I have a network pool defined:
Name                 State      Autostart
-----------------------------------------
default-net          active     no
The provider complains with the following:
Virt_VirtualSystemManagementService.c(298): Failed to get default network pool: No default pool found for type 10
Virt_VirtualSystemManagementService.c(573): rasd_to_vdev(KVM_NetResourceAllocationSettingData): No NetworkPool specified and no default available
Virt_VirtualSystemManagementService.c(886): Failed to classify resources: No NetworkPool specified and no default available
std_invokemethod.c(305): Method `DefineSystem' returned 1
I did not see the above problem on KVM when a network pool existed. I had the following on my machine:

virsh net-list
Name                 State      Autostart
-----------------------------------------
default-net55        active     no

The test case on KVM failed with the following reason:

ComputerSystem - 40_RSC_start.py: XFAIL Bug: 00001
ERROR - Exception: (1, u'CIM_ERR_FAILED: Domain Operation Failed')
ERROR - Exception: RequestedStateChange() could not be used to start domain: 'test_domain'
InvokeMethod(RequestStateChange): CIM_ERR_FAILED: Domain Operation Failed
Bug:<00001>

I have given more information on the above error in my reply to Dan's mail on the same thread.

Thanks and Regards,
Deepti.

Dan Smith wrote:
DK> ComputerSystem - 40_RSC_start.py: XFAIL Bug: 00001
DK> ERROR - Exception: (1, u'CIM_ERR_FAILED: Invalid state transition')
DK> ERROR - Exception: RequestedStateChange() could not be used to start domain: 'test_domain'
DK> InvokeMethod(RequestStateChange): CIM_ERR_FAILED: Invalid state transition
DK> Bug:<00001>
What is this? Certainly RequestStateChange() is not broken on KVM, right? And if it is, it definitely shouldn't be an XFAIL.
The above tc:

On Xen: the test case passes.

On KVM: the test case XFAILs with the reason below.

ComputerSystem - 40_RSC_start.py: XFAIL Bug: 00001
ERROR - Exception: (1, u'CIM_ERR_FAILED: Domain Operation Failed')
ERROR - Exception: RequestedStateChange() could not be used to start domain: '40_test_domain'
InvokeMethod(RequestStateChange): CIM_ERR_FAILED: Domain Operation Failed
Bug:<00001>

The XML from the debug statement is as follows:

<domain type='kvm'>
  <uuid>a01c02b7-c8a0-4a49-9c90-0a35e19865da</uuid>
  <name>40_test_domain</name>
  <on_poweroff>destroy</on_poweroff>
  <on_crash>destroy</on_crash>
  <os>
    <type>hvm</type>
    <boot dev='hd'/>
  </os>
  <currentMemory>0</currentMemory>
  <memory>0</memory>
  <vcpu>1</vcpu>
  <devices>
    <interface type='network'>
      <mac address='11:22:33:aa:bb:cc'/>
      <source network='default-net55'/>
    </interface>
    <disk type='file' device='disk'>
      <source file='/tmp/default-kvm-dimage'/>
      <target dev='hda'/>
    </disk>
    <graphics type='vnc' port='-1'/>
  </devices>
</domain>

1) The XML config file is just enough to define the guest, but not to start it. I manually tried to define the guest with the above XML and was able to do so, but when I tried to start the defined guest it failed with the following error:

virsh start 40_test_domain
libvir: QEMU error : internal error QEMU quit during console startup
error: Failed to start domain 40_test_domain

The only problem with the above XML file is that memory and currentMemory are set to 0. I then assigned 1024 in place of 0 for memory and currentMemory, and I was able to define and start the domain. I could start the domain only when both memory and currentMemory were non-zero. Having memory and currentMemory set to 0 is not a problem in the case of Xen and XenFV. I tried to find the difference and the limits for this field on libvirt.org, but without much success.
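Since the zero-valued memory/currentMemory elements are what keep the guest from starting, a small pre-flight check on the generated XML would make the failure obvious before DefineSystem() is even attempted. The sketch below is illustrative only: the helper name and the 4 MB floor are my own assumptions, not cimtest or libvirt API.

```python
# Sanity-check the <memory>/<currentMemory> elements of a generated
# libvirt domain XML before trying to define/start the guest.
# MIN_MEMORY_KB is an assumed illustrative floor, not a documented limit.
import xml.etree.ElementTree as ET

MIN_MEMORY_KB = 4096

def check_domain_memory(domain_xml):
    """Return {tag: size_kb}; raise ValueError if the guest cannot boot."""
    root = ET.fromstring(domain_xml)
    sizes = {}
    for tag in ("memory", "currentMemory"):
        node = root.find(tag)
        sizes[tag] = int(node.text) if node is not None else 0
        if sizes[tag] < MIN_MEMORY_KB:
            raise ValueError("%s is %d KB; guest will not start"
                             % (tag, sizes[tag]))
    return sizes

broken = ("<domain type='kvm'><name>40_test_domain</name>"
          "<memory>0</memory><currentMemory>0</currentMemory></domain>")

try:
    check_domain_memory(broken)
except ValueError as e:
    print("rejected:", e)   # rejected: memory is 0 KB; guest will not start
```

Such a check in the test library would have turned the silent "Domain Operation Failed" into an immediate, self-explanatory failure.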
2) I found one more peculiar problem: even though I was able to successfully define a KVM domain (either via the tc or manually) and start the KVM guest using virsh, I could not find any info on it using virsh -c qemu:///system list --all. Can this be a problem with my machine? I was able to use virsh for other guests, though.

On XenFV: the test case fails with the following error.

ComputerSystem - 40_RSC_start.py: XFAIL Bug: 00002
ERROR - EnabledState should be 2 not 0
ERROR - Exception: Attributes were not set as expected for domain: 'test_domain'
Bug:<00002>

I tried inserting a delay in the test case between the call_request_state_change() and check_attributes() calls, and the test case passed. :)

PS: The machine had the following network pool:

virsh net-list
Name                 State      Autostart
-----------------------------------------
default-net          active     no

Thanks and Regards,
Deepti.
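A fixed delay between call_request_state_change() and check_attributes() is fragile; polling for the expected EnabledState with a timeout is more robust. A minimal sketch, assuming get_state stands in for whatever the test uses to read EnabledState (the names here are illustrative, not the actual cimtest helpers):

```python
# Poll for an expected EnabledState instead of sleeping a fixed interval
# between call_request_state_change() and check_attributes().
# get_state is any zero-argument callable returning the current state.
import time

def wait_for_state(get_state, expected, timeout=10.0, interval=0.5):
    """Return True once get_state() == expected, False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_state() == expected:
            return True
        time.sleep(interval)
    return False

# Simulated domain that reaches EnabledState == 2 on the third poll
states = iter([0, 0, 2])
print(wait_for_state(lambda: next(states), 2, timeout=5, interval=0))  # True
```

This passes as soon as the state flips, and fails deterministically after the timeout instead of depending on how long the provider takes on a given machine.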
------------------------------------------------------------------------
_______________________________________________
Libvirt-cim mailing list
Libvirt-cim@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-cim

DK> 1)
DK> The XML config file is *just enough* to *define* the guest but not
DK> start it.
DK> I manually tried to define the guest with the above XML and I was able
DK> to define it, but when I tried to start the defined guest it failed
DK> with the following error:

DK> *virsh start 40_test_domain*
DK> libvir: QEMU error : internal error QEMU quit during console startup
DK> error: Failed to start domain 40_test_domain

DK> The only problem with the above XML file is the *Memory and
DK> currentMemory* being set to *0*.

That's a pretty big problem. Why is this marked as XFAIL? This should be failing, and needs to be fixed ASAP.

When I define a KVM guest through the providers manually, I get a correctly-set memory value. Is this an error in the test?

DK> I then assigned *1024* in place of *0* for *Memory and
DK> currentMemory* and I was able to define and start the domain. I
DK> was able to start the domain only when both the values *Memory and
DK> currentMemory* were not equal to *0*.
DK> The *Memory and currentMemory* being 0 is not a problem in case of Xen
DK> and XenFV.
DK> I tried checking the difference and the limitation for this field on
DK> libvirt.org, but was not much successful.

Yeah, it's not expected to do anything at all if given 0 memory.

DK> 2) I found one more peculiar problem, even though I was able to
DK> successfully define KVM domain either using tc or manually, or
DK> started the KVM guest using the virsh, I was not find any info
DK> using the virsh -c qemu:///system list --all. Can this be a
DK> problem with my machine? I was able to use virsh for other
DK> guests though.

I don't see that, and obviously if you're seeing it with a manual virsh define, then something else is broken.

--
Dan Smith
IBM Linux Technology Center
Open Hypervisor Team
email: danms@us.ibm.com

Dan Smith wrote:
DK> 1)
DK> The XML config file is *just enough* to *define* the guest but not
DK> start it.
DK> I manually tried to define the guest with the above XML and I was able
DK> to define it, but when I tried to start the defined guest it failed
DK> with the following error:
DK> *virsh start 40_test_domain*
DK> libvir: QEMU error : internal error QEMU quit during console startup
DK> error: Failed to start domain 40_test_domain
DK> The only problem with the above XML file is the *Memory and
DK> currentMemory* being set to *0*.
That's a pretty big problem. Why is this marked as XFAIL? This should be failing, and needs to be fixed ASAP.
When I define a KVM guest through the providers manually, I get a correctly-set memory value. Is this an error in the test?
Yes, the test case is passing the RASD values, which are in turn used for generating the XML configuration of the domain. According to my analysis, to create a KVM guest we need at least 1024 memory units. Currently the test library vsms.py, which is responsible for creating the MemRASD values, is passing only VirtualQuantity=512. This value is not sufficient for creating a KVM guest, but is just enough for Xen and XenFV guests.

Also, AllocationUnits is one of the important fields of the MemRASD that determines the memory and currentMemory fields in the XML configuration. The provider code that generates the memory and currentMemory part of the XML configuration is given below.

static const char *mem_rasd_to_vdev(CMPIInstance *inst,
                                    struct virt_device *dev)
{
        const char *units;
        int shift;

        cu_get_u64_prop(inst, "VirtualQuantity", &dev->dev.mem.size);
        cu_get_u64_prop(inst, "Reservation", &dev->dev.mem.size);
        dev->dev.mem.maxsize = dev->dev.mem.size;
        cu_get_u64_prop(inst, "Limit", &dev->dev.mem.maxsize);

        if (cu_get_str_prop(inst, "AllocationUnits", &units) != CMPI_RC_OK) {
                CU_DEBUG("Memory RASD has no units, assuming bytes");
                units = "Bytes";
        }

        if (STREQC(units, "Bytes"))
                shift = -10;
        else if (STREQC(units, "KiloBytes"))
                shift = 0;
        else if (STREQC(units, "MegaBytes"))
                shift = 10;
        else if (STREQC(units, "GigaBytes"))
                shift = 20;
        else
                return "Unknown AllocationUnits in Memory RASD";

        if (shift < 0) {
                dev->dev.mem.size >>= -shift;
                dev->dev.mem.maxsize >>= -shift;
        } else {
                dev->dev.mem.size <<= shift;
                dev->dev.mem.maxsize <<= shift;
        }

        return NULL;
}

Currently, the default value for AllocationUnits is not being set by the vsms.py test library. Hence, according to the above code, the units are taken to be Bytes; when this happens, the final value that gets assigned is (dev->dev.mem.size >>= -shift, i.e. 512 >> 10) 0. As mentioned above, this value is not at all sufficient for successful KVM guest creation.
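To see the failure concretely, the unit handling above can be transcribed into Python. This is a sketch mirroring the C logic, not provider code (the helper name is my own):

```python
# Python transcription of the provider's mem_rasd_to_vdev() unit handling,
# showing why VirtualQuantity=512 with no AllocationUnits (default "Bytes")
# becomes 0 KB in the generated XML.
def mem_rasd_to_kb(virtual_quantity, units="Bytes"):
    """Convert a MemRASD VirtualQuantity to the KB value put in the XML."""
    shifts = {"Bytes": -10, "KiloBytes": 0, "MegaBytes": 10, "GigaBytes": 20}
    if units not in shifts:
        raise ValueError("Unknown AllocationUnits in Memory RASD")
    shift = shifts[units]
    return virtual_quantity >> -shift if shift < 0 else virtual_quantity << shift

print(mem_rasd_to_kb(512))               # 0   -- the failing case
print(mem_rasd_to_kb(1024))              # 1   -- 1024 bytes = 1 KB
print(mem_rasd_to_kb(512, "KiloBytes"))  # 512
print(mem_rasd_to_kb(32, "MegaBytes"))   # 32768
```

With the default "Bytes" units, any VirtualQuantity below 1024 truncates to 0 KB, which is exactly the zero memory/currentMemory seen in the domain XML above.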
This problem for KVM can be solved by setting VirtualQuantity to 1024 and AllocationUnits to "KiloBytes". After making these changes to vsms.py, the tc 40_RSC_start.py passed on KVM and Xen. For XenFV the test case passed with an additional modification in the 40_RSC_start.py tc, which required polling for the enabled state to be set properly after RequestedStateChange().

1) As an extension to the 40_RSC_start.py tc, we can pass different combinations of VirtualQuantity and AllocationUnits to test the mem_rasd_to_vdev code path.

2) Also, can I update our libvirt wiki where we post our test results with tips like the necessary values for memory and currentMemory, if you think it's valuable and my analysis is accurate?

Any suggestions?

Thanks and Regards,
Deepti.

DK> Yes, the test case is passing the RASD values which is inturn used
DK> for generating the XML configuration of the domain. According to
DK> my analysis, to create a KVM guest we need atleast 1024 memory
DK> units. Currently the test library vsms.py which is responsible
DK> for creating MemRASD values is passing only VirtualQuantity=512.

Aiee. The libvirt XML specifies memory in kilobytes, which means 512 rounds to zero. The fact that you can create a Xen guest is probably just because of a lack of verification. The QEMU command line takes memory in megabytes, so anything less than that isn't really valid.

This was changed (fixed) in my recent patch to make the providers honor AllocationUnits in the MemRASD on DefineSystem().

DK> This value is *not sufficient* for creating a *KVM* guest, but is
DK> *just enough* for *Xen, XenFV* guests.

Perhaps just enough to create the guest, but not start it. Not even MS-DOS can run in 512 bytes of memory...

DK> Also, AllocationUnits is one of the important field of MemRASD
DK> that determines the Memory and CurrentMemory fields in the XML
DK> configuration.

Indeed, which is why I wrote the (still unreviewed) test for AllocationUnits last week.

DK> 1) As an extention to the 40_RSC_start.py tc we can actually pass
DK> different combination of VirtualQuantity and AllocationUnits to test
DK> the mem_rasd_to_vdev code path.

My test exercises all of the AllocationUnits paths. 40_RSC_start isn't the proper place for that, IMHO.

I think you should change the default VirtualQuantity to 32, and the AllocationUnits to "MegaBytes" for this case.

Thanks!

--
Dan Smith
IBM Linux Technology Center
Open Hypervisor Team
email: danms@us.ibm.com

Dan Smith wrote:
DK> Yes, the test case is passing the RASD values which is inturn used
DK> for generating the XML configuration of the domain. According to
DK> my analysis, to create a KVM guest we need atleast 1024 memory
DK> units. Currently the test library vsms.py which is responsible
DK> for creating MemRASD values is passing only VirtualQuantity=512.
Aiee. The libvirt XML specifies memory in kilobytes, which means 512 rounds to zero. The fact that you can create a Xen guest is probably just because of a lack of verification. The QEMU command line takes memory in megabytes, so anything less than that isn't really valid.
This was changed (fixed) in my recent patch to make the providers honor AllocationUnits in the MemRASD on DefineSystem().
DK> This value is *not sufficient* for creating a *KVM* guest, but is DK> *just enough* for *Xen, XenFV* guests.
Perhaps just enough to create the guest, but not start it. Not even MS-DOS can run in 512 bytes of memory...
DK> Also, AllocationUnits is one of the important field of MemRASD DK> that determines the Memory and CurrentMemory fields in the XML DK> configuration.
Indeed, which is why I wrote the (still unreviewed) test for AllocationUnits last week.
DK> 1) As an extention to the 40_RSC_start.py tc we can actually pass DK> different combination of VirtualQuantity and AllocationUnits to test DK> the mem_rasd_to_vdev code path.
My test exercises all of the AllocationUnits paths. 40_RSC_start isn't the proper place for that, IMHO.
Cool, I have not yet had a chance to review the patches.
I think you should change the default VirtualQuantity to 32, and the AllocationUnits to "MegaBytes" for this case.
I will try using this. Thanks.

Regards,
Deepti.
Thanks!
participants (3):
- Dan Smith
- Deepti B Kalakeri
- Kaitlin Rupert