On 04/23/2013 05:30 AM, cngesaint(a)outlook.com wrote:
From: Wenchao Xia <xiawenc(a)linux.vnet.ibm.com>
Since in nested KVM, libvirt-cim doesn't handle it well now, add
this option to make it run well with qemu, which helps development
and test.
I think your commit message, more simply stated, is:
Allow libvirt-cim to be supported within a nested KVM environment in
order to more easily develop and test various configurations
What happens if someone sets this in a non-nested environment? I would
think you'd want to test whether the 'nested' module parameter is set...
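A minimal sketch of such a check, assuming the usual kvm_intel/kvm_amd
sysfs parameter paths (nothing in this patch provides a helper like this):

#include <stdbool.h>
#include <stdio.h>

static bool nested_kvm_enabled(void)
{
        const char *paths[] = {
                "/sys/module/kvm_intel/parameters/nested",
                "/sys/module/kvm_amd/parameters/nested",
        };
        unsigned i;

        for (i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
                FILE *f = fopen(paths[i], "r");
                int c;

                if (f == NULL)
                        continue;
                c = fgetc(f);
                fclose(f);
                /* newer kernels report "Y"/"N", older ones "1"/"0" */
                if (c == 'Y' || c == '1')
                        return true;
        }
        return false;
}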
Signed-off-by: Xu Wang <cngesaint(a)outlook.com>
---
libvirt-cim.conf | 8 ++++++++
libxkutil/misc_util.c | 8 ++++++++
libxkutil/misc_util.h | 1 +
src/Virt_VirtualSystemManagementService.c | 7 +++++++
4 files changed, 24 insertions(+), 0 deletions(-)
diff --git a/libvirt-cim.conf b/libvirt-cim.conf
index 37d7b0f..3244ee3 100644
--- a/libvirt-cim.conf
+++ b/libvirt-cim.conf
@@ -30,3 +30,11 @@
# Default value: NULL, that is not set.
#
# migrate_ssh_temp_key = "/root/vm_migrate_tmp_id_rsa";
+
+# force_use_qemu (bool)
+# Since in nested KVM, libvirt-cim doesn't handle it well now, add this
+# option to make it run well with qemu, which helps development and test.
+# Possible values: {true,false}
+# Default value: false
+#
+# force_use_qemu = false;
I suggest using the comments from above here too...
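Something along these lines, reusing the suggested wording (illustrative
only):

# force_use_qemu (bool)
# Allow libvirt-cim to be supported within a nested KVM environment in
# order to more easily develop and test various configurations.
# Possible values: {true,false}
# Default value: false
#
# force_use_qemu = false;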
diff --git a/libxkutil/misc_util.c b/libxkutil/misc_util.c
index 00eb4b1..4c0b0a1 100644
--- a/libxkutil/misc_util.c
+++ b/libxkutil/misc_util.c
@@ -227,6 +227,14 @@ static int is_read_only(void)
return prop.value_bool;
}
+bool get_force_use_qemu(void)
+{
+ static LibvirtcimConfigProperty prop = {
+ "force_use_qemu", CONFIG_BOOL, {0},
0};
+ libvirt_cim_config_get(&prop);
+ return prop.value_bool;
+}
+
const char *get_mig_ssh_tmp_key(void)
{
static LibvirtcimConfigProperty prop = {
diff --git a/libxkutil/misc_util.h b/libxkutil/misc_util.h
index 0f52290..9e6b419 100644
--- a/libxkutil/misc_util.h
+++ b/libxkutil/misc_util.h
@@ -154,6 +154,7 @@ int virt_set_status(const CMPIBroker *broker,
/* get libvirt-cim config */
const char *get_mig_ssh_tmp_key(void);
+bool get_force_use_qemu(void);
/*
* Local Variables:
diff --git a/src/Virt_VirtualSystemManagementService.c b/src/Virt_VirtualSystemManagementService.c
index cbb646d..4e93ef0 100644
--- a/src/Virt_VirtualSystemManagementService.c
+++ b/src/Virt_VirtualSystemManagementService.c
@@ -394,6 +394,13 @@ static bool system_has_kvm(const char *pfx)
virConnectPtr conn;
char *caps = NULL;
bool kvm = false;
+ bool force_use_qemu = get_force_use_qemu();
+
+ /* hack for nested KVM */
+ if (force_use_qemu) {
+ CU_DEBUG("Enter force use qemu mode!");
+ return false;
+ }
The above check is being done on the "local" system, right? While the
following check is being done on the "BROKER" system, right? Which may
not be the same as the "local" system? So where should the check really
be made for the system on which the libvirt-cim code is executing? IOW,
are you sure this is the right place to check? What is the
differentiation in libvirt-cim between QEMU & KVM being made for? Being
"new", I thought they were the same.
I'm not even convinced the routine is doing the right thing, but perhaps
I just don't have enough history. I see that 'virsh capabilities' on my
system returns some kvm defs and some qemu defs, so a blind strstr is
returning true...
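For illustration, a stricter test than the blind strstr(caps, "kvm")
might match the per-guest <domain type='kvm'/> entries in the
capabilities XML instead (just a sketch, not a proposed patch):

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <libvirt/libvirt.h>

static bool caps_have_kvm_domain(virConnectPtr conn)
{
        char *caps = virConnectGetCapabilities(conn);
        bool kvm = false;

        if (caps != NULL) {
                /* only match an actual kvm guest domain entry */
                kvm = (strstr(caps, "<domain type='kvm'") != NULL);
                free(caps);
        }
        return kvm;
}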
The prime differentiator is that the calling routine will set
domain->type to QEMU now rather than KVM, and I'm not sure I understand
how, by using that setting, we'll be able to support nested KVM.
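As far as I can tell, the only visible effect is the type attribute on
the generated domain XML, i.e. plain emulation instead of hardware
acceleration:

<domain type='qemu'>   (TCG emulation, no /dev/kvm needed)
<domain type='kvm'>    (hardware-accelerated via /dev/kvm)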
John
conn = connect_by_classname(_BROKER, pfx, &s);
if ((conn == NULL) || (s.rc != CMPI_RC_OK)) {