[libvirt] Need help for thesis on Xen - disabling HAP for benchmarking purposes
by Alexander Sascha
Hi,
I'm a German student, writing my thesis on "Virtualization with Xen.
Analysis and Comparison of Different Techniques such as
Paravirtualization, Full Virtualization, and Utilization of Hardware
Support Provided by the Processor".
I'd like to know how I can disable HAP/RVI/Nested Paging for
benchmarking purposes. I read somewhere
(http://markmail.org/message/bbnivuqx6vjz7jg4) that the Xen developers
decided to drop the global grub parameter and introduce a per-domain
flag for HAP instead (hap=0/1).
I used the virt-manager that ships with CentOS to set up all domUs;
virt-manager uses libvirt, so no "old-style" configuration files are
stored in /etc/xen. I actually don't know whether libvirt's domain XML
files already support something like <hap>0</hap>. How would you proceed
to modify an already existing domain using libvirt/virt-manager?
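For what it's worth, the kind of approach I imagine is sketched below: fetch
the domain XML through the Python bindings, inject a feature flag, and
redefine the domain (or do the same interactively with 'virsh edit'). Whether
a <hap/> element, or any equivalent, is actually accepted by the parser is
exactly what I don't know, so the element name and its attribute are
assumptions on my part:

import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open("xen:///")          # local Xen host
dom = conn.lookupByName("mydomu")       # hypothetical domU name

# Fetch the current XML, add <hap state='off'/> under <features>
# (both the element name and the attribute are assumptions on my part),
# then redefine the domain; it would take effect on the next start.
root = ET.fromstring(dom.XMLDesc(0))
features = root.find("features")
if features is None:
    features = ET.SubElement(root, "features")
hap = ET.SubElement(features, "hap")
hap.set("state", "off")
conn.defineXML(ET.tostring(root))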
I really do apologize for my "newbie ignorance". Thanks in advance for
any useful help.
- Alex
[libvirt] [RFC]: Volume allocation progress reporting
by Cole Robinson
The attached patch implements storage volume allocation progress
reporting for file volumes.
Currently, the volume creation process looks like:
- Grab the pool lock
- Fully allocate the volume
- 'define' the volume (so it shows up in 'virsh vol-list', etc)
- Lookup the volume object to return
- Drop the pool lock
The new sequence is:
- Grab the pool lock
- 'define' the volume (even though nothing is on disk yet)
- Drop the pool lock
- Allocate the volume as needed
- Regrab the pool lock
- Lookup the volume object to return
- Drop the pool lock (again)
Since the volume is 'defined', the user can fetch an object reference in
a separate thread and continually poll the 'info' command for up-to-date
numbers. This also has the benefit of dropping the pool lock during
the potentially lengthy allocation, as currently 'virsh pool-list' etc.
will block while any volume is being allocated.
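Condensed, the client-side pattern this enables looks roughly like the sketch
below (it assumes a pool named 'default'; the volume XML is the same nonsparse
definition as in the attached script):

import threading
import time
import libvirt

volxml = """
<volume>
  <name>nonsparsetest</name>
  <capacity>1048576000</capacity>
  <allocation>800000000</allocation>
  <target><format type='raw'/></target>
</volume>
"""

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolLookupByName("default")

# Run the (slow) allocation in a worker thread...
t = threading.Thread(target=pool.createXML, args=(volxml, 0))
t.start()

# ...and poll the volume from the main thread while it fills up.
time.sleep(1)                            # give the 'define' step a moment
vol = pool.storageVolLookupByName("nonsparsetest")
while t.isAlive():
    print vol.info()                     # (type, capacity, allocation)
    time.sleep(0.5)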
Non-file volumes maintain the existing behavior.
I tried to make the implementation resistant to user error, such as the
pool being deactivated or deleted while the volume is being allocated.
The creation process may bail out, but I couldn't produce any bad errors
(crashes).
There are a few other small fixes in this patch:
- Refresh volume info when doing volume dumpxml
- Update volume capacity when doing a refresh
I've also attached an ugly Python script that can test this. Presuming
you have a pool named 'default', running
python sparse.py --vol-create-info
will launch an allocation and print vol.info() in a loop.
Feedback appreciated.
Thanks,
Cole
import libvirt
import threading
import time
import sys
import optparse

poolname = "default"
volname = "nonsparsetest"
testvol = "testvol"
uri = "qemu:///system"

testvolxml = """
<volume>
  <name>%s</name>
  <capacity>1048576000</capacity>
  <allocation>0</allocation>
  <target>
    <format type='raw'/>
  </target>
</volume>
""" % testvol

volxml = """
<volume>
  <name>%s</name>
  <capacity>1048576000</capacity>
  <allocation>800000000</allocation>
  <target>
    <format type='raw'/>
  </target>
</volume>
""" % volname

failvol = """
<volume>
  <name>%s</name>
  <capacity>1048576000</capacity>
  <allocation>1048576000</allocation>
  <target>
    <format type='bochs'/>
  </target>
</volume>
""" % volname

poolxml = """
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
"""

# Helper functions

def exception_wrapper(cmd, args):
    try:
        cmd(*args)
    except Exception, e:
        print str(e)

def make_test_vol():
    pool = get_pool()
    pool.createXML(testvolxml, 0)

def del_vol(name=volname):
    pool = get_pool()
    vol = pool.storageVolLookupByName(name)
    vol.delete(0)

def del_pool():
    pool = get_pool()
    pool.destroy()
    pool.undefine()

def define_pool():
    conn = libvirt.open(uri)
    pool = conn.storagePoolDefineXML(poolxml, 0)
    pool.create(0)
    pool.setAutostart(True)

def allocate_thread(xml=volxml):
    try:
        del_vol()
    except:
        pass
    pool = get_pool()
    print "creating vol in thread"
    vol = pool.createXML(xml, 0)
    print "creating vol complete."

def info_thread(vol):
    for i in range(0, 40):
        time.sleep(.5)
        print vol.info()

def get_pool(name=poolname):
    conn = libvirt.open(uri)
    return conn.storagePoolLookupByName(name)

def get_vol(name=volname):
    pool = get_pool()
    return pool.storageVolLookupByName(name)

def cmd_vol_create(xml=volxml):
    exception_wrapper(define_pool, ())
    pool = get_pool()
    print pool.listVolumes()

    t = threading.Thread(target=allocate_thread, name="Allocating",
                         args=(xml,))
    t.start()

    time.sleep(5)
    print "\nRefreshing pool and dumping list"
    pool.refresh(0)
    print pool.listVolumes()

def cmd_vol_fail():
    cmd_vol_create(failvol)

def cmd_vol_poll():
    cmd_vol_create()
    vol = get_vol()
    t = threading.Thread(target=info_thread, name="Getting info",
                         args=(vol,))
    t.start()

def main():
    parser = optparse.OptionParser()
    parser.add_option("", "--vol-create", action="store_true",
                      dest="vol_create",
                      help="Create a nonsparse volume that should succeed")
    parser.add_option("", "--vol-create-info", action="store_true",
                      dest="vol_create_info",
                      help="Create a nonsparse volume that should succeed, "
                           "and list info in a loop")
    parser.add_option("", "--vol-create-fail", action="store_true",
                      dest="vol_fail",
                      help="Create a volume that will fail at the allocate "
                           "stage")

    options, ignore = parser.parse_args()

    if options.vol_create:
        cmd_vol_create()
    elif options.vol_create_info:
        cmd_vol_poll()
    elif options.vol_fail:
        cmd_vol_fail()
    else:
        parser.print_help()
        sys.exit(1)

if __name__ == "__main__":
    main()
[libvirt] first cut public API for physical host interface configuration
by Laine Stump
To get started integrating libnetcf support into libvirt, the attached
libvirt.h diff has a first attempt at the public API that will hook up
to libnetcf on the libvirtd side. I started out with the virNetwork*
API, and modified/removed as seemed appropriate.
A few points worth mentioning:
virNetwork has "defined" and "active" networks, but so far
virInterface just has interfaces, as the active/inactive status is
really controlled by 1) the "onboot" property in the XML, and 2) whether
or not virInterfaceStart() has been called yet.
The _virInterface struct that is referenced here is more or less
identical to _virNetwork.
libnetcf works with netcf structs (one per library instance) and
netcf_if structs (one per interface, multiple per library instance). As
far as I can see right now, the netcf struct will not need to be
referenced directly from the client side, but the netcf_if for each
interface is needed, and I guess a cookie representing it will be sent
to the client side and stored in the _virInterface. Is that what the
UUID in _virNetwork is used for? (I know, I know - "read the code!",
it's just quicker (and avoids misreading on my part) to ask)
As before, any and all advice/corrections gratefully accepted!
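To make the intended usage a little more concrete, here is roughly how I
picture a client driving this once bindings exist. This is purely a sketch:
the Python method names are my guesses mirroring the C functions in the diff
below, not an existing API.

import libvirt

conn = libvirt.open("qemu:///system")

# Enumerate the host interfaces libnetcf knows about and dump their config.
for name in conn.listInterfaces():            # hypothetical binding name
    iface = conn.interfaceLookupByName(name)  # hypothetical binding name
    print iface.XMLDesc(0)

# Bring a particular interface up and back down again ("ifup"/"ifdown").
eth0 = conn.interfaceLookupByName("eth0")
eth0.start()                                  # ~ virInterfaceStart
eth0.stop()                                   # ~ virInterfaceStop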
diff --git a/include/libvirt/libvirt.h b/include/libvirt/libvirt.h
index 779ea72..cac400e 100644
--- a/include/libvirt/libvirt.h
+++ b/include/libvirt/libvirt.h
@@ -854,7 +854,73 @@ int virNetworkGetAutostart (virNetworkPtr network,
int virNetworkSetAutostart (virNetworkPtr network,
int autostart);
+/*
+ * Physical host interface configuration API
+ */
+
+/**
+ * virInterface:
+ *
+ * a virInterface is a private structure representing a virtual interface.
+ */
+typedef struct _virInterface virInterface;
+
+/**
+ * virInterfacePtr:
+ *
+ * a virInterfacePtr is pointer to a virInterface private structure, this is the
+ * type used to reference a virtual interface in the API.
+ */
+typedef virInterface *virInterfacePtr;
+
+/*
+ * Get connection from interface.
+ */
+virConnectPtr virInterfaceGetConnect (virInterfacePtr interface);
+
+/*
+ * List defined interfaces
+ */
+int virConnectNumOfInterfaces (virConnectPtr conn);
+int virConnectListInterfaces (virConnectPtr conn,
+ char **const names,
+ int maxnames);
+
+/*
+ * Lookup interface by name
+ */
+virInterfacePtr virInterfaceLookupByName (virConnectPtr conn,
+ const char *name);
+
+/*
+ * Define interface (or modify existing interface configuration)
+ */
+virInterfacePtr virInterfaceDefineXML (virConnectPtr conn,
+ const char *xmlDesc);
+
+/*
+ * Delete interface
+ */
+int virInterfaceUndefine (virInterfacePtr interface);
+
+/*
+ * Activate interface (ie call "ifup")
+ */
+int virInterfaceStart (virInterfacePtr interface);
+
+/*
+ * De-activate interface (call "ifdown")
+ */
+int virInterfaceStop (virInterfacePtr interface);
+
+/*
+ * Interface information
+ */
+const char* virInterfaceGetName (virInterfacePtr interface);
+char * virInterfaceGetXMLDesc (virInterfacePtr interface,
+ int flags);
/**
* virStoragePool:
*
[libvirt] virt-top
by Zvi Dubitzky
Are the sources of virt-top available in some repository to look at?
I do not see them in the Download section of the libvirt web site.
Thanks,
Zvi Dubitzky
Virtualization and System Architecture Email:dubi@il.ibm.com
IBM Haifa Research Laboratory Phone: +972-4-8296182
Haifa, 31905, ISRAEL
[libvirt] save/ restore a domain with kvm
by Zvi Dubitzky
I am working with virsh.
The version virsh reports is:
virsh # version
Compiled against library: libvir 0.4.4
Using library: libvir 0.4.4
Using API: QEMU 0.4.4
Running hypervisor: QEMU 0.9.1
Under virsh I do the following:
1. suspend a running VM
2. save it to a file with the 'save' command
3. restore the saved domain file with the 'restore' command, either while
the domain is still suspended or after it was shut down/destroyed
(no longer seen with 'list --all'); the equivalent API calls are
sketched below
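The equivalent sequence through the Python bindings, with a placeholder
domain name and the save-file path used below, is roughly:

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("vm_dubi2")              # placeholder domain name

dom.suspend()                                    # step 1: pause the guest
dom.save("/home/dubi/xml/vm_dubi2.sav")          # step 2: save state; the domain stops
conn.restore("/home/dubi/xml/vm_dubi2.sav")      # step 3: recreate it from the file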
In either case I get an error message saying:
virsh # restore /home/dubi/xml/vm_dubi2.sav
libvir: QEMU error : operation failed: failed to start VM
error: Failed to restore domain from /home/dubi/xml/vm_dubi2.xml
Any idea what the failure reason is?
The content of the saved domain file is attached.
Zvi Dubitzky
Virtualization and System Architecture Email:dubi@il.ibm.com
IBM Haifa Research Laboratory Phone: +972-4-8296182
Haifa, 31905, ISRAEL
[libvirt] [PATCH 1/2] VirtualBox support to libvirt
by Pritesh Kothari
Hi All,
I have attached a patch which, when applied on top of today's HEAD, adds
VirtualBox support to libvirt.
The patch works very well with the VirtualBox OSE version and the 2.2 Beta
release.
[PATCH 1/2] contains diff of files already in libvirt.
[PATCH 2/2] contains new files needed for VirtualBox support.
Regards,
-pritesh
[libvirt] Improve heuristic for default guest architecture
by Soren Hansen
In libvirt 0.6.1, if you create a domain description of type 'kvm'
without an arch set on an x86-64 host, you would get an i686 qemu guest
rather than the expected x86-64 kvm guest.
This is because virCapabilitiesDefaultGuestArch doesn't take the domain
type into consideration, so it just returns the first hvm architecture
that has been registered, which is i686.
After applying Dan P's patch,
http://www.redhat.com/archives/libvir-list/2009-March/msg00281.html,
I now get an i686 kvm guest, since kvm can now do i686 guests from
libvirt. This is certainly an improvement, but I think a more reasonable
default is to attempt to match the host's architecture.
This patch makes virCapabilitiesDefaultGuestArch also check the domain
type, and gives preference to a guest architecture that matches the
host's architecture.
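For reference, the guest/arch combinations the heuristic iterates over can be
seen by dumping the capabilities XML, e.g. through the Python bindings (a
trivial sketch):

import libvirt

conn = libvirt.open("qemu:///system")
# The <guest> elements appear roughly in the order the drivers registered
# them; without this patch, the first <arch> matching the requested ostype
# wins, regardless of domain type.
print conn.getCapabilities()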
Index: libvirt-0.6.1/src/capabilities.c
===================================================================
--- libvirt-0.6.1.orig/src/capabilities.c 2009-03-19 15:18:09.483317579 +0100
+++ libvirt-0.6.1/src/capabilities.c 2009-03-19 15:42:31.027341187 +0100
@@ -468,14 +468,26 @@
*/
extern const char *
virCapabilitiesDefaultGuestArch(virCapsPtr caps,
- const char *ostype)
+ const char *ostype,
+ const char *domain)
{
- int i;
+ int i, j;
+ const char *arch = NULL;
for (i = 0 ; i < caps->nguests ; i++) {
- if (STREQ(caps->guests[i]->ostype, ostype))
- return caps->guests[i]->arch.name;
+ if (STREQ(caps->guests[i]->ostype, ostype)) {
+ for (j = 0 ; j < caps->guests[i]->arch.ndomains ; j++) {
+ if (STREQ(caps->guests[i]->arch.domains[j]->type, domain)) {
+ /* Use the first match... */
+ if (!arch)
+ arch = caps->guests[i]->arch.name;
+ /* ...unless we can match the host's architecture. */
+ if (STREQ(caps->guests[i]->arch.name, caps->host.arch))
+ return caps->guests[i]->arch.name;
+ }
+ }
+ }
}
- return NULL;
+ return arch;
}
/**
Index: libvirt-0.6.1/src/capabilities.h
===================================================================
--- libvirt-0.6.1.orig/src/capabilities.h 2009-03-19 15:18:09.507338228 +0100
+++ libvirt-0.6.1/src/capabilities.h 2009-03-19 15:42:31.027341187 +0100
@@ -177,7 +177,8 @@
extern const char *
virCapabilitiesDefaultGuestArch(virCapsPtr caps,
- const char *ostype);
+ const char *ostype,
+ const char *domain);
extern const char *
virCapabilitiesDefaultGuestMachine(virCapsPtr caps,
const char *ostype,
Index: libvirt-0.6.1/src/domain_conf.c
===================================================================
--- libvirt-0.6.1.orig/src/domain_conf.c 2009-03-19 15:18:09.531341976 +0100
+++ libvirt-0.6.1/src/domain_conf.c 2009-03-19 15:42:31.031345327 +0100
@@ -2146,7 +2146,7 @@
goto error;
}
} else {
- const char *defaultArch = virCapabilitiesDefaultGuestArch(caps, def->os.type);
+ const char *defaultArch = virCapabilitiesDefaultGuestArch(caps, def->os.type, virDomainVirtTypeToString(def->virtType));
if (defaultArch == NULL) {
virDomainReportError(conn, VIR_ERR_INTERNAL_ERROR,
_("no supported architecture for os type '%s'"),
Index: libvirt-0.6.1/src/xm_internal.c
===================================================================
--- libvirt-0.6.1.orig/src/xm_internal.c 2009-03-19 15:18:09.559316828 +0100
+++ libvirt-0.6.1/src/xm_internal.c 2009-03-19 15:42:45.807318313 +0100
@@ -695,7 +695,7 @@
if (!(def->os.type = strdup(hvm ? "hvm" : "xen")))
goto no_memory;
- defaultArch = virCapabilitiesDefaultGuestArch(priv->caps, def->os.type);
+ defaultArch = virCapabilitiesDefaultGuestArch(priv->caps, def->os.type, virDomainVirtTypeToString(def->virtType));
if (defaultArch == NULL) {
xenXMError(conn, VIR_ERR_INTERNAL_ERROR,
_("no supported architecture for os type '%s'"),
--
Soren Hansen |
Lead Virtualisation Engineer | Ubuntu Server Team
Canonical Ltd. | http://www.ubuntu.com/
[libvirt] [PATCH] Adding filesystem mount support for openVZ
by Florian Vichot
Hi everyone,
This patch is to allow using the "mount" type in the "filesystem" tag
for OpenVZ domains.
Example:
...
<filesystem type='mount'>
  <source dir='/path/to/filesystem/directory/' />
  <target dir='/path/to/pivot/root/' />
</filesystem>
...
This is my first patch to an external project, so don't spare me if I
got things wrong :)
Also, I'd welcome suggestions on how I could allow the target not to be
specified in the XML. In that case OpenVZ just makes a temporary pivot
root in "/var/lib/vz/root/", and that is probably sufficient for most
people, who might not want to explicitly create a pivot root somewhere
just for mounting the filesystem while the domain is running.
I was thinking of either allowing the target tag to be omitted, or
adding an "auto" attribute to target. Which one sounds better?
Thanks,
Florian
[libvirt] VM cpuTime from libvirt
by Zvi Dubitzky
Currently, according to libvirt, cat /proc/<pid>/stat (where pid is the
PID of the VM's QEMU process) gives the utime + stime of the VM.
Unfortunately, I notice that this is actually the elapsed time of the
host. I found this by using libvirt to sample the cputime of each VM
process and comparing it to the total elapsed time of the host Linux
machine. Roughly speaking, assuming full VM vcpu utilization, the cpu
utilization of every VM comes out as (# of VM vcpus / total # of host
cpus). This implies that there is no idle time per VM (while actually
there is). We only know the idle time of the host, via the top command
on the host.
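For reference, this is essentially how I sample it through the Python
bindings (a sketch; the normalisation at the end is my own and the domain
name is a placeholder):

import time
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("VM1")            # placeholder domain name
host_cpus = conn.getInfo()[2]             # number of physical CPUs on the host

t0, c0 = time.time(), dom.info()[4]       # dom.info()[4] = cumulative cpuTime (ns)
time.sleep(5)
t1, c1 = time.time(), dom.info()[4]

# Fraction of total host CPU this VM consumed over the interval, as a percentage.
usage = 100.0 * (c1 - c0) / ((t1 - t0) * host_cpus * 1e9)
print usage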
Is the cputime of a VM (from cat /proc/<pid>/stat of the QEMU process, as
used by libvirt) really the cputime of the VM?
I tested with a host having 2 sockets x 4 cpus = 8 cpus in total, and
assigned 4 cpus to VM1 and VM2. libvirt gave equal cputime for each VM,
which is equal to the total elapsed time of the machine. But even if VM1
has 4 vcpus and VM2 has 8 vcpus, the cputime of each VM
(from cat /proc/<pid>/stat) is the elapsed time.
The truth is that I am running the libvirt application on the host
machine, and the application does its waiting there. Should that matter much?
Each guest's idle time is needed from KVM for the real cpu utilization
calculation to be independent of the guest OS.
At least for Linux we can manually run 'top' in each guest's terminal
window, but I do not know whether it will show the real idle time or the
total machine (host) idle time. At least Linux has no idle process.
Besides, this is not a good programmatic way to get the VM idle time.
Is there a cure, or am I missing something?
thanks
Zvi Dubitzky
Virtualization and System Architecture Email:dubi@il.ibm.com
IBM Haifa Research Laboratory Phone: +972-4-8296182
Haifa, 31905, ISRAEL
[libvirt] [PATCH 1/2] [Plain text ] OpenNebula driver, libvirt-0.6.1
by "Abel Míguez Rodríguez"
Hi all,
I apologize for the format used before.
thanks,
> Hi all,
> We have updated the OpenNebula Driver to libvirt version 0.6.1.
> Now, the ONE driver is built into the libvirtd daemon, which is the natural place for it.
> Please feel free to make any comments to improve the driver's coherence with libvirt's structure.
> I split the patches into two e-mails:
> [PATCH 1/2] includes the patches to be applied to libvirt's sources and build files.
> [PATCH 2/2] attaches the "one driver" source files.
> All the patches are made to be applied on top of the git commit
> "025b62" (Fix subsystem lookup for older HAL releases).
> Thanks,
> Abel Miguez
diff --git a/configure.in b/configure.in
index 413d27c..fd75a8d 100644
--- a/configure.in
+++ b/configure.in
@@ -184,6 +184,8 @@ AC_ARG_WITH([openvz],
[ --with-openvz add OpenVZ support (on)],[],[with_openvz=yes])
AC_ARG_WITH([lxc],
[ --with-lxc add Linux Container support (on)],[],[with_lxc=yes])
+AC_ARG_WITH([one],
+[ --with-one add ONE support (on)],[],[with_one=no])
AC_ARG_WITH([test],
[ --with-test add test driver support (on)],[],[with_test=yes])
AC_ARG_WITH([remote],
@@ -399,6 +401,17 @@ dnl check for kvm headers
dnl
AC_CHECK_HEADERS([linux/kvm.h])
+dnl OpenNebula driver Compilation setting
+dnl
+
+if test "$with_one" = "yes" ; then
+ LIBVIRT_FEATURES="$LIBVIRT_FEATURES -DWITH_ONE -I$ONE_LOCATION/include"
+ ONE_LIBS="-L/usr/local/lib -lxmlrpc_client++ -lxmlrpc -lxmlrpc_util -lxmlrpc_xmlparse -lxmlrpc_xmltok -lxmlrpc++ -lxmlrpc_client -L$ONE_LOCATION/lib -loneapi"
+ AC_SUBST([ONE_LIBS])
+ AC_DEFINE_UNQUOTED([WITH_ONE],1,[whether Open Nebula Driver is enabled])
+fi
+AM_CONDITIONAL([WITH_ONE],[test "$with_one" = "yes"])
+
dnl Need to test if pkg-config exists
PKG_PROG_PKG_CONFIG
@@ -1345,6 +1358,7 @@ AC_MSG_NOTICE([ QEMU: $with_qemu])
AC_MSG_NOTICE([ UML: $with_uml])
AC_MSG_NOTICE([ OpenVZ: $with_openvz])
AC_MSG_NOTICE([ LXC: $with_lxc])
+AC_MSG_NOTICE([ ONE: $with_one])
AC_MSG_NOTICE([ Test: $with_test])
AC_MSG_NOTICE([ Remote: $with_remote])
AC_MSG_NOTICE([ Network: $with_network])
diff --git a/include/libvirt/virterror.h b/include/libvirt/virterror.h
index 2c3777d..cc98e45 100644
--- a/include/libvirt/virterror.h
+++ b/include/libvirt/virterror.h
@@ -61,6 +61,7 @@ typedef enum {
VIR_FROM_UML, /* Error at the UML driver */
VIR_FROM_NODEDEV, /* Error from node device monitor */
VIR_FROM_XEN_INOTIFY, /* Error from xen inotify layer */
+ VIR_FROM_ONE, /* Error from ONE driver */
VIR_FROM_SECURITY, /* Error from security framework */
} virErrorDomain;
diff --git a/qemud/Makefile.am b/qemud/Makefile.am
index 924e8ad..9d7f61f 100644
--- a/qemud/Makefile.am
+++ b/qemud/Makefile.am
@@ -120,6 +120,10 @@ if WITH_UML
libvirtd_LDADD += ../src/libvirt_driver_uml.la
endif
+if WITH_ONE
+ libvirtd_LDADD += ../src/libvirt_driver_one.la
+endif
+
if WITH_STORAGE_DIR
libvirtd_LDADD += ../src/libvirt_driver_storage.la
endif
diff --git a/qemud/qemud.c b/qemud/qemud.c
index 4f04355..e1d6113 100644
--- a/qemud/qemud.c
+++ b/qemud/qemud.c
@@ -78,6 +78,9 @@
#ifdef WITH_NETWORK
#include "network_driver.h"
#endif
+#ifdef WITH_ONE
+#include "one_driver.h"
+#endif
#ifdef WITH_STORAGE_DIR
#include "storage_driver.h"
#endif
@@ -841,6 +844,8 @@ static struct qemud_server *qemudInitialize(int sigread) {
virDriverLoadModule("qemu");
virDriverLoadModule("lxc");
virDriverLoadModule("uml");
+ virDriverLoadModule("one");
+
#else
#ifdef WITH_NETWORK
networkRegister();
@@ -861,6 +866,10 @@ static struct qemud_server *qemudInitialize(int sigread) {
#ifdef WITH_UML
umlRegister();
#endif
+#ifdef WITH_ONE
+ oneRegister ();
+#endif
+
#endif
virEventRegisterImpl(virEventAddHandleImpl,
diff --git a/src/Makefile.am b/src/Makefile.am
index d5aac11..a5c2084 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -137,6 +137,10 @@ UML_DRIVER_SOURCES = \
uml_conf.c uml_conf.h \
uml_driver.c uml_driver.h
+ONE_DRIVER_SOURCES = \
+ one_conf.c one_conf.h \
+ one_driver.c one_driver.h
+
NETWORK_DRIVER_SOURCES = \
network_driver.h network_driver.c
@@ -314,6 +318,22 @@ endif
libvirt_driver_uml_la_SOURCES = $(UML_DRIVER_SOURCES)
endif
+if WITH_ONE
+if WITH_DRIVER_MODULES
+mod_LTLIBRARIES += libvirt_driver_one.la
+else
+noinst_LTLIBRARIES += libvirt_driver_one.la
+# Stateful, so linked to daemon instead
+#libvirt_la_LIBADD += libvirt_driver_one.la
+endif
+libvirt_driver_one_la_LDFLAGS = $(ONE_LIBS)
+libvirt_driver_one_la_CFLAGS = "-DWITH_ONE"
+if WITH_DRIVER_MODULES
+libvirt_driver_one_la_LDFLAGS += -module -avoid-version
+endif
+libvirt_driver_one_la_SOURCES = $(ONE_DRIVER_SOURCES)
+endif
+
if WITH_NETWORK
if WITH_DRIVER_MODULES
mod_LTLIBRARIES += libvirt_driver_network.la
@@ -402,6 +422,7 @@ EXTRA_DIST += \
$(QEMU_DRIVER_SOURCES) \
$(LXC_DRIVER_SOURCES) \
$(UML_DRIVER_SOURCES) \
+ $(ONE_DRIVER_SOURCES) \
$(OPENVZ_DRIVER_SOURCES) \
$(NETWORK_DRIVER_SOURCES) \
$(STORAGE_DRIVER_SOURCES) \
diff --git a/src/domain_conf.c b/src/domain_conf.c
index 5bf3483..e4d3249 100644
--- a/src/domain_conf.c
+++ b/src/domain_conf.c
@@ -54,7 +54,9 @@ VIR_ENUM_IMPL(virDomainVirt, VIR_DOMAIN_VIRT_LAST,
"ldom",
"test",
"vmware",
- "hyperv")
+ "hyperv",
+ "one")
+
VIR_ENUM_IMPL(virDomainBoot, VIR_DOMAIN_BOOT_LAST,
"fd",
diff --git a/src/domain_conf.h b/src/domain_conf.h
index dd61467..e8a2bff 100644
--- a/src/domain_conf.h
+++ b/src/domain_conf.h
@@ -48,6 +48,7 @@ enum virDomainVirtType {
VIR_DOMAIN_VIRT_TEST,
VIR_DOMAIN_VIRT_VMWARE,
VIR_DOMAIN_VIRT_HYPERV,
+ VIR_DOMAIN_VIRT_ONE,
VIR_DOMAIN_VIRT_LAST,
};
diff --git a/src/driver.h b/src/driver.h
index 62d6fbc..ed3eef7 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -20,6 +20,7 @@ typedef enum {
VIR_DRV_OPENVZ = 5,
VIR_DRV_LXC = 6,
VIR_DRV_UML = 7,
+ VIR_DRV_ONE = 8,
} virDrvNo;
diff --git a/src/libvirt.c b/src/libvirt.c
index bf3453a..cd4b5b7 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -830,6 +830,10 @@ virGetVersion(unsigned long *libVer, const char *type,
if (STRCASEEQ(type, "OpenVZ"))
*typeVer = LIBVIR_VERSION_NUMBER;
#endif
+#if WITH_ONE
+ if (STRCASEEQ(type, "ONE"))
+ *typeVer = LIBVIR_VERSION_NUMBER;
+#endif
#if WITH_UML
if (STRCASEEQ(type, "UML"))
*typeVer = LIBVIR_VERSION_NUMBER;
----
Distributed System Architecture Group
(http://dsa-research.org)
GridWay, http://www.gridway.org
OpenNEbula, http://www.opennebula.org