[libvirt] [PATCH] numad: Set memory policy according to the advisory nodeset from numad
by Osier Yang
Though numad manages the memory allocation of tasks dynamically, it
expects the management application (libvirt) to pre-set the memory
policy according to the advisory nodeset returned from querying numad
(just like pre-binding the CPU nodeset for the domain process), so that
performance can benefit much more from it.
This patch introduces the new XML attribute 'placement'; the value
'auto' indicates that the memory policy should be set with the advisory
nodeset from numad, and the value defaults to 'static'. E.g.
<numatune>
<memory placement='auto' mode='interleave'/>
</numatune>
Just like what the current "numatune" does, the 'auto' NUMA memory
policy setting uses libnuma's API too.
So, to fully drive numad, one needs to specify placement='auto' for
both "<vcpu>" and "<numatune><memory .../></numatune>". It's a bit
inconvenient, but makes sense from a semantic point of view.
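To spell out that last point, a guest that fully drives numad carries the 'auto' placement in both places; a minimal illustrative fragment of such a config:

```xml
<vcpu placement='auto'>4</vcpu>
<numatune>
  <memory placement='auto' mode='interleave'/>
</numatune>
```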
---
An alternative way is to not introduce the new XML attribute and pre-set
the memory policy implicitly with "<vcpu placement='auto'>4</vcpu>",
but IMHO that implies too much, and I'd rather not go this way unless
the new attribute is not accepted.
---
docs/formatdomain.html.in | 11 ++-
docs/schemas/domaincommon.rng | 39 +++++++---
libvirt.spec.in | 1 +
src/conf/domain_conf.c | 96 ++++++++++++++++--------
src/conf/domain_conf.h | 9 ++
src/libvirt_private.syms | 2 +
src/qemu/qemu_process.c | 79 +++++++++++--------
tests/qemuxml2argvdata/qemuxml2argv-numad.args | 4 +
tests/qemuxml2argvdata/qemuxml2argv-numad.xml | 31 ++++++++
tests/qemuxml2argvtest.c | 1 +
10 files changed, 194 insertions(+), 79 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-numad.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-numad.xml
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index e1fe0c4..01b3124 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -580,9 +580,14 @@
The optional <code>memory</code> element specifies how to allocate memory
for the domain process on a NUMA host. It contains two attributes,
attribute <code>mode</code> is either 'interleave', 'strict',
- or 'preferred',
- attribute <code>nodeset</code> specifies the NUMA nodes, it leads same
- syntax with attribute <code>cpuset</code> of element <code>vcpu</code>.
+ or 'preferred', attribute <code>nodeset</code> specifies the NUMA nodes,
+ it leads same syntax with attribute <code>cpuset</code> of element
+ <code>vcpu</code>, the optional attribute <code>placement</code> can be
+ used to indicate the memory placement mode for domain process, its value
+ can be either "static" or "auto", defaults to "static". "auto" indicates
+ the domain process will only allocate memory from the advisory nodeset
+ returned from querying numad, and the value of attribute <code>nodeset</code>
+ will be ignored if it's specified.
<span class='since'>Since 0.9.3</span>
</dd>
</dl>
diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng
index 8419ccc..9b509f1 100644
--- a/docs/schemas/domaincommon.rng
+++ b/docs/schemas/domaincommon.rng
@@ -562,16 +562,35 @@
<element name="numatune">
<optional>
<element name="memory">
- <attribute name="mode">
- <choice>
- <value>strict</value>
- <value>preferred</value>
- <value>interleave</value>
- </choice>
- </attribute>
- <attribute name="nodeset">
- <ref name="cpuset"/>
- </attribute>
+ <choice>
+ <group>
+ <attribute name="mode">
+ <choice>
+ <value>strict</value>
+ <value>preferred</value>
+ <value>interleave</value>
+ </choice>
+ </attribute>
+ <attribute name='placement'>
+ <choice>
+ <value>static</value>
+ <value>auto</value>
+ </choice>
+ </attribute>
+ </group>
+ <group>
+ <attribute name="mode">
+ <choice>
+ <value>strict</value>
+ <value>preferred</value>
+ <value>interleave</value>
+ </choice>
+ </attribute>
+ <attribute name="nodeset">
+ <ref name="cpuset"/>
+ </attribute>
+ </group>
+ </choice>
</element>
</optional>
</element>
diff --git a/libvirt.spec.in b/libvirt.spec.in
index e7e0a55..0b119c2 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -454,6 +454,7 @@ BuildRequires: scrub
%if %{with_numad}
BuildRequires: numad
+BuildRequires: numactl-devel
%endif
%description
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 3fce7e5..b728cb6 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -640,6 +640,11 @@ VIR_ENUM_IMPL(virDomainDiskTray, VIR_DOMAIN_DISK_TRAY_LAST,
"closed",
"open");
+VIR_ENUM_IMPL(virDomainNumatuneMemPlacementMode,
+ VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_LAST,
+ "static",
+ "auto");
+
#define virDomainReportError(code, ...) \
virReportErrorHelper(VIR_FROM_DOMAIN, code, __FILE__, \
__FUNCTION__, __LINE__, __VA_ARGS__)
@@ -8023,30 +8028,22 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
while (cur != NULL) {
if (cur->type == XML_ELEMENT_NODE) {
if (xmlStrEqual(cur->name, BAD_CAST "memory")) {
- tmp = virXMLPropString(cur, "nodeset");
-
+ tmp = virXMLPropString(cur, "placement");
+ int placement_mode;
if (tmp) {
- char *set = tmp;
- int nodemasklen = VIR_DOMAIN_CPUMASK_LEN;
-
- if (VIR_ALLOC_N(def->numatune.memory.nodemask,
- nodemasklen) < 0) {
- virReportOOMError();
+ if ((placement_mode =
+ virDomainNumatuneMemPlacementModeTypeFromString(tmp)) < 0) {
+ virDomainReportError(VIR_ERR_XML_ERROR,
+ _("Unsupported memory placement "
+ "mode '%s'"), tmp);
+ VIR_FREE(tmp);
goto error;
}
-
- /* "nodeset" leads same syntax with "cpuset". */
- if (virDomainCpuSetParse(set, 0,
- def->numatune.memory.nodemask,
- nodemasklen) < 0)
- goto error;
VIR_FREE(tmp);
} else {
- virDomainReportError(VIR_ERR_XML_ERROR, "%s",
- _("nodeset for NUMA memory "
- "tuning must be set"));
- goto error;
+ placement_mode = VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_STATIC;
}
+ def->numatune.memory.placement_mode = placement_mode;
tmp = virXMLPropString(cur, "mode");
if (tmp) {
@@ -8055,13 +8052,40 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
virDomainReportError(VIR_ERR_XML_ERROR,
_("Unsupported NUMA memory "
"tuning mode '%s'"),
- tmp);
+ tmp);
goto error;
}
VIR_FREE(tmp);
} else {
def->numatune.memory.mode = VIR_DOMAIN_NUMATUNE_MEM_STRICT;
}
+
+ if (placement_mode ==
+ VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_STATIC) {
+ tmp = virXMLPropString(cur, "nodeset");
+ if (tmp) {
+ char *set = tmp;
+ int nodemasklen = VIR_DOMAIN_CPUMASK_LEN;
+
+ if (VIR_ALLOC_N(def->numatune.memory.nodemask,
+ nodemasklen) < 0) {
+ virReportOOMError();
+ goto error;
+ }
+
+ /* "nodeset" leads same syntax with "cpuset". */
+ if (virDomainCpuSetParse(set, 0,
+ def->numatune.memory.nodemask,
+ nodemasklen) < 0)
+ goto error;
+ VIR_FREE(tmp);
+ } else {
+ virDomainReportError(VIR_ERR_XML_ERROR, "%s",
+ _("nodeset for NUMA memory "
+ "tuning must be set"));
+ goto error;
+ }
+ }
} else {
virDomainReportError(VIR_ERR_XML_ERROR,
_("unsupported XML element %s"),
@@ -12491,25 +12515,33 @@ virDomainDefFormatInternal(virDomainDefPtr def,
def->cputune.period || def->cputune.quota)
virBufferAddLit(buf, " </cputune>\n");
- if (def->numatune.memory.nodemask) {
+ if (def->numatune.memory.nodemask ||
+ (def->numatune.memory.placement_mode ==
+ VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_AUTO)) {
virBufferAddLit(buf, " <numatune>\n");
const char *mode;
char *nodemask = NULL;
-
- nodemask = virDomainCpuSetFormat(def->numatune.memory.nodemask,
- VIR_DOMAIN_CPUMASK_LEN);
- if (nodemask == NULL) {
- virDomainReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("failed to format nodeset for "
- "NUMA memory tuning"));
- goto cleanup;
- }
+ const char *placement;
mode = virDomainNumatuneMemModeTypeToString(def->numatune.memory.mode);
- virBufferAsprintf(buf, " <memory mode='%s' nodeset='%s'/>\n",
- mode, nodemask);
- VIR_FREE(nodemask);
+ virBufferAsprintf(buf, " <memory mode='%s' ", mode);
+ if (def->numatune.memory.placement_mode ==
+ VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_STATIC) {
+ nodemask = virDomainCpuSetFormat(def->numatune.memory.nodemask,
+ VIR_DOMAIN_CPUMASK_LEN);
+ if (nodemask == NULL) {
+ virDomainReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("failed to format nodeset for "
+ "NUMA memory tuning"));
+ goto cleanup;
+ }
+ virBufferAsprintf(buf, "nodeset='%s'/>\n", nodemask);
+ VIR_FREE(nodemask);
+ } else {
+ placement = virDomainNumatuneMemPlacementModeTypeToString(def->numatune.memory.placement_mode);
+ virBufferAsprintf(buf, "placement='%s'/>\n", placement);
+ }
virBufferAddLit(buf, " </numatune>\n");
}
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 5aa8fc1..9aade99 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -1416,6 +1416,13 @@ enum virDomainCpuPlacementMode {
VIR_DOMAIN_CPU_PLACEMENT_MODE_LAST,
};
+enum virDomainNumatuneMemPlacementMode {
+ VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_STATIC = 0,
+ VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_AUTO,
+
+ VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_LAST,
+};
+
typedef struct _virDomainTimerCatchupDef virDomainTimerCatchupDef;
typedef virDomainTimerCatchupDef *virDomainTimerCatchupDefPtr;
struct _virDomainTimerCatchupDef {
@@ -1504,6 +1511,7 @@ struct _virDomainNumatuneDef {
struct {
char *nodemask;
int mode;
+ int placement_mode;
} memory;
/* Future NUMA tuning related stuff should go here. */
@@ -2176,6 +2184,7 @@ VIR_ENUM_DECL(virDomainGraphicsSpiceStreamingMode)
VIR_ENUM_DECL(virDomainGraphicsSpiceClipboardCopypaste)
VIR_ENUM_DECL(virDomainGraphicsSpiceMouseMode)
VIR_ENUM_DECL(virDomainNumatuneMemMode)
+VIR_ENUM_DECL(virDomainNumatuneMemPlacementMode)
VIR_ENUM_DECL(virDomainSnapshotState)
/* from libvirt.h */
VIR_ENUM_DECL(virDomainState)
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 88f8a21..c9c1486 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -400,6 +400,8 @@ virDomainNostateReasonTypeFromString;
virDomainNostateReasonTypeToString;
virDomainNumatuneMemModeTypeFromString;
virDomainNumatuneMemModeTypeToString;
+virDomainNumatuneMemPlacementModeTypeFromString;
+virDomainNumatuneMemPlacementModeTypeToString;
virDomainObjAssignDef;
virDomainObjCopyPersistentDef;
virDomainObjGetPersistentDef;
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index f1401e1..72beb83 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1657,7 +1657,8 @@ qemuProcessDetectVcpuPIDs(struct qemud_driver *driver,
*/
#if HAVE_NUMACTL
static int
-qemuProcessInitNumaMemoryPolicy(virDomainObjPtr vm)
+qemuProcessInitNumaMemoryPolicy(virDomainObjPtr vm,
+ const char *nodemask)
{
nodemask_t mask;
int mode = -1;
@@ -1666,9 +1667,18 @@ qemuProcessInitNumaMemoryPolicy(virDomainObjPtr vm)
int i = 0;
int maxnode = 0;
bool warned = false;
+ virDomainNumatuneDef numatune = vm->def->numatune;
+ const char *tmp_nodemask = NULL;
- if (!vm->def->numatune.memory.nodemask)
- return 0;
+ if (numatune.memory.placement_mode ==
+ VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_STATIC) {
+ if (!numatune.memory.nodemask)
+ return 0;
+ tmp_nodemask = numatune.memory.nodemask;
+ } else if (numatune.memory.placement_mode ==
+ VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_AUTO) {
+ tmp_nodemask = nodemask;
+ }
VIR_DEBUG("Setting NUMA memory policy");
@@ -1679,11 +1689,10 @@ qemuProcessInitNumaMemoryPolicy(virDomainObjPtr vm)
}
maxnode = numa_max_node() + 1;
-
/* Convert nodemask to NUMA bitmask. */
nodemask_zero(&mask);
for (i = 0; i < VIR_DOMAIN_CPUMASK_LEN; i++) {
- if (vm->def->numatune.memory.nodemask[i]) {
+ if (tmp_nodemask[i]) {
if (i > NUMA_NUM_NODES) {
qemuReportError(VIR_ERR_INTERNAL_ERROR,
_("Host cannot support NUMA node %d"), i);
@@ -1693,12 +1702,12 @@ qemuProcessInitNumaMemoryPolicy(virDomainObjPtr vm)
VIR_WARN("nodeset is out of range, there is only %d NUMA "
"nodes on host", maxnode);
warned = true;
- }
+ }
nodemask_set(&mask, i);
}
}
- mode = vm->def->numatune.memory.mode;
+ mode = numatune.memory.mode;
if (mode == VIR_DOMAIN_NUMATUNE_MEM_STRICT) {
numa_set_bind_policy(1);
@@ -1789,7 +1798,8 @@ qemuGetNumadAdvice(virDomainDefPtr def ATTRIBUTE_UNUSED)
*/
static int
qemuProcessInitCpuAffinity(struct qemud_driver *driver,
- virDomainObjPtr vm)
+ virDomainObjPtr vm,
+ const char *nodemask)
{
int ret = -1;
int i, hostcpus, maxcpu = QEMUD_CPUMASK_LEN;
@@ -1815,27 +1825,6 @@ qemuProcessInitCpuAffinity(struct qemud_driver *driver,
}
if (vm->def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO) {
- char *nodeset = NULL;
- char *nodemask = NULL;
-
- nodeset = qemuGetNumadAdvice(vm->def);
- if (!nodeset)
- goto cleanup;
-
- if (VIR_ALLOC_N(nodemask, VIR_DOMAIN_CPUMASK_LEN) < 0) {
- virReportOOMError();
- VIR_FREE(nodeset);
- goto cleanup;
- }
-
- if (virDomainCpuSetParse(nodeset, 0, nodemask,
- VIR_DOMAIN_CPUMASK_LEN) < 0) {
- VIR_FREE(nodemask);
- VIR_FREE(nodeset);
- goto cleanup;
- }
- VIR_FREE(nodeset);
-
/* numad returns the NUMA node list, convert it to cpumap */
int prev_total_ncpus = 0;
for (i = 0; i < driver->caps->host.nnumaCell; i++) {
@@ -1852,8 +1841,6 @@ qemuProcessInitCpuAffinity(struct qemud_driver *driver,
}
prev_total_ncpus += cur_ncpus;
}
-
- VIR_FREE(nodemask);
} else {
if (vm->def->cpumask) {
/* XXX why don't we keep 'cpumask' in the libvirt cpumap
@@ -2564,6 +2551,8 @@ static int qemuProcessHook(void *data)
struct qemuProcessHookData *h = data;
int ret = -1;
int fd;
+ char *nodeset = NULL;
+ char *nodemask = NULL;
/* Some later calls want pid present */
h->vm->pid = getpid();
@@ -2597,14 +2586,34 @@ static int qemuProcessHook(void *data)
if (qemuAddToCgroup(h->driver, h->vm->def) < 0)
goto cleanup;
+ if ((h->vm->def->placement_mode ==
+ VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO) ||
+ (h->vm->def->numatune.memory.placement_mode ==
+ VIR_DOMAIN_NUMATUNE_MEM_PLACEMENT_MODE_AUTO)) {
+ nodeset = qemuGetNumadAdvice(h->vm->def);
+ if (!nodeset)
+ goto cleanup;
+
+ VIR_DEBUG("Nodeset returned from numad: %s", nodeset);
+
+ if (VIR_ALLOC_N(nodemask, VIR_DOMAIN_CPUMASK_LEN) < 0) {
+ virReportOOMError();
+ goto cleanup;
+ }
+
+ if (virDomainCpuSetParse(nodeset, 0, nodemask,
+ VIR_DOMAIN_CPUMASK_LEN) < 0)
+ goto cleanup;
+ }
+
/* This must be done after cgroup placement to avoid resetting CPU
* affinity */
VIR_DEBUG("Setup CPU affinity");
- if (qemuProcessInitCpuAffinity(h->driver, h->vm) < 0)
+ if (qemuProcessInitCpuAffinity(h->driver, h->vm, nodemask) < 0)
goto cleanup;
- if (qemuProcessInitNumaMemoryPolicy(h->vm) < 0)
- return -1;
+ if (qemuProcessInitNumaMemoryPolicy(h->vm, nodemask) < 0)
+ goto cleanup;
VIR_DEBUG("Setting up security labelling");
if (virSecurityManagerSetProcessLabel(h->driver->securityManager, h->vm->def) < 0)
@@ -2614,6 +2623,8 @@ static int qemuProcessHook(void *data)
cleanup:
VIR_DEBUG("Hook complete ret=%d", ret);
+ VIR_FREE(nodeset);
+ VIR_FREE(nodemask);
return ret;
}
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-numad.args b/tests/qemuxml2argvdata/qemuxml2argv-numad.args
new file mode 100644
index 0000000..23bcb70
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-numad.args
@@ -0,0 +1,4 @@
+LC_ALL=C PATH=/bin HOME=/home/test USER=test LOGNAME=test /usr/bin/qemu -S -M \
+pc -m 214 -smp 2 -nographic -monitor \
+unix:/tmp/test-monitor,server,nowait -no-acpi -boot c -hda \
+/dev/HostVG/QEMUGuest1 -net none -serial none -parallel none -usb
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-numad.xml b/tests/qemuxml2argvdata/qemuxml2argv-numad.xml
new file mode 100644
index 0000000..c87ec49
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-numad.xml
@@ -0,0 +1,31 @@
+<domain type='qemu'>
+ <name>QEMUGuest1</name>
+ <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
+ <memory unit='KiB'>219136</memory>
+ <currentMemory unit='KiB'>219136</currentMemory>
+ <vcpu placement='auto'>2</vcpu>
+ <numatune>
+ <memory mode="interleave" placement='auto'/>
+ </numatune>
+ <os>
+ <type arch='i686' machine='pc'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <cpu>
+ <topology sockets='2' cores='1' threads='1'/>
+ </cpu>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu</emulator>
+ <disk type='block' device='disk'>
+ <source dev='/dev/HostVG/QEMUGuest1'/>
+ <target dev='hda' bus='ide'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+ </disk>
+ <controller type='ide' index='0'/>
+ <memballoon model='virtio'/>
+ </devices>
+</domain>
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index e47a385..3e529e2 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -732,6 +732,7 @@ mymain(void)
DO_TEST("blkiotune-device", false, QEMU_CAPS_NAME);
DO_TEST("cputune", false, QEMU_CAPS_NAME);
DO_TEST("numatune-memory", false, NONE);
+ DO_TEST("numad", false, NONE);
DO_TEST("blkdeviotune", false, QEMU_CAPS_NAME, QEMU_CAPS_DEVICE,
QEMU_CAPS_DRIVE, QEMU_CAPS_DRIVE_IOTUNE);
--
1.7.7.3
12 years, 6 months
[libvirt] internal error hostname of destination resolved to localhost, but migration requires an FQDN
by William Herry
Hi,
I use virsh to migrate a KVM virtual machine, and it shows me this error.
I already use the IP address in my command:
virsh migrate --live kvm-test qemu+ssh://root@my_ip:port/system
After a few tries I made it work by changing the hostname of the
destination to something other than localhost.
This looks like a bug, since I already gave virsh the IP address.
Or is there anything I misunderstood?
regards
--
===========================
William Herry
WilliamHerryChina(a)Gmail.com
[libvirt] [PATCHv6 0/8] live block migration
by Eric Blake
v5: https://www.redhat.com/archives/libvir-list/2012-April/msg00753.html
Differences in v6:
- rebased on top of accepted patches v5:1-4/23 and latest tree
- corresponds to patches v5:7-14/23
- patch 5/8 is new
- patch v5:12/23 is dropped; qemu is giving us something better, but
I still need to finish writing that patch
- patch v5:11/23 comments were incorporated, with better cleanup on error
- tweak series to deal with potential for qemu 1.1 to support copy
but not pivot
- limit series to just the bare minimum for now; patches from v5:15/23
are still on my tree, but not worth submitting until we know more about
what qemu will provide
I'm posting this more as a reference for my efforts to backport this
to RHEL 6, where I have tested against a candidate qemu build; I don't
think this is ready for upstream until we have confirmation that
actual patches have gone into at least the qemu block queue for 1.1.
Eric Blake (8):
blockjob: react to active block copy
blockjob: add qemu capabilities related to block jobs
blockjob: return appropriate event and info
blockjob: support pivot operation on cancel
blockjob: make drive-reopen safer
blockjob: implement block copy for qemu
blockjob: allow for existing files
blockjob: allow mirroring under SELinux
src/conf/domain_conf.c | 12 ++
src/conf/domain_conf.h | 1 +
src/libvirt_private.syms | 1 +
src/qemu/qemu_capabilities.c | 3 +
src/qemu/qemu_capabilities.h | 2 +
src/qemu/qemu_driver.c | 396 +++++++++++++++++++++++++++++++++++++++++-
src/qemu/qemu_hotplug.c | 7 +
src/qemu/qemu_monitor.c | 37 ++++
src/qemu/qemu_monitor.h | 11 ++
src/qemu/qemu_monitor_json.c | 67 +++++++
src/qemu/qemu_monitor_json.h | 18 ++-
11 files changed, 549 insertions(+), 6 deletions(-)
--
1.7.7.6
[libvirt] [PATCH] virsh: avoid heap corruption leading to virsh abort
by Jim Meyering
Investigating a build problem reported by Laine,
I was surprised to see "make check" fail on F17 due to a
glibc invalid free abort. Ok to push to master?
From 61a559e0b2f4bded3059c5be7c958e2276f7fd16 Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering(a)redhat.com>
Date: Mon, 7 May 2012 21:22:09 +0200
Subject: [PATCH] virsh: avoid heap corruption leading to virsh abort
* tools/virsh.c (vshParseSnapshotDiskspec): Fix off-by-3 memmove
that would corrupt heap when parsing escaped --diskspec comma.
Bug introduced via commit v0.9.4-260-g35d52b5.
---
tools/virsh.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index 1207ac9..dd9292a 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -16107,7 +16107,7 @@ vshParseSnapshotDiskspec(vshControl *ctl, virBufferPtr buf, const char *str)
while ((tmp = strchr(tmp, ','))) {
if (tmp[1] == ',') {
/* Recognize ,, as an escape for a literal comma */
- memmove(&tmp[1], &tmp[2], len - (tmp - spec) + 2);
+ memmove(&tmp[1], &tmp[2], len - (tmp - spec) - 2 + 1);
len--;
tmp++;
continue;
--
1.7.10.1.457.g8275905
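For anyone puzzling over the arithmetic, here is a minimal standalone sketch of the corrected collapse loop (the function name and test strings are illustrative, not libvirt's):

```c
#include <string.h>

/* Collapse each ",," escape in spec to a single literal comma, in place.
 * This mirrors the corrected arithmetic from the patch: when ",," is
 * found at tmp, the bytes that must shift left by one are everything
 * after the second comma -- len - (tmp - spec) - 2 of them -- plus the
 * trailing NUL.  The buggy "+ 2" count copied three bytes too many,
 * reading past the end of the buffer and corrupting the heap. */
void collapse_escaped_commas(char *spec)
{
    size_t len = strlen(spec);
    char *tmp = spec;

    while ((tmp = strchr(tmp, ','))) {
        if (tmp[1] == ',') {
            memmove(&tmp[1], &tmp[2], len - (tmp - spec) - 2 + 1);
            len--;
            tmp++;          /* keep the comma we just un-escaped */
            continue;
        }
        tmp++;              /* lone comma: an ordinary field separator */
    }
}
```

Calling collapse_escaped_commas() on "vda,,raw" yields "vda,raw"; with the old count the memmove would have run past the terminating NUL.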
[libvirt] [PATCHv3 0/4] util: fix libvirtd startup failure due to netlink error
by Laine Stump
The initial patch in this series is the same as the single patch I
provided before, but in addition there are 3 new patches that take
care of the fallout from 1/4 by:
2/4) removing hard-coding of the src nl_pid when virNetLinkCommand is
sending a message
3/4) adding a helper function to retrieve the local nl_pid from a
libnl socket
and
4/4) using that value when appropriate (I think).
As before, I'm unable to fully test this myself, so I won't push
unless/until I get verification it works.
[libvirt] [PATCH] openvz: simplify openvzDomainDefineCmd by using virCommandPtr
by Guido Günther
---
While working on setting filesystem quotas I came across this, and I
figured I'd send it in advance because of the diffstat.
Cheers,
-- Guido
src/openvz/openvz_driver.c | 79 +++++++++++---------------------------------
1 file changed, 19 insertions(+), 60 deletions(-)
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index 91f5d49..8cceed6 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -100,68 +100,31 @@ static void cmdExecFree(const char *cmdExec[])
/* generate arguments to create OpenVZ container
return -1 - error
0 - OK
+ Caller has to free the cmd
*/
-static int
-openvzDomainDefineCmd(const char *args[],
- int maxarg, virDomainDefPtr vmdef)
+static virCommandPtr
+openvzDomainDefineCmd(virDomainDefPtr vmdef)
{
- int narg;
-
- for (narg = 0; narg < maxarg; narg++)
- args[narg] = NULL;
+ virCommandPtr cmd = virCommandNewArgList(VZCTL,
+ "--quiet",
+ "create",
+ NULL);
if (vmdef == NULL) {
openvzError(VIR_ERR_INTERNAL_ERROR, "%s",
_("Container is not defined"));
- return -1;
+ virCommandFree(cmd);
+ return NULL;
}
-#define ADD_ARG(thisarg) \
- do { \
- if (narg >= maxarg) \
- goto no_memory; \
- args[narg++] = thisarg; \
- } while (0)
-
-#define ADD_ARG_LIT(thisarg) \
- do { \
- if (narg >= maxarg) \
- goto no_memory; \
- if ((args[narg++] = strdup(thisarg)) == NULL) \
- goto no_memory; \
- } while (0)
-
- narg = 0;
- ADD_ARG_LIT(VZCTL);
- ADD_ARG_LIT("--quiet");
- ADD_ARG_LIT("create");
-
- ADD_ARG_LIT(vmdef->name);
- ADD_ARG_LIT("--name");
- ADD_ARG_LIT(vmdef->name);
+ virCommandAddArgList(cmd, vmdef->name, "--name", vmdef->name, NULL);
if (vmdef->nfss == 1 &&
vmdef->fss[0]->type == VIR_DOMAIN_FS_TYPE_TEMPLATE) {
- ADD_ARG_LIT("--ostemplate");
- ADD_ARG_LIT(vmdef->fss[0]->src);
- }
-#if 0
- if ((vmdef->profile && *(vmdef->profile))) {
- ADD_ARG_LIT("--config");
- ADD_ARG_LIT(vmdef->profile);
+ virCommandAddArgList(cmd, "--ostemplate", vmdef->fss[0]->src, NULL);
}
-#endif
- ADD_ARG(NULL);
- return 0;
-
-no_memory:
- openvzError(VIR_ERR_INTERNAL_ERROR,
- _("Could not put argument to %s"), VZCTL);
- return -1;
-
-#undef ADD_ARG
-#undef ADD_ARG_LIT
+ return cmd;
}
@@ -170,8 +133,7 @@ static int openvzSetInitialConfig(virDomainDefPtr vmdef)
int ret = -1;
int vpsid;
char * confdir = NULL;
- const char *prog[OPENVZ_MAX_ARG];
- prog[0] = NULL;
+ virCommandPtr cmd = NULL;
if (vmdef->nfss > 1) {
openvzError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -210,24 +172,21 @@ static int openvzSetInitialConfig(virDomainDefPtr vmdef)
_("Could not set the source dir for the filesystem"));
goto cleanup;
}
- }
- else
- {
- if (openvzDomainDefineCmd(prog, OPENVZ_MAX_ARG, vmdef) < 0) {
- VIR_ERROR(_("Error creating command for container"));
+ } else {
+ cmd = openvzDomainDefineCmd(vmdef);
+ if (cmd == NULL)
goto cleanup;
- }
- if (virRun(prog, NULL) < 0) {
+ if (virCommandRun(cmd, NULL) < 0)
goto cleanup;
- }
}
ret = 0;
cleanup:
VIR_FREE(confdir);
- cmdExecFree(prog);
+ virCommandFree(cmd);
+
return ret;
}
--
1.7.10
[libvirt] [PATCH] docs: Add 'maintenance releases' link in 'News' sidebar
by Cole Robinson
Signed-off-by: Cole Robinson <crobinso(a)redhat.com>
---
docs/sitemap.html.in | 8 +++++++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/docs/sitemap.html.in b/docs/sitemap.html.in
index 8f58d46..206040c 100644
--- a/docs/sitemap.html.in
+++ b/docs/sitemap.html.in
@@ -13,8 +13,14 @@
<span>Details of new features and bugs fixed in each release</span>
<ul>
<li>
+ <a href="http://wiki.libvirt.org/page/Maintenance_Releases">Maintenance Releases</a>
+ <span>Details about libvirt maintenance releases</span>
+ </li>
+ </ul>
+ <ul>
+ <li>
<a href="http://libvirt.org/git/?p=libvirt.git;a=log">Git log</a>
- <span>Latest commit messages from the source repository </span>
+ <span>Latest commit messages from the source repository</span>
</li>
</ul>
</li>
--
1.7.7.6
[libvirt] [PATCH 0/2] Fix migration to older versions of libvirt
by Jiri Denemark
This series deals with migrating to older versions of libvirt broken by
unconditionally added USB controller into domain XML.
Jiri Denemark (2):
qemu: Don't use virDomainDefFormat* directly
qemu: Emit compatible XML when migrating a domain
src/qemu/qemu_domain.c | 88 ++++++++++++++++++++++++++++++++++++++++-----
src/qemu/qemu_domain.h | 15 ++++++--
src/qemu/qemu_driver.c | 26 ++++++++------
src/qemu/qemu_migration.c | 30 +++++++++------
src/qemu/qemu_process.c | 8 ++--
5 files changed, 128 insertions(+), 39 deletions(-)
--
1.7.8.6
[libvirt] Submission Deadline Extension
by VHPC 12
We apologize if you receive multiple copies of this CFP.
===================================================================
CALL FOR PAPERS
7th Workshop on
Virtualization in High-Performance Cloud Computing
VHPC '12
as part of Euro-Par 2012, Rhodes Island, Greece
===================================================================
Date: August 28, 2012
Workshop URL: http://vhpc.org
SUBMISSION DEADLINE:
June 11, 2012 - Full paper submission (extended)
SCOPE:
Virtualization has become a common abstraction layer in modern
data centers, enabling resource owners to manage complex
infrastructure independently of their applications. Conjointly,
virtualization is becoming a driving technology for a manifold of
industry grade IT services. The cloud concept includes the notion
of a separation between resource owners and users, adding services
such as hosted application frameworks and queueing. Utilizing the
same infrastructure, clouds carry significant potential for use in
high-performance scientific computing. The ability of clouds to provide
for requests and releases of vast computing resources dynamically and
close to the marginal cost of providing the services is unprecedented in
the history of scientific and commercial computing.
Distributed computing concepts that leverage federated resource
access are popular within the grid community, but have not seen
previously desired deployed levels so far. Also, many of the scientific
data centers have not adopted virtualization or cloud concepts yet.
This workshop aims to bring together industrial providers with the
scientific community in order to foster discussion, collaboration
and mutual exchange of knowledge and experience.
The workshop will be one day in length, composed of 20 min
paper presentations, each followed by 10 min discussion sections.
Presentations may be accompanied by interactive demonstrations.
TOPICS
Topics of interest include, but are not limited to:
Higher-level cloud architectures, focusing on issues such as:
- Languages for describing highly-distributed compute jobs
- Workload characterization for VM-based environments
- Optimized communication libraries/protocols in the cloud
- Cross-layer optimization of numeric algorithms on VM infrastructure
- System and process/bytecode VM convergence
- Cloud frameworks and API sets
- Checkpointing/migration of large compute jobs
- Instrumentation interfaces and languages
- VMM performance (auto-)tuning on various load types
- Cloud reliability, fault-tolerance, and security
- Software as a Service (SaaS) architectures
- Research and education use cases
- Virtualization in cloud, cluster and grid environments
- Cross-layer VM optimizations
- Cloud use cases including optimizations
- VM-based cloud performance modelling
- Performance and cost modelling
Lower-level design challenges for Hypervisors, VM-aware I/O devices,
hardware accelerators or filesystems in VM environments, especially:
- Cloud, grid and distributed filesystems
- Hardware for I/O virtualization (storage/network/accelerators)
- Storage and network I/O subsystems in virtualized environments
- Novel software approaches to I/O virtualization
- Paravirtualized I/O subsystems for modified/unmodified guests
- Virtualization-aware cluster interconnects
- Direct device assignment
- NUMA-aware subsystems in virtualized environments
- Hardware Accelerators in virtualization (GPUs/FPGAs)
- Hardware extensions for virtualization
- VMMs/Hypervisors for embedded systems
Data Center management methods, including:
- QoS and and service levels
- VM cloud and cluster distribution algorithms
- VM load-balancing in Clouds
- Hypervisor extensions and tools for cluster and grid computing
- Fault tolerant VM environments
- Virtual machine monitor platforms
- Management, deployment and monitoring of VM-based environments
- Cluster provisioning in the Cloud
PAPER SUBMISSION
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include abstract, key words, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
Accepted papers will be published in the Springer LNCS series - the
format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
Format Guidelines: http://www.springer.de/comp/lncs/authors.html
Style template:
ftp://ftp.springer.de/pub/tex/latex/llncs/latex2e/llncs2e.zip
Abstract Submission Link: http://edas.info/newPaper.php?c=11943
IMPORTANT DATES
Rolling abstract submission
June 11, 2012 - Full paper submission (extended)
June 29, 2012 - Acceptance notification
July 20, 2012 - Camera-ready version due
August 28, 2012 - Workshop Date
CHAIR
Michael Alexander (chair), TU Wien, Austria
Gianluigi Zanetti (co-chair), CRS4, Italy
Anastassios Nanos (co-chair), NTUA, Greece
PROGRAM COMMITTEE
Paolo Anedda, CRS4, Italy
Giovanni Busonera, CRS4, Italy
Brad Calder, Microsoft, USA
Roberto Canonico, University of Napoli Federico II, Italy
Tommaso Cucinotta, Alcatel-Lucent Bell Labs, Ireland
Werner Fischer, Thomas-Krenn AG, Germany
William Gardner, University of Guelph, USA
Marcus Hardt, Forschungszentrum Karlsruhe, Germany
Sverre Jarp, CERN, Switzerland
Shantenu Jha, Louisiana State University, USA
Xuxian Jiang, NC State, USA
Nectarios Koziris, National Technical University of Athens, Greece
Simone Leo, CRS4, Italy
Ignacio Llorente, Universidad Complutense de Madrid, Spain
Naoya Maruyama, Tokyo Institute of Technology, Japan
Jean-Marc Menaud, Ecole des Mines de Nantes, France
Dimitrios Nikolopoulos, Foundation for Research&Technology Hellas, Greece
Jose Renato Santos, HP Labs, USA
Walter Schwaiger, TU Wien, Austria
Yoshio Turner, HP Labs, USA
Kurt Tutschku, University of Vienna, Austria
Lizhe Wang, Indiana University, USA
Chao-Tung Yang, Tunghai University, Taiwan
DURATION: Workshop Duration is one day.
GENERAL INFORMATION
The workshop will be held as part of Euro-Par 2012.
Euro-Par 2012: http://europar2012.cti.gr/