[libvirt] [PATCH] Fix python error reporting for some storage operations
by Cole Robinson
In the python bindings, all vir* classes expect to be
passed a virConnect object when instantiated. Before
the storage stuff, these classes were only instantiated
in virConnect methods, so the generator is hardcoded to
pass 'self' as the connection instance to these classes.
The problem is that there are some methods that return pool or vol
instances which aren't called from virConnect: you can
look up a storage volume's associated pool, and can look up
volumes from a pool. In these cases passing 'self' doesn't
give the vir* instance a connection, so when it comes time
to raise an exception, crap hits the fan.
Rather than rework the generator to accommodate this edge
case, I just fixed the init functions for virStorage* to
pull the associated connection out of the passed value
if it's not a virConnect instance.
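For illustration, the __init__ that the generator now emits for the
storage classes looks roughly like this (a sketch reconstructed from
the patch below, not the verbatim generated code):

    class virStorageVol:
        def __init__(self, conn, _obj=None):
            # conn may be a virConnect, or another vir* instance (e.g. a
            # virStoragePool) that carries its own connection reference
            self._conn = conn
            if not isinstance(conn, virConnect):
                self._conn = conn._conn
            if _obj != None:
                self._o = _obj
                return
            self._o = None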
Thanks,
Cole
diff --git a/python/generator.py b/python/generator.py
index 01a17da..c706b19 100755
--- a/python/generator.py
+++ b/python/generator.py
@@ -962,8 +962,12 @@ def buildWrappers():
list = reference_keepers[classname]
for ref in list:
classes.write(" self.%s = None\n" % ref[1])
- if classname in [ "virDomain", "virNetwork", "virStoragePool", "virStorageVol" ]:
+ if classname in [ "virDomain", "virNetwork" ]:
classes.write(" self._conn = conn\n")
+ elif classname in [ "virStorageVol", "virStoragePool" ]:
+ classes.write(" self._conn = conn\n" + \
+ " if not isinstance(conn, virConnect):\n" + \
+ " self._conn = conn._conn\n")
classes.write(" if _obj != None:self._o = _obj;return\n")
classes.write(" self._o = None\n\n");
destruct=None
[libvirt] Supporting vhost-net and macvtap in libvirt for QEMU
by Anthony Liguori
Disclaimer: I am neither an SR-IOV nor a vhost-net expert, but I've CC'd
people that are who can throw tomatoes at me for getting bits wrong :-)
I wanted to start a discussion about supporting vhost-net in libvirt.
vhost-net has not yet been merged into qemu, but I expect it will be soon,
so it's a good time to start this discussion.
There are two modes worth supporting for vhost-net in libvirt. The
first mode is where vhost-net backs to a tun/tap device. This
behaves in very much the same way that -net tap behaves in qemu today.
Basically, the difference is that the virtio backend is in the kernel
instead of in qemu so there should be some performance improvement.
Currently, libvirt invokes qemu with -net tap,fd=X, where X is an already
open fd to a tun/tap device. I suspect that after we merge vhost-net,
libvirt could support vhost-net in this mode by just doing -net
vhost,fd=X. I think the only real question for libvirt is whether to
provide a user-visible switch to use vhost, or to just always use vhost
when it's available and it makes sense. Personally, I think the latter
makes sense.
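To make that concrete, the change on the qemu command line would be
roughly the following (the -net vhost syntax is speculative per the
above, and the fd number is illustrative):

    qemu ... -net nic,model=virtio -net tap,fd=42    # today: virtio backend in qemu
    qemu ... -net nic,model=virtio -net vhost,fd=42  # vhost: virtio backend in the kernel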
The more interesting invocation of vhost-net, though, is one where the
vhost-net device backs directly to a physical network card. In this
mode, vhost should get considerably better performance than the current
implementation. I don't know the syntax yet, but I think it's
reasonable to assume that it will look something like -net
tap,dev=eth0. The effect will be that eth0 is dedicated to the guest.
On most modern systems, there is a small number of network devices so
this model is not all that useful except when dealing with SR-IOV
adapters. In that case, each physical device can be exposed as many
virtual devices (VFs). There are a few restrictions here though. The
biggest is that currently, you can only change the number of VFs by
reloading a kernel module so it's really a parameter that must be set at
startup time.
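For example, with current drivers the VF count is typically a module
parameter, so changing it means a full module reload (driver and
parameter name are illustrative; max_vfs is the Intel ixgbe convention):

    # rmmod ixgbe
    # modprobe ixgbe max_vfs=8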
I think there are a few ways libvirt could support vhost-net in this
second mode. The simplest would be to introduce a new tag similar to
<source network='br0'>. In fact, if you probed the device type for the
network parameter, you could probably do something like <source
network='eth0'> and have it Just Work.
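A sketch of what such a guest interface element might look like (this
syntax is hypothetical; nothing parses it today):

    <interface type='network'>
      <source network='eth0'/>
      <model type='virtio'/>
    </interface>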
Another model would be to have libvirt see an SR-IOV adapter as a
network pool, with libvirt handling all of the VF management. Considering
how inflexible SR-IOV is today, I'm not sure whether this is the best model.
Has anyone put any more thought into this problem or how this should be
modeled in libvirt? Michael, could you share your current thinking for
-net syntax?
--
Regards,
Anthony Liguori
[libvirt] [PATCH 0/4] Multiple problems with saving to block devices
by Daniel P. Berrange
This patch series makes it possible to save to a block device,
instead of a plain file. There were multiple problems
- When save failed, we might de-reference a NULL pointer
- When save failed, we unlinked the device node!!
- The approach of using >> to append doesn't work with block devices
- CGroups was blocking QEMU access to the block device when enabled
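A note on the third point (my reading, not spelled out in the patches):
'>>' opens its target with O_APPEND, which positions every write at the
current end of the file. For a block device the end is the fixed end of
the device itself, not the end of the data written so far, so a command
shaped like

    sh -c 'save_helper >> /dev/sdb'

(illustrative only, not libvirt's actual invocation) cannot append the
migration stream directly after the previously written save header.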
One remaining problem is not in libvirt, but rather in QEMU. The QEMU
exec:-based migration often fails to detect failure of the command
and will thus hang forever attempting a migration that'll never
succeed! Fortunately you can now work around this in libvirt using
the virsh domjobabort command.
[libvirt] [PATCHv4 00/51] another round of snapshot patches
by Eric Blake
I think I've addressed most findings from round 3 - by implementing
the ability to redefine a snapshot, it becomes possible to restore
the snapshot hierarchy when recreating a transient domain by the same
name. New goodies in this round: several bug fixes, add virsh
snapshot-edit, drop undefine --snapshots-full (you can only remove
snapshot metadata on undefine). I tested as I went, but this went
through so many rebases that there may be some nasties that snuck
in; but I wanted to get this posted now. I also know that I'm
missing at least one major feature requested in the v3 review:
namely, transient domains _should_ auto-remove snapshot metadata
files when they halt, but right now aren't doing that.
v3 was at:
https://www.redhat.com/archives/libvir-list/2011-August/msg01132.html
Also available here:
git fetch git://repo.or.cz/libvirt/ericb.git snapshot
or browse online at:
http://repo.or.cz/w/libvirt/ericb.git/shortlog/refs/heads/snapshot
I'm also trying to group things by the several Bugzilla entries related
to various patches (looks like I still need to create a few):
Eric Blake (51):
https://bugzilla.redhat.com/show_bug.cgi?id=674537
snapshot: fix corner case on OOM during creation
https://bugzilla.redhat.com/show_bug.cgi?id=733762
snapshot: better events when starting paused
snapshot: fine-tune ability to start paused
snapshot: expose --running and --paused in virsh
snapshot: fine-tune qemu saved images starting paused
snapshot: improve reverting to qemu paused snapshots
snapshot: properly revert qemu to offline snapshots
snapshot: fine-tune qemu snapshot revert states
no bug filed yet... should be one about no stale metadata
snapshot: allow deletion of just snapshot metadata
snapshot: add snapshot-list --parent to virsh
https://bugzilla.redhat.com/show_bug.cgi?id=733529
snapshot: speed up snapshot location
snapshot: avoid crash when deleting qemu snapshots
snapshot: track current domain across deletion of children
snapshot: simplify acting on just children
no bug filed yet... should be one about no stale metadata
snapshot: let qemu discard only snapshot metadata
snapshot: identify which snapshots have metadata
snapshot: reflect new dumpxml and list options in virsh
snapshot: identify qemu snapshot roots
snapshot: allow recreation of metadata
snapshot: refactor virsh snapshot creation
snapshot: improve virsh snapshot-create, add snapshot-edit
snapshot: add qemu snapshot creation without metadata
no bug filed yet... should be one about snapshot migration
snapshot: add qemu snapshot redefine support
snapshot: prevent stranding snapshot data on domain destruction
snapshot: teach virsh about new undefine flags
snapshot: refactor some qemu code
snapshot: cache qemu-img location
snapshot: support new undefine flags in qemu
snapshot: prevent migration from stranding snapshot data
https://bugzilla.redhat.com/show_bug.cgi?id=638510
snapshot: refactor domain xml output
snapshot: allow full domain xml in snapshot
snapshot: correctly escape generated xml
snapshot: update rng to support full domain in xml
snapshot: store qemu domain details in xml
snapshot: additions to domain xml for disks
snapshot: reject transient disks where code is not ready
snapshot: introduce new deletion flag
snapshot: expose new delete flag in virsh
snapshot: allow halting after snapshot
snapshot: expose halt-after-creation in virsh
snapshot: wire up new qemu monitor command
snapshot: support extra state in snapshots
snapshot: add <disks> to snapshot xml
snapshot: also support disks by path
snapshot: add virsh domblklist command
snapshot: add flag for requesting disk snapshot
snapshot: wire up disk-only flag to snapshot-create
snapshot: reject unimplemented disk snapshot features
snapshot: make it possible to audit external snapshot
snapshot: wire up live qemu disk snapshots
snapshot: use SELinux and lock manager with external snapshots
docs/formatdomain.html.in | 40 +-
docs/formatsnapshot.html.in | 269 ++-
docs/schemas/Makefile.am | 1 +
docs/schemas/domain.rng | 2555 +-------------------
docs/schemas/{domain.rng => domaincommon.rng} | 32 +-
docs/schemas/domainsnapshot.rng | 84 +-
examples/domain-events/events-c/event-test.c | 37 +-
include/libvirt/libvirt.h.in | 66 +-
src/conf/domain_audit.c | 12 +-
src/conf/domain_audit.h | 4 +-
src/conf/domain_conf.c | 902 ++++++--
src/conf/domain_conf.h | 76 +-
src/esx/esx_driver.c | 38 +-
src/libvirt.c | 256 ++-
src/libvirt_private.syms | 8 +
src/libxl/libxl_conf.c | 5 +
src/libxl/libxl_driver.c | 11 +-
src/qemu/qemu_command.c | 5 +
src/qemu/qemu_conf.h | 1 +
src/qemu/qemu_driver.c | 1532 +++++++++---
src/qemu/qemu_hotplug.c | 18 +-
src/qemu/qemu_migration.c | 48 +-
src/qemu/qemu_migration.h | 2 -
src/qemu/qemu_monitor.c | 24 +
src/qemu/qemu_monitor.h | 4 +
src/qemu/qemu_monitor_json.c | 33 +
src/qemu/qemu_monitor_json.h | 4 +
src/qemu/qemu_monitor_text.c | 40 +
src/qemu/qemu_monitor_text.h | 4 +
src/qemu/qemu_process.c | 11 +-
src/uml/uml_driver.c | 56 +-
src/vbox/vbox_tmpl.c | 43 +-
src/xen/xend_internal.c | 12 +-
src/xenxs/xen_sxpr.c | 5 +
src/xenxs/xen_xm.c | 5 +
tests/domainsnapshotxml2xmlin/disk_snapshot.xml | 16 +
tests/domainsnapshotxml2xmlout/disk_snapshot.xml | 77 +
tests/domainsnapshotxml2xmlout/full_domain.xml | 35 +
.../qemuxml2argv-disk-snapshot.args | 7 +
.../qemuxml2argv-disk-snapshot.xml | 39 +
.../qemuxml2argv-disk-transient.xml | 27 +
tests/qemuxml2argvtest.c | 2 +
tests/virsh-optparse | 20 +
tools/virsh.c | 772 +++++-
tools/virsh.pod | 214 ++-
45 files changed, 3978 insertions(+), 3474 deletions(-)
copy docs/schemas/{domain.rng => domaincommon.rng} (98%)
create mode 100644 tests/domainsnapshotxml2xmlin/disk_snapshot.xml
create mode 100644 tests/domainsnapshotxml2xmlout/disk_snapshot.xml
create mode 100644 tests/domainsnapshotxml2xmlout/full_domain.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-snapshot.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-snapshot.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-transient.xml
--
1.7.4.4
[libvirt] [PATCH] Wire up <loader> to set the QEMU BIOS path
by Daniel P. Berrange
From: "Daniel P. Berrange" <berrange(a)redhat.com>
* src/qemu/qemu_command.c: Wire up -bios with <loader>
* tests/qemuxml2argvdata/qemuxml2argv-bios.args,
tests/qemuxml2argvdata/qemuxml2argv-bios.xml: Expand
existing BIOS test case to cover <loader>
---
src/qemu/qemu_command.c | 9 +++++++++
tests/qemuxml2argvdata/qemuxml2argv-bios.args | 3 ++-
tests/qemuxml2argvdata/qemuxml2argv-bios.xml | 1 +
3 files changed, 12 insertions(+), 1 deletions(-)
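In short, condensed from the test case below: a domain definition
containing

    <os>
      <type arch='i686' machine='pc'>hvm</type>
      <loader>/usr/share/seabios/bios.bin</loader>
    </os>

now produces '-bios /usr/share/seabios/bios.bin' on the qemu command line.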
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index ea9431f..c82f5bc 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -4052,6 +4052,11 @@ qemuBuildCommandLine(virConnectPtr conn,
if (enableKVM)
virCommandAddArg(cmd, "-enable-kvm");
+ if (def->os.loader) {
+ virCommandAddArg(cmd, "-bios");
+ virCommandAddArg(cmd, def->os.loader);
+ }
+
/* Set '-m MB' based on maxmem, because the lower 'memory' limit
* is set post-startup using the balloon driver. If balloon driver
* is not supported, then they're out of luck anyway. Update the
@@ -7581,6 +7586,10 @@ virDomainDefPtr qemuParseCommandLine(virCapsPtr caps,
WANT_VALUE();
if (!(def->os.kernel = strdup(val)))
goto no_memory;
+ } else if (STREQ(arg, "-bios")) {
+ WANT_VALUE();
+ if (!(def->os.loader = strdup(val)))
+ goto no_memory;
} else if (STREQ(arg, "-initrd")) {
WANT_VALUE();
if (!(def->os.initrd = strdup(val)))
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-bios.args b/tests/qemuxml2argvdata/qemuxml2argv-bios.args
index f9727c4..ac98000 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-bios.args
+++ b/tests/qemuxml2argvdata/qemuxml2argv-bios.args
@@ -1,5 +1,6 @@
LC_ALL=C PATH=/bin HOME=/home/test USER=test LOGNAME=test QEMU_AUDIO_DRV=none \
-/usr/bin/qemu -S -M pc -m 1024 -smp 1 -nodefaults -device sga \
+/usr/bin/qemu -S -M pc -bios /usr/share/seabios/bios.bin \
+-m 1024 -smp 1 -nodefaults -device sga \
-monitor unix:/tmp/test-monitor,server,nowait -no-acpi -boot c \
-hda /dev/HostVG/QEMUGuest1 -serial pty \
-usb -device usb-tablet,id=input0 -vnc 127.0.0.1:0 \
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-bios.xml b/tests/qemuxml2argvdata/qemuxml2argv-bios.xml
index cfc5587..ac15d45 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-bios.xml
+++ b/tests/qemuxml2argvdata/qemuxml2argv-bios.xml
@@ -6,6 +6,7 @@
<vcpu>1</vcpu>
<os>
<type arch='i686' machine='pc'>hvm</type>
+ <loader>/usr/share/seabios/bios.bin</loader>
<boot dev='hd'/>
<bootmenu enable='yes'/>
<bios useserial='yes'/>
--
1.7.7.6
[libvirt] [Patch v2] vmware: detect when a domain was shut down from the inside
by Jean-Baptiste Rouault
This patch adds an internal function vmwareUpdateVMStatus to
update the real state of the domain. This function is used in
various places in the driver, in particular to detect when
the domain has been shut down by the user with the "halt"
command.
---
v2:
- Replace internal function vmwareGetVMStatus with vmwareUpdateVMStatus
- Improve vmrun list output parsing
- Variable initialization and coding-style fixes
src/vmware/vmware_driver.c | 95 ++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 95 insertions(+), 0 deletions(-)
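For reference, the 'vmrun list' output being parsed looks roughly like
this (paths are illustrative; the first line is a count header, hence
the check in the patch that a parsed line starts with '/'):

    Total running VMs: 2
    /var/lib/vmware/Guest1/Guest1.vmx
    /var/lib/vmware/Guest2/Guest2.vmx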
diff --git a/src/vmware/vmware_driver.c b/src/vmware/vmware_driver.c
index 8f9d922..53e28e7 100644
--- a/src/vmware/vmware_driver.c
+++ b/src/vmware/vmware_driver.c
@@ -28,6 +28,7 @@
#include "datatypes.h"
#include "virfile.h"
#include "memory.h"
+#include "util.h"
#include "uuid.h"
#include "command.h"
#include "vmx.h"
@@ -181,6 +182,64 @@ vmwareGetVersion(virConnectPtr conn, unsigned long *version)
}
static int
+vmwareUpdateVMStatus(struct vmware_driver *driver, virDomainObjPtr vm)
+{
+ virCommandPtr cmd;
+ char *outbuf = NULL;
+ char *vmxAbsolutePath = NULL;
+ char *parsedVmxPath = NULL;
+ char *str;
+ char *saveptr = NULL;
+ bool found = false;
+ int oldState = virDomainObjGetState(vm, NULL);
+ int newState;
+ int ret = -1;
+
+ cmd = virCommandNewArgList(VMRUN, "-T", vmw_types[driver->type],
+ "list", NULL);
+ virCommandSetOutputBuffer(cmd, &outbuf);
+ if (virCommandRun(cmd, NULL) < 0)
+ goto cleanup;
+
+ if (virFileResolveAllLinks(((vmwareDomainPtr) vm->privateData)->vmxPath,
+ &vmxAbsolutePath) < 0)
+ goto cleanup;
+
+ for(str = outbuf ; (parsedVmxPath = strtok_r(str, "\n", &saveptr)) != NULL;
+ str = NULL) {
+
+ if (parsedVmxPath[0] != '/')
+ continue;
+
+ if (STREQ(parsedVmxPath, vmxAbsolutePath)) {
+ found = true;
+ /* If the vmx path is in the output, the domain is running or
+ * is paused but we have no way to detect if it is paused or not. */
+ if (oldState == VIR_DOMAIN_PAUSED)
+ newState = oldState;
+ else
+ newState = VIR_DOMAIN_RUNNING;
+ break;
+ }
+ }
+
+ if (!found) {
+ vm->def->id = -1;
+ newState = VIR_DOMAIN_SHUTOFF;
+ }
+
+ virDomainObjSetState(vm, newState, 0);
+
+ ret = 0;
+
+cleanup:
+ virCommandFree(cmd);
+ VIR_FREE(outbuf);
+ VIR_FREE(vmxAbsolutePath);
+ return ret;
+}
+
+static int
vmwareStopVM(struct vmware_driver *driver,
virDomainObjPtr vm,
virDomainShutoffReason reason)
@@ -331,6 +390,9 @@ vmwareDomainShutdownFlags(virDomainPtr dom,
goto cleanup;
}
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
+
if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_RUNNING) {
vmwareError(VIR_ERR_INTERNAL_ERROR, "%s",
_("domain is not in running state"));
@@ -485,6 +547,8 @@ vmwareDomainReboot(virDomainPtr dom, unsigned int flags)
vmwareSetSentinal(cmd, vmw_types[driver->type]);
vmwareSetSentinal(cmd, vmxPath);
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_RUNNING) {
vmwareError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -596,6 +660,9 @@ vmwareDomainCreateWithFlags(virDomainPtr dom,
goto cleanup;
}
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
+
if (virDomainObjIsActive(vm)) {
vmwareError(VIR_ERR_OPERATION_INVALID,
"%s", _("Domain is already running"));
@@ -645,6 +712,9 @@ vmwareDomainUndefineFlags(virDomainPtr dom,
goto cleanup;
}
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
+
if (virDomainObjIsActive(vm)) {
vm->persistent = 0;
} else {
@@ -874,6 +944,21 @@ vmwareDomainXMLFromNative(virConnectPtr conn, const char *nativeFormat,
return xml;
}
+static void vmwareDomainObjListUpdateDomain(void *payload, const void *name ATTRIBUTE_UNUSED, void *data)
+{
+ struct vmware_driver *driver = data;
+ virDomainObjPtr vm = payload;
+ virDomainObjLock(vm);
+ vmwareUpdateVMStatus(driver, vm);
+ virDomainObjUnlock(vm);
+}
+
+static void
+vmwareDomainObjListUpdateAll(virDomainObjListPtr doms, struct vmware_driver *driver)
+{
+ virHashForEach(doms->objs, vmwareDomainObjListUpdateDomain, driver);
+}
+
static int
vmwareNumDefinedDomains(virConnectPtr conn)
{
@@ -881,6 +966,7 @@ vmwareNumDefinedDomains(virConnectPtr conn)
int n;
vmwareDriverLock(driver);
+ vmwareDomainObjListUpdateAll(&driver->domains, driver);
n = virDomainObjListNumOfDomains(&driver->domains, 0);
vmwareDriverUnlock(driver);
@@ -894,6 +980,7 @@ vmwareNumDomains(virConnectPtr conn)
int n;
vmwareDriverLock(driver);
+ vmwareDomainObjListUpdateAll(&driver->domains, driver);
n = virDomainObjListNumOfDomains(&driver->domains, 1);
vmwareDriverUnlock(driver);
@@ -908,6 +995,7 @@ vmwareListDomains(virConnectPtr conn, int *ids, int nids)
int n;
vmwareDriverLock(driver);
+ vmwareDomainObjListUpdateAll(&driver->domains, driver);
n = virDomainObjListGetActiveIDs(&driver->domains, ids, nids);
vmwareDriverUnlock(driver);
@@ -922,6 +1010,7 @@ vmwareListDefinedDomains(virConnectPtr conn,
int n;
vmwareDriverLock(driver);
+ vmwareDomainObjListUpdateAll(&driver->domains, driver);
n = virDomainObjListGetInactiveNames(&driver->domains, names, nnames);
vmwareDriverUnlock(driver);
return n;
@@ -944,6 +1033,9 @@ vmwareDomainGetInfo(virDomainPtr dom, virDomainInfoPtr info)
goto cleanup;
}
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
+
info->state = virDomainObjGetState(vm, NULL);
info->cpuTime = 0;
info->maxMem = vm->def->mem.max_balloon;
@@ -979,6 +1071,9 @@ vmwareDomainGetState(virDomainPtr dom,
goto cleanup;
}
+ if (vmwareUpdateVMStatus(driver, vm) < 0)
+ goto cleanup;
+
*state = virDomainObjGetState(vm, reason);
ret = 0;
--
1.7.9.1
[libvirt] [PATCH] Allow byte[] arrays to be set as a secretValue
by Wido den Hollander
Signed-off-by: Wido den Hollander <wido(a)widodh.nl>
---
src/main/java/org/libvirt/Secret.java | 11 +++++++++++
src/main/java/org/libvirt/jna/Libvirt.java | 1 +
2 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/src/main/java/org/libvirt/Secret.java b/src/main/java/org/libvirt/Secret.java
index e536cf4..48f7895 100644
--- a/src/main/java/org/libvirt/Secret.java
+++ b/src/main/java/org/libvirt/Secret.java
@@ -146,6 +146,17 @@ public class Secret {
}
/**
+ * Sets the value of the secret
+ *
+ * @return 0 on success, -1 on failure.
+ */
+ public int setValue(byte[] value) throws LibvirtException {
+ int returnValue = libvirt.virSecretSetValue(VSP, value, new NativeLong(value.length), 0);
+ processError();
+ return returnValue;
+ }
+
+ /**
* Undefines, but does not free, the Secret.
*
* @return 0 on success, -1 on failure.
diff --git a/src/main/java/org/libvirt/jna/Libvirt.java b/src/main/java/org/libvirt/jna/Libvirt.java
index 2c8c03d..b1e53a2 100644
--- a/src/main/java/org/libvirt/jna/Libvirt.java
+++ b/src/main/java/org/libvirt/jna/Libvirt.java
@@ -336,6 +336,7 @@ public interface Libvirt extends Library {
public SecretPointer virSecretLookupByUUID(ConnectionPointer virConnectPtr, byte[] uuidBytes);
public SecretPointer virSecretLookupByUUIDString(ConnectionPointer virConnectPtr, String uuidstr);
public int virSecretSetValue(SecretPointer virSecretPtr, String value, NativeLong value_size, int flags);
+ public int virSecretSetValue(SecretPointer virSecretPtr, byte[] value, NativeLong value_size, int flags);
public int virSecretUndefine(SecretPointer virSecretPtr);
//Stream Methods
--
1.7.5.4
[libvirt] Potential deadlock in libvirt lxc driver
by Thomas Hunger
Hi,
I'm running version 0.9.10 on Debian [1]. When starting and stopping
containers I see deadlocks every so often. I have attached a full
backtrace (see the end of the log). This seems to happen in conditions
similar to the segfault I wrote about recently, i.e. usually when
destroying domains.
I'm unsure why, but it seems to me that lxcMonitorEvent gets called
after a VM has been freed. This happens despite
lxcVmCleanup calling "virEventRemoveHandle(priv->monitorWatch);" to
remove the handle.
best,
Tom
[1]
Linux zap 3.1.0-1-amd64 #1 SMP Tue Jan 10 05:01:58 UTC 2012 x86_64 GNU/Linux
[libvirt] CPU topology 'sockets' handling guest vs host
by Daniel P. Berrange
On my x86_64 host I have a pair of quad-core CPUs, each in a separate
NUMA node. The virsh capabilities topology data reports this:
# virsh capabilities | xmllint --xpath /capabilities/host/cpu -
<cpu>
<arch>x86_64</arch>
<model>Opteron_G3</model>
<vendor>AMD</vendor>
<topology sockets="1" cores="4" threads="1"/>
<feature name="osvw"/>
<feature name="3dnowprefetch"/>
<feature name="cr8legacy"/>
<feature name="extapic"/>
<feature name="cmp_legacy"/>
<feature name="3dnow"/>
<feature name="3dnowext"/>
<feature name="pdpe1gb"/>
<feature name="fxsr_opt"/>
<feature name="mmxext"/>
<feature name="ht"/>
<feature name="vme"/>
</cpu>
# virsh capabilities | xmllint --xpath /capabilities/host/topology -
<topology>
<cells num="2">
<cell id="0">
<cpus num="4">
<cpu id="0"/>
<cpu id="1"/>
<cpu id="2"/>
<cpu id="3"/>
</cpus>
</cell>
<cell id="1">
<cpus num="4">
<cpu id="4"/>
<cpu id="5"/>
<cpu id="6"/>
<cpu id="7"/>
</cpus>
</cell>
</cells>
</topology>
Note, it is reporting sockets=1, because sockets is the number of sockets
*per* NUMA node.
Now I try to configure the guest to match the host using:
<cpu>
<topology sockets='1' cores='4' threads='1'/>
<numa>
<cell cpus='0-3' memory='512000'/>
<cell cpus='4-7' memory='512000'/>
</numa>
</cpu>
And I get:
error: Maximum CPUs greater than topology limit
So, the XML checker is mistaking 'sockets' for the total number of
sockets, rather than the per-node socket count. We need to fix this
bogus check.
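(Concretely: reading sockets as a per-node count, the guest defines
2 cells * 1 socket * 4 cores * 1 thread = 8 CPUs, matching the host's 8;
the checker instead computes 1 * 4 * 1 = 4 and rejects the 8 vCPUs.)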
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|