[libvirt] [PATCH] virsh.c: avoid leak on OOM error path
by Jim Meyering
No one really cares if we leak memory while dying, but who knows...
freeing that buffer may let us go down more gracefully.
FYI, the leak is triggered when virFileReadAll succeeds
(it allocates "buffer"), yet xmlNewDoc fails:
if (virFileReadAll(from, VIRSH_MAX_XML_FILE, &buffer) < 0)
return FALSE;
doc = xmlNewDoc(NULL);
if (doc == NULL)
goto no_memory;
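For clarity, a minimal generic sketch of the corrected error path follows. It is not the virsh code itself: plain malloc/calloc stand in for virFileReadAll() and xmlNewDoc(), and the function name is made up for illustration.

#include <stdio.h>
#include <stdlib.h>

static int
load_and_parse(const char *path)
{
    FILE *fp = fopen(path, "r");
    char *buffer = NULL;
    void *doc = NULL;
    int ret = -1;

    if (!fp)
        return -1;

    buffer = malloc(4096);              /* stands in for virFileReadAll() */
    if (!buffer || !fgets(buffer, 4096, fp))
        goto cleanup;

    doc = calloc(1, 64);                /* stands in for xmlNewDoc(NULL) */
    if (!doc)
        goto cleanup;                   /* this is where "buffer" used to leak */

    /* ... parse buffer into doc ... */
    ret = 0;

cleanup:
    free(doc);                          /* free(NULL) is a no-op */
    free(buffer);                       /* the fix: released on every exit path */
    fclose(fp);
    return ret;
}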
From 03c7a44e3d5b2e7c992bebc98fc8c6a7bf63881e Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering@redhat.com>
Date: Fri, 19 Feb 2010 18:03:41 +0100
Subject: [PATCH] virsh.c: avoid leak on OOM error path
* tools/virsh.c (cmdCPUBaseline): Also free "buffer" upon OOM.
---
tools/virsh.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index dd916f3..8756a7a 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -7139,6 +7139,7 @@ cleanup:
return ret;
no_memory:
+ VIR_FREE(buffer);
vshError(ctl, "%s", _("Out of memory"));
ret = FALSE;
return ret;
--
1.7.0.233.g05e1a
[libvirt] [PATCH] Create raw storage files with O_DSYNC (again)
by Jiri Denemark
Recently we introduced the O_DSYNC flag when creating raw storage files to
avoid filling the disk cache with dirty pages. However, the patch got
lost when virStorageBackendCreateRaw was reworked to use
virFileOperation. Let's use O_DSYNC again.
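For reference, a generic POSIX sketch of what O_DSYNC buys here (illustrative code only, not the libvirt implementation; the function name and block size are made up): with the flag set, each write reaches stable storage before returning, so preallocating a large raw file does not pile up dirty pages in the page cache.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Create "path" and fill it with "size" bytes of zeros, forcing each
 * block to disk as it is written instead of letting it sit dirty in cache. */
static int
create_raw_sketch(const char *path, size_t size)
{
    char block[4096];
    size_t done;
    int fd = open(path, O_RDWR | O_CREAT | O_EXCL | O_DSYNC, 0600);

    if (fd < 0)
        return -1;
    memset(block, 0, sizeof(block));
    for (done = 0; done < size; done += sizeof(block)) {
        if (write(fd, block, sizeof(block)) < 0) {
            close(fd);
            return -1;
        }
    }
    return close(fd);
}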
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
src/storage/storage_backend.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/src/storage/storage_backend.c b/src/storage/storage_backend.c
index 8b9ed5d..3742493 100644
--- a/src/storage/storage_backend.c
+++ b/src/storage/storage_backend.c
@@ -356,7 +356,8 @@ virStorageBackendCreateRaw(virConnectPtr conn ATTRIBUTE_UNUSED,
goto cleanup;
}
- if ((createstat = virFileOperation(vol->target.path, O_RDWR | O_CREAT | O_EXCL,
+ if ((createstat = virFileOperation(vol->target.path,
+ O_RDWR | O_CREAT | O_EXCL | O_DSYNC,
vol->target.perms.mode,
vol->target.perms.uid, vol->target.perms.gid,
createRawFileOpHook, &hdata,
--
1.7.0
[libvirt] domain restore race condition
by Laine Stump
As noted in another message, the problem I was seeing is a race
condition in qemudDomainRestore(), not in my modifications to
qemudDomainSave(). Here's some discussion about that problem from IRC,
with a question at the bottom:
> <laine> Does anyone else see a failure of domain restore (immediately
> after domain save? I'm very definitely seeing it on my machine with
> F12+updates testing and libvirt built from unpatched sources.
> <laine> It's very reproduceable - with virsh I do "save domain
> filename", then "restore filename" and it pretty much always gives me
> a black screen. Then I force shutdown the guest (with virt-manager)
> and do "restore filename" again. Tada! It's restored and running!
> [...]
> <danpb> laine: possible race condition
> <danpb> laine: try putting a sleep(10) before the qemuMonitorStartCPUs
> in qemuDomainRestore()
Dan's suggestion *did* eliminate the failures.
> [...]
> <danpb> laine: this sounds like the issue with libvirt prematurely
> starting execution of the CPUs before QEMU has even started restoring
> (or soemthing like that)
> <danpb> laine: search the archives for a mail from Charles Duffy on
> this subject some time ago
>
Here's the BZ filed by Charles Duffy
https://bugzilla.redhat.com/show_bug.cgi?id=537938
It looks like he's dealing with a race condition earlier in the restore,
since his solution was to wait for the migration process to terminate
somewhere inside qemudStartVMDaemon(), rather than waiting until
qemudStartVMDaemon() had finished (which is what the current code does).
Since that wait has already happened by the time of Dan's sleep(10) in
my test, I don't think Charles' patch would help this situation.
So is there something that libvirt can wait on here to ensure proper
start? Or is there a problem in qemu? (I'm still running 0.11. I'll also
try upgrading to 0.12 and see if there are changes in behavior.)
[libvirt] PATCH [0/3] Filesystem pool formatting
by David Allan
I finished these patches this afternoon. I need to do a bit more
testing before I'm comfortable having them committed, but I'm sending
them along so I can get everybody's feedback overnight. In particular,
I'm wondering if my move of the pool unref call is correct.
Dave
[libvirt] Failures in existing qemu domain save/restore.
by Laine Stump
After running a bunch more tests, I am now noticing failures in domain
save/restore *without* my patch (i.e. sources built straight from git).
The failures are not 100% in either case (with or without patches), and
the same saved domain sometimes fails to restore and sometimes succeeds.
Most of the time (but not all), when I try to restore a domain
immediately after a save it fails, but if I then force shutdown of that
failed restore and restore again from the same file, it succeeds.
(Note that this is all to/from a local disk - it's not related to NFS.)
Is there a known problem in domain save/restore? Is anyone else seeing
any failures?
On 02/19/2010 02:30 AM, Laine Stump wrote:
> (This patch requires that you first apply the virFileOperation patches
> that I sent to the list yesterday.)
>
> I'm hoping someone else's keen eye can see what I'm failing to see
> here.
>
> The patch following this message makes domain save "work" on
> root-squash NFS (both the libvirt header and the QEMU state are saved
> to the file), but the files that it saves don't restore properly, even
> when simply saving to a local directory w/o forking - the guest screen
> comes up black, or reboots immediately, or some other such thing. I've
> looked at the data files and the code, and can't see what difference
> is causing this - at least the header written by libvirt seems to be
> the same, and the part written by qemu starts in the same place and
> looks similar.
>
> If anyone's got an idea, please let me know! I'd like to get this
> figured out and a working patch submitted before the day is done.
>
[libvirt] [PATCH] docs: add 3 missing spaces
by Dan Kenigsberg
From: danken <danken@redhat.com>
---
docs/architecture.html.in | 2 +-
docs/auth.html.in | 2 +-
docs/logging.html.in | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/architecture.html.in b/docs/architecture.html.in
index 8eb6458..f5cc520 100644
--- a/docs/architecture.html.in
+++ b/docs/architecture.html.in
@@ -83,7 +83,7 @@ drivers present in driver.h:</p>
<li>xen_internal: provide the implementation of the functions possible via
direct hypervisor access</li>
<li>proxy_internal: provide read-only Xen access via a proxy, the proxy code
- is in the <code>proxy/</code>directory.</li>
+ is in the <code>proxy/</code> directory.</li>
<li>xm_internal: provide support for Xen defined but not running
domains.</li>
<li>qemu_internal: implement the driver functions for QEmu and
diff --git a/docs/auth.html.in b/docs/auth.html.in
index 3ca40c3..ab6c3e9 100644
--- a/docs/auth.html.in
+++ b/docs/auth.html.in
@@ -72,7 +72,7 @@ available. The two libvirt daemon actions available are named <code>org.libvirt.
for the RO socket, and <code>org.libvirt.unix.manage</code> for the RW socket.
</p>
<p>
-As an example, to allow a user <code>fred</code>full access to the RW socket,
+As an example, to allow a user <code>fred</code> full access to the RW socket,
while requiring <code>joe</code> to authenticate with the admin password,
would require adding the following snippet to <code>PolicyKit.conf</code>.
</p>
diff --git a/docs/logging.html.in b/docs/logging.html.in
index 63a0b05..f4a5878 100644
--- a/docs/logging.html.in
+++ b/docs/logging.html.in
@@ -96,7 +96,7 @@
<li><code>x:stderr</code> output goes to stderr</li>
<li><code>x:syslog:name</code> use syslog for the output and use the
given <code>name</code> as the ident</li>
- <li><code>x:file:file_path</code>output to a file, with the given
+ <li><code>x:file:file_path</code> output to a file, with the given
filepath</li>
</ul>
<p>In all cases the x prefix is the minimal level, acting as a filter:</p>
--
1.6.6
[libvirt] [RFC][PATCH 2/2] Automatically recreate macvtap interface and reconnect to qemu
by Ed Swierk
This patch implements a thread for each VM that iterates through the
list of macvtap interfaces, attempting to recreate each one and
reconnect it to qemu. See the PATCH 0 email for discussion.
---
Index: libvirt-0.7.6/src/qemu/qemu_driver.c
===================================================================
--- libvirt-0.7.6.orig/src/qemu/qemu_driver.c
+++ libvirt-0.7.6/src/qemu/qemu_driver.c
@@ -76,6 +76,7 @@
#include "xml.h"
#include "cpu/cpu.h"
#include "macvtap.h"
+#include "threads.h"
#define VIR_FROM_THIS VIR_FROM_QEMU
@@ -96,6 +97,8 @@ struct _qemuDomainObjPrivate {
int *vcpupids;
qemuDomainPCIAddressSetPtr pciaddrs;
+
+ pthread_t reconnectNetBackendsThread;
};
static int qemudShutdown(void);
@@ -143,6 +146,8 @@ static int qemudDomainDisconnectNetBacke
virDomainObjPtr vm,
virDomainNetDefPtr net);
+static void *qemudDomainReconnectNetBackends(void *opaque);
+
static struct qemud_driver *qemu_driver = NULL;
@@ -2794,6 +2799,10 @@ static int qemudStartVMDaemon(virConnect
if (virDomainSaveStatus(conn, driver->caps, driver->stateDir, vm) < 0)
goto abort;
+ if (pthread_create(&priv->reconnectNetBackendsThread, NULL,
+ qemudDomainReconnectNetBackends, vm->def->uuid))
+ goto abort;
+
return 0;
cleanup:
@@ -5627,7 +5636,7 @@ static int qemudDomainConnectNetBackend(
unsigned int qemuCmdFlags)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
- char *tapfd_name = NULL;
+ char *hostnet_name = NULL;
int tapfd = -1;
char *netstr = NULL;
int ret = -1;
@@ -5664,27 +5673,30 @@ static int qemudDomainConnectNetBackend(
if (tapfd < 0)
return -1;
- if (virAsprintf(&tapfd_name, "fd-%s", net->info.alias) < 0) {
+ if (virAsprintf(&hostnet_name, "host%s", net->info.alias) < 0) {
virReportOOMError(conn);
close(tapfd);
return -1;
}
if (!(netstr = qemuBuildHostNetStr(conn, net, ' ',
- vlan, tapfd_name))) {
- VIR_FREE(tapfd_name);
+ vlan, hostnet_name))) {
+ VIR_FREE(hostnet_name);
close(tapfd);
return -1;
}
qemuDomainObjEnterMonitorWithDriver(driver, vm);
- if (qemuMonitorSendFileHandle(priv->mon, tapfd_name, tapfd) < 0)
+ if (qemuMonitorRemoveHostNetwork(priv->mon, vlan, hostnet_name) < 0)
+ VIR_INFO0("Did not remove existing host network");
+
+ if (qemuMonitorSendFileHandle(priv->mon, hostnet_name, tapfd) < 0)
goto out;
if (qemuMonitorAddHostNetwork(priv->mon, netstr) < 0) {
- if (qemuMonitorCloseFileHandle(priv->mon, tapfd_name) < 0)
- VIR_WARN(_("Failed to close tapfd with '%s'"), tapfd_name);
+ if (qemuMonitorCloseFileHandle(priv->mon, hostnet_name) < 0)
+ VIR_WARN(_("Failed to close tapfd with '%s'"), hostnet_name);
goto out;
}
@@ -5693,7 +5705,7 @@ static int qemudDomainConnectNetBackend(
out:
qemuDomainObjExitMonitorWithDriver(driver, vm);
VIR_FREE(netstr);
- VIR_FREE(tapfd_name);
+ VIR_FREE(hostnet_name);
close(tapfd);
return ret;
}
@@ -8549,6 +8561,72 @@ out:
return ret;
}
+static void *qemudDomainReconnectNetBackends(void *opaque)
+{
+ virConnectPtr conn = virConnectOpen("qemu:///system");
+ struct qemud_driver *driver = conn->privateData;
+ const unsigned char *uuid = opaque;
+
+ while (1) {
+ virDomainObjPtr vm;
+ unsigned int qemuCmdFlags;
+ int i;
+
+ usleep(1000000);
+
+ VIR_ERROR(_("qemuDomainReconnectNetBackends (%p %p)"),
+ conn, driver);
+
+ qemuDriverLock(driver);
+ vm = virDomainFindByUUID(&driver->domains, uuid);
+ if (!vm) {
+ VIR_ERROR0("qemuDomainReconnectNetBackends done");
+ qemuDriverUnlock(driver);
+ break;
+ }
+
+ VIR_ERROR(_("qemuDomainReconnectNetBackends (%s)"),
+ vm->def->name);
+
+ if (qemudExtractVersionInfo(vm->def->emulator,
+ NULL,
+ &qemuCmdFlags) < 0)
+ goto cleanup;
+
+ if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ goto cleanup;
+
+ if (!virDomainObjIsActive(vm))
+ goto endjob;
+
+ for (i = 0 ; i < vm->def->nnets ; i++) {
+ virDomainNetDefPtr net = vm->def->nets[i];
+
+ VIR_ERROR(_("qemuDomainReconnectNetBackends (%s)"), net->ifname);
+
+ if (net->type != VIR_DOMAIN_NET_TYPE_DIRECT)
+ continue;
+
+ if (qemudDomainConnectNetBackend(conn, driver, vm,
+ net, qemuCmdFlags) < 0)
+ VIR_ERROR(_("qemudDomainConnectNetBackend failed (%s)"),
+ net->ifname);
+ }
+
+ endjob:
+ if (vm &&
+ qemuDomainObjEndJob(vm) == 0)
+ vm = NULL;
+
+ cleanup:
+ if (vm)
+ virDomainObjUnlock(vm);
+ qemuDriverUnlock(driver);
+ }
+
+ return NULL;
+}
+
static int
qemuCPUCompare(virConnectPtr conn,
const char *xmlDesc,
[libvirt] [RFC][PATCH 1/2] Separate backend setup from NIC device add/remove
by Ed Swierk
This patch moves backend setup code from qemudDomainAttachNetDevice and
qemudDomainDetachNetDevice into separate functions. While the intent is
to allow the new functions to be called separately without affecting VM
NIC devices, this is arguably a worthwhile refactoring on its own.
Currently it's based on 0.7.6 with some macvtap patches. I can rebase
it against git if there's interest.
Signed-off-by: Ed Swierk <eswierk@aristanetworks.com>
---
Move the code that creates and destroys the network backend and
connects it to qemu out of the code that creates the virtual NIC.
Index: libvirt-0.7.6/src/qemu/qemu_driver.c
===================================================================
--- libvirt-0.7.6.orig/src/qemu/qemu_driver.c
+++ libvirt-0.7.6/src/qemu/qemu_driver.c
@@ -132,6 +132,17 @@ static int qemuDetectVcpuPIDs(virConnect
static int qemuUpdateActivePciHostdevs(struct qemud_driver *driver,
virDomainDefPtr def);
+static int qemudDomainConnectNetBackend(virConnectPtr conn,
+ struct qemud_driver *driver,
+ virDomainObjPtr vm,
+ virDomainNetDefPtr net,
+ unsigned int qemuCmdFlags);
+
+static int qemudDomainDisconnectNetBackend(virConnectPtr conn,
+ struct qemud_driver *driver,
+ virDomainObjPtr vm,
+ virDomainNetDefPtr net);
+
static struct qemud_driver *qemu_driver = NULL;
@@ -5609,19 +5620,17 @@ error:
}
-static int qemudDomainAttachNetDevice(virConnectPtr conn,
- struct qemud_driver *driver,
- virDomainObjPtr vm,
- virDomainNetDefPtr net,
- unsigned int qemuCmdFlags)
+static int qemudDomainConnectNetBackend(virConnectPtr conn,
+ struct qemud_driver *driver,
+ virDomainObjPtr vm,
+ virDomainNetDefPtr net,
+ unsigned int qemuCmdFlags)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
char *tapfd_name = NULL;
int tapfd = -1;
- char *nicstr = NULL;
char *netstr = NULL;
int ret = -1;
- virDomainDevicePCIAddress guestAddr;
int vlan;
if (!(qemuCmdFlags & QEMUD_CMD_FLAG_HOST_NET_ADD)) {
@@ -5630,142 +5639,132 @@ static int qemudDomainAttachNetDevice(vi
return -1;
}
+ if (priv->monConfig->type != VIR_DOMAIN_CHR_TYPE_UNIX) {
+ qemudReportError(conn, dom, NULL, VIR_ERR_NO_SUPPORT,
+ _("network device type '%s' cannot be attached: "
+ "qemu is not using a unix socket monitor"),
+ virDomainNetTypeToString(net->type));
+ return -1;
+ }
+
+ if ((vlan = qemuDomainNetVLAN(net)) < 0) {
+ qemudReportError(conn, NULL, NULL, VIR_ERR_NO_SUPPORT, "%s",
+ _("Unable to connect network backend without vlan"));
+ goto out;
+ }
+
if (net->type == VIR_DOMAIN_NET_TYPE_BRIDGE ||
net->type == VIR_DOMAIN_NET_TYPE_NETWORK) {
- if (priv->monConfig->type != VIR_DOMAIN_CHR_TYPE_UNIX) {
- qemudReportError(conn, dom, NULL, VIR_ERR_NO_SUPPORT,
- _("network device type '%s' cannot be attached: "
- "qemu is not using a unix socket monitor"),
- virDomainNetTypeToString(net->type));
- return -1;
- }
-
- if ((tapfd = qemudNetworkIfaceConnect(conn, driver, net, qemuCmdFlags)) < 0)
- return -1;
+ tapfd = qemudNetworkIfaceConnect(conn, driver, net, qemuCmdFlags);
} else if (net->type == VIR_DOMAIN_NET_TYPE_DIRECT) {
- if (priv->monConfig->type != VIR_DOMAIN_CHR_TYPE_UNIX) {
- qemudReportError(conn, dom, NULL, VIR_ERR_NO_SUPPORT,
- _("direct device type '%s' cannot be attached: "
- "qemu is not using a unix socket monitor"),
- virDomainNetTypeToString(net->type));
- return -1;
- }
+ tapfd = qemudPhysIfaceConnect(conn, net,
+ net->data.direct.linkdev,
+ net->data.direct.mode);
+ }
+ if (tapfd < 0)
+ return -1;
- if ((tapfd = qemudPhysIfaceConnect(conn, net,
- net->data.direct.linkdev,
- net->data.direct.mode)) < 0)
- return -1;
+ if (virAsprintf(&tapfd_name, "fd-%s", net->info.alias) < 0) {
+ virReportOOMError(conn);
+ close(tapfd);
+ return -1;
}
- if (VIR_REALLOC_N(vm->def->nets, vm->def->nnets+1) < 0)
- goto no_memory;
+ if (!(netstr = qemuBuildHostNetStr(conn, net, ' ',
+ vlan, tapfd_name))) {
+ VIR_FREE(tapfd_name);
+ close(tapfd);
+ return -1;
+ }
+
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
+
+ if (qemuMonitorSendFileHandle(priv->mon, tapfd_name, tapfd) < 0)
+ goto out;
+
+ if (qemuMonitorAddHostNetwork(priv->mon, netstr) < 0) {
+ if (qemuMonitorCloseFileHandle(priv->mon, tapfd_name) < 0)
+ VIR_WARN(_("Failed to close tapfd with '%s'"), tapfd_name);
+ goto out;
+ }
+
+ ret = 0;
+
+out:
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ VIR_FREE(netstr);
+ VIR_FREE(tapfd_name);
+ close(tapfd);
+ return ret;
+}
+
+static int qemudDomainAttachNetDevice(virConnectPtr conn,
+ struct qemud_driver *driver,
+ virDomainObjPtr vm,
+ virDomainNetDefPtr net,
+ unsigned int qemuCmdFlags)
+{
+ qemuDomainObjPrivatePtr priv = vm->privateData;
+ char *nicstr = NULL;
+ int ret = -1;
+ virDomainDevicePCIAddress guestAddr;
+ int vlan;
+
+ if (VIR_REALLOC_N(vm->def->nets, vm->def->nnets+1) < 0) {
+ virReportOOMError(conn);
+ return -1;
+ }
if ((qemuCmdFlags & QEMUD_CMD_FLAG_NET_NAME) ||
(qemuCmdFlags & QEMUD_CMD_FLAG_DEVICE)) {
if (qemuAssignDeviceNetAlias(vm->def, net, -1) < 0)
- goto cleanup;
+ goto out;
}
if ((qemuCmdFlags & QEMUD_CMD_FLAG_DEVICE) &&
qemuDomainPCIAddressEnsureAddr(priv->pciaddrs, &net->info) < 0)
- goto cleanup;
-
- vlan = qemuDomainNetVLAN(net);
+ goto out;
- if (vlan < 0) {
+ if ((vlan = qemuDomainNetVLAN(net)) < 0) {
qemudReportError(conn, NULL, NULL, VIR_ERR_NO_SUPPORT, "%s",
_("Unable to attach network devices without vlan"));
- goto cleanup;
- }
-
- if (tapfd != -1) {
- if (virAsprintf(&tapfd_name, "fd-%s", net->info.alias) < 0)
- goto no_memory;
-
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- if (qemuMonitorSendFileHandle(priv->mon, tapfd_name, tapfd) < 0) {
- qemuDomainObjExitMonitorWithDriver(driver, vm);
- goto cleanup;
- }
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ goto out;
}
- if (!(netstr = qemuBuildHostNetStr(conn, net, ' ',
- vlan, tapfd_name)))
- goto try_tapfd_close;
-
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- if (qemuMonitorAddHostNetwork(priv->mon, netstr) < 0) {
- qemuDomainObjExitMonitorWithDriver(driver, vm);
- goto try_tapfd_close;
+ if (qemudDomainConnectNetBackend(conn, driver, vm, net, qemuCmdFlags) < 0) {
+ VIR_WARN0(_("Failed to connect network backend"));
+ goto out;
}
- qemuDomainObjExitMonitorWithDriver(driver, vm);
-
- if (tapfd != -1)
- close(tapfd);
- tapfd = -1;
if (!(nicstr = qemuBuildNicStr(conn, net, NULL, vlan)))
- goto try_remove;
+ goto out_remove;
qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorAddPCINetwork(priv->mon, nicstr,
&guestAddr) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
- goto try_remove;
+ goto out_remove;
}
net->info.type = VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI;
memcpy(&net->info.addr.pci, &guestAddr, sizeof(guestAddr));
qemuDomainObjExitMonitorWithDriver(driver, vm);
+ vm->def->nets[vm->def->nnets++] = net;
ret = 0;
+ goto out;
- vm->def->nets[vm->def->nnets++] = net;
+out_remove:
+ if (qemudDomainDisconnectNetBackend(conn, driver, vm, net) < 0)
+ VIR_WARN0("Unable to disconnect network backend");
-cleanup:
+out:
if ((ret != 0) &&
(qemuCmdFlags & QEMUD_CMD_FLAG_DEVICE) &&
(net->info.type == VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI) &&
qemuDomainPCIAddressReleaseAddr(priv->pciaddrs, &net->info) < 0)
VIR_WARN0("Unable to release PCI address on NIC");
-
VIR_FREE(nicstr);
- VIR_FREE(netstr);
- VIR_FREE(tapfd_name);
- if (tapfd != -1)
- close(tapfd);
-
return ret;
-
-try_remove:
- if (vlan < 0) {
- VIR_WARN0(_("Unable to remove network backend"));
- } else {
- char *hostnet_name;
- if (virAsprintf(&hostnet_name, "host%s", net->info.alias) < 0)
- goto no_memory;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- if (qemuMonitorRemoveHostNetwork(priv->mon, vlan, hostnet_name) < 0)
- VIR_WARN(_("Failed to remove network backend for vlan %d, net %s"),
- vlan, hostnet_name);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
- VIR_FREE(hostnet_name);
- }
- goto cleanup;
-
-try_tapfd_close:
- if (tapfd_name) {
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- if (qemuMonitorCloseFileHandle(priv->mon, tapfd_name) < 0)
- VIR_WARN(_("Failed to close tapfd with '%s'"), tapfd_name);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
- }
-
- goto cleanup;
-
-no_memory:
- virReportOOMError(conn);
- goto cleanup;
}
@@ -6207,82 +6206,95 @@ cleanup:
return ret;
}
-static int
-qemudDomainDetachNetDevice(virConnectPtr conn,
- struct qemud_driver *driver,
- virDomainObjPtr vm,
- virDomainDeviceDefPtr dev)
+static int qemudDomainDisconnectNetBackend(virConnectPtr conn,
+ struct qemud_driver *driver,
+ virDomainObjPtr vm,
+ virDomainNetDefPtr net)
{
- int i, ret = -1;
- virDomainNetDefPtr detach = NULL;
qemuDomainObjPrivatePtr priv = vm->privateData;
int vlan;
char *hostnet_name = NULL;
+ if ((vlan = qemuDomainNetVLAN(net)) < 0) {
+ qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
+ "%s", _("unable to determine original VLAN"));
+ return -1;
+ }
+
+ if (virAsprintf(&hostnet_name, "host%s", net->info.alias) < 0) {
+ virReportOOMError(NULL);
+ return -1;
+ }
+
+ qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (qemuMonitorRemoveHostNetwork(priv->mon, vlan, hostnet_name) < 0)
+ VIR_WARN0("Failed to remove host network");
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+
+#if WITH_MACVTAP
+ if (net->type == VIR_DOMAIN_NET_TYPE_DIRECT) {
+ if (net->ifname)
+ delMacvtap(conn, net->ifname);
+ }
+#endif
+
+ VIR_FREE(hostnet_name);
+ return 0;
+}
+
+static int qemudDomainDetachNetDevice(virConnectPtr conn,
+ struct qemud_driver *driver,
+ virDomainObjPtr vm,
+ virDomainDeviceDefPtr dev)
+{
+ int i = -1;
+ virDomainNetDefPtr net = NULL;
+ qemuDomainObjPrivatePtr priv = vm->privateData;
+
for (i = 0 ; i < vm->def->nnets ; i++) {
- virDomainNetDefPtr net = vm->def->nets[i];
+ virDomainNetDefPtr n = vm->def->nets[i];
- if (!memcmp(net->mac, dev->data.net->mac, sizeof(net->mac))) {
- detach = net;
+ if (!memcmp(n->mac, dev->data.net->mac, sizeof(n->mac))) {
+ net = n;
break;
}
}
- if (!detach) {
+ if (!net) {
qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
_("network device %02x:%02x:%02x:%02x:%02x:%02x not found"),
dev->data.net->mac[0], dev->data.net->mac[1],
dev->data.net->mac[2], dev->data.net->mac[3],
dev->data.net->mac[4], dev->data.net->mac[5]);
- goto cleanup;
+ return -1;
}
- if (!virDomainDeviceAddressIsValid(&detach->info,
+ if (!virDomainDeviceAddressIsValid(&net->info,
VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI)) {
qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
"%s", _("device cannot be detached without a PCI address"));
- goto cleanup;
- }
-
- if ((vlan = qemuDomainNetVLAN(detach)) < 0) {
- qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
- "%s", _("unable to determine original VLAN"));
- goto cleanup;
- }
-
- if (virAsprintf(&hostnet_name, "host%s", detach->info.alias) < 0) {
- virReportOOMError(NULL);
- goto cleanup;
+ return -1;
}
qemuDomainObjEnterMonitorWithDriver(driver, vm);
if (qemuMonitorRemovePCIDevice(priv->mon,
- &detach->info.addr.pci) < 0) {
- qemuDomainObjExitMonitorWithDriver(driver, vm);
- goto cleanup;
- }
-
- if (qemuMonitorRemoveHostNetwork(priv->mon, vlan, hostnet_name) < 0) {
+ &net->info.addr.pci) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
- goto cleanup;
+ return -1;
}
qemuDomainObjExitMonitorWithDriver(driver, vm);
-#if WITH_MACVTAP
- if (detach->type == VIR_DOMAIN_NET_TYPE_DIRECT) {
- if (detach->ifname)
- delMacvtap(conn, detach->ifname);
- }
-#endif
+ if (qemudDomainDisconnectNetBackend(conn, driver, vm, net) < 0)
+ VIR_WARN0("Failed to disconnect network backend");
- if ((driver->macFilter) && (detach->ifname != NULL)) {
+ if ((driver->macFilter) && (net->ifname != NULL)) {
if ((errno = networkDisallowMacOnPort(conn,
driver,
- detach->ifname,
- detach->mac))) {
+ net->ifname,
+ net->mac))) {
virReportSystemError(conn, errno,
_("failed to remove ebtables rule on '%s'"),
- detach->ifname);
+ net->ifname);
}
}
@@ -6299,13 +6311,9 @@ qemudDomainDetachNetDevice(virConnectPtr
VIR_FREE(vm->def->nets);
vm->def->nnets = 0;
}
- virDomainNetDefFree(detach);
+ virDomainNetDefFree(net);
- ret = 0;
-
-cleanup:
- VIR_FREE(hostnet_name);
- return ret;
+ return 0;
}
static int qemudDomainDetachHostPciDevice(virConnectPtr conn,
[libvirt] [RFC][PATCH 0/2] Dynamic backend setup for macvtap interfaces
by Ed Swierk
Using a bridge to connect a qemu NIC to a host interface offers a fair
amount of flexibility to reconfigure the host without restarting the VM.
For example, if the bridge connects host interface eth0 to the qemu tap0
interface, eth0 can be hot-removed and hot-plugged without affecting the
VM. Similarly, if the bridge connects host VLAN interface vlan0 to the
qemu tap0 interface, the admin can easily replace vlan0 with vlan1
without the VM noticing.
With the macvtap driver instead of a kernel bridge, the host interface
is much more tightly tied to the VM. Qemu communicates with the macvtap
interface through a file descriptor, and the macvtap interface is bound
permanently to a specific host interface when it is first created.
What's more, if the underlying host interface disappears, the macvtap
interface vanishes along with it, leaving the VM holding a file
descriptor for a deleted file.
To avoid race conditions during system startup, I would like libvirt to
allow starting up the VM with a NIC even if the underlying host
interface doesn't yet exist, deferring creation of the macvtap interface
(analogous to starting up the VM with a tap interface bound to an orphan
bridge). To support adding and removing a host interface without
restarting the VM, I would like libvirt to react to the (re)appearance
of the underlying host interface, creating a new macvtap interface and
passing the new fd to qemu to reconnect to the NIC.
(It would also be nice if libvirt allowed the user to change which
underlying host interface the qemu NIC is connected to. I'm ignoring
this issue for now, except to note that implementing the above features
should make this easier.)
The libvirt API already supports domainAttachDevice and
domainDetachDevice to add or remove an interface while the VM is
running. In the qemu implementation, these commands add or remove the
VM NIC device as well as reconfigure the host side. This works only
if the OS and application running in the VM can handle PCI hotplug and
dynamically reconfigure its network. I would like to isolate the VM
from changes to the host network setup, whether you use macvtap or a
bridge.
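For orientation, hot-plugging an interface through the public libvirt API looks roughly like the sketch below. The connection URI, the domain lookup and the 'direct' interface XML are illustrative assumptions for the example, not something taken from these patches.

#include <libvirt/libvirt.h>

/* Attach a macvtap ("direct") NIC to a running domain via the public API.
 * Error reporting is omitted to keep the sketch short. */
static int
attach_direct_nic(const char *domname)
{
    const char *xml =
        "<interface type='direct'>"
        "  <source dev='eth0' mode='vepa'/>"
        "  <model type='virtio'/>"
        "</interface>";
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom;
    int ret = -1;

    if (!conn)
        return -1;
    if ((dom = virDomainLookupByName(conn, domname))) {
        ret = virDomainAttachDevice(dom, xml);  /* adds the NIC and sets up the host backend */
        virDomainFree(dom);
    }
    virConnectClose(conn);
    return ret;
}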
The changes I think are needed to implement this include:
1. Refactor qemudDomainAttachNetDevice/qemudDomainDetachNetDevice, which
currently handle both backend (host) setup and adding/removing the VM
NIC device; move the backend setup code into separate functions that can
be called separately without affecting VM devices.
2. Implement a thread or task that watches for changes to the underlying
host interface for each configured macvtap interface, and reacts by
invoking the appropriate backend setup code.
3. Change qemudBuildCommandLine to defer backend setup if qemu supports
the necessary features for doing it later (e.g. the host_net_add monitor
command).
4. Implement appropriate error handling and reporting, and any necessary
changes to the configuration schema.
The following patches are a partial implementation of the above as a
proof of concept.
Patch 1 implements change (1) above, moving the backend setup code to
new functions
qemudDomainConnectNetBackend/qemudDomainDisconnectNetBackend, and
calling these functions from the existing
qemudDomainAttachNetDevice/qemudDomainDetachNetDevice. I think this
change is useful on its own: it breaks up two monster functions into
more manageable pieces, and eliminates some code duplication (e.g. the
try_remove clause at the end of qemudDomainAttachNetDevice).
Patch 2 is a godawful hack roughly implementing change (2) above (did I
mention that this is a proof of concept?). It spawns a thread that
simply tries reconnecting the backend of each macvtap interface once a
second. As long as the interface is already up, the reconnection fails.
If the macvtap interface goes away because the underlying host interface
disappears, the reconnection fails until the host interface reappears.
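Stripped of the libvirt plumbing, the thread in Patch 2 amounts to something like the following model. The struct and callback here are invented for the sketch; the real code walks vm->def->nets and talks to the qemu monitor.

#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

/* Simplified model of the per-VM reconnect thread: wake up once a second
 * and retry the backend reconnection until the VM goes away. */
struct vm_watch {
    volatile bool active;              /* cleared when the VM is destroyed */
    int (*reconnect)(void *opaque);    /* recreate macvtap + hand the fd to qemu */
    void *opaque;
};

static void *
reconnect_loop(void *arg)
{
    struct vm_watch *w = arg;

    while (w->active) {
        sleep(1);                          /* poll once a second, as in the RFC */
        (void) w->reconnect(w->opaque);    /* failure just means "try again next tick" */
    }
    return NULL;
}

/* usage: pthread_create(&tid, NULL, reconnect_loop, &watch); */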
I ran into two major issues while implementing change (2):
- Can we use the existing virEvent functions to invoke the reconnection
process, triggered either by a timer or by an event from the host? It
seems like this ought to work, but it appears that communication between
libvirt and the qemu monitor relies on an event, and since all events
run in the same thread, there's no way for an event to call the monitor.
- Should the reconnection process use udev or hal to get notifications,
or leverage the node device code which itself uses udev or hal?
Currently there doesn't appear to be a way to get notifications of
changes to node devices; if there were, we'd still need to address the
threading issue. If we use node devices, what changes to the
configuration schema would be needed to associate a macvtap interface
with the underlying node device?
I didn't attempt to implement changes (3) or (4) above but would welcome
input on those items as well.
--Ed
[libvirt] [PATCH 0/1] A couple of problems with the daemon-conf test
by David Allan
Here's a patch fixing two problems I found with the daemon-conf test that prevented the test from passing when the system libvirt daemon was running on my machine.
The first change only affects running the daemon-conf script directly, i.e., executing ./daemon-conf, which always exits with failure because the default config file contains a line that the test believes should not be there. The invocation via make check supplies a different config file that's not subject to that problem.
The second problem is that the corrupt config tests don't supply the pid-file argument to libvirtd, so on my system it was attempting to use the same pidfile as the running system daemon and failing all the tests except the valid config test. Supplying the --pid-file argument for the corrupt config tests, as is already done for the valid config tests, fixes the problem.
Dave