[libvirt] [PATCH v3] virsh: Increase device-detach intelligence
by Michal Privoznik
From: Michal Prívozník <mprivozn(a)redhat.com>
Up to now users have had to supply a full XML description on input when
detaching a device. If they omitted something, it led to unclear
error messages (e.g. a generated MAC address was not found).
With this patch users can specify only as much information as
identifies one device sufficiently precisely. The remaining information
is completed from the domain.
---
diff to v2:
-rebase to current HEAD
diff to v1:
-rebase to current HEAD
-add a little bit comments
tools/virsh.c | 266 +++++++++++++++++++++++++++++++++++++++++++++++++++++----
1 files changed, 250 insertions(+), 16 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index 1ad84a2..aae8e4e 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -10351,6 +10351,226 @@ cmdAttachDevice(vshControl *ctl, const vshCmd *cmd)
return true;
}
+/**
+ * Check if n1 is a superset of n2, meaning n1 contains at least all the
+ * elements and attributes of n2, including children.
+ * @n1 first node
+ * @n2 second node
+ * Return 1 in case n1 covers n2, 0 otherwise.
+ */
+static int
+vshNodeIsSuperset(xmlNodePtr n1, xmlNodePtr n2) {
+ xmlNodePtr child1, child2;
+ xmlAttrPtr attr1, attr2;
+ int found;
+
+ if (!n1 && !n2)
+ return 1;
+
+ if (!n1 || !n2)
+ return 0;
+
+ if (!xmlStrEqual(n1->name, n2->name))
+ return 0;
+
+ /* Iterate over n2 attributes and check if n1 contains them */
+ attr2 = n2->properties;
+ while (attr2) {
+ if (attr2->type == XML_ATTRIBUTE_NODE) {
+ attr1 = n1->properties;
+ found = 0;
+ while (attr1) {
+ if (xmlStrEqual(attr1->name, attr2->name)) {
+ found = 1;
+ break;
+ }
+ attr1 = attr1->next;
+ }
+ if (!found)
+ return 0;
+ if (!xmlStrEqual(BAD_CAST virXMLPropString(n1, (const char *) attr1->name),
+ BAD_CAST virXMLPropString(n2, (const char *) attr2->name)))
+ return 0;
+ }
+ attr2 = attr2->next;
+ }
+
+ /* and now iterate over n2 children */
+ child2 = n2->children;
+ while (child2) {
+ if (child2->type == XML_ELEMENT_NODE) {
+ child1 = n1->children;
+ found = 0;
+ while (child1) {
+ if (child1->type == XML_ELEMENT_NODE &&
+ xmlStrEqual(child1->name, child2->name)) {
+ found = 1;
+ break;
+ }
+ child1 = child1->next;
+ }
+ if (!found)
+ return 0;
+ if (!vshNodeIsSuperset(child1, child2))
+ return 0;
+ }
+ child2 = child2->next;
+ }
+
+ return 1;
+}
+
+/**
+ * Given a domain and a (possibly incomplete) device XML specification, try
+ * to find the matching device in the domain and fill in the missing parts.
+ * This is only possible when the given device XML is precise enough to
+ * address exactly one device.
+ * @ctl vshControl for error reporting
+ * @dom domain
+ * @oldXML device XML before completion
+ * @newXML device XML after completion
+ * Returns -2 when no such device exists in the domain, -3 when the given
+ * XML selects more than one device (is too ambiguous), 0 in case of
+ * success. Otherwise returns -1. @newXML is touched only in case of success.
+ */
+static int
+vshCompleteXMLFromDomain(vshControl *ctl, virDomainPtr dom, char *oldXML,
+ char **newXML) {
+ int funcRet = -1;
+ char *domXML = NULL;
+ xmlDocPtr domDoc = NULL, devDoc = NULL;
+ xmlNodePtr node = NULL;
+ xmlXPathContextPtr domCtxt = NULL, devCtxt = NULL;
+ xmlNodePtr *devices = NULL;
+ xmlSaveCtxtPtr sctxt = NULL;
+ int devices_size;
+ char *xpath;
+ xmlBufferPtr buf = NULL;
+
+ if (!(domXML = virDomainGetXMLDesc(dom, 0))) {
+ vshError(ctl, _("couldn't get XML description of domain %s"),
+ virDomainGetName(dom));
+ goto cleanup;
+ }
+
+ if (!(domDoc = xmlReadDoc(BAD_CAST domXML, "domain.xml", NULL,
+ XML_PARSE_NOENT | XML_PARSE_NONET |
+ XML_PARSE_NOERROR | XML_PARSE_NOWARNING))) {
+ vshError(ctl, "%s", _("could not parse domain XML"));
+ goto cleanup;
+ }
+
+ if (!(devDoc = xmlReadDoc(BAD_CAST oldXML, "device.xml", NULL,
+ XML_PARSE_NOENT | XML_PARSE_NONET |
+ XML_PARSE_NOERROR | XML_PARSE_NOWARNING))) {
+ vshError(ctl, "%s", _("could not parse device XML"));
+ goto cleanup;
+ }
+
+ node = xmlDocGetRootElement(domDoc);
+ if (!node) {
+ vshError(ctl, "%s", _("failed to get domain root element"));
+ goto cleanup;
+ }
+
+ domCtxt = xmlXPathNewContext(domDoc);
+ if (!domCtxt) {
+ vshError(ctl, "%s", _("failed to create context on domain XML"));
+ goto cleanup;
+ }
+ domCtxt->node = node;
+
+ node = xmlDocGetRootElement(devDoc);
+ if (!node) {
+ vshError(ctl, "%s", _("failed to get device root element"));
+ goto cleanup;
+ }
+
+ devCtxt = xmlXPathNewContext(devDoc);
+ if (!devCtxt) {
+ vshError(ctl, "%s", _("failed to create context on device XML"));
+ goto cleanup;
+ }
+ devCtxt->node = node;
+
+ buf = xmlBufferCreate();
+ if (!buf) {
+ vshError(ctl, "%s", _("out of memory"));
+ goto cleanup;
+ }
+
+ xmlBufferCat(buf, BAD_CAST "/domain/devices/");
+ xmlBufferCat(buf, node->name);
+ xpath = (char *) xmlBufferContent(buf);
+ /* Get all possible devices */
+ devices_size = virXPathNodeSet(xpath, domCtxt, &devices);
+ xmlBufferEmpty(buf);
+
+ if (devices_size < 0) {
+ /* error */
+ vshError(ctl, "%s", _("error when selecting nodes"));
+ goto cleanup;
+ } else if (devices_size == 0) {
+ /* no such device */
+ funcRet = -2;
+ goto cleanup;
+ }
+
+ /* and refine */
+ int i = 0;
+ while (i < devices_size) {
+ if (!vshNodeIsSuperset(devices[i], node)) {
+ if (devices_size == 1) {
+ VIR_FREE(devices);
+ devices_size = 0;
+ } else {
+ memmove(devices + i, devices + i + 1,
+ sizeof(*devices) * (devices_size-i-1));
+ devices_size--;
+ if (VIR_REALLOC_N(devices, devices_size) < 0) {
+ /* ignore, harmless */
+ }
+ }
+ } else {
+ i++;
+ }
+ }
+
+ if (!devices_size) {
+ /* no such device */
+ funcRet = -2;
+ goto cleanup;
+ } else if (devices_size > 1) {
+ /* ambiguous */
+ funcRet = -3;
+ goto cleanup;
+ }
+
+ if (newXML) {
+ sctxt = xmlSaveToBuffer(buf, NULL, 0);
+ if (!sctxt) {
+ vshError(ctl, "%s", _("failed to create document saving context"));
+ goto cleanup;
+ }
+
+ xmlSaveTree(sctxt, devices[0]);
+ xmlSaveClose(sctxt);
+ *newXML = (char *) xmlBufferContent(buf);
+ buf->content = NULL;
+ }
+
+ funcRet = 0;
+
+cleanup:
+ xmlBufferFree(buf);
+ VIR_FREE(devices);
+ xmlXPathFreeContext(devCtxt);
+ xmlXPathFreeContext(domCtxt);
+ xmlFreeDoc(devDoc);
+ xmlFreeDoc(domDoc);
+ VIR_FREE(domXML);
+ return funcRet;
+}
/*
* "detach-device" command
@@ -10371,10 +10591,11 @@ static const vshCmdOptDef opts_detach_device[] = {
static bool
cmdDetachDevice(vshControl *ctl, const vshCmd *cmd)
{
- virDomainPtr dom;
+ virDomainPtr dom = NULL;
const char *from = NULL;
- char *buffer;
+ char *buffer = NULL, *new_buffer = NULL;
int ret;
+ bool funcRet = false;
unsigned int flags;
if (!vshConnectionUsability(ctl, ctl->conn))
@@ -10383,37 +10604,50 @@ cmdDetachDevice(vshControl *ctl, const vshCmd *cmd)
if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
return false;
- if (vshCommandOptString(cmd, "file", &from) <= 0) {
- virDomainFree(dom);
- return false;
- }
+ if (vshCommandOptString(cmd, "file", &from) <= 0)
+ goto cleanup;
if (virFileReadAll(from, VIRSH_MAX_XML_FILE, &buffer) < 0) {
virshReportError(ctl);
- virDomainFree(dom);
- return false;
+ goto cleanup;
+ }
+
+ ret = vshCompleteXMLFromDomain(ctl, dom, buffer, &new_buffer);
+ if (ret < 0) {
+ if (ret == -2) {
+ vshError(ctl, _("no such device in %s"), virDomainGetName(dom));
+ } else if (ret == -3) {
+ vshError(ctl, "%s", _("given XML selects too many devices. "
+ "Please, be more specific"));
+ } else {
+ /* vshCompleteXMLFromDomain() already printed error message,
+ * so nothing to do here. */
+ }
+ goto cleanup;
}
if (vshCommandOptBool(cmd, "persistent")) {
flags = VIR_DOMAIN_AFFECT_CONFIG;
if (virDomainIsActive(dom) == 1)
flags |= VIR_DOMAIN_AFFECT_LIVE;
- ret = virDomainDetachDeviceFlags(dom, buffer, flags);
+ ret = virDomainDetachDeviceFlags(dom, new_buffer, flags);
} else {
- ret = virDomainDetachDevice(dom, buffer);
+ ret = virDomainDetachDevice(dom, new_buffer);
}
- VIR_FREE(buffer);
if (ret < 0) {
vshError(ctl, _("Failed to detach device from %s"), from);
- virDomainFree(dom);
- return false;
- } else {
- vshPrint(ctl, "%s", _("Device detached successfully\n"));
+ goto cleanup;
}
+ vshPrint(ctl, "%s", _("Device detached successfully\n"));
+ funcRet = true;
+
+cleanup:
+ VIR_FREE(new_buffer);
+ VIR_FREE(buffer);
virDomainFree(dom);
- return true;
+ return funcRet;
}
--
1.7.3.4
13 years, 7 months
Re: [libvirt] [virt-tools-list] Are requests for new virsh commands acceptable?
by Richard W.M. Jones
On Mon, Aug 22, 2011 at 09:06:29PM +1000, dave bl wrote:
> Are requests/patches for new virsh commands acceptable? ... I keep
> typing in "boot" instead of "start", If I submit a patch to add
> ('boot') this would anyone have anything against it?
Certainly I've long wanted better aliases for virsh commands.
This should be discussed on libvir-list. I suggest sending patches
there instead of here.
There was some discussion about 4-8 months ago about this subject. It
might be a good idea to search the archives and familiarize yourself
with that first.
We can't remove the existing commands, and we should be careful about
aliases which might clash with future commands. Something to think
about ...
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://et.redhat.com/~rjones/virt-top
[libvirt] [PATCH 0/3] Libvirt RPC dispatching and unresponsive QEMU
by Michal Privoznik
If there is an unresponsive qemu process and libvirt accesses
its monitor, it will not get any response and the calling thread will
block indefinitely, until the qemu process resumes or is destroyed.
If users continue executing APIs against that domain, libvirt will
run out of worker threads and hang (if those APIs access the
monitor as well). Although those calls time out after approx. 30 seconds,
which frees some workers, during that time libvirt is unable to
process any request. Even worse, if the number of unresponsive qemu
processes exceeds the size of the worker thread pool, libvirt hangs forever,
and even restarting the daemon will not make it any better.
This patch set heals the daemon on several levels, so that none of
the above causes it to hang:
1. RPC dispatching - all APIs are now annotated as 'high' or 'low'
priority, and a special thread pool is created. Low priority
APIs are still placed into the usual pool, but high priority ones
can be placed into this new pool if the former has no free worker.
Which APIs should be marked high and which low? The split
presented here is my guess. It is not written in stone,
but logically it is not safe to annotate as a high priority
call any API which is NOT guaranteed to finish in a reasonably
short time.
2. Job queue size limit - this bounds the number of threads
blocked by a stuck qemu. There is a timeout on this,
but if a user application continues dispatching low priority calls
it can still consume all (low priority) worker threads and therefore
affect other users/VMs, even if those calls time out in approx 30 secs.
3. Run monitor re-connection in a separate thread per VM.
If libvirtd is restarted, it tries to reconnect to all running
qemu processes. This is potentially risky - one stuck qemu can
block daemon startup. However, putting the monitor startup code
into one thread per VM allows libvirtd to start up, accept client
connections and work with all VMs whose monitor was successfully
re-opened. An unresponsive qemu holds the job until we open the
monitor, so a clever user application can destroy such a domain.
All APIs requiring the job will just fail to acquire it.
Michal Privoznik (3):
daemon: Create priority workers pool
qemu: Introduce job queue size limit
qemu: Deal with stucked qemu on daemon startup
daemon/libvirtd.aug | 1 +
daemon/libvirtd.c | 10 +-
daemon/libvirtd.conf | 6 +
daemon/remote.c | 26 ++
daemon/remote.h | 2 +
src/qemu/libvirtd_qemu.aug | 1 +
src/qemu/qemu.conf | 7 +
src/qemu/qemu_conf.c | 4 +
src/qemu/qemu_conf.h | 2 +
src/qemu/qemu_domain.c | 17 ++
src/qemu/qemu_domain.h | 2 +
src/qemu/qemu_driver.c | 23 +--
src/qemu/qemu_process.c | 89 ++++++-
src/remote/qemu_protocol.x | 13 +-
src/remote/remote_protocol.x | 544 +++++++++++++++++++++---------------------
src/rpc/gendispatch.pl | 48 ++++-
src/rpc/virnetserver.c | 32 +++-
src/rpc/virnetserver.h | 6 +-
src/util/threadpool.c | 38 ++-
src/util/threadpool.h | 1 +
20 files changed, 554 insertions(+), 318 deletions(-)
--
1.7.3.4
Re: [libvirt] migration of vnlink VMs
by Oved Ourfalli
----- Original Message -----
> From: "Laine Stump" <lstump(a)redhat.com>
> To: "Oved Ourfalli" <ovedo(a)redhat.com>
> Cc: "Ayal Baron" <abaron(a)redhat.com>, "Barak Azulay" <bazulay(a)redhat.com>, "Shahar Havivi" <shaharh(a)redhat.com>,
> "Itamar Heim" <iheim(a)redhat.com>, "Dan Kenigsberg" <danken(a)redhat.com>
> Sent: Thursday, April 28, 2011 10:20:35 AM
> Subject: Re: migration of vnlink VMs
> Oved,
>
> Would it be okay to repost this message to the thread on libvir-list
> so
> that other parties can add their thoughts?
>
Of course. I'm sending my answer to the libvirt list.
> On 04/27/2011 09:58 AM, Oved Ourfalli wrote:
> > Laine, hello.
> >
> > We read your proposal for abstraction of guest<--> host network
> > connection in libvirt.
> >
> > You has an open issue there regarding the vepa/vnlink attributes:
> > "3) What about the parameters in the<virtualport> element that are
> > currently used by vepa/vnlink. Do those belong with the host, or
> > with the guest?"
> >
> > The parameters for the virtualport element should be on the guest,
> > and not the host, because a specific interface can run multiple
> > profiles,
>
> Are you talking about host interface or guest interface? If you mean
> that multiple different profiles can be used when connecting to a
> particular switch - as long as there are only a few different
> profiles,
> rather than each guest having its own unique profile, then it still
> seems better to have the port profile live with the network definition
> (and just define multiple networks, one for each port profile).
>
The profile names can change regularly, so it looks like it will be better to put them at the guest level, so that the host network file won't have to be changed on all hosts whenever something changes in the profiles.
Also, you would duplicate data by writing all the profile names on every host that is connected to the vn-link/vepa switch.
>
> > so it will be a mistake to define a profile to be interface
> > specific on the host. Moreover, putting it in the guest level will
> > enable us in the future (if supported by libvirt/qemu) to migrate
> > a vm from a host with vepa/vnlink interfaces, to another host with
> > a bridge, for example.
>
> It seems to me like doing exactly the opposite would make it easier to
> migrate to a host that used a different kind of switching (from vepa
> to
> vnlink, or from a bridged interface to vepa, etc), since the port
> profile required for a particular host's network would be at the host
> waiting to be used.
You are right, but we would want the option to prevent that from happening in case we don't want to allow it.
We can make the ability to migrate between different network types configurable, and we would like an easy way to tell libvirt - "please allow/don't allow it".
>
> > So, in the networks at the host level you will have:
> > <network type='direct'>
> > <name>red-network</name>
> > <source mode='vepa'>
> > <pool>
> > <interface>
> > <name>eth0</name>
> > .....
> > </interface>
> > <interface>
> > <name>eth4</name>
> > .....
> > </interface>
> > <interface>
> > <name>eth18</name>
> > .....
> > </interface>
> > </pool>
> > </source>
> > </network>
> >
> > And in the guest you will have (for vepa):
> > <interface type='network'>
> > <source network='red-network'/>
> > <virtualport type="802.1Qbg">
> > <parameters managerid="11" typeid="1193047" typeidversion="2"
> > instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/>
> > </virtualport>
> > </interface>
> >
> > Or (for vnlink):
> > <interface type='network'>
> > <source network='red-network'/>
> > <virtualport type="802.1Qbh">
> > <parameters profile_name="profile1"/>
> > </virtualport>
> > </interface>
>
> This illustrates the problem I was wondering about - in your example
> it
> would not be possible for the guest to migrate from the host using a
> vepa switch to the host using a vnlink switch (and it would be
> possible
You are right. When trying to migrate between vepa and vnlink, attributes would be missing on each side if we left them on the host.
> to migrate to a host using a standard bridge only if the virtualport
> element was ignored). If the virtualport element lived with the
> network
> definition of red-network on each host, it could be migrated without
> problem.
>
> The only problematic thing would be if any of the attributes within
> <parameters> was unique for each guest (I don't know anything about
> the
> individual attributes, but "instanceid" sounds like it might be
> different for each guest).
>
> > Then, when migrating from a vepa/vnlink host to another vepa/vnlink
> > host containing red-network, the profile attributes will be
> > available at the guest domain xml.
> > In case the target host has a red-network, which isn't vepa/vnlink,
> > we want to be able to choose whether to make the use of the profile
> > attributes optional (i.e., libvirt won't fail in case of migrating
> > to a network of another type), or mandatory (i.e., libvirt will fail
> > in case of migration to a non-vepa/vnlink network).
> >
> > We have something similar in CPU flags:
> > <cpu match="exact">
> > <model>qemu64</model>
> > <topology sockets="S" cores="C" threads="T"/>
> > <feature policy="require/optional/disable......"
> > name="sse2"/>
> > </cpu>
>
> In this analogy, does "CPU flags" == "mode (vepa/vnlink/bridge)" or
> does
> "CPU flags" == "virtualport parameters" ? It seems like what you're
> wanting can be satisfied by simply not defining "red-network" on the
> hosts that don't have the proper networking setup available (maybe
> what
> you *really* want to call it is "red-vnlink-network").
What I meant to say there is that we would like the ability to say whether an attribute must be used or not.
The issues you mention are indeed interesting. I'm cc-ing libvirt-list to see what other people think.
Putting it on the guest will indeed make it problematic to migrate between networks that need different parameters (vnlink/vepa for example).
Oved
[libvirt] [BUG] Xen->libvirt: localtime reported as UTC
by Philipp Hahn
Hello,
just a report, no fix for that bug yet.
If I create a domain and set <clock offset='localtime'/>, that information is
correctly translated to Xends sxpr data, but on reading it back I get it
reported as 'utc':
# virsh dumpxml 85664d3f-68dd-a4c2-4d2f-be7f276b95f0 | grep clock
<clock offset='utc'/>
# gfind localtime
./85664d3f-68dd-a4c2-4d2f-be7f276b95f0/config.sxp: (platform
((device_model /usr/lib64/xen/bin/qemu-dm) (localtime 1)))
./85664d3f-68dd-a4c2-4d2f-be7f276b95f0/config.sxp: (localtime 1)
BYtE
Philipp
--
Philipp Hahn Open Source Software Engineer hahn(a)univention.de
Univention GmbH Linux for Your Business fon: +49 421 22 232- 0
Mary-Somerville-Str.1 D-28359 Bremen fax: +49 421 22 232-99
http://www.univention.de/
[libvirt] [PATCH] libvirtd: create run dir when running at non-root user
by xuhj@linux.vnet.ibm.com
From: Xu He Jie <xuhj(a)linux.vnet.ibm.com>
Signed-off-by: Xu He Jie <xuhj(a)linux.vnet.ibm.com>
When libvirtd is running as a non-root user, it won't create ${HOME}/.libvirt.
It shows the error message:
17:44:16.838: 7035: error : virPidFileAcquirePath:322 : Failed to open pid file
---
daemon/libvirtd.c | 46 ++++++++++++++++++++++++++++++++--------------
1 files changed, 32 insertions(+), 14 deletions(-)
diff --git a/daemon/libvirtd.c b/daemon/libvirtd.c
index 423c3d7..e0004c7 100644
--- a/daemon/libvirtd.c
+++ b/daemon/libvirtd.c
@@ -1249,6 +1249,7 @@ int main(int argc, char **argv) {
bool privileged = geteuid() == 0 ? true : false;
bool implicit_conf = false;
bool use_polkit_dbus;
+ char *run_dir = NULL;
struct option opts[] = {
{ "verbose", no_argument, &verbose, 1},
@@ -1403,21 +1404,35 @@ int main(int argc, char **argv) {
/* Ensure the rundir exists (on tmpfs on some systems) */
if (privileged) {
- const char *rundir = LOCALSTATEDIR "/run/libvirt";
- mode_t old_umask;
-
- old_umask = umask(022);
- if (mkdir (rundir, 0755)) {
- if (errno != EEXIST) {
- char ebuf[1024];
- VIR_ERROR(_("unable to create rundir %s: %s"), rundir,
- virStrerror(errno, ebuf, sizeof(ebuf)));
- ret = VIR_DAEMON_ERR_RUNDIR;
- umask(old_umask);
- goto cleanup;
- }
+ run_dir = strdup(LOCALSTATEDIR "/run/libvirt");
+ if (!run_dir) {
+ VIR_ERROR(_("Can't allocate memory"));
+ goto cleanup;
+ }
+ }
+ else {
+ char *user_dir = NULL;
+
+ if (!(user_dir = virGetUserDirectory(geteuid()))) {
+ VIR_ERROR(_("Can't determine user directory"));
+ goto cleanup;
+ }
+
+ if (virAsprintf(&run_dir, "%s/.libvirt/", user_dir) < 0) {
+ VIR_ERROR(_("Can't allocate memory"));
+ VIR_FREE(user_dir);
+ goto cleanup;
}
- umask(old_umask);
+
+ VIR_FREE(user_dir);
+ }
+
+ if (virFileMakePath(run_dir) < 0) {
+ char ebuf[1024];
+ VIR_ERROR(_("unable to create rundir %s: %s"), run_dir,
+ virStrerror(errno, ebuf, sizeof(ebuf)));
+ ret = VIR_DAEMON_ERR_RUNDIR;
+ goto cleanup;
}
/* Try to claim the pidfile, exiting if we can't */
@@ -1571,6 +1586,9 @@ cleanup:
VIR_FREE(sock_file_ro);
VIR_FREE(pid_file);
VIR_FREE(remote_config_file);
+ if (run_dir)
+ VIR_FREE(run_dir);
+
daemonConfigFree(config);
virLogShutdown();
--
1.7.4.1
[libvirt] [PATCH 1/2] Fix error detection in device change
by Philipp Hahn
According to qemu-kvm/qerror.c all messages start with a capital
"Device ", but the current code only scans for the lower case "device ".
This results in virDomainUpdateDeviceFlags() failing to detect locked
CD-ROMs and reporting success even in the case of a failure:
# virsh qemu-monitor-command "$VM" change\ drive-ide0-0-0\ \"/var/lib/libvirt/images/ucs_2.4-0-sec4-20110714145916-dvd-amd64.iso\"
Device 'drive-ide0-0-0' is locked
# virsh update-device "$VM" /dev/stdin <<<"<disk type='file' device='cdrom'><driver name='qemu' type='raw'/><source file='/var/lib/libvirt/images/ucs_2.4-0-sec4-20110714145916-dvd-amd64.iso'/><target dev='hda' bus='ide'/><readonly/><alias name='ide0-0-0'/><address type='drive' controller='0' bus='0' unit='0'/></disk>"
Device updated successfully
Signed-off-by: Philipp Hahn <hahn(a)univention.de>
---
src/qemu/qemu_monitor_text.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c
index 52d924a..98d9169 100644
--- a/src/qemu/qemu_monitor_text.c
+++ b/src/qemu/qemu_monitor_text.c
@@ -1064,7 +1064,7 @@ int qemuMonitorTextChangeMedia(qemuMonitorPtr mon,
/* If the command failed qemu prints:
* device not found, device is locked ...
* No message is printed on success it seems */
- if (strstr(reply, "device ")) {
+ if (c_strcasestr(reply, "device ")) {
qemuReportError(VIR_ERR_OPERATION_FAILED,
_("could not change media on %s: %s"), devname, reply);
goto cleanup;
--
1.7.1
[libvirt] [PATCH 0/8] Report disk latency info
by Osier Yang
This patch series introduces a new API to report disk latency
related information, which was added to upstream QEMU just a
few days ago (commit c488c7f649, Thu Aug 25).
Per previous discussion on ABI compatibility and API design
principles, the new API is defined in this style:
typedef struct _virDomainBlockStatsFlags virDomainBlockStatsFlagsStruct;
typedef virDomainBlockStatsFlagsStruct *virDomainBlockStatsFlagsPtr;
struct _virDomainBlockStatsFlags {
    char field[VIR_DOMAIN_BLOCK_STATS_FIELD_LENGTH];
    long long value;
};
int virDomainBlockStatsFlags(virDomainPtr dom,
                             const char *path,
                             virDomainBlockStatsFlagsPtr params,
                             int *nparams,
                             unsigned int flags);
Other points:
1) With the new API, the output of the virsh command "domblkstat"
is different; this might affect existing scripts. But
considering we are introducing new fields anyway, this
should be fine?
2) qemuMonitorJSONGetBlockStatsInfo used to set "*errs = 0", which
caused "domblkstat" to always print things like "vda errs 0".
However, QEMU doesn't support this field. Fixed in these
patches (*errs = -1).
And the new API qemuDomainBlockStatsFlags won't even set a field
for "errs".
3) Is it worthwhile to update gendispatch.pl to generate the remote
code for functions taking arguments like "virNodeCPUStatsPtr params"
and "virNodeMemoryStatsPtr params"? All of these arguments point
to structs with the same layout. Perhaps we can define an alias
for these structs and generate the remote code just like for
"virTypedParameterPtr params".
[PATCH 1/8] latency: Define new public API and structure
[PATCH 2/8] latency: Define the internal driver callback
[PATCH 3/8] latency: Implemente the public API
[PATCH 4/8] latency: Wire up the remote protocol
[PATCH 5/8] latency: Update monitor functions for new latency fields
[PATCH 6/8] latency: Implemente internal API for qemu driver
[PATCH 7/8] latency: Expose the new API for Python binding
[PATCH 8/8] latency: Update cmdBlkStats to use new API
Regards,
Osier
[libvirt] [PATCH 0/3 v4] Add filesystem pool formatting
by Osier Yang
The following patches add the ability to format filesystem pools when
the appropriate flags are passed to pool build. This patch set introduces
two new flags:
VIR_STORAGE_POOL_BUILD_NO_OVERWRITE causes the build to probe for an
existing pool of the requested type. The build operation formats the
filesystem if it does not find an existing filesystem of that type.
VIR_STORAGE_POOL_BUILD_OVERWRITE causes the build to format unconditionally.
This patch set is mainly based on v3 by Dave Allan.
http://www.redhat.com/archives/libvir-list/2010-June/msg00042.html
[PATCH 1/3] storage: Add mkfs and libblkid to build system
[PATCH 2/3] storage: Add fs pool formatting
[PATCH 3/3] storage: Add virsh support for fs pool formating
Regards,
Osier