[libvirt] [PATCH 0/5] Atomic api to list all domains

This patchset adds an atomic way to list guests. A more detailed description is in patch 1.

Peter Krempa (5):
  lib: Add public api to enable atomic listing of guest
  python: add API exports for virConnectListAllDomains()
  remote: implement remote protocol for virConnectListAllDomains()
  qemu: implement virConnectListAllDomains() for qemu driver
  virsh: add support for virConnectListAllDomains and clean up cmdList

 daemon/remote.c                       |   50 ++++
 include/libvirt/libvirt.h.in          |    8 +-
 python/generator.py                   |    1 +
 python/libvirt-override-api.xml       |   12 +-
 python/libvirt-override-virConnect.py |   12 +
 python/libvirt-override.c             |   49 ++++-
 src/driver.h                          |   12 +-
 src/libvirt.c                         |   46 ++++
 src/libvirt_public.syms               |    5 +
 src/qemu/qemu_driver.c                |   97 ++++++++
 src/remote/remote_driver.c            |   66 +++++
 src/remote/remote_protocol.x          |   14 +-
 src/remote_protocol-structs           |   12 +
 tools/virsh.c                         |  423 +++++++++++++++++++++------------
 14 files changed, 647 insertions(+), 160 deletions(-)

-- 
1.7.3.4

This patch adds a new public api that lists domains. The new approach is different from those used before. There are four key points to this: 1) The list is acquired atomicaly and contains both active and inactive domains (guests). This eliminates the need to call two different list APIs, where the state might change in between of the calls. 2) The returned list consists of virDomainPtrs instead of names or ID's that have to be converted to virDomainPtrs anyways using separate calls for each one of them. This is more convenient and saves resources and application code in most (all?) cases. 3) The returned list is auto-allocated. This saves a lot hassle for the users. 4) The python binding returns a list of domain objects that is very neat to work with. The only problem with this approach is no support from code generators so both RPC code and python bindings had to be written manualy. *include/libvirt/libvirt.h.in: - add API prototype - clean up whitespace mistakes nearby *python/generator.py: - inhibit generation of the bindings for the new api *src/driver.h: - add driver prototype - clean up some whitespace mistakes nearby *src/libvirt.c: - add public implementation *src/libvirt_public.syms: - export the new symbol --- include/libvirt/libvirt.h.in | 8 +++++- python/generator.py | 1 + src/driver.h | 12 ++++++++-- src/libvirt.c | 46 ++++++++++++++++++++++++++++++++++++++++++ src/libvirt_public.syms | 5 ++++ 5 files changed, 67 insertions(+), 5 deletions(-) diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in index a817db8..2dacd45 100644 --- a/include/libvirt/libvirt.h.in +++ b/include/libvirt/libvirt.h.in @@ -1759,8 +1759,12 @@ int virDomainUndefineFlags (virDomainPtr domain, unsigned int flags); int virConnectNumOfDefinedDomains (virConnectPtr conn); int virConnectListDefinedDomains (virConnectPtr conn, - char **const names, - int maxnames); + char **const names, + int maxnames); +int virConnectListAllDomains (virConnectPtr conn, + virDomainPtr **domains, + int ndomains, + unsigned int flags); int virDomainCreate (virDomainPtr domain); int virDomainCreateWithFlags (virDomainPtr domain, unsigned int flags); diff --git a/python/generator.py b/python/generator.py index 9530867..a81fe4d 100755 --- a/python/generator.py +++ b/python/generator.py @@ -453,6 +453,7 @@ skip_function = ( 'virConnectDomainEventDeregisterAny', # overridden in virConnect.py 'virSaveLastError', # We have our own python error wrapper 'virFreeError', # Only needed if we use virSaveLastError + 'virConnectListAllDomains', #overriden in virConnect.py 'virStreamRecvAll', # Pure python libvirt-override-virStream.py 'virStreamSendAll', # Pure python libvirt-override-virStream.py diff --git a/src/driver.h b/src/driver.h index 03d249b..d5c5072 100644 --- a/src/driver.h +++ b/src/driver.h @@ -251,9 +251,14 @@ typedef char * const char *domainXml, unsigned int flags); typedef int - (*virDrvListDefinedDomains) (virConnectPtr conn, - char **const names, - int maxnames); + (*virDrvListDefinedDomains) (virConnectPtr conn, + char **const names, + int maxnames); +typedef int + (*virDrvConnectListAllDomains) (virConnectPtr conn, + virDomainPtr **domains, + int ndomains, + unsigned int flags); typedef int (*virDrvNumOfDefinedDomains) (virConnectPtr conn); typedef int @@ -1014,6 +1019,7 @@ struct _virDriver { virDrvDomainGetDiskErrors domainGetDiskErrors; virDrvDomainSetMetadata domainSetMetadata; virDrvDomainGetMetadata domainGetMetadata; + virDrvConnectListAllDomains connectListAllDomains; }; typedef int diff --git 
a/src/libvirt.c b/src/libvirt.c index 22fc863..09a6d95 100644 --- a/src/libvirt.c +++ b/src/libvirt.c @@ -8128,6 +8128,52 @@ error: } /** + * virConnectListAllDomains: + * @conn: Pointer to the hypervisor connection. + * @domains: Pointer to a variable to store the array containing domain objects + * or NULL if the list is not required (just returns number of guests). + * @ndomains: Maximum number of requested guests. Set to -1 for no limits. + * @flags: Not used yet. Callers must pass 0. + * + * List all guests (domains). This function allocates and returns a complete list + * of guests. Caller has to free the returned virDomainPtrs and the array. + * + * Returns number of guests found (and stored in *domains) or -1 on error. + */ +int +virConnectListAllDomains(virConnectPtr conn, + virDomainPtr **domains, + int ndomains, + unsigned int flags) +{ + VIR_DEBUG("conn=%p, domains=%p, ndomains=%d, flags=%x", + conn, domains, ndomains, flags); + + virResetLastError(); + + if (!VIR_IS_CONNECT(conn)) { + virLibConnError(VIR_ERR_INVALID_CONN, __FUNCTION__); + virDispatchError(NULL); + return -1; + } + + if (conn->driver->connectListAllDomains) { + int ret; + ret = conn->driver->connectListAllDomains(conn, domains, + ndomains, flags); + if (ret < 0) + goto error; + return ret; + } + + virLibConnError(VIR_ERR_NO_SUPPORT, __FUNCTION__); + +error: + virDispatchError(conn); + return -1; +} + +/** * virDomainCreate: * @domain: pointer to a defined domain * diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms index 46c13fb..6fd4843 100644 --- a/src/libvirt_public.syms +++ b/src/libvirt_public.syms @@ -534,4 +534,9 @@ LIBVIRT_0.9.11 { virDomainPMWakeup; } LIBVIRT_0.9.10; +LIBVIRT_0.9.13 { + global: + virConnectListAllDomains; +} LIBVIRT_0.9.11; + # .... define new API here using predicted next version number .... -- 1.7.3.4
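For reference, a minimal caller sketch of the API as proposed in this patch (not part of the series; it assumes the four-argument signature above, which the review below suggests trimming, and it omits connection setup and detailed error handling):

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  /* List every guest known to the connection, print its name, then
   * release the domain objects and the array. */
  static int
  listGuests(virConnectPtr conn)
  {
      virDomainPtr *domains = NULL;
      int i, n;

      /* ndomains == -1 requests an unlimited list; flags must be 0 */
      if ((n = virConnectListAllDomains(conn, &domains, -1, 0)) < 0)
          return -1;

      for (i = 0; i < n; i++) {
          printf("%s\n", virDomainGetName(domains[i]));
          virDomainFree(domains[i]);   /* drop each domain reference */
      }
      free(domains);                   /* then the array itself */
      return n;
  }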

On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch adds a new public api that lists domains. The new approach is different from those used before. There are four key points to this:
1) The list is acquired atomicaly and contains both active and inactive
s/atomicaly/atomically/
domains (guests). This eliminates the need to call two different list APIs, where the state might change in between of the calls.
Here, you may want to merge in points from my RFC in your v2, such as the ability to do filtering.
2) The returned list consists of virDomainPtrs instead of names or ID's that have to be converted to virDomainPtrs anyways using separate calls for each one of them. This is more convenient and saves resources and application code in most (all?) cases.
I don't know about 'saves resources' - we are actually passing more over the RPC wire. But definitely more convenient, and less application code.
3) The returned list is auto-allocated. This saves a lot hassle for the users.
4) The python binding returns a list of domain objects that is very neat to work with.
The only problem with this approach is no support from code generators so both RPC code and python bindings had to be written manualy.
s/manualy/manually/
*include/libvirt/libvirt.h.in: - add API prototype - clean up whitespace mistakes nearby *python/generator.py: - inhibit generation of the bindings for the new api *src/driver.h: - add driver prototype - clean up some whitespace mistakes nearby *src/libvirt.c: - add public implementation *src/libvirt_public.syms: - export the new symbol --- include/libvirt/libvirt.h.in | 8 +++++- python/generator.py | 1 + src/driver.h | 12 ++++++++-- src/libvirt.c | 46 ++++++++++++++++++++++++++++++++++++++++++ src/libvirt_public.syms | 5 ++++ 5 files changed, 67 insertions(+), 5 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in index a817db8..2dacd45 100644 --- a/include/libvirt/libvirt.h.in +++ b/include/libvirt/libvirt.h.in @@ -1759,8 +1759,12 @@ int virDomainUndefineFlags (virDomainPtr domain, unsigned int flags); int virConnectNumOfDefinedDomains (virConnectPtr conn); int virConnectListDefinedDomains (virConnectPtr conn, - char **const names, - int maxnames); + char **const names, + int maxnames); +int virConnectListAllDomains (virConnectPtr conn, + virDomainPtr **domains,
Hmm - time for me to think out loud. Your signature differs from mine. I proposed:

  virDomainPtr *domains    aka    virDomain **domains

where my return value would be an array of virDomain objects, and the caller would have to do:

  virDomainPtr dom, first;

  virConnectListAllDomains(conn, &first, flags);
  dom = first;
  while (dom) {
      use dom;
      virDomainFree(dom);
      dom++;
  }
  free(first);

But that would only work if virDomain is not an opaque type.

You proposed:

  virDomainPtr **domains    aka    virDomain ***domains

where the return is an array of virDomainPtr pointers. The caller would then do:

  virDomainPtr *array;
  virDomainPtr dom;
  int i = 1;

  virConnectListAllDomains(conn, &array, flags);
  dom = array[0];
  do {
      use dom;
      virDomainFree(dom);
      dom = array[i++];
  } while (dom);
  free(array);

(Of course, you could use a for (i = 0; i < ret; i++) loop instead of a while loop; I chose that style to contrast the difference in the number of pointer dereferences needed.) In conclusion, I think your style is right. But it also means that we ought to document a sample usage as part of the API call :)
+ int ndomains,
Drop the ndomains argument.
+ unsigned int flags); int virDomainCreate (virDomainPtr domain); int virDomainCreateWithFlags (virDomainPtr domain, unsigned int flags); diff --git a/python/generator.py b/python/generator.py index 9530867..a81fe4d 100755 --- a/python/generator.py +++ b/python/generator.py @@ -453,6 +453,7 @@ skip_function = ( 'virConnectDomainEventDeregisterAny', # overridden in virConnect.py 'virSaveLastError', # We have our own python error wrapper 'virFreeError', # Only needed if we use virSaveLastError + 'virConnectListAllDomains', #overriden in virConnect.py
s/overriden/overridden/
+++ b/src/driver.h @@ -251,9 +251,14 @@ typedef char * const char *domainXml, unsigned int flags); typedef int - (*virDrvListDefinedDomains) (virConnectPtr conn, - char **const names, - int maxnames); + (*virDrvListDefinedDomains) (virConnectPtr conn, + char **const names, + int maxnames); +typedef int + (*virDrvConnectListAllDomains) (virConnectPtr conn, + virDomainPtr **domains, + int ndomains,
Again, drop ndomains.
@@ -1014,6 +1019,7 @@ struct _virDriver { virDrvDomainGetDiskErrors domainGetDiskErrors; virDrvDomainSetMetadata domainSetMetadata; virDrvDomainGetMetadata domainGetMetadata; + virDrvConnectListAllDomains connectListAllDomains;
Order doesn't matter in this file; so I'd stick it closer to listDomains/numOfDomains, not here at the end. Your name is long; my preliminary code went with the shorter 'listAllDomains'.
/** + * virConnectListAllDomains: + * @conn: Pointer to the hypervisor connection. + * @domains: Pointer to a variable to store the array containing domain objects + * or NULL if the list is not required (just returns number of guests).
Nice idea - I hadn't thought of allowing NULL for counting.
+ * @ndomains: Maximum number of requested guests. Set to -1 for no limits.
Drop.
+ * @flags: Not used yet. Callers must pass 0.
See my RFC on how to use this for filtering.
+ * + * List all guests (domains). This function allocates and returns a complete list + * of guests. Caller has to free the returned virDomainPtrs and the array.
My wording mentioned: virDomainFree() on each guest, then free() on the array. Be specific, otherwise someone will do it wrong and leak memory. Again, sample code in this API can't hurt.
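Spelling that contract out, the cleanup the documentation should show is roughly this (a sketch, assuming 'n' holds the non-negative return value):

  for (i = 0; i < n; i++)
      virDomainFree(domains[i]);   /* release each virDomainPtr */
  free(domains);                   /* only then free the array */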
+ * + * Returns number of guests found (and stored in *domains) or -1 on error. + */ +int +virConnectListAllDomains(virConnectPtr conn, + virDomainPtr **domains, + int ndomains, + unsigned int flags) +{ + VIR_DEBUG("conn=%p, domains=%p, ndomains=%d, flags=%x", + conn, domains, ndomains, flags);
Adjust for the loss of ndomains. -- Eric Blake eblake@redhat.com +1-919-301-3266 Libvirt virtualization library http://libvirt.org

On Mon, May 21, 2012 at 04:42:41PM -0600, Eric Blake wrote:
+int virConnectListAllDomains (virConnectPtr conn, + virDomainPtr **domains,
Hmm - time for me to think out loud. Your signature differs from mine. I proposed:
virDomainPtr *domains aka virDomain **domains
where my return value would be an array of virDomain objects, and the caller would have to do:
virDomainPtr dom, first;

virConnectListAllDomains(conn, &first, flags);
dom = first;
while (dom) {
    use dom;
    virDomainFree(dom);
    dom++;
}
free(first);
But that would only work if virDomain is not an opaque type.
You proposed:
virDomainPtr **domains aka virDomain ***domains
where the return is an array of virDomainPtr pointers. The caller would then do:
virDomainPtr *array;
virDomainPtr dom;
int i = 1;

virConnectListAllDomains(conn, &array, flags);
dom = array[0];
do {
    use dom;
    virDomainFree(dom);
    dom = array[i++];
} while (dom);
free(array);
(Of course, you could use a for (i = 0; i < ret; i++) loop instead of a while loop; I chose that style to contrast the difference in the number of pointer dereferences needed.) In conclusion, I think your style is right. But it also means that we ought to document a sample usage as part of the API call :)
I agree that Michal's style is nicer & of course docs are always great Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

This patch adds export of the new API function virConnectListAllDomains() to the libvirt-python bindings. The virConnect object now has method "listAllDomains" that takes only the flags parameter and returns a python list of virDomain object corresponding to virDomainPtrs returned by the underlying api. The implementation is done manually as the generator does not support wrapping list of virDomainPtrs into virDomain objects. --- python/libvirt-override-api.xml | 12 ++++++-- python/libvirt-override-virConnect.py | 12 ++++++++ python/libvirt-override.c | 49 ++++++++++++++++++++++++++++++++- 3 files changed, 69 insertions(+), 4 deletions(-) diff --git a/python/libvirt-override-api.xml b/python/libvirt-override-api.xml index 0bafd21..2fd6dec 100644 --- a/python/libvirt-override-api.xml +++ b/python/libvirt-override-api.xml @@ -19,17 +19,23 @@ <function name='virConnectListDefinedDomains' file='python'> <info>list the defined domains, stores the pointers to the names in @names</info> <arg name='conn' type='virConnectPtr' info='pointer to the hypervisor connection'/> - <return type='str *' info='the list of Names of None in case of error'/> + <return type='str *' info='the list of Names or None in case of error'/> + </function> + <function name='virConnectListAllDomains' file='python'> + <info>returns list of all defined domains</info> + <arg name='conn' type='virConnectPtr' info='pointer to the hypervisor connection'/> + <arg name='flags' type='unsigned int' info='optional flags'/> + <return type='domain *' info='the list of domains or None in case of error'/> </function> <function name='virConnectListNetworks' file='python'> <info>list the networks, stores the pointers to the names in @names</info> <arg name='conn' type='virConnectPtr' info='pointer to the hypervisor connection'/> - <return type='str *' info='the list of Names of None in case of error'/> + <return type='str *' info='the list of Names or None in case of error'/> </function> <function name='virConnectListDefinedNetworks' file='python'> <info>list the defined networks, stores the pointers to the names in @names</info> <arg name='conn' type='virConnectPtr' info='pointer to the hypervisor connection'/> - <return type='str *' info='the list of Names of None in case of error'/> + <return type='str *' info='the list of Names or None in case of error'/> </function> <function name='virDomainLookupByUUID' file='python'> <info>Try to lookup a domain on the given hypervisor based on its UUID.</info> diff --git a/python/libvirt-override-virConnect.py b/python/libvirt-override-virConnect.py index 811e16b..ecb5680 100644 --- a/python/libvirt-override-virConnect.py +++ b/python/libvirt-override-virConnect.py @@ -185,3 +185,15 @@ raise libvirtError ('virConnectDomainEventRegisterAny() failed', conn=self) self.domainEventCallbackID[ret] = opaque return ret + + def listAllDomains(self, flags): + """List all domains and returns a list of domain objects""" + ret = libvirtmod.virConnectListAllDomains(self._o, flags) + if ret is None: + raise libvirtError("virConnectListAllDomains() failed", conn=self) + + retlist = list() + for domptr in ret: + retlist.append(virDomain(self, _obj=domptr)) + + return retlist diff --git a/python/libvirt-override.c b/python/libvirt-override.c index 130e702..eb2e21b 100644 --- a/python/libvirt-override.c +++ b/python/libvirt-override.c @@ -1944,7 +1944,7 @@ libvirt_virConnectGetLibVersion (PyObject *self ATTRIBUTE_UNUSED, static PyObject * libvirt_virConnectListDomainsID(PyObject *self ATTRIBUTE_UNUSED, - PyObject 
*args) { + PyObject *args) { PyObject *py_retval; int *ids = NULL, c_retval, i; virConnectPtr conn; @@ -1986,6 +1986,52 @@ libvirt_virConnectListDomainsID(PyObject *self ATTRIBUTE_UNUSED, } static PyObject * +libvirt_virConnectListAllDomains(PyObject *self ATTRIBUTE_UNUSED, + PyObject *args) +{ + PyObject *pyobj_conn; + PyObject *py_retval = NULL; + virConnectPtr conn; + virDomainPtr *doms = NULL; + int c_retval = 0; + int i; + unsigned int flags; + + if (!PyArg_ParseTuple(args, (char *)"Oi:virConnectListAllDomains", + &pyobj_conn, &flags)) + return NULL; + conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn); + + LIBVIRT_BEGIN_ALLOW_THREADS; + c_retval = virConnectListAllDomains(conn, &doms, -1, flags); + LIBVIRT_END_ALLOW_THREADS; + if (c_retval < 0) + return VIR_PY_NONE; + + if (!(py_retval = PyList_New(c_retval))) + goto error; + + for (i = 0; i < c_retval; i++) { + if (PyList_SetItem(py_retval, i, + libvirt_virDomainPtrWrap(doms[i])) < 0) + goto error; + } + + return py_retval; + +error: + Py_XDECREF(py_retval); + + if (doms && c_retval >= 0) + for (i = 0; i < c_retval; i++) + if (doms[i]) + virDomainFree(doms[i]); + + + return NULL; +} + +static PyObject * libvirt_virConnectListDefinedDomains(PyObject *self ATTRIBUTE_UNUSED, PyObject *args) { PyObject *py_retval; @@ -5618,6 +5664,7 @@ static PyMethodDef libvirtMethods[] = { {(char *) "virConnectOpenAuth", libvirt_virConnectOpenAuth, METH_VARARGS, NULL}, {(char *) "virConnectListDomainsID", libvirt_virConnectListDomainsID, METH_VARARGS, NULL}, {(char *) "virConnectListDefinedDomains", libvirt_virConnectListDefinedDomains, METH_VARARGS, NULL}, + {(char *) "virConnectListAllDomains", libvirt_virConnectListAllDomains, METH_VARARGS, NULL}, {(char *) "virConnectDomainEventRegister", libvirt_virConnectDomainEventRegister, METH_VARARGS, NULL}, {(char *) "virConnectDomainEventDeregister", libvirt_virConnectDomainEventDeregister, METH_VARARGS, NULL}, {(char *) "virConnectDomainEventRegisterAny", libvirt_virConnectDomainEventRegisterAny, METH_VARARGS, NULL}, -- 1.7.3.4

On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch adds export of the new API function virConnectListAllDomains() to the libvirt-python bindings. The virConnect object now has method "listAllDomains" that takes only the flags parameter and returns a python list of virDomain object corresponding to virDomainPtrs returned by the underlying api.
The implementation is done manually as the generator does not support wrapping list of virDomainPtrs into virDomain objects. --- python/libvirt-override-api.xml | 12 ++++++-- python/libvirt-override-virConnect.py | 12 ++++++++ python/libvirt-override.c | 49 ++++++++++++++++++++++++++++++++- 3 files changed, 69 insertions(+), 4 deletions(-)
Yep, definitely has to be an override function.
- <return type='str *' info='the list of Names of None in case of error'/> + <return type='str *' info='the list of Names or None in case of error'/>
Cute typo fix. Took me a while to spot it.
+ + def listAllDomains(self, flags): + """List all domains and returns a list of domain objects""" + ret = libvirtmod.virConnectListAllDomains(self._o, flags) + if ret is None: + raise libvirtError("virConnectListAllDomains() failed", conn=self) + + retlist = list() + for domptr in ret: + retlist.append(virDomain(self, _obj=domptr))
I'm not familiar enough with python to know if this does the right thing, but I hope it does. You got further than I did in my simple RFC :)
static PyObject * +libvirt_virConnectListAllDomains(PyObject *self ATTRIBUTE_UNUSED, + PyObject *args) +{ + PyObject *pyobj_conn; + PyObject *py_retval = NULL; + virConnectPtr conn; + virDomainPtr *doms = NULL; + int c_retval = 0; + int i; + unsigned int flags; + + if (!PyArg_ParseTuple(args, (char *)"Oi:virConnectListAllDomains", + &pyobj_conn, &flags)) + return NULL; + conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
What happens if PyvirConnect_Get() fails? Can it return NULL? While virConnectListAllDomains happens to deal with NULL (by failing the API), I can't help but wonder if we should be more careful here. Then again, that's probably a common problem in the python bindings, where you are just copying from other clients doing the same. I guess nothing needs to change here.
+ + LIBVIRT_BEGIN_ALLOW_THREADS; + c_retval = virConnectListAllDomains(conn, &doms, -1, flags); + LIBVIRT_END_ALLOW_THREADS; + if (c_retval < 0) + return VIR_PY_NONE; + + if (!(py_retval = PyList_New(c_retval))) + goto error; + + for (i = 0; i < c_retval; i++) { + if (PyList_SetItem(py_retval, i, + libvirt_virDomainPtrWrap(doms[i])) < 0)
Does libvirt_virDomainPtrWrap create a new object from scratch, or is it taking a reference to doms[i]? If the former, then we need to call virDomainFree(doms[i]) unconditionally; if the latter, then we need to set doms[i] to NULL on success. [Looks like the latter.]
+ goto error; + } + + return py_retval;
Wrong. This leaks memory, at least doms. Instead, you need to skip to:
+ +error: + Py_XDECREF(py_retval); +
a cleanup label here.
+ if (doms && c_retval >= 0)
We already guaranteed doms was non-NULL and c_retval >= 0 if we get here.
+ for (i = 0; i < c_retval; i++) + if (doms[i]) + virDomainFree(doms[i]); +
Memory leak: you are missing a VIR_FREE(doms).
+ + return NULL;
I think this control flow solves it:

  if (!(py_retval = PyList_New(c_retval)))
      goto cleanup;

  for (i = 0; i < c_retval; i++) {
      if (PyList_SetItem(py_retval, i,
                         libvirt_virDomainPtrWrap(doms[i])) < 0) {
          Py_DECREF(py_retval);
          py_retval = NULL;
          goto cleanup;
      }
      doms[i] = NULL;
  }

cleanup:
  for (i = 0; i < c_retval; i++)
      if (doms[i])
          virDomainFree(doms[i]);
  VIR_FREE(doms);

  return py_retval;

-- 
Eric Blake eblake@redhat.com +1-919-301-3266
Libvirt virtualization library http://libvirt.org

On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch adds export of the new API function virConnectListAllDomains() to the libvirt-python bindings. The virConnect object now has method "listAllDomains" that takes only the flags parameter and returns a python list of virDomain object corresponding to virDomainPtrs returned by the underlying api.
The implementation is done manually as the generator does not support wrapping list of virDomainPtrs into virDomain objects. --- python/libvirt-override-api.xml | 12 ++++++-- python/libvirt-override-virConnect.py | 12 ++++++++ python/libvirt-override.c | 49 ++++++++++++++++++++++++++++++++- 3 files changed, 69 insertions(+), 4 deletions(-)
Just noticed another thing:
+ <function name='virConnectListAllDomains' file='python'> + <info>returns list of all defined domains</info> + <arg name='conn' type='virConnectPtr' info='pointer to the hypervisor connection'/> + <arg name='flags' type='unsigned int' info='optional flags'/>
This does not expose any way to trigger the underlying C code of passing NULL as a means of just counting. That might be useful, so maybe we need: conn.numOfAllDomains in python code, which wraps virConnectListAllDomains but passes NULL and returns just an integer instead of a list. After all, if we make RPC more efficient for just counting when passing NULL, then it is feasible to write python code that could benefit from that efficiency. -- Eric Blake eblake@redhat.com +1-919-301-3266 Libvirt virtualization library http://libvirt.org
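At the C level the count-only path is already described by the @domains documentation in patch 1; a numOfAllDomains wrapper would simply forward to it. A sketch of the underlying call (names as in the patch):

  /* Passing NULL for @domains asks the driver only for the number of
   * guests; no list is allocated. */
  int nguests = virConnectListAllDomains(conn, NULL, -1, 0);

  if (nguests < 0)
      return -1;   /* error raised through the usual libvirt error machinery */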

This patch wires up the RPC protocol handlers for virConnectListAllDomains(). The RPC generator has no support for the way how virConnectListAllDomains() returns the results so the handler code had to be done manually. The new api is handled by REMOTE_PROC_CONNECT_LIST_ALL_DOMAINS, with number 271 and marked with high priority. --- daemon/remote.c | 50 +++++++++++++++++++++++++++++++ src/remote/remote_driver.c | 66 ++++++++++++++++++++++++++++++++++++++++++ src/remote/remote_protocol.x | 14 ++++++++- src/remote_protocol-structs | 12 +++++++ 4 files changed, 141 insertions(+), 1 deletions(-) diff --git a/daemon/remote.c b/daemon/remote.c index 16a8a05..8ac0568 100644 --- a/daemon/remote.c +++ b/daemon/remote.c @@ -996,6 +996,56 @@ no_memory: } static int +remoteDispatchConnectListAllDomains(virNetServerPtr server ATTRIBUTE_UNUSED, + virNetServerClientPtr client, + virNetMessagePtr msg ATTRIBUTE_UNUSED, + virNetMessageErrorPtr rerr, + remote_connect_list_all_domains_args *args, + remote_connect_list_all_domains_ret *ret) +{ + virDomainPtr *doms = NULL; + int ndomains = 0; + int i; + int rv = -1; + struct daemonClientPrivate *priv = virNetServerClientGetPrivateData(client); + + if (!priv->conn) { + virNetError(VIR_ERR_INTERNAL_ERROR, "%s", _("connection not open")); + goto cleanup; + } + + if ((ndomains = virConnectListAllDomains(priv->conn, + &doms, + args->ndomains, + args->flags)) < 0) + goto cleanup; + + if (doms) { + if (VIR_ALLOC_N(ret->domains.domains_val, ndomains) < 0) { + virReportOOMError(); + goto cleanup; + } + + ret->domains.domains_len = ndomains; + + for (i = 0; i < ndomains; i++) + make_nonnull_domain(ret->domains.domains_val+i, doms[i]); + } + + rv = ndomains; + +cleanup: + if (rv < 0) + virNetMessageSaveError(rerr); + if (doms) { + for (i = 0; i < ndomains;i++) + virDomainFree(doms[i]); + VIR_FREE(doms); + } + return rv; +} + +static int remoteDispatchDomainGetSchedulerParametersFlags(virNetServerPtr server ATTRIBUTE_UNUSED, virNetServerClientPtr client ATTRIBUTE_UNUSED, virNetMessagePtr msg ATTRIBUTE_UNUSED, diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c index 5c87561..4cd02ec 100644 --- a/src/remote/remote_driver.c +++ b/src/remote/remote_driver.c @@ -1265,6 +1265,71 @@ done: return rv; } +static int +remoteConnectListAllDomains(virConnectPtr conn, + virDomainPtr **domains, + int ndomains, + unsigned int flags) +{ + int rv = -1; + int i; + virDomainPtr *doms = NULL; + remote_connect_list_all_domains_args args; + remote_connect_list_all_domains_ret ret; + + struct private_data *priv = conn->privateData; + + remoteDriverLock(priv); + + if (domains) + *domains = NULL; + + args.ndomains = ndomains; + args.flags = flags; + + memset(&ret, 0, sizeof(ret)); + if (call (conn, + priv, + 0, + REMOTE_PROC_CONNECT_LIST_ALL_DOMAINS, + (xdrproc_t) xdr_remote_connect_list_all_domains_args, + (char *) &args, + (xdrproc_t) xdr_remote_connect_list_all_domains_ret, + (char *) &ret) == -1) + goto done; + + if (domains) { + if (VIR_ALLOC_N(doms, ret.domains.domains_len) < 0) + goto no_memory; + for (i = 0; i < ret.domains.domains_len; i++) { + doms[i] = get_nonnull_domain(conn, ret.domains.domains_val[i]); + if (!doms[i]) + goto no_memory; + } + + *domains = doms; + doms = NULL; + } + + rv = ret.domains.domains_len; + +cleanup: + xdr_free((xdrproc_t) xdr_remote_connect_list_all_domains_ret, (char *) &ret); + +done: + remoteDriverUnlock(priv); + return rv; + +no_memory: + if (doms) { + for (i = 0; i < ret.domains.domains_len; i++) + virDomainFree(*domains[i]); + 
VIR_FREE(doms); + } + virReportOOMError(); + goto cleanup; +} + /* Helper to free typed parameters. */ static void remoteFreeTypedParameters(remote_typed_param *args_params_val, @@ -5111,6 +5176,7 @@ static virDriver remote_driver = { .domainGetDiskErrors = remoteDomainGetDiskErrors, /* 0.9.10 */ .domainSetMetadata = remoteDomainSetMetadata, /* 0.9.10 */ .domainGetMetadata = remoteDomainGetMetadata, /* 0.9.10 */ + .connectListAllDomains = remoteConnectListAllDomains, /* 0.9.13 */ }; static virNetworkDriver network_driver = { diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x index 2d57247..bc254bd 100644 --- a/src/remote/remote_protocol.x +++ b/src/remote/remote_protocol.x @@ -2463,6 +2463,16 @@ struct remote_domain_get_disk_errors_ret { int nerrors; }; +struct remote_connect_list_all_domains_args { + int ndomains; + unsigned int flags; +}; + +struct remote_connect_list_all_domains_ret { + remote_nonnull_domain domains<>; + unsigned int ret; +}; + /*----- Protocol. -----*/ @@ -2782,7 +2792,9 @@ enum remote_procedure { REMOTE_PROC_DOMAIN_PM_WAKEUP = 267, /* autogen autogen */ REMOTE_PROC_DOMAIN_EVENT_TRAY_CHANGE = 268, /* autogen autogen */ REMOTE_PROC_DOMAIN_EVENT_PMWAKEUP = 269, /* autogen autogen */ - REMOTE_PROC_DOMAIN_EVENT_PMSUSPEND = 270 /* autogen autogen */ + REMOTE_PROC_DOMAIN_EVENT_PMSUSPEND = 270, /* autogen autogen */ + + REMOTE_PROC_CONNECT_LIST_ALL_DOMAINS = 271 /* skipgen skipgen priority:high */ /* * Notice how the entries are grouped in sets of 10 ? diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs index 9b2414f..322a0e1 100644 --- a/src/remote_protocol-structs +++ b/src/remote_protocol-structs @@ -1921,6 +1921,17 @@ struct remote_domain_get_disk_errors_ret { } errors; int nerrors; }; +struct remote_connect_list_all_domains_args { + int ndomains; + u_int flags; +}; +struct remote_connect_list_all_domains_ret { + struct { + u_int domains_len; + remote_nonnull_domain * domains_val; + } domains; + u_int ret; +}; enum remote_procedure { REMOTE_PROC_OPEN = 1, REMOTE_PROC_CLOSE = 2, @@ -2192,4 +2203,5 @@ enum remote_procedure { REMOTE_PROC_DOMAIN_EVENT_TRAY_CHANGE = 268, REMOTE_PROC_DOMAIN_EVENT_PMWAKEUP = 269, REMOTE_PROC_DOMAIN_EVENT_PMSUSPEND = 270, + REMOTE_PROC_CONNECT_LIST_ALL_DOMAINS = 271, }; -- 1.7.3.4

On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch wires up the RPC protocol handlers for virConnectListAllDomains(). The RPC generator has no support for the way how virConnectListAllDomains() returns the results so the handler code had to be done manually.
The new api is handled by REMOTE_PROC_CONNECT_LIST_ALL_DOMAINS, with number 271 and marked with high priority. --- daemon/remote.c | 50 +++++++++++++++++++++++++++++++ src/remote/remote_driver.c | 66 ++++++++++++++++++++++++++++++++++++++++++ src/remote/remote_protocol.x | 14 ++++++++- src/remote_protocol-structs | 12 +++++++ 4 files changed, 141 insertions(+), 1 deletions(-)
diff --git a/daemon/remote.c b/daemon/remote.c index 16a8a05..8ac0568 100644 --- a/daemon/remote.c +++ b/daemon/remote.c
+ + if ((ndomains = virConnectListAllDomains(priv->conn, + &doms, + args->ndomains,
No need for args->ndomains.
+ args->flags)) < 0) + goto cleanup; + + if (doms) {
My proposal is that we _guarantee_ a terminating NULL entry. Which means if we get this far, doms will always be non-NULL.
+ if (VIR_ALLOC_N(ret->domains.domains_val, ndomains) < 0) {
However, this is correct. We do not want to transmit the trailing NULL over the wire. That said, you proposed that passing NULL doms could get just a count. As written, you have no way to flag that the client passed in NULL, so the client is _always_ requesting a list, and the server is always returning a list, for the client to then discard that list. I'm wondering if struct remote_connect_list_all_domains_args needs an extra field that flags whether the client is interested in a return list or just a count.
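A sketch of the kind of extra field being suggested here (the name "need_results" is just a placeholder, not something in this series):

  struct remote_connect_list_all_domains_args {
      int need_results;        /* non-zero if the client passed a
                                * non-NULL domains pointer */
      unsigned int flags;
  };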
+++ b/src/remote/remote_driver.c @@ -1265,6 +1265,71 @@ done: return rv; }
+static int +remoteConnectListAllDomains(virConnectPtr conn, + virDomainPtr **domains, + int ndomains, + unsigned int flags) +{ + int rv = -1; + int i; + virDomainPtr *doms = NULL; + remote_connect_list_all_domains_args args; + remote_connect_list_all_domains_ret ret; + + struct private_data *priv = conn->privateData; + + remoteDriverLock(priv); + + if (domains) + *domains = NULL;
I'm debating whether we should guarantee that *domains is set to NULL on error (in which case, this should probably be hoisted out of here and put in libvirt.c in patch 1/5 instead), or if we should instead guarantee that *domains is untouched on error and only modified on success. Either way, the RPC driver shouldn't need to touch *domains except on success.
+ + args.ndomains = ndomains;
Again, drop this.
+ args.flags = flags;
And as the counterpart to my comment on the daemon side, if the args struct has a field that flags whether the user passed in NULL for domains, it would save effort over the wire to return a 0-size array and just return a count.
+ + if (domains) { + if (VIR_ALLOC_N(doms, ret.domains.domains_len) < 0)
This must be VIR_ALLOC_N(doms, ret.domains.domains_len + 1) in order to match my proposed semantics of guaranteeing a terminating NULL.
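In other words, the client-side allocation would become something like this (a sketch based on the code above; cleanup paths omitted):

  /* one extra slot so the array is NULL-terminated for callers that
   * want to iterate without remembering the count */
  if (VIR_ALLOC_N(doms, ret.domains.domains_len + 1) < 0)
      goto no_memory;
  for (i = 0; i < ret.domains.domains_len; i++) {
      if (!(doms[i] = get_nonnull_domain(conn, ret.domains.domains_val[i])))
          goto no_memory;
  }
  doms[i] = NULL;   /* VIR_ALLOC_N zero-fills, but be explicit */

  *domains = doms;
  doms = NULL;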
+++ b/src/remote/remote_protocol.x @@ -2463,6 +2463,16 @@ struct remote_domain_get_disk_errors_ret { int nerrors; };
+struct remote_connect_list_all_domains_args { + int ndomains;
Drop ndomains.
+ unsigned int flags; +}; + +struct remote_connect_list_all_domains_ret { + remote_nonnull_domain domains<>;
This is an unbounded array; we aren't using any of these anywhere else. I wonder if reusing REMOTE_DOMAIN_ID_LIST_MAX would be reasonable instead. Then again, we are returning more information per domain: a name, uuid, and id, so maybe REMOTE_DOMAIN_NAME_LIST_MAX is the best we can do (currently 1024, but then again there is the pending patch to raise that limit). The name element may make us run into RPC limits even if we don't otherwise enforce a limit.
+ unsigned int ret;
Either this ret value is redundant (you can get it from domains.domains_len), or it makes sense (because you added a field to the args, and return a 0-length domains but a non-zero ret when the caller passed in a NULL domains). -- Eric Blake eblake@redhat.com +1-919-301-3266 Libvirt virtualization library http://libvirt.org

On 05/22/2012 01:30 AM, Eric Blake wrote:
On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch wires up the RPC protocol handlers for virConnectListAllDomains(). The RPC generator has no support for the way how virConnectListAllDomains() returns the results so the handler code had to be done manually.
The new api is handled by REMOTE_PROC_CONNECT_LIST_ALL_DOMAINS, with number 271 and marked with high priority. --- daemon/remote.c | 50 +++++++++++++++++++++++++++++++ src/remote/remote_driver.c | 66 ++++++++++++++++++++++++++++++++++++++++++ src/remote/remote_protocol.x | 14 ++++++++- src/remote_protocol-structs | 12 +++++++ 4 files changed, 141 insertions(+), 1 deletions(-)
+ +struct remote_connect_list_all_domains_ret { + remote_nonnull_domain domains<>;
This is an unbounded array; we aren't using any of these anywhere else. I wonder if reusing REMOTE_DOMAIN_ID_LIST_MAX would be reasonable instead. Then again, we are returning more information per domain: a name, uuid, and id, so maybe REMOTE_DOMAIN_NAME_LIST_MAX is the best we can do (currently 1024, but then again there is the pending patch to raise that limit). The name element may make us run into RPC limits even if we don't otherwise enforce a limit.
I chose the unbounded array intentionally as I don't see any point in applying two limits on the size of the RPC message. I agree that the global message size limit is good for avoiding DoS attacks. On the other hand, I don't find limiting the maximum count particularly useful (as you stated, there's the domain name that is variable length) and we still have to scale the other limits when we raise the global one. I'll add the limit to conform with the rest of the code and hopefully nobody will need more than 1024 machines soon.

Peter
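The bounded form being agreed to would look roughly like this in remote_protocol.x (a sketch; the exact constant is still open - REMOTE_DOMAIN_NAME_LIST_MAX is the one mentioned above):

  struct remote_connect_list_all_domains_ret {
      remote_nonnull_domain domains<REMOTE_DOMAIN_NAME_LIST_MAX>;
      unsigned int ret;
  };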

On 05/22/2012 08:05 AM, Peter Krempa wrote:
+ +struct remote_connect_list_all_domains_ret { + remote_nonnull_domain domains<>;
This is an unbounded array; we aren't using any of these anywhere else. I wonder if reusing REMOTE_DOMAIN_ID_LIST_MAX would be reasonable instead. Then again, we are returning more information per domain: a name, uuid, and id, so maybe REMOTE_DOMAIN_NAME_LIST_MAX is the best we can do (currently 1024, but then again there is the pending patch to raise that limit). The name element may make us run into RPC limits even if we don't otherwise enforce a limit.
I chose the unbounded array intentionaly as I don't see any point in applying two limits on the size of the RPC message. I agree that the global message size limit is good for avoiding DoS attacks.
Interesting point - maybe it's worth re-evaluating which of our existing limits should be made unbounded, under the umbrella of the overall RPC limits being good enough.
I'll add the limit to conform with the rest of the code and hopefuly nobody will need more than 1024 machines soon.
Yeah, if we decide to go with unbounded structs within the scope of a bounded maximum message, we should probably do it across the entire .x file at once. -- Eric Blake eblake@redhat.com +1-919-301-3266 Libvirt virtualization library http://libvirt.org

This patch adds a basic implementation of the listing code for virConnectListAllDomains() to qemu driver. The listing code does not support any filtering flags yet, but they may be easily added later. --- src/qemu/qemu_driver.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 97 insertions(+), 0 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index efbc421..e7b029f 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -12948,6 +12948,102 @@ cleanup: return ret; } +struct virDomainListData { + virConnectPtr conn; + virDomainPtr *domains; + int ndomains; + int size; + int limit; + bool error; + bool populate; +}; + +#define VIR_DOMAIN_LIST_POPULATION_INCREMENT 10 + +static void +qemuPopulateDomainList(void *payload, + const void *name ATTRIBUTE_UNUSED, + void *opaque) +{ + struct virDomainListData *data = opaque; + virDomainObjPtr vm = payload; + + if (data->error || + (data->limit >= 0 && + data->ndomains >= data->limit)) + return; + + if (!data->populate) { + data->ndomains++; + return; + } + + virDomainObjLock(vm); + + if (data->size == data->ndomains) { + if (VIR_REALLOC_N(data->domains, + data->size + VIR_DOMAIN_LIST_POPULATION_INCREMENT) < 0) + goto no_memory; + data->size += VIR_DOMAIN_LIST_POPULATION_INCREMENT; + } + + data->domains[data->ndomains] = virGetDomain(data->conn, + vm->def->name, + vm->def->uuid); + if (data->domains[data->ndomains] == NULL) + goto no_memory; + + data->domains[data->ndomains]->id = vm->def->id; + data->ndomains++; + +cleanup: + virDomainObjUnlock(vm); + return; + +no_memory: + virReportOOMError(); + data->error = true; + goto cleanup; +} + +static int +qemuListAllDomains(virConnectPtr conn, + virDomainPtr **domains, + int ndomains, + unsigned int flags) +{ + struct qemud_driver *driver = conn->privateData; + int ret = -1; + int i; + + struct virDomainListData data = { conn, NULL, 0, 0, ndomains, + false, !!domains}; + + virCheckFlags(0, -1); + + qemuDriverLock(driver); + + virHashForEach(driver->domains.objs, qemuPopulateDomainList, + (void *) &data); + + if (data.error) { + for (i = 0; i < data.size; i++) { + if (data.domains[i]) + virDomainFree(data.domains[i]); + } + VIR_FREE(data.domains); + data.ndomains = -1; + } + + if (domains) + *domains = data.domains; + + ret = data.ndomains; + + qemuDriverUnlock(driver); + return ret; +} + static virDriver qemuDriver = { .no = VIR_DRV_QEMU, .name = QEMU_DRIVER_NAME, @@ -13108,6 +13204,7 @@ static virDriver qemuDriver = { .domainPMSuspendForDuration = qemuDomainPMSuspendForDuration, /* 0.9.11 */ .domainPMWakeup = qemuDomainPMWakeup, /* 0.9.11 */ .domainGetCPUStats = qemuDomainGetCPUStats, /* 0.9.11 */ + .connectListAllDomains = qemuListAllDomains, /* 0.9.13 */ }; -- 1.7.3.4

On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch adds a basic implementation of the listing code for virConnectListAllDomains() to qemu driver. The listing code does not support any filtering flags yet, but they may be easily added later.
We need to also cover all the other drivers; I'm wondering how many drivers will share common implementations, where we could factor things into src/conf/domain_conf.c.
+struct virDomainListData { + virConnectPtr conn; + virDomainPtr *domains; + int ndomains; + int size; + int limit;
s/int/size_t/ for these three lines.
+ bool error; + bool populate; +}; + +#define VIR_DOMAIN_LIST_POPULATION_INCREMENT 10
You don't need this. Instead, you can use VIR_RESIZE_N(domains, size, ndomains, 1) to guarantee at least ndomains+1 slots, but only allocating on geometric boundaries.
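A sketch of the append path with VIR_RESIZE_N (this assumes the size_t change suggested above, since the macro updates data->size itself):

  /* grows geometrically; no fixed increment constant needed */
  if (VIR_RESIZE_N(data->domains, data->size, data->ndomains, 1) < 0)
      goto no_memory;

  data->domains[data->ndomains] = virGetDomain(data->conn,
                                               vm->def->name,
                                               vm->def->uuid);
  if (!data->domains[data->ndomains])
      goto no_memory;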
+ +static void +qemuPopulateDomainList(void *payload, + const void *name ATTRIBUTE_UNUSED, + void *opaque) +{ + struct virDomainListData *data = opaque; + virDomainObjPtr vm = payload; + + if (data->error || + (data->limit >= 0 && + data->ndomains >= data->limit)) + return;
I don't think you need data->limit at all. Here is where you would do any filtering based on flags.
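For illustration, an active/inactive pair of filtering flags could slot in right here (a sketch only: the flag names are placeholders from the RFC discussion, and it assumes the iterator's struct grows a flags member):

  bool active = virDomainObjIsActive(vm);

  /* skip domains that match none of the requested state groups */
  if ((data->flags & (VIR_CONNECT_LIST_DOMAINS_ACTIVE |
                      VIR_CONNECT_LIST_DOMAINS_INACTIVE)) &&
      !((data->flags & VIR_CONNECT_LIST_DOMAINS_ACTIVE) && active) &&
      !((data->flags & VIR_CONNECT_LIST_DOMAINS_INACTIVE) && !active))
      return;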
+ + if (!data->populate) { + data->ndomains++; + return; + } + + virDomainObjLock(vm); + + if (data->size == data->ndomains) { + if (VIR_REALLOC_N(data->domains, + data->size + VIR_DOMAIN_LIST_POPULATION_INCREMENT) < 0)
See above about using VIR_EXPAND_N instead of VIR_REALLOC_N.
+static int +qemuListAllDomains(virConnectPtr conn, + virDomainPtr **domains, + int ndomains, + unsigned int flags) +{ + struct qemud_driver *driver = conn->privateData; + int ret = -1; + int i; + + struct virDomainListData data = { conn, NULL, 0, 0, ndomains,
Why the double space?
+ false, !!domains}; + + virCheckFlags(0, -1); + + qemuDriverLock(driver); + + virHashForEach(driver->domains.objs, qemuPopulateDomainList, + (void *) &data);
No need to cast to (void*) in C. -- Eric Blake eblake@redhat.com +1-919-301-3266 Libvirt virtualization library http://libvirt.org

On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch adds a basic implementation of the listing code for virConnectListAllDomains() to qemu driver. The listing code does not support any filtering flags yet, but they may be easily added later. --- src/qemu/qemu_driver.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 97 insertions(+), 0 deletions(-)
+static void +qemuPopulateDomainList(void *payload, + const void *name ATTRIBUTE_UNUSED, + void *opaque) +{ + struct virDomainListData *data = opaque; + virDomainObjPtr vm = payload; + + if (data->error || + (data->limit >= 0 && + data->ndomains >= data->limit)) + return; + + if (!data->populate) { + data->ndomains++; + return; + } + + virDomainObjLock(vm);
I just realized: Since we are executing this under the driver lock, and no VM can change state until we let go of the driver lock, it is not necessary to lock vms in this loop. That will help things go faster in computing the list.
+static int +qemuListAllDomains(virConnectPtr conn, + virDomainPtr **domains, + int ndomains, + unsigned int flags) +{ + struct qemud_driver *driver = conn->privateData; + int ret = -1; + int i; + + struct virDomainListData data = { conn, NULL, 0, 0, ndomains, + false, !!domains}; + + virCheckFlags(0, -1); + + qemuDriverLock(driver); + + virHashForEach(driver->domains.objs, qemuPopulateDomainList, + (void *) &data);
-- Eric Blake eblake@redhat.com +1-919-301-3266 Libvirt virtualization library http://libvirt.org

On Thu, May 24, 2012 at 05:46:51AM -0600, Eric Blake wrote:
On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch adds a basic implementation of the listing code for virConnectListAllDomains() to qemu driver. The listing code does not support any filtering flags yet, but they may be easily added later. --- src/qemu/qemu_driver.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 97 insertions(+), 0 deletions(-)
+static void +qemuPopulateDomainList(void *payload, + const void *name ATTRIBUTE_UNUSED, + void *opaque) +{ + struct virDomainListData *data = opaque; + virDomainObjPtr vm = payload; + + if (data->error || + (data->limit >= 0 && + data->ndomains >= data->limit)) + return; + + if (!data->populate) { + data->ndomains++; + return; + } + + virDomainObjLock(vm);
I just realized: Since we are executing this under the driver lock, and no VM can change state until we let go of the driver lock, it is not necessary to lock vms in this loop. That will help things go faster in computing the list.
Hmm, this feels slightly dangerous to me. Saying that is in effect saying you can do reads on a virDomainObjPtr without locking. This would be OK if it were impossible for the fields we're reading to be changed without the driver lock held.

  1. Thread A lock(driver)
  2. Thread A lock(vm1)
  3. Thread A unlock(driver)
  4. Thread B lock(driver)
  5. Thread B ...starts getting the list of domains...

Now consider Thread A changes the 'id' field in a virDomainPtr, e.g. due to the guest shutting down. Won't thread B be doing unsafe reads of 'id' now?

The only way I can see that this is safe is if we are sure that every change of the 'name', 'uuid' and 'id' fields is protected by the driver lock.

Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On 05/24/2012 06:14 AM, Daniel P. Berrange wrote:
I just realized: Since we are executing this under the driver lock, and no VM can change state until we let go of the driver lock, it is not necessary to lock vms in this loop. That will help things go faster in computing the list.
Hmm, this feels slightly dangerous to me. Saying that is in effect saying you can do reads on a virDomainObjPtr without locking. This would be ok if it were impossible for the fields we're reading to be changed without driver lock held.
1. Thread A lock(driver) 2. Thread A lock(vm1) 3. Thread A unlock(driver) 4. Thread B lock(driver) 5. Thread B ...starts getting the list of domains...
Now consider Thread A changes the 'id' feld in a virDomainPtr eg due to the guest shutting down.
Won't thread B be doing unsafe reads of 'id' now ?
The only way I can see that this is safe, is if we are sure that every change of the 'name', 'uuid' and 'id' fields is protected by the driver lock.
Okay, you may have a point about 'id'. But I still think 'name' and 'uuid' are safe - we hash virDomainObjPtr via the 'uuid', and changing the 'uuid' would invalidate the hash, and so it must only be done while holding driver lock. Likewise, we don't yet support the ability to change a 'name', other than deleting the old name and defining a new domain with the new name. But it's easier to be safe (albeit potentially slower) than it is to audit everyone else and ensure that they follow the rules. You've convinced me - let's keep the lock for now; the only way we could drop the lock here is if we refactor 'id', 'name', and 'uuid' to be accessible only through accessor functions that guarantee appropriate locking. -- Eric Blake eblake@redhat.com +1-919-301-3266 Libvirt virtualization library http://libvirt.org
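A sketch of what such an accessor might look like (purely hypothetical, not part of this series; the name is a placeholder):

  /* read vm->def->id under the object lock so a writer updating the id
   * in another thread can never be observed mid-update */
  static int
  qemuDomainObjGetID(virDomainObjPtr vm)
  {
      int id;

      virDomainObjLock(vm);
      id = vm->def->id;
      virDomainObjUnlock(vm);
      return id;
  }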

On 24.05.2012 14:14, Daniel P. Berrange wrote:
On Thu, May 24, 2012 at 05:46:51AM -0600, Eric Blake wrote:
On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch adds a basic implementation of the listing code for virConnectListAllDomains() to qemu driver. The listing code does not support any filtering flags yet, but they may be easily added later. --- src/qemu/qemu_driver.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 97 insertions(+), 0 deletions(-)
+static void +qemuPopulateDomainList(void *payload, + const void *name ATTRIBUTE_UNUSED, + void *opaque) +{ + struct virDomainListData *data = opaque; + virDomainObjPtr vm = payload; + + if (data->error || + (data->limit >= 0 && + data->ndomains >= data->limit)) + return; + + if (!data->populate) { + data->ndomains++; + return; + } + + virDomainObjLock(vm);
I just realized: Since we are executing this under the driver lock, and no VM can change state until we let go of the driver lock, it is not necessary to lock vms in this loop. That will help things go faster in computing the list.
Hmm, this feels slightly dangerous to me. Saying that is in effect saying you can do reads on a virDomainObjPtr without locking. This would be ok if it were impossible for the fields we're reading to be changed without driver lock held.
1. Thread A lock(driver) 2. Thread A lock(vm1) 3. Thread A unlock(driver) 4. Thread B lock(driver) 5. Thread B ...starts getting the list of domains...
Now consider Thread A changes the 'id' feld in a virDomainPtr eg due to the guest shutting down.
Won't thread B be doing unsafe reads of 'id' now ?
If a domain dies during enumeration, the 'id' won't be valid at the end anyway. Moreover, long-running APIs (like shutdown) set a job and unlock the VM. Besides, the real id change is done in qemuProcessStop() rather than qemuDomainShutdownFlags(). Since a domain can die without any mgmt application interaction, I don't think any app should rely on IDs staying stable. So I'd say this doesn't really matter.
The only way I can see that this is safe, is if we are sure that every change of the 'name', 'uuid' and 'id' fields is protected by the driver lock.
Daniel

On Thu, May 24, 2012 at 03:22:28PM +0200, Michal Privoznik wrote:
On 24.05.2012 14:14, Daniel P. Berrange wrote:
On Thu, May 24, 2012 at 05:46:51AM -0600, Eric Blake wrote:
On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch adds a basic implementation of the listing code for virConnectListAllDomains() to qemu driver. The listing code does not support any filtering flags yet, but they may be easily added later. --- src/qemu/qemu_driver.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 97 insertions(+), 0 deletions(-)
+static void +qemuPopulateDomainList(void *payload, + const void *name ATTRIBUTE_UNUSED, + void *opaque) +{ + struct virDomainListData *data = opaque; + virDomainObjPtr vm = payload; + + if (data->error || + (data->limit >= 0 && + data->ndomains >= data->limit)) + return; + + if (!data->populate) { + data->ndomains++; + return; + } + + virDomainObjLock(vm);
I just realized: Since we are executing this under the driver lock, and no VM can change state until we let go of the driver lock, it is not necessary to lock vms in this loop. That will help things go faster in computing the list.
Hmm, this feels slightly dangerous to me. Saying that is in effect saying you can do reads on a virDomainObjPtr without locking. This would be ok if it were impossible for the fields we're reading to be changed without driver lock held.
1. Thread A lock(driver) 2. Thread A lock(vm1) 3. Thread A unlock(driver) 4. Thread B lock(driver) 5. Thread B ...starts getting the list of domains...
Now consider Thread A changes the 'id' feld in a virDomainPtr eg due to the guest shutting down.
Won't thread B be doing unsafe reads of 'id' now ?
If a domain dies during enumeration the 'id' won't be valid at the end anyway.
There's a significant difference between an ID value not being valid anymore, which will be caught if it is used (assuming the ID values have not wrapped), vs an ID value that is totally scrambled due to non-atomic integer read/writes which could result in giving the ID value of a different guest.
Moreover, long running APIs (like shutdown) set job and unlock VM. Besides, the real id change is done in qemuProcessStop() rather than qemuDomainShutdownFlags(). Since domain can die without any mgmt application interaction, I don't think any app should rely on them. So I'd say this doesn't really matter.
I think that's a separate question really. We should be thread-safe in our usage regardless. The benefits of avoiding the locking here are not so great as to make it worthwhile.

Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On 05/24/2012 03:37 PM, Daniel P. Berrange wrote:
On Thu, May 24, 2012 at 03:22:28PM +0200, Michal Privoznik wrote:
On 24.05.2012 14:14, Daniel P. Berrange wrote:
On Thu, May 24, 2012 at 05:46:51AM -0600, Eric Blake wrote:
On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch adds a basic implementation of the listing code for virConnectListAllDomains() to qemu driver. The listing code does not support any filtering flags yet, but they may be easily added later. --- src/qemu/qemu_driver.c | 97 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 97 insertions(+), 0 deletions(-)
+static void +qemuPopulateDomainList(void *payload, + const void *name ATTRIBUTE_UNUSED, + void *opaque) +{ + struct virDomainListData *data = opaque; + virDomainObjPtr vm = payload; + + if (data->error || + (data->limit>= 0&& + data->ndomains>= data->limit)) + return; + + if (!data->populate) { + data->ndomains++; + return; + } + + virDomainObjLock(vm);
I just realized: Since we are executing this under the driver lock, and no VM can change state until we let go of the driver lock, it is not necessary to lock vms in this loop. That will help things go faster in computing the list.
Hmm, this feels slightly dangerous to me. Saying that is in effect saying you can do reads on a virDomainObjPtr without locking. This would be ok if it were impossible for the fields we're reading to be changed without driver lock held.
1. Thread A lock(driver) 2. Thread A lock(vm1) 3. Thread A unlock(driver) 4. Thread B lock(driver) 5. Thread B ...starts getting the list of domains...
Now consider Thread A changes the 'id' feld in a virDomainPtr eg due to the guest shutting down.
Won't thread B be doing unsafe reads of 'id' now ?
If a domain dies during enumeration the 'id' won't be valid at the end anyway.
There's a significant difference between an ID value not being valid anymore, which will be caught if it is used (assuming the ID values have not wrapped), vs an ID value that is totally scrambled due to non-atomic integer read/writes which could result in giving the ID value of a different guest.
Moreover, long running APIs (like shutdown) set job and unlock VM. Besides, the real id change is done in qemuProcessStop() rather than qemuDomainShutdownFlags(). Since domain can die without any mgmt application interaction, I don't think any app should rely on them. So I'd say this doesn't really matter.
I think that's a separate question really. We should be threadsafe in our usage regardless. The benefits of avoiding the locking here are not so great as to make it worthwhile
With the (now almost finished) server-side filtering implementation I'll also be touching other parts of the virDomainObj, so I'll have to keep the locks anyway.

Peter

This patch makes use of the newly added api virConnectListAllDomains() to list domains in virsh. To enable fallback virsh now represents lists of domains using an internal structure vshDomainList. This structure contains the virDomainPtr list as provided by virConnectListAllDomains() and the count of domains in the list. For backwards compatiblity I've added function vshDomList that tries to enumerate the domains using the new API and if the API is not supported falls back to the older approach with the two list functions. This patch also adds helpers to filter domain lists provided by vshDomList(). As a last improvement I've cleaned up code that handles the "list" command using the newly added helper and filter functions. --- tools/virsh.c | 423 +++++++++++++++++++++++++++++++++++++-------------------- 1 files changed, 273 insertions(+), 150 deletions(-) diff --git a/tools/virsh.c b/tools/virsh.c index 46239fa..7f3f9ac 100644 --- a/tools/virsh.c +++ b/tools/virsh.c @@ -63,6 +63,7 @@ #include "util/bitmap.h" #include "conf/domain_conf.h" #include "virtypedparam.h" +#include "intprops.h" static char *progname; @@ -489,22 +490,225 @@ _vshStrdup(vshControl *ctl, const char *s, const char *filename, int line) #define realloc use_vshRealloc_instead_of_realloc #define strdup use_vshStrdup_instead_of_strdup -static int idsorter(const void *a, const void *b) { - const int *ia = (const int *)a; - const int *ib = (const int *)b; +static int +namesorter(const void *a, const void *b) +{ + const char **sa = (const char**)a; + const char **sb = (const char**)b; - if (*ia > *ib) - return 1; - else if (*ia < *ib) - return -1; - return 0; + /* User visible sort, so we want locale-specific case comparison. */ + return strcasecmp(*sa, *sb); +} + +static int +domsorter(const void *a, const void *b) +{ + virDomainPtr *da = (virDomainPtr *) a; + virDomainPtr *db = (virDomainPtr *) b; + unsigned int ida = virDomainGetID(*da); + unsigned int idb = virDomainGetID(*db); + unsigned int err = (unsigned int) -1; + + if (ida == err && idb == err) { + if (virDomainGetName(*da) || + virDomainGetName(*db)) + return strcasecmp(virDomainGetName(*da), + virDomainGetName(*db)); + else + return 0; + } + + if (ida != err && idb != err) { + if (ida > idb) + return 1; + else if (ida < idb) + return -1; + + return 0; + } + + if (ida != err) + return -1; + else + return 1; +} + +struct vshDomainList { + virDomainPtr *domains; + int ndomains; +}; +typedef struct vshDomainList *vshDomainListPtr; + +static int +vshDomList(vshControl *ctl, vshDomainListPtr domlist) +{ + int ret; + int rv = -1; + virErrorPtr err = NULL; + int i; + int *ids = NULL; + int nids = 0; + char **names = NULL; + int nnames = 0; + virDomainPtr dom; + virDomainPtr *doms = NULL; + int ndoms = 0; + + domlist->domains = NULL; + domlist->ndomains = 0; + + if ((ret = virConnectListAllDomains(ctl->conn, + &(domlist->domains), + -1, 0)) >= 0) { + domlist->ndomains = ret; + qsort(domlist->domains, domlist->ndomains, + sizeof(virDomainPtr), domsorter); + return ret; + } + + if ((err = virGetLastError()) && + err->code != VIR_ERR_NO_SUPPORT) { + vshError(ctl, "%s", _("Failed to list domains")); + goto cleanup; + } + + /* fall back to old method */ + virResetLastError(); + + if ((nids = virConnectNumOfDomains(ctl->conn)) < 0) { + vshError(ctl, "%s", _("Failed to list active domains")); + goto cleanup; + } + + if (nids) { + ids = vshMalloc(ctl, sizeof(int) * nids); + + if ((nids = virConnectListDomains(ctl->conn, ids, nids)) < 0) { + vshError(ctl, "%s", _("Failed to list 
active domains")); + goto cleanup; + } + } + + if ((nnames = virConnectNumOfDefinedDomains(ctl->conn)) < 0) { + vshError(ctl, "%s", _("Failed to list inactive domains")); + goto cleanup; + } + + if (nnames) { + names = vshMalloc(ctl, sizeof(char *) * nnames); + + if ((nnames = virConnectListDefinedDomains(ctl->conn, + names, + nnames)) < 0) { + vshError(ctl, "%s", _("Failed to list inactive domains")); + goto cleanup; + } + } + + doms = vshMalloc(ctl, sizeof(virDomainPtr) * (nids + nnames)); + + /* get active domains */ + for (i = 0; i < nids; i++) { + if (!(dom = virDomainLookupByID(ctl->conn, ids[i]))) + continue; + + doms[ndoms++] = dom; + } + + /* get inctive domains */ + for (i = 0; i < nnames; i++) { + if (!(dom = virDomainLookupByName(ctl->conn, names[i]))) + continue; + + doms[ndoms++] = dom; + } + + qsort(doms, ndoms, sizeof(virDomainPtr), domsorter); + + domlist->domains = doms; + doms = NULL; + domlist->ndomains = ndoms; + rv = ndoms; + +cleanup: + for (i = 0; i < nnames; i++) + VIR_FREE(names[i]); + if (doms) { + for (i = 0; i < ndoms; i++) { + if (doms[i]) + virDomainFree(doms[i]); + } + } + VIR_FREE(doms); + VIR_FREE(names); + VIR_FREE(ids); + return rv; +} + +static void +vshDomListFree(vshDomainListPtr domlist) +{ + int i; + + if (!domlist || !domlist->domains) + return; + + for (i = 0; i < domlist->ndomains; i++) { + if (domlist->domains[i]) + virDomainFree(domlist->domains[i]); + } + + domlist->ndomains = 0; + VIR_FREE(domlist->domains); +} + +/* return 0 if false, 1 if true -1 if error */ +typedef int (*vshDomainListFilter)(vshControl *ctl, virDomainPtr dom); + +/* collection of filters */ +static int +vshDomainListActiveFilter(vshControl *ctl ATTRIBUTE_UNUSED, + virDomainPtr dom) +{ + return virDomainGetID(dom) != (unsigned int) -1; } -static int namesorter(const void *a, const void *b) { - const char **sa = (const char**)a; - const char **sb = (const char**)b; - /* User visible sort, so we want locale-specific case comparison. 
*/ - return strcasecmp(*sa, *sb); +static int +vshDomainListPersistentFilter(vshControl *ctl, + virDomainPtr dom) +{ + int persistent = virDomainIsPersistent(dom); + if (persistent < 0) { + vshError(ctl, "%s", + _("Failed to determine domain's persistent state")); + return -1; + } + return persistent; +} + +static int +vshDomListFreeFilter(vshControl *ctl, + vshDomainListPtr domlist, + vshDomainListFilter filter, + bool expect_result) +{ + int i; + int ret; + + for (i = 0; i < domlist->ndomains; i++) { + if (!domlist->domains[i]) + continue; + + ret = filter(ctl, domlist->domains[i]); + if (ret < 0) + return -1; + + if (!ret == !expect_result) { + virDomainFree(domlist->domains[i]); + domlist->domains[i] = NULL; + } + } + return 0; } static double @@ -936,9 +1140,6 @@ cmdList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED) bool inactive = vshCommandOptBool(cmd, "inactive"); bool all = vshCommandOptBool(cmd, "all"); bool active = !inactive || all; - int *ids = NULL, maxid = 0, i; - char **names = NULL; - int maxname = 0; bool managed = vshCommandOptBool(cmd, "managed-save"); bool optTitle = vshCommandOptBool(cmd, "title"); bool optTable = vshCommandOptBool(cmd, "table"); @@ -947,12 +1148,13 @@ cmdList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED) bool optPersistent = vshCommandOptBool(cmd, "persistent"); bool optTransient = vshCommandOptBool(cmd, "transient"); bool persistUsed = true; - virDomainPtr dom = NULL; + int i; char *title; char uuid[VIR_UUID_STRING_BUFLEN]; int state; - int persistent; bool ret = false; + struct vshDomainList list; + char id[INT_BUFSIZE_BOUND(unsigned int)]; inactive |= all; @@ -976,167 +1178,88 @@ cmdList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED) if (!vshConnectionUsability(ctl, ctl->conn)) return false; - if (active) { - maxid = virConnectNumOfDomains(ctl->conn); - if (maxid < 0) { - vshError(ctl, "%s", _("Failed to list active domains")); - return false; - } - if (maxid) { - ids = vshMalloc(ctl, sizeof(int) * maxid); + if (vshDomList(ctl, &list) < 0) + goto cleanup; - if ((maxid = virConnectListDomains(ctl->conn, ids, maxid)) < 0) { - vshError(ctl, "%s", _("Failed to list active domains")); - goto cleanup; - } + if (!active) + vshDomListFreeFilter(ctl, &list, vshDomainListActiveFilter, true); - qsort(ids, maxid, sizeof(int), idsorter); - } - } + if (!inactive) + vshDomListFreeFilter(ctl, &list, vshDomainListActiveFilter, false); - if (inactive) { - maxname = virConnectNumOfDefinedDomains(ctl->conn); - if (maxname < 0) { - vshError(ctl, "%s", _("Failed to list inactive domains")); + if (persistUsed) { + if (!optPersistent && + vshDomListFreeFilter(ctl, &list, + vshDomainListPersistentFilter, true) < 0) goto cleanup; - } - if (maxname) { - names = vshMalloc(ctl, sizeof(char *) * maxname); - if ((maxname = virConnectListDefinedDomains(ctl->conn, - names, - maxname)) < 0) { - vshError(ctl, "%s", _("Failed to list inactive domains")); - goto cleanup; - } - - qsort(&names[0], maxname, sizeof(char*), namesorter); - } + if (!optTransient && + vshDomListFreeFilter(ctl, &list, + vshDomainListPersistentFilter, false) < 0) + goto cleanup; } /* print table header in legacy mode */ if (optTable) { - if (optTitle) { + if (optTitle) vshPrintExtra(ctl, " %-5s %-30s %-10s %-20s\n%s\n", _("Id"), _("Name"), _("State"), _("Title"), "-----------------------------------------" "-----------------------------------------"); - } else { + else vshPrintExtra(ctl, " %-5s %-30s %s\n%s\n", _("Id"), _("Name"), _("State"), "-----------------------------------------" 
"-----------"); - } } - for (i = 0; i < maxid; i++) { - dom = virDomainLookupByID(ctl->conn, ids[i]); - - /* this kind of work with domains is not atomic operation */ - if (!dom) + for (i = 0; i < list.ndomains; i++) { + if (!list.domains[i]) continue; - if (persistUsed) { - persistent = virDomainIsPersistent(dom); - if (persistent < 0) { - vshError(ctl, "%s", - _("Failed to determine domain's persistent state")); - goto cleanup; - } + if (virDomainGetID(list.domains[i]) != (unsigned int) -1) + snprintf(id, sizeof(id), "%d", virDomainGetID(list.domains[i])); + else + strncpy(id, "-", sizeof(id)); - if (!(persistent ? optPersistent : optTransient)) { - virDomainFree(dom); - dom = NULL; - continue; - } - } - - if (optTable) { - if (optTitle) { - if (!(title = vshGetDomainDescription(ctl, dom, true, 0))) - goto cleanup; - - vshPrint(ctl, " %-5d %-30s %-10s %-20s\n", - virDomainGetID(dom), - virDomainGetName(dom), - _(vshDomainStateToString(vshDomainState(ctl, dom, NULL))), - title); - VIR_FREE(title); - } else { - vshPrint(ctl, " %-5d %-30s %s\n", - virDomainGetID(dom), - virDomainGetName(dom), - _(vshDomainStateToString(vshDomainState(ctl, dom, NULL)))); - } - } else if (optUUID) { - if (virDomainGetUUIDString(dom, uuid) < 0) { - vshError(ctl, "%s", - _("Failed to get domain's UUID")); - goto cleanup; - } - vshPrint(ctl, "%s\n", uuid); - } else if (optName) { - vshPrint(ctl, "%s\n", virDomainGetName(dom)); - } - - virDomainFree(dom); - dom = NULL; - } - - if (optPersistent) { - for (i = 0; i < maxname; i++) { - dom = virDomainLookupByName(ctl->conn, names[i]); - - /* this kind of work with domains is not atomic operation */ - if (!dom) - continue; + state = vshDomainState(ctl, list.domains[i], NULL); + if (managed && state == VIR_DOMAIN_SHUTOFF && + virDomainHasManagedSaveImage(list.domains[i], 0) > 0) + state = -2; - if (optTable) { - state = vshDomainState(ctl, dom, NULL); - if (managed && state == VIR_DOMAIN_SHUTOFF && - virDomainHasManagedSaveImage(dom, 0) > 0) - state = -2; - - if (optTitle) { - if (!(title = vshGetDomainDescription(ctl, dom, true, 0))) - goto cleanup; - - vshPrint(ctl, " %-5s %-30s %-10s %-20s\n", - "-", - names[i], - state == -2 ? _("saved") : _(vshDomainStateToString(state)), - title); - VIR_FREE(title); - } else { - vshPrint(ctl, " %-5s %-30s %s\n", - "-", - names[i], - state == -2 ? _("saved") : _(vshDomainStateToString(state))); - } - } else if (optUUID) { - if (virDomainGetUUIDString(dom, uuid) < 0) { - vshError(ctl, "%s", - _("Failed to get domain's UUID")); - goto cleanup; - } - vshPrint(ctl, "%s\n", uuid); - } else if (optName) { - vshPrint(ctl, "%s\n", names[i]); - } + if (optTable) { + if (optTitle) { + if (!(title = vshGetDomainDescription(ctl, list.domains[i], true, 0))) + goto cleanup; + + vshPrint(ctl, " %-5s %-30s %-10s %-20s\n", + id, + virDomainGetName(list.domains[i]), + state == -2 ? _("saved") : _(vshDomainStateToString(state)), + title); - virDomainFree(dom); - dom = NULL; + VIR_FREE(title); + } else { + vshPrint(ctl, " %-5s %-30s %s\n", + id, + virDomainGetName(list.domains[i]), + state == -2 ? 
_("saved") : _(vshDomainStateToString(state))); + } + } else if (optUUID) { + if (virDomainGetUUIDString(list.domains[i], uuid) < 0) { + vshError(ctl, "%s", + _("Failed to get domain's UUID")); + goto cleanup; + } + vshPrint(ctl, "%s\n", uuid); + } else if (optName) { + vshPrint(ctl, "%s\n", virDomainGetName(list.domains[i])); } } ret = true; cleanup: - if (dom) - virDomainFree(dom); - VIR_FREE(ids); - for (i = 0; i < maxname; i++) - VIR_FREE(names[i]); - VIR_FREE(names); + vshDomListFree(&list); return ret; } -- 1.7.3.4

On 05/20/2012 09:56 AM, Peter Krempa wrote:
This patch makes use of the newly added api virConnectListAllDomains() to list domains in virsh.
To enable fallback virsh now represents lists of domains using an
s/fallback/fallback,/
internal structure vshDomainList. This structure contains the virDomainPtr list as provided by virConnectListAllDomains() and the count of domains in the list.
For backwards compatiblity I've added function vshDomList that tries to
s/compatiblity/compatibility/ s/added/added the/
enumerate the domains using the new API and if the API is not supported
s/new API/new API,/
falls back to the older approach with the two list functions.
This patch also adds helpers to filter domain lists provided by vshDomList().
Is it vshDomainList or vshDomList? Can you reuse the filtering bits of the public API when falling back to the older APIs?
As a last improvement I've cleaned up code that handles the "list" command using the newly added helper and filter functions. --- tools/virsh.c | 423 +++++++++++++++++++++++++++++++++++++-------------------- 1 files changed, 273 insertions(+), 150 deletions(-)
+static int +namesorter(const void *a, const void *b) +{ + const char **sa = (const char**)a; + const char **sb = (const char**)b;
- if (*ia > *ib) - return 1; - else if (*ia < *ib) - return -1; - return 0; + /* User visible sort, so we want locale-specific case comparison. */ + return strcasecmp(*sa, *sb);
Hmm. Pre-existing (since you're only floating namesorter higher up in the file), but strcasecmp has undefined behavior in some locales where not all byte sequences are valid characters. It's quite painful to work around strcasecmp failure inside a qsort (look at the GPL module xmemcoll in gnulib to see how horrible it is). But since it's pre-existing and no one has complained yet, I'm fine leaving it as is.
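If it ever does need fixing, one low-effort direction (a sketch only, not part of the patch or something it needs) is to drop the locale-aware folding altogether and compare with explicit ASCII case folding, which is well defined for any byte sequence, at the cost of the locale-specific ordering the existing comment asks for:

/* Sketch: ASCII-only, locale-independent case-insensitive comparator.
 * It cannot trip over strcasecmp's undefined behavior on invalid
 * multibyte input, but it no longer sorts per the user's locale. */
static int
namesorterAscii(const void *a, const void *b)
{
    const char *sa = *(const char **) a;
    const char *sb = *(const char **) b;

    while (*sa && *sb) {
        int ca = (unsigned char) *sa++;
        int cb = (unsigned char) *sb++;

        if (ca >= 'A' && ca <= 'Z')
            ca += 'a' - 'A';
        if (cb >= 'A' && cb <= 'Z')
            cb += 'a' - 'A';

        if (ca != cb)
            return ca - cb;
    }
    return (unsigned char) *sa - (unsigned char) *sb;
}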
+} + +static int +domsorter(const void *a, const void *b) +{ + virDomainPtr *da = (virDomainPtr *) a; + virDomainPtr *db = (virDomainPtr *) b; + unsigned int ida = virDomainGetID(*da); + unsigned int idb = virDomainGetID(*db); + unsigned int err = (unsigned int) -1;
'err' is a misleading name. Really, it is more like 'inactive'. (Oh wow - we return -1 for both inactive domains with no error set, and for garbage in with a VIR_ERR_INVALID_DOMAIN error - but the latter case shouldn't be happening because virsh shouldn't be feeding in garbage).
+ + if (ida == err && idb == err) { + if (virDomainGetName(*da) || + virDomainGetName(*db))
All domains have names; virDomainGetName can't fail unless you feed it garbage, so this if will always occur,
+ return strcasecmp(virDomainGetName(*da), + virDomainGetName(*db));
and this is a wasted double-function call,
+ else + return 0;
and you will never get here if virsh is bug-free.
+ } + + if (ida != err && idb != err) { + if (ida > idb) + return 1; + else if (ida < idb) + return -1; + + return 0; + } + + if (ida != err) + return -1; + else + return 1;
Looks like a sane sort; you manage to preserve the old 'virsh list' ordering of active before inactive.
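Folding the earlier comments in, a trimmed-down comparator could look roughly like this (a sketch on top of the patch, with 'err' renamed to 'inactive' and the redundant NULL checks on the names dropped, keeping the active-before-inactive ordering):

static int
domsorter(const void *a, const void *b)
{
    virDomainPtr da = *(virDomainPtr *) a;
    virDomainPtr db = *(virDomainPtr *) b;
    unsigned int ida = virDomainGetID(da);
    unsigned int idb = virDomainGetID(db);
    unsigned int inactive = (unsigned int) -1;

    /* Two inactive domains: order by name; every valid domain has one. */
    if (ida == inactive && idb == inactive)
        return strcasecmp(virDomainGetName(da), virDomainGetName(db));

    /* Active domains sort before inactive ones, as old 'virsh list' did. */
    if (ida == inactive)
        return 1;
    if (idb == inactive)
        return -1;

    /* Two active domains: order by ID. */
    if (ida > idb)
        return 1;
    if (ida < idb)
        return -1;
    return 0;
}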
+static int +vshDomList(vshControl *ctl, vshDomainListPtr domlist)
Passing in a flags argument here will be important for using filtering of the public API when available, and faking the filtering otherwise.
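One possible shape for the fallback side of that, reusing the filter helpers this patch already adds (a sketch; the VIR_CONNECT_LIST_DOMAINS_* names are only the flags proposed in this review, and the sketch assumes that leaving both flags of a pair unset means "don't filter on that property"):

/* Sketch: emulate the server-side filtering client-side after the
 * old-API fallback has collected the full list. */
static int
vshDomListApplyFlags(vshControl *ctl, vshDomainListPtr domlist,
                     unsigned int flags)
{
    bool active = flags & VIR_CONNECT_LIST_DOMAINS_ACTIVE;
    bool inactive = flags & VIR_CONNECT_LIST_DOMAINS_INACTIVE;
    bool persistent = flags & VIR_CONNECT_LIST_DOMAINS_PERSISTENT;
    bool transient = flags & VIR_CONNECT_LIST_DOMAINS_TRANSIENT;

    /* Only active requested: drop the inactive domains, and so on. */
    if (active && !inactive &&
        vshDomListFreeFilter(ctl, domlist,
                             vshDomainListActiveFilter, false) < 0)
        return -1;

    if (inactive && !active &&
        vshDomListFreeFilter(ctl, domlist,
                             vshDomainListActiveFilter, true) < 0)
        return -1;

    if (persistent && !transient &&
        vshDomListFreeFilter(ctl, domlist,
                             vshDomainListPersistentFilter, false) < 0)
        return -1;

    if (transient && !persistent &&
        vshDomListFreeFilter(ctl, domlist,
                             vshDomainListPersistentFilter, true) < 0)
        return -1;

    return 0;
}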
+ + if ((ret = virConnectListAllDomains(ctl->conn, + &(domlist->domains), + -1, 0)) >= 0) { + domlist->ndomains = ret; + qsort(domlist->domains, domlist->ndomains, + sizeof(virDomainPtr), domsorter); + return ret; + } + + if ((err = virGetLastError()) && + err->code != VIR_ERR_NO_SUPPORT) { + vshError(ctl, "%s", _("Failed to list domains")); + goto cleanup; + } + + /* fall back to old method */ + virResetLastError();
Looks good through here.
+ + if ((nids = virConnectNumOfDomains(ctl->conn)) < 0) { + vshError(ctl, "%s", _("Failed to list active domains")); + goto cleanup; + } + + if (nids) { + ids = vshMalloc(ctl, sizeof(int) * nids);
Per my hints in my RFC patch, we should really be calling vshMalloc(ctl, sizeof(int)*(nids + 1)), then checking that the resulting value is smaller than our allocated size, if we want to avoid (some) aspects of the inherent races.
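Spelled out, that hint might look roughly like this inside the fallback path (sketch only; warning on a detected race is just one option, retrying is another, and 'nids_alloc' is a local introduced for the sketch):

/* Allocate one extra slot beyond the count virConnectNumOfDomains()
 * reported; if virConnectListDomains() fills every slot, new domains
 * appeared in between and the list may be truncated. */
int nids_alloc = nids + 1;

ids = vshMalloc(ctl, sizeof(int) * nids_alloc);

if ((nids = virConnectListDomains(ctl->conn, ids, nids_alloc)) < 0) {
    vshError(ctl, "%s", _("Failed to list active domains"));
    goto cleanup;
}

if (nids == nids_alloc)
    vshError(ctl, "%s",
             _("more active domains appeared while listing; "
               "output may be incomplete"));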
+ + if ((nids = virConnectListDomains(ctl->conn, ids, nids)) < 0) { + vshError(ctl, "%s", _("Failed to list active domains")); + goto cleanup; + } + } + + if ((nnames = virConnectNumOfDefinedDomains(ctl->conn)) < 0) { + vshError(ctl, "%s", _("Failed to list inactive domains")); + goto cleanup; + }
When you add in flags for filtering, you can skip parts of this back-compat code based on particular flags.
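For instance, a small helper along these lines (sketch, same hypothetical flag names as above) could decide whether the active-domain walk is needed at all, with a mirror-image check guarding the defined-domain walk:

/* Sketch: skip enumerating active domains when the caller asked only
 * for inactive ones.  Neither flag set means "no filtering", so both
 * groups still have to be collected in that case. */
static bool
vshDomListNeedActive(unsigned int flags)
{
    if (!(flags & (VIR_CONNECT_LIST_DOMAINS_ACTIVE |
                   VIR_CONNECT_LIST_DOMAINS_INACTIVE)))
        return true;

    return !!(flags & VIR_CONNECT_LIST_DOMAINS_ACTIVE);
}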
+ + if (nnames) { + names = vshMalloc(ctl, sizeof(char *) * nnames);
Again, using (nnames+1) for the allocation size will at least warn you if you hit one of the data races.
+ +/* collection of filters */ +static int +vshDomainListActiveFilter(vshControl *ctl ATTRIBUTE_UNUSED, + virDomainPtr dom) +{ + return virDomainGetID(dom) != (unsigned int) -1; }
I'm not sure if you need these callback functions here, or if you can fold them into collecting the list in the first place.
@@ -976,167 +1178,88 @@ cmdList(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED) if (!vshConnectionUsability(ctl, ctl->conn)) return false;
- if (active) { - maxid = virConnectNumOfDomains(ctl->conn); - if (maxid < 0) { - vshError(ctl, "%s", _("Failed to list active domains")); - return false; - } - if (maxid) { - ids = vshMalloc(ctl, sizeof(int) * maxid); + if (vshDomList(ctl, &list) < 0)
I think it would be worth adding the flags into the public API for v2 of this series, then here you would map 'active' to VIR_CONNECT_LIST_DOMAINS_ACTIVE, and so forth. Overall, though, I'm happy with the way this is moving - it is indeed much easier to operate on a list of virDomainPtr, especially if that list has been pre-filtered. -- Eric Blake eblake@redhat.com +1-919-301-3266 Libvirt virtualization library http://libvirt.org
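As a concrete illustration of that mapping for a v2 cmdList() (sketch; the flag names and the extra vshDomList() parameter are the review's proposals, not part of this patch):

/* Map the virsh options already parsed in cmdList() onto the proposed
 * listing flags and let the driver, or the fallback emulation, filter. */
unsigned int flags = 0;

if (active)
    flags |= VIR_CONNECT_LIST_DOMAINS_ACTIVE;
if (inactive)
    flags |= VIR_CONNECT_LIST_DOMAINS_INACTIVE;
if (optPersistent)
    flags |= VIR_CONNECT_LIST_DOMAINS_PERSISTENT;
if (optTransient)
    flags |= VIR_CONNECT_LIST_DOMAINS_TRANSIENT;

if (vshDomList(ctl, &list, flags) < 0)
    goto cleanup;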