[libvirt] [PATCH v2 0/8] Don't hold both monitor and agent jobs at the same time

We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job and an agent job while we're querying the agent, any other threads will be blocked from using the monitor while the agent is unresponsive. Because libvirt waits forever for an agent response, this makes us vulnerable to a denial of service from a malicious (or simply buggy) guest agent.

Most of the patches in the first series were already reviewed and pushed, but a couple remain: the filesystem info functions. The problem with these functions is that the agent functions access the vm definition (owned by the domain). If a monitor job is not held while this is done, the vm definition could change while we are looking up the disk alias, leading to a potential crash.

This series tries to fix this by moving the disk alias searching up a level from qemu_agent.c to qemu_driver.c. The code in qemu_agent.c will only return the raw data from the agent command response. After the agent response is returned and the agent job is ended, we can then look up the disk alias from the vm definition while the domain object is locked.

In addition, a few nearby cleanups are included in this series, notably changing to the glib allocation API in a couple of places.

Jonathon Jongsma (8):
  qemu: rename qemuAgentGetFSInfoInternalDisk()
  qemu: store complete agent filesystem information
  qemu: Don't store disk alias in qemuAgentDiskInfo
  qemu: don't access vmdef within qemu_agent.c
  qemu: remove qemuDomainObjBegin/EndJobWithAgent()
  qemu: use glib alloc in qemuAgentGetFSInfoFillDisks()
  qemu: use glib allocation apis for qemuAgentFSInfo
  Use glib alloc API for virDomainFSInfo

 src/libvirt-domain.c                |  12 +-
 src/qemu/THREADS.txt                |  58 +-----
 src/qemu/qemu_agent.c               | 268 ++++------------------------
 src/qemu/qemu_agent.h               |  33 +++-
 src/qemu/qemu_domain.c              |  56 +-----
 src/qemu/qemu_domain.h              |   7 -
 src/qemu/qemu_driver.c              | 246 +++++++++++++++++++++++--
 src/remote/remote_daemon_dispatch.c |   2 +-
 tests/qemuagenttest.c               | 196 ++++----------------
 9 files changed, 336 insertions(+), 542 deletions(-)

-- 
2.21.0
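For readers skimming the archive, the shape of the fix is a two-phase flow, condensed here from the qemuDomainGetFSInfo() rework in patch 4. The function name below is made up for illustration, and the ACL check, "is the domain active" check and most error handling are trimmed, so treat this as a sketch rather than the literal driver code:

    static int
    getFSInfoSplitJobs(virQEMUDriverPtr driver,   /* illustrative name only */
                       virDomainObjPtr vm,
                       virDomainFSInfoPtr **info)
    {
        qemuAgentFSInfoPtr *agentinfo = NULL;
        int nfs;
        int ret = -1;

        /* Phase 1: agent job only. Fetch the raw guest-get-fsinfo data;
         * vm->def is never touched while the (possibly unresponsive or
         * malicious) agent could block us. */
        if ((nfs = qemuDomainGetFSInfoAgent(driver, vm, &agentinfo)) < 0)
            return -1;

        /* Phase 2: normal job only. The agent job has already ended, so
         * reading vm->def to map the agent's disk addresses to disk
         * aliases is safe. */
        if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
            goto cleanup;
        ret = virDomainFSInfoFormat(agentinfo, nfs, vm->def, info);
        qemuDomainObjEndJob(driver, vm);

     cleanup:
        /* virDomainFSInfoFormat() frees the individual agentinfo entries */
        g_free(agentinfo);
        return ret;
    }

At no point is a monitor/normal job held while waiting on the guest agent, which is the whole point of the series.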

The function name doesn't give a good idea of what the function does. Rename it to qemuAgentGetFSInfoFillDisks() to make it more obvious that it is filling in the disk information in the fsinfo struct.

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
---
 src/qemu/qemu_agent.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c
index 5fa8d24a91..8b54786ed8 100644
--- a/src/qemu/qemu_agent.c
+++ b/src/qemu/qemu_agent.c
@@ -1929,9 +1929,9 @@ qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent)
 }
 
 static int
-qemuAgentGetFSInfoInternalDisk(virJSONValuePtr jsondisks,
-                               qemuAgentFSInfoPtr fsinfo,
-                               virDomainDefPtr vmdef)
+qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks,
+                            qemuAgentFSInfoPtr fsinfo,
+                            virDomainDefPtr vmdef)
 {
     size_t ndisks;
     size_t i;
@@ -2143,7 +2143,7 @@ qemuAgentGetFSInfoInternal(qemuAgentPtr mon,
             goto cleanup;
         }
 
-        if (qemuAgentGetFSInfoInternalDisk(disk, info_ret[i], vmdef) < 0)
+        if (qemuAgentGetFSInfoFillDisks(disk, info_ret[i], vmdef) < 0)
             goto cleanup;
     }
-- 
2.21.0

In an effort to avoid holding both an agent job and a normal job at the same time, we shouldn't access the vm definition from within qemu_agent.c (i.e. while the agent job is being held). In preparation, we need to store the full filesystem disk information in qemuAgentDiskInfo. In a following commit, we can pass this information back to the caller, and the caller can search the vm definition to match the filesystem disk to an alias.

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
---
 src/qemu/qemu_agent.c | 36 ++++++++++++++++++++----------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c
index 8b54786ed8..fa55ff0a57 100644
--- a/src/qemu/qemu_agent.c
+++ b/src/qemu/qemu_agent.c
@@ -1853,6 +1853,11 @@ typedef qemuAgentDiskInfo *qemuAgentDiskInfoPtr;
 struct _qemuAgentDiskInfo {
     char *alias;
     char *serial;
+    virPCIDeviceAddress pci_controller;
+    char *bus_type;
+    unsigned int bus;
+    unsigned int target;
+    unsigned int unit;
     char *devnode;
 };
@@ -1876,6 +1881,7 @@ qemuAgentDiskInfoFree(qemuAgentDiskInfoPtr info)
 
     VIR_FREE(info->serial);
     VIR_FREE(info->alias);
+    VIR_FREE(info->bus_type);
     VIR_FREE(info->devnode);
     VIR_FREE(info);
 }
@@ -1956,10 +1962,6 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks,
         qemuAgentDiskInfoPtr disk;
         virDomainDiskDefPtr diskDef;
         const char *val;
-        unsigned int bus;
-        unsigned int target;
-        unsigned int unit;
-        virPCIDeviceAddress pci_address;
 
         if (!jsondisk) {
             virReportError(VIR_ERR_INTERNAL_ERROR,
@@ -1973,6 +1975,9 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks,
             return -1;
         disk = fsinfo->disks[i];
 
+        if ((val = virJSONValueObjectGetString(jsondisk, "bus-type")))
+            disk->bus_type = g_strdup(val);
+
         if ((val = virJSONValueObjectGetString(jsondisk, "serial")))
             disk->serial = g_strdup(val);
@@ -1989,9 +1994,9 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks,
             } \
         } while (0)
 
-        GET_DISK_ADDR(jsondisk, &bus, "bus");
-        GET_DISK_ADDR(jsondisk, &target, "target");
-        GET_DISK_ADDR(jsondisk, &unit, "unit");
+        GET_DISK_ADDR(jsondisk, &disk->bus, "bus");
+        GET_DISK_ADDR(jsondisk, &disk->target, "target");
+        GET_DISK_ADDR(jsondisk, &disk->unit, "unit");
 
         if (!(pci = virJSONValueObjectGet(jsondisk, "pci-controller"))) {
             virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
@@ -2000,18 +2005,17 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks,
             return -1;
         }
 
-        GET_DISK_ADDR(pci, &pci_address.domain, "domain");
-        GET_DISK_ADDR(pci, &pci_address.bus, "bus");
-        GET_DISK_ADDR(pci, &pci_address.slot, "slot");
-        GET_DISK_ADDR(pci, &pci_address.function, "function");
+        GET_DISK_ADDR(pci, &disk->pci_controller.domain, "domain");
+        GET_DISK_ADDR(pci, &disk->pci_controller.bus, "bus");
+        GET_DISK_ADDR(pci, &disk->pci_controller.slot, "slot");
+        GET_DISK_ADDR(pci, &disk->pci_controller.function, "function");
 #undef GET_DISK_ADDR
-
         if (!(diskDef = virDomainDiskByAddress(vmdef,
-                                               &pci_address,
-                                               bus,
-                                               target,
-                                               unit)))
+                                               &disk->pci_controller,
+                                               disk->bus,
+                                               disk->target,
+                                               disk->unit)))
             continue;
 
         disk->alias = g_strdup(diskDef->dst);
-- 
2.21.0

The qemuAgentDiskInfo structure is filled with information received from the agent command response, except for the 'alias' field, which is retrieved from the vm definition. Limit this structure only to data that was received from the agent message. This is another intermediate step in moving the responsibility for searching the vmdef from qemu_agent.c to qemu_driver.c so that we can avoid holding an agent job and a normal job at the same time. Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com> --- src/qemu/qemu_agent.c | 63 ++++++++++++++++++++++++------------------- 1 file changed, 35 insertions(+), 28 deletions(-) diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c index fa55ff0a57..b250077f0a 100644 --- a/src/qemu/qemu_agent.c +++ b/src/qemu/qemu_agent.c @@ -1851,7 +1851,6 @@ qemuAgentSetTime(qemuAgentPtr mon, typedef struct _qemuAgentDiskInfo qemuAgentDiskInfo; typedef qemuAgentDiskInfo *qemuAgentDiskInfoPtr; struct _qemuAgentDiskInfo { - char *alias; char *serial; virPCIDeviceAddress pci_controller; char *bus_type; @@ -1880,7 +1879,6 @@ qemuAgentDiskInfoFree(qemuAgentDiskInfoPtr info) return; VIR_FREE(info->serial); - VIR_FREE(info->alias); VIR_FREE(info->bus_type); VIR_FREE(info->devnode); VIR_FREE(info); @@ -1906,10 +1904,12 @@ qemuAgentFSInfoFree(qemuAgentFSInfoPtr info) } static virDomainFSInfoPtr -qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent) +qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent, + virDomainDefPtr vmdef) { virDomainFSInfoPtr ret = NULL; size_t i; + virDomainDiskDefPtr diskDef; if (VIR_ALLOC(ret) < 0) goto error; @@ -1924,8 +1924,17 @@ qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent) ret->ndevAlias = agent->ndisks; - for (i = 0; i < ret->ndevAlias; i++) - ret->devAlias[i] = g_strdup(agent->disks[i]->alias); + for (i = 0; i < ret->ndevAlias; i++) { + qemuAgentDiskInfoPtr agentdisk = agent->disks[i]; + if (!(diskDef = virDomainDiskByAddress(vmdef, + &agentdisk->pci_controller, + agentdisk->bus, + agentdisk->target, + agentdisk->unit))) + continue; + + ret->devAlias[i] = g_strdup(diskDef->dst); + } return ret; @@ -1936,8 +1945,7 @@ qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent) static int qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks, - qemuAgentFSInfoPtr fsinfo, - virDomainDefPtr vmdef) + qemuAgentFSInfoPtr fsinfo) { size_t ndisks; size_t i; @@ -1960,7 +1968,6 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks, virJSONValuePtr jsondisk = virJSONValueArrayGet(jsondisks, i); virJSONValuePtr pci; qemuAgentDiskInfoPtr disk; - virDomainDiskDefPtr diskDef; const char *val; if (!jsondisk) { @@ -2011,14 +2018,6 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks, GET_DISK_ADDR(pci, &disk->pci_controller.function, "function"); #undef GET_DISK_ADDR - if (!(diskDef = virDomainDiskByAddress(vmdef, - &disk->pci_controller, - disk->bus, - disk->target, - disk->unit))) - continue; - - disk->alias = g_strdup(diskDef->dst); } return 0; @@ -2030,8 +2029,7 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks, */ static int qemuAgentGetFSInfoInternal(qemuAgentPtr mon, - qemuAgentFSInfoPtr **info, - virDomainDefPtr vmdef) + qemuAgentFSInfoPtr **info) { size_t i; int ret = -1; @@ -2147,7 +2145,7 @@ qemuAgentGetFSInfoInternal(qemuAgentPtr mon, goto cleanup; } - if (qemuAgentGetFSInfoFillDisks(disk, info_ret[i], vmdef) < 0) + if (qemuAgentGetFSInfoFillDisks(disk, info_ret[i]) < 0) goto cleanup; } @@ -2177,14 +2175,14 @@ qemuAgentGetFSInfo(qemuAgentPtr mon, size_t i; int nfs; - nfs = qemuAgentGetFSInfoInternal(mon, &agentinfo, vmdef); + nfs = 
qemuAgentGetFSInfoInternal(mon, &agentinfo); if (nfs < 0) return ret; if (VIR_ALLOC_N(info_ret, nfs) < 0) goto cleanup; for (i = 0; i < nfs; i++) { - if (!(info_ret[i] = qemuAgentFSInfoToPublic(agentinfo[i]))) + if (!(info_ret[i] = qemuAgentFSInfoToPublic(agentinfo[i], vmdef))) goto cleanup; } @@ -2219,7 +2217,7 @@ qemuAgentGetFSInfoParams(qemuAgentPtr mon, size_t i, j; int nfs; - if ((nfs = qemuAgentGetFSInfoInternal(mon, &fsinfo, vmdef)) < 0) + if ((nfs = qemuAgentGetFSInfoInternal(mon, &fsinfo)) < 0) return nfs; if (virTypedParamsAddUInt(params, nparams, maxparams, @@ -2266,13 +2264,22 @@ qemuAgentGetFSInfoParams(qemuAgentPtr mon, param_name, fsinfo[i]->ndisks) < 0) goto cleanup; for (j = 0; j < fsinfo[i]->ndisks; j++) { + virDomainDiskDefPtr diskdef = NULL; qemuAgentDiskInfoPtr d = fsinfo[i]->disks[j]; - g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, - "fs.%zu.disk.%zu.alias", i, j); - if (d->alias && - virTypedParamsAddString(params, nparams, maxparams, - param_name, d->alias) < 0) - goto cleanup; + /* match the disk to the target in the vm definition */ + diskdef = virDomainDiskByAddress(vmdef, + &d->pci_controller, + d->bus, + d->target, + d->unit); + if (diskdef) { + g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, + "fs.%zu.disk.%zu.alias", i, j); + if (diskdef->dst && + virTypedParamsAddString(params, nparams, maxparams, + param_name, diskdef->dst) < 0) + goto cleanup; + } g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, "fs.%zu.disk.%zu.serial", i, j); -- 2.21.0

On 1/11/20 12:32 AM, Jonathon Jongsma wrote:
The qemuAgentDiskInfo structure is filled with information received from the agent command response, except for the 'alias' field, which is retrieved from the vm definition. Limit this structure only to data that was received from the agent message.
This is another intermediate step in moving the responsibility for searching the vmdef from qemu_agent.c to qemu_driver.c so that we can avoid holding an agent job and a normal job at the same time.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com> --- src/qemu/qemu_agent.c | 63 ++++++++++++++++++++++++------------------- 1 file changed, 35 insertions(+), 28 deletions(-)
diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c index fa55ff0a57..b250077f0a 100644 --- a/src/qemu/qemu_agent.c +++ b/src/qemu/qemu_agent.c @@ -1851,7 +1851,6 @@ qemuAgentSetTime(qemuAgentPtr mon, typedef struct _qemuAgentDiskInfo qemuAgentDiskInfo; typedef qemuAgentDiskInfo *qemuAgentDiskInfoPtr; struct _qemuAgentDiskInfo { - char *alias; char *serial; virPCIDeviceAddress pci_controller; char *bus_type; @@ -1880,7 +1879,6 @@ qemuAgentDiskInfoFree(qemuAgentDiskInfoPtr info) return;
VIR_FREE(info->serial); - VIR_FREE(info->alias); VIR_FREE(info->bus_type); VIR_FREE(info->devnode); VIR_FREE(info); @@ -1906,10 +1904,12 @@ qemuAgentFSInfoFree(qemuAgentFSInfoPtr info) }
static virDomainFSInfoPtr -qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent) +qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent, + virDomainDefPtr vmdef) { virDomainFSInfoPtr ret = NULL; size_t i; + virDomainDiskDefPtr diskDef;
This can go into the for() loop.
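In other words, roughly this (adapted from the hunk above, with diskDef scoped to the loop that uses it; shown only to illustrate the suggestion):

    for (i = 0; i < ret->ndevAlias; i++) {
        virDomainDiskDefPtr diskDef;                 /* declared per iteration */
        qemuAgentDiskInfoPtr agentdisk = agent->disks[i];

        if (!(diskDef = virDomainDiskByAddress(vmdef,
                                               &agentdisk->pci_controller,
                                               agentdisk->bus,
                                               agentdisk->target,
                                               agentdisk->unit)))
            continue;

        ret->devAlias[i] = g_strdup(diskDef->dst);
    }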
if (VIR_ALLOC(ret) < 0) goto error;
Michal

In order to avoid holding an agent job and a normal job at the same time, we want to avoid accessing the domain's definition while holding the agent job. To achieve this, qemuAgentGetFSInfo() only returns the raw information from the agent query to the caller. The caller can then release the agent job and then proceed to look up the disk alias from the vm definition. This necessitates moving a few helper functions to qemu_driver.c and exposing the agent data structure (qemuAgentFSInfo) in the header. In addition, because the agent function no longer returns the looked-up disk alias, we can't test the alias within qemuagenttest. Instead we simply test that we parse and return the raw agent data correctly. Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com> --- src/qemu/qemu_agent.c | 218 +---------------------------------- src/qemu/qemu_agent.h | 33 ++++-- src/qemu/qemu_driver.c | 254 ++++++++++++++++++++++++++++++++++++++--- tests/qemuagenttest.c | 196 +++++-------------------------- 4 files changed, 298 insertions(+), 403 deletions(-) diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c index b250077f0a..47bfef7141 100644 --- a/src/qemu/qemu_agent.c +++ b/src/qemu/qemu_agent.c @@ -1848,30 +1848,6 @@ qemuAgentSetTime(qemuAgentPtr mon, return ret; } -typedef struct _qemuAgentDiskInfo qemuAgentDiskInfo; -typedef qemuAgentDiskInfo *qemuAgentDiskInfoPtr; -struct _qemuAgentDiskInfo { - char *serial; - virPCIDeviceAddress pci_controller; - char *bus_type; - unsigned int bus; - unsigned int target; - unsigned int unit; - char *devnode; -}; - -typedef struct _qemuAgentFSInfo qemuAgentFSInfo; -typedef qemuAgentFSInfo *qemuAgentFSInfoPtr; -struct _qemuAgentFSInfo { - char *mountpoint; /* path to mount point */ - char *name; /* device name in the guest (e.g. 
"sda1") */ - char *fstype; /* filesystem type */ - long long total_bytes; - long long used_bytes; - size_t ndisks; - qemuAgentDiskInfoPtr *disks; -}; - static void qemuAgentDiskInfoFree(qemuAgentDiskInfoPtr info) { @@ -1884,7 +1860,7 @@ qemuAgentDiskInfoFree(qemuAgentDiskInfoPtr info) VIR_FREE(info); } -static void +void qemuAgentFSInfoFree(qemuAgentFSInfoPtr info) { size_t i; @@ -1903,46 +1879,6 @@ qemuAgentFSInfoFree(qemuAgentFSInfoPtr info) VIR_FREE(info); } -static virDomainFSInfoPtr -qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent, - virDomainDefPtr vmdef) -{ - virDomainFSInfoPtr ret = NULL; - size_t i; - virDomainDiskDefPtr diskDef; - - if (VIR_ALLOC(ret) < 0) - goto error; - - ret->mountpoint = g_strdup(agent->mountpoint); - ret->name = g_strdup(agent->name); - ret->fstype = g_strdup(agent->fstype); - - if (agent->disks && - VIR_ALLOC_N(ret->devAlias, agent->ndisks) < 0) - goto error; - - ret->ndevAlias = agent->ndisks; - - for (i = 0; i < ret->ndevAlias; i++) { - qemuAgentDiskInfoPtr agentdisk = agent->disks[i]; - if (!(diskDef = virDomainDiskByAddress(vmdef, - &agentdisk->pci_controller, - agentdisk->bus, - agentdisk->target, - agentdisk->unit))) - continue; - - ret->devAlias[i] = g_strdup(diskDef->dst); - } - - return ret; - - error: - virDomainFSInfoFree(ret); - return NULL; -} - static int qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks, qemuAgentFSInfoPtr fsinfo) @@ -2016,7 +1952,6 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks, GET_DISK_ADDR(pci, &disk->pci_controller.bus, "bus"); GET_DISK_ADDR(pci, &disk->pci_controller.slot, "slot"); GET_DISK_ADDR(pci, &disk->pci_controller.function, "function"); - #undef GET_DISK_ADDR } @@ -2027,9 +1962,9 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks, * -2 when agent command is not supported by the agent * -1 otherwise */ -static int -qemuAgentGetFSInfoInternal(qemuAgentPtr mon, - qemuAgentFSInfoPtr **info) +int +qemuAgentGetFSInfo(qemuAgentPtr mon, + qemuAgentFSInfoPtr **info) { size_t i; int ret = -1; @@ -2161,151 +2096,6 @@ qemuAgentGetFSInfoInternal(qemuAgentPtr mon, return ret; } -/* Returns: 0 on success - * -1 otherwise - */ -int -qemuAgentGetFSInfo(qemuAgentPtr mon, - virDomainFSInfoPtr **info, - virDomainDefPtr vmdef) -{ - int ret = -1; - qemuAgentFSInfoPtr *agentinfo = NULL; - virDomainFSInfoPtr *info_ret = NULL; - size_t i; - int nfs; - - nfs = qemuAgentGetFSInfoInternal(mon, &agentinfo); - if (nfs < 0) - return ret; - if (VIR_ALLOC_N(info_ret, nfs) < 0) - goto cleanup; - - for (i = 0; i < nfs; i++) { - if (!(info_ret[i] = qemuAgentFSInfoToPublic(agentinfo[i], vmdef))) - goto cleanup; - } - - *info = g_steal_pointer(&info_ret); - ret = nfs; - - cleanup: - for (i = 0; i < nfs; i++) { - qemuAgentFSInfoFree(agentinfo[i]); - /* if there was an error, free any memory we've allocated for the - * return value */ - if (info_ret) - virDomainFSInfoFree(info_ret[i]); - } - VIR_FREE(agentinfo); - VIR_FREE(info_ret); - return ret; -} - -/* Returns: 0 on success - * -2 when agent command is not supported by the agent - * -1 otherwise - */ -int -qemuAgentGetFSInfoParams(qemuAgentPtr mon, - virTypedParameterPtr *params, - int *nparams, int *maxparams, - virDomainDefPtr vmdef) -{ - int ret = -1; - qemuAgentFSInfoPtr *fsinfo = NULL; - size_t i, j; - int nfs; - - if ((nfs = qemuAgentGetFSInfoInternal(mon, &fsinfo)) < 0) - return nfs; - - if (virTypedParamsAddUInt(params, nparams, maxparams, - "fs.count", nfs) < 0) - goto cleanup; - - for (i = 0; i < nfs; i++) { - char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; - 
g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, - "fs.%zu.name", i); - if (virTypedParamsAddString(params, nparams, maxparams, - param_name, fsinfo[i]->name) < 0) - goto cleanup; - g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, - "fs.%zu.mountpoint", i); - if (virTypedParamsAddString(params, nparams, maxparams, - param_name, fsinfo[i]->mountpoint) < 0) - goto cleanup; - g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, - "fs.%zu.fstype", i); - if (virTypedParamsAddString(params, nparams, maxparams, - param_name, fsinfo[i]->fstype) < 0) - goto cleanup; - - /* disk usage values are not returned by older guest agents, so - * only add the params if the value is set */ - g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, - "fs.%zu.total-bytes", i); - if (fsinfo[i]->total_bytes != -1 && - virTypedParamsAddULLong(params, nparams, maxparams, - param_name, fsinfo[i]->total_bytes) < 0) - goto cleanup; - - g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, - "fs.%zu.used-bytes", i); - if (fsinfo[i]->used_bytes != -1 && - virTypedParamsAddULLong(params, nparams, maxparams, - param_name, fsinfo[i]->used_bytes) < 0) - goto cleanup; - - g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, - "fs.%zu.disk.count", i); - if (virTypedParamsAddUInt(params, nparams, maxparams, - param_name, fsinfo[i]->ndisks) < 0) - goto cleanup; - for (j = 0; j < fsinfo[i]->ndisks; j++) { - virDomainDiskDefPtr diskdef = NULL; - qemuAgentDiskInfoPtr d = fsinfo[i]->disks[j]; - /* match the disk to the target in the vm definition */ - diskdef = virDomainDiskByAddress(vmdef, - &d->pci_controller, - d->bus, - d->target, - d->unit); - if (diskdef) { - g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, - "fs.%zu.disk.%zu.alias", i, j); - if (diskdef->dst && - virTypedParamsAddString(params, nparams, maxparams, - param_name, diskdef->dst) < 0) - goto cleanup; - } - - g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, - "fs.%zu.disk.%zu.serial", i, j); - if (d->serial && - virTypedParamsAddString(params, nparams, maxparams, - param_name, d->serial) < 0) - goto cleanup; - - g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, - "fs.%zu.disk.%zu.device", i, j); - if (d->devnode && - virTypedParamsAddString(params, nparams, maxparams, - param_name, d->devnode) < 0) - goto cleanup; - } - } - ret = nfs; - - cleanup: - for (i = 0; i < nfs; i++) - qemuAgentFSInfoFree(fsinfo[i]); - VIR_FREE(fsinfo); - - return ret; -} - /* * qemuAgentGetInterfaces: * @mon: Agent monitor diff --git a/src/qemu/qemu_agent.h b/src/qemu/qemu_agent.h index 85e436cf68..5656fe60ff 100644 --- a/src/qemu/qemu_agent.h +++ b/src/qemu/qemu_agent.h @@ -65,19 +65,38 @@ typedef enum { QEMU_AGENT_SHUTDOWN_LAST, } qemuAgentShutdownMode; +typedef struct _qemuAgentDiskInfo qemuAgentDiskInfo; +typedef qemuAgentDiskInfo *qemuAgentDiskInfoPtr; +struct _qemuAgentDiskInfo { + char *serial; + virPCIDeviceAddress pci_controller; + char *bus_type; + unsigned int bus; + unsigned int target; + unsigned int unit; + char *devnode; +}; + +typedef struct _qemuAgentFSInfo qemuAgentFSInfo; +typedef qemuAgentFSInfo *qemuAgentFSInfoPtr; +struct _qemuAgentFSInfo { + char *mountpoint; /* path to mount point */ + char *name; /* device name in the guest (e.g. 
"sda1") */ + char *fstype; /* filesystem type */ + long long total_bytes; + long long used_bytes; + size_t ndisks; + qemuAgentDiskInfoPtr *disks; +}; +void qemuAgentFSInfoFree(qemuAgentFSInfoPtr info); + int qemuAgentShutdown(qemuAgentPtr mon, qemuAgentShutdownMode mode); int qemuAgentFSFreeze(qemuAgentPtr mon, const char **mountpoints, unsigned int nmountpoints); int qemuAgentFSThaw(qemuAgentPtr mon); -int qemuAgentGetFSInfo(qemuAgentPtr mon, virDomainFSInfoPtr **info, - virDomainDefPtr vmdef); - -int qemuAgentGetFSInfoParams(qemuAgentPtr mon, - virTypedParameterPtr *params, - int *nparams, int *maxparams, - virDomainDefPtr vmdef); +int qemuAgentGetFSInfo(qemuAgentPtr mon, qemuAgentFSInfoPtr **info); int qemuAgentSuspend(qemuAgentPtr mon, unsigned int target); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index d6b1e9f00c..50e6178dbb 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -21814,6 +21814,110 @@ qemuNodeAllocPages(virConnectPtr conn, startCell, cellCount, add); } +static int +qemuDomainGetFSInfoAgent(virQEMUDriverPtr driver, + virDomainObjPtr vm, + qemuAgentFSInfoPtr **info) +{ + int ret = -1; + qemuAgentPtr agent; + + if (qemuDomainObjBeginAgentJob(driver, vm, + QEMU_AGENT_JOB_QUERY) < 0) + return ret; + + if (virDomainObjCheckActive(vm) < 0) + goto endjob; + + if (!qemuDomainAgentAvailable(vm, true)) + goto endjob; + + agent = qemuDomainObjEnterAgent(vm); + ret = qemuAgentGetFSInfo(agent, info); + qemuDomainObjExitAgent(vm, agent); + + endjob: + qemuDomainObjEndAgentJob(vm); + return ret; +} + +static virDomainFSInfoPtr +qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent, + virDomainDefPtr vmdef) +{ + virDomainFSInfoPtr ret = NULL; + size_t i; + virDomainDiskDefPtr diskDef; + + if (VIR_ALLOC(ret) < 0) + goto error; + + ret->mountpoint = g_strdup(agent->mountpoint); + ret->name = g_strdup(agent->name); + ret->fstype = g_strdup(agent->fstype); + + if (agent->disks && + VIR_ALLOC_N(ret->devAlias, agent->ndisks) < 0) + goto error; + + ret->ndevAlias = agent->ndisks; + + for (i = 0; i < ret->ndevAlias; i++) { + qemuAgentDiskInfoPtr agentdisk = agent->disks[i]; + if (!(diskDef = virDomainDiskByAddress(vmdef, + &agentdisk->pci_controller, + agentdisk->bus, + agentdisk->target, + agentdisk->unit))) + continue; + + ret->devAlias[i] = g_strdup(diskDef->dst); + } + + return ret; + + error: + virDomainFSInfoFree(ret); + return NULL; +} + +/* Returns: 0 on success + * -1 otherwise + */ +static int +virDomainFSInfoFormat(qemuAgentFSInfoPtr *agentinfo, + int nagentinfo, + virDomainDefPtr vmdef, + virDomainFSInfoPtr **info) +{ + int ret = -1; + virDomainFSInfoPtr *info_ret = NULL; + size_t i; + + if (nagentinfo < 0) + return ret; + if (VIR_ALLOC_N(info_ret, nagentinfo) < 0) + goto cleanup; + + for (i = 0; i < nagentinfo; i++) { + if (!(info_ret[i] = qemuAgentFSInfoToPublic(agentinfo[i], vmdef))) + goto cleanup; + } + + *info = g_steal_pointer(&info_ret); + ret = nagentinfo; + + cleanup: + for (i = 0; i < nagentinfo; i++) { + qemuAgentFSInfoFree(agentinfo[i]); + /* if there was an error, free any memory we've allocated for the + * return value */ + if (info_ret) + virDomainFSInfoFree(info_ret[i]); + } + VIR_FREE(info_ret); + return ret; +} static int qemuDomainGetFSInfo(virDomainPtr dom, @@ -21822,8 +21926,9 @@ qemuDomainGetFSInfo(virDomainPtr dom, { virQEMUDriverPtr driver = dom->conn->privateData; virDomainObjPtr vm; - qemuAgentPtr agent; + qemuAgentFSInfoPtr *agentinfo = NULL; int ret = -1; + int nfs; virCheckFlags(0, ret); @@ -21833,25 +21938,22 @@ 
qemuDomainGetFSInfo(virDomainPtr dom, if (virDomainGetFSInfoEnsureACL(dom->conn, vm->def) < 0) goto cleanup; - if (qemuDomainObjBeginJobWithAgent(driver, vm, - QEMU_JOB_QUERY, - QEMU_AGENT_JOB_QUERY) < 0) + if ((nfs = qemuDomainGetFSInfoAgent(driver, vm, &agentinfo)) < 0) + goto cleanup; + + if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) goto cleanup; if (virDomainObjCheckActive(vm) < 0) goto endjob; - if (!qemuDomainAgentAvailable(vm, true)) - goto endjob; - - agent = qemuDomainObjEnterAgent(vm); - ret = qemuAgentGetFSInfo(agent, info, vm->def); - qemuDomainObjExitAgent(vm, agent); + ret = virDomainFSInfoFormat(agentinfo, nfs, vm->def, info); endjob: - qemuDomainObjEndJobWithAgent(driver, vm); + qemuDomainObjEndJob(driver, vm); cleanup: + g_free(agentinfo); virDomainObjEndAPI(&vm); return ret; } @@ -22857,6 +22959,103 @@ qemuDomainGetGuestInfoCheckSupport(unsigned int *types) *types = *types & supportedGuestInfoTypes; } +/* Returns: 0 on success + * -1 otherwise + */ +static int +qemuAgentFSInfoFormatParams(qemuAgentFSInfoPtr *fsinfo, + int nfs, + virDomainDefPtr vmdef, + virTypedParameterPtr *params, + int *nparams, int *maxparams) +{ + int ret = -1; + size_t i, j; + + /* FIXME: get disk target */ + + if (virTypedParamsAddUInt(params, nparams, maxparams, + "fs.count", nfs) < 0) + goto cleanup; + + for (i = 0; i < nfs; i++) { + char param_name[VIR_TYPED_PARAM_FIELD_LENGTH]; + g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, + "fs.%zu.name", i); + if (virTypedParamsAddString(params, nparams, maxparams, + param_name, fsinfo[i]->name) < 0) + goto cleanup; + g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, + "fs.%zu.mountpoint", i); + if (virTypedParamsAddString(params, nparams, maxparams, + param_name, fsinfo[i]->mountpoint) < 0) + goto cleanup; + g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, + "fs.%zu.fstype", i); + if (virTypedParamsAddString(params, nparams, maxparams, + param_name, fsinfo[i]->fstype) < 0) + goto cleanup; + + /* disk usage values are not returned by older guest agents, so + * only add the params if the value is set */ + g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, + "fs.%zu.total-bytes", i); + if (fsinfo[i]->total_bytes != -1 && + virTypedParamsAddULLong(params, nparams, maxparams, + param_name, fsinfo[i]->total_bytes) < 0) + goto cleanup; + + g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, + "fs.%zu.used-bytes", i); + if (fsinfo[i]->used_bytes != -1 && + virTypedParamsAddULLong(params, nparams, maxparams, + param_name, fsinfo[i]->used_bytes) < 0) + goto cleanup; + + g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, + "fs.%zu.disk.count", i); + if (virTypedParamsAddUInt(params, nparams, maxparams, + param_name, fsinfo[i]->ndisks) < 0) + goto cleanup; + for (j = 0; j < fsinfo[i]->ndisks; j++) { + virDomainDiskDefPtr diskdef = NULL; + qemuAgentDiskInfoPtr d = fsinfo[i]->disks[j]; + /* match the disk to the target in the vm definition */ + diskdef = virDomainDiskByAddress(vmdef, + &d->pci_controller, + d->bus, + d->target, + d->unit); + if (diskdef) { + g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, + "fs.%zu.disk.%zu.alias", i, j); + if (diskdef->dst && + virTypedParamsAddString(params, nparams, maxparams, + param_name, diskdef->dst) < 0) + goto cleanup; + } + + g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, + "fs.%zu.disk.%zu.serial", i, j); + if (d->serial && + virTypedParamsAddString(params, nparams, maxparams, + param_name, d->serial) < 0) + goto cleanup; + + g_snprintf(param_name, VIR_TYPED_PARAM_FIELD_LENGTH, + 
"fs.%zu.disk.%zu.device", i, j); + if (d->devnode && + virTypedParamsAddString(params, nparams, maxparams, + param_name, d->devnode) < 0) + goto cleanup; + } + } + ret = nfs; + + cleanup: + return ret; +} + static int qemuDomainGetGuestInfo(virDomainPtr dom, unsigned int types, @@ -22872,6 +23071,9 @@ qemuDomainGetGuestInfo(virDomainPtr dom, g_autofree char *hostname = NULL; unsigned int supportedTypes = types; int rc; + int nfs = 0; + qemuAgentFSInfoPtr *agentfsinfo = NULL; + size_t i; virCheckFlags(0, -1); qemuDomainGetGuestInfoCheckSupport(&supportedTypes); @@ -22882,13 +23084,12 @@ qemuDomainGetGuestInfo(virDomainPtr dom, if (virDomainGetGuestInfoEnsureACL(dom->conn, vm->def) < 0) goto cleanup; - if (qemuDomainObjBeginJobWithAgent(driver, vm, - QEMU_JOB_QUERY, - QEMU_AGENT_JOB_QUERY) < 0) + if (qemuDomainObjBeginAgentJob(driver, vm, + QEMU_AGENT_JOB_QUERY) < 0) goto cleanup; if (!qemuDomainAgentAvailable(vm, true)) - goto endjob; + goto endagentjob; agent = qemuDomainObjEnterAgent(vm); @@ -22923,7 +23124,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom, } } if (supportedTypes & VIR_DOMAIN_GUEST_INFO_FILESYSTEM) { - rc = qemuAgentGetFSInfoParams(agent, params, nparams, &maxparams, vm->def); + rc = nfs = qemuAgentGetFSInfo(agent, &agentfsinfo); if (rc < 0 && !(rc == -2 && types == 0)) goto exitagent; } @@ -22933,10 +23134,29 @@ qemuDomainGetGuestInfo(virDomainPtr dom, exitagent: qemuDomainObjExitAgent(vm, agent); + endagentjob: + qemuDomainObjEndAgentJob(vm); + + if (nfs > 0) { + if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0) + goto cleanup; + + if (virDomainObjCheckActive(vm) < 0) + goto endjob; + + /* we need to convert the agent fsinfo struct to parameters and match + * it to the vm disk target */ + qemuAgentFSInfoFormatParams(agentfsinfo, nfs, vm->def, params, nparams, &maxparams); + endjob: - qemuDomainObjEndJobWithAgent(driver, vm); + qemuDomainObjEndJob(driver, vm); + } cleanup: + for (i = 0; i < nfs; i++) + qemuAgentFSInfoFree(agentfsinfo[i]); + VIR_FREE(agentfsinfo); + virDomainObjEndAPI(&vm); return ret; } diff --git a/tests/qemuagenttest.c b/tests/qemuagenttest.c index 644dc9d08b..a45ce4f44a 100644 --- a/tests/qemuagenttest.c +++ b/tests/qemuagenttest.c @@ -247,14 +247,14 @@ testQemuAgentGetFSInfo(const void *data) virDomainXMLOptionPtr xmlopt = (virDomainXMLOptionPtr)data; qemuMonitorTestPtr test = NULL; virDomainDefPtr def = NULL; - virDomainFSInfoPtr *info = NULL; + qemuAgentFSInfoPtr *info = NULL; int ret = -1, ninfo = 0, i; if (testQemuAgentGetFSInfoCommon(xmlopt, &test, &def) < 0) goto cleanup; if ((ninfo = qemuAgentGetFSInfo(qemuMonitorTestGetAgent(test), - &info, def)) < 0) + &info)) < 0) goto cleanup; if (ninfo != 3) { @@ -266,35 +266,48 @@ testQemuAgentGetFSInfo(const void *data) if (STRNEQ(info[2]->name, "sda1") || STRNEQ(info[2]->mountpoint, "/") || STRNEQ(info[2]->fstype, "ext4") || - info[2]->ndevAlias != 1 || - !info[2]->devAlias || !info[2]->devAlias[0] || - STRNEQ(info[2]->devAlias[0], "hdc")) { + info[2]->ndisks != 1 || + !info[2]->disks || !info[2]->disks[0]) { virReportError(VIR_ERR_INTERNAL_ERROR, - "unexpected filesystems information returned for sda1 (%s,%s)", - info[2]->name, info[2]->devAlias ? 
info[2]->devAlias[0] : "null"); + "unexpected filesystems information returned for sda1 (%s)", + info[2]->name); ret = -1; goto cleanup; } if (STRNEQ(info[1]->name, "dm-1") || STRNEQ(info[1]->mountpoint, "/opt") || STRNEQ(info[1]->fstype, "vfat") || - info[1]->ndevAlias != 2 || - !info[1]->devAlias || !info[1]->devAlias[0] || !info[1]->devAlias[1] || - STRNEQ(info[1]->devAlias[0], "vda") || - STRNEQ(info[1]->devAlias[1], "vdb")) { + info[1]->ndisks != 2 || + !info[1]->disks || !info[1]->disks[0] || !info[1]->disks[1] || + STRNEQ(info[1]->disks[0]->bus_type, "virtio") || + info[1]->disks[0]->bus != 0 || + info[1]->disks[0]->target != 0 || + info[1]->disks[0]->unit != 0 || + info[1]->disks[0]->pci_controller.domain != 0 || + info[1]->disks[0]->pci_controller.bus != 0 || + info[1]->disks[0]->pci_controller.slot != 6 || + info[1]->disks[0]->pci_controller.function != 0 || + STRNEQ(info[1]->disks[1]->bus_type, "virtio") || + info[1]->disks[1]->bus != 0 || + info[1]->disks[1]->target != 0 || + info[1]->disks[1]->unit != 0 || + info[1]->disks[1]->pci_controller.domain != 0 || + info[1]->disks[1]->pci_controller.bus != 0 || + info[1]->disks[1]->pci_controller.slot != 7 || + info[1]->disks[1]->pci_controller.function != 0) { virReportError(VIR_ERR_INTERNAL_ERROR, - "unexpected filesystems information returned for dm-1 (%s,%s)", - info[0]->name, info[0]->devAlias ? info[0]->devAlias[0] : "null"); + "unexpected filesystems information returned for dm-1 (%s)", + info[0]->name); ret = -1; goto cleanup; } if (STRNEQ(info[0]->name, "sdb1") || STRNEQ(info[0]->mountpoint, "/mnt/disk") || STRNEQ(info[0]->fstype, "xfs") || - info[0]->ndevAlias != 0 || info[0]->devAlias) { + info[0]->ndisks != 0 || info[0]->disks) { virReportError(VIR_ERR_INTERNAL_ERROR, - "unexpected filesystems information returned for sdb1 (%s,%s)", - info[0]->name, info[0]->devAlias ? 
info[0]->devAlias[0] : "null"); + "unexpected filesystems information returned for sdb1 (%s)", + info[0]->name); ret = -1; goto cleanup; } @@ -313,7 +326,7 @@ testQemuAgentGetFSInfo(const void *data) "}") < 0) goto cleanup; - if (qemuAgentGetFSInfo(qemuMonitorTestGetAgent(test), &info, def) != -1) { + if (qemuAgentGetFSInfo(qemuMonitorTestGetAgent(test), &info) >= 0) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", "agent get-fsinfo command should have failed"); goto cleanup; @@ -323,159 +336,13 @@ testQemuAgentGetFSInfo(const void *data) cleanup: for (i = 0; i < ninfo; i++) - virDomainFSInfoFree(info[i]); + qemuAgentFSInfoFree(info[i]); VIR_FREE(info); virDomainDefFree(def); qemuMonitorTestFree(test); return ret; } -static int -testQemuAgentGetFSInfoParams(const void *data) -{ - virDomainXMLOptionPtr xmlopt = (virDomainXMLOptionPtr)data; - qemuMonitorTestPtr test = NULL; - virDomainDefPtr def = NULL; - virTypedParameterPtr params = NULL; - int nparams = 0, maxparams = 0; - int ret = -1; - unsigned int count; - const char *name, *mountpoint, *fstype, *alias, *serial; - unsigned int diskcount; - unsigned long long bytesused, bytestotal; - const char *alias2; - - if (testQemuAgentGetFSInfoCommon(xmlopt, &test, &def) < 0) - goto cleanup; - - if (qemuAgentGetFSInfoParams(qemuMonitorTestGetAgent(test), - ¶ms, &nparams, &maxparams, def) < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - "Failed to execute qemuAgentGetFSInfoParams()"); - goto cleanup; - } - - if (virTypedParamsGetUInt(params, nparams, "fs.count", &count) < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - "expected filesystem count"); - goto cleanup; - } - - if (count != 3) { - virReportError(VIR_ERR_INTERNAL_ERROR, - "expected 3 filesystems information, got %d", count); - goto cleanup; - } - - if (virTypedParamsGetString(params, nparams, "fs.2.name", &name) < 0 || - virTypedParamsGetString(params, nparams, "fs.2.mountpoint", &mountpoint) < 0 || - virTypedParamsGetString(params, nparams, "fs.2.fstype", &fstype) < 0 || - virTypedParamsGetULLong(params, nparams, "fs.2.used-bytes", &bytesused) <= 0 || - virTypedParamsGetULLong(params, nparams, "fs.2.total-bytes", &bytestotal) <= 0 || - virTypedParamsGetUInt(params, nparams, "fs.2.disk.count", &diskcount) < 0 || - virTypedParamsGetString(params, nparams, "fs.2.disk.0.alias", &alias) < 0 || - virTypedParamsGetString(params, nparams, "fs.2.disk.0.serial", &serial) < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, - "Missing an expected parameter for sda1 (%s,%s)", - name, alias); - goto cleanup; - } - - if (STRNEQ(name, "sda1") || - STRNEQ(mountpoint, "/") || - STRNEQ(fstype, "ext4") || - bytesused != 229019648 || - bytestotal != 952840192 || - diskcount != 1 || - STRNEQ(alias, "hdc") || - STRNEQ(serial, "ARBITRARYSTRING")) { - virReportError(VIR_ERR_INTERNAL_ERROR, - "unexpected filesystems information returned for sda1 (%s,%s)", - name, alias); - goto cleanup; - } - - if (virTypedParamsGetString(params, nparams, "fs.1.name", &name) < 0 || - virTypedParamsGetString(params, nparams, "fs.1.mountpoint", &mountpoint) < 0 || - virTypedParamsGetString(params, nparams, "fs.1.fstype", &fstype) < 0 || - virTypedParamsGetULLong(params, nparams, "fs.1.used-bytes", &bytesused) == 1 || - virTypedParamsGetULLong(params, nparams, "fs.1.total-bytes", &bytestotal) == 1 || - virTypedParamsGetUInt(params, nparams, "fs.1.disk.count", &diskcount) < 0 || - virTypedParamsGetString(params, nparams, "fs.1.disk.0.alias", &alias) < 0 || - virTypedParamsGetString(params, nparams, "fs.1.disk.1.alias", 
&alias2) < 0) { - virReportError(VIR_ERR_INTERNAL_ERROR, - "Incorrect parameters for dm-1 (%s,%s)", - name, alias); - goto cleanup; - } - if (STRNEQ(name, "dm-1") || - STRNEQ(mountpoint, "/opt") || - STRNEQ(fstype, "vfat") || - diskcount != 2 || - STRNEQ(alias, "vda") || - STRNEQ(alias2, "vdb")) { - virReportError(VIR_ERR_INTERNAL_ERROR, - "unexpected filesystems information returned for dm-1 (%s,%s)", - name, alias); - goto cleanup; - } - - alias = NULL; - if (virTypedParamsGetString(params, nparams, "fs.0.name", &name) < 0 || - virTypedParamsGetString(params, nparams, "fs.0.mountpoint", &mountpoint) < 0 || - virTypedParamsGetString(params, nparams, "fs.0.fstype", &fstype) < 0 || - virTypedParamsGetULLong(params, nparams, "fs.0.used-bytes", &bytesused) == 1 || - virTypedParamsGetULLong(params, nparams, "fs.0.total-bytes", &bytestotal) == 1 || - virTypedParamsGetUInt(params, nparams, "fs.0.disk.count", &diskcount) < 0 || - virTypedParamsGetString(params, nparams, "fs.0.disk.0.alias", &alias) == 1) { - virReportError(VIR_ERR_INTERNAL_ERROR, - "Incorrect parameters for sdb1 (%s,%s)", - name, alias); - goto cleanup; - } - - if (STRNEQ(name, "sdb1") || - STRNEQ(mountpoint, "/mnt/disk") || - STRNEQ(fstype, "xfs") || - diskcount != 0 || - alias != NULL) { - virReportError(VIR_ERR_INTERNAL_ERROR, - "unexpected filesystems information returned for sdb1 (%s,%s)", - name, alias); - goto cleanup; - } - - if (qemuMonitorTestAddAgentSyncResponse(test) < 0) - goto cleanup; - - if (qemuMonitorTestAddItem(test, "guest-get-fsinfo", - "{\"error\":" - " {\"class\":\"CommandDisabled\"," - " \"desc\":\"The command guest-get-fsinfo " - "has been disabled for " - "this instance\"," - " \"data\":{\"name\":\"guest-get-fsinfo\"}" - " }" - "}") < 0) - goto cleanup; - - if (qemuAgentGetFSInfoParams(qemuMonitorTestGetAgent(test), ¶ms, - &nparams, &maxparams, def) != -2) { - virReportError(VIR_ERR_INTERNAL_ERROR, "%s", - "agent get-fsinfo command should have failed"); - goto cleanup; - } - - ret = 0; - - cleanup: - virTypedParamsFree(params, nparams); - virDomainDefFree(def); - qemuMonitorTestFree(test); - return ret; -} - - static int testQemuAgentSuspend(const void *data) { @@ -1438,7 +1305,6 @@ mymain(void) DO_TEST(FSFreeze); DO_TEST(FSThaw); DO_TEST(FSTrim); - DO_TEST(GetFSInfoParams); DO_TEST(GetFSInfo); DO_TEST(Suspend); DO_TEST(Shutdown); -- 2.21.0

This function potentially grabs both a monitor job and an agent job at the same time. This is problematic because it means that a malicious (or just buggy) guest agent can cause a denial of service on the host. The presence of this function makes it easy to do the wrong thing and hold both jobs at the same time. All existing uses have already been removed by previous commits. Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com> --- src/qemu/THREADS.txt | 58 +++++------------------------------------- src/qemu/qemu_domain.c | 56 ++++------------------------------------ src/qemu/qemu_domain.h | 7 ----- 3 files changed, 11 insertions(+), 110 deletions(-) diff --git a/src/qemu/THREADS.txt b/src/qemu/THREADS.txt index aa428fda6a..a7d8709a43 100644 --- a/src/qemu/THREADS.txt +++ b/src/qemu/THREADS.txt @@ -61,11 +61,12 @@ There are a number of locks on various objects Agent job condition is then used when thread wishes to talk to qemu agent monitor. It is possible to acquire just agent job - (qemuDomainObjBeginAgentJob), or only normal job - (qemuDomainObjBeginJob) or both at the same time - (qemuDomainObjBeginJobWithAgent). Which type of job to grab depends - whether caller wishes to communicate only with agent socket, or only - with qemu monitor socket or both, respectively. + (qemuDomainObjBeginAgentJob), or only normal job (qemuDomainObjBeginJob) + but not both at the same time. Holding an agent job and a normal job would + allow an unresponsive or malicious agent to block normal libvirt API and + potentially result in a denial of service. Which type of job to grab + depends whether caller wishes to communicate only with agent socket, or + only with qemu monitor socket. Immediately after acquiring the virDomainObjPtr lock, any method which intends to update state must acquire asynchronous, normal or @@ -141,18 +142,6 @@ To acquire the agent job condition -To acquire both normal and agent job condition - - qemuDomainObjBeginJobWithAgent() - - Waits until there is no normal and no agent job set - - Sets both job.active and job.agentActive with required job types - - qemuDomainObjEndJobWithAgent() - - Sets both job.active and job.agentActive to 0 - - Signals on job.cond condition - - - To acquire the asynchronous job condition qemuDomainObjBeginAsyncJob() @@ -292,41 +281,6 @@ Design patterns virDomainObjEndAPI(&obj); - * Invoking both monitor and agent commands on a virDomainObjPtr - - virDomainObjPtr obj; - qemuAgentPtr agent; - - obj = qemuDomObjFromDomain(dom); - - qemuDomainObjBeginJobWithAgent(obj, QEMU_JOB_TYPE, QEMU_AGENT_JOB_TYPE); - - if (!virDomainObjIsActive(dom)) - goto cleanup; - - ...do prep work... - - if (!qemuDomainAgentAvailable(obj, true)) - goto cleanup; - - agent = qemuDomainObjEnterAgent(obj); - qemuAgentXXXX(agent, ..); - qemuDomainObjExitAgent(obj, agent); - - ... - - qemuDomainObjEnterMonitor(obj); - qemuMonitorXXXX(priv->mon); - qemuDomainObjExitMonitor(obj); - - /* Alternatively, talk to the monitor first and then talk to the agent. */ - - ...do final work... - - qemuDomainObjEndJobWithAgent(obj); - virDomainObjEndAPI(&obj); - - * Running asynchronous job virDomainObjPtr obj; diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index 1f358544ab..ce141c5256 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -9640,26 +9640,6 @@ qemuDomainObjBeginAgentJob(virQEMUDriverPtr driver, QEMU_ASYNC_JOB_NONE, false); } -/** - * qemuDomainObjBeginJobWithAgent: - * - * Grabs both monitor and agent types of job. 
Use if caller talks to - * both monitor and guest agent. However, if @job (or @agentJob) is - * QEMU_JOB_NONE (or QEMU_AGENT_JOB_NONE) only agent job is acquired (or - * monitor job). - * - * To end job call qemuDomainObjEndJobWithAgent. - */ -int -qemuDomainObjBeginJobWithAgent(virQEMUDriverPtr driver, - virDomainObjPtr obj, - qemuDomainJob job, - qemuDomainAgentJob agentJob) -{ - return qemuDomainObjBeginJobInternal(driver, obj, job, agentJob, - QEMU_ASYNC_JOB_NONE, false); -} - int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj, qemuDomainAsyncJob asyncJob, @@ -9774,31 +9754,6 @@ qemuDomainObjEndAgentJob(virDomainObjPtr obj) virCondBroadcast(&priv->job.cond); } -void -qemuDomainObjEndJobWithAgent(virQEMUDriverPtr driver, - virDomainObjPtr obj) -{ - qemuDomainObjPrivatePtr priv = obj->privateData; - qemuDomainJob job = priv->job.active; - qemuDomainAgentJob agentJob = priv->job.agentActive; - - priv->jobs_queued--; - - VIR_DEBUG("Stopping both jobs: %s %s (async=%s vm=%p name=%s)", - qemuDomainJobTypeToString(job), - qemuDomainAgentJobTypeToString(agentJob), - qemuDomainAsyncJobTypeToString(priv->job.asyncJob), - obj, obj->def->name); - - qemuDomainObjResetJob(priv); - qemuDomainObjResetAgentJob(priv); - if (qemuDomainTrackJob(job)) - qemuDomainObjSaveStatus(driver, obj); - /* We indeed need to wake up ALL threads waiting because - * grabbing a job requires checking more variables. */ - virCondBroadcast(&priv->job.cond); -} - void qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj) { @@ -9832,9 +9787,9 @@ qemuDomainObjAbortAsyncJob(virDomainObjPtr obj) * obj must be locked before calling * * To be called immediately before any QEMU monitor API call - * Must have already either called qemuDomainObjBeginJob() or - * qemuDomainObjBeginJobWithAgent() and checked that the VM is - * still active; may not be used for nested async jobs. + * Must have already called qemuDomainObjBeginJob() and checked + * that the VM is still active; may not be used for nested async + * jobs. * * To be followed with qemuDomainObjExitMonitor() once complete */ @@ -9956,9 +9911,8 @@ qemuDomainObjEnterMonitorAsync(virQEMUDriverPtr driver, * obj must be locked before calling * * To be called immediately before any QEMU agent API call. - * Must have already called qemuDomainObjBeginAgentJob() or - * qemuDomainObjBeginJobWithAgent() and checked that the VM is - * still active. + * Must have already called qemuDomainObjBeginAgentJob() and + * checked that the VM is still active. 
* * To be followed with qemuDomainObjExitAgent() once complete */ diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index c6afc484f6..eb34f17921 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -650,11 +650,6 @@ int qemuDomainObjBeginAgentJob(virQEMUDriverPtr driver, virDomainObjPtr obj, qemuDomainAgentJob agentJob) G_GNUC_WARN_UNUSED_RESULT; -int qemuDomainObjBeginJobWithAgent(virQEMUDriverPtr driver, - virDomainObjPtr obj, - qemuDomainJob job, - qemuDomainAgentJob agentJob) - G_GNUC_WARN_UNUSED_RESULT; int qemuDomainObjBeginAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj, qemuDomainAsyncJob asyncJob, @@ -673,8 +668,6 @@ int qemuDomainObjBeginJobNowait(virQEMUDriverPtr driver, void qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj); void qemuDomainObjEndAgentJob(virDomainObjPtr obj); -void qemuDomainObjEndJobWithAgent(virQEMUDriverPtr driver, - virDomainObjPtr obj); void qemuDomainObjEndAsyncJob(virQEMUDriverPtr driver, virDomainObjPtr obj); void qemuDomainObjAbortAsyncJob(virDomainObjPtr obj); -- 2.21.0

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
---
 src/qemu/qemu_agent.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c
index 47bfef7141..7186b1da64 100644
--- a/src/qemu/qemu_agent.c
+++ b/src/qemu/qemu_agent.c
@@ -1894,10 +1894,8 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks,
 
     ndisks = virJSONValueArraySize(jsondisks);
 
-    if (ndisks &&
-        VIR_ALLOC_N(fsinfo->disks, ndisks) < 0)
-        return -1;
-
+    if (ndisks)
+        fsinfo->disks = g_new0(qemuAgentDiskInfoPtr, ndisks);
     fsinfo->ndisks = ndisks;
 
     for (i = 0; i < fsinfo->ndisks; i++) {
@@ -1914,8 +1912,7 @@ qemuAgentGetFSInfoFillDisks(virJSONValuePtr jsondisks,
             return -1;
         }
 
-        if (VIR_ALLOC(fsinfo->disks[i]) < 0)
-            return -1;
+        fsinfo->disks[i] = g_new0(qemuAgentDiskInfo, 1);
         disk = fsinfo->disks[i];
 
         if ((val = virJSONValueObjectGetString(jsondisk, "bus-type")))
-- 
2.21.0
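Background for reviewers less used to the glib allocators (not part of the patch): unlike VIR_ALLOC()/VIR_ALLOC_N(), g_new0() aborts the process on allocation failure instead of returning an error, which is why the "< 0" checks and early returns can simply be dropped. A tiny standalone illustration, using a made-up stand-in type:

    #include <glib.h>

    typedef struct { unsigned int bus; } ExampleDisk;   /* stand-in type */

    int main(void)
    {
        /* g_new0() zero-fills and never returns NULL; on OOM it aborts,
         * so there is no error path for the caller to handle. */
        ExampleDisk **disks = g_new0(ExampleDisk *, 4);
        disks[0] = g_new0(ExampleDisk, 1);

        g_free(disks[0]);
        g_free(disks);
        return 0;
    }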

Switch from old VIR_ allocation APIs to glib equivalents. Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com> --- src/qemu/qemu_agent.c | 26 ++++++++++++-------------- src/qemu/qemu_driver.c | 2 +- 2 files changed, 13 insertions(+), 15 deletions(-) diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c index 7186b1da64..b6556ffbaf 100644 --- a/src/qemu/qemu_agent.c +++ b/src/qemu/qemu_agent.c @@ -1854,10 +1854,10 @@ qemuAgentDiskInfoFree(qemuAgentDiskInfoPtr info) if (!info) return; - VIR_FREE(info->serial); - VIR_FREE(info->bus_type); - VIR_FREE(info->devnode); - VIR_FREE(info); + g_free(info->serial); + g_free(info->bus_type); + g_free(info->devnode); + g_free(info); } void @@ -1868,15 +1868,15 @@ qemuAgentFSInfoFree(qemuAgentFSInfoPtr info) if (!info) return; - VIR_FREE(info->mountpoint); - VIR_FREE(info->name); - VIR_FREE(info->fstype); + g_free(info->mountpoint); + g_free(info->name); + g_free(info->fstype); for (i = 0; i < info->ndisks; i++) qemuAgentDiskInfoFree(info->disks[i]); - VIR_FREE(info->disks); + g_free(info->disks); - VIR_FREE(info); + g_free(info); } static int @@ -1999,8 +1999,7 @@ qemuAgentGetFSInfo(qemuAgentPtr mon, *info = NULL; goto cleanup; } - if (VIR_ALLOC_N(info_ret, ndata) < 0) - goto cleanup; + info_ret = g_new0(qemuAgentFSInfoPtr, ndata); for (i = 0; i < ndata; i++) { /* Reverse the order to arrange in mount order */ @@ -2017,8 +2016,7 @@ qemuAgentGetFSInfo(qemuAgentPtr mon, goto cleanup; } - if (VIR_ALLOC(info_ret[i]) < 0) - goto cleanup; + info_ret[i] = g_new0(qemuAgentFSInfo, 1); if (!(result = virJSONValueObjectGetString(entry, "mountpoint"))) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", @@ -2088,7 +2086,7 @@ qemuAgentGetFSInfo(qemuAgentPtr mon, if (info_ret) { for (i = 0; i < ndata; i++) qemuAgentFSInfoFree(info_ret[i]); - VIR_FREE(info_ret); + g_free(info_ret); } return ret; } diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 50e6178dbb..812ff45707 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -23155,7 +23155,7 @@ qemuDomainGetGuestInfo(virDomainPtr dom, cleanup: for (i = 0; i < nfs; i++) qemuAgentFSInfoFree(agentfsinfo[i]); - VIR_FREE(agentfsinfo); + g_free(agentfsinfo); virDomainObjEndAPI(&vm); return ret; -- 2.21.0
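A note on the free-side conversion (again, background rather than patch content): VIR_FREE() is a macro that frees the memory and also resets the variable to NULL, whereas g_free() only frees. In destructors like qemuAgentFSInfoFree(), where the containing struct is released immediately afterwards, clearing the members first buys nothing, so plain g_free() is equivalent. A minimal glib-only sketch of the behaviour being relied on:

    #include <glib.h>

    int main(void)
    {
        char *serial = g_strdup("ARBITRARYSTRING");

        /* g_free() releases the memory but, unlike libvirt's VIR_FREE()
         * macro, does not reset the variable to NULL. */
        g_free(serial);
        serial = NULL;        /* clear explicitly if the pointer lives on */

        g_free(serial);       /* g_free(NULL) is a documented no-op */
        return 0;
    }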

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com> --- src/libvirt-domain.c | 12 ++++++------ src/qemu/qemu_driver.c | 18 +++++------------- src/remote/remote_daemon_dispatch.c | 2 +- 3 files changed, 12 insertions(+), 20 deletions(-) diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c index eb66999f07..33c0e1949d 100644 --- a/src/libvirt-domain.c +++ b/src/libvirt-domain.c @@ -11924,15 +11924,15 @@ virDomainFSInfoFree(virDomainFSInfoPtr info) if (!info) return; - VIR_FREE(info->mountpoint); - VIR_FREE(info->name); - VIR_FREE(info->fstype); + g_free(info->mountpoint); + g_free(info->name); + g_free(info->fstype); for (i = 0; i < info->ndevAlias; i++) - VIR_FREE(info->devAlias[i]); - VIR_FREE(info->devAlias); + g_free(info->devAlias[i]); + g_free(info->devAlias); - VIR_FREE(info); + g_free(info); } /** diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 812ff45707..f905ef4675 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -21849,17 +21849,14 @@ qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent, size_t i; virDomainDiskDefPtr diskDef; - if (VIR_ALLOC(ret) < 0) - goto error; + ret = g_new0(virDomainFSInfo, 1); ret->mountpoint = g_strdup(agent->mountpoint); ret->name = g_strdup(agent->name); ret->fstype = g_strdup(agent->fstype); - if (agent->disks && - VIR_ALLOC_N(ret->devAlias, agent->ndisks) < 0) - goto error; - + if (agent->disks) + ret->devAlias = g_new0(char *, agent->ndisks); ret->ndevAlias = agent->ndisks; for (i = 0; i < ret->ndevAlias; i++) { @@ -21875,10 +21872,6 @@ qemuAgentFSInfoToPublic(qemuAgentFSInfoPtr agent, } return ret; - - error: - virDomainFSInfoFree(ret); - return NULL; } /* Returns: 0 on success @@ -21896,8 +21889,7 @@ virDomainFSInfoFormat(qemuAgentFSInfoPtr *agentinfo, if (nagentinfo < 0) return ret; - if (VIR_ALLOC_N(info_ret, nagentinfo) < 0) - goto cleanup; + info_ret = g_new0(virDomainFSInfoPtr, nagentinfo); for (i = 0; i < nagentinfo; i++) { if (!(info_ret[i] = qemuAgentFSInfoToPublic(agentinfo[i], vmdef))) @@ -21915,7 +21907,7 @@ virDomainFSInfoFormat(qemuAgentFSInfoPtr *agentinfo, if (info_ret) virDomainFSInfoFree(info_ret[i]); } - VIR_FREE(info_ret); + g_free(info_ret); return ret; } diff --git a/src/remote/remote_daemon_dispatch.c b/src/remote/remote_daemon_dispatch.c index 9c294ddc39..70fdb7f36b 100644 --- a/src/remote/remote_daemon_dispatch.c +++ b/src/remote/remote_daemon_dispatch.c @@ -7032,7 +7032,7 @@ remoteDispatchDomainGetFSInfo(virNetServerPtr server G_GNUC_UNUSED, if (ninfo >= 0) for (i = 0; i < ninfo; i++) virDomainFSInfoFree(info[i]); - VIR_FREE(info); + g_free(info); return rv; } -- 2.21.0

On 1/11/20 12:32 AM, Jonathon Jongsma wrote:
We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job and an agent job while we're querying the agent, any other threads will be blocked from using the monitor while the agent is unresponsive. Because libvirt waits forever for an agent response, this makes us vulnerable to a denial of service from a malicious (or simply buggy) guest agent.
Most of the patches in the first series were already reviewed and pushed, but a couple remain: the filesystem info functions. The problem with these functions is that the agent functions access the vm definition (owned by the domain). If a monitor job is not held while this is done, the vm definition could change while we are looking up the disk alias, leading to a potential crash.
This series tries to fix this by moving the disk alias searching up a level from qemu_agent.c to qemu_driver.c. The code in qemu_agent.c will only return the raw data returned from the agent command response. After the agent response is returned and the agent job is ended, we can then look up the disk alias from the vm definition while the domain object is locked.
In addition, a few nearby cleanups are included in this series, notably changing to glib allocation API in a couple of places.
Jonathon Jongsma (8):
  qemu: rename qemuAgentGetFSInfoInternalDisk()
  qemu: store complete agent filesystem information
  qemu: Don't store disk alias in qemuAgentDiskInfo
  qemu: don't access vmdef within qemu_agent.c
  qemu: remove qemuDomainObjBegin/EndJobWithAgent()
  qemu: use glib alloc in qemuAgentGetFSInfoFillDisks()
  qemu: use glib allocation apis for qemuAgentFSInfo
  Use glib alloc API for virDomainFSInfo
 src/libvirt-domain.c                |  12 +-
 src/qemu/THREADS.txt                |  58 +-----
 src/qemu/qemu_agent.c               | 268 ++++------------------------
 src/qemu/qemu_agent.h               |  33 +++-
 src/qemu/qemu_domain.c              |  56 +-----
 src/qemu/qemu_domain.h              |   7 -
 src/qemu/qemu_driver.c              | 246 +++++++++++++++++++++++--
 src/remote/remote_daemon_dispatch.c |   2 +-
 tests/qemuagenttest.c               | 196 ++++----------------
 9 files changed, 336 insertions(+), 542 deletions(-)
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

and pushed. Thanks for fixing this.

Michal

On 1/10/20 5:32 PM, Jonathon Jongsma wrote:
We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job and an agent job while we're querying the agent, any other threads will be blocked from using the monitor while the agent is unresponsive. Because libvirt waits forever for an agent response, this makes us vulnerable to a denial of service from a malicious (or simply buggy) guest agent.
Most of the patches in the first series were already reviewed and pushed, but a couple remain: the filesystem info functions. The problem with these functions is that the agent functions access the vm definition (owned by the domain). If a monitor job is not held while this is done, the vm definition could change while we are looking up the disk alias, leading to a potential crash.
Did we ever hear back on a CVE assignment for the first series? And do any of the patches in this series also fall under the CVE umbrella?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org

On Thu, 2020-01-16 at 09:46 -0600, Eric Blake wrote:
On 1/10/20 5:32 PM, Jonathon Jongsma wrote:
We have to assume that the guest agent may be malicious, so we don't want to allow any agent queries to block any other libvirt API. By holding a monitor job and an agent job while we're querying the agent, any other threads will be blocked from using the monitor while the agent is unresponsive. Because libvirt waits forever for an agent response, this makes us vulnerable to a denial of service from a malicious (or simply buggy) guest agent.
Most of the patches in the first series were already reviewed and pushed, but a couple remain: the filesystem info functions. The problem with these functions is that the agent functions access the vm definition (owned by the domain). If a monitor job is not held while this is done, the vm definition could change while we are looking up the disk alias, leading to a potential crash.
Did we ever hear back on a CVE assignment for the first series? And do any of the patches in this series also fall under the CVE umbrella?
Good question. I never did hear back about a CVE assignment. This series is just a revision (and refactoring) of a couple of the patches that were NACKed from the first series, so they are relevant to the (potential) CVE.

Jonathon