[libvirt] [PATCH v8 00/10] Support cache tune in libvirt
by Eli Qiao
Addressed comments from v7 -> v8:
Martin:
* Patch subject prefix.
* Move some of the CPU-related information to virhostcpu.c.
* Fix some memory leaks in src/util/virresctrl.c
Martin & Marcelo:
* Don't remove directories which are not maintained by libvirt.
Addressed comments from v6 -> v7:
Marcelo:
* Fix flock usage while VM initialization.
Addressed comments from v5 -> v6:
Marcelo:
* Support other applications operating on /sys/fs/resctrl at the same time.
Libvirt will scan /sys/fs/resctrl again before doing cache allocation;
patch 10 addresses this.
Addressed comments from v4 -> v5:
Marcelo:
* Several typos
* Use flock instead of virFileLock
Addressed comments from v3 -> v4:
Daniel & Marcelo:
* Added concurrency support
Addressed comments from v2 -> v3:
Daniel:
* Fixed coding style, passed `make check` and `make syntax-check`
* Renamed variables and moved some from the header file to the .c file.
* For locking/mutex support, no progress yet.
There has been some discussion on the mailing list, but I could not find a
better way to add locking support without a performance impact.
I'll explain the process below; please advise on what we should do.
VM create:
1) Get the amount of cache left on each bank of the host. This should be
shared among all VMs.
2) Calculate the schemata on the bank based on all created resctrl
domains' schemata.
3) Calculate the default schemata by scanning all domains' schemata.
4) Flush default schemata to /sys/fs/resctrl/schemata
VM destroy:
1) Remove the resctrl domain of that VM
2) Recalculate default schemata
3) Flush default schemata to /sys/fs/resctrl/schemata
The key point is that all VMs share /sys/fs/resctrl/schemata, and
when a VM creates a resctrl domain, the schemata of that VM depends on
the default schemata and on all other existing schematas. So a global
mutex is required.
Before calculating a schemata or updating the default schemata, libvirt
must acquire this global mutex.
I will try to think more about how to support this gracefully in the next
patch set.
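To make the intended serialization concrete, here is a rough standalone
sketch of the flow above. It is illustrative only, not code from this
series: the group name "vm-example", the ff00/00ff masks and the choice of
flock()ing the resctrl mount point itself are all made-up assumptions.

/* Illustrative sketch of the VM-create flow described above. */
#include <fcntl.h>
#include <string.h>
#include <sys/file.h>
#include <sys/stat.h>
#include <unistd.h>

static int write_line(const char *path, const char *line)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;
    ssize_t r = write(fd, line, strlen(line));
    close(fd);
    return r < 0 ? -1 : 0;
}

int main(void)
{
    /* Serialize against other libvirt threads/processes (and other apps)
     * touching resctrl -- this is the "global mutex" mentioned above. */
    int lockfd = open("/sys/fs/resctrl", O_RDONLY);
    if (lockfd < 0 || flock(lockfd, LOCK_EX) < 0)
        return 1;

    /* 1) Create the per-VM group; the kernel populates schemata/tasks. */
    mkdir("/sys/fs/resctrl/vm-example", 0755);

    /* 2) Write the mask computed from the free bits on that bank
     *    (ff00 is just a placeholder).  The vcpu PIDs would similarly be
     *    written into /sys/fs/resctrl/vm-example/tasks. */
    write_line("/sys/fs/resctrl/vm-example/schemata", "L3:0=ff00\n");

    /* 3) Recompute and flush the default schemata so that it no longer
     *    overlaps the bits handed to the VM. */
    write_line("/sys/fs/resctrl/schemata", "L3:0=00ff\n");

    flock(lockfd, LOCK_UN);
    close(lockfd);
    return 0;
}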
Marcelo:
* Added vcpu support for cachetune; this allows the user to define which
vcpus use which cache allocation bank.
<cachetune id='0' host_id='0' size='3072' unit='KiB' vcpus='0,1'/>
vcpus is a cpumap; the vcpu PIDs will be added to the tasks file.
* Added a CDP compatibility mode: the user can specify the l3 cache even
when the host has CDP enabled. See patch 8.
On a CDP-enabled host, specify l3code/l3data with:
<cachetune id='0' host_id='0' type='l3' size='3072' unit='KiB'/>
This will create a schemata like:
L3data:0=0xff00;...
L3code:0=0xff00;...
* Would you please help test whether these functions work?
Martin:
* XML test case: I have not had time to work on this yet. Would you please
show me an example? I would like to amend it later.
This patch series adds support for the CAT feature, which is also called
cache tune in libvirt.
It first exposes the cache information that can be tuned in the capabilities
XML, then adds new domain XML element support to assign a cache bank to a
libvirt domain.
The series adds a util file, `virresctrl.c/h`, as an interface to talk with
the Linux kernel's resctrl sysfs.
There are still two TODOs left:
1. Expose a new public interface to get free cache information.
2. Expose a new public interface to set cache tune on a live domain.
Some discussion about this feature support can be found from:
https://www.redhat.com/archives/libvir-list/2017-January/msg00644.html
Eli Qiao (10):
Resctrl: Add some utils functions
Resctrl: expose cache information to capabilities
Resctrl: Add new xml element to support cache tune
Resctrl: Add private interfaces to operate cache bank
Qemu: Set cache tune while booting a new domain.
Resctrl: enable l3code/l3data
Resctrl: Make sure l3data/l3code are pairs
Resctrl: Compatible mode for cdp enabled
Resctrl: concurrence support
Resctrl: Scan resctrl before doing cache allocation
docs/schemas/domaincommon.rng | 46 ++
include/libvirt/virterror.h | 1 +
po/POTFILES.in | 1 +
src/Makefile.am | 1 +
src/conf/capabilities.c | 56 +++
src/conf/capabilities.h | 23 +
src/conf/domain_conf.c | 182 ++++++++
src/conf/domain_conf.h | 19 +
src/libvirt_private.syms | 11 +
src/nodeinfo.c | 64 +++
src/nodeinfo.h | 1 +
src/qemu/qemu_capabilities.c | 8 +
src/qemu/qemu_driver.c | 6 +
src/qemu/qemu_process.c | 54 +++
src/util/virerror.c | 1 +
src/util/virhostcpu.c | 186 +++++++-
src/util/virhostcpu.h | 6 +
src/util/virresctrl.c | 1027 +++++++++++++++++++++++++++++++++++++++++
src/util/virresctrl.h | 88 ++++
19 files changed, 1764 insertions(+), 17 deletions(-)
create mode 100644 src/util/virresctrl.c
create mode 100644 src/util/virresctrl.h
--
1.9.1
[libvirt] [PATCH] libxl: implement virDomainObjCheckIsActive
by Sagar Ghuge
Add a function which raises an error if the domain is
not active.
Signed-off-by: Sagar Ghuge <ghugesss(a)gmail.com>
---
src/conf/domain_conf.c | 13 +++++++++++++
src/conf/domain_conf.h | 1 +
src/libxl/libxl_driver.c | 4 +---
3 files changed, 15 insertions(+), 3 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 1bc72a4..10a69af 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -2995,6 +2995,19 @@ virDomainObjWait(virDomainObjPtr vm)
}
+int
+virDomainObjCheckIsActive(virDomainObjPtr vm)
+{
+ if (!virDomainObjIsActive(vm)) {
+ virReportError(VIR_ERR_OPERATION_FAILED, "%s",
+ _("domain is not running"));
+ return -1;
+ }
+
+ return 0;
+}
+
+
/**
* Waits for domain condition to be triggered for a specific period of time.
*
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index dd79206..b6c7826 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2559,6 +2559,7 @@ bool virDomainObjTaint(virDomainObjPtr obj,
void virDomainObjBroadcast(virDomainObjPtr vm);
int virDomainObjWait(virDomainObjPtr vm);
+int virDomainObjCheckIsActive(virDomainObjPtr vm);
int virDomainObjWaitUntil(virDomainObjPtr vm,
unsigned long long whenms);
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index 74cb05a..3a487ac 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -1183,10 +1183,8 @@ libxlDomainSuspend(virDomainPtr dom)
if (libxlDomainObjBeginJob(driver, vm, LIBXL_JOB_MODIFY) < 0)
goto cleanup;
- if (!virDomainObjIsActive(vm)) {
- virReportError(VIR_ERR_OPERATION_INVALID, "%s", _("Domain is not running"));
+ if (virDomainObjCheckIsActive(vm) < 0)
goto endjob;
- }
if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_PAUSED) {
if (libxl_domain_pause(cfg->ctx, vm->def->id) != 0) {
--
2.9.3
[libvirt] [PATCH] conf: Don't accept dummy values for <memoryBacking/> attributes
by Michal Privoznik
Our virSomeEnumTypeFromString() functions return either the value of an
item from the enum or -1 on error. Usually, however, the value 0 means
'this value is not set in the domain XML, use some sensible default'.
Therefore, the corresponding strings should not be accepted in the domain
XML; for instance:
<memoryBacking>
<source mode="none"/>
<access mode="default"/>
<allocation mode="none"/>
</memoryBacking>
should be rejected as invalid XML.
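To illustrate the convention behind the '<= 0' change, here is a simplified
sketch of how such a FromString helper behaves. This is not the actual
libvirt code (which generates these helpers via VIR_ENUM_IMPL); the helper
name below is abbreviated for the example.

#include <stddef.h>
#include <string.h>

typedef enum {
    VIR_DOMAIN_MEMORY_SOURCE_NONE = 0,   /* "not set in the XML" placeholder */
    VIR_DOMAIN_MEMORY_SOURCE_FILE,
    VIR_DOMAIN_MEMORY_SOURCE_ANONYMOUS,
} virDomainMemorySource;

static const char *sourceNames[] = { "none", "file", "anonymous" };

/* Returns the enum value, or -1 if the string is unknown. */
static int
exampleMemorySourceTypeFromString(const char *s)
{
    for (size_t i = 0; i < sizeof(sourceNames) / sizeof(sourceNames[0]); i++) {
        if (strcmp(sourceNames[i], s) == 0)
            return i;
    }
    return -1;
}

/* A '< 0' check lets <source type="none"/> slip through (the helper returns
 * 0 for it), silently meaning "use the default"; checking '<= 0' rejects
 * both unknown strings and the dummy placeholder, as this patch does. */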
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
src/conf/domain_conf.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 79bdbdf50..f718b9abc 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -16724,7 +16724,7 @@ virDomainDefParseXML(xmlDocPtr xml,
tmp = virXPathString("string(./memoryBacking/source/@type)", ctxt);
if (tmp) {
- if ((def->mem.source = virDomainMemorySourceTypeFromString(tmp)) < 0) {
+ if ((def->mem.source = virDomainMemorySourceTypeFromString(tmp)) <= 0) {
virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
_("unknown memoryBacking/source/type '%s'"), tmp);
goto error;
@@ -16734,7 +16734,7 @@ virDomainDefParseXML(xmlDocPtr xml,
tmp = virXPathString("string(./memoryBacking/access/@mode)", ctxt);
if (tmp) {
- if ((def->mem.access = virDomainMemoryAccessTypeFromString(tmp)) < 0) {
+ if ((def->mem.access = virDomainMemoryAccessTypeFromString(tmp)) <= 0) {
virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
_("unknown memoryBacking/access/mode '%s'"), tmp);
goto error;
@@ -16744,7 +16744,7 @@ virDomainDefParseXML(xmlDocPtr xml,
tmp = virXPathString("string(./memoryBacking/allocation/@mode)", ctxt);
if (tmp) {
- if ((def->mem.allocation = virDomainMemoryAllocationTypeFromString(tmp)) < 0) {
+ if ((def->mem.allocation = virDomainMemoryAllocationTypeFromString(tmp)) <= 0) {
virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
_("unknown memoryBacking/allocation/mode '%s'"), tmp);
goto error;
--
2.11.0
[libvirt] [PATCH 0/2] Couple of cleanup patches
by John Ferlan
Fix a couple of things seen while continuing work in the area...
John Ferlan (2):
conf: Cleanup matchFCHostToSCSIHost
conf: Fix leak in virNodeDeviceObjListExport
src/conf/node_device_conf.c | 1 +
src/conf/storage_conf.c | 26 +++++++++++++-------------
2 files changed, 14 insertions(+), 13 deletions(-)
--
2.9.3
[libvirt] [PATCH] qemu: Fix deadlock across fork() in QEMU driver
by Marc Hartmayer
The functions that run in virCommand() after fork() must be careful about
accessing any mutexes that may have been locked by other threads in
the parent process. It is possible that another thread in the parent
process holds the lock for the virQEMUDriver while fork() is called.
This leads to a deadlock in the child process when
'virQEMUDriverGetConfig(driver)' is called, and therefore the handshake
never completes between the child and the parent process. Ultimately
the virDomainObjPtr will never be unlocked.
It gets much worse if the other thread of the parent process, the one that
holds the lock for the virQEMUDriver, tries to lock the already locked
virDomainObj. This leads to a completely unresponsive libvirtd.
It's possible to reproduce this case by calling 'virsh start XXX'
and 'virsh managedsave XXX' in a tight loop for multiple domains.
This commit fixes the deadlock in the same way as it is described in
commit 61b52d2e3813cc8c9ff3ab67f232bd0c65f7318d.
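The hazard can be boiled down to a few lines. The sketch below is
illustrative only, not libvirt code: after fork() the child contains just
the single forking thread, so a mutex that some other parent thread happened
to hold at fork() time can never be released in the child.

#include <pthread.h>
#include <sys/wait.h>
#include <unistd.h>

static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&cfg_lock);   /* held while main() calls fork() */
    sleep(2);
    pthread_mutex_unlock(&cfg_lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);                        /* crude way to ensure the lock is held */

    pid_t pid = fork();
    if (pid == 0) {
        /* The worker thread does not exist here, so nothing will ever
         * unlock cfg_lock in the child.  A plain lock would hang forever;
         * trylock just demonstrates the state.  This mirrors why the patch
         * passes cfg in rather than calling virQEMUDriverGetConfig() (which
         * locks the driver) after fork(). */
        static const char msg[] =
            "child: cfg_lock is still held by a thread that no longer exists\n";
        if (pthread_mutex_trylock(&cfg_lock) != 0)
            write(STDOUT_FILENO, msg, sizeof(msg) - 1);
        _exit(0);
    }

    pthread_join(t, NULL);
    waitpid(pid, NULL, 0);
    return 0;
}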
Signed-off-by: Marc Hartmayer <mhartmay(a)linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy(a)linux.vnet.ibm.com>
---
src/qemu/qemu_domain.c | 73 +++++++++++++++++++++++--------------------------
src/qemu/qemu_domain.h | 3 +-
src/qemu/qemu_process.c | 2 +-
3 files changed, 37 insertions(+), 41 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index ea4b282..c187214 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -7045,13 +7045,12 @@ qemuDomainGetHostdevPath(virDomainDefPtr def,
* Returns 0 on success, -1 otherwise (with error reported)
*/
static int
-qemuDomainGetPreservedMounts(virQEMUDriverPtr driver,
+qemuDomainGetPreservedMounts(virQEMUDriverConfigPtr cfg,
virDomainObjPtr vm,
char ***devPath,
char ***devSavePath,
size_t *ndevPath)
{
- virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
char **paths = NULL, **mounts = NULL;
size_t i, nmounts;
@@ -7092,13 +7091,11 @@ qemuDomainGetPreservedMounts(virQEMUDriverPtr driver,
if (ndevPath)
*ndevPath = nmounts;
- virObjectUnref(cfg);
return 0;
error:
virStringListFreeCount(mounts, nmounts);
virStringListFreeCount(paths, nmounts);
- virObjectUnref(cfg);
return -1;
}
@@ -7310,11 +7307,10 @@ qemuDomainCreateDevice(const char *device,
static int
-qemuDomainPopulateDevices(virQEMUDriverPtr driver,
+qemuDomainPopulateDevices(virQEMUDriverConfigPtr cfg,
virDomainObjPtr vm ATTRIBUTE_UNUSED,
const char *path)
{
- virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
const char *const *devices = (const char *const *) cfg->cgroupDeviceACL;
size_t i;
int ret = -1;
@@ -7329,13 +7325,13 @@ qemuDomainPopulateDevices(virQEMUDriverPtr driver,
ret = 0;
cleanup:
- virObjectUnref(cfg);
return ret;
}
static int
-qemuDomainSetupDev(virQEMUDriverPtr driver,
+qemuDomainSetupDev(virQEMUDriverConfigPtr cfg,
+ virSecurityManagerPtr mgr,
virDomainObjPtr vm,
const char *path)
{
@@ -7345,7 +7341,7 @@ qemuDomainSetupDev(virQEMUDriverPtr driver,
VIR_DEBUG("Setting up /dev/ for domain %s", vm->def->name);
- mount_options = virSecurityManagerGetMountOptions(driver->securityManager,
+ mount_options = virSecurityManagerGetMountOptions(mgr,
vm->def);
if (!mount_options &&
@@ -7363,7 +7359,7 @@ qemuDomainSetupDev(virQEMUDriverPtr driver,
if (virFileSetupDev(path, opts) < 0)
goto cleanup;
- if (qemuDomainPopulateDevices(driver, vm, path) < 0)
+ if (qemuDomainPopulateDevices(cfg, vm, path) < 0)
goto cleanup;
ret = 0;
@@ -7375,7 +7371,7 @@ qemuDomainSetupDev(virQEMUDriverPtr driver,
static int
-qemuDomainSetupDisk(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+qemuDomainSetupDisk(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
virDomainDiskDefPtr disk,
const char *devPath)
{
@@ -7401,7 +7397,7 @@ qemuDomainSetupDisk(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
static int
-qemuDomainSetupAllDisks(virQEMUDriverPtr driver,
+qemuDomainSetupAllDisks(virQEMUDriverConfigPtr cfg,
virDomainObjPtr vm,
const char *devPath)
{
@@ -7409,7 +7405,7 @@ qemuDomainSetupAllDisks(virQEMUDriverPtr driver,
VIR_DEBUG("Setting up disks");
for (i = 0; i < vm->def->ndisks; i++) {
- if (qemuDomainSetupDisk(driver,
+ if (qemuDomainSetupDisk(cfg,
vm->def->disks[i],
devPath) < 0)
return -1;
@@ -7421,7 +7417,7 @@ qemuDomainSetupAllDisks(virQEMUDriverPtr driver,
static int
-qemuDomainSetupHostdev(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+qemuDomainSetupHostdev(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
virDomainHostdevDefPtr dev,
const char *devPath)
{
@@ -7447,7 +7443,7 @@ qemuDomainSetupHostdev(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
static int
-qemuDomainSetupAllHostdevs(virQEMUDriverPtr driver,
+qemuDomainSetupAllHostdevs(virQEMUDriverConfigPtr cfg,
virDomainObjPtr vm,
const char *devPath)
{
@@ -7455,7 +7451,7 @@ qemuDomainSetupAllHostdevs(virQEMUDriverPtr driver,
VIR_DEBUG("Setting up hostdevs");
for (i = 0; i < vm->def->nhostdevs; i++) {
- if (qemuDomainSetupHostdev(driver,
+ if (qemuDomainSetupHostdev(cfg,
vm->def->hostdevs[i],
devPath) < 0)
return -1;
@@ -7480,7 +7476,7 @@ qemuDomainSetupChardev(virDomainDefPtr def ATTRIBUTE_UNUSED,
static int
-qemuDomainSetupAllChardevs(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+qemuDomainSetupAllChardevs(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
virDomainObjPtr vm,
const char *devPath)
{
@@ -7498,7 +7494,7 @@ qemuDomainSetupAllChardevs(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
static int
-qemuDomainSetupTPM(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+qemuDomainSetupTPM(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
virDomainObjPtr vm,
const char *devPath)
{
@@ -7527,7 +7523,7 @@ qemuDomainSetupTPM(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
static int
-qemuDomainSetupGraphics(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+qemuDomainSetupGraphics(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
virDomainGraphicsDefPtr gfx,
const char *devPath)
{
@@ -7543,7 +7539,7 @@ qemuDomainSetupGraphics(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
static int
-qemuDomainSetupAllGraphics(virQEMUDriverPtr driver,
+qemuDomainSetupAllGraphics(virQEMUDriverConfigPtr cfg,
virDomainObjPtr vm,
const char *devPath)
{
@@ -7551,7 +7547,7 @@ qemuDomainSetupAllGraphics(virQEMUDriverPtr driver,
VIR_DEBUG("Setting up graphics");
for (i = 0; i < vm->def->ngraphics; i++) {
- if (qemuDomainSetupGraphics(driver,
+ if (qemuDomainSetupGraphics(cfg,
vm->def->graphics[i],
devPath) < 0)
return -1;
@@ -7563,7 +7559,7 @@ qemuDomainSetupAllGraphics(virQEMUDriverPtr driver,
static int
-qemuDomainSetupInput(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+qemuDomainSetupInput(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
virDomainInputDefPtr input,
const char *devPath)
{
@@ -7590,7 +7586,7 @@ qemuDomainSetupInput(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
static int
-qemuDomainSetupAllInputs(virQEMUDriverPtr driver,
+qemuDomainSetupAllInputs(virQEMUDriverConfigPtr cfg,
virDomainObjPtr vm,
const char *devPath)
{
@@ -7598,7 +7594,7 @@ qemuDomainSetupAllInputs(virQEMUDriverPtr driver,
VIR_DEBUG("Setting up inputs");
for (i = 0; i < vm->def->ninputs; i++) {
- if (qemuDomainSetupInput(driver,
+ if (qemuDomainSetupInput(cfg,
vm->def->inputs[i],
devPath) < 0)
return -1;
@@ -7609,7 +7605,7 @@ qemuDomainSetupAllInputs(virQEMUDriverPtr driver,
static int
-qemuDomainSetupRNG(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
+qemuDomainSetupRNG(virQEMUDriverConfigPtr cfg ATTRIBUTE_UNUSED,
virDomainRNGDefPtr rng,
const char *devPath)
{
@@ -7629,7 +7625,7 @@ qemuDomainSetupRNG(virQEMUDriverPtr driver ATTRIBUTE_UNUSED,
static int
-qemuDomainSetupAllRNGs(virQEMUDriverPtr driver,
+qemuDomainSetupAllRNGs(virQEMUDriverConfigPtr cfg,
virDomainObjPtr vm,
const char *devPath)
{
@@ -7637,7 +7633,7 @@ qemuDomainSetupAllRNGs(virQEMUDriverPtr driver,
VIR_DEBUG("Setting up RNGs");
for (i = 0; i < vm->def->nrngs; i++) {
- if (qemuDomainSetupRNG(driver,
+ if (qemuDomainSetupRNG(cfg,
vm->def->rngs[i],
devPath) < 0)
return -1;
@@ -7649,10 +7645,10 @@ qemuDomainSetupAllRNGs(virQEMUDriverPtr driver,
int
-qemuDomainBuildNamespace(virQEMUDriverPtr driver,
+qemuDomainBuildNamespace(virQEMUDriverConfigPtr cfg,
+ virSecurityManagerPtr mgr,
virDomainObjPtr vm)
{
- virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
char *devPath = NULL;
char **devMountsPath = NULL, **devMountsSavePath = NULL;
size_t ndevMountsPath = 0, i;
@@ -7663,7 +7659,7 @@ qemuDomainBuildNamespace(virQEMUDriverPtr driver,
goto cleanup;
}
- if (qemuDomainGetPreservedMounts(driver, vm,
+ if (qemuDomainGetPreservedMounts(cfg, vm,
&devMountsPath, &devMountsSavePath,
&ndevMountsPath) < 0)
goto cleanup;
@@ -7684,7 +7680,7 @@ qemuDomainBuildNamespace(virQEMUDriverPtr driver,
if (virProcessSetupPrivateMountNS() < 0)
goto cleanup;
- if (qemuDomainSetupDev(driver, vm, devPath) < 0)
+ if (qemuDomainSetupDev(cfg, mgr, vm, devPath) < 0)
goto cleanup;
/* Save some mount points because we want to share them with the host */
@@ -7703,25 +7699,25 @@ qemuDomainBuildNamespace(virQEMUDriverPtr driver,
goto cleanup;
}
- if (qemuDomainSetupAllDisks(driver, vm, devPath) < 0)
+ if (qemuDomainSetupAllDisks(cfg, vm, devPath) < 0)
goto cleanup;
- if (qemuDomainSetupAllHostdevs(driver, vm, devPath) < 0)
+ if (qemuDomainSetupAllHostdevs(cfg, vm, devPath) < 0)
goto cleanup;
- if (qemuDomainSetupAllChardevs(driver, vm, devPath) < 0)
+ if (qemuDomainSetupAllChardevs(cfg, vm, devPath) < 0)
goto cleanup;
- if (qemuDomainSetupTPM(driver, vm, devPath) < 0)
+ if (qemuDomainSetupTPM(cfg, vm, devPath) < 0)
goto cleanup;
- if (qemuDomainSetupAllGraphics(driver, vm, devPath) < 0)
+ if (qemuDomainSetupAllGraphics(cfg, vm, devPath) < 0)
goto cleanup;
- if (qemuDomainSetupAllInputs(driver, vm, devPath) < 0)
+ if (qemuDomainSetupAllInputs(cfg, vm, devPath) < 0)
goto cleanup;
- if (qemuDomainSetupAllRNGs(driver, vm, devPath) < 0)
+ if (qemuDomainSetupAllRNGs(cfg, vm, devPath) < 0)
goto cleanup;
if (virFileMoveMount(devPath, "/dev") < 0)
@@ -7743,7 +7739,6 @@ qemuDomainBuildNamespace(virQEMUDriverPtr driver,
ret = 0;
cleanup:
- virObjectUnref(cfg);
for (i = 0; i < ndevMountsPath; i++)
rmdir(devMountsSavePath[i]);
virStringListFreeCount(devMountsPath, ndevMountsPath);
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 8ba807c..72efa33 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -809,7 +809,8 @@ int qemuDomainGetHostdevPath(virDomainDefPtr def,
char ***path,
int **perms);
-int qemuDomainBuildNamespace(virQEMUDriverPtr driver,
+int qemuDomainBuildNamespace(virQEMUDriverConfigPtr cfg,
+ virSecurityManagerPtr mgr,
virDomainObjPtr vm);
int qemuDomainCreateNamespace(virQEMUDriverPtr driver,
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 522f49d..e1a738c 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2636,7 +2636,7 @@ static int qemuProcessHook(void *data)
if (virSecurityManagerClearSocketLabel(h->driver->securityManager, h->vm->def) < 0)
goto cleanup;
- if (qemuDomainBuildNamespace(h->driver, h->vm) < 0)
+ if (qemuDomainBuildNamespace(h->cfg, h->driver->securityManager, h->vm) < 0)
goto cleanup;
if (virDomainNumatuneGetMode(h->vm->def->numa, -1, &mode) == 0) {
--
2.5.5
[libvirt] [PATCH 00/12] implement iothread polling feature in libvirt
by Pavel Hrdina
Pavel Hrdina (12):
conf: introduce domain XML element <polling> for iothread
lib: introduce an API to add new iothread with parameters
lib: introduce an API to modify parameters of existing iothread
virsh: extend iothreadadd to support virDomainAddIOThreadParams
virsh: introduce command iothreadmod that uses
virDomainModIOThreadParams
qemu_capabilities: detect whether iothread polling is supported
util: properly handle NULL props in
virQEMUBuildObjectCommandlineFromJSON
qemu_monitor: extend qemuMonitorGetIOThreads to fetch polling data
qemu: implement iothread polling
qemu: implement virDomainAddIOThreadParams API
qemu: implement virDomainModIOThreadParams API
news: add entry for iothread polling feature
docs/formatdomain.html.in | 19 +-
docs/news.xml | 9 +
docs/schemas/domaincommon.rng | 24 ++
include/libvirt/libvirt-domain.h | 44 ++++
src/conf/domain_conf.c | 199 +++++++++++++-
src/conf/domain_conf.h | 18 +-
src/driver-hypervisor.h | 16 ++
src/libvirt-domain.c | 140 ++++++++++
src/libvirt_private.syms | 2 +
src/libvirt_public.syms | 6 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 78 +++++-
src/qemu/qemu_command.h | 5 +-
src/qemu/qemu_domain.c | 23 +-
src/qemu/qemu_domain.h | 6 +
src/qemu/qemu_driver.c | 292 ++++++++++++++++++---
src/qemu/qemu_monitor.c | 25 +-
src/qemu/qemu_monitor.h | 9 +-
src/qemu/qemu_monitor_json.c | 51 +++-
src/qemu/qemu_monitor_json.h | 7 +-
src/qemu/qemu_process.c | 14 +-
src/remote/remote_driver.c | 2 +
src/remote/remote_protocol.x | 34 ++-
src/remote_protocol-structs | 20 ++
src/util/virqemu.c | 3 +-
.../generic-iothreads-no-polling.xml | 22 ++
.../generic-iothreads-polling-disabled.xml | 24 ++
.../generic-iothreads-polling-enabled-fail.xml | 24 ++
.../generic-iothreads-polling-enabled.xml | 24 ++
tests/genericxml2xmltest.c | 6 +
.../qemucapabilitiesdata/caps_2.9.0.x86_64.replies | 12 +
tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml | 1 +
tests/qemumonitorjsontest.c | 2 +-
.../qemuxml2argv-iothreads-polling-disabled.args | 23 ++
.../qemuxml2argv-iothreads-polling-disabled.xml | 36 +++
.../qemuxml2argv-iothreads-polling-enabled.args | 23 ++
.../qemuxml2argv-iothreads-polling-enabled.xml | 36 +++
...emuxml2argv-iothreads-polling-not-supported.xml | 1 +
tests/qemuxml2argvtest.c | 8 +
tests/testutils.c | 3 +-
tools/virsh-domain.c | 188 ++++++++++++-
tools/virsh.pod | 18 ++
43 files changed, 1442 insertions(+), 58 deletions(-)
create mode 100644 tests/genericxml2xmlindata/generic-iothreads-no-polling.xml
create mode 100644 tests/genericxml2xmlindata/generic-iothreads-polling-disabled.xml
create mode 100644 tests/genericxml2xmlindata/generic-iothreads-polling-enabled-fail.xml
create mode 100644 tests/genericxml2xmlindata/generic-iothreads-polling-enabled.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.xml
create mode 120000 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-not-supported.xml
--
2.11.1
[libvirt] [PATCH v2 0/5] storage: modularize storage backend drivers
by Peter Krempa
Version 2 contains one more patch that adds disables the 'zfs' and 'vstorage'
backends in the spec file. The rest is the same, but I've reposted it due to
it's nature.
Peter Krempa (5):
spec: Don't check for storage driver backends in configure script
storage: Turn storage backends into dynamic modules
tests: drivermodule: Make sure that all compiled storage backends can
be loaded
spec: Modularize the storage driver
news: Mention storage driver split
docs/news.xml | 10 +++
libvirt.spec.in | 188 ++++++++++++++++++++++++++++++++++++------
src/Makefile.am | 85 ++++++++++++++++++-
src/storage/storage_backend.c | 70 ++++++++++++----
src/storage/storage_backend.h | 2 +-
src/storage/storage_driver.c | 19 ++++-
src/storage/storage_driver.h | 1 +
tests/Makefile.am | 4 +-
tests/virdrivermoduletest.c | 2 +-
tests/virstoragetest.c | 2 +-
10 files changed, 337 insertions(+), 46 deletions(-)
--
2.11.1
[libvirt] spice-gl bug: Failed to initialize EGL render node for SPICE GL
by Paul Kek
Hello,
I am having issues getting spice-gl to work properly with libvirt.
There is a Bugzilla report which describes the issue, but so far there is
still no solution (the patch there didn't work on my system).
As I also mentioned there, I tried adding the `renderD128` device to
`cgroup_device_acl` in the qemu.conf file, but this produced different errors:
qemu-system-x86_64: egl: eglGetDisplay failed
qemu-system-x86_64: egl: EGL_KHR_surfaceless_context not supported
qemu-system-x86_64: Failed to initialize EGL render node for SPICE GL
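For reference, this is roughly what that qemu.conf change looks like. The
excerpt below is illustrative: the exact default device list depends on the
libvirt version, so keep the existing entries and append the render node.

# /etc/libvirt/qemu.conf
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm",
    "/dev/dri/renderD128"
]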
Paul
[libvirt] [PATCH] rpc: fix use-after-free when sending event message
by Wang King
If a process registers event callbacks through its client connection and
calls libvirt APIs that use the same virConnectPtr from within those
callback functions, then when the process exits abnormally and the client
disconnects, there is a possibility that the main thread still refers to
the virNetServerClient just after it has been freed by a job thread of
libvirtd.
Following is the backtrace:
#0 0x00007fda223d66d8 in virClassIsDerivedFrom (klass=0xdeadbeef,parent=0x7fda24c81b40)
#1 0x00007fda223d6a1e in virObjectIsClass (anyobj=anyobj@entry=0x7fd9e575b400,klass=<optimized out>)
#2 0x00007fda223d6a44 in virObjectLock (anyobj=anyobj@entry=0x7fd9e575b400)
#3 0x00007fda22507f71 in virNetServerClientSendMessage (client=client@entry=0x7fd9e575b400, msg=msg@entry=0x7fd9ec30de90)
#4 0x00007fda230d714d in remoteDispatchObjectEventSend (client=0x7fd9e575b400, program=0x7fda24c844e0, procnr=procnr@entry=348, proc=0x7fda2310e5e0 <xdr_remote_domain_event_callback_tunable_msg>, data=data@entry=0x7ffc3857fdb0)
#5 0x00007fda230dd71b in remoteRelayDomainEventTunable (conn=<optimized out>, dom=0x7fda27cd7660, params=0x7fda27f3aae0, nparams=1, opaque=0x7fd9e6c99e00)
#6 0x00007fda224484cb in virDomainEventDispatchDefaultFunc (conn=0x7fda27cd0120, event=0x7fda2736ea00, cb=0x7fda230dd610 <remoteRelayDomainEventTunable>, cbopaque=0x7fd9e6c99e00)
#7 0x00007fda22446871 in virObjectEventStateDispatchCallbacks (callbacks=<optimized out>, callbacks=<optimized out>, event=0x7fda2736ea00, state=0x7fda24ca3960)
#8 virObjectEventStateQueueDispatch (callbacks=0x7fda24c65800, queue=0x7ffc3857fe90, state=0x7fda24ca3960)
#9 virObjectEventStateFlush (state=0x7fda24ca3960)
#10 virObjectEventTimer (timer=<optimized out>, opaque=0x7fda24ca3960)
#11 0x00007fda223ae8b9 in virEventPollDispatchTimeouts ()
#12 virEventPollRunOnce ()
#13 0x00007fda223ad1d2 in virEventRunDefaultImpl ()
#14 0x00007fda225046cd in virNetDaemonRun (dmn=dmn@entry=0x7fda24c775c0)
#15 0x00007fda230d6351 in main (argc=<optimized out>, argv=<optimized out>)
(gdb) p *(virNetServerClientPtr)0x7fd9e575b400
$2 = {parent = {parent = {u = {dummy_align1 = 140573849338048, dummy_align2 = 0x7fd9e65ac0c0, s = {magic = 3864707264, refs = 32729}}, klass = 0x7fda00000078}, lock = {lock = {__data = {__lock = 0,
__count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}}}, wantClose = false,
delayedClose = false, sock = 0x0, auth = 0, readonly = false, tlsCtxt = 0x0, tls = 0x0, sasl = 0x0, sockTimer = 0, identity = 0x0, nrequests = 0, nrequests_max = 0, rx = 0x0, tx = 0x0, filters = 0x0,
nextFilterID = 0, dispatchFunc = 0x0, dispatchOpaque = 0x0, privateData = 0x0, privateDataFreeFunc = 0x0, privateDataPreExecRestart = 0x0, privateDataCloseFunc = 0x0, keepalive = 0x0}
---
src/rpc/virnetserverclient.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/src/rpc/virnetserverclient.c b/src/rpc/virnetserverclient.c
index 81da82c..562516f 100644
--- a/src/rpc/virnetserverclient.c
+++ b/src/rpc/virnetserverclient.c
@@ -1021,6 +1021,12 @@ void virNetServerClientClose(virNetServerClientPtr client)
client->sock = NULL;
}
+ if (client->privateData &&
+ client->privateDataFreeFunc) {
+ client->privateDataFreeFunc(client->privateData);
+ client->privateData = NULL;
+ }
+
virObjectUnlock(client);
}
--
2.8.3