[libvirt] [PATCH 0/6] A few misc fixes from LGTM static analysis
by Daniel P. Berrangé
There is an online service called LGTM (Looks Good To Me) which does
static analysis of open source projects, and I happened to learn that
they include coverage of libvirt:
https://lgtm.com/projects/g/libvirt/libvirt
I looked at the alerts they reported: currently no errors, 41 warnings
and 90 recommendations (79 of which are FIXME comments :-).
There's nothing particularly important they identify right now, but I
felt like addressing a few of them anyway, hence this series.
Daniel P. Berrangé (6):
conf: remove pointless check on enum value
remote: remove variable whose value is a constant
storage: pass struct _virStorageBackendQemuImgInfo by reference
qemu: pass virDomainDeviceInfo by reference
hyperv: remove unused 'total' variable
hyperv: use "is None" not "== None" for PEP-8 compliance
src/conf/domain_conf.c | 20 ++++++++---------
src/hyperv/hyperv_wmi_generator.py | 3 +--
src/qemu/qemu_command.c | 4 ++--
src/qemu/qemu_domain.c | 10 ++++-----
src/qemu/qemu_domain.h | 9 ++++----
src/qemu/qemu_domain_address.c | 2 +-
src/remote/remote_daemon_dispatch.c | 8 ++-----
src/storage/storage_util.c | 35 ++++++++++++++---------------
8 files changed, 42 insertions(+), 49 deletions(-)
--
2.20.1
[libvirt] [PATCH v2 0/2] Enum formatting changes
by Peter Krempa
v2 contains a tweak to the CSS to widen the page slightly and keep
borders on narrow screens.
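For context, "bit shift and hex notation" means rendering a bitwise flag
enum value such as 4 as "1 << 2" alongside its hex form "0x4", so readers
can see at a glance which bit each flag occupies. A rough, hypothetical
sketch of such a formatter - not the actual apibuild.py/newapi.xsl change:

    def format_flag_value(value):
        """Render a flag enum value with bit-shift and hex notation."""
        if value > 0 and value & (value - 1) == 0:   # exact power of two
            return "%d (0x%x; 1 << %d)" % (value, value, value.bit_length() - 1)
        return "%d (0x%x)" % (value, value)

    for v in (1, 2, 8, 3):
        print(format_flag_value(v))
    # 1 (0x1; 1 << 0)
    # 2 (0x2; 1 << 1)
    # 8 (0x8; 1 << 3)
    # 3 (0x3)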
Peter Krempa (2):
docs: Format bit shift and hex notation for bitwise flag enums
docs: css: Make docs page wider while still accommodating narrow
screens
docs/apibuild.py | 10 ++++++++++
docs/libvirt.css | 9 +++++++--
docs/newapi.xsl | 22 ++++++++++++++++++++--
3 files changed, 37 insertions(+), 4 deletions(-)
--
2.20.1
[libvirt] [PATCH 0/7] Add support to list Storage Driver backend capabilities
by John Ferlan
Although I suppose this could have been an RFC, I just went with
a v1. I would think at least the first 4 patches are non-controversial.
Beyond that it depends on what is "expected" in the capabilities
output for the storage driver.
John Ferlan (7):
conf: Extract host XML formatting from virCapabilitiesFormatXML
conf: Alter virCapabilitiesFormatHostXML to take virCapsHostPtr
conf: Extract guest XML formatting from virCapabilitiesFormatXML
conf: Alter virCapabilitiesFormatGuestXML to take virCapsGuestPtr
conf: Introduce storage pool functions into capabilities
storage: Process storage pool capabilities
storage: Add storage backend pool/vol APIs to capability output
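For context, virCapabilitiesFormatXML (refactored by the first four patches)
is the formatter behind the capabilities document that clients retrieve with
virConnectGetCapabilities; from the Python bindings that is simply
(illustrative only, assuming a local qemu:///system connection):

    import libvirt

    conn = libvirt.open("qemu:///system")
    print(conn.getCapabilities())   # capabilities XML produced by virCapabilitiesFormatXML
    conn.close()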
src/conf/capabilities.c | 449 ++++++++++++++++++++++------------
src/conf/capabilities.h | 19 ++
src/conf/virstorageobj.h | 4 +
src/libvirt_private.syms | 1 +
src/storage/storage_backend.c | 65 +++++
src/storage/storage_backend.h | 3 +
src/storage/storage_driver.c | 17 ++
7 files changed, 400 insertions(+), 158 deletions(-)
--
2.20.1
[libvirt] list vsock cids allocated to VMs?
by Brian Kroth
Other than dumping and parsing the config for all running VMs, is
there a way to get the current mapping of vsock CIDs to the VM
domains they are allocated to?
Thanks,
Brian
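For context, the "dump and parse" fallback mentioned above amounts to
something like the sketch below, using the libvirt Python bindings and
ElementTree; the guest's CID lives in the <cid> element under <vsock> in
the domain XML. This is only an illustration of that fallback, not a
dedicated API:

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        cid = ET.fromstring(dom.XMLDesc(0)).find("./devices/vsock/cid")
        if cid is not None:
            print("%s: cid=%s" % (dom.name(), cid.get("address")))
    conn.close()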
[libvirt] [PATCH 00/11] qemu: Labelling cleanup and fix labelling for block copy pivot (blockdev-add saga)
by Peter Krempa
Refactor some labelling code and then move the backing chain labelling
out of the block copy pivot operation into the starting of the job.
Peter Krempa (11):
qemu: domain: Clarify temp variable scope in
qemuDomainDetermineDiskChain
qemu: domain: Allow overriding disk source in
qemuDomainDetermineDiskChain
qemu: cgroup: Change qemu[Setup|Teardown]DiskCgroup to take
virStorageSource
security: Remove security driver internals for disk labelling
qemu: security: Add 'backingChain' flag to
qemuSecurity[Set|Restore]ImageLabel
qemu: security: Replace and remove qemuSecurity[Set|Restore]DiskLabel
security: Remove disk labelling functions and fix callers
qemu: driver: Remove disk source munging in qemuDomainBlockPivot
locking: Use virDomainLockImage[Attach|Detach] instead of *Disk
qemu: hotplug: Refactor qemuHotplugPrepareDiskAccess to work on
virStorageSource
qemu: Label backing chain of user-provided target of blockCopy when
starting the job
src/libvirt_private.syms | 4 --
src/libxl/libxl_driver.c | 14 +++---
src/locking/domain_lock.c | 17 -------
src/locking/domain_lock.h | 8 ----
src/lxc/lxc_controller.c | 3 +-
src/lxc/lxc_driver.c | 4 +-
src/qemu/qemu_blockjob.c | 2 +-
src/qemu/qemu_cgroup.c | 14 +++---
src/qemu/qemu_cgroup.h | 8 ++--
src/qemu/qemu_domain.c | 60 +++++++++++++++---------
src/qemu/qemu_domain.h | 1 +
src/qemu/qemu_driver.c | 53 +++++++++------------
src/qemu/qemu_hotplug.c | 79 +++++++++++++-------------------
src/qemu/qemu_process.c | 26 ++++++++++-
src/qemu/qemu_security.c | 74 +++---------------------------
src/qemu/qemu_security.h | 14 ++----
src/security/security_apparmor.c | 24 ++--------
src/security/security_dac.c | 40 +++++-----------
src/security/security_driver.h | 15 ++----
src/security/security_manager.c | 70 ++++------------------------
src/security/security_manager.h | 12 ++---
src/security/security_nop.c | 25 ++--------
src/security/security_selinux.c | 42 +++++------------
src/security/security_stack.c | 50 +++-----------------
24 files changed, 202 insertions(+), 457 deletions(-)
--
2.20.1
[libvirt] [PATCH 0/2] Fix a couple build issues
by John Ferlan
A recent adjustment to add XML namespace processing for storage pool
XML broke the mingw* builds:
CC storagevolxml2xmltest.o
gmake[2]: *** No rule to make target '../src/libvirt_driver_storage_impl.la', needed by 'storagepoolxml2xmltest.exe'. Stop.
gmake[2]: *** Waiting for unfinished jobs....
CC storagepoolxml2xmltest.o
Looking at things again, I see that other targets that include
../src/libvirt_driver_storage_impl.la are inside a WITH_STORAGE
conditional. So move the build there and alter the ! WITH_STORAGE
branch as well.
While at it, I also noticed that I didn't add storagepoolxml2argvtest.c to
the ! WITH_STORAGE branch either, so that's the second patch.
The build works for me, but I have no idea whether it will work for mingw* -
I am assuming so based on other examples. An alternative would be
to not test the XML namespace processing in storagepoolxml2xml -
although given that mingw* apparently doesn't build/support storage
anyway, that is perhaps not the right approach.
John Ferlan (2):
tests: Fix build issue with storagevolxml2xmltest
tests: Add storagepoolxml2argvtest source to EXTRA_DIST
tests/Makefile.am | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
--
2.20.1
[libvirt] [PATCH] tests: Build and run storagevolxml2xmltest iff WITH_STORAGE
by Michal Privoznik
Commit 7a227688a83880 assumes that libvirt_driver_storage_impl.la
is always available. Well, it is not. Users have the option to turn
the storage driver off, in which case the library isn't built and
linking the test against it then fails.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
An alternative approach might be to move only those test cases that
require WITH_STORAGE under an #ifdef and link the library only if
WITH_STORAGE is enabled. But this is harder to do properly - I mean
for future test cases it will be hard to decide whether to put them
inside or outside of the WITH_STORAGE section.
tests/Makefile.am | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/tests/Makefile.am b/tests/Makefile.am
index ab4c716529..c034fe0bf3 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -368,6 +368,7 @@ if WITH_STORAGE
test_programs += storagevolxml2argvtest
test_programs += storagepoolxml2argvtest
test_programs += virstorageutiltest
+test_programs += storagepoolxml2xmltest
endif WITH_STORAGE
if WITH_STORAGE_FS
@@ -384,7 +385,7 @@ test_programs += nsstest nssguesttest
test_libraries += nssmock.la
endif WITH_NSS
-test_programs += storagevolxml2xmltest storagepoolxml2xmltest
+test_programs += storagevolxml2xmltest
test_programs += nodedevxml2xmltest
@@ -924,9 +925,17 @@ storagepoolxml2argvtest_LDADD = \
../src/libvirt_util.la \
$(LDADDS)
+storagepoolxml2xmltest_SOURCES = \
+ storagepoolxml2xmltest.c \
+ testutils.c testutils.h
+storagepoolxml2xmltest_LDADD = $(LDADDS) \
+ ../src/libvirt_driver_storage_impl.la \
+ $(GNULIB_LIBS)
+
else ! WITH_STORAGE
EXTRA_DIST += storagevolxml2argvtest.c
EXTRA_DIST += virstorageutiltest.c
+EXTRA_DIST += storagepoolxml2xmltest.c
endif ! WITH_STORAGE
storagevolxml2xmltest_SOURCES = \
@@ -934,13 +943,6 @@ storagevolxml2xmltest_SOURCES = \
testutils.c testutils.h
storagevolxml2xmltest_LDADD = $(LDADDS)
-storagepoolxml2xmltest_SOURCES = \
- storagepoolxml2xmltest.c \
- testutils.c testutils.h
-storagepoolxml2xmltest_LDADD = $(LDADDS) \
- ../src/libvirt_driver_storage_impl.la \
- $(GNULIB_LIBS)
-
nodedevxml2xmltest_SOURCES = \
nodedevxml2xmltest.c \
testutils.c testutils.h
--
2.19.2
[libvirt] [nicsysco.com] Weird Libvirt Behavior
by nico
Hi folks,
First-time contributor, but I felt that what I discovered was (probably) a
very rare situation.
I'm running a CentOS server (my only Linux deployment) to which customers
all over the U.S. connect to process their micro-lender businesses. There
are several VMs, among others one which runs the fortress system, called a2.
In the beginning the .raw file was about 10GB, which at the time was 5X
overkill in terms of capacity.
For years we had no problems and the CentOS box would tick over day after
day without so much as a hiccup.
About three months ago a2 started to slow down, almost to the point of
timing out when applications and users logged on. The band-aid was to copy
an earlier a2.raw backup over the current one on a regular basis, and that
would rectify the problem. At first, applying this band-aid on Sunday nights
would suffice. But later we had to increase it to twice a week, and these
last couple of weeks we had to do it almost every night. The system also
sent alerts that a "Degraded Array event had been detected on md device
/dev/md1". Inspecting the drives showed no crisis.
Today it folded completely and brought the system down, leaving clients
telling customers walking into their stores that "our computers are down".
Restarting the box just brought a2 up in a paused state from which it never
recovered. We had to killall it to get rid of it.
Having nowhere else to go with it, I decided to rebuild a2 on another,
separate drive to at least address the degraded array alerts. As I edited
the .xml file, I saw the following:
<source file='/var/lib/libvirt/:machines/a2/a2-disk1.raw'/>
What the hell was that colon doing there? I checked the size of the .raw
file: it had grown to over 96GB. Just as a sanity check, I looked at the
other VMs' .xml files and, as I expected, they didn't have a colon.
I removed the colon and virsh-started a2, which fired up immediately, with
the rest of the system following suit. No doubt that ":" was the culprit!
My question is: would that colon cause an append action to the .raw file? We
have no idea when it got in there or how; we haven't worked on that XML file
for a long time. And why would a2 even fire up at all?
It would be great to hear what the gurus think about that.
Thanks
Nico van Niekerk
Agoura Hills, CA 91301
[libvirt] [PATCH] qemu: Rework setting process affinity
by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1503284
The way we currently start qemu, from a CPU affinity POV, is as
follows:
1) the child process has its affinity set to all online CPUs (unless
some vcpu pinning was given in the domain XML)
2) once qemu is running, the cpuset cgroup is configured taking
memory pinning into account
The problem is that we let qemu allocate its memory just anywhere in
1) and then rely on 2) to be able to move the memory to the
configured NUMA nodes. This might not always be possible (e.g.
qemu might lock some parts of its memory) and is very suboptimal
(copying large amounts of memory between NUMA nodes takes a
significant amount of time). The solution is to set the affinity
correctly from the beginning and then possibly refine it later via cgroups.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
src/qemu/qemu_process.c | 152 ++++++++++++++++++++++------------------
1 file changed, 83 insertions(+), 69 deletions(-)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 9ccc3601a2..a4668f6773 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2435,6 +2435,44 @@ qemuProcessDetectIOThreadPIDs(virQEMUDriverPtr driver,
}
+static int
+qemuProcessGetAllCpuAffinity(pid_t pid,
+ virBitmapPtr *cpumapRet)
+{
+ VIR_AUTOPTR(virBitmap) cpumap = NULL;
+ VIR_AUTOPTR(virBitmap) hostcpumap = NULL;
+ int hostcpus;
+
+ *cpumapRet = NULL;
+
+ if (!virHostCPUHasBitmap())
+ return 0;
+
+ if (!(hostcpumap = virHostCPUGetOnlineBitmap()) ||
+ !(cpumap = virProcessGetAffinity(pid)))
+ return -1;
+
+ if (!virBitmapEqual(hostcpumap, cpumap)) {
+ /* setaffinity fails if you set bits for CPUs which
+ * aren't present, so we have to limit ourselves */
+ if ((hostcpus = virHostCPUGetCount()) < 0)
+ return -1;
+
+ if (hostcpus > QEMUD_CPUMASK_LEN)
+ hostcpus = QEMUD_CPUMASK_LEN;
+
+ virBitmapFree(cpumap);
+ if (!(cpumap = virBitmapNew(hostcpus)))
+ return -1;
+
+ virBitmapSetAll(cpumap);
+ }
+
+ VIR_STEAL_PTR(*cpumapRet, cpumap);
+ return 0;
+}
+
+
/*
* To be run between fork/exec of QEMU only
*/
@@ -2443,9 +2481,9 @@ static int
qemuProcessInitCpuAffinity(virDomainObjPtr vm)
{
int ret = -1;
- virBitmapPtr cpumap = NULL;
virBitmapPtr cpumapToSet = NULL;
- virBitmapPtr hostcpumap = NULL;
+ VIR_AUTOPTR(virBitmap) hostcpumap = NULL;
+ virDomainNumatuneMemMode mem_mode;
qemuDomainObjPrivatePtr priv = vm->privateData;
if (!vm->pid) {
@@ -2454,59 +2492,36 @@ qemuProcessInitCpuAffinity(virDomainObjPtr vm)
return -1;
}
- if (vm->def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO) {
- VIR_DEBUG("Set CPU affinity with advisory nodeset from numad");
- cpumapToSet = priv->autoCpuset;
+ /* Here is the deal, we can't set cpuset.mems before qemu is
+ * started as it clashes with KVM allocation. Therefore we
+ * used to let qemu allocate its memory anywhere as we would
+ * then move the memory to desired NUMA node via CGroups.
+ * However, that might not be always possible because qemu
+ * might lock some parts of its memory (e.g. due to VFIO).
+ * Solution is to set some temporary affinity now and then
+ * fix it later, once qemu is already running. */
+ if (virDomainNumaGetNodeCount(vm->def->numa) <= 1 &&
+ virDomainNumatuneGetMode(vm->def->numa, -1, &mem_mode) == 0 &&
+ mem_mode == VIR_DOMAIN_NUMATUNE_MEM_STRICT) {
+ if (virDomainNumatuneMaybeGetNodeset(vm->def->numa,
+ priv->autoNodeset,
+ &cpumapToSet,
+ -1) < 0)
+ goto cleanup;
+ } else if (vm->def->cputune.emulatorpin) {
+ cpumapToSet = vm->def->cputune.emulatorpin;
} else {
- VIR_DEBUG("Set CPU affinity with specified cpuset");
- if (vm->def->cpumask) {
- cpumapToSet = vm->def->cpumask;
- } else {
- /* You may think this is redundant, but we can't assume libvirtd
- * itself is running on all pCPUs, so we need to explicitly set
- * the spawned QEMU instance to all pCPUs if no map is given in
- * its config file */
- int hostcpus;
-
- if (virHostCPUHasBitmap()) {
- hostcpumap = virHostCPUGetOnlineBitmap();
- cpumap = virProcessGetAffinity(vm->pid);
- }
-
- if (hostcpumap && cpumap && virBitmapEqual(hostcpumap, cpumap)) {
- /* we're using all available CPUs, no reason to set
- * mask. If libvirtd is running without explicit
- * affinity, we can use hotplugged CPUs for this VM */
- ret = 0;
- goto cleanup;
- } else {
- /* setaffinity fails if you set bits for CPUs which
- * aren't present, so we have to limit ourselves */
- if ((hostcpus = virHostCPUGetCount()) < 0)
- goto cleanup;
-
- if (hostcpus > QEMUD_CPUMASK_LEN)
- hostcpus = QEMUD_CPUMASK_LEN;
-
- virBitmapFree(cpumap);
- if (!(cpumap = virBitmapNew(hostcpus)))
- goto cleanup;
-
- virBitmapSetAll(cpumap);
-
- cpumapToSet = cpumap;
- }
- }
+ if (qemuProcessGetAllCpuAffinity(vm->pid, &hostcpumap) < 0)
+ goto cleanup;
+ cpumapToSet = hostcpumap;
}
- if (virProcessSetAffinity(vm->pid, cpumapToSet) < 0)
+ if (cpumapToSet &&
+ virProcessSetAffinity(vm->pid, cpumapToSet) < 0)
goto cleanup;
ret = 0;
-
cleanup:
- virBitmapFree(cpumap);
- virBitmapFree(hostcpumap);
return ret;
}
#else /* !defined(HAVE_SCHED_GETAFFINITY) && !defined(HAVE_BSD_CPU_AFFINITY) */
@@ -2586,7 +2601,8 @@ qemuProcessSetupPid(virDomainObjPtr vm,
qemuDomainObjPrivatePtr priv = vm->privateData;
virDomainNumatuneMemMode mem_mode;
virCgroupPtr cgroup = NULL;
- virBitmapPtr use_cpumask;
+ virBitmapPtr use_cpumask = NULL;
+ VIR_AUTOPTR(virBitmap) hostcpumap = NULL;
char *mem_mask = NULL;
int ret = -1;
@@ -2598,12 +2614,21 @@ qemuProcessSetupPid(virDomainObjPtr vm,
}
/* Infer which cpumask shall be used. */
- if (cpumask)
+ if (cpumask) {
use_cpumask = cpumask;
- else if (vm->def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO)
+ } else if (vm->def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO) {
use_cpumask = priv->autoCpuset;
- else
+ } else if (vm->def->cpumask) {
use_cpumask = vm->def->cpumask;
+ } else if (virHostCPUHasBitmap()) {
+ /* You may think this is redundant, but we can't assume libvirtd
+ * itself is running on all pCPUs, so we need to explicitly set
+ * the spawned QEMU instance to all pCPUs if no map is given in
+ * its config file */
+ if (qemuProcessGetAllCpuAffinity(pid, &hostcpumap) < 0)
+ goto cleanup;
+ use_cpumask = hostcpumap;
+ }
/*
* If CPU cgroup controller is not initialized here, then we need
@@ -2628,13 +2653,7 @@ qemuProcessSetupPid(virDomainObjPtr vm,
qemuSetupCgroupCpusetCpus(cgroup, use_cpumask) < 0)
goto cleanup;
- /*
- * Don't setup cpuset.mems for the emulator, they need to
- * be set up after initialization in order for kvm
- * allocations to succeed.
- */
- if (nameval != VIR_CGROUP_THREAD_EMULATOR &&
- mem_mask && virCgroupSetCpusetMems(cgroup, mem_mask) < 0)
+ if (mem_mask && virCgroupSetCpusetMems(cgroup, mem_mask) < 0)
goto cleanup;
}
@@ -6634,12 +6653,7 @@ qemuProcessLaunch(virConnectPtr conn,
/* This must be done after cgroup placement to avoid resetting CPU
* affinity */
- if (!vm->def->cputune.emulatorpin &&
- qemuProcessInitCpuAffinity(vm) < 0)
- goto cleanup;
-
- VIR_DEBUG("Setting emulator tuning/settings");
- if (qemuProcessSetupEmulator(vm) < 0)
+ if (qemuProcessInitCpuAffinity(vm) < 0)
goto cleanup;
VIR_DEBUG("Setting cgroup for external devices (if required)");
@@ -6708,10 +6722,6 @@ qemuProcessLaunch(virConnectPtr conn,
if (qemuProcessUpdateAndVerifyCPU(driver, vm, asyncJob) < 0)
goto cleanup;
- VIR_DEBUG("Setting up post-init cgroup restrictions");
- if (qemuSetupCpusetMems(vm) < 0)
- goto cleanup;
-
VIR_DEBUG("setting up hotpluggable cpus");
if (qemuDomainHasHotpluggableStartupVcpus(vm->def)) {
if (qemuDomainRefreshVcpuInfo(driver, vm, asyncJob, false) < 0)
@@ -6737,6 +6747,10 @@ qemuProcessLaunch(virConnectPtr conn,
if (qemuProcessDetectIOThreadPIDs(driver, vm, asyncJob) < 0)
goto cleanup;
+ VIR_DEBUG("Setting emulator tuning/settings");
+ if (qemuProcessSetupEmulator(vm) < 0)
+ goto cleanup;
+
VIR_DEBUG("Setting global CPU cgroup (if required)");
if (qemuSetupGlobalCpuCgroup(vm) < 0)
goto cleanup;
--
2.19.2
[libvirt] [PATCH v5 0/9] Allow adding mountOpts to the storage pool mount command
by John Ferlan
v4: https://www.redhat.com/archives/libvir-list/2019-January/msg00614.html
NB: Still keeping the same subject for the cover letter to keep the same
context, even though the contents are very different from the original.
* Alter patch1 to make the addition of mount options more generic to
"fs" and "netfs" pools. Tested/generated the output for all the various
pools by modifying the sources, using VIR_TEST_REGENERATE_OUTPUT and
having specific "linux" or "freebsd" output.
* Alter patch2 to rephrase the news article and add the R-by
* Alter patch3 to add the R-by
* Alter patch4 to account for changes in patch1 and to only add the
nfsvers=%u option for NETFS type pools, passing that argument along to
the called methods. Left off the R-by since I changed things.
* Alter patch5 and patch6 to add the R-by
* Combine patch7 and patch8 into one larger patch accounting for review
comments mostly from the former patch8. Modify the docs to indicate the lack
of support guarantees, change "netfs:" to be just "fs:", fix the
comments in storage_pool_fs, and change methods/structs to use FS and
not NetFS. Left off the R-by since I changed things.
* Modify patch9 to use the new struct names and add a taint VIR_WARN message.
Theoretically this could be combined with the previous one too if we really
wanted to make one large patch. Left off the R-by since I changed things.
* Modify patch10 to add the taint message and fix the poorly cut-n-paste'd
code that neglected to rename the new struct to use "Config" instead of
"Mount". Left off the R-by since I changed things.
John Ferlan (9):
storage: Add default mount options for fs/netfs storage pools
docs: Add news mention of default fs/netfs storage pool mount options
conf: Add optional NFS Source Pool <protocol ver='n'/> option
storage: Add the nfsvers to the command line
virsh: Add source-protocol-ver for pool commands
conf: Introduce virStoragePoolXMLNamespace
storage: Add infrastructure to manage XML namespace options
storage: Add storage pool namespace options to fs and netfs command
lines
rbd: Utilize storage pool namespace to manage config options
docs/formatstorage.html.in | 129 +++++++++++++
docs/news.xml | 11 ++
docs/schemas/storagepool.rng | 53 ++++++
src/conf/storage_conf.c | 73 +++++++-
src/conf/storage_conf.h | 27 +++
src/libvirt_private.syms | 1 +
src/storage/storage_backend_fs.c | 132 ++++++++++++++
src/storage/storage_backend_rbd.c | 169 +++++++++++++++++-
src/storage/storage_util.c | 81 ++++++++-
src/storage/storage_util.h | 14 ++
tests/Makefile.am | 4 +-
.../pool-fs-freebsd.argv | 1 +
.../pool-fs-linux.argv | 1 +
.../pool-netfs-auto-freebsd.argv | 1 +
.../pool-netfs-auto-linux.argv | 1 +
.../pool-netfs-cifs-freebsd.argv | 1 +
.../pool-netfs-cifs-linux.argv | 1 +
.../pool-netfs-freebsd.argv | 1 +
.../pool-netfs-gluster-freebsd.argv | 2 +
.../pool-netfs-gluster-linux.argv | 2 +
.../pool-netfs-linux.argv | 1 +
.../pool-netfs-ns-mountopts-freebsd.argv | 2 +
.../pool-netfs-ns-mountopts-linux.argv | 2 +
.../pool-netfs-ns-mountopts.argv | 1 +
.../pool-netfs-protocol-ver-freebsd.argv | 1 +
.../pool-netfs-protocol-ver-linux.argv | 2 +
.../pool-netfs-protocol-ver.argv | 1 +
tests/storagepoolxml2argvtest.c | 57 +++++-
.../pool-netfs-ns-mountopts.xml | 25 +++
.../pool-netfs-protocol-ver.xml | 21 +++
.../pool-rbd-ns-configopts.xml | 17 ++
.../pool-netfs-ns-mountopts.xml | 25 +++
.../pool-netfs-protocol-ver.xml | 21 +++
.../pool-rbd-ns-configopts.xml | 20 +++
tests/storagepoolxml2xmltest.c | 8 +
tools/virsh-pool.c | 12 +-
tools/virsh.pod | 5 +
37 files changed, 903 insertions(+), 23 deletions(-)
create mode 100644 tests/storagepoolxml2argvdata/pool-fs-freebsd.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-fs-linux.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-auto-freebsd.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-auto-linux.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-cifs-freebsd.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-cifs-linux.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-freebsd.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-gluster-freebsd.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-gluster-linux.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-linux.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-ns-mountopts-freebsd.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-ns-mountopts-linux.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-ns-mountopts.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-protocol-ver-freebsd.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-protocol-ver-linux.argv
create mode 100644 tests/storagepoolxml2argvdata/pool-netfs-protocol-ver.argv
create mode 100644 tests/storagepoolxml2xmlin/pool-netfs-ns-mountopts.xml
create mode 100644 tests/storagepoolxml2xmlin/pool-netfs-protocol-ver.xml
create mode 100644 tests/storagepoolxml2xmlin/pool-rbd-ns-configopts.xml
create mode 100644 tests/storagepoolxml2xmlout/pool-netfs-ns-mountopts.xml
create mode 100644 tests/storagepoolxml2xmlout/pool-netfs-protocol-ver.xml
create mode 100644 tests/storagepoolxml2xmlout/pool-rbd-ns-configopts.xml
--
2.20.1