[libvirt] [PATCH 0/2] configure gnutls cleanup
by Pavel Hrdina
Pavel Hrdina (2):
configure: move gnutls check into virt-gnutls.m4
m4/virt-gnutls: remove code for gnutls < 2.2.0
configure.ac | 109 +-----------------------------------------------------
m4/virt-gnutls.m4 | 62 +++++++++++++++++++++++++++++++
2 files changed, 64 insertions(+), 107 deletions(-)
create mode 100644 m4/virt-gnutls.m4
--
2.10.1
8 years, 3 months
[libvirt] [PATCH RFC v3 00/15] FSPool: backend directory
by Olga Krishtal
Hi everyone, we would like to propose the first implementation of fspool
with a directory backend, following the previous discussions:
https://www.redhat.com/archives/libvir-list/2016-April/msg01941.html
https://www.redhat.com/archives/libvir-list/2016-May/msg00208.html
https://www.redhat.com/archives/libvir-list/2016-September/msg00463.html
Filesystem pools are a facility to manage filesystem resources, similar
to how storage pools manage volume resources. The manageable unit is a single
filesystem, so fspool items have only one type - dir (storage pools can manage files,
block devices, etc.). However, backends for fspools can differ.
This series introduces the simplest backend - a host directory.
The API mostly follows the storage pool API: we can create an fspool, build it,
and populate it with items (see the sketch below). Moreover, to create a filesystem
pool we need some storage, so all the structures describing the storage that will
hold the fspool are borrowed from the storage pool ones. The same is true for the
functions that work with them.
As mentioned before, here we present the directory backend for fspools.
To manage fspools and items we tried to reuse as much functionality from the storage pool
code (directory and fs backends) as possible.
The first 3 patches are preparatory refactoring. Both storage pools and fspools
reside upon some storage, so there is a good chance to use the same code for
describing the storage source and the functions that work with it. All reusable code is
moved to virpoolcommon.c/.h. It would be great if you shared your thoughts on these
changes, because what we are trying to achieve is less copy/paste and
separate drivers for storage pools and filesystem pools.
All the other patches are devoted to the fspool implementation and are presented
according to the libvirt recommendations.
Uploading/downloading operations
are not defined yet, as it is not obvious how to do this properly. I guess
we can use some kind of tar to make a stream from a filesystem. Please share
your thoughts on this particular issue.
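To make the proposed API shape concrete, here is a minimal sketch of an fspool
definition and the corresponding virsh workflow, assuming the element names and the
commands mirror their storage pool counterparts (pool-define -> fspool-define,
vol-create-as -> fsitem-create-as). The exact names and schema are defined by the
virsh-fspool/virsh-fsitem patches and docs/schemas/fspool.rng and fsitem.rng in this
series, so treat the snippet as illustrative rather than authoritative:

<fspool type='dir'>
  <name>fspool_example</name>
  <target>
    <path>/var/lib/libvirt/fspools/example</path>
  </target>
</fspool>

# define, build and start the pool, then create an item in it
# (arguments are placeholders for illustration)
virsh fspool-define fspool_example.xml
virsh fspool-build fspool_example
virsh fspool-start fspool_example
virsh fsitem-create-as fspool_example item1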
v2:
- renamed Fs to FS
- in the configure.ac script an m4 macro is used
- updated docs
- created simple tests
- updated virsh.pod
- added information about fspool in formatfs.html
v3:
- in this version storage pool code is reused
- re-split the patches
- fixed some errors
Olga Krishtal (15):
storage pools: refactoring of basic structs
storage pools: functions refactoring
storage pools: refactoring of fs backend
FSPool: defining the public API
FSPool: defining the internal API
FSPool: implementing the public API
FSPool: added access control objects and permissions
FSPool: added --with-fs compilation option
FSPool: implementation of remote protocol
FSPool: added configuration description
virsh: filesystem pools commands
FSPool: empty implementation of driver methods
FSPool: driver methods implementation
FSPool: directory backend implementation
FSPool: Tests and documentation
configure.ac | 6 +
daemon/Makefile.am | 4 +
daemon/libvirtd.c | 9 +
daemon/remote.c | 35 +
docs/formatfs.html.in | 206 ++
docs/fspool.html.in | 41 +
docs/schemas/fsitem.rng | 66 +
docs/schemas/fspool.rng | 82 +
docs/sitemap.html.in | 4 +
include/libvirt/libvirt-fs.h | 254 +++
include/libvirt/libvirt-storage.h | 5 +-
include/libvirt/libvirt.h | 1 +
include/libvirt/virterror.h | 7 +
m4/virt-driver-fspool.m4 | 43 +
po/POTFILES.in | 7 +
src/Makefile.am | 59 +-
src/access/viraccessdriver.h | 15 +
src/access/viraccessdrivernop.c | 21 +
src/access/viraccessdriverpolkit.c | 47 +
src/access/viraccessdriverstack.c | 50 +
src/access/viraccessmanager.c | 32 +
src/access/viraccessmanager.h | 11 +-
src/access/viraccessperm.c | 15 +-
src/access/viraccessperm.h | 126 ++
src/check-driverimpls.pl | 2 +
src/conf/fs_conf.c | 1479 ++++++++++++++
src/conf/fs_conf.h | 262 +++
src/conf/storage_conf.c | 162 --
src/conf/storage_conf.h | 137 +-
src/datatypes.c | 150 ++
src/datatypes.h | 60 +-
src/driver-fs.h | 193 ++
src/driver.h | 3 +
src/fs/fs_backend.h | 94 +
src/fs/fs_backend_dir.c | 290 +++
src/fs/fs_backend_dir.h | 8 +
src/fs/fs_driver.c | 2044 ++++++++++++++++++++
src/fs/fs_driver.h | 10 +
src/libvirt-fs.c | 1555 +++++++++++++++
src/libvirt.c | 30 +-
src/libvirt_private.syms | 58 +-
src/libvirt_public.syms | 46 +
src/remote/remote_driver.c | 66 +
src/remote/remote_protocol.x | 466 ++++-
src/remote_protocol-structs | 165 ++
src/rpc/gendispatch.pl | 23 +-
src/storage/storage_backend.h | 1 -
src/storage/storage_backend_fs.c | 74 +-
src/util/virerror.c | 37 +
src/util/virpoolcommon.c | 212 ++
src/util/virpoolcommon.h | 189 ++
src/util/virstoragefile.c | 73 +
src/util/virstoragefile.h | 3 +
tests/Makefile.am | 12 +
tests/fsitemxml2xmlin/item.xml | 13 +
tests/fsitemxml2xmlout/item.xml | 13 +
tests/fsitemxml2xmltest.c | 105 +
.../dir-missing-target-path-invalid.xml | 12 +
tests/fspoolxml2xmlin/fspool-dir.xml | 16 +
tests/fspoolxml2xmlout/fspool-dir.xml | 16 +
tests/fspoolxml2xmltest.c | 81 +
tools/Makefile.am | 2 +
tools/virsh-fsitem.c | 1293 +++++++++++++
tools/virsh-fsitem.h | 39 +
tools/virsh-fspool.c | 1574 +++++++++++++++
tools/virsh-fspool.h | 38 +
tools/virsh.c | 4 +
tools/virsh.h | 9 +
tools/virsh.pod | 252 ++-
69 files changed, 12128 insertions(+), 389 deletions(-)
create mode 100644 docs/formatfs.html.in
create mode 100644 docs/fspool.html.in
create mode 100644 docs/schemas/fsitem.rng
create mode 100644 docs/schemas/fspool.rng
create mode 100644 include/libvirt/libvirt-fs.h
create mode 100644 m4/virt-driver-fspool.m4
create mode 100644 src/conf/fs_conf.c
create mode 100644 src/conf/fs_conf.h
create mode 100644 src/driver-fs.h
create mode 100644 src/fs/fs_backend.h
create mode 100644 src/fs/fs_backend_dir.c
create mode 100644 src/fs/fs_backend_dir.h
create mode 100644 src/fs/fs_driver.c
create mode 100644 src/fs/fs_driver.h
create mode 100644 src/libvirt-fs.c
create mode 100644 src/util/virpoolcommon.c
create mode 100644 src/util/virpoolcommon.h
create mode 100644 tests/fsitemxml2xmlin/item.xml
create mode 100644 tests/fsitemxml2xmlout/item.xml
create mode 100644 tests/fsitemxml2xmltest.c
create mode 100644 tests/fspoolschemadata/dir-missing-target-path-invalid.xml
create mode 100644 tests/fspoolxml2xmlin/fspool-dir.xml
create mode 100644 tests/fspoolxml2xmlout/fspool-dir.xml
create mode 100644 tests/fspoolxml2xmltest.c
create mode 100644 tools/virsh-fsitem.c
create mode 100644 tools/virsh-fsitem.h
create mode 100644 tools/virsh-fspool.c
create mode 100644 tools/virsh-fspool.h
--
1.8.3.1
8 years, 3 months
[libvirt] [PATCH v2 0/2] Allow saving QEMU libvirt state to a pipe
by Chen Hanxiao
This series introduces the flag VIR_DOMAIN_SAVE_DIRECT
to enable the 'save' command to write to a pipe.
Based upon patches from Roy Keene <rkeene(a)knightpoint.com>,
with some fixes.
Changes from the original patch:
1) Check whether the specified path is a pipe.
2) Rebase on upstream.
3) Add documentation for the virsh command.
v2:
rename VIR_DOMAIN_SAVE_PIPE to VIR_DOMAIN_SAVE_DIRECT
remove the S_ISFIFO check
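For illustration, a minimal sketch of how a pipe-based save might be driven from
the shell, assuming the --direct flag added by patch 2 of this series (the domain
name, pipe path and consumer command are placeholders):

mkfifo /tmp/guest-save.pipe
# consume the stream on the other end, e.g. compress it on the fly
gzip < /tmp/guest-save.pipe > guest-save.img.gz &
virsh save mydomain /tmp/guest-save.pipe --direct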
Chen Hanxiao (2):
qemu: Allow saving QEMU libvirt state to a pipe
virsh: introduce flag --direct for save command
include/libvirt/libvirt-domain.h | 1 +
src/qemu/qemu_driver.c | 54 ++++++++++++++++++++++++++--------------
tools/virsh-domain.c | 6 +++++
tools/virsh.pod | 5 +++-
4 files changed, 47 insertions(+), 19 deletions(-)
--
2.7.4
8 years, 3 months
[libvirt] [PATCH] m4/virt-loader-nvram: use quotation for list of loader:nvram pairs
by Pavel Hrdina
The bug was introduced by commit 08c2d1480b. The string must be quoted
because it is used as a function argument.
Signed-off-by: Pavel Hrdina <phrdina(a)redhat.com>
---
Pushed under build breaker rule.
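To illustrate the effect with a hypothetical --with-loader-nvram value (the paths
below are placeholders, not taken from the patch): the unquoted form ends up in
config.h as a bare token sequence, while the quoted form yields a usable C string
literal:

/* before: not a valid C string literal */
#define DEFAULT_LOADER_NVRAM /usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd
/* after: quoted, usable as the list of loader:nvram pairs */
#define DEFAULT_LOADER_NVRAM "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd"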
m4/virt-loader-nvram.m4 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/m4/virt-loader-nvram.m4 b/m4/virt-loader-nvram.m4
index e57ba829f4..e3e8b82825 100644
--- a/m4/virt-loader-nvram.m4
+++ b/m4/virt-loader-nvram.m4
@@ -31,7 +31,7 @@ AC_DEFUN([LIBVIRT_CHECK_LOADER_NVRAM], [
if test $(expr $l % 2) -ne 0 ; then
AC_MSG_ERROR([Malformed --with-loader-nvram argument])
fi
- AC_DEFINE_UNQUOTED([DEFAULT_LOADER_NVRAM], [$with_loader_nvram],
+ AC_DEFINE_UNQUOTED([DEFAULT_LOADER_NVRAM], ["$with_loader_nvram"],
[List of loader:nvram pairs])
fi
])
--
2.11.0
8 years, 3 months
[libvirt] [PATCH] Allow virtio-console on PPC64
by Shivaprasad G Bhat
The existing checks in virQEMUCapsSupportsChardev return true
for spapr-vty alone. Instead, verify the spapr-vty validity
and let the logic return true for other device types,
so that virtio-console passes.
Non-pseries machines don't have the spapr-vio bus, so the
function always returned false for them before.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1257813
Signed-off-by: Shivaprasad G Bhat <sbhat(a)linux.vnet.ibm.com>
---
src/qemu/qemu_capabilities.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
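In plain terms, the PPC/PPC64 branch of the check behaves roughly as follows after
this patch (a simplified sketch based on the hunk below, not the literal upstream
code):

    if (def->os.arch == VIR_ARCH_PPC || ARCH_IS_PPC64(def->os.arch)) {
        /* no spapr-vio bus outside of pseries machines */
        if (!qemuDomainMachineIsPSeries(def))
            return false;
        /* a pseries serial device must still be spapr-vty (spapr-vio address) */
        if (chr->deviceType == VIR_DOMAIN_CHR_DEVICE_TYPE_SERIAL &&
            chr->info.type != VIR_DOMAIN_DEVICE_ADDRESS_TYPE_SPAPRVIO)
            return false;
        /* other chardev types, e.g. virtio-console, fall through */
    }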
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 6eee85d..784496b 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4286,9 +4286,12 @@ virQEMUCapsSupportsChardev(const virDomainDef *def,
return false;
if ((def->os.arch == VIR_ARCH_PPC) || ARCH_IS_PPC64(def->os.arch)) {
+ if (!qemuDomainMachineIsPSeries(def))
+ return false;
/* only pseries need -device spapr-vty with -chardev */
- return (chr->deviceType == VIR_DOMAIN_CHR_DEVICE_TYPE_SERIAL &&
- chr->info.type == VIR_DOMAIN_DEVICE_ADDRESS_TYPE_SPAPRVIO);
+ if (chr->deviceType == VIR_DOMAIN_CHR_DEVICE_TYPE_SERIAL &&
+ chr->info.type != VIR_DOMAIN_DEVICE_ADDRESS_TYPE_SPAPRVIO)
+ return false;
}
if ((def->os.arch != VIR_ARCH_ARMV7L) && (def->os.arch != VIR_ARCH_AARCH64))
8 years, 3 months
[libvirt] [PATCH] NEWS: Fix indentation
by Andrea Bolognani
---
Pushed as trivial.
docs/news.html.in | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/news.html.in b/docs/news.html.in
index 5a34674..22611db 100644
--- a/docs/news.html.in
+++ b/docs/news.html.in
@@ -46,11 +46,11 @@
</li>
<li><strong>Bug fixes</strong>
<ul>
- <li>qemu: Correct GetBlockInfo values<br/>
- For an active domain, correct the physical value provided for
- a raw sparse file backed storage and the allocation value provided
- for a qcow2 file backed storage that hasn't yet been opened on
- the domain
+ <li>qemu: Correct GetBlockInfo values<br/>
+ For an active domain, correct the physical value provided for
+ a raw sparse file backed storage and the allocation value provided
+ for a qcow2 file backed storage that hasn't yet been opened on
+ the domain
</li>
</ul>
</li>
--
2.7.4
8 years, 3 months
[libvirt] [PATCH 0/5] vz: fix some CT disk representation cases and their statistics
by Maxim Nestratov
Maxim Nestratov (5):
vz: report "scsi" bus for disks when nothing was set explixitly
vz: don't query boot devices information for VZ, set boot from disk
always
vz: don't add implicit devices for CTs
vz: report disks either as disks or filesystems depending on original
xml
vz: get disks statistics for CTs
src/vz/vz_driver.c | 10 ++++-
src/vz/vz_sdk.c | 127 +++++++++++++++++++++++++++++++++++++++++++----------
src/vz/vz_sdk.h | 2 +-
3 files changed, 112 insertions(+), 27 deletions(-)
--
2.4.11
8 years, 3 months
[libvirt] [PATCH 0/2] qemu: migration: show disks stats for nbd migration
by Nikolay Shirokovskiy
Current migration stats will show something like [1] while
mirroring non-shared disks. This gives very
little information on the migration progress. Likewise, the completed stats miss
the disk mirroring info.
This patch provides disk stats in the said phase, like in [2], so the
user can now understand what's going on. However, the data stats still miss
the memory stats, so the data total and remaining will change when the memory
migration starts.
AFAIU disk stats were available before nbd-based migration
became the default, so this patch brings the disk stats back to
some extent.
Patch 1 is just a little cleanup. The removed code uses qemuMigrationFetchJobStatus,
so patch 1 helps with the analysis of patch 2.
[1]
Job type: Unbounded
Time elapsed: 4964 ms
[2]
Job type: Unbounded
Time elapsed: 4964 ms
Data processed: 146.000 MiB
Data remaining: 854.000 MiB
Data total: 1000.000 MiB
File processed: 146.000 MiB
File remaining: 854.000 MiB
File total: 1000.000 MiB
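For reference, output of the shape shown in [1] and [2] is what virsh domjobinfo
prints for the active migration job (the domain name below is a placeholder):

virsh domjobinfo mydomain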
Nikolay Shirokovskiy (2):
qemu: clean out unused migrate to unix
qemu: migration: show disks stats for nbd migration
docs/news.html.in | 4 ++
src/qemu/qemu_driver.c | 5 +-
src/qemu/qemu_migration.c | 128 +++++++++++++++++++++++++++++-----------------
src/qemu/qemu_migration.h | 3 +-
src/qemu/qemu_monitor.c | 24 ---------
src/qemu/qemu_monitor.h | 4 --
6 files changed, 90 insertions(+), 78 deletions(-)
--
1.8.3.1
8 years, 3 months
[libvirt] connecting to qemu domain monitor socket outside of libvirt/virsh
by Jason Miesionczek
Hi,
So I see that when I have a QEMU VM running that I created via libvirt,
there is a socket here:
/var/lib/libvirt/qemu/domain-<name>/monitor.sock
I am trying to connect to this socket via the CLI or a completely separate
C/C++ application to be able to control the VM, but I can't seem to get it
to work.
Does anyone know if/how this is possible?
I've tried 'nc', 'socat' and, based on the qemu libvirt code,
'socket'/'connect', but nothing seems to work.
Also, is it possible through libvirt, when creating a qemu VM, to
specify that the QMP socket should be enabled?
Thanks in advance,
Jason
8 years, 3 months
[libvirt] [PATCH v2 0/1] storage: vstorage support
by Olga Krishtal
The patch adds pool and volume management support using Virtuozzo Storage (vstorage)
as a backend.
To define a pool, use:
virsh -c qemu+unix:///system pool-define-as --name VZ --type vstorage
--source-name vz7-vzstorage --target /vzstorage_pool
The resulting XML:
<pool type='vstorage'>
<name>VZ</name>
<uuid>5f45665b-66fa-4b18-84d1-248774cff3a1</uuid>
<capacity unit='bytes'>107374182400</capacity>
<allocation unit='bytes'>1441144832</allocation>
<available unit='bytes'>105933037568</available>
<source>
<name>vz7-vzstorage</name>
</source>
<target>
<path>/vzstorage_pool</path>
<permissions>
<mode>0700</mode>
<owner>0</owner>
<group>0</group>
</permissions>
</target>
</pool>
For the vstorage pool the only obligatory parameter, which stores the cluster name,
is --source-name.
v2:
- maximum code reuse
- fixed name issue - we use vstorage
- simplified findPoolSources
Olga Krishtal (1):
storage: vz storage pool support
configure.ac | 28 ++++++++++
docs/schemas/storagepool.rng | 13 +++++
include/libvirt/libvirt-storage.h | 1 +
src/conf/storage_conf.c | 16 +++++-
src/conf/storage_conf.h | 4 +-
src/storage/storage_backend.c | 3 +
src/storage/storage_backend_fs.c | 114 ++++++++++++++++++++++++++++++++++++--
src/storage/storage_backend_fs.h | 3 +
src/storage/storage_driver.c | 2 +
tools/virsh-pool.c | 2 +
tools/virsh.c | 3 +
11 files changed, 181 insertions(+), 8 deletions(-)
--
1.8.3.1
8 years, 3 months