[libvirt] [PATCHv2 00/33] Gluster backing chains and relative block commit/pull
by Peter Krempa
Hi,
this is my second take on this topic. Some of the patches are still in RFC
status, as the last six require qemu support that is still under review
upstream. The rest should be fine as is, even without the qemu bits.
Peter Krempa (33):
qemu: process: Refresh backing chain info when reconnecting to qemu
qemu: Make qemuDomainPrepareDiskChainElement aware of remote storage
storage: Store gluster volume name separately
storage: Rework debugging of storage file access through storage
driver
conf: Fix domain disk path iterator to work with networked storage
storage: Add NONE protocol type for network disks
storage: Add support for access to files using provided uid/gid
storage: Add storage file API to read file headers
storage: backend: Add unique id retrieval API
storage: Add API to check accessibility of storage volumes
storage: Move virStorageFileGetMetadata to the storage driver
storage: Determine the local storage type right away
test: storage: Initialize storage source to correct type
storage: backend: Add possibility to suppress errors from backend
lookup
storage: Switch metadata crawler to use storage driver to get unique
path
storage: Switch metadata crawler to use storage driver to read headers
storage: Switch metadata crawler to use storage driver file access
check
storage: Add infrastructure to parse remote network backing names
storage: Change to new backing store parser
storage: Traverse backing chains of network disks
util: string: Return element count from virStringSplit
util: string: Add helper to free non-NULL terminated string arrays
util: storagefile: Add helper to resolve "../", "./" and "////" in
paths
util: storage: Add helper to resolve relative path difference
util: storagefile: Add canonicalization to virStorageFileSimplifyPath
storage: gluster: Add backend to return unique storage file path
qemu: json: Add format strings for optional command arguments
qemu: monitor: Add argument for specifying backing name for block
commit
qemu: monitor: Add support for backing name specification for
block-stream
lib: Introduce flag VIR_DOMAIN_BLOCK_COMMIT_RELATIVE
lib: Introduce flag VIR_DOMAIN_BLOCK_REBASE_RELATIVE
qemu: Add support for networked disks for block commit
qemu: Add support for networked disks for block pull/block rebase
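As a rough illustration of what the path-canonicalization and relative-path
helpers in this series aim at, here is a Python sketch (not the actual C
implementation; posixpath.normpath collapses "../", "./" and "////" purely
textually, which is close in spirit to virStorageFileSimplifyPath, and the
simplify() helper name is made up for this example):

```python
import posixpath

def simplify(path):
    # collapses ".", "..", and duplicate slashes, similar in spirit
    # to what virStorageFileSimplifyPath does for backing-file names
    return posixpath.normpath(path)

# canonicalize a messy backing-file path
base = simplify("/var/lib/libvirt/images//./sub/../base.qcow2")
top = "/var/lib/libvirt/images/top.qcow2"

# express the backing image relative to the image that references it,
# as needed for relative block commit/pull
rel = posixpath.relpath(base, posixpath.dirname(top))
```

With these inputs, base simplifies to the plain absolute path and rel is the
short relative name that could be stored in the overlay's backing-file field.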
cfg.mk | 2 +-
include/libvirt/libvirt.h.in | 6 +
src/Makefile.am | 2 +
src/conf/domain_conf.c | 72 ++-
src/libvirt_private.syms | 7 +-
src/qemu/qemu_capabilities.c | 8 +
src/qemu/qemu_capabilities.h | 2 +
src/qemu/qemu_command.c | 24 +-
src/qemu/qemu_domain.c | 10 +-
src/qemu/qemu_driver.c | 160 +++++--
src/qemu/qemu_migration.c | 6 +-
src/qemu/qemu_monitor.c | 21 +-
src/qemu/qemu_monitor.h | 4 +-
src/qemu/qemu_monitor_json.c | 139 ++++--
src/qemu/qemu_monitor_json.h | 2 +
src/qemu/qemu_process.c | 5 +
src/security/virt-aa-helper.c | 2 +
src/storage/storage_backend.c | 16 +-
src/storage/storage_backend.h | 22 +-
src/storage/storage_backend_fs.c | 127 +++++-
src/storage/storage_backend_gluster.c | 203 +++++++--
src/storage/storage_driver.c | 329 +++++++++++++-
src/storage/storage_driver.h | 15 +-
src/util/virstoragefile.c | 818 ++++++++++++++++++++++++----------
src/util/virstoragefile.h | 31 +-
src/util/virstring.c | 44 +-
src/util/virstring.h | 7 +
tests/Makefile.am | 7 +-
tests/qemumonitorjsontest.c | 2 +-
tests/virstoragetest.c | 290 +++++++++++-
tools/virsh-domain.c | 29 +-
31 files changed, 1988 insertions(+), 424 deletions(-)
--
1.9.3
10 years, 6 months
[libvirt] [bug] python-libvirt vcpus mismatch
by Chris Friesen
I've got a libvirt-created instance where I've been messing with
affinity, and now something is strange.
I did the following in python:
>>> import libvirt
>>> conn=libvirt.open("qemu:///system")
>>> dom = conn.lookupByName('instance-00000027')
>>> dom.vcpus()
([(0, 1, 528150000000L, 2), (1, 1, 548070000000L, 3)], [(False, False,
True, False), (False, False, True, False)])
I'm totally confused by that "3". It's supposed to represent the
physical cpu that virtual cpu 1 is running on. But cpu 3 isn't even in
the allowable affinity map for vcpu 1.
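To make the mismatch explicit: vcpus() returns a pair of lists, per-vCPU
(number, state, cpuTime, cpu) records plus per-vCPU pinning maps. A small
sketch decoding the values from the session above:

```python
# Values copied from the dom.vcpus() output above.
vcpu_info, vcpu_pinning = ([(0, 1, 528150000000, 2), (1, 1, 548070000000, 3)],
                           [(False, False, True, False),
                            (False, False, True, False)])

# Collect every vCPU whose reported "last ran on" pCPU is outside
# its own pinning map.
mismatches = []
for (number, state, cpu_time, cpu), pinmap in zip(vcpu_info, vcpu_pinning):
    allowed = [i for i, ok in enumerate(pinmap) if ok]
    if cpu not in allowed:
        mismatches.append((number, cpu, allowed))
```

Running this on the data above flags exactly the oddity described: vcpu 1
claims pCPU 3, but only pCPU 2 is allowed.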
If I query the data other ways, I get both cpus running on physical cpu 2:
root@compute-0:~# virsh vcpupin instance-00000027
VCPU: CPU Affinity
----------------------------------
0: 2
1: 2
root@compute-0:~# virsh emulatorpin instance-00000027
emulator: CPU Affinity
----------------------------------
*: 2
root@compute-0:~# taskset -pac 15072
pid 15072's current affinity list: 2
pid 15073's current affinity list: 1-3
pid 15075's current affinity list: 2
pid 15076's current affinity list: 0
So I'm left with the conclusion that there is something strange going on
with libvirt-python. Anyone got any ideas?
Chris
[libvirt] [PATCH RFC] build: fix build with libselinux 2.3
by Jim Fehlig
The attached patch is an attempt to fix recent build failures I've
noticed with libselinux 2.3
CC securityselinuxhelper.lo
securityselinuxhelper.c:159:5: error: conflicting types for 'setcon_raw'
int setcon_raw(security_context_t context)
^
In file included from securityselinuxhelper.c:30:0:
/usr/include/selinux/selinux.h:41:12: note: previous declaration of
'setcon_raw' was here
extern int setcon_raw(const char * con);
^
securityselinuxhelper.c:168:5: error: conflicting types for 'setcon'
int setcon(security_context_t context)
^
In file included from securityselinuxhelper.c:30:0:
/usr/include/selinux/selinux.h:40:12: note: previous declaration of
'setcon' was here
extern int setcon(const char * con);
^
securityselinuxhelper.c:174:5: error: conflicting types for 'setfilecon_raw'
int setfilecon_raw(const char *path, security_context_t con)
^
In file included from securityselinuxhelper.c:30:0:
/usr/include/selinux/selinux.h:110:12: note: previous declaration of
'setfilecon_raw' was here
extern int setfilecon_raw(const char *path, const char * con);
^
securityselinuxhelper.c:185:5: error: conflicting types for 'setfilecon'
int setfilecon(const char *path, security_context_t con)
^
In file included from securityselinuxhelper.c:30:0:
/usr/include/selinux/selinux.h:109:12: note: previous declaration of
'setfilecon' was here
extern int setfilecon(const char *path, const char * con);
^
Noticing that security_context_t changed to 'const char *', my first
thought was to use AC_CHECK_TYPE to check for security_context_t, but
alas the typedef remains in 2.3 with the comment "No longer used; here
for compatibility with legacy callers".
I then pursued the approach in this patch of defining a config var based
on 'pkg-config --modversion', which works in a test script, but not in
the context of the LIBVIRT_CHECK_SELINUX macro. Probably due to some
missed quoting, but I'm reaching the m4 knowledge barrier. Before
attempting to bypass that, I'd like to see what others think of this
approach. Is there a simpler solution?
Regards,
Jim
[libvirt] [PATCH] util: fix virTimeLocalOffsetFromUTC DST processing
by Laine Stump
The original version of virTimeLocalOffsetFromUTC() (commit
1cddaea7aeca441b733c31990a3f139dd2d346f6) would fail for certain times
of the day if daylight savings time was active. This could most easily
be seen by uncommenting the TEST_LOCALOFFSET() cases that include a
DST setting.
After a lot of experimenting, I found that the way to solve it in
almost all test cases is to set tm_isdst = -1 in the struct tm prior to
calling mktime(). Once this is done, the correct offset is returned
for all test cases at all times except the two hours just after
00:00:00 Jan 1 UTC - during that time, any timezone that is *behind*
UTC, and that is supposed to always be in DST will not have DST
accounted for in its offset. I still do not know the source of this
problem, but it appears to be either a bug in glibc, or my improper
specification of a TZ setting, so I am leaving the offending tests
listed in virtimetest.c, but disabling them for now.
---
See https://www.redhat.com/archives/libvir-list/2014-May/msg00898.html
for my earlier comments on this problem.
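For readers less familiar with the mktime() dance this patch touches, the
same offset computation can be sketched in Python (an illustration only;
virtime.c does this in C, and the fix is the tm_isdst = -1 line):

```python
import time

now = int(time.time())
tm = time.gmtime(now)        # broken-down UTC time for "now"

# Setting tm_isdst (field 8) to -1 tells mktime() to figure out itself
# whether DST is in effect -- the one-line fix from the patch below.
tm = tm[:8] + (-1,)

# mktime() interprets the broken-down fields according to the local
# timezone, so the difference from "now" is the local offset from UTC.
utc_as_local = int(time.mktime(tm))
offset = now - utc_as_local  # seconds east of UTC
```

With tm_isdst left at 0 (as gmtime() fills it in), a zone currently in DST
would come out one hour off, which is the failure mode described above.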
src/util/virtime.c | 3 +++
tests/virtimetest.c | 24 +++++++++++++++++-------
2 files changed, 20 insertions(+), 7 deletions(-)
diff --git a/src/util/virtime.c b/src/util/virtime.c
index 3a56400..c69dff1 100644
--- a/src/util/virtime.c
+++ b/src/util/virtime.c
@@ -377,6 +377,9 @@ virTimeLocalOffsetFromUTC(long *offset)
return -1;
}
+ /* tell mktime to figure out itself whether or not DST is in effect */
+ gmtimeinfo.tm_isdst = -1;
+
/* mktime() also obeys current timezone rules */
if ((utc = mktime(&gmtimeinfo)) == (time_t)-1) {
virReportSystemError(errno, "%s",
diff --git a/tests/virtimetest.c b/tests/virtimetest.c
index bf27682..35551ea 100644
--- a/tests/virtimetest.c
+++ b/tests/virtimetest.c
@@ -161,20 +161,30 @@ mymain(void)
TEST_LOCALOFFSET("VIR00:30", -30 * 60);
TEST_LOCALOFFSET("VIR01:30", -90 * 60);
+ TEST_LOCALOFFSET("VIR05:00", (-5 * 60) * 60);
TEST_LOCALOFFSET("UTC", 0);
TEST_LOCALOFFSET("VIR-00:30", 30 * 60);
TEST_LOCALOFFSET("VIR-01:30", 90 * 60);
-#if __TEST_DST
+
/* test DST processing with timezones that always
* have DST in effect; what's more, cover a zone with
* with an unusual DST different than a usual one hour
*/
- /* NB: These tests fail at certain times of the day, so
- * must be disabled until we figure out why
- */
- TEST_LOCALOFFSET("VIR-00:30VID,0,365", 90 * 60);
- TEST_LOCALOFFSET("VIR-02:30VID,0,365", 210 * 60);
- TEST_LOCALOFFSET("VIR-02:30VID-04:30,0,365", 270 * 60);
+ TEST_LOCALOFFSET("VIR-00:30VID,0/00:00:00,366/23:59:59",
+ ((1 * 60) + 30) * 60);
+ TEST_LOCALOFFSET("VIR-02:30VID,0/00:00:00,366/23:59:59",
+ ((3 * 60) + 30) * 60);
+ TEST_LOCALOFFSET("VIR-02:30VID-04:30,0/00:00:00,366/23:59:59",
+ ((4 * 60) + 30) * 60);
+ TEST_LOCALOFFSET("VIR-12:00VID-13:00,0/00:00:00,366/23:59:59",
+ ((13 * 60) + 0) * 60);
+#ifdef __BROKEN_DST_TESTS
+ TEST_LOCALOFFSET("VIR02:45VID00:45,0/00:00:00,366/23:59:59",
+ -45 * 60);
+ TEST_LOCALOFFSET("VIR05:00VID04:00,0/00:00:00,366/23:59:59",
+ ((-4 * 60) + 0) * 60);
+ TEST_LOCALOFFSET("VIR11:00VID10:00,0/00:00:00,366/23:59:59",
+ ((-10 * 60) + 0) * 60);
#endif
return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
--
1.9.3
[libvirt] Python setBlkioParameters function broken
by Qiang Guan
I've tried to use domain.setBlkioParameters from ipython, and I'm
finding that it throws a mysterious error:
ubuntu@host-192-168-0-7:~$ dpkg -l | grep libvirt
ii libvirt-bin 1.2.2-0ubuntu13.1 amd64 programs for the libvirt library
ii libvirt0 1.2.2-0ubuntu13.1 amd64 library for interfacing with
different virtualization systems
ii python-libvirt 1.2.2-0ubuntu1 amd64 libvirt Python bindings
ubuntu@host-192-168-0-7:~$ sudo ipython
......
In [85]: doms[0]
Out[85]: <libvirt.virDomain at 0x7fe688879250>
In [86]: params
Out[86]: {'weight': 250}
In [87]: doms[0].setBlkioParameters(params, 0)
libvirt: error : argument unsupported: parameter '' not supported
---------------------------------------------------------------------------
libvirtError Traceback (most recent call last)
<ipython-input-87-026c35502a5c> in <module>()
----> 1 doms[0].setBlkioParameters(params, 0)
/usr/lib/python2.7/dist-packages/libvirt.pyc in setBlkioParameters(self,
params, flags)
1961 """Change the blkio tunables """
1962 ret = libvirtmod.virDomainSetBlkioParameters(self._o, params, flags)
-> 1963 if ret == -1: raise libvirtError ('virDomainSetBlkioParameters()
failed', dom=self)
1964 return ret
1965
libvirtError: argument unsupported: parameter '' not supported
The libvirt debug log info are as following:
2014-05-27 08:37:27.752+0000: 7559: debug :
remoteDispatchDomainSetBlkioParametersHelper:7328 :
server=0x7fd81deec930 client=0x7fd81df05170 msg=0x7fd81df06eb0
rerr=0x7fd816059c80 args=0x7fd7e40009a0 ret=0x7fd7e4000910
2014-05-27 08:37:27.752+0000: 7559: debug : virObjectNew:199 :
OBJECT_NEW: obj=0x7fd7e4000c60 classname=virDomain
2014-05-27 08:37:27.752+0000: 7559: debug : virObjectRef:293 :
OBJECT_REF: obj=0x7fd7f80011f0
2014-05-27 08:37:27.752+0000: 7559: debug :
virDomainSetBlkioParameters:3985 : dom=0x7fd7e4000c60, (VM:
name=instance-00000050, uuid=f844cf42-07d1-4b73-8de7-7437bf32aab0),
params=0x7fd7e4000930, nparams=1, flags=0
2014-05-27 08:37:27.752+0000: 7559: debug :
virDomainSetBlkioParameters:3986 : params[""]=(uint)250
2014-05-27 08:37:27.752+0000: 7559: debug : virObjectRef:293 :
OBJECT_REF: obj=0x7fd81deec300
2014-05-27 08:37:27.752+0000: 7559: debug :
virAccessManagerCheckConnect:215 : manager=0x7fd81deec300(name=stack)
driver=QEMU perm=0
2014-05-27 08:37:27.752+0000: 7559: debug :
virAccessManagerCheckConnect:215 : manager=0x7fd81deec370(name=none)
driver=QEMU perm=0
2014-05-27 08:37:27.752+0000: 7559: debug : virObjectUnref:256 :
OBJECT_UNREF: obj=0x7fd81deec300
2014-05-27 08:37:27.752+0000: 7559: error : virTypedParamsValidate:97 :
argument unsupported: parameter '' not supported
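The debug log shows params[""]=(uint)250, i.e. the 'weight' key was lost on
the way from the Python dict to the typed-parameter array. A hypothetical
reconstruction of the marshaling step involved (the marshal() helper and the
hard-coded type value are illustrative, taken from libvirt.h, not the actual
binding code):

```python
# VIR_TYPED_PARAM_UINT is 2 in libvirt.h; hard-coded here for illustration.
VIR_TYPED_PARAM_UINT = 2

def marshal(params):
    # A correct binding carries each dict key through as the field name
    # of the corresponding virTypedParameter entry.
    return [(name, VIR_TYPED_PARAM_UINT, value)
            for name, value in params.items()]

fields = marshal({'weight': 250})
# The broken path evidently produced ('', 2, 250) instead, which
# virTypedParamsValidate rejects as "parameter '' not supported".
```

So the bug is in how python-libvirt 1.2.2 converts the dict, not in the
parameter value itself.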
--
------------
Jackie
Best Regards
[libvirt] [Patch v2 0/3] use -serial for ppce500 board and add test case
by Olivia Yin
The machine name ppce500v2 is replaced by ppce500, the name supported by QEMU.
The QEMU ppce500 board uses the legacy -serial option.
The ppce500-serial test case verifies this change.
Olivia Yin (3):
change machine name ppce500v2 as ppce500
qemu: Fix specifying char devs for PPC
tests: add test case for -serial option for ppce500
docs/schemas/domaincommon.rng | 2 +-
src/qemu/qemu_capabilities.c | 9 +++++++-
tests/qemuxml2argvdata/qemuxml2argv-ppc-dtb.args | 2 +-
tests/qemuxml2argvdata/qemuxml2argv-ppc-dtb.xml | 2 +-
.../qemuxml2argv-ppce500-serial.args | 7 ++++++
.../qemuxml2argv-ppce500-serial.xml | 26 ++++++++++++++++++++++
tests/qemuxml2argvtest.c | 1 +
tests/testutilsqemu.c | 2 +-
8 files changed, 46 insertions(+), 5 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-ppce500-serial.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-ppce500-serial.xml
--
1.8.5
[libvirt] [PATCH 0/3] use -serial for ppce500 board and add test case
by Olivia Yin
The machine name ppce500v2 is replaced by ppce500, the name supported by QEMU.
The QEMU ppce500 board uses the old-style -serial option.
The ppce500-serial test case verifies this change.
Olivia Yin (3):
change machine name ppce500v2 as ppce500
qemu: Fix specifying char devs for PPC
tests: add test case for -serial option for ppce500
docs/schemas/domaincommon.rng | 2 +-
src/qemu/qemu_capabilities.c | 10 ++++++---
tests/qemuxml2argvdata/qemuxml2argv-ppc-dtb.args | 2 +-
tests/qemuxml2argvdata/qemuxml2argv-ppc-dtb.xml | 2 +-
.../qemuxml2argv-ppce500-serial.args | 7 ++++++
.../qemuxml2argv-ppce500-serial.xml | 26 ++++++++++++++++++++++
tests/qemuxml2argvtest.c | 1 +
tests/testutilsqemu.c | 2 +-
8 files changed, 45 insertions(+), 7 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-ppce500-serial.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-ppce500-serial.xml
--
1.8.5
[libvirt] Job Control API [RFC]
by Tucker DiNapoli
My name is Tucker DiNapoli, and I am working on implementing job control for
the storage driver for the Google Summer of Code. The first step in doing this
is designing and implementing a unified API for job control.
Currently there are several places where various aspects of job control are
implemented. The qemu and libxl drivers both contain internal implementations
of job control for domain-level jobs, with the qemu driver also supporting
asynchronous jobs. There is also code in libvirt.c for running block jobs and
for querying domain jobs for information.
I would like the job control API to be as driver-independent as possible,
since it will need to be used by storage drivers as well as the various
virtualization drivers.
I imagine most of the API will revolve around a job object, and I think it is
important to decide what exactly should go in this object.
The quoted text below is a response to my first post on the mailing list, and
I think it is a good idea.
>>I'd _really_ like to see a common notion of a 'job id' that EVERY job
>>(whether domain-level, like migration; or block-level, like
>>commit/pull/rebase; or storage-level, like your new proposed storage
>>jobs) shares a common job namespace. The job id is a positive integer.
>> Existing APIs will have to be retrofitted into the new job id notion;
>>any action that starts a long-running job that currently returns 0 on
>>success could be changed to return a positive job id; or we may need a
>>new API that queries the notion of the 'current job' (the job most
>>recently started) or even to set the 'current job' to a different job
>>id. We'll need new API for querying a job by id, and to be most
>>portable, we should do job reporting via virTypedParameter
>>(virDomainGetJobInfo and virDomainGetBlockJobInfo are hardcoded into
>>returning a struct, so they are non-extensible; virDomainGetJobStats
>>almost did it right, except that the user has to call it twice, once to
>>learn how large to allocate, and again to pass in pre-allocated memory -
>>the ideal API would allocate the memory on a single call).
Currently there are separate types for block job info and job info; if
possible, I would like to merge these into a common job info type, and perhaps
make it part of the job object itself.
Currently (in libxl and qemu) jobs are part of the domain struct. I think
jobs should be moved out of the domain struct, with domains instead using job
ids to keep track of their currently running jobs. I'm still new to libvirt,
so if this doesn't make sense and keeping job objects attached to domains is
the better idea, that's fine.
I think at the minimum each job object should contain: the id of the thread
running the job, the type of job, the job id, a condition variable to
coordinate jobs, and information about the job, either as a separate job
info
object or as part of the job object itself. The job should also contain a
reference to the domain or storage it is associated with.
There are a few basic functions that should definitely be part of the API:
initializing a job, freeing a job, starting a job, ending a job, aborting a
job, and getting info on a job. It would be nice to be able to suspend a job
and to change the currently running job as well. That's what I can come up
with, but I don't have much experience with libvirt, so if there are other
features that make sense they can be added as well.
Finally (as far as I can think of right now) there is the idea of parallel
jobs. Currently the qemu driver allows some jobs to run in parallel by
allowing a job to be run asynchronously; this async job has a mask of job
types associated with it that determines what types of regular jobs can be
run during it. However, I would like to allow an arbitrary number of jobs to
run at once (I'm not sure how useful this would be, but it seems best not to
impose hard limits on things). The easiest way to deal with this is to ignore
it and put the burden of synchronizing jobs on the drivers, which is
obviously a bad solution. Another way would be what the qemu driver currently
does: keep a mask of job types with each domain/storage, updated when a job
is started or ended, that dictates what types of jobs can be started.
Regardless of how this is done, it will require support from the
driver/domain/storage that each job is associated with.
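To make the proposal more concrete, here is a purely hypothetical Python
sketch of a job object with a shared id namespace and a per-owner mask of
allowed job types. Every name here is illustrative, not existing libvirt API:

```python
import itertools
import threading

# One shared namespace of positive job ids, as suggested in the quote above.
_job_ids = itertools.count(1)

class Job:
    def __init__(self, job_type, owner):
        self.id = next(_job_ids)
        self.type = job_type            # small-integer job type
        self.owner = owner              # domain or storage the job acts on
        self.cond = threading.Condition()  # for coordinating waiters
        self.info = {}                  # virTypedParameter-style progress info

class JobOwner:
    """A domain/storage-side record: running jobs plus an allowed-type mask."""
    def __init__(self):
        self.jobs = {}
        self.allowed_mask = ~0          # all job types allowed initially

    def start(self, job_type):
        if not (self.allowed_mask & (1 << job_type)):
            raise RuntimeError("job type %d not allowed now" % job_type)
        job = Job(job_type, self)
        self.jobs[job.id] = job
        return job

    def end(self, job_id):
        self.jobs.pop(job_id, None)
```

Starting a job returns an object with a unique positive id; narrowing
allowed_mask while an async job runs blocks incompatible job types, which is
the mask scheme the qemu driver already uses internally.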
Tucker DiNapoli
[libvirt] Fix an extra ' in a translated string
by Daniel Veillard
Raised by Ukrainian translator Yuri Chornoivan
https://fedora.transifex.com/projects/p/libvirt/translate/#uk/strings/254...
Pushed as trivial fix,
Daniel
diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
index 9575344..cd6babe 100644
--- a/src/storage/storage_driver.c
+++ b/src/storage/storage_driver.c
@@ -2975,7 +2975,7 @@ virStorageFileReadHeader(virStorageSourcePtr src,
if (!src->drv->backend->storageFileReadHeader) {
virReportError(VIR_ERR_INTERNAL_ERROR,
_("storage file header reading is not supported for "
- "storage type %s (protocol: %s)'"),
+ "storage type %s (protocol: %s)"),
virStorageTypeToString(src->type),
virStorageNetProtocolTypeToString(src->protocol));
return -2;
--
Daniel Veillard | Open Source and Standards, Red Hat
veillard(a)redhat.com | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | virtualization library http://libvirt.org/
[libvirt] [PATCH] Clean up chardev sockets on QEMU shutdown
by Ján Tomko
https://bugzilla.redhat.com/show_bug.cgi?id=1088787
Clean up unix socket files for chardevs using mode='bind',
like we clean up the monitor socket.
They are created by QEMU on startup and are not really useful
after it has been shut down.
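A Python illustration of the underlying behaviour (nothing QEMU-specific is
assumed here): binding a unix socket creates a filesystem entry that close()
does not remove, which is why an explicit unlink is needed on shutdown:

```python
import os
import socket
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "charchannel.sock")

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind(path)      # what QEMU does for a listen-mode (mode='bind') chardev
s.close()         # closing -- or the process dying -- does NOT remove the file

leftover = os.path.exists(path)   # True: the socket file is left behind
os.unlink(path)   # the cleanup this patch adds on qemuProcessStop()
```

Without the unlink, every guest shutdown would leave a stale socket file
behind, just like the monitor socket case already handled.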
---
src/qemu/qemu_process.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 592e3b7..f3ec246 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -2655,6 +2655,19 @@ qemuProcessPrepareChardevDevice(virDomainDefPtr def ATTRIBUTE_UNUSED,
}
+static int
+qemuProcessCleanupChardevDevice(virDomainDefPtr def ATTRIBUTE_UNUSED,
+ virDomainChrDefPtr dev,
+ void *opaque ATTRIBUTE_UNUSED)
+{
+ if (dev->source.type == VIR_DOMAIN_CHR_TYPE_UNIX &&
+ dev->source.data.nix.listen)
+ unlink(dev->source.data.nix.path);
+
+ return 0;
+}
+
+
struct qemuProcessHookData {
virConnectPtr conn;
virDomainObjPtr vm;
@@ -4339,6 +4352,12 @@ void qemuProcessStop(virQEMUDriverPtr driver,
priv->monConfig = NULL;
}
+ ignore_value(virDomainChrDefForeach(vm->def,
+ false,
+ qemuProcessCleanupChardevDevice,
+ NULL));
+
+
/* shut it off for sure */
ignore_value(qemuProcessKill(vm,
VIR_QEMU_PROCESS_KILL_FORCE|
--
1.8.3.2