[libvirt] [PATCH] storage: Flush host cache after write
by Michal Privoznik
Although we flush the host cache after some critical writes (e.g.
volume creation), after some others we do not (e.g. volume cloning).
This patch fixes that for volume cloning, logical volume header
writing, and volume wiping.
---
src/storage/storage_backend.c | 8 ++++++++
src/storage/storage_backend_logical.c | 7 +++++++
src/storage/storage_driver.c | 8 ++++++++
3 files changed, 23 insertions(+), 0 deletions(-)
diff --git a/src/storage/storage_backend.c b/src/storage/storage_backend.c
index 6243d1e..889f530 100644
--- a/src/storage/storage_backend.c
+++ b/src/storage/storage_backend.c
@@ -208,6 +208,14 @@ virStorageBackendCopyToFD(virStorageVolDefPtr vol,
} while ((amtleft -= interval) > 0);
}
+ if (fdatasync(fd) < 0) {
+ ret = -errno;
+ virReportSystemError(errno, _("cannot sync data to file '%s'"),
+ vol->target.path);
+ goto cleanup;
+ }
+
+
if (VIR_CLOSE(inputfd) < 0) {
ret = -errno;
virReportSystemError(errno,
diff --git a/src/storage/storage_backend_logical.c b/src/storage/storage_backend_logical.c
index c622d2a..ca4166d 100644
--- a/src/storage/storage_backend_logical.c
+++ b/src/storage/storage_backend_logical.c
@@ -424,6 +424,13 @@ virStorageBackendLogicalBuildPool(virConnectPtr conn ATTRIBUTE_UNUSED,
VIR_FORCE_CLOSE(fd);
goto cleanup;
}
+ if (fsync(fd) < 0) {
+ virReportSystemError(errno,
+ _("cannot flush header of device'%s'"),
+ pool->def->source.devices[i].path);
+ VIR_FORCE_CLOSE(fd);
+ goto cleanup;
+ }
if (VIR_CLOSE(fd) < 0) {
virReportSystemError(errno,
_("cannot close device '%s'"),
diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
index 6715790..68cac1f 100644
--- a/src/storage/storage_driver.c
+++ b/src/storage/storage_driver.c
@@ -1777,6 +1777,14 @@ storageWipeExtent(virStorageVolDefPtr vol,
remaining -= written;
}
+ if (fdatasync(fd) < 0) {
+ ret = -errno;
+ virReportSystemError(errno,
+ _("cannot sync data to volume with path '%s'"),
+ vol->target.path);
+ goto out;
+ }
+
VIR_DEBUG("Wrote %zu bytes to volume with path '%s'",
*bytes_wiped, vol->target.path);
--
1.7.3.4
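
For context, the pattern this patch enforces, reduced to a minimal standalone
sketch (plain POSIX, not libvirt code; the helper name is made up): flush
file-backed data with fdatasync() before closing the descriptor, so that
delayed write-back errors reach the caller instead of being silently dropped.

#include <fcntl.h>
#include <unistd.h>

/* Copy infd to dstpath and flush it; returns 0 on success, -1 on error.
 * Partial writes are treated as errors to keep the sketch short. */
static int
copy_and_flush(int infd, const char *dstpath)
{
    char buf[64 * 1024];
    ssize_t n;
    int fd = open(dstpath, O_WRONLY | O_CREAT | O_TRUNC, 0600);

    if (fd < 0)
        return -1;

    while ((n = read(infd, buf, sizeof(buf))) > 0) {
        if (write(fd, buf, n) != n)
            goto error;
    }
    if (n < 0)
        goto error;

    /* Without this, errors from delayed write-back may never be reported. */
    if (fdatasync(fd) < 0)
        goto error;

    return close(fd);

 error:
    close(fd);
    return -1;
}

fdatasync() flushes the file data and only the metadata needed to retrieve it,
which is enough for volume payloads; the header write in the logical backend
uses plain fsync() instead.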
[libvirt] [PATCH v2] daemon: initialize GnuTLS
by Michal Privoznik
When spice_tls is set but listen_tls is not, we don't initialize the
GnuTLS library. So any later gnutls call (e.g. during migration,
where we initialize a certificate) will access uninitialized GnuTLS
internal structs and throw an error.
Although we might now initialize GnuTLS twice, that is safe according
to the documentation:
This function can be called many times,
but will only do something the first time.
This patch creates two functions, virNetTLSInit and virNetTLSDeinit,
in line with the above.
---
diff to v1:
- moved from qemu to daemon
- created special init function
daemon/libvirtd.c | 2 ++
src/rpc/virnettlscontext.c | 25 ++++++++++++++++++++++---
src/rpc/virnettlscontext.h | 3 +++
3 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/daemon/libvirtd.c b/daemon/libvirtd.c
index b99c637..0530ba5 100644
--- a/daemon/libvirtd.c
+++ b/daemon/libvirtd.c
@@ -1516,6 +1516,7 @@ int main(int argc, char **argv) {
virHookCall(VIR_HOOK_DRIVER_DAEMON, "-", VIR_HOOK_DAEMON_OP_START,
0, "start", NULL);
+ virNetTLSInit();
if (daemonSetupNetworking(srv, config,
sock_file, sock_file_ro,
ipsock, privileged) < 0) {
@@ -1554,6 +1555,7 @@ cleanup:
virNetServerProgramFree(qemuProgram);
virNetServerClose(srv);
virNetServerFree(srv);
+ virNetTLSDeinit();
if (statuswrite != -1) {
if (ret != 0) {
/* Tell parent of daemon what failed */
diff --git a/src/rpc/virnettlscontext.c b/src/rpc/virnettlscontext.c
index 19a9b25..8482eaf 100644
--- a/src/rpc/virnettlscontext.c
+++ b/src/rpc/virnettlscontext.c
@@ -679,9 +679,6 @@ static virNetTLSContextPtr virNetTLSContextNew(const char *cacert,
ctxt->refs = 1;
- /* Initialise GnuTLS. */
- gnutls_global_init();
-
if ((gnutlsdebug = getenv("LIBVIRT_GNUTLS_DEBUG")) != NULL) {
int val;
if (virStrToLong_i(gnutlsdebug, NULL, 10, &val) < 0)
@@ -1399,3 +1396,25 @@ void virNetTLSSessionFree(virNetTLSSessionPtr sess)
virMutexDestroy(&sess->lock);
VIR_FREE(sess);
}
+
+/*
+ * This function MUST be called before any
+ * other virNetTLS* function because it
+ * initializes the underlying GnuTLS library.
+ * According to its documentation, it is safe
+ * to call it many times, but it is not thread
+ * safe. Each call SHOULD be paired with a
+ * later call to virNetTLSDeinit.
+ */
+void virNetTLSInit(void)
+{
+ gnutls_global_init();
+}
+
+/*
+ * See virNetTLSInit
+ */
+void virNetTLSDeinit(void)
+{
+ gnutls_global_deinit();
+}
diff --git a/src/rpc/virnettlscontext.h b/src/rpc/virnettlscontext.h
index 641d67e..99f31b9 100644
--- a/src/rpc/virnettlscontext.h
+++ b/src/rpc/virnettlscontext.h
@@ -30,6 +30,9 @@ typedef struct _virNetTLSSession virNetTLSSession;
typedef virNetTLSSession *virNetTLSSessionPtr;
+void virNetTLSInit(void);
+void virNetTLSDeinit(void);
+
virNetTLSContextPtr virNetTLSContextNewServerPath(const char *pkipath,
bool tryUserPkiPath,
const char *const*x509dnWhitelist,
--
1.7.3.4
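
As a standalone illustration (not libvirt code) of the pairing this patch
introduces: gnutls_global_init() must run before any other gnutls_* call,
may be called repeatedly, and is matched by gnutls_global_deinit().

#include <gnutls/gnutls.h>
#include <stdio.h>

int main(void)
{
    /* Must precede any other gnutls_* call; repeated calls are allowed. */
    if (gnutls_global_init() < 0) {
        fprintf(stderr, "failed to initialize GnuTLS\n");
        return 1;
    }

    /* ... create credentials, sessions, etc. ... */

    /* Pairs with the init above. */
    gnutls_global_deinit();
    return 0;
}

The calls are not thread safe, so they belong early in startup, as the
libvirtd.c hunk above does.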
13 years, 4 months
[libvirt] [PATCH] schedinfo: add missing documentation
by Taku Izumi
This patch adds the missing documentation for the scheduler parameters
"vcpu_period" and "vcpu_quota".
Signed-off-by: Taku Izumi <izumi.taku(a)jp.fujitsu.com>
---
tools/virsh.pod | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
Index: libvirt/tools/virsh.pod
===================================================================
--- libvirt.orig/tools/virsh.pod
+++ libvirt/tools/virsh.pod
@@ -712,7 +712,9 @@ I<domain-id>
Allows you to show (and set) the domain scheduler parameters. The parameters
available for each hypervisor are:
-LXC, QEMU/KVM (posix scheduler): cpu_shares
+LXC (posix scheduler) : cpu_shares
+
+QEMU/KVM (posix scheduler): cpu_shares, vcpu_period, vcpu_quota
Xen (credit scheduler): weight, cap
@@ -729,6 +731,10 @@ Therefore, -1 is a useful shorthand for
B<Note>: The weight and cap parameters are defined only for the
XEN_CREDIT scheduler and are now I<DEPRECATED>.
+B<Note>: The vcpu_period parameter has a valid value range of 1000-1000000 or
+0, and the vcpu_quota parameter has a valid value range of 1000-18446744073709551
+or less than 0.
+
=item B<screenshot> I<domain-id> [I<imagefilepath>] [I<--screen> B<screenID>]
Takes a screenshot of a current domain console and stores it into a file.
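
To illustrate what the two parameters do, here is a hedged sketch against the
C API (the helper name and the chosen numbers are illustrative, not part of
this patch); a vcpu_period of 100000 us combined with a vcpu_quota of 50000 us
caps each vcpu at roughly half of one physical CPU.

#include <string.h>
#include <libvirt/libvirt.h>

/* Illustrative helper: cap each vcpu of "dom" at ~50% of one physical CPU. */
static int
cap_vcpu_bandwidth(virDomainPtr dom)
{
    virTypedParameter params[2];

    memset(params, 0, sizeof(params));

    strncpy(params[0].field, VIR_DOMAIN_SCHEDULER_VCPU_PERIOD,
            VIR_TYPED_PARAM_FIELD_LENGTH - 1);
    params[0].type = VIR_TYPED_PARAM_ULLONG;
    params[0].value.ul = 100000;        /* enforcement period: 100 ms */

    strncpy(params[1].field, VIR_DOMAIN_SCHEDULER_VCPU_QUOTA,
            VIR_TYPED_PARAM_FIELD_LENGTH - 1);
    params[1].type = VIR_TYPED_PARAM_LLONG;
    params[1].value.l = 50000;          /* at most 50 ms of CPU time per period */

    return virDomainSetSchedulerParametersFlags(dom, params, 2,
                                                VIR_DOMAIN_AFFECT_LIVE);
}

virsh schedinfo exposes the same parameters, which is what the virsh.pod hunk
above documents.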
[libvirt] [PATCH] maint: ignore generated file
by Eric Blake
I did 'git add .' while in the middle of 'make syntax-check', and
it picked up a temporary file that should not be committed.
* .gitignore: Ignore sc_* from syntax check.
---
Pushing under the trivial rule.
.gitignore | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/.gitignore b/.gitignore
index e8b2dbf..39ecd6d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -53,6 +53,7 @@
/po/*
/proxy/
/python/generator.py.stamp
+/sc_*
/src/libvirt_iohelper
/src/locking/qemu-sanlock.conf
/src/remote/*_client_bodies.h
--
1.7.4.4
[libvirt] [RFC] Should libvirt be a proxy for migration data
by Jiri Denemark
Hi all,
Currently when we start a non-tunneled migration, data go straight from
source qemu to destination qemu. This is nice in that there is no additional
overhead but it also has several disadvantages. If the communication between
source and destination qemu breaks, we only get an unexpected error message from
qemu with no clue about what happened. Another issue is that if qemu cannot
send migration data, we cannot cancel the migration because migrate_cancel
blocks until all buffers with migration data queued up for transmission are
written into the socket.
Given that, I think we should act as a proxy between source and destination
qemu so that we can detect and report normal errors (such as connection reset
by peer) and cancel migration at any time. Since we have virNetSocket and we
already use that for connecting to destination qemu, we should use it for
proxying migration data as well. This approach also has some disadvantages,
e.g., a single libvirt thread instead of several qemu processes will now send
migration data from all domains that are being migrated. However, I feel like
the gain is bigger than the downside. And we already do the same for tunneled
migration anyway.
Any objections?
Jirka
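
To make the proposal concrete, a rough sketch in plain POSIX terms (not the
virNetSocket API; the function is hypothetical) of what proxying buys us:
libvirt copies the stream itself, so it sees socket errors directly and can
stop forwarding at any point to cancel.

#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* Relay migration data from srcfd (source qemu) to dstfd (destination qemu).
 * Returns 0 on clean EOF, 1 if cancelled by the caller, -1 on socket error. */
static int
relay_migration_data(int srcfd, int dstfd, volatile sig_atomic_t *cancel)
{
    char buf[64 * 1024];

    while (!*cancel) {
        ssize_t got = read(srcfd, buf, sizeof(buf));

        if (got == 0)
            return 0;                   /* source qemu finished sending */
        if (got < 0) {
            if (errno == EINTR)
                continue;
            return -1;                  /* error seen by libvirt, not just qemu */
        }

        char *p = buf;
        while (got > 0) {
            ssize_t sent = write(dstfd, p, got);
            if (sent < 0) {
                if (errno == EINTR)
                    continue;
                return -1;              /* e.g. connection reset by peer */
            }
            p += sent;
            got -= sent;
        }
    }
    return 1;                           /* caller requested cancellation */
}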
Re: [libvirt] [Qemu-devel] Killing block migration in qemu?
by Anthony Liguori
On 08/17/2011 11:51 AM, Paolo Bonzini wrote:
> Hi all,
>
> following discussions yesterday with Juan Quintela and Marcelo Tosatti,
> here is my humble proposal: remove block migration from qemu master.
+1
I was going to propose the same thing after the conference actually.
Regards,
Anthony Liguori
[libvirt] RFC (V2) New virDomainBlockPull API family to libvirt
by Adam Litke
Here are the patches to implement the BlockPull/BlockJob API as discussed and
agreed to. I am testing with a python script (included for completeness as the
final patch). The qemu monitor interface is not expected to change in the
future. Stefan is planning to submit placeholder commands for upstream qemu
until the generic streaming support is implemented.
Changes since V1:
- Make virDomainBlockPullAbort() and virDomainGetBlockPullInfo() into a
generic BlockJob interface.
- Added virDomainBlockJobSetSpeed()
- Rename VIR_DOMAIN_EVENT_ID_BLOCK_PULL event to fit into block job API
- Add bandwidth argument to virDomainBlockPull()
Summary of changes since first generation patch series:
- Qemu dropped incremental streaming so remove libvirt incremental
BlockPull() API
- Rename virDomainBlockPullAll() to virDomainBlockPull()
- Changes required to qemu monitor handlers for changed command names
--
To help speed the provisioning process for large domains, new QED disks are
created with backing to a template image. These disks are configured with
copy on read such that blocks that are read from the backing file are copied
to the new disk. This reduces I/O over a potentially costly path to the
backing image.
In such a configuration, there is a desire to remove the dependency on the
backing image as the domain runs. To accomplish this, qemu will provide an
interface to perform sequential copy on read operations during normal VM
operation. Once all data has been copied, the disk image's link to the
backing file is removed.
The virDomainBlockPull API family brings this functionality to libvirt.
virDomainBlockPull() instructs the hypervisor to stream the entire device in
the background. Progress of this operation can be checked with the function
virDomainBlockJobInfo(). An ongoing stream can be cancelled with
virDomainBlockJobAbort(). virDomainBlockJobSetSpeed() allows you to limit the
bandwidth that the operation may consume.
An event (VIR_DOMAIN_EVENT_ID_BLOCK_JOB) will be emitted when a disk has been
fully populated or if a BlockPull() operation was terminated due to an error.
This event is useful to avoid polling on virDomainBlockJobInfo() for
completion and could also be used by the security driver to revoke access to
the backing file when it is no longer needed.
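For readers unfamiliar with the calls, here is a client-side sketch of the
intended flow (the spellings follow the libvirt API, where the progress query
ended up as virDomainGetBlockJobInfo(); the disk name and polling loop are
illustrative only).

#include <stdio.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

/* Illustrative only: start a pull on "vda" and poll until the job is gone. */
static int
pull_backing_file(virDomainPtr dom)
{
    virDomainBlockJobInfo info;
    int r;

    if (virDomainBlockPull(dom, "vda", 0 /* unlimited bandwidth */, 0) < 0)
        return -1;

    while ((r = virDomainGetBlockJobInfo(dom, "vda", &info, 0)) == 1) {
        printf("pulled %llu of %llu\n",
               (unsigned long long)info.cur, (unsigned long long)info.end);
        sleep(1);
    }
    return r;   /* 0 once the job has finished, -1 on error */
}

A real client would register for VIR_DOMAIN_EVENT_ID_BLOCK_JOB instead of
polling, as noted above.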
make check: PASS
make syntax-check: PASS
make -C tests valgrind: PASS
[PATCH 1/8] Add new API virDomainBlockPull* to headers
[PATCH 2/8] virDomainBlockPull: Implement the main entry points
[PATCH 3/8] Add virDomainBlockPull support to the remote driver
[PATCH 4/8] Implement virDomainBlockPull for the qemu driver
[PATCH 5/8] Enable the virDomainBlockPull API in virsh
[PATCH 6/8] Enable virDomainBlockPull in the python API.
[PATCH 7/8] Asynchronous event for BlockJob completion
[PATCH 8/8] Test the blockJob/BlockPull API
Re: [libvirt] [Qemu-devel] Killing block migration in qemu?
by Stefan Hajnoczi
On Wed, Aug 17, 2011 at 5:51 PM, Paolo Bonzini <pbonzini(a)redhat.com> wrote:
> following discussions yesterday with Juan Quintela and Marcelo Tosatti, here
> is my humble proposal: remove block migration from qemu master. It seems to
> me that keeping block migration is going to slow down further improvements
> on migration. The main problems are:
>
> 1) there are very good reasons to move migration to a separate thread. Only
> a limited amount of extra locking, perhaps none is needed in order to do so
> for RAM and devices. But the block drivers pretty much need to run under
> the I/O thread lock, and coroutines will not help if the I/O thread is taken
> by another thread. It's hard/unreliable/pointless to ping-pong migration
> between threads.
The image streaming approach will also run in the I/O thread for the
mid-term future. Is the problem that the block migration code today
is too tied into the actual migration code path and therefore prevents
it from being used when migration happens in a separate thread?
>
> 2) there already are plans to reimplement block migration... it's called
> streaming :) and not coincidentially it reuses some of the block migration
> code.
What are the concrete issues with the existing block migration code?
I'm not disagreeing that we should move to a streaming approach but I
simply don't know the details of the existing block migration code.
> Here is how it would go:
This sounds reasonable. In fact we can do both pre-copy and post-copy
block migration using streaming (+mirroring).
Stefan