[libvirt] [PATCH v4 0/3] OVMF exposure
by Michal Privoznik
diff to v3:
-Pure rebase & resend
diff to v2:
-Adapted to Laszlo's review on v2
Michal Privoznik (3):
conf: Extend <loader/> and introduce <nvram/>
qemu: Implement extended loader and nvram
qemu: Automatically create NVRAM store
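For reference, the domain XML this series enables looks roughly like this (a minimal sketch based on the docs changes in this series; the firmware paths are illustrative):

<os>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
</os>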
docs/formatdomain.html.in | 19 ++-
docs/schemas/domaincommon.rng | 21 ++++
libvirt.spec.in | 2 +
src/Makefile.am | 1 +
src/conf/domain_conf.c | 87 +++++++++++++-
src/conf/domain_conf.h | 22 +++-
src/libvirt_private.syms | 3 +
src/qemu/libvirtd_qemu.aug | 3 +
src/qemu/qemu.conf | 14 +++
src/qemu/qemu_command.c | 97 ++++++++++++++-
src/qemu/qemu_conf.c | 93 +++++++++++++++
src/qemu/qemu_conf.h | 5 +
src/qemu/qemu_process.c | 132 +++++++++++++++++++++
src/qemu/test_libvirtd_qemu.aug.in | 3 +
src/security/security_dac.c | 8 ++
src/security/security_selinux.c | 8 ++
src/security/virt-aa-helper.c | 4 +-
src/vbox/vbox_common.c | 7 +-
src/xenapi/xenapi_driver.c | 3 +-
src/xenconfig/xen_common.c | 7 +-
src/xenconfig/xen_sxpr.c | 16 +--
.../qemuxml2argvdata/qemuxml2argv-bios-nvram.args | 10 ++
tests/qemuxml2argvdata/qemuxml2argv-bios-nvram.xml | 40 +++++++
tests/qemuxml2argvtest.c | 2 +
.../qemuxml2xmlout-pci-bridge-many-disks.xml | 2 +-
tests/qemuxml2xmltest.c | 2 +
tests/sexpr2xmldata/sexpr2xml-fv-autoport.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-empty-kernel.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-force-hpet.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-force-nohpet.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-kernel.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-legacy-vfb.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-localtime.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-net-ioemu.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-net-netfront.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-parallel-tcp.xml | 2 +-
.../sexpr2xml-fv-serial-dev-2-ports.xml | 2 +-
.../sexpr2xml-fv-serial-dev-2nd-port.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-serial-file.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-serial-null.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-serial-pipe.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-serial-pty.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-serial-stdio.xml | 2 +-
.../sexpr2xml-fv-serial-tcp-telnet.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-serial-tcp.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-serial-udp.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-serial-unix.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-sound-all.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-sound.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-usbmouse.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-usbtablet.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-utc.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv-v2.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-fv.xml | 2 +-
tests/sexpr2xmldata/sexpr2xml-no-source-cdrom.xml | 2 +-
tests/xmconfigdata/test-escape-paths.xml | 2 +-
tests/xmconfigdata/test-fullvirt-force-hpet.xml | 2 +-
tests/xmconfigdata/test-fullvirt-force-nohpet.xml | 2 +-
tests/xmconfigdata/test-fullvirt-localtime.xml | 2 +-
tests/xmconfigdata/test-fullvirt-net-ioemu.xml | 2 +-
tests/xmconfigdata/test-fullvirt-net-netfront.xml | 2 +-
tests/xmconfigdata/test-fullvirt-new-cdrom.xml | 2 +-
tests/xmconfigdata/test-fullvirt-old-cdrom.xml | 2 +-
tests/xmconfigdata/test-fullvirt-parallel-tcp.xml | 2 +-
.../test-fullvirt-serial-dev-2-ports.xml | 2 +-
.../test-fullvirt-serial-dev-2nd-port.xml | 2 +-
tests/xmconfigdata/test-fullvirt-serial-file.xml | 2 +-
tests/xmconfigdata/test-fullvirt-serial-null.xml | 2 +-
tests/xmconfigdata/test-fullvirt-serial-pipe.xml | 2 +-
tests/xmconfigdata/test-fullvirt-serial-pty.xml | 2 +-
tests/xmconfigdata/test-fullvirt-serial-stdio.xml | 2 +-
.../test-fullvirt-serial-tcp-telnet.xml | 2 +-
tests/xmconfigdata/test-fullvirt-serial-tcp.xml | 2 +-
tests/xmconfigdata/test-fullvirt-serial-udp.xml | 2 +-
tests/xmconfigdata/test-fullvirt-serial-unix.xml | 2 +-
tests/xmconfigdata/test-fullvirt-sound.xml | 2 +-
tests/xmconfigdata/test-fullvirt-usbmouse.xml | 2 +-
tests/xmconfigdata/test-fullvirt-usbtablet.xml | 2 +-
tests/xmconfigdata/test-fullvirt-utc.xml | 2 +-
tests/xmconfigdata/test-no-source-cdrom.xml | 2 +-
tests/xmconfigdata/test-pci-devs.xml | 2 +-
81 files changed, 639 insertions(+), 82 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-bios-nvram.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-bios-nvram.xml
--
1.8.5.5
[libvirt] [PATCH 00/11] bulk stats: QEMU implementation
by Francesco Romani
This patchset enhances the QEMU support for the new bulk stats API.
What is added is the equivalent of these APIs:
virDomainBlockInfo
virDomainGetInfo - for balloon stats
virDomainGetCPUStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags
virDomainGetVcpus
This subset of APIs is the one oVirt relies on.
The patchset is organized as follows:
- the first 4 patches do refactoring to extract internal helper
functions to be used by the old API and by the new bulk one.
For block stats, one helper is actually added instead of extracted.
- since some groups require access to the QEMU monitor, one patch
extends the internal interface to easily accommodate that
- finally, the last six patches implement the support for the
bulk API, one stats group per patch (see the sketch below).
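For illustration, the rough shape a per-group worker table might take under this design (a sketch only; the names, types and signatures here are hypothetical, not necessarily what the patches introduce):

/* Hypothetical sketch of a per-group stats worker; the real libvirt
 * types and names are whatever the patches define. */
typedef int (*qemuStatsWorker)(virDomainObjPtr dom,         /* domain to sample */
                               virDomainStatsRecordPtr rec, /* typed-params output */
                               int *maxparams);

struct qemuStatsGroup {
    qemuStatsWorker func;  /* fills in one group of stats */
    unsigned int flag;     /* matching VIR_DOMAIN_STATS_* bit */
    bool needsMonitor;     /* group needs a QEMU monitor call */
};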
Francesco Romani (11):
qemu: extract helper to get the current balloon
qemu: extract helper to gather vcpu data
qemu: add helper to get the block stats
qemu: extract helper to get block info
qemu: bulk stats: pass connection to workers
qemu: bulk stats: implement CPU stats group
qemu: bulk stats: implement balloon group
qemu: bulk stats: implement VCPU group
qemu: bulk stats: implement interface group
qemu: bulk stats: implement block group
qemu: bulk stats: implement blockinfo group
include/libvirt/libvirt.h.in | 6 +
src/qemu/qemu_driver.c | 558 ++++++++++++++++++++++++++++++++++++++-----
2 files changed, 502 insertions(+), 62 deletions(-)
--
1.9.3
[libvirt] [PATCH] Fix connection to already running session libvirtd
by Christophe Fergeau
Since 1b807f92, connecting with virsh to an already running session
libvirtd fails with:
$ virsh list --all
error: failed to connect to the hypervisor
error: no valid connection
error: Failed to connect socket to '/run/user/1000/libvirt/libvirt-sock': Transport endpoint is already connected
This is caused by a logic error in virNetSocketNewConnectUNIX: even if
the connection to the daemon socket succeeded, we still try to spawn the
daemon and then connect to it.
This commit changes the logic to not try to spawn libvirtd if we
successfully connected to its socket.
With whitespace changes removed, this patch becomes just this:
diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index f913365..79540b3 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -574,7 +574,8 @@ int virNetSocketNewConnectUNIX(const char *path,
remoteAddr.data.un.sun_path[0] = '\0';
retry:
- if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0 && !spawnDaemon) {
+ if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
+ if (!spawnDaemon) {
virReportSystemError(errno, _("Failed to connect socket to '%s'"),
path);
goto error;
@@ -634,6 +635,7 @@ int virNetSocketNewConnectUNIX(const char *path,
if (virNetSocketForkDaemon(binary, passfd) < 0)
goto error;
}
+ }
localAddr.len = sizeof(localAddr.data);
if (getsockname(fd, &localAddr.data.sa, &localAddr.len) < 0) {
---
src/rpc/virnetsocket.c | 102 +++++++++++++++++++++++++------------------------
1 file changed, 52 insertions(+), 50 deletions(-)
diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index 79258ef..8fc5d80 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -573,65 +573,67 @@ int virNetSocketNewConnectUNIX(const char *path,
remoteAddr.data.un.sun_path[0] = '\0';
retry:
- if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0 && !spawnDaemon) {
- virReportSystemError(errno, _("Failed to connect socket to '%s'"),
- path);
- goto error;
- } else if (spawnDaemon) {
- int status = 0;
- pid_t pid = 0;
-
- if ((passfd = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
- virReportSystemError(errno, "%s", _("Failed to create socket"));
+ if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
+ if (!spawnDaemon) {
+ virReportSystemError(errno, _("Failed to connect socket to '%s'"),
+ path);
goto error;
- }
+ } else if (spawnDaemon) {
+ int status = 0;
+ pid_t pid = 0;
- /*
- * We have to fork() here, because umask() is set
- * per-process, chmod() is racy and fchmod() has undefined
- * behaviour on sockets according to POSIX, so it doesn't
- * work outside Linux.
- */
- if ((pid = virFork()) < 0)
- goto error;
+ if ((passfd = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
+ virReportSystemError(errno, "%s", _("Failed to create socket"));
+ goto error;
+ }
- if (pid == 0) {
- umask(0077);
- if (bind(passfd, &remoteAddr.data.sa, remoteAddr.len) < 0)
- _exit(EXIT_FAILURE);
+ /*
+ * We have to fork() here, because umask() is set
+ * per-process, chmod() is racy and fchmod() has undefined
+ * behaviour on sockets according to POSIX, so it doesn't
+ * work outside Linux.
+ */
+ if ((pid = virFork()) < 0)
+ goto error;
- _exit(EXIT_SUCCESS);
- }
+ if (pid == 0) {
+ umask(0077);
+ if (bind(passfd, &remoteAddr.data.sa, remoteAddr.len) < 0)
+ _exit(EXIT_FAILURE);
- if (virProcessWait(pid, &status, false) < 0)
- goto error;
+ _exit(EXIT_SUCCESS);
+ }
- if (status != EXIT_SUCCESS) {
- /*
- * OK, so the subprocces failed to bind() the socket. This may mean
- * that another daemon was starting at the same time and succeeded
- * with its bind(). So we'll try connecting again, but this time
- * without spawning the daemon.
- */
- spawnDaemon = false;
- goto retry;
- }
+ if (virProcessWait(pid, &status, false) < 0)
+ goto error;
- if (listen(passfd, 0) < 0) {
- virReportSystemError(errno, "%s",
- _("Failed to listen on socket that's about "
- "to be passed to the daemon"));
- goto error;
- }
+ if (status != EXIT_SUCCESS) {
+ /*
+ * OK, so the subprocces failed to bind() the socket. This may mean
+ * that another daemon was starting at the same time and succeeded
+ * with its bind(). So we'll try connecting again, but this time
+ * without spawning the daemon.
+ */
+ spawnDaemon = false;
+ goto retry;
+ }
- if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
- virReportSystemError(errno, _("Failed to connect socket to '%s'"),
- path);
- goto error;
- }
+ if (listen(passfd, 0) < 0) {
+ virReportSystemError(errno, "%s",
+ _("Failed to listen on socket that's about "
+ "to be passed to the daemon"));
+ goto error;
+ }
- if (virNetSocketForkDaemon(binary, passfd) < 0)
- goto error;
+ if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
+ virReportSystemError(errno, _("Failed to connect socket to '%s'"),
+ path);
+ goto error;
+ }
+
+ if (virNetSocketForkDaemon(binary, passfd) < 0)
+ goto error;
+ }
}
localAddr.len = sizeof(localAddr.data);
--
1.9.3
[libvirt] [PATCH v2] Fix connection to already running session libvirtd
by Christophe Fergeau
Since 1b807f92, connecting with virsh to an already running session
libvirtd fails with:
$ virsh list --all
error: failed to connect to the hypervisor
error: no valid connection
error: Failed to connect socket to '/run/user/1000/libvirt/libvirt-sock': Transport endpoint is already connected
This is caused by a logic error in virNetSocketNewConnectUNIX: even if
the connection to the daemon socket succeeded, we still try to spawn the
daemon and then connect to it.
This commit changes the logic to not try to spawn libvirtd if we
successfully connected to its socket.
Most of this commit is whitespace changes; use diff -w to review it.
---
Changes since v1:
- Removed now redundant test in the else branch
src/rpc/virnetsocket.c | 102 +++++++++++++++++++++++++------------------------
1 file changed, 52 insertions(+), 50 deletions(-)
diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index 79258ef..9780e17 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -573,65 +573,67 @@ int virNetSocketNewConnectUNIX(const char *path,
remoteAddr.data.un.sun_path[0] = '\0';
retry:
- if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0 && !spawnDaemon) {
- virReportSystemError(errno, _("Failed to connect socket to '%s'"),
- path);
- goto error;
- } else if (spawnDaemon) {
- int status = 0;
- pid_t pid = 0;
-
- if ((passfd = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
- virReportSystemError(errno, "%s", _("Failed to create socket"));
+ if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
+ if (!spawnDaemon) {
+ virReportSystemError(errno, _("Failed to connect socket to '%s'"),
+ path);
goto error;
- }
+ } else {
+ int status = 0;
+ pid_t pid = 0;
- /*
- * We have to fork() here, because umask() is set
- * per-process, chmod() is racy and fchmod() has undefined
- * behaviour on sockets according to POSIX, so it doesn't
- * work outside Linux.
- */
- if ((pid = virFork()) < 0)
- goto error;
+ if ((passfd = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
+ virReportSystemError(errno, "%s", _("Failed to create socket"));
+ goto error;
+ }
- if (pid == 0) {
- umask(0077);
- if (bind(passfd, &remoteAddr.data.sa, remoteAddr.len) < 0)
- _exit(EXIT_FAILURE);
+ /*
+ * We have to fork() here, because umask() is set
+ * per-process, chmod() is racy and fchmod() has undefined
+ * behaviour on sockets according to POSIX, so it doesn't
+ * work outside Linux.
+ */
+ if ((pid = virFork()) < 0)
+ goto error;
- _exit(EXIT_SUCCESS);
- }
+ if (pid == 0) {
+ umask(0077);
+ if (bind(passfd, &remoteAddr.data.sa, remoteAddr.len) < 0)
+ _exit(EXIT_FAILURE);
- if (virProcessWait(pid, &status, false) < 0)
- goto error;
+ _exit(EXIT_SUCCESS);
+ }
- if (status != EXIT_SUCCESS) {
- /*
- * OK, so the subprocces failed to bind() the socket. This may mean
- * that another daemon was starting at the same time and succeeded
- * with its bind(). So we'll try connecting again, but this time
- * without spawning the daemon.
- */
- spawnDaemon = false;
- goto retry;
- }
+ if (virProcessWait(pid, &status, false) < 0)
+ goto error;
- if (listen(passfd, 0) < 0) {
- virReportSystemError(errno, "%s",
- _("Failed to listen on socket that's about "
- "to be passed to the daemon"));
- goto error;
- }
+ if (status != EXIT_SUCCESS) {
+ /*
+ * OK, so the subprocces failed to bind() the socket. This may mean
+ * that another daemon was starting at the same time and succeeded
+ * with its bind(). So we'll try connecting again, but this time
+ * without spawning the daemon.
+ */
+ spawnDaemon = false;
+ goto retry;
+ }
- if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
- virReportSystemError(errno, _("Failed to connect socket to '%s'"),
- path);
- goto error;
- }
+ if (listen(passfd, 0) < 0) {
+ virReportSystemError(errno, "%s",
+ _("Failed to listen on socket that's about "
+ "to be passed to the daemon"));
+ goto error;
+ }
- if (virNetSocketForkDaemon(binary, passfd) < 0)
- goto error;
+ if (connect(fd, &remoteAddr.data.sa, remoteAddr.len) < 0) {
+ virReportSystemError(errno, _("Failed to connect socket to '%s'"),
+ path);
+ goto error;
+ }
+
+ if (virNetSocketForkDaemon(binary, passfd) < 0)
+ goto error;
+ }
}
localAddr.len = sizeof(localAddr.data);
--
1.9.3
[libvirt] [PATCH] conf: Check whether migration_host is valid during libvirt restart
by Chen Fan
If a user specified an invalid string as the migration hostname,
like setting migration_host = "XXXXXXX", libvirt should check
it and return an error during libvirt restart.
Signed-off-by: Chen Fan <chen.fan.fnst(a)cn.fujitsu.com>
---
src/qemu/qemu_conf.c | 40 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index e2ec54f..450ac5b 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -33,6 +33,7 @@
#include <fcntl.h>
#include <sys/wait.h>
#include <arpa/inet.h>
+#include <netdb.h>
#include "virerror.h"
#include "qemu_conf.h"
@@ -650,6 +651,45 @@ int virQEMUDriverConfigLoadFile(virQEMUDriverConfigPtr cfg,
GET_VALUE_LONG("seccomp_sandbox", cfg->seccompSandbox);
GET_VALUE_STR("migration_host", cfg->migrateHost);
+ if (cfg->migrateHost) {
+ struct addrinfo hints;
+ struct addrinfo *res;
+
+ memset(&hints, 0, sizeof(hints));
+ hints.ai_flags = AI_ADDRCONFIG;
+ hints.ai_family = AF_UNSPEC;
+
+ if (getaddrinfo(cfg->migrateHost, NULL, &hints, &res) != 0) {
+ virReportError(VIR_ERR_CONF_SYNTAX,
+ _("migration_host: '%s' is not a valid hostname"),
+ cfg->migrateHost);
+ goto cleanup;
+ }
+
+ if (res == NULL) {
+ virReportError(VIR_ERR_CONF_SYNTAX,
+ _("No IP address for host '%s' found"),
+ cfg->migrateHost);
+ goto cleanup;
+ }
+
+ freeaddrinfo(res);
+
+ if (STRPREFIX(cfg->migrateHost, "localhost")) {
+ virReportError(VIR_ERR_CONF_SYNTAX, "%s",
+ _("setting migration_host to 'localhost' is not allowed"));
+ goto cleanup;
+ }
+
+ if (STREQ(cfg->migrateHost, "127.0.0.1") ||
+ STREQ(cfg->migrateHost, "::1")) {
+ virReportError(VIR_ERR_CONF_SYNTAX, "%s",
+ _("setting migration_host to '127.0.0.1' or '::1' "
+ "is not allowed"));
+ goto cleanup;
+ }
+ }
+
GET_VALUE_STR("migration_address", cfg->migrationAddress);
GET_VALUE_BOOL("log_timestamp", cfg->logTimestamp);
--
1.9.3
[libvirt] [PATCH] qemu: Add network bandwidth settings for ethernet type interfaces
by Anirban Chakraborty
Please apply.
thanks,
Anirban Chakraborty
Signed-off-by: Anirban Chakraborty <abchak(a)juniper.net>
---
src/qemu/qemu_command.c | 5 +++++
src/qemu/qemu_hotplug.c | 3 +++
2 files changed, 9 insertions(+), 1 deletion(-)
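For context, this makes the standard libvirt <bandwidth> element effective for type='ethernet' interfaces; a sketch of the relevant domain XML (values illustrative; average/peak in KiB/s, burst in KiB):

<interface type='ethernet'>
  <bandwidth>
    <inbound average='1000' peak='5000' burst='1024'/>
    <outbound average='128'/>
  </bandwidth>
</interface>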
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 2184caa..258c6a7 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -7251,6 +7251,11 @@ qemuBuildInterfaceCommandLine(virCommandPtr cmd,
if (tapfd[0] < 0)
goto cleanup;
}
+ /* Configure network bandwidth for ethernet type network interfaces */
+ if (actualType == VIR_DOMAIN_NET_TYPE_ETHERNET)
+ if (virNetDevBandwidthSet(net->ifname,
+ virDomainNetGetActualBandwidth(net), false) < 0)
+ goto cleanup;
if ((actualType == VIR_DOMAIN_NET_TYPE_NETWORK ||
actualType == VIR_DOMAIN_NET_TYPE_BRIDGE ||
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index a364c52..aeb53c5 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -940,6 +940,9 @@ int qemuDomainAttachNetDevice(virConnectPtr conn,
if (qemuOpenVhostNet(vm->def, net, priv->qemuCaps, vhostfd, &vhostfdSize) < 0)
goto cleanup;
} else if (actualType == VIR_DOMAIN_NET_TYPE_ETHERNET) {
+ if (virNetDevBandwidthSet(net->ifname,
+ virDomainNetGetActualBandwidth(net), false) < 0)
+ goto cleanup;
vhostfdSize = 1;
if (VIR_ALLOC(vhostfd) < 0)
goto cleanup;
--
1.8.2.3
[libvirt] Cosmetic bug on libvirt.org
by Christophe Fergeau
Hey,
I just noticed that clicking on "FAQ" in the left sidebar on the libvirt.org
main page highlights the "Wiki" cell instead of highlighting "FAQ".
I have no idea how the website works, so I'm just reporting it here.
Cheers,
Christophe
[libvirt] [PATCH 1.2.8] storage: zfs: fix double listing of new volumes
by Roman Bogorodskiy
Currently, after calling the commands to create a new volume,
virStorageBackendZFSCreateVol calls virStorageBackendZFSFindVols, which
calls virStorageBackendZFSParseVol.
virStorageBackendZFSParseVol checks if a volume already exists by
trying to get it using virStorageVolDefFindByName.
For a just created volume it returns NULL, so volume is reported as
new and appended to pool->volumes. This causes a volume to be listed
twice as storageVolCreateXML appends this new volume to the list as
well.
Fix that by passing a new volume definition to
virStorageBackendZFSParseVol so it could determine if it needs to add
this volume to the list.
---
src/storage/storage_backend_zfs.c | 45 ++++++++++++++++++++++-----------------
1 file changed, 26 insertions(+), 19 deletions(-)
diff --git a/src/storage/storage_backend_zfs.c b/src/storage/storage_backend_zfs.c
index 2d74055..d8201ac 100644
--- a/src/storage/storage_backend_zfs.c
+++ b/src/storage/storage_backend_zfs.c
@@ -58,7 +58,8 @@ virStorageBackendZFSCheckPool(virConnectPtr conn ATTRIBUTE_UNUSED,
static int
virStorageBackendZFSParseVol(virStoragePoolObjPtr pool,
- const char *volume)
+ virStorageVolDefPtr vol,
+ const char *volume_string)
{
int ret = -1;
char **tokens;
@@ -66,9 +67,9 @@ virStorageBackendZFSParseVol(virStoragePoolObjPtr pool,
char **name_tokens = NULL;
char *vol_name;
bool is_new_vol = false;
- virStorageVolDefPtr vol = NULL;
+ virStorageVolDefPtr volume = NULL;
- if (!(tokens = virStringSplitCount(volume, "\t", 0, &count)))
+ if (!(tokens = virStringSplitCount(volume_string, "\t", 0, &count)))
return -1;
if (count != 2)
@@ -79,36 +80,41 @@ virStorageBackendZFSParseVol(virStoragePoolObjPtr pool,
vol_name = name_tokens[1];
- vol = virStorageVolDefFindByName(pool, vol_name);
+ if (vol == NULL)
+ volume = virStorageVolDefFindByName(pool, vol_name);
+ else
+ volume = vol;
- if (vol == NULL) {
- if (VIR_ALLOC(vol) < 0)
+ if (volume == NULL) {
+ if (VIR_ALLOC(volume) < 0)
goto cleanup;
is_new_vol = true;
- vol->type = VIR_STORAGE_VOL_BLOCK;
+ volume->type = VIR_STORAGE_VOL_BLOCK;
- if (VIR_STRDUP(vol->name, vol_name) < 0)
+ if (VIR_STRDUP(volume->name, vol_name) < 0)
goto cleanup;
}
- if (!vol->key && VIR_STRDUP(vol->key, tokens[0]) < 0)
+ if (!volume->key && VIR_STRDUP(volume->key, tokens[0]) < 0)
goto cleanup;
- if (vol->target.path == NULL) {
- if (virAsprintf(&vol->target.path, "%s/%s",
- pool->def->target.path, vol->name) < 0)
+ if (volume->target.path == NULL) {
+ if (virAsprintf(&volume->target.path, "%s/%s",
+ pool->def->target.path, volume->name) < 0)
goto cleanup;
}
- if (virStrToLong_ull(tokens[1], NULL, 10, &vol->target.capacity) < 0) {
+ if (virStrToLong_ull(tokens[1], NULL, 10, &volume->target.capacity) < 0) {
virReportError(VIR_ERR_INTERNAL_ERROR,
"%s", _("malformed volsize reported"));
goto cleanup;
}
if (is_new_vol &&
- VIR_APPEND_ELEMENT(pool->volumes.objs, pool->volumes.count, vol) < 0)
+ VIR_APPEND_ELEMENT(pool->volumes.objs,
+ pool->volumes.count,
+ volume) < 0)
goto cleanup;
ret = 0;
@@ -116,12 +122,13 @@ virStorageBackendZFSParseVol(virStoragePoolObjPtr pool,
virStringFreeList(tokens);
virStringFreeList(name_tokens);
if (is_new_vol && (ret == -1))
- virStorageVolDefFree(vol);
+ virStorageVolDefFree(volume);
return ret;
}
static int
-virStorageBackendZFSFindVols(virStoragePoolObjPtr pool)
+virStorageBackendZFSFindVols(virStoragePoolObjPtr pool,
+ virStorageVolDefPtr vol)
{
virCommandPtr cmd = NULL;
char *volumes_list = NULL;
@@ -157,7 +164,7 @@ virStorageBackendZFSFindVols(virStoragePoolObjPtr pool)
if (STREQ(lines[i], ""))
continue;
- if (virStorageBackendZFSParseVol(pool, lines[i]) < 0)
+ if (virStorageBackendZFSParseVol(pool, vol, lines[i]) < 0)
continue;
}
@@ -233,7 +240,7 @@ virStorageBackendZFSRefreshPool(virConnectPtr conn ATTRIBUTE_UNUSED,
}
/* Obtain a list of volumes */
- if (virStorageBackendZFSFindVols(pool) < 0)
+ if (virStorageBackendZFSFindVols(pool, NULL) < 0)
goto cleanup;
cleanup:
@@ -285,7 +292,7 @@ virStorageBackendZFSCreateVol(virConnectPtr conn ATTRIBUTE_UNUSED,
if (virCommandRun(cmd, NULL) < 0)
goto cleanup;
- if (virStorageBackendZFSFindVols(pool) < 0)
+ if (virStorageBackendZFSFindVols(pool, vol) < 0)
goto cleanup;
ret = 0;
--
2.0.2
Re: [libvirt] [Qemu-devel] IO accounting overhaul
by Stefan Hajnoczi
On Thu, Aug 28, 2014 at 04:38:09PM +0200, Benoît Canet wrote:
> I collected some items of a cloud provider wishlist regarding I/O accounting.
>
> In a cloud, I/O accounting can have 3 purposes: billing, helping the customers
> and doing metrology to help the cloud provider seek hidden costs.
>
> I'll cover the former two topics in this mail because they are the most important
> business wise.
>
> 1) preferred place to collect billing I/O accounting data:
> --------------------------------------------------------
> For billing purposes the collected data must be as close as possible to what the
> customer would see by using iostats in his VM.
>
> The first conclusion we can draw is that the choice of collecting I/O accounting
> data used for billing in the block device models is right.
I agree. When statistics are collected at lower layers it becomes hard
for the end user to understand numbers that include hidden costs for
image formats, network protocols, etc.
> 2) what to do with occurrences of rare events:
> ---------------------------------------------
>
> Another point is that QEMU developers agree that they don't know which policy
> to apply to some I/O accounting events.
> Must QEMU discard invalid write I/Os or account them as done?
> Must QEMU count a failed read I/O as done?
>
> When discussing this with a cloud provider the following became apparent: these decisions
> are really specific to each cloud provider and QEMU should not implement them.
> The right thing to do is to add accounting counters to collect these events.
>
> Moreover these rare events are precious troubleshooting data, so that's an additional
> reason not to toss them.
Sounds good; network interface statistics also include error counters.
> 3) list of block I/O accounting metrics wished for billing and helping the customers
> -----------------------------------------------------------------------------------
>
> Basic I/O accounting data will end up making the customers' bills.
> Extra I/O accounting information would be a precious help for the cloud provider
> to implement a monitoring panel like Amazon CloudWatch.
One thing to be aware of is that counters inside QEMU cannot be trusted.
If a malicious guest can overwrite memory in QEMU then the counters can
be manipulated.
For most purposes this should be okay. Just be aware that evil guests
could manipulate their counters if a security hole is found in QEMU.
> Here is the list of counters and statistics I would like to help implement in QEMU.
>
> This is the most important part of the mail and the one I would like the community
> to review the most.
>
> Once this list is settled I would proceed to implement the required infrastructure
> in QEMU before using it in the device models.
>
> /* volume of data transferred by the I/Os */
> read_bytes
> write_bytes
>
> /* operation count */
> read_ios
> write_ios
> flush_ios
>
> /* how many invalid I/Os the guest submits */
> invalid_read_ios
> invalid_write_ios
> invalid_flush_ios
>
> /* how many I/O errors happened */
> read_ios_error
> write_ios_error
> flush_ios_error
>
> /* account the time spent doing I/Os */
> total_read_time
> total_write_time
> total_flush_time
>
> /* since when the volume has been idle */
> qvolume_iddleness_time
?
>
> /* the following would compute latencies for slices of 1 second, then toss the
> * result and start a new slice. A weighted summation of the instant latencies
> * could help to implement this.
> */
> 1s_read_average_latency
> 1s_write_average_latency
> 1s_flush_average_latency
>
> /* the former three numbers could be used to further compute a 1 minute slice value */
> 1m_read_average_latency
> 1m_write_average_latency
> 1m_flush_average_latency
>
> /* the former three numbers could be used to further compute a 1 hour slice value */
> 1h_read_average_latency
> 1h_write_average_latency
> 1h_flush_average_latency
>
> /* 1 second average number of requests in flight */
> 1s_read_queue_depth
> 1s_write_queue_depth
>
> /* 1 minute average number of requests in flight */
> 1m_read_queue_depth
> 1m_write_queue_depth
>
> /* 1 hour average number of requests in flight */
> 1h_read_queue_depth
> 1h_write_queue_depth
I think libvirt captures similar data. At least virt-manager displays
graphs with similar data (maybe for CPU, memory, or network instead of
disk).
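(For illustration, one self-contained way to maintain the slice-based averages proposed above; this is not QEMU code and the names are made up:)

#include <stdint.h>

/* Accumulate total latency and request count for the current slice,
 * then publish the average and reset when the slice expires. */
typedef struct SliceStats {
    int64_t slice_start_ns;     /* when the current slice began */
    int64_t slice_len_ns;       /* 1s, 1m or 1h expressed in ns */
    uint64_t total_latency_ns;  /* sum of request latencies in slice */
    uint64_t nb_reqs;           /* requests completed in this slice */
    uint64_t avg_latency_ns;    /* last published average */
} SliceStats;

static void slice_account(SliceStats *s, int64_t now_ns, uint64_t latency_ns)
{
    if (now_ns - s->slice_start_ns >= s->slice_len_ns) {
        /* slice expired: publish its average, then start a new one */
        s->avg_latency_ns = s->nb_reqs ? s->total_latency_ns / s->nb_reqs : 0;
        s->slice_start_ns = now_ns;
        s->total_latency_ns = 0;
        s->nb_reqs = 0;
    }
    s->total_latency_ns += latency_ns;
    s->nb_reqs++;
}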
> 4) Making this happen
> -------------------------
>
> Outscale wants to make these I/O stats happen and gave me the go-ahead to do whatever
> grunt work is required to do so.
> That said, we could collaborate on some parts of the work.
Seems like a nice improvement to the query-blockstats available today.
CCing libvirt for management stack ideas.
Stefan
[libvirt] [PATCH 0/2] Couple more Coverity issues
by John Ferlan
After perusing the pile of 70 or so warnings, these two stuck out as
low-hanging fruit rather than false positives.
Many of the remaining "issues" are false positives or perhaps even
bugs in Coverity, but I understand why they're being flagged. Take
freeing memory from counted lists where the incoming count must be zero
based on the code path: Coverity flags them because the incoming
list pointer is NULL and the for-loop dereference would be bad. The issue
is that Coverity doesn't seem to dig deep enough to determine that the
count and the list pointer are linked, sigh (yes, a lot of those).
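For the curious, the flagged pattern looks roughly like this (hypothetical code, not a specific libvirt function):

#include <stdlib.h>

/* The caller guarantees count == 0 whenever list == NULL, so the loop
 * never dereferences a NULL list; Coverity doesn't connect the count
 * to the pointer and reports a possible NULL dereference anyway. */
static void
freeCountedList(char **list, size_t count)
{
    size_t i;
    for (i = 0; i < count; i++)
        free(list[i]);
    free(list);  /* free(NULL) is a harmless no-op */
}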
John Ferlan (2):
virnetserverservice: Resolve Coverity ARRAY_VS_SINGLETON
qemu_driver: Resolve Coverity FORWARD_NULL
src/qemu/qemu_driver.c | 3 +--
src/rpc/virnetserverservice.c | 2 +-
2 files changed, 2 insertions(+), 3 deletions(-)
--
1.9.3