[libvirt] [PATCH 0/3] several cgroups/cpuset fixes
by Henning Schild
Hi,
I already explained some of the cgroup problems in some detail, so I will
not do that again.
https://www.redhat.com/archives/libvir-list/2015-October/msg00876.html
I managed to solve some of the problems in the current codebase and am now
sharing the patches. But they are really just half of what I had to change
to get libvirt to behave in a system with isolated CPUs.
Other changes/hacks I am not sending here, because they do not work for
the general case:
- create machine.slice before starting libvirtd (smaller than root)
... and hope it won't grow
- disabling cpuset.cpus inheritance in libvirtd
- allowing only XML with a fully specified cputune
- set the machine cpuset to (vcpupins | emulatorpin)
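To make the third patch below concrete: against cgroup v1 cpuset, the safe
order of operations is to populate the cpuset before attaching any task. A
minimal standalone sketch, assuming cpuset is mounted at
/sys/fs/cgroup/cpuset; the path, cpu list and tid are made up:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static void writefile(const char *dir, const char *file, const char *val)
{
    char path[256];
    int fd;

    snprintf(path, sizeof(path), "%s/%s", dir, file);
    if ((fd = open(path, O_WRONLY)) < 0)
        return;
    if (write(fd, val, strlen(val)) < 0)
        perror(path);
    close(fd);
}

int main(void)
{
    const char *cg = "/sys/fs/cgroup/cpuset/machine.slice/demo"; /* made up */

    mkdir(cg, 0755);
    /* The kernel refuses to attach a task to a cpuset whose cpus/mems
     * are still unset, and attaching before narrowing would let the
     * thread run on the wrong CPUs first -- so: configure, then attach. */
    writefile(cg, "cpuset.cpus", "2-3");
    writefile(cg, "cpuset.mems", "0");
    writefile(cg, "tasks", "12345"); /* the vcpu thread's tid, only now */
    return 0;
}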
I am not sure how useful the individual fixes are; I am sending them as
concrete examples of the problems I described earlier, and I am hoping
they will start a discussion.
Henning
Henning Schild (3):
util: cgroups do not implicitly add task to new machine cgroup
qemu: do not put a task into machine cgroup
qemu cgroups: move new threads to new cgroup after cpuset is set up
src/lxc/lxc_cgroup.c | 6 ++++++
src/qemu/qemu_cgroup.c | 23 ++++++++++++++---------
src/util/vircgroup.c | 22 ----------------------
3 files changed, 20 insertions(+), 31 deletions(-)
--
2.4.10
[libvirt] [PATCH 00/21] Support NBD for tunnelled migration
by Pavel Boldin
The provided patchset implements NBD disk migration over a tunnelled
connection provided by libvirt.
The migration source instructs QEMU to NBD-mirror its drives into the provided
UNIX socket. These connections and all the data are then tunnelled to the
destination using a newly introduced RPC call. The migration destination
implements a driver method that connects the tunnelled stream to QEMU's
NBD destination.
The detailed scheme is the following:
PREPARE
1. The migration destination starts QEMU's NBD server listening on a UNIX
socket and tells it to export the listed disks via the `nbd-server-add`
monitor command; this is done by code added to qemuMigrationStartNBDServer,
which calls the introduced qemuMonitorNBDServerStartUnix monitor function.
PERFORM
2. The migration source creates a UNIX socket that is later used as the NBD
destination in the `drive-mirror` monitor command.
This is implemented as a call to virNetSocketNewListenUnix from
doTunnelMigrate.
3. The source starts an IOThread that polls on the UNIX socket, accepting
every incoming QEMU connection.
This is done by adding a new pollfd to the poll(2) call in
qemuMigrationIOFunc, which calls the introduced qemuNBDTunnelAcceptAndPipe
function.
4. The qemuNBDTunnelAcceptAndPipe function accepts the connection and creates
two virStreams. One is `local` and is associated with the just-accepted
connection; the second is `remote` and is tunnelled to the destination stream.
The `local` stream is converted to a virFDStreamDrv stream using the
virFDStreamOpen call on the fd returned by accept(2).
The `remote` stream is associated with a stream on the destination in a way
similar to that used by the PrepareTunnel3* functions. That is, the
virDomainMigrateOpenTunnel function is called on the destination
connection object. virDomainMigrateOpenTunnel calls the remote driver's
handler remoteDomainMigrateOpenTunnel, which makes a DOMAIN_MIGRATE_OPEN_TUNNEL
call to the destination host. The code in remoteDomainMigrateOpenTunnel
ties the passed virStream object to a virStream on the destination host via
the remoteStreamDrv driver. The remote driver handles the stream's IO by
tunnelling data through the RPC connection.
Finally, qemuNBDTunnelAcceptAndPipe assigns both streams the same event
callback, qemuMigrationPipeEvent, whose job is to track the status of the
streams and perform IO whenever necessary (a sketch of this accept-and-pipe
pattern follows the list).
5. The source starts the drive mirroring using the qemuMigrationDriveMirror
function, which instructs QEMU to mirror drives to the UNIX socket that the
thread listens on.
Since the drive mirror must reach the 'synchronized' state, where writes go
to both destinations simultaneously, before VM migration can continue, the
thread serving the connections must be started earlier.
6. When the connection to the UNIX socket on the migration source is made,
the DOMAIN_MIGRATE_OPEN_TUNNEL proc is called on the migration destination.
The handler of this proc calls virDomainMigrateOpenTunnel, which calls
qemuMigrationOpenNBDTunnel by means of qemuDomainMigrateOpenTunnel.
qemuMigrationOpenNBDTunnel connects the stream linked to the source's
stream to the NBD UNIX socket on the migration destination side.
7. The rest of the disk migration occurs semi-magically: the virStream* APIs
tunnel data in both directions. This is done by the qemuMigrationPipeEvent
event callback set for both streams.
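The accept-and-pipe pattern at the heart of steps 3, 4 and 7 can be
illustrated outside libvirt. A minimal standalone sketch, assuming a plain
fd stands in for the tunnelled virStream; the socket path is made up and
error handling is trimmed:

#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Listen where QEMU's drive-mirror will connect. */
static int listen_unix(const char *path)
{
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
    unlink(path);
    if (fd < 0 ||
        bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
        listen(fd, 1) < 0)
        return -1;
    return fd;
}

/* One iteration of the IOThread's job: accept a QEMU connection, then
 * shuttle bytes between it and the tunnel fd until either side EOFs.
 * NBD is bidirectional, so both directions must be relayed. */
static void accept_and_pipe(int lfd, int tunnelfd)
{
    char buf[64 * 1024];
    int qemufd = accept(lfd, NULL, NULL);
    struct pollfd pfd[2] = {
        { .fd = qemufd,   .events = POLLIN },
        { .fd = tunnelfd, .events = POLLIN },
    };

    while (poll(pfd, 2, -1) >= 0) {
        for (int i = 0; i < 2; i++) {
            ssize_t n;

            if (!(pfd[i].revents & (POLLIN | POLLHUP)))
                continue;
            if ((n = read(pfd[i].fd, buf, sizeof(buf))) <= 0)
                return; /* EOF or error: tear both streams down */
            write(pfd[1 - i].fd, buf, n); /* short writes ignored here */
        }
    }
}

int main(void)
{
    int lfd = listen_unix("/tmp/nbd-mig.sock"); /* made-up path */
    accept_and_pipe(lfd, 1 /* stand-in for the tunnel fd */);
    return 0;
}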
The order of the patches is roughly the following:
* First, the RPC machinery and remote driver's virDrvDomainMigrateOpenTunnel
implementation are added.
* Then, the source side of the protocol is implemented: code listening
on a UNIX socket is added, DriveMirror is enhanced to instruct QEMU to
`drive-mirror` there, and the IOThread driving the tunnelling is started
sooner.
* After that, the destination side of the protocol is implemented:
qemuMonitorNBDServerStartUnix is added and qemuMigrationStartNBDServer is
enhanced to call it. qemuDomainMigrateOpenTunnel is implemented
along with qemuMigrationOpenNBDTunnel, which does the real job.
* Finally, the code blocking NBD migration for tunnelled migration is
removed.
Pavel Boldin (21):
rpc: add DOMAIN_MIGRATE_OPEN_TUNNEL proc
driver: add virDrvDomainMigrateOpenTunnel
remote_driver: introduce virRemoteClientNew
remote_driver: add remoteDomainMigrateOpenTunnel
domain: add virDomainMigrateOpenTunnel
domain: add virDomainMigrateTunnelFlags
remote: impl remoteDispatchDomainMigrateOpenTunnel
qemu: migration: src: add nbd tunnel socket data
qemu: migration: src: nbdtunnel unix socket
qemu: migration: src: qemu `drive-mirror` to UNIX
qemu: migration: src: qemuSock for running thread
qemu: migration: src: add NBD unixSock to iothread
qemu: migration: src: qemuNBDTunnelAcceptAndPipe
qemu: migration: src: stream piping
qemu: monitor: add qemuMonitorNBDServerStartUnix
qemu: migration: dest: nbd-server to UNIX sock
qemu: migration: dest: qemuMigrationOpenTunnel
qemu: driver: add qemuDomainMigrateOpenTunnel
qemu: migration: dest: qemuMigrationOpenNBDTunnel
qemu: migration: allow NBD tunneling migration
apparmor: fix tunnelmigrate permissions
daemon/remote.c | 50 ++++
docs/apibuild.py | 1 +
docs/hvsupport.pl | 1 +
include/libvirt/libvirt-domain.h | 3 +
src/driver-hypervisor.h | 8 +
src/libvirt-domain.c | 43 ++++
src/libvirt_internal.h | 6 +
src/libvirt_private.syms | 1 +
src/qemu/qemu_driver.c | 24 ++
src/qemu/qemu_migration.c | 495 +++++++++++++++++++++++++++++++++------
src/qemu/qemu_migration.h | 6 +
src/qemu/qemu_monitor.c | 12 +
src/qemu/qemu_monitor.h | 2 +
src/qemu/qemu_monitor_json.c | 35 +++
src/qemu/qemu_monitor_json.h | 2 +
src/remote/remote_driver.c | 91 +++++--
src/remote/remote_protocol.x | 19 +-
src/remote_protocol-structs | 8 +
src/security/virt-aa-helper.c | 4 +-
19 files changed, 719 insertions(+), 92 deletions(-)
--
1.9.1
[libvirt] [PATCH v1] libvirtd: Increase NL buffer size for lots of interfaces
by Leno Hou
1. When switching CPUs offline/online in a system with more than 128 CPUs
2. When using virsh to destroy a domain in a system with many interfaces
In both of the above cases, nl_recv returns with the error: No buffer space
available.
This patch sets the socket buffer size to 128K and turns on message peeking
for nl_recv, which solves this problem completely and permanently.
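For reference, this is how the two libnl calls behave on a standalone
socket. A minimal sketch against libnl-3; the 128K value mirrors the patch:

#include <netlink/netlink.h>
#include <netlink/socket.h>

int main(void)
{
    struct nl_sock *sk = nl_socket_alloc();

    if (!sk || nl_connect(sk, NETLINK_ROUTE) < 0)
        return 1;

    /* Enlarge the kernel-side receive buffer (rx, tx; 0 keeps the tx
     * default) so bursts of notifications no longer overflow it. */
    if (nl_socket_set_buffer_size(sk, 128 * 1024, 0) < 0)
        return 1;

    /* Peek at each message's size first so nl_recv() can allocate a
     * large enough buffer instead of truncating oversized messages. */
    nl_socket_enable_msg_peek(sk);

    /* ... nl_recv()/nl_recvmsgs() as usual ... */

    nl_socket_free(sk);
    return 0;
}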
LTC-Bugzilla: #133359 #125768
Signed-off-by: Leno Hou <houqy(a)linux.vnet.ibm.com>
Cc: Wenyi Gao <wenyi(a)linux.vnet.ibm.com>
---
src/util/virnetlink.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/src/util/virnetlink.c b/src/util/virnetlink.c
index 679b48e..c8c9fe0 100644
--- a/src/util/virnetlink.c
+++ b/src/util/virnetlink.c
@@ -696,6 +696,14 @@ virNetlinkEventServiceStart(unsigned int protocol, unsigned int groups)
goto error_server;
}
+ if (nl_socket_set_buffer_size(srv->netlinknh, 131072, 0) < 0) {
+ virReportSystemError(errno,
+ "%s", _("cannot set netlink socket buffer size to 128k"));
+ goto error_server;
+ }
+
+ nl_socket_enable_msg_peek(srv->netlinknh);
+
if ((srv->eventwatch = virEventAddHandle(fd,
VIR_EVENT_HANDLE_READABLE,
virNetlinkEventCallback,
--
1.9.1
[libvirt] Hot plug multi function PCI devices
by Ziviani .
Hello list!
I'm new here and interested in hot-plugging multi-function PCI devices.
Basically I'd like to know why libvirt does not support it. I've been
through the archives and found this thread:
https://www.redhat.com/archives/libvir-list/2011-May/msg00457.html
But QEMU seems to handle it correctly:
virsh qemu-monitor-command --hmp fedora-23 'device_add vfio-pci,host=00:16.0,addr=08.0'
virsh qemu-monitor-command --hmp fedora-23 'device_add vfio-pci,host=00:16.3,addr=08.3'
GUEST:
# lspci
(snip)
00:08.0 Communication controller: Intel Corporation 8 Series HECI #0 (rev 04)
00:08.3 Serial controller: Intel Corporation 8 Series HECI KT (rev 04)
However, using libvirt:
% virsh attach-device fedora-23 pci_0000_00_16_0.xml --live
Device attached successfully
% virsh attach-device fedora-23 pci_0000_00_16_3.xml --live
error: Failed to attach device from pci_0000_00_16_3.xml
error: internal error: Only PCI device addresses with function=0 are supported
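For what it's worth, the failure comes from libvirt's PCI address handling;
the restriction amounts to a guard of roughly this shape (a paraphrased
sketch, not the exact domain_addr.c source):

/* Paraphrased: live attach rejects any PCI address whose function is
 * non-zero before address assignment even starts. */
if (dev->addr.pci.function != 0) {
    virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                   _("Only PCI device addresses with function=0 "
                     "are supported"));
    return -1;
}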
I made some changes to domain_addr.c [1] for testing and it worked.
[1]https://gist.github.com/jrziviani/1da184c7fd0b413e0426
% virsh attach-device fedora-23 pci_0000_00_16_3.xml --live
Device attached successfully
GUEST:
# lspci
(snip)
00:08.0 Communication controller: Intel Corporation 8 Series HECI #0 (rev 04)
00:08.3 Serial controller: Intel Corporation 8 Series HECI KT (rev 04)
So is there more to it that I'm not aware of?
Thank you!
[libvirt] [PATCH] qemu: align the cur_balloon too if not explicitly specified by the user
by Shivaprasad G Bhat
As of now, cur_balloon is set to the actual memory if not specified by the
user. When the user-specified memory is not aligned, cur_balloon alone ends
up unaligned: for qemu, in qemuDomainAttachMemory(), cur_balloon doesn't add
up to the actual memory because it was never aligned. (For example, on a
target that aligns to 256MiB boundaries, a guest defined with 4000MiB ends
up with 4096MiB of actual memory but cur_balloon still at 4000MiB.) So an
explicit setmem is needed after attach-device for such guests.
The decision whether or not to align the cur_balloon memory is not possible
if we set it to the actual memory by default in post-parse. Move the default
cur_balloon assignment into the respective drivers during domain start
wherever possible. For qemu, align cur_balloon too, i.e. assign the aligned
actual memory when it is not specified by the user.
Signed-off-by: Shivaprasad G Bhat <sbhat(a)linux.vnet.ibm.com>
---
src/conf/domain_conf.c | 3 +--
src/libxl/libxl_conf.c | 2 ++
src/lxc/lxc_process.c | 3 +++
src/openvz/openvz_driver.c | 13 +++++++------
src/phyp/phyp_driver.c | 10 +++-------
src/qemu/qemu_domain.c | 3 +++
src/uml/uml_conf.c | 3 +++
src/vbox/vbox_common.c | 3 +++
src/vmx/vmx.c | 3 +++
src/vz/vz_sdk.c | 3 +++
src/xenconfig/xen_common.c | 3 +++
src/xenconfig/xen_sxpr.c | 4 ++++
12 files changed, 38 insertions(+), 15 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 2f5c0ed..68338f4 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -3527,8 +3527,7 @@ virDomainDefPostParseMemory(virDomainDefPtr def,
return -1;
}
- if (def->mem.cur_balloon > virDomainDefGetMemoryActual(def) ||
- def->mem.cur_balloon == 0)
+ if (def->mem.cur_balloon > virDomainDefGetMemoryActual(def))
def->mem.cur_balloon = virDomainDefGetMemoryActual(def);
if ((def->mem.max_memory || def->mem.memory_slots) &&
diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c
index 4eed5ca..6b6e764 100644
--- a/src/libxl/libxl_conf.c
+++ b/src/libxl/libxl_conf.c
@@ -665,6 +665,8 @@ libxlMakeDomBuildInfo(virDomainDefPtr def,
}
b_info->sched_params.weight = 1000;
b_info->max_memkb = virDomainDefGetMemoryInitial(def);
+ if (!def->mem.cur_balloon)
+ def->mem.cur_balloon = virDomainDefGetMemoryInitial(def);
b_info->target_memkb = def->mem.cur_balloon;
if (hvm) {
char bootorder[VIR_DOMAIN_BOOT_LAST + 1];
diff --git a/src/lxc/lxc_process.c b/src/lxc/lxc_process.c
index 57e3880..201ee61 100644
--- a/src/lxc/lxc_process.c
+++ b/src/lxc/lxc_process.c
@@ -1257,6 +1257,9 @@ int virLXCProcessStart(virConnectPtr conn,
vm->def->resource = res;
}
+ if (!vm->def->mem.cur_balloon)
+ vm->def->mem.cur_balloon = virDomainDefGetMemoryActual(vm->def);
+
if (virAsprintf(&logfile, "%s/%s.log",
cfg->logDir, vm->def->name) < 0)
goto cleanup;
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index b8c0f50..8a56d94 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -1043,12 +1043,13 @@ openvzDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int fla
}
}
- if (vm->def->mem.cur_balloon > 0) {
- if (openvzDomainSetMemoryInternal(vm, vm->def->mem.cur_balloon) < 0) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not set memory size"));
- goto cleanup;
- }
+ if (!vm->def->mem.cur_balloon)
+ vm->def->mem.cur_balloon = virDomainDefGetMemoryActual(vm->def);
+
+ if (openvzDomainSetMemoryInternal(vm, vm->def->mem.cur_balloon) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Could not set memory size"));
+ goto cleanup;
}
dom = virGetDomain(conn, vm->def->name, vm->def->uuid);
diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c
index 14264c0..1fc7b34 100644
--- a/src/phyp/phyp_driver.c
+++ b/src/phyp/phyp_driver.c
@@ -3491,13 +3491,6 @@ phypBuildLpar(virConnectPtr conn, virDomainDefPtr def)
int exit_status = 0;
virBuffer buf = VIR_BUFFER_INITIALIZER;
- if (!def->mem.cur_balloon) {
- virReportError(VIR_ERR_XML_ERROR, "%s",
- _("Field <currentMemory> on the domain XML file is "
- "missing or has invalid value"));
- goto cleanup;
- }
-
if (!virDomainDefGetMemoryInitial(def)) {
virReportError(VIR_ERR_XML_ERROR, "%s",
_("Field <memory> on the domain XML file is missing or "
@@ -3505,6 +3498,9 @@ phypBuildLpar(virConnectPtr conn, virDomainDefPtr def)
goto cleanup;
}
+ if (!def->mem.cur_balloon)
+ def->mem.cur_balloon = virDomainDefGetMemoryActual(def);
+
if (def->ndisks < 1) {
virReportError(VIR_ERR_XML_ERROR, "%s",
_("Domain XML must contain at least one <disk> element."));
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 40e1f18..9d92d99 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -3567,6 +3567,9 @@ qemuDomainAlignMemorySizes(virDomainDefPtr def)
}
}
+ if (!def->mem.cur_balloon)
+ def->mem.cur_balloon = virDomainDefGetMemoryActual(def);
+
return 0;
}
diff --git a/src/uml/uml_conf.c b/src/uml/uml_conf.c
index afc0375..c53bc65 100644
--- a/src/uml/uml_conf.c
+++ b/src/uml/uml_conf.c
@@ -399,6 +399,9 @@ virCommandPtr umlBuildCommandLine(virConnectPtr conn,
virCommandAddEnvPassCommon(cmd);
+ if (!vm->def->mem.cur_balloon)
+ vm->def->mem.cur_balloon = virDomainDefGetMemoryActual(vm->def);
+
//virCommandAddArgPair(cmd, "con0", "fd:0,fd:1");
virCommandAddArgFormat(cmd, "mem=%lluK", vm->def->mem.cur_balloon);
virCommandAddArgPair(cmd, "umid", vm->def->name);
diff --git a/src/vbox/vbox_common.c b/src/vbox/vbox_common.c
index 9369367..38bf35e 100644
--- a/src/vbox/vbox_common.c
+++ b/src/vbox/vbox_common.c
@@ -1886,6 +1886,9 @@ vboxDomainDefineXMLFlags(virConnectPtr conn, const char *xml, unsigned int flags
goto cleanup;
}
+ if (!def->mem.cur_balloon)
+ def->mem.cur_balloon = virDomainDefGetMemoryActual(def);
+
rc = gVBoxAPI.UIMachine.SetMemorySize(machine,
VIR_DIV_UP(def->mem.cur_balloon, 1024));
if (NS_FAILED(rc)) {
diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c
index 7c3c10a..8f4d66a 100644
--- a/src/vmx/vmx.c
+++ b/src/vmx/vmx.c
@@ -3157,6 +3157,9 @@ virVMXFormatConfig(virVMXContext *ctx, virDomainXMLOptionPtr xmlopt, virDomainDe
virBufferAsprintf(&buffer, "memsize = \"%llu\"\n",
max_balloon / 1024); /* Scale from kilobytes to megabytes */
+ if (!def->mem.cur_balloon)
+ def->mem.cur_balloon = virDomainDefGetMemoryActual(def);
+
/* def:mem.cur_balloon -> vmx:sched.mem.max */
if (def->mem.cur_balloon < max_balloon) {
virBufferAsprintf(&buffer, "sched.mem.max = \"%llu\"\n",
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index 1fced3f..131f6bb 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -3475,6 +3475,9 @@ prlsdkDoApplyConfig(virConnectPtr conn,
bool needBoot = true;
char *mask = NULL;
+ if (!def->mem.cur_balloon)
+ def->mem.cur_balloon = virDomainDefGetMemoryActual(def);
+
if (prlsdkCheckUnsupportedParams(sdkdom, def) < 0)
return -1;
diff --git a/src/xenconfig/xen_common.c b/src/xenconfig/xen_common.c
index 0890c73..ba6dffa 100644
--- a/src/xenconfig/xen_common.c
+++ b/src/xenconfig/xen_common.c
@@ -1306,6 +1306,9 @@ xenFormatMem(virConfPtr conf, virDomainDefPtr def)
VIR_DIV_UP(virDomainDefGetMemoryActual(def), 1024)) < 0)
return -1;
+ if (!def->mem.cur_balloon)
+ def->mem.cur_balloon = virDomainDefGetMemoryActual(def);
+
if (xenConfigSetInt(conf, "memory",
VIR_DIV_UP(def->mem.cur_balloon, 1024)) < 0)
return -1;
diff --git a/src/xenconfig/xen_sxpr.c b/src/xenconfig/xen_sxpr.c
index 7fc9c9d..63b79a0 100644
--- a/src/xenconfig/xen_sxpr.c
+++ b/src/xenconfig/xen_sxpr.c
@@ -2219,6 +2219,10 @@ xenFormatSxpr(virConnectPtr conn,
virBufferAddLit(&buf, "(vm ");
virBufferEscapeSexpr(&buf, "(name '%s')", def->name);
+
+ if (!def->mem.cur_balloon)
+ def->mem.cur_balloon = virDomainDefGetMemoryActual(def);
+
virBufferAsprintf(&buf, "(memory %llu)(maxmem %llu)",
VIR_DIV_UP(def->mem.cur_balloon, 1024),
VIR_DIV_UP(virDomainDefGetMemoryActual(def), 1024));
[libvirt] [PATCH 0/2] Xen: Support vif outgoing bandwidth QoS
by Jim Fehlig
Happy Holidays! :-)
This small series adds support for specifying vif outgoing rate limits
in Xen. The first patch adds support for converting rate limits between
xl/xm config and domXML, along with introducing a test for the conversion
logic. The second patch adds outgoing rate limiting to the libxl driver.
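For context on the conversion: xl's vif rate setting looks like
'rate=10Mb/s' (optionally with an @interval suffix), while the domain XML
expresses outbound bandwidth as an average in KiB/s. A toy converter,
assuming Mb here means megabits and ignoring the interval; the real parser
in xen_common.c is more thorough:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* "10Mb/s" -> KiB/s, suitable for <outbound average='...'/>. */
static long long xl_rate_to_kib_per_sec(const char *rate)
{
    char *end;
    long long mbits = strtoll(rate, &end, 10);

    if (strncmp(end, "Mb/s", 4) != 0)
        return -1;                         /* unit not handled by this toy */
    return mbits * 1000 * 1000 / 8 / 1024; /* bits -> bytes -> KiB */
}

int main(void)
{
    printf("%lld KiB/s\n", xl_rate_to_kib_per_sec("10Mb/s")); /* 1220 */
    return 0;
}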
Jim Fehlig (2):
xenconfig: support parsing and formatting vif bandwidth
libxl: support vif outgoing bandwidth QoS
src/libxl/libxl_conf.c | 39 ++++++++++++++++++
src/xenconfig/xen_common.c | 77 ++++++++++++++++++++++++++++++++++++
tests/xlconfigdata/test-vif-rate.cfg | 26 ++++++++++++
tests/xlconfigdata/test-vif-rate.xml | 57 ++++++++++++++++++++++++++
tests/xlconfigtest.c | 1 +
5 files changed, 200 insertions(+)
create mode 100644 tests/xlconfigdata/test-vif-rate.cfg
create mode 100644 tests/xlconfigdata/test-vif-rate.xml
--
2.1.4
[libvirt] [PATCH 0/4] virsh: Some events cleanups and improvements
by Jiri Denemark
Jiri Denemark (4):
virsh: Refactor event printing
virsh: Add timestamps to events
virsh: Pass ctl to virshCatchDisconnect
virsh: Interrupt *event --loop on disconnect
tools/virsh-domain.c | 310 +++++++++++++++++++++++++--------------------------
tools/virsh.c | 5 +-
2 files changed, 156 insertions(+), 159 deletions(-)
--
2.6.4
[libvirt] [PATCH] qemu: fix default migration_address for NBD server
by Michael Chapman
If no migration_address is configured, QEMU will listen on 0.0.0.0 or
[::]. Commit 674afcb09e3d33500cfbbcf870ebf92cb99ecfa3 moved the
handling of this default value into a new qemuMigrationPrepareIncoming
function; however, the address is also needed when starting the NBD server
from within qemuMigrationPrepareAny.
Without this commit, the nbd-server-start command fails if no explicit
migration_address is configured, since libvirt passes a null value as the
listen address:
{
"execute": "nbd-server-start", "arguments": {
"addr": {
"type": "inet",
"data": { "host": null, "port": "49153" }
}
},
"id": "libvirt-14"
}
The migration address only applies to direct migration. Unfortunately we
can't move this logic as far back as qemuMigrationPrepareDirect since
QEMU's IPv6 support is only known after it has been launched.
Signed-off-by: Michael Chapman <mike(a)very.puzzling.org>
---
src/qemu/qemu_migration.c | 79 +++++++++++++++++++++++------------------------
1 file changed, 38 insertions(+), 41 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 4519aef..3f83b57 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -3307,54 +3307,14 @@ qemuMigrationPrepareIncoming(virDomainObjPtr vm,
if (VIR_STRDUP(migrateFrom, "stdio") < 0)
goto cleanup;
} else {
- bool encloseAddress = false;
- bool hostIPv6Capable = false;
- bool qemuIPv6Capable = false;
- struct addrinfo *info = NULL;
- struct addrinfo hints = { .ai_flags = AI_ADDRCONFIG,
- .ai_socktype = SOCK_STREAM };
const char *incFormat;
- if (getaddrinfo("::", NULL, &hints, &info) == 0) {
- freeaddrinfo(info);
- hostIPv6Capable = true;
- }
- qemuIPv6Capable = virQEMUCapsGet(priv->qemuCaps,
- QEMU_CAPS_IPV6_MIGRATION);
-
- if (listenAddress) {
- if (virSocketAddrNumericFamily(listenAddress) == AF_INET6) {
- if (!qemuIPv6Capable) {
- virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
- _("qemu isn't capable of IPv6"));
- goto cleanup;
- }
- if (!hostIPv6Capable) {
- virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
- _("host isn't capable of IPv6"));
- goto cleanup;
- }
- /* IPv6 address must be escaped in brackets on the cmd line */
- encloseAddress = true;
- } else {
- /* listenAddress is a hostname or IPv4 */
- }
- } else if (qemuIPv6Capable && hostIPv6Capable) {
- /* Listen on :: instead of 0.0.0.0 if QEMU understands it
- * and there is at least one IPv6 address configured
- */
- listenAddress = "::";
- encloseAddress = true;
- } else {
- listenAddress = "0.0.0.0";
- }
-
/* QEMU will be started with
* -incoming protocol:[<IPv6 addr>]:port,
* -incoming protocol:<IPv4 addr>:port, or
* -incoming protocol:<hostname>:port
*/
- if (encloseAddress)
+ if (virSocketAddrNumericFamily(listenAddress) == AF_INET6)
incFormat = "%s:[%s]:%d";
else
incFormat = "%s:%s:%d";
@@ -3540,6 +3500,43 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver,
goto stopjob;
stopProcess = true;
+ if (!tunnel) {
+ bool hostIPv6Capable = false;
+ bool qemuIPv6Capable = false;
+ struct addrinfo *info = NULL;
+ struct addrinfo hints = { .ai_flags = AI_ADDRCONFIG,
+ .ai_socktype = SOCK_STREAM };
+
+ if (getaddrinfo("::", NULL, &hints, &info) == 0) {
+ freeaddrinfo(info);
+ hostIPv6Capable = true;
+ }
+ qemuIPv6Capable = virQEMUCapsGet(priv->qemuCaps,
+ QEMU_CAPS_IPV6_MIGRATION);
+
+ if (listenAddress) {
+ if (virSocketAddrNumericFamily(listenAddress) == AF_INET6) {
+ if (!qemuIPv6Capable) {
+ virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
+ _("qemu isn't capable of IPv6"));
+ goto stopjob;
+ }
+ if (!hostIPv6Capable) {
+ virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
+ _("host isn't capable of IPv6"));
+ goto stopjob;
+ }
+ }
+ } else if (qemuIPv6Capable && hostIPv6Capable) {
+ /* Listen on :: instead of 0.0.0.0 if QEMU understands it
+ * and there is at least one IPv6 address configured
+ */
+ listenAddress = "::";
+ } else {
+ listenAddress = "0.0.0.0";
+ }
+ }
+
if (!(incoming = qemuMigrationPrepareIncoming(vm, tunnel, protocol,
listenAddress, port,
dataFD[0])))
--
2.4.3
[libvirt] [PATCH] nestedhvm for libxl
by Alvin Starr
This patch adds a nestedhvm directive to the domain XML features to allow
libxl to create a nested HVM. It also adds mask_svm_npt, which masks the
svm/npt bits in the created HVM so that it cannot in turn run a nested HVM
of its own.
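On the libxl side, wiring such a flag through presumably reduces to setting
the corresponding defbool on the domain build info. A hypothetical sketch of
the libxl_conf.c hunk; the VIR_DOMAIN_FEATURE_NESTEDHVM enum value is made
up for illustration:

/* In libxlMakeDomBuildInfo(), for an HVM guest: turn on nested HVM
 * when the new XML feature was parsed as "on". */
if (hvm &&
    def->features[VIR_DOMAIN_FEATURE_NESTEDHVM] == VIR_TRISTATE_SWITCH_ON)
    libxl_defbool_set(&b_info->u.hvm.nested_hvm, true);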
Alvin Starr (1):
add nested HVM to libxl
docs/schemas/domaincommon.rng | 10 ++++++++++
src/conf/domain_conf.c | 6 ++++++
src/conf/domain_conf.h | 2 ++
src/libxl/libxl_conf.c | 35 +++++++++++++++++++++++++++++++++++
4 files changed, 53 insertions(+)
--
2.4.3
[libvirt] [PATCH v2 00/14] Use macros for more common virsh command options
by John Ferlan
v1: http://www.redhat.com/archives/libvir-list/2015-December/msg00731.html
Changes over v1:
1. Insert patch 1 to convert already pushed VSH_POOL into VIRSH_POOL
since that was the review comment from this patch series
2. Insert patch 2 to move the POOL_OPT_COMMON to virsh.h for later
patch reuse.
3. Use VIRSH_* instead of VSH_* for patches 1-8 (now 3-10)
4. Add usage of common domain for virsh-domain-monitor.c and
virsh-snapshot.c (patches 11-12)
5. Add common macros for "network" and "interface" (patches 13-14).
NOTE: I figure I'll let this percolate for a bit, as I assume there may be
varying opinions on it... Also, over the next couple of weeks many people
may not be paying close attention to the list.
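For anyone skimming: the common-option macros are plain vshCmdOptDef
initializers that each command's option table splices in. A representative
sketch of the shape (the exact fields and help text here are guesses, not
the patch contents):

/* In virsh.h: one canonical definition of the ubiquitous "domain"
 * option, reused by every command that takes a domain. */
#define VIRSH_COMMON_OPT_DOMAIN                    \
    {.name = "domain",                             \
     .type = VSH_OT_DATA,                          \
     .flags = VSH_OFLAG_REQ,                       \
     .help = N_("domain name, id or uuid")         \
    }

/* Usage in a command's option table: */
static const vshCmdOptDef opts_suspend[] = {
    VIRSH_COMMON_OPT_DOMAIN,
    {.name = NULL}
};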
John Ferlan (14):
virsh: Convert VSH_POOL_ macro to VIRSH_POOL_
virsh: Move VIRSH_POOL_OPT_COMMON to virsh.h
virsh: Create macro for common "domain" option
virsh: Create macro for common "persistent" option
virsh: Create macro for common "config" option
virsh: Create macro for common "live" option
virsh: Create macro for common "current" option
virsh: Create macro for common "file" option
virsh: Create macros for common "pool" options
virsh: Create macros for common "vol" options
virsh: Have domain-monitor use common "domain" option
virsh: have snapshot use common "domain" option
virsh: Create macro for common "network" option
virsh: Create macro for common "interface" option
po/POTFILES.in | 1 +
tools/virsh-domain-monitor.c | 77 +---
tools/virsh-domain.c | 911 +++++++++----------------------------------
tools/virsh-interface.c | 37 +-
tools/virsh-network.c | 61 +--
tools/virsh-pool.c | 71 ++--
tools/virsh-snapshot.c | 60 +--
tools/virsh-volume.c | 148 ++-----
tools/virsh.h | 17 +
9 files changed, 334 insertions(+), 1049 deletions(-)
--
2.5.0