[libvirt] [libvirt-php] libvirt_domain_create_xml allow passing flags
by Vasiliy Tolstov
libvirt_domain_create_xml lacks the ability to pass flags when creating a
domain; fix that.
Signed-off-by: Vasiliy Tolstov <v.tolstov(a)selfip.ru>
---
src/libvirt-php.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/src/libvirt-php.c b/src/libvirt-php.c
index f3b3f9f..39199da 100644
--- a/src/libvirt-php.c
+++ b/src/libvirt-php.c
@@ -1274,6 +1274,14 @@ PHP_MINIT_FUNCTION(libvirt)
REGISTER_LONG_CONSTANT("VIR_SNAPSHOT_REVERT_PAUSED", VIR_DOMAIN_SNAPSHOT_REVERT_PAUSED, CONST_CS | CONST_PERSISTENT);
REGISTER_LONG_CONSTANT("VIR_SNAPSHOT_REVERT_FORCE", VIR_DOMAIN_SNAPSHOT_REVERT_FORCE, CONST_CS | CONST_PERSISTENT);
+ /* Create flags */
+ REGISTER_LONG_CONSTANT("VIR_DOMAIN_NONE", VIR_DOMAIN_NONE, CONST_CS | CONST_PERSISTENT);
+ REGISTER_LONG_CONSTANT("VIR_DOMAIN_START_PAUSED", VIR_DOMAIN_START_PAUSED, CONST_CS | CONST_PERSISTENT);
+ REGISTER_LONG_CONSTANT("VIR_DOMAIN_START_AUTODESTROY", VIR_DOMAIN_START_AUTODESTROY, CONST_CS | CONST_PERSISTENT);
+ REGISTER_LONG_CONSTANT("VIR_DOMAIN_START_BYPASS_CACHE", VIR_DOMAIN_START_BYPASS_CACHE, CONST_CS | CONST_PERSISTENT);
+ REGISTER_LONG_CONSTANT("VIR_DOMAIN_START_FORCE_BOOT", VIR_DOMAIN_START_FORCE_BOOT, CONST_CS | CONST_PERSISTENT);
+ REGISTER_LONG_CONSTANT("VIR_DOMAIN_START_VALIDATE", VIR_DOMAIN_START_VALIDATE, CONST_CS | CONST_PERSISTENT);
+
/* Memory constants */
REGISTER_LONG_CONSTANT("VIR_MEMORY_VIRTUAL", 1, CONST_CS | CONST_PERSISTENT);
REGISTER_LONG_CONSTANT("VIR_MEMORY_PHYSICAL", 2, CONST_CS | CONST_PERSISTENT);
@@ -5911,10 +5919,11 @@ PHP_FUNCTION(libvirt_domain_create_xml)
virDomainPtr domain=NULL;
char *xml;
int xml_len;
+ long flags=0;
- GET_CONNECTION_FROM_ARGS("rs",&zconn,&xml,&xml_len);
+ GET_CONNECTION_FROM_ARGS("rs|l",&zconn,&xml,&xml_len,&flags);
- domain=virDomainCreateXML(conn->conn,xml,0);
+ domain=virDomainCreateXML(conn->conn,xml,flags);
DPRINTF("%s: virDomainCreateXML(%p, <xml>, 0) returned %p\n", PHPFUNC, conn->conn, domain);
if (domain==NULL) RETURN_FALSE;
--
2.6.4
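After this patch, the PHP binding simply forwards an optional flags bitmask to virDomainCreateXML(). A minimal C sketch of how such a mask is built; the SKETCH_ constants are local stand-ins mirroring libvirt's virDomainCreateFlags values, and build_create_flags is a hypothetical helper, not libvirt code:

```c
#include <assert.h>

/* Local stand-ins for libvirt's virDomainCreateFlags values; real code
 * should use the constants from <libvirt/libvirt-domain.h>. */
enum {
    SKETCH_VIR_DOMAIN_NONE               = 0,
    SKETCH_VIR_DOMAIN_START_PAUSED       = 1 << 0,
    SKETCH_VIR_DOMAIN_START_AUTODESTROY  = 1 << 1,
    SKETCH_VIR_DOMAIN_START_BYPASS_CACHE = 1 << 2,
    SKETCH_VIR_DOMAIN_START_FORCE_BOOT   = 1 << 3,
    SKETCH_VIR_DOMAIN_START_VALIDATE     = 1 << 4,
};

/* Combine start flags the way a caller of libvirt_domain_create_xml()
 * would after this patch: bitwise OR of the desired constants. */
unsigned int build_create_flags(int paused, int autodestroy)
{
    unsigned int flags = SKETCH_VIR_DOMAIN_NONE;
    if (paused)
        flags |= SKETCH_VIR_DOMAIN_START_PAUSED;
    if (autodestroy)
        flags |= SKETCH_VIR_DOMAIN_START_AUTODESTROY;
    return flags;
}
```

The resulting value is what ends up in the `flags` argument of virDomainCreateXML(); passing no flags keeps the old behaviour (0).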
8 years, 10 months
[libvirt] [PATCH v2 0/2] Couple of RO/RW connection fixes
by Michal Privoznik
diff to v1:
- After some discussion with Daniel, allow virDomainInterfaceAddresses on RO
only if it does not end up talking to guest agent.
- Also fix virDomainGetTime
Michal Privoznik (2):
virDomainInterfaceAddresses: Allow API on RO connection too
virDomainGetTime: Deny on RO connections
src/libvirt-domain.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--
2.4.10
[libvirt] [PATCH] virDomainInterfaceAddresses: Allow API on RO connection too
by Michal Privoznik
This API does not change domain state. It's merely like
virDomainGetXMLDesc() - and we don't reject RO connections there.
There's no reason to reject them here.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
src/libvirt-domain.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 677a9ad..e5af933 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -11546,7 +11546,6 @@ virDomainInterfaceAddresses(virDomainPtr dom,
*ifaces = NULL;
virCheckDomainReturn(dom, -1);
virCheckNonNullArgGoto(ifaces, error);
- virCheckReadOnlyGoto(dom->conn->flags, error);
if (dom->conn->driver->domainInterfaceAddresses) {
int ret;
--
2.4.10
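The line being removed is libvirt's standard read-only gate: state-changing APIs bail out early when the connection was opened read-only, while pure query APIs do not check it. A simplified sketch of the pattern; check_allowed is a hypothetical helper and SKETCH_CONN_RO a local stand-in for the connection's RO flag (the real code uses virCheckReadOnlyGoto):

```c
#include <assert.h>

#define SKETCH_CONN_RO (1u << 0)  /* stand-in for the read-only connection flag */

/* State-changing APIs reject read-only connections; pure query APIs
 * (like virDomainInterfaceAddresses after this patch) do not. In
 * libvirt the rejection is virReportError() + goto error. */
int check_allowed(unsigned int conn_flags, int modifies_state)
{
    if (modifies_state && (conn_flags & SKETCH_CONN_RO))
        return -1;
    return 0;
}
```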
[libvirt] [PATCH 00/21] Support NBD for tunnelled migration
by Pavel Boldin
The provided patchset implements NBD disk migration over a tunnelled
connection provided by libvirt.
The migration source instructs QEMU to NBD mirror drives into the provided
UNIX socket. These connections and all the data are then tunnelled to the
destination using newly introduced RPC call. The migration destination
implements a driver method that connects the tunnelled stream to the QEMU's
NBD destination.
The detailed scheme is the following:
PREPARE
1. The migration destination starts QEMU's NBD server listening on a UNIX
socket and tells it to export the listed disks via the `nbd-server-add`
monitor command. This is done by code added to qemuMigrationStartNBDServer
that calls the introduced qemuMonitorNBDServerStartUnix monitor function.
PERFORM
2. The migration source creates a UNIX socket that is later used as the NBD
destination in the `drive-mirror` monitor command.
This is implemented as a call to virNetSocketNewListenUnix from
doTunnelMigrate.
3. The source starts an IOThread that polls on the UNIX socket, accepting
every incoming QEMU connection.
This is done by adding a new pollfd in the poll(2) call in
qemuMigrationIOFunc that calls introduced qemuNBDTunnelAcceptAndPipe
function.
4. The qemuNBDTunnelAcceptAndPipe function accepts the connection and creates
two virStreams. One, `local`, is later associated with the just-accepted
connection using virFDStreamOpen. The second, `remote`, is later
tunnelled to the remote destination stream.
The `local` stream is converted to a virFDStreamDrv stream using the
virFDStreamOpen call on the fd returned by accept(2).
The `remote` stream is associated with a stream on the destination in
a way similar to that used by the PrepareTunnel3* functions: the
virDomainMigrateOpenTunnel function is called on the destination
connection object. virDomainMigrateOpenTunnel calls the remote driver's
handler remoteDomainMigrateOpenTunnel, which makes a DOMAIN_MIGRATE_OPEN_TUNNEL
call to the destination host. The code in remoteDomainMigrateOpenTunnel
ties passed virStream object to a virStream on the destination host via
remoteStreamDrv driver. The remote driver handles stream's IO by tunnelling
data through the RPC connection.
Finally, qemuNBDTunnelAcceptAndPipe assigns both streams the same event
callback, qemuMigrationPipeEvent, whose job is to track the statuses of the
streams and do IO whenever necessary.
5. Source starts the drive mirroring using the qemuMigrationDriveMirror func.
The function instructs QEMU to mirror drives to the UNIX socket that thread
listens on.
Since the drive mirroring must reach the 'synchronized' state, where
writes go to both destinations simultaneously, before VM migration can
continue, the thread serving the connections must be started earlier.
6. When the connection to a UNIX socket on the migration source is made
the DOMAIN_MIGRATE_OPEN_TUNNEL proc is called on the migration destination.
The handler of this code calls virDomainMigrateOpenTunnel which calls
qemuMigrationOpenNBDTunnel by the means of qemuDomainMigrateOpenTunnel.
The qemuMigrationOpenNBDTunnel connects the stream linked to a source's
stream to the NBD's UNIX socket on the migration destination side.
7. The rest of the disk migration occurs semimagically: virStream* APIs tunnel
data in both directions. This is done by qemuMigrationPipeEvent event
callback set for both streams.
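The data path in steps 3-7 boils down to forwarding bytes between two file descriptors once poll() reports readiness. A much-simplified sketch of one forwarding step; pipe_once is a hypothetical helper, whereas the real code drives virStream objects from the qemuMigrationPipeEvent callback:

```c
#include <assert.h>
#include <poll.h>
#include <string.h>
#include <unistd.h>

/* Forward one burst of data from the QEMU-facing fd to the
 * tunnel-facing fd. Returns bytes forwarded, 0 on EOF, -1 on
 * error/timeout. */
ssize_t pipe_once(int from_fd, int to_fd)
{
    char buf[4096];
    struct pollfd pfd = { .fd = from_fd, .events = POLLIN };

    if (poll(&pfd, 1, 1000) <= 0)   /* wait up to 1s for data */
        return -1;

    ssize_t n = read(from_fd, buf, sizeof(buf));
    if (n <= 0)
        return n;

    ssize_t off = 0;
    while (off < n) {               /* short writes are possible */
        ssize_t w = write(to_fd, buf + off, n - off);
        if (w < 0)
            return -1;
        off += w;
    }
    return n;
}
```

In the patchset this forwarding runs in both directions, so NBD traffic generated by `drive-mirror` on the source transparently reaches the NBD server on the destination.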
The order of the patches is roughly the following:
* First, the RPC machinery and remote driver's virDrvDomainMigrateOpenTunnel
implementation are added.
* Then, the source side of the protocol is implemented: code listening
on a UNIX socket is added, DriveMirror is enhanced to instruct QEMU to
`drive-mirror` there, and the IOThread driving the tunnelling is started sooner.
* After that, the destination side of the protocol is implemented:
qemuMonitorNBDServerStartUnix is added and qemuMigrationStartNBDServer
is enhanced to call it. qemuDomainMigrateOpenTunnel is implemented
along with qemuMigrationOpenNBDTunnel, which does the real job.
* Finally, the code blocking NBD migration for tunnelled migration is
removed.
Pavel Boldin (21):
rpc: add DOMAIN_MIGRATE_OPEN_TUNNEL proc
driver: add virDrvDomainMigrateOpenTunnel
remote_driver: introduce virRemoteClientNew
remote_driver: add remoteDomainMigrateOpenTunnel
domain: add virDomainMigrateOpenTunnel
domain: add virDomainMigrateTunnelFlags
remote: impl remoteDispatchDomainMigrateOpenTunnel
qemu: migration: src: add nbd tunnel socket data
qemu: migration: src: nbdtunnel unix socket
qemu: migration: src: qemu `drive-mirror` to UNIX
qemu: migration: src: qemuSock for running thread
qemu: migration: src: add NBD unixSock to iothread
qemu: migration: src: qemuNBDTunnelAcceptAndPipe
qemu: migration: src: stream piping
qemu: monitor: add qemuMonitorNBDServerStartUnix
qemu: migration: dest: nbd-server to UNIX sock
qemu: migration: dest: qemuMigrationOpenTunnel
qemu: driver: add qemuDomainMigrateOpenTunnel
qemu: migration: dest: qemuMigrationOpenNBDTunnel
qemu: migration: allow NBD tunneling migration
apparmor: fix tunnelmigrate permissions
daemon/remote.c | 50 ++++
docs/apibuild.py | 1 +
docs/hvsupport.pl | 1 +
include/libvirt/libvirt-domain.h | 3 +
src/driver-hypervisor.h | 8 +
src/libvirt-domain.c | 43 ++++
src/libvirt_internal.h | 6 +
src/libvirt_private.syms | 1 +
src/qemu/qemu_driver.c | 24 ++
src/qemu/qemu_migration.c | 495 +++++++++++++++++++++++++++++++++------
src/qemu/qemu_migration.h | 6 +
src/qemu/qemu_monitor.c | 12 +
src/qemu/qemu_monitor.h | 2 +
src/qemu/qemu_monitor_json.c | 35 +++
src/qemu/qemu_monitor_json.h | 2 +
src/remote/remote_driver.c | 91 +++++--
src/remote/remote_protocol.x | 19 +-
src/remote_protocol-structs | 8 +
src/security/virt-aa-helper.c | 4 +-
19 files changed, 719 insertions(+), 92 deletions(-)
--
1.9.1
[libvirt] [PATCH 0/3] virsh: Implement and document --timestamp everywhere
by Andrea Bolognani
Jirka added a new --timestamp option to 'virsh event'[1], so I went
ahead and updated the 'net-event' and 'qemu-monitor-event' virsh
commands to support it as well.
I've also updated the man page so that the new option is properly
documented.
Cheers.
[1] https://www.redhat.com/archives/libvir-list/2015-December/msg00806.html
Andrea Bolognani (3):
virsh: Add timestamps to QEMU monitor events
virsh: Add timestamps to network events
virsh: Document the --timestamp option
tools/virsh-domain.c | 22 ++++++++++++++++++++--
tools/virsh-network.c | 24 ++++++++++++++++++++++--
tools/virsh.pod | 15 +++++++++++++--
3 files changed, 55 insertions(+), 6 deletions(-)
--
2.5.0
[libvirt] [PATCH 0/2] Fix crashing libvirt after my commit
by Martin Kletzander
The commit was clearing the socket path even when parsing status XML.
This series should fix that.
Martin Kletzander (2):
Provide parse flags to PostParse functions
Don't clear libvirt-internal paths when parsing status XML
src/bhyve/bhyve_domain.c | 2 ++
src/conf/domain_conf.c | 15 ++++++++++-----
src/conf/domain_conf.h | 2 ++
src/libxl/libxl_domain.c | 2 ++
src/lxc/lxc_domain.c | 2 ++
src/openvz/openvz_driver.c | 2 ++
src/phyp/phyp_driver.c | 2 ++
src/qemu/qemu_domain.c | 7 +++++--
src/uml/uml_driver.c | 2 ++
src/vbox/vbox_common.c | 2 ++
src/vmware/vmware_driver.c | 2 ++
src/vmx/vmx.c | 2 ++
src/vz/vz_driver.c | 2 ++
src/xen/xen_driver.c | 2 ++
src/xenapi/xenapi_driver.c | 2 ++
15 files changed, 41 insertions(+), 7 deletions(-)
--
2.7.0
[libvirt] Plans for next release
by Daniel Veillard
So we should push 1.3.1 by the middle of this month (with 1.3.2 for
the end of February, as decided when we planned 1.3.0), which means we should
enter the freeze soon. I am suggesting starting the freeze on Tuesday, with
rc2 on Thursday, and having the release by next weekend.
If anybody has an issue with this, raise your voice before Tuesday :-)
Daniel
--
Daniel Veillard | Open Source and Standards, Red Hat
veillard(a)redhat.com | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | virtualization library http://libvirt.org/
[libvirt] [PATCH 0/4] qemu: Support disable_s3/s4 for -M q35
by Cole Robinson
q35/ICH9 uses a different qemu option for disabling s3/s4 support.
Probe for it and wire it up.
Cole Robinson (4):
qemu: capabilities: s/Pixx/Piix/g
qemu: caps: Rename CAPS_DISABLE_S[34] to CAPS_PIIX_DISABLE_S[34]
qemu: caps: check for q35/ICH9 disable S3/S4
qemu: command: wire up usage of q35/ich9 disable s3/s4
src/qemu/qemu_capabilities.c | 19 +++--
src/qemu/qemu_capabilities.h | 6 +-
src/qemu/qemu_command.c | 32 ++++++--
tests/qemucapabilitiesdata/caps_1.2.2-1.replies | 23 ++++--
tests/qemucapabilitiesdata/caps_1.3.1-1.replies | 22 ++++--
tests/qemucapabilitiesdata/caps_1.4.2-1.replies | 23 ++++--
tests/qemucapabilitiesdata/caps_1.5.3-1.replies | 22 ++++--
tests/qemucapabilitiesdata/caps_1.6.0-1.replies | 22 ++++--
tests/qemucapabilitiesdata/caps_1.6.50-1.replies | 22 ++++--
tests/qemucapabilitiesdata/caps_2.1.1-1.replies | 22 ++++--
tests/qemucapabilitiesdata/caps_2.4.0-1.caps | 2 +
tests/qemucapabilitiesdata/caps_2.4.0-1.replies | 92 ++++++++++++++++++++--
tests/qemucapabilitiesdata/caps_2.5.0-1.caps | 2 +
tests/qemucapabilitiesdata/caps_2.5.0-1.replies | 92 ++++++++++++++++++++--
tests/qemucapabilitiesdata/caps_2.6.0-1.caps | 2 +
tests/qemucapabilitiesdata/caps_2.6.0-1.replies | 92 ++++++++++++++++++++--
.../qemuxml2argv-q35-pm-disable-fallback.args | 23 ++++++
.../qemuxml2argv-q35-pm-disable-fallback.xml | 18 +++++
.../qemuxml2argv-q35-pm-disable.args | 23 ++++++
.../qemuxml2argv-q35-pm-disable.xml | 18 +++++
tests/qemuxml2argvtest.c | 17 +++-
21 files changed, 507 insertions(+), 87 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-pm-disable-fallback.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-pm-disable-fallback.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-pm-disable.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-pm-disable.xml
--
2.5.0
[libvirt] [PATCH v2] libvirtd: Increase NL buffer size for lots of interfaces
by Leno Hou
1. When switching CPUs offline/online in a system with more than 128 CPUs
2. When using virsh to destroy a domain in a system with many interfaces
In both of the above cases, nl_recv returned the error: No buffer space available.
This patch sets the socket buffer size to 128K and turns on message peeking
for nl_recv, as this solves the problem completely and permanently.
Signed-off-by: Leno Hou <houqy(a)linux.vnet.ibm.com>
Cc: Wenyi Gao <wenyi(a)linux.vnet.ibm.com>
CC: Laine Stump <laine(a)laine.org>
CC: Michal Privoznik <mprivozn(a)redhat.com>
---
src/util/virnetlink.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/src/util/virnetlink.c b/src/util/virnetlink.c
index 679b48e..2f2691c 100644
--- a/src/util/virnetlink.c
+++ b/src/util/virnetlink.c
@@ -65,10 +65,12 @@ struct virNetlinkEventHandle {
# ifdef HAVE_LIBNL1
# define virNetlinkAlloc nl_handle_alloc
+# define virSocketSetBufferSize nl_set_buffer_size
# define virNetlinkFree nl_handle_destroy
typedef struct nl_handle virNetlinkHandle;
# else
# define virNetlinkAlloc nl_socket_alloc
+# define virSocketSetBufferSize nl_socket_set_buffer_size
# define virNetlinkFree nl_socket_free
typedef struct nl_sock virNetlinkHandle;
# endif
@@ -696,6 +698,14 @@ virNetlinkEventServiceStart(unsigned int protocol, unsigned int groups)
goto error_server;
}
+ if (virSocketSetBufferSize(srv->netlinknh, 131072, 0) < 0) {
+ virReportSystemError(errno,
+ "%s",_("cannot set netlink socket buffer size to 128k"));
+ goto error_server;
+ }
+
+ nl_socket_enable_msg_peek(srv->netlinknh);
+
if ((srv->eventwatch = virEventAddHandle(fd,
VIR_EVENT_HANDLE_READABLE,
virNetlinkEventCallback,
--
1.9.1
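For comparison, the generic BSD-socket equivalent of what nl_socket_set_buffer_size() does for a libnl handle is setsockopt(SO_RCVBUF); enlarging the kernel receive buffer is what keeps nl_recv() from failing with "No buffer space available" under event bursts. A sketch, where set_rcvbuf is a hypothetical helper rather than a libvirt function:

```c
#include <assert.h>
#include <sys/socket.h>

/* Enlarge a socket's kernel receive buffer. 131072 bytes is the 128K
 * the patch aims for; the kernel may round the value up (Linux doubles
 * it internally for bookkeeping). */
int set_rcvbuf(int fd, int bytes)
{
    return setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
}
```

Message peeking (nl_socket_enable_msg_peek) is the complementary half: libnl probes the pending message's size with MSG_PEEK before receiving, so a single oversized message cannot be truncated even if it exceeds the read buffer.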
[libvirt] [PATCH v1] libvirtd: Increase NL buffer size for lots of interfaces
by Leno Hou
1. When switching CPUs offline/online in a system with more than 128 CPUs
2. When using virsh to destroy a domain in a system with many interfaces
In both of the above cases, nl_recv returned the error: No buffer space available.
This patch sets the socket buffer size to 128K and turns on message peeking for
nl_recv, as this solves the problem completely and permanently.
LTC-Bugzilla: #133359 #125768
Signed-off-by: Leno Hou <houqy(a)linux.vnet.ibm.com>
Cc: Wenyi Gao <wenyi(a)linux.vnet.ibm.com>
---
src/util/virnetlink.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/src/util/virnetlink.c b/src/util/virnetlink.c
index 679b48e..c8c9fe0 100644
--- a/src/util/virnetlink.c
+++ b/src/util/virnetlink.c
@@ -696,6 +696,14 @@ virNetlinkEventServiceStart(unsigned int protocol, unsigned int groups)
goto error_server;
}
+ if (nl_socket_set_buffer_size(srv->netlinknh, 131072, 0) < 0) {
+ virReportSystemError(errno,
+ "%s",_("cannot set netlink socket buffer size to 128k"));
+ goto error_server;
+ }
+
+ nl_socket_enable_msg_peek(srv->netlinknh);
+
if ((srv->eventwatch = virEventAddHandle(fd,
VIR_EVENT_HANDLE_READABLE,
virNetlinkEventCallback,
--
1.9.1