[libvirt] [PATCH v3 0/2] migration: add option to set target nbd server port
by Nikolay Shirokovskiy
The current libvirt + qemu pair lacks secure migration for VMs with
non-shared disks. The only way to migrate securely natively is to use
tunneled mode with some kind of secure destination URI, but tunneled
mode does not support non-shared disks.
The other way to make migration secure is to set up a tunnel by
external means. This is possible for shared-disk migration through a
proper combination of destination URI, migration URI and the
VIR_MIGRATE_PARAM_LISTEN_ADDRESS migration param. But again, this is
not possible for non-shared-disk migration, as we have no way to
control the target NBD server port. Fixing this is much simpler than
supporting non-shared disks in tunneled mode.
So this patch series adds an option to set the target NBD port.
Eventually all qemu migration connections will be secured, AFAIK, but
even then this patch could be convenient if one wants all migration
traffic to go over a single connection.
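For illustration, here is how the new knob might be combined with the
existing URI controls so an external tunnel can forward a fixed set of
ports (the option name --disks-port and the port numbers are
assumptions based on this series' virsh changes, not confirmed syntax):

```shell
# Hypothetical invocation once this series is applied: pin both the
# migration port and the NBD disk-copy port so an externally managed
# secure tunnel can forward a known, fixed set of ports.
virsh migrate --live --copy-storage-all \
      --migrateuri tcp://127.0.0.1:49152 \
      --listen-address 127.0.0.1 \
      --disks-port 49153 \
      myguest qemu+ssh://desthost/system
```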
Differences from v2:
====================
1. the patch is split into API and implementation parts
2. the code that starts the NBD server is reorganized
3. added a check for setting the disks port in the tunneled case
4. misc small changes per Jiri's comments
Nikolay Shirokovskiy (2):
migration: add target peer disks port
qemu: implement setting target disks migration port
include/libvirt/libvirt-domain.h | 10 ++++
src/qemu/qemu_driver.c | 25 ++++++---
src/qemu/qemu_migration.c | 108 +++++++++++++++++++++++++++++----------
src/qemu/qemu_migration.h | 3 ++
tools/virsh-domain.c | 12 +++++
tools/virsh.pod | 5 +-
6 files changed, 127 insertions(+), 36 deletions(-)
--
1.8.3.1
8 years, 9 months
[libvirt] [PATCH v2 0/2] persistent live migration with specified XML
by Dmitry Andreev
v2: reimplemented with new migration param
Libvirt doesn't allow specifying the destination's persistent domain
configuration. The VIR_MIGRATE_PARAM_DEST_XML migration param is used
for the active configuration, while the persistent configuration is
taken from the source domain. The problem is mentioned in this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=835300
This patch-set introduces a new migration param,
VIR_MIGRATE_PARAM_DEST_PERSIST_XML, and implements its support in the
qemu driver.
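As a sketch of the intended usage (the virsh spelling shown is an
assumption; this series only adds the migration param itself):

```shell
# Hypothetical: migrate with separate live and persistent definitions.
# VIR_MIGRATE_PARAM_DEST_XML adjusts the running configuration; the new
# VIR_MIGRATE_PARAM_DEST_PERSIST_XML would control what gets defined on
# the destination when a persistent migration is requested.
virsh migrate --live --persistent \
      --xml live-config.xml \
      --persistent-xml persistent-config.xml \
      myguest qemu+ssh://desthost/system
```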
Dmitry Andreev (2):
qemuMigrationCookieAddPersistent: change argument type
qemu: migration: new migration param for persistent destination XML
include/libvirt/libvirt-domain.h | 15 ++++++++++
src/qemu/qemu_driver.c | 10 +++++--
src/qemu/qemu_migration.c | 62 ++++++++++++++++++++++++++--------------
src/qemu/qemu_migration.h | 2 ++
4 files changed, 65 insertions(+), 24 deletions(-)
--
1.8.3.1
[libvirt] [PATCH 0/2] nodedev: Expose PCI header type information
by Martin Kletzander
I can squash those two patches together if it's desirable.
Martin Kletzander (2):
nodedev: Indent PCI express for future fix
nodedev: Expose PCI header type
docs/schemas/nodedev.rng | 17 ++++++++++
src/conf/node_device_conf.c | 37 ++++++++++++++++++++
src/conf/node_device_conf.h | 2 ++
src/libvirt_private.syms | 3 ++
src/node_device/node_device_udev.c | 39 +++++++++++++---------
src/util/virpci.c | 38 +++++++++++++++++++++
src/util/virpci.h | 12 +++++++
.../pci_0000_00_02_0_header_type.xml | 16 +++++++++
.../pci_0000_00_1c_0_header_type.xml | 22 ++++++++++++
tests/nodedevxml2xmltest.c | 2 ++
10 files changed, 172 insertions(+), 16 deletions(-)
create mode 100644 tests/nodedevschemadata/pci_0000_00_02_0_header_type.xml
create mode 100644 tests/nodedevschemadata/pci_0000_00_1c_0_header_type.xml
--
2.7.3
[libvirt] [PATCH 0/5] virlog: Refactor parsing log outputs
by Erik Skultety
Erik Skultety (5):
virlog: Change virLogDestination to virLogDestinationType
virlog: Introduce Type{To,From}String for virLogDestination
virlog: Refactor virLogParseOutputs
tests: Slightly tweak virlogtest
tests: Add a new test for logging outputs parser
po/POTFILES.in | 1 +
src/util/virlog.c | 197 +++++++++++++++++++++++++++--------------------------
src/util/virlog.h | 10 ++-
tests/virlogtest.c | 76 +++++++++++++++++----
4 files changed, 168 insertions(+), 116 deletions(-)
--
2.4.3
[libvirt] [libvirt-perl][PATCH] Add VIR_DOMAIN_EVENT_DEFINED_FROM_SNAPSHOT constant
by Michal Privoznik
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
Changes | 1 +
Virt.xs | 1 +
lib/Sys/Virt/Domain.pm | 4 ++++
3 files changed, 6 insertions(+)
diff --git a/Changes b/Changes
index aa71a1e..8f2cba6 100644
--- a/Changes
+++ b/Changes
@@ -8,6 +8,7 @@ Revision history for perl module Sys::Virt
constants
- Add VIR_DOMAIN_EVENT_ID_JOB_COMPLETED constant and callback
- Add VIR_ERR_NO_SERVER constant
+ - Add VIR_DOMAIN_EVENT_DEFINED_FROM_SNAPSHOT constant
1.3.2 2016-03-01
diff --git a/Virt.xs b/Virt.xs
index 9cb80fa..2148eaf 100644
--- a/Virt.xs
+++ b/Virt.xs
@@ -7645,6 +7645,7 @@ BOOT:
REGISTER_CONSTANT(VIR_DOMAIN_EVENT_DEFINED_ADDED, EVENT_DEFINED_ADDED);
REGISTER_CONSTANT(VIR_DOMAIN_EVENT_DEFINED_UPDATED, EVENT_DEFINED_UPDATED);
REGISTER_CONSTANT(VIR_DOMAIN_EVENT_DEFINED_RENAMED, EVENT_DEFINED_RENAMED);
+ REGISTER_CONSTANT(VIR_DOMAIN_EVENT_DEFINED_FROM_SNAPSHOT, EVENT_DEFINED_FROM_SNAPSHOT);
REGISTER_CONSTANT(VIR_DOMAIN_EVENT_UNDEFINED_REMOVED, EVENT_UNDEFINED_REMOVED);
REGISTER_CONSTANT(VIR_DOMAIN_EVENT_UNDEFINED_RENAMED, EVENT_UNDEFINED_RENAMED);
diff --git a/lib/Sys/Virt/Domain.pm b/lib/Sys/Virt/Domain.pm
index d0d79b9..3e9e7ba 100644
--- a/lib/Sys/Virt/Domain.pm
+++ b/lib/Sys/Virt/Domain.pm
@@ -2616,6 +2616,10 @@ The defined configuration is an update to an existing configuration
The defined configuration is a rename of an existing configuration
+=item Sys::Virt::Domain::EVENT_DEFINED_FROM_SNAPSHOT
+
+The defined configuration was restored from a snapshot
+
=back
=item Sys::Virt::Domain::EVENT_RESUMED
--
2.4.10
[libvirt] [Issue]: Regarding client socket getting closed from the server once the lxc container is started
by rammohan madhusudan
Hi Folks,
Using the libvirt Python bindings we are creating an LXC container.
Here is the problem that we see sometimes (say 20% of the time) when
we create a new container:
1. The container gets created and also starts. However, we are not
able to enter the namespace of the container; it throws an error that
initPid is not available. Using the netstat command, we see that the
socket connection is closed.
2. To get around this problem we have to stop and start the container
again. We then see that the socket connection (under
/var/run/libvirt/*) is established and we are able to enter the
namespace.
We enabled the libvirtd debug logs to debug this issue.
For the *success* case we see that a new client connection gets
created and is able to handle incoming async events:
2016-03-12 08:18:55.748+0000: 1247: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed54005460 classname=virLXCMonitor
2016-03-12 08:18:55.748+0000: 1247: debug : virNetSocketNew:159 : localAddr=0x7fed7cd1d170 remoteAddr=0x7fed7cd1d200 fd=28 errfd=-1 pid=0
2016-03-12 08:18:55.749+0000: 1247: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed54009040 classname=virNetSocket
2016-03-12 08:18:55.749+0000: 1247: info : virNetSocketNew:209 : RPC_SOCKET_NEW: sock=0x7fed54009040 fd=28 errfd=-1 pid=0 localAddr=127.0.0.1;0, remoteAddr=127.0.0.1;0
2016-03-12 08:18:55.749+0000: 1247: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed54009d10 classname=virNetClient
2016-03-12 08:18:55.749+0000: 1247: info : virNetClientNew:327 : RPC_CLIENT_NEW: client=0x7fed54009d10 sock=0x7fed54009040
2016-03-12 08:18:55.749+0000: 1247: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed54009d10
2016-03-12 08:18:55.749+0000: 1247: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed54009040
2016-03-12 08:18:55.750+0000: 1247: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed540009a0 classname=virNetClientProgram
2016-03-12 08:18:55.750+0000: 1247: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed540009a0
2016-03-12 08:18:55.750+0000: 1247: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed54005460
2016-03-12 08:18:55.750+0000: 1247: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7fed5c168eb0
2016-03-12 08:18:55.750+0000: 1247: debug : virLXCProcessCleanInterfaces:475 : Cleared net names: eth0
2016-03-12 08:18:55.750+0000: 1247: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7fed5c168eb0
2016-03-12 08:18:55.750+0000: 1247: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7fed5c169600
2016-03-12 08:18:55.755+0000: 1244: debug : virNetClientIncomingEvent:1808 : client=0x7fed54009d10 wantclose=0
2016-03-12 08:18:55.755+0000: 1244: debug : virNetClientIncomingEvent:1816 : Event fired 0x7fed54009040 1
2016-03-12 08:18:55.755+0000: 1244: debug : virNetMessageDecodeLength:151 : Got length, now need 36 total (32 more)
2016-03-12 08:18:55.756+0000: 1244: info : virNetClientCallDispatch:1116 : RPC_CLIENT_MSG_RX: client=0x7fed54009d10 len=36 prog=305402420 vers=1 proc=2 type=2 status=0 serial=1
2016-03-12 08:18:55.756+0000: 1244: debug : virKeepAliveCheckMessage:377 : ka=(nil), client=0x7fed81fc5ed4, msg=0x7fed54009d78
2016-03-12 08:18:55.756+0000: 1244: debug : virNetClientProgramDispatch:220 : prog=305402420 ver=1 type=2 status=0 serial=1 proc=2
2016-03-12 08:18:55.756+0000: 1244: debug : virLXCMonitorHandleEventInit:109 : Event init 1420
For the *failure* case, we see that the client socket connection is
initiated and gets closed immediately after receiving an incoming
event. In this case, I don't see an object for virNetClientProgram
being created.
The incoming event arrives and, since the client is unable to find
client->prog, it bails out and closes the connection.
Snapshot of the code:
static int virNetClientCallDispatchMessage(virNetClientPtr client)
{
    size_t i;
    virNetClientProgramPtr prog = NULL;

    for (i = 0; i < client->nprograms; i++) {
        if (virNetClientProgramMatches(client->programs[i],
                                       &client->msg)) {
            prog = client->programs[i];
            break;
        }
    }
    if (!prog) {
        VIR_DEBUG("No program found for event with prog=%d vers=%d",
                  client->msg.header.prog, client->msg.header.vers);
        return -1;
    }
2016-03-12 08:19:15.935+0000: 1246: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed5c168eb0
2016-03-12 08:19:15.935+0000: 1246: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed82bd7bc0
2016-03-12 08:19:15.935+0000: 1246: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed82bd8120 classname=virLXCMonitor
2016-03-12 08:19:15.935+0000: 1246: debug : virNetSocketNew:159 : localAddr=0x7fed7d51e170 remoteAddr=0x7fed7d51e200 fd=31 errfd=-1 pid=0
2016-03-12 08:19:15.936+0000: 1246: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed82bd8660 classname=virNetSocket
2016-03-12 08:19:15.936+0000: 1246: info : virNetSocketNew:209 : RPC_SOCKET_NEW: sock=0x7fed82bd8660 fd=31 errfd=-1 pid=0 localAddr=127.0.0.1;0, remoteAddr=127.0.0.1;0
2016-03-12 08:19:15.936+0000: 1246: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed82bd8ca0 classname=virNetClient
2016-03-12 08:19:15.936+0000: 1246: info : virNetClientNew:327 : RPC_CLIENT_NEW: client=0x7fed82bd8ca0 sock=0x7fed82bd8660
2016-03-12 08:19:15.936+0000: 1246: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed82bd8ca0
2016-03-12 08:19:15.936+0000: 1246: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed82bd8660
2016-03-12 08:19:15.942+0000: 1244: debug : virNetClientIncomingEvent:1808 : client=0x7fed82bd8ca0 wantclose=0
2016-03-12 08:19:15.942+0000: 1244: debug : virNetClientIncomingEvent:1816 : Event fired 0x7fed82bd8660 1
2016-03-12 08:19:15.942+0000: 1244: debug : virNetMessageDecodeLength:151 : Got length, now need 36 total (32 more)
2016-03-12 08:19:15.942+0000: 1244: info : virNetClientCallDispatch:1116 : RPC_CLIENT_MSG_RX: client=0x7fed82bd8ca0 len=36 prog=305402420 vers=1 proc=2 type=2 status=0 serial=1
2016-03-12 08:19:15.942+0000: 1244: debug : virKeepAliveCheckMessage:377 : ka=(nil), client=0x7fed81fc5ed4, msg=0x7fed82bd8d08
2016-03-12 08:19:15.942+0000: 1244: debug : virNetClientCallDispatchMessage:1008 : No program found for event with prog=305402420 vers=1
2016-03-12 08:19:15.942+0000: 1244: debug : virNetMessageClear:57 : msg=0x7fed82bd8d08 nfds=0
2016-03-12 08:19:15.942+0000: 1244: debug : virNetClientMarkClose:632 : client=0x7fed82bd8ca0, reason=0
2016-03-12 08:19:15.942+0000: 1244: debug : virNetClientCloseLocked:648 : client=0x7fed82bd8ca0, sock=0x7fed82bd8660, reason=0
Here is the snapshot of the code:
virLXCMonitorPtr virLXCMonitorNew(virDomainObjPtr vm,
                                  const char *socketdir,
                                  virLXCMonitorCallbacksPtr cb)
{
    virLXCMonitorPtr mon;
    char *sockpath = NULL;

    if (virLXCMonitorInitialize() < 0)
        return NULL;

    if (!(mon = virObjectLockableNew(virLXCMonitorClass)))
        return NULL;

    if (virAsprintf(&sockpath, "%s/%s.sock",
                    socketdir, vm->def->name) < 0)
        goto error;

    if (!(mon->client = virNetClientNewUNIX(sockpath, false, NULL)))
        goto error;

    if (virNetClientRegisterAsyncIO(mon->client) < 0)
        goto error;

    if (!(mon->program = virNetClientProgramNew(VIR_LXC_MONITOR_PROGRAM,
                                                VIR_LXC_MONITOR_PROGRAM_VERSION,
                                                virLXCMonitorEvents,
                                                ARRAY_CARDINALITY(virLXCMonitorEvents),
                                                mon)))
        goto error;

    if (virNetClientAddProgram(mon->client,
                               mon->program) < 0)
        goto error;

    mon->vm = vm;
    memcpy(&mon->cb, cb, sizeof(mon->cb));

    virObjectRef(mon);
    virNetClientSetCloseCallback(mon->client, virLXCMonitorEOFNotify, mon,
                                 virLXCMonitorCloseFreeCallback);
Is the problem occurring due to the invocation of the
virNetClientRegisterAsyncIO API before virNetClientAddProgram?
Probably once we register for async IO, an event immediately comes in,
that thread takes priority, and it bails out since it does not find
client->prog. Also, the client does not retry to establish a new
connection.
Please let me know any thoughts/comments. Is there any patch already
available which has fixed this issue? We are using libvirt 1.2.15.
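To make the suspected window concrete, here is a toy model (plain
Python with hypothetical names, not libvirt code) of why the
registration order would matter:

```python
# Toy model (not libvirt code) of the suspected ordering problem: once
# async IO is registered, events can be dispatched at any moment, so an
# event that arrives before virNetClientAddProgram runs finds no
# matching program and the connection is closed.

class ToyClient:
    def __init__(self):
        self.programs = []
        self.closed = False

    def dispatch(self, prog_id):
        # Mirrors virNetClientCallDispatchMessage: no matching program
        # means the message is dropped and the client marked closed.
        for prog in self.programs:
            if prog == prog_id:
                return "handled"
        self.closed = True
        return "no program found"

LXC_MONITOR_PROGRAM = 305402420  # prog id seen in the logs

# Failure ordering: the event fires before the program is added.
early = ToyClient()
print(early.dispatch(LXC_MONITOR_PROGRAM))  # no program found
print(early.closed)                         # True

# Success ordering: the program is added first, then the event fires.
late = ToyClient()
late.programs.append(LXC_MONITOR_PROGRAM)
print(late.dispatch(LXC_MONITOR_PROGRAM))   # handled
print(late.closed)                          # False
```

If this model matches reality, adding the program before enabling
async IO (or retrying the connection) would close the window.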
-Thanks,
Rammohan
[libvirt] [PATCH] rpc: wait longer for session daemon to start up
by Cole Robinson
https://bugzilla.redhat.com/show_bug.cgi?id=1271183
We only wait 0.5 seconds for the session daemon to start up and present
its socket, which isn't sufficient for many users. Bump up the sleep
interval and retry count so we wait for a total of 5 seconds.
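Spelled out, the arithmetic behind the two hunks below is simply
retries multiplied by the per-retry sleep interval:

```python
# Total wait = retries * sleep interval per retry (usleep takes
# microseconds, hence the division by 1e6).
old_wait_s = 100 * 5000 / 1e6    # 100 retries * usleep(5000)  -> 0.5 s
new_wait_s = 500 * 10000 / 1e6   # 500 retries * usleep(10000) -> 5.0 s
print(old_wait_s, new_wait_s)    # 0.5 5.0
```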
---
danpb suggests instead reverting this:
commit be78814ae07f092d9c4e71fd82dd1947aba2f029
Author: Michal Privoznik <mprivozn(a)redhat.com>
Date: Thu Apr 2 14:41:17 2015 +0200
virNetSocketNewConnectUNIX: Use flocks when spawning a daemon
Prior to that we didn't need the retry logic at all... but that's a
bit more involved, and users' boxes are suffering from this issue in
the meantime.
src/rpc/virnetsocket.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index b0d5b1c..d909b94 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -614,7 +614,7 @@ int virNetSocketNewConnectUNIX(const char *path,
char *lockpath = NULL;
int lockfd = -1;
int fd = -1;
- int retries = 100;
+ int retries = 500;
virSocketAddr localAddr;
virSocketAddr remoteAddr;
char *rundir = NULL;
@@ -707,7 +707,7 @@ int virNetSocketNewConnectUNIX(const char *path,
daemonLaunched = true;
}
- usleep(5000);
+ usleep(10000);
}
localAddr.len = sizeof(localAddr.data);
--
2.5.0
[libvirt] [PATCH] virlog: Fix build breaker with "comparison between signed and unsigned"
by Erik Skultety
Refactor series 0b231195 worked with the virLogDestination type which,
depending on the compiler, might be (and probably will be) an unsigned
data type. However, virEnumFromString may return -1 in case of an
error. So, when the enum happens to be unsigned, some compilers will
naturally complain about a check like 'if (foo < 0)'.
---
I pushed the patch under build breaker rule.
src/util/virlog.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/util/virlog.c b/src/util/virlog.c
index 591d38e..007fc65 100644
--- a/src/util/virlog.c
+++ b/src/util/virlog.c
@@ -1088,7 +1088,7 @@ virLogParseOutput(const char *src)
char *abspath = NULL;
size_t count = 0;
virLogPriority prio;
- virLogDestination dest;
+ int dest;
bool isSUID = virIsSUID();
if (!src)
--
2.4.3
[libvirt] [PATCH v2 0/7] vz: add disk and controller check in domainPostParse phase
by Maxim Nestratov
Changes from v1
===============
A new patch moving prlsdkCheckDiskUnsupportedParams to vz_utils.c was added.
Commit messages reworded.
Minor formatting issues fixed.
Maxim Nestratov (1):
vz: move prlsdkCheckDiskUnsupportedParams to vz_utils.c
Mikhail Feoktistov (6):
vz: save vz version in connection structure
vz: add vzCapabilities to connection structure
vz: check supported disk format and bus
vz: report correct disk format in domainGetXMLDesc
vz: check supported controllers
vz: set default SCSI model
src/vz/vz_driver.c | 61 +++--------
src/vz/vz_sdk.c | 203 +++++++---------------------------
src/vz/vz_sdk.h | 2 +-
src/vz/vz_utils.c | 314 +++++++++++++++++++++++++++++++++++++++++++++++++++++
src/vz/vz_utils.h | 24 ++++
5 files changed, 397 insertions(+), 207 deletions(-)
--
2.4.3
[libvirt] Where does 'rx_drop' in 'domifstat' come from?
by Yaniv Kaul
Any idea why I'd see drops on an interface?
mini@ykaul-mini:~/ovirt-system-tests$ sudo virsh domifstat 111 vnet1
vnet1 rx_bytes 25488
vnet1 rx_packets 387
vnet1 rx_errs 0
*vnet1 rx_drop 1424*
vnet1 tx_bytes 5751
vnet1 tx_packets 39
vnet1 tx_errs 0
vnet1 tx_drop 0
I have several other VMs on the same host, pretty much identically
configured; all are fine, all on the same bridge:
mini@ykaul-mini:~/ovirt-system-tests$ sudo brctl show
bridge name     bridge id           STP enabled     interfaces
5e85-930e31b    8000.525400247745   yes             5e85-93031b-nic
                                                    vnet0
                                                    vnet1
                                                    vnet2
                                                    vnet3
                                                    vnet4
virbr0          8000.5254005e7e4b   yes             virbr0-nic
Network (host0 is the problematic one):
virsh # net-dumpxml 5e85-930e31b9b9
<network connections='5'>
<name>5e85-930e31b9b9</name>
<uuid>6f52f5a8-5322-4b57-890e-d4d5a0a7ed50</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='5e85-930e31b' stp='on' delay='0'/>
<mac address='52:54:00:24:77:45'/>
<dns forwardPlainNames='yes'/>
<ip address='192.168.200.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.200.100' end='192.168.200.254'/>
<host mac='54:52:c0:a8:c8:04' name='lago_basic_suite_3_6_host1'
ip='192.168.200.4'/>
<host mac='54:52:c0:a8:c8:03' name='lago_basic_suite_3_6_engine'
ip='192.168.200.3'/>
<host mac='54:52:c0:a8:c8:02'
name='lago_basic_suite_3_6_storage-iscsi' ip='192.168.200.2'/>
<host mac='54:52:c0:a8:c8:05' name='lago_basic_suite_3_6_storage-nfs'
ip='192.168.200.5'/>
<host mac='54:52:c0:a8:c8:06' name='lago_basic_suite_3_6_host0'
ip='192.168.200.6'/>
</dhcp>
</ip>
</network>
Domain XML attached.
Running libvirt 1.2.18, on latest F23.
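One way to narrow this down (a diagnostic sketch, assuming the usual
rx/tx swap libvirt applies to tap-device stats so they read from the
guest's point of view):

```shell
# For a tap device, the guest's rx_drop corresponds to the host side's
# tx_dropped on the vnet interface: packets the host could not deliver
# into the tap, e.g. while the guest was not yet reading (during boot).
cat /sys/class/net/vnet1/statistics/tx_dropped
ip -s link show vnet1
```

If that kernel counter matches the 1424 reported by domifstat, the
drops happened on the host side of the tap rather than inside the
guest.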
TIA,
Y.