[libvirt] [PATCH] rpc: fix race sending and encoding sasl data
by Daniel P. Berrange
The virNetSocketWriteSASL method has to encode the buffer it is given and then
write it to the underlying socket. This write is not guaranteed to send the
full amount of data that was encoded by SASL. We cache the SASL encoded data so
that on the next invocation of virNetSocketWriteSASL we carry on sending it.
The subtle problem is that the 'len' value passed into virNetSocketWriteSASL on
the 2nd call may be larger than the original value. So when we've completed
sending the SASL encoded data we previously cached, we must return the original
length we encoded, not the new length.
This flaw means we could potentially have discarded queued data without
sending it. This would have exhibited itself as a libvirt client never receiving
the reply to a method it invoked, async events silently going missing, or, worse,
stream data silently getting dropped.
For this to be a problem, libvirtd would have to have queued data to send to the
client while, at the same time, the TCP socket send buffer is full (due to a very
slow client). This is quite unlikely, so if this bug was ever triggered by a
real-world user it would be almost impossible to reproduce or diagnose, if indeed
it was ever noticed at all.
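To make the failure mode concrete, here is a small self-contained sketch (my own
illustration, not libvirt's actual code: the SASL encoder and the wire write are
stubbed out, and only the variable names mirror virNetSocketWriteSASL) showing why
the completed write must report the raw length that was originally encoded rather
than the current 'tosend':

#include <stdio.h>
#include <sys/types.h>

static const char *saslEncoded;      /* cached encoded buffer */
static size_t saslEncodedLength;     /* length of the encoded buffer */
static size_t saslEncodedRawLength;  /* raw bytes consumed when it was encoded */
static size_t saslEncodedOffset;     /* how much of the encoded buffer was sent */

/* Stub "encoder": encoding is a no-op that just remembers the input. */
static void fake_encode(const char *buf, size_t len)
{
    saslEncoded = buf;
    saslEncodedLength = len;
    saslEncodedRawLength = len;      /* remember how much raw data this covers */
    saslEncodedOffset = 0;
}

/* Stub wire write that can only push 'limit' bytes per call. */
static ssize_t fake_wire_write(size_t want, size_t limit)
{
    return want < limit ? (ssize_t)want : (ssize_t)limit;
}

static ssize_t write_sasl(const char *buf, size_t len, size_t wire_limit)
{
    ssize_t sent;

    if (!saslEncoded)                /* nothing cached yet: encode this request */
        fake_encode(buf, len);

    sent = fake_wire_write(saslEncodedLength - saslEncodedOffset, wire_limit);
    if (sent < 0)
        return -1;
    saslEncodedOffset += sent;

    if (saslEncodedOffset == saslEncodedLength) {
        ssize_t done = saslEncodedRawLength; /* the fix: report the original raw length */
        saslEncoded = NULL;
        saslEncodedOffset = saslEncodedLength = saslEncodedRawLength = 0;
        return done;                 /* returning 'len' here would over-report */
    }
    return 0;                        /* still pending: pretend nothing was consumed */
}

int main(void)
{
    /* 1st call: 10 raw bytes queued, but the wire only accepts 4 encoded bytes. */
    printf("call 1 -> %zd\n", write_sasl("0123456789", 10, 4));        /* 0 */
    /* The caller queues 6 more raw bytes, so the 2nd call asks to send 16. */
    printf("call 2 -> %zd\n", write_sasl("0123456789abcdef", 16, 64)); /* 10, not 16 */
    return 0;
}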
Signed-off-by: Daniel P. Berrange <berrange(a)redhat.com>
---
src/rpc/virnetsocket.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index 23089afef4..2d41a716ba 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -107,6 +107,7 @@ struct _virNetSocket {
const char *saslEncoded;
size_t saslEncodedLength;
+ size_t saslEncodedRawLength;
size_t saslEncodedOffset;
#endif
#if WITH_SSH2
@@ -1927,6 +1928,7 @@ static ssize_t virNetSocketWriteSASL(virNetSocketPtr sock, const char *buf, size
&sock->saslEncodedLength) < 0)
return -1;
+ sock->saslEncodedRawLength = tosend;
sock->saslEncodedOffset = 0;
}
@@ -1943,11 +1945,20 @@ static ssize_t virNetSocketWriteSASL(virNetSocketPtr sock, const char *buf, size
/* Sent all encoded, so update raw buffer to indicate completion */
if (sock->saslEncodedOffset == sock->saslEncodedLength) {
+ ssize_t done = sock->saslEncodedRawLength;
sock->saslEncoded = NULL;
- sock->saslEncodedOffset = sock->saslEncodedLength = 0;
-
- /* Mark as complete, so caller detects completion */
- return tosend;
+ sock->saslEncodedOffset = sock->saslEncodedLength = sock->saslEncodedRawLength = 0;
+
+ /* Mark as complete, so caller detects completion.
+ *
+ * Note that 'done' is possibly less than our current
+ * 'tosend' value, since if virNetSocketWriteWire
+ * only partially sent the data, we might have been
+ * called a 2nd time to write remaining cached
+ * encoded data. This means that the caller might
+ * also have further raw data pending that's included
+ * in 'tosend' */
+ return done;
} else {
/* Still have stuff pending in saslEncoded buffer.
* Pretend to caller that we didn't send any yet.
--
2.14.3
[libvirt] [PATCH v2 0/2] qemu: Enforce vCPU hotplug granularity constraints
by Andrea Bolognani
Changes from [v1]:
* make qemuDomainDefGetVcpuHotplugGranularity() simpler by returning
the result directly instead of requiring a return location;
* fix inaccurate comments.
[v1] https://www.redhat.com/archives/libvir-list/2017-December/msg00539.html
Andrea Bolognani (2):
qemu: Invert condition nesting in qemuDomainDefValidate()
qemu: Enforce vCPU hotplug granularity constraints
src/qemu/qemu_domain.c | 58 +++++++++++++++++++---
tests/qemuxml2argvdata/cpu-hotplug-granularity.xml | 18 +++++++
tests/qemuxml2argvtest.c | 3 ++
3 files changed, 73 insertions(+), 6 deletions(-)
create mode 100644 tests/qemuxml2argvdata/cpu-hotplug-granularity.xml
--
2.14.3
[libvirt] [PATCH 0/2] Two qemu config validation patches
by Laine Stump
The first is a bug fix for a recently pushed bug fix; the second just
reorganizes working code.
Laine Stump (2):
qemu: assign correct type of PCI address for vhost-scsi when using
pcie-root
qemu: move qemuDomainDefValidateVideo into
qemuDomainDeviceDefValidateVideo
src/qemu/qemu_domain.c | 151 +++++++++------------
src/qemu/qemu_domain_address.c | 7 +
.../hostdev-scsi-vhost-scsi-pcie.args | 25 ++++
.../hostdev-scsi-vhost-scsi-pcie.xml | 23 ++++
tests/qemuxml2argvtest.c | 7 +
.../hostdev-scsi-vhost-scsi-pcie.xml | 40 ++++++
tests/qemuxml2xmltest.c | 7 +
7 files changed, 175 insertions(+), 85 deletions(-)
create mode 100644 tests/qemuxml2argvdata/hostdev-scsi-vhost-scsi-pcie.args
create mode 100644 tests/qemuxml2argvdata/hostdev-scsi-vhost-scsi-pcie.xml
create mode 100644 tests/qemuxml2xmloutdata/hostdev-scsi-vhost-scsi-pcie.xml
--
2.13.6
[libvirt] libvirt wiki account request
by Ján Tomko
Hello.
On the 5th anniversary of commit 9d92bf1 I would like to request
a libvirt wiki account.
login: jtomko
e-mail: jtomko(a)redhat.com
password: hunter2
Thank you,
Jan
[libvirt] Qemu capability probes lifecycle should be tied to libvirtd
by Christian Ehrhardt
Hi,
on libvirt 3.10 I see a set of qemu processes used for capability
probing [1] (in my case 8x x86_64 and 3x i386, which seems like a lot,
but OK).
But when stopping the service, those processes stay around [2].
That is correct for guests that were started by libvirt as their
lifecycle isn't tied to the libvirtd service. But those probes are
IMHO tied to the service.
At first this might seem irrelevant, but when users want to uninstall,
for example, they might think of stopping their guests, yet I'd assume
no one will clean up the capability probes before the hard removal.
Then, on removal, scripts will run into issues, e.g. failing to remove
users because they are still in use by those qemu processes.
Right now distros have to be aware of this and clean those up, at least
at times when packaging would expect them to be gone, but I wanted to
ask whether there would be a consensus that it would be "correct" to
stop the processes when libvirtd stops.
[1]: http://paste.ubuntu.com/26208661/
[2]: http://paste.ubuntu.com/26208664/
P.S. I discussed this on IRC last Friday, but other than Michael
confirming the current state, the discussion hasn't gained any further
traction yet.
--
Christian Ehrhardt
Software Engineer, Ubuntu Server
Canonical Ltd
[libvirt] [PATCH] Improve filtering of Xen domain0 in libvirt-guests
by Jim Fehlig
The list_guests function in libvirt-guests uses 'grep -v' to filter
Xen domain0 from a list of guests. If domain0 is the only item in
the list, 'grep -v' returns 1, causing the 'stop' operation to fail
when action is 'suspend'. Improve the filtering by using sed to remove
domain0 from the list of guests.
Signed-off-by: Jim Fehlig <jfehlig(a)suse.com>
---
Failure of the 'stop' operation was fixed in commit 69ed99c7 by marking
domain0 as a persistent domain. That fixes cases where domain0 is the
only domain in the list of persistent domains. But there may be cases
where domain0 is the only domain in 'virsh list --uuid', and the code
should be made more robust to handle those.
tools/libvirt-guests.sh.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/libvirt-guests.sh.in b/tools/libvirt-guests.sh.in
index 791d9277b..8a158cca4 100644
--- a/tools/libvirt-guests.sh.in
+++ b/tools/libvirt-guests.sh.in
@@ -121,7 +121,7 @@ list_guests() {
return 1
fi
- echo "$list" | grep -v 00000000-0000-0000-0000-000000000000
+ echo "$list" | sed "/00000000-0000-0000-0000-000000000000/d"
}
# guest_name URI UUID
--
2.15.1
[libvirt] RFC: auto port allocations
by Nikolay Shirokovskiy
There is a problem with current port allocations in libvirt.
1. With the default qemu driver conf values remote_display_port_min = 5900 and
remote_websocket_port_min = 5700, one cannot start more than 200 domains
each of which has vnc graphics with an auto-allocated socket and websocket. For
the 200th domain the driver allocates port 6100 for both the socket and the
websocket, and qemu fails to start.
2. Different hypervisor drivers use port pools with the same borders (libxl
and qemu migration ports, or hardcoded libxl vnc sockets and default-conf
qemu vnc sockets, for example; however, I'm not sure it is possible/practical
to use these drivers simultaneously on the same host). As a result there
can be failures due to races: the first driver checks that a port can be bound
successfully, then the second driver checks the same port successfully, and
then both pass the same port value to their hypervisors.
The suggestion is to make the port bitmap driver- or daemon-global and leave
only the borders in the port pool object. This way we can solve the first
issue or both issues, respectively.
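As a rough sketch of the daemon-global direction (the names and structure below
are my own assumptions, not an existing libvirt API), the shared state could be a
single bitmap behind one lock, with each pool reduced to just its borders:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define PORT_SPACE 65536

static pthread_mutex_t portLock = PTHREAD_MUTEX_INITIALIZER;
static bool portUsed[PORT_SPACE];     /* one bitmap shared by the whole daemon */

/* Reserve the first free port in [min, max]; returns -1 when the range is exhausted. */
static int portPoolAcquire(int min, int max)
{
    int port = -1;
    int p;

    pthread_mutex_lock(&portLock);
    for (p = min; p <= max && p < PORT_SPACE; p++) {
        if (!portUsed[p]) {
            portUsed[p] = true;       /* reserved before anyone tries to bind it */
            port = p;
            break;
        }
    }
    pthread_mutex_unlock(&portLock);
    return port;
}

static void portPoolRelease(int port)
{
    if (port < 0 || port >= PORT_SPACE)
        return;
    pthread_mutex_lock(&portLock);
    portUsed[port] = false;
    pthread_mutex_unlock(&portLock);
}

int main(void)
{
    /* Two pools with overlapping borders can no longer hand out the same port,
     * because they draw from the same daemon-wide bitmap. */
    int a = portPoolAcquire(5900, 65535);   /* e.g. qemu VNC */
    int b = portPoolAcquire(5900, 65535);   /* e.g. libxl VNC */
    printf("driver A got %d, driver B got %d\n", a, b);
    portPoolRelease(a);
    portPoolRelease(b);
    return 0;
}

A real implementation would of course still have to check that a reserved port is
actually bindable on the host, but with one shared bitmap the drivers at least
agree on what has already been handed out.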
Nikolay
[libvirt] [perl] Fix check of return value from virStreamRecv*
by Jim Fehlig
Commit 049baec5 introduced a small bug in the logic checking the
return value of virStreamRecv*. Change the logic to only croak
when the return value is < 0 and not equal to -2 or -3.
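For context, here is a sketch (my own, not code from the bindings) of how a C
caller treats those return codes: -2 is virStreamRecv's non-blocking "would block"
indicator, and -3 is the "stopped at a hole" value that virStreamRecvFlags can
return when VIR_STREAM_RECV_STOP_AT_HOLE is set; neither is a genuine error.

#include <stddef.h>
#include <libvirt/libvirt.h>

/* Sketch only: distinguish the special virStreamRecv* return values. */
static int recv_chunk(virStreamPtr st, char *buf, size_t len)
{
    int rv = virStreamRecv(st, buf, len);

    if (rv == -2 || rv == -3)   /* would block / stopped at a hole: not an error */
        return 0;
    if (rv < 0)                 /* genuine failure: the only case worth croaking on */
        return -1;
    return rv;                  /* 0 = end of stream, >0 = bytes received */
}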
---
Changes | 2 +-
Virt.xs | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/Changes b/Changes
index 3763b1d..96b42f8 100644
--- a/Changes
+++ b/Changes
@@ -2,7 +2,7 @@ Revision history for perl module Sys::Virt
4.0.0 2018-00-00
- - XXX
+ - Fix check of return value from virStreamRecv*
3.9.1 2017-12-05
diff --git a/Virt.xs b/Virt.xs
index 415eb8a..c47b915 100644
--- a/Virt.xs
+++ b/Virt.xs
@@ -8074,7 +8074,7 @@ recv(st, data, nbytes, flags=0)
else
RETVAL = virStreamRecv(st, rawdata, nbytes);
- if (RETVAL != -2 && RETVAL != -3) {
+ if ((RETVAL < 0) && (RETVAL != -2 && RETVAL != -3)) {
Safefree(rawdata);
_croak_error();
}
--
2.15.1