[libvirt] [PATCH 0/2] Couple more Coverity issues

After perusing the pile of 70 or so warnings, these two stuck out as low-hanging fruit that were not false positives.

Many of the remaining "issues" are false positives or perhaps even bugs in Coverity, but I understand why they're being flagged. A common pattern: freeing memory from counted lists where, based on the code path, the incoming count must be zero. Coverity flags them because the incoming list pointer is NULL and the dereference in the for loop would be bad; it doesn't seem to dig deep enough to determine that the count and the list pointer are linked, sigh (yes, a lot of those).

John Ferlan (2):
  virnetserverservice: Resolve Coverity ARRAY_VS_SINGLETON
  qemu_driver: Resolve Coverity FORWARD_NULL

 src/qemu/qemu_driver.c        | 3 +--
 src/rpc/virnetserverservice.c | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)

-- 
1.9.3
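[To illustrate the counted-list pattern described in the cover letter, a minimal hypothetical sketch - not libvirt code, the type and function names are invented:]

    #include <stdlib.h>

    typedef struct {
        char **items;    /* NULL whenever nitems == 0 */
        size_t nitems;
    } myList;

    static void
    myListFree(myList *list)
    {
        size_t i;

        /* If items is NULL, nitems is necessarily 0, so the loop body
         * never executes and items[i] is never dereferenced. Coverity
         * does not connect the two fields, so it reports a possible
         * NULL dereference on items[i] here. */
        for (i = 0; i < list->nitems; i++)
            free(list->items[i]);
        free(list->items);
    }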

Coverity complained about the following:

(3) Event ptr_arith: Performing pointer arithmetic on "cur_fd" in expression "cur_fd++".
130         return virNetServerServiceNewFD(*cur_fd++,

It seems the belief is that there is pointer arithmetic taking place. By using (*cur_fd)++ we avoid this possible issue.

Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/rpc/virnetserverservice.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/rpc/virnetserverservice.c b/src/rpc/virnetserverservice.c
index fea05c3..486f93e 100644
--- a/src/rpc/virnetserverservice.c
+++ b/src/rpc/virnetserverservice.c
@@ -127,7 +127,7 @@ virNetServerServiceNewFDOrUNIX(const char *path,
          * There's still enough file descriptors. In this case we'll
          * use the current one and increment it afterwards.
          */
-        return virNetServerServiceNewFD(*cur_fd++,
+        return virNetServerServiceNewFD((*cur_fd)++,
                                         auth,
 #if WITH_GNUTLS
                                         tls,
-- 
1.9.3

On 08/28/2014 01:28 PM, John Ferlan wrote:
Coverity complained about the following:
(3) Event ptr_arith: Performing pointer arithmetic on "cur_fd" in expression "cur_fd++".
130         return virNetServerServiceNewFD(*cur_fd++,
It seems the belief is that there is pointer arithmetic taking place. By using (*cur_fd)++ we avoid this possible issue.
Not just their belief, but the actual C language.
Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/rpc/virnetserverservice.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/rpc/virnetserverservice.c b/src/rpc/virnetserverservice.c
index fea05c3..486f93e 100644
--- a/src/rpc/virnetserverservice.c
+++ b/src/rpc/virnetserverservice.c
@@ -127,7 +127,7 @@ virNetServerServiceNewFDOrUNIX(const char *path,
          * There's still enough file descriptors. In this case we'll
          * use the current one and increment it afterwards.
          */
-        return virNetServerServiceNewFD(*cur_fd++,
In this function, cur_fd is an int * (pointing to a four-byte int). Suppose cur_fd starts life as 0x1000, and we start with memory contents of 3 at 0x1000 and 4 at 0x1004.

C precedence says this is equivalent to *(cur_fd++): take the pointer cur_fd, do a pointer post-increment, then dereference the memory at the location from before the increment. We end up calling virNetServerServiceNewFD(3) (the contents of the location cur_fd pointed to before the increment), the memory at 0x1000 remains 3, and any further use of cur_fd would be at the next location, 0x1004. But we just returned, so cur_fd is no longer in scope and nobody sees that change; we could have just written '*cur_fd' and been done with it.
+        return virNetServerServiceNewFD((*cur_fd)++,
This, on the other hand, says: dereference cur_fd, then post-increment the value at that location. We still end up calling virNetServerServiceNewFD(3) (the contents of the memory location before the post-increment), but now cur_fd remains at 0x1000, whose contents are now 4.

The caller, daemon/libvirtd.c:daemonSetupNetworking(), calls this function twice in a row. Without your patch, it ends up calling virNetServerServiceNewFD(3) for BOTH the regular and the read-only socket. With your patch, it does the intended registration of fds 3 and 4. I hope that's not a security bug. Can you trace when this was introduced?

ACK.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
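[The two behaviors Eric walks through are easy to verify with a standalone program; a minimal sketch, independent of the libvirt code, where use_fd stands in for virNetServerServiceNewFD:]

    #include <stdio.h>

    /* Stand-in for virNetServerServiceNewFD(): just reports the fd. */
    static void use_fd(int fd) { printf("registering fd %d\n", fd); }

    int main(void)
    {
        int fds[2] = { 3, 4 };
        int *cur_fd = fds;

        use_fd(*cur_fd++);    /* passes 3, advances the POINTER to fds[1] */
        printf("fds[0] is still %d, *cur_fd is %d\n", fds[0], *cur_fd);

        cur_fd = fds;
        use_fd((*cur_fd)++);  /* passes 3, increments the pointed-to VALUE */
        printf("fds[0] is now %d\n", fds[0]);   /* prints 4, as intended */
        return 0;
    }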

On 08/28/2014 04:57 PM, Eric Blake wrote:
On 08/28/2014 01:28 PM, John Ferlan wrote:
Coverity complained about the following:
(3) Event ptr_arith: Performing pointer arithmetic on "cur_fd" in expression "cur_fd++".
130         return virNetServerServiceNewFD(*cur_fd++,
It seems the belief is that there is pointer arithmetic taking place. By using (*cur_fd)++ we avoid this possible issue.
Not just their belief, but the actual C language.
So yes, I was being a little facetious too. I cut off part of an original "comment" regarding auto-increment pointer arithmetic... It is almost always ripe for "issues" - one of those language constructs useful for so many things, but oh so dangerous.
Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/rpc/virnetserverservice.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/rpc/virnetserverservice.c b/src/rpc/virnetserverservice.c
index fea05c3..486f93e 100644
--- a/src/rpc/virnetserverservice.c
+++ b/src/rpc/virnetserverservice.c
@@ -127,7 +127,7 @@ virNetServerServiceNewFDOrUNIX(const char *path,
          * There's still enough file descriptors. In this case we'll
          * use the current one and increment it afterwards.
          */
-        return virNetServerServiceNewFD(*cur_fd++,
In this function, cur_fd is an int * (pointing to a four-byte int). Suppose cur_fd starts life as 0x1000, and we start with memory contents of 3 at 0x1000 and 4 at 0x1004.
C precedence says this is equivalent to *(cur_fd++): take the pointer cur_fd, do a pointer post-increment, then dereference the memory at the location from before the increment. We end up calling virNetServerServiceNewFD(3) (the contents of the location cur_fd pointed to before the increment), the memory at 0x1000 remains 3, and any further use of cur_fd would be at the next location, 0x1004. But we just returned, so cur_fd is no longer in scope and nobody sees that change; we could have just written '*cur_fd' and been done with it.
+        return virNetServerServiceNewFD((*cur_fd)++,
This, on the other hand, says: dereference cur_fd, then post-increment the value at that location. We still end up calling virNetServerServiceNewFD(3) (the contents of the memory location before the post-increment), but now cur_fd remains at 0x1000, whose contents are now 4.
The caller, daemon/libvirtd.c:daemonSetupNetworking(), calls this function twice in a row. Without your patch, it ends up calling virNetServerServiceNewFD(3) for BOTH the regular and the read-only socket. With your patch, it does the intended registration of fds 3 and 4. I hope that's not a security bug. Can you trace when this was introduced?
ACK.
For the benefit of others ;-) as Eric and I have been IRC'ing over this... this was introduced in this release cycle (commit id '9805256d'), so it is not in the wild yet (a fairly recent commit of 8/22).

Although I know you ACK'd what I have, how about the attached instead? It avoids the auto-increment pointer arithmetic (but of course still assumes the fds are sequential). It looks like more changed only because of the formatting to use "ret = " instead of "return" - I just needed one more character in my variable name to avoid lines of single-space adjustment.

John

On 08/28/2014 04:04 PM, John Ferlan wrote:
ACK.
For the benefit of others ;-) as Eric and I have been IRC'ing over this... this was introduced in this release cycle (commit id '9805256d'), so it is not in the wild yet (a fairly recent commit of 8/22).
Although I know you ACK'd what I have, how about the attached instead? It avoids the auto-increment pointer arithmetic (but of course still assumes the fds are sequential). It looks like more changed only because of the formatting to use "ret = " instead of "return" - I just needed one more character in my variable name to avoid lines of single-space adjustment.
No, the short version of (*cur_fd)++ is just fine. Basically, any time * and ++ (or --) appear in the same expression, it's usually sufficient to just add () to make explicit which precedence you intended.

A more invasive solution might be to rewrite the coordination between daemon/libvirtd.c and virnetserverservice.c; right now, the semantics are loosely:

    int cur_fd = STDERR_FILENO + 1;
    read-write: virNetServerServiceNewFDOrUNIX(..., &cur_fd);
    read-only:  virNetServerServiceNewFDOrUNIX(..., &cur_fd);

where the idea is that the code registers two services, and those services are either attached to unix sockets (no change to cur_fd, because it is not consuming fds) or to incoming file descriptors passed in on the command line to main (and we have control over the program exec'ing this, so we KNOW that fd 3 is the read-write and fd 4 is the read-only incoming fd). That is, virNetServerServiceNewFDOrUNIX is incrementing cur_fd on behalf of libvirtd.c, so that the second call starts at the next fd. But that feels magical enough that you could instead have:

    int cur_fd = STDERR_FILENO + 1;
    bool consumed;
    read-write: virNetServerServiceNewFDOrUNIX(..., cur_fd, &consumed);
    if (consumed)
        cur_fd++;
    read-only:  virNetServerServiceNewFDOrUNIX(..., cur_fd, &consumed);

Then this code wouldn't be incrementing the contents of a pointer that will be used in the next call, so much as assigning into a boolean out-parameter; the outermost caller would then be responsible for tracking how fast it is moving through the nfds inherited during exec.

But for the sake of getting the release ready, simpler feels better, so pushing your v1 patch is just fine.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
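[A sketch of how the boolean-out-parameter variant Eric proposes could look - hypothetical code, since libvirt kept the one-line v1 fix; all names below are invented stand-ins:]

    #include <stdbool.h>
    #include <stdio.h>

    /* Stubs standing in for the real service constructors. */
    static int newUNIXService(const char *path) { printf("unix socket %s\n", path); return 0; }
    static int newFDService(int fd) { printf("inherited fd %d\n", fd); return 0; }

    /* Variant that takes the fd by value and reports through *consumed
     * whether it was used, instead of post-incrementing the caller's
     * counter through an int pointer. */
    static int
    serviceNewFDOrUNIX(const char *path, int cur_fd, bool *consumed)
    {
        *consumed = false;
        if (path)                     /* unix socket: no inherited fd used */
            return newUNIXService(path);
        *consumed = true;             /* caller advances past this fd */
        return newFDService(cur_fd);
    }

    int main(void)
    {
        int cur_fd = 3;               /* STDERR_FILENO + 1 */
        bool consumed;

        serviceNewFDOrUNIX(NULL, cur_fd, &consumed);   /* read-write: fd 3 */
        if (consumed)
            cur_fd++;
        serviceNewFDOrUNIX(NULL, cur_fd, &consumed);   /* read-only: fd 4 */
        return 0;
    }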

In qemuDomainSnapshotCreateDiskActive(), if we jumped to cleanup from a failed actions = virJSONValueNewArray(), then 'cfg' would be NULL. So just return -1, which in turn removes the need for the cleanup: label.

Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/qemu/qemu_driver.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 5d21080..239a300 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13023,7 +13023,7 @@ qemuDomainSnapshotCreateDiskActive(virQEMUDriverPtr driver,
 
     if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_TRANSACTION)) {
         if (!(actions = virJSONValueNewArray()))
-            goto cleanup;
+            return -1;
     } else if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DISK_SNAPSHOT)) {
         virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                        _("live disk snapshot not supported with this "
@@ -13106,7 +13106,6 @@ qemuDomainSnapshotCreateDiskActive(virQEMUDriverPtr driver,
         }
     }
 
- cleanup:
     /* recheck backing chains of all disks involved in the snapshot */
     orig_err = virSaveLastError();
     for (i = 0; i < snap->def->ndisks; i++) {
-- 
1.9.3
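[The shape of this FORWARD_NULL bug is worth spelling out; a minimal hypothetical sketch, not the libvirt code - note that the real patch also deleted the cleanup: label outright because no remaining path used it, whereas this sketch keeps a later goto:]

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int log_level; } Config;

    static Config global_config = { 2 };

    /* Always succeeds here, mirroring how virQEMUDriverGetConfig() behaves. */
    static Config *config_get(void) { return &global_config; }

    static int do_snapshot(const char *name) { printf("snapshot %s\n", name); return 0; }

    static int
    snapshot_disks(const char *name)
    {
        Config *cfg = NULL;
        char *actions;
        int ret = -1;

        /* Before the fix this was "goto cleanup", but cfg is still NULL
         * at this point and the cleanup code below dereferences it.
         * Returning -1 directly is safe: nothing has been allocated yet. */
        if (!(actions = malloc(64)))
            return -1;

        cfg = config_get();          /* cfg only becomes non-NULL here */

        if (do_snapshot(name) < 0)
            goto cleanup;            /* later failures may still jump here;
                                      * by now cfg is valid */
        ret = 0;

     cleanup:
        if (cfg->log_level > 1)      /* the dereference Coverity flagged */
            fprintf(stderr, "snapshot %s: ret=%d\n", name, ret);
        free(actions);
        return ret;
    }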

On 08/28/2014 01:28 PM, John Ferlan wrote:
In qemuDomainSnapshotCreateDiskActive(), if we jumped to cleanup from a failed actions = virJSONValueNewArray(), then 'cfg' would be NULL.
So just return -1, which in turn removes the need for the cleanup: label.
Signed-off-by: John Ferlan <jferlan@redhat.com>
---
 src/qemu/qemu_driver.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
ACK.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

On 08/28/2014 03:28 PM, John Ferlan wrote:
After perusing the pile of 70 or so warnings, these two stuck out as low-hanging fruit that were not false positives.
Many of the remaining "issues" are false positives or perhaps even bugs in Coverity, but I understand why they're being flagged. A common pattern: freeing memory from counted lists where, based on the code path, the incoming count must be zero. Coverity flags them because the incoming list pointer is NULL and the dereference in the for loop would be bad; it doesn't seem to dig deep enough to determine that the count and the list pointer are linked, sigh (yes, a lot of those).
John Ferlan (2):
  virnetserverservice: Resolve Coverity ARRAY_VS_SINGLETON
  qemu_driver: Resolve Coverity FORWARD_NULL

 src/qemu/qemu_driver.c        | 3 +--
 src/rpc/virnetserverservice.c | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)
Both are now pushed.

John