On 07/01/2015 10:20 AM, Peter Krempa wrote:
On Wed, Jul 01, 2015 at 10:03:44 -0400, John Ferlan wrote:
> > The following Coverity issues are flagged in Coverity 7.6.1 compared
> > to not being seen in 7.5.1 - this series just addresses those issues
> > before 7.6.1 is installed in our upstream Jenkins environment:
> >
> > http://jenkins-virt.usersys.redhat.com/job/libvirt-coverity/
> This is a Red Hat internal link that other users don't have access to;
> it definitely should not be published to a mailing list, at least to
> avoid frustration for non-redhatters.
hmm.. for some reason I thought it was an external reference - I know
we've been running many tests thru Jenkins on an external CentOS server,
so my brain said this was too... But now you've reminded me that the
Coverity ones had to be internal because of the license... In any case -
it wasn't intentional... Sorry about that.
> > It seems the bulk are essentially false positives, but we can do something
> > in order to work around them.
> So why are we doing anything about that? Coverity is the software that
> should be fixed here.
> While coverity is a very useful tool in some cases where it reports
> actual errors, the noise it generates is sometimes quite unbearable, and
> it looks as if the new version added just a few false positives, but no
> actual faults.
> Additionally, if we continue to patch up the mistakes rather than
> reporting them, we might well end up flagging something with the
> sa_assert() macro that changes into an actual error in later
> patches, and then we won't be able to detect that.
> As such, I think that libvirt should mostly fix just actual errors
> found by coverity, and people who run coverity on the libvirt code base
> should report the errors to the coverity vendor to fix the false
> positive notifications rather than working around them by silencing them.
> </rant>
> Peter
Certainly understand your frustration with Coverity's false positives;
however, remember that perhaps one person's "bug" is someone else's
"feature". The more complex the code and the deeper the call stack,
the more difficult it is for artificial intelligence tools to model
every possible combination. There's a point at which even they have to
let the human intelligence (sic) take over.
A particular pain point is APIs where a status or count is returned as
well as a pointer to something. We know from reading the man pages or
reading the code that the only way that pointer can be filled in is if
the status/count has a particular value (usually non-negative, so zero
or more). However, that's more difficult for modeling code which tries
"all" paths when it finds a condition and then flags anywhere there's a
"chance" for something to go wrong. For example, code such as the
following causes these types of issues:
    struct foo *retparam = NULL;

    if ((rc = function(param, param, &retparam)) < 0)
        goto error;

    if (retparam->field)
        ...
Coverity will scan the code and "flag" that retparam could be a problem
here. It will further dive into 'function' and determine code paths from
there that could perhaps return >= 0 without retparam being filled in.
If 'function' is a macro for some other function, which then calls yet
another function, which is a macro for something else (ad nauseam), it
becomes more difficult to model, thus Coverity "flags" it as a potential
problem for a human to resolve/check. False positives are a way of life -
they get worse with stricter checking - we only do the 'basic' checks.
In the case of the 'virpci' changes (patches 1-3), the "eventual" call
to virVasprintfInternal() can fail, returning -1 and setting '*strp =
NULL;' - perfectly reasonable... The caller virAsprintfInternal doesn't
check ret, but passes it back to its caller via macro in virstring.h:

    # define virAsprintf(strp, ...) \
        virAsprintfInternal(VIR_FROM_THIS, __FILE__, __FUNCTION__, \
                            __LINE__, strp, __VA_ARGS__)
Eventually virPCIFile (for example) returns -1 or 0 based on that
return, but it seems correlating that return back in any of the callers
cannot happen. Coverity wants to model both the true and false path of
the condition, thus the error. Whether it's macros or variable call
stacks that confuse things, I'm not quite sure. But I'll assume there
was some corner case in someone else's code that caused the modeling
code not to flag it when someone felt it should. Who really knows; I
don't follow Coverity development that closely.
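To illustrate the contract being described, here is a hypothetical
asprintf_wrapper() - not libvirt's actual virVasprintfInternal() - that
follows the same "return -1 and NULL the pointer on failure" rule:

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the virVasprintfInternal() contract
 * described above: on failure it returns -1 and sets *strp = NULL;
 * on success it returns the formatted length. */
static int asprintf_wrapper(char **strp, const char *fmt, ...)
{
    va_list ap;
    int len;

    *strp = NULL;

    /* First pass: compute the required length. */
    va_start(ap, fmt);
    len = vsnprintf(NULL, 0, fmt, ap);
    va_end(ap);

    if (len < 0 || !(*strp = malloc(len + 1)))
        return -1;

    /* Second pass: actually format into the buffer. */
    va_start(ap, fmt);
    vsnprintf(*strp, len + 1, fmt, ap);
    va_end(ap);
    return len;
}
```

A caller that checks for a negative return therefore knows the pointer
was filled in - exactly the correlation Coverity fails to carry back up
the macro chain.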
Sometimes, issues are even more opaque...
Consider the case from openvz.conf.c changes:
    if (VIR_STRDUP(str, value) < 0)
        goto error;

    iden = strtok_r(line, " ", &saveptr);
Seems perfectly fine, right? However, if one digs into
VIR_STRDUP()/virStrdup() they find this:
    *dest = NULL;
    if (!src)
        return 0;
So if by chance 'value' was NULL (we don't prevent or check that), then
str will be NULL - and that's perfectly reasonable; the caller is
expected to check it, right? But in this case, in order to get into the
function, we have 'assumed' that 'value' is non-NULL based on the
positive return:
    ret = openvzReadVPSConfigParam(veid, param, &temp);
    if (ret > 0) {
        if (openvzParseBarrierLimit(temp, &barrier, &limit)) {
So now Coverity cannot just model the called functions; rather, it has
to model the calling functions and "know" that ret > 0 means temp !=
NULL. We can work this out ourselves by reading the header of the
ConfigParam function, but the artificial intelligence just doesn't see
that, since it doesn't know to correlate "ret" with "temp". Thus we
just have to teach it and move on.
John