On Thu, Jul 14, 2016 at 10:17:56AM +0200, Andrea Bolognani wrote:
On Thu, 2016-07-14 at 08:30 +0200, Martin Kletzander wrote:
> > Commit ca10bb040fcf introduced a new test that fails to build
> > on at least some architectures:
> >
> > commandtest.c: In function 'test25':
> > commandtest.c:1121:5: error: comparison is always true due to
> > limited range of data type [-Werror=type-limits]
> > if (rv >= 0) {
> > ^
> >
> > Change the type of 'rv' from char to int, which is the proper
> > return type for virCommandExec() anyway.
> > ---
> > Posting this to the list so that Michal/others can chime in.
> >
> > Using int instead of char seems completely safe here, and in
> > fact should probably have been the right choice from the start.
> >
> > On the other hand, I would expect this kind of error if we were
> > using unsigned char, not plain char... By the way, changing it
> > to signed char is another way to get the code to compile again
> > on ppc64/aarch64.
>
> Sounds about right to me. The byte sent over the pipe is usually just a
> char because we don't need anything else, but from what I remember of the
> rest of the code this is an exception: the value is used as an integer
> later on (elsewhere we just throw it away or check whether it's 0 or 1).
> So ACK.
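(For anyone skimming the archive: the change boils down to something like
the rough sketch below. fake_exec() is a hypothetical stand-in for
virCommandExec(), which returns int; this is not the actual commandtest.c
hunk.)

#include <stdio.h>

/* stand-in for virCommandExec(): returns int, negative on failure */
static int
fake_exec(void)
{
    return -1;
}

int
main(void)
{
    int rv = -1;        /* was "char rv = -1" before the fix */

    rv = fake_exec();
    if (rv >= 0) {      /* fine for int; the old char version of this check
                         * is always true where plain char is unsigned */
        printf("exec succeeded\n");
        return 0;
    }

    printf("exec failed: %d\n", rv);
    return 1;
}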
I still don't get why this silly C program
#include <stdio.h>

int
main(int argc,
     char **argv)
{
    char a = -1;

    if (a >= 0) {
        printf("positive\n");
    }

    return 0;
}
compiles just fine with -Wtype-limits -Werror on x86_64, but
fails with the same error as above on ppc64 and aarch64.
Isn't char supposed to be signed unless otherwise specified?
Or is char being signed or unsigned an implementation choice?
Anyway, pushed.
A quick search turned up this:
http://stackoverflow.com/a/2054941
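The short version: the standard leaves the signedness of plain char to the
implementation, and the usual ABIs make it signed on x86 but unsigned on
ARM and POWER, which is why the warning only fires on ppc64/aarch64. A
minimal way to check on a given box (just a sketch, nothing libvirt
specific):

#include <stdio.h>
#include <limits.h>

int
main(void)
{
    /* CHAR_MIN is 0 where plain char is unsigned, negative where signed */
    printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);
    printf("plain char is %s\n", CHAR_MIN < 0 ? "signed" : "unsigned");
    return 0;
}

GCC's -fsigned-char/-funsigned-char flags can override the platform default
either way.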
--
Andrea Bolognani / Red Hat / Virtualization