On 04/25/2011 10:57 AM, Eric Blake wrote:
> On 04/24/2011 04:26 PM, Matthias Bolte wrote:
>> Make virtTestLoadFile allocate the buffer to read the file into.
>>
>> Fix logic error in virtTestLoadFile, stop reading on an empty line.
>>
>> Use virFileReadLimFD in virtTestCaptureProgramOutput.
>> ---
>> +++ b/tests/commandhelper.c
>> @@ -99,8 +99,8 @@ int main(int argc, char **argv) {
>>      }
>>
>>      fprintf(log, "DAEMON:%s\n", getpgrp() == getsid(0) ? "yes" : "no");
>> -    char cwd[1024];
>> -    if (!getcwd(cwd, sizeof(cwd)))
>> +    char *cwd = NULL;
>> +    if (!(cwd = getcwd(NULL, 0)))
> Ouch. This is not portable to POSIX, and while gnulib can guarantee
> that it works, the current gnulib getcwd module is GPL (and relies on
> openat, which is a rather heavy-weight replacement!).
I realize my statement may have come across as rather cryptic, so here
are some more details:
POSIX states that getcwd(NULL, n) has unspecified behavior. On most
modern systems, it malloc's n bytes (or, if n is 0, a buffer of exactly
the size needed for the answer); but older Solaris would fail with
EINVAL, and it is possible that other old OSes failed with a core dump.
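For illustration, this snippet relies on the extension in question (a
minimal sketch; the exact failure mode on older systems varies):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Unspecified by POSIX: glibc mallocs a buffer of exactly the
     * right size when passed (NULL, 0); older Solaris fails EINVAL.  */
    char *cwd = getcwd(NULL, 0);
    if (!cwd) {
        perror("getcwd");
        return EXIT_FAILURE;
    }
    puts(cwd);
    free(cwd);
    return EXIT_SUCCESS;
}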
POSIX also states that getcwd(buf, n) must fail with ERANGE if buf was
too small, so that you can manage the malloc()s yourself and iteratively
try larger buffers until you succeed (or, less likely, fail for some
other reason like EACCES); at least tools/virsh.c was already using this
idiom.
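In code, that idiom looks roughly like this (my sketch of the general
pattern, using a hypothetical my_getcwd() helper; not a copy of the
virsh implementation):

#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/* Grow the buffer until getcwd() succeeds, or fails for a reason
 * other than ERANGE; returns a malloc'd string, or NULL on failure.  */
static char *
my_getcwd(void)
{
    size_t size = 128;
    char *buf = NULL;

    for (;;) {
        char *tmp = realloc(buf, size);
        if (!tmp) {
            free(buf);
            return NULL;            /* ENOMEM */
        }
        buf = tmp;
        if (getcwd(buf, size))
            return buf;             /* success */
        if (errno != ERANGE) {
            free(buf);
            return NULL;            /* some other reason, e.g. EACCES */
        }
        size *= 2;                  /* buffer too small; try again */
    }
}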
Meanwhile, modern Linux mishandles getcwd() for deeply nested
directories. The kernel refuses to return the current working directory
if its absolute name is longer than a page (only possible for
super-deep hierarchies, but Jim Meyering is fond of creating those as
stress-tests for coreutils and gnulib). /proc/self/cwd might fare
better, but is probably another instance of the kernel giving up if the
absolute name gets too long.
glibc has fallbacks in place for when the syscall fails, which involve
readdir() over progressively longer ../../ chains to learn the name of
each parent directory; by using openat(), it is possible to still do
this in linear instead of O(n^2) time without having to use chdir().
However, these fallbacks can fail if any directory in the middle has
permission issues, such as no search permission.
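To make that concrete, here is a much-simplified sketch of such a
fallback (a hypothetical slow_getcwd() with fixed-size buffers and
minimal error handling; the real glibc/gnulib code is far more careful
and also copes with mount points):

#include <dirent.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static char *
slow_getcwd(void)
{
    char path[4096] = "";           /* assembled right to left */
    struct stat child, parent, st;
    int fd = open(".", O_RDONLY | O_DIRECTORY);

    if (fd < 0 || fstat(fd, &child) < 0)
        goto fail;

    for (;;) {
        int pfd = openat(fd, "..", O_RDONLY | O_DIRECTORY);
        close(fd);
        fd = pfd;
        if (fd < 0 || fstat(fd, &parent) < 0)
            goto fail;

        /* At the root, "." and ".." are the same directory.  */
        if (parent.st_dev == child.st_dev && parent.st_ino == child.st_ino)
            break;

        /* Scan the parent for the entry matching the child; this is
         * where a directory without search permission breaks us.  */
        DIR *dir = fdopendir(dup(fd));
        struct dirent *e;
        int found = 0;
        if (!dir)
            goto fail;
        while ((e = readdir(dir)) != NULL) {
            if (fstatat(fd, e->d_name, &st, AT_SYMLINK_NOFOLLOW) == 0 &&
                st.st_dev == child.st_dev && st.st_ino == child.st_ino) {
                /* Prepend "/name" to what we have so far.  */
                char tmp[4096];
                snprintf(tmp, sizeof(tmp), "/%s%s", e->d_name, path);
                strcpy(path, tmp);
                found = 1;
                break;
            }
        }
        closedir(dir);
        if (!found)
            goto fail;
        child = parent;             /* move up one level */
    }
    close(fd);
    return strdup(*path ? path : "/");

 fail:
    if (fd >= 0)
        close(fd);
    return NULL;
}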
The end result: portable code must _always_ be prepared for getcwd() to
fail (and not just due to ENOMEM). And while it is always possible to
manage malloc() yourself, that gets to be tedious, so it would be nice
to make gnulib guarantee that getcwd(NULL,0) manages malloc().
The current gnulib module for getcwd is the least likely to fail: it
combines the best of all worlds (syscall, /proc/self/cwd, and openat()
../../ traversal) into some pretty complex code with very few failure
cases (it can succeed in some places where glibc does not); however,
that complexity comes with some pretty heavy weight (the gnulib
openat() module can cause a call to exit(), so it is not library-safe;
it is therefore GPL-only code and can't be used in LGPL libvirt).
So I'm working on adding a new gnulib module, getcwd-lgpl, which
guarantees the allocation for getcwd(NULL,0) but doesn't address the
possible problems with super-deep hierarchies. That is, this version is
more likely to fail than the robust version used in GNU coreutils' pwd,
but those failures are in corner cases unlikely to happen (no one
installs libvirtd into a super-deep directory) or where we can safely
pass failure back to the caller (it's now up to the user to bypass
their super-deep hierarchy in some other manner, since libvirt won't do
it), which seems like a reasonable trade-off.
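In rough terms, such a wrapper could look like this (my sketch of the
intended semantics, assuming a gnulib-style rpl_getcwd replacement
name; not the actual module source):

#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/* getcwd-lgpl style wrapper: guarantee that getcwd(NULL, 0) mallocs
 * the result, but otherwise defer to the system getcwd(), super-deep
 * hierarchies and all.  */
char *
rpl_getcwd(char *buf, size_t size)
{
    if (buf)
        return getcwd(buf, size);   /* caller-managed buffer */

    if (size) {
        /* glibc extension: allocate exactly size bytes.  */
        char *result = malloc(size);
        if (result && !getcwd(result, size)) {
            int saved_errno = errno;
            free(result);
            errno = saved_errno;
            result = NULL;
        }
        return result;
    }

    /* getcwd(NULL, 0): same ERANGE growth idiom as shown earlier.  */
    for (size = 128; ; size *= 2) {
        char *tmp = realloc(buf, size);
        if (!tmp) {
            free(buf);
            return NULL;
        }
        buf = tmp;
        if (getcwd(buf, size))
            return buf;
        if (errno != ERANGE) {
            free(buf);
            return NULL;
        }
    }
}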
--
Eric Blake   eblake@redhat.com    +1-801-349-2682
Libvirt virtualization library
http://libvirt.org