DB> +static int lxcContainerMountNewFS(virDomainDefPtr vmDef)
DB> +{
DB> +    virDomainFSDefPtr tmp;
DB> +
DB> +    /* Pull in rest of container's mounts */
DB> +    for (tmp = vmDef->fss; tmp; tmp = tmp->next) {
DB> +        char *src;
DB> +        if (STREQ(tmp->dst, "/"))
DB> +            continue;
DB> +        // XXX fix
DB> +        if (tmp->type != VIR_DOMAIN_FS_TYPE_MOUNT)
DB> +            continue;
DB> +
DB> +        if (asprintf(&src, "/.oldroot/%s", tmp->src) < 0)
DB> +            return -1;
DB> +
DB> +        if (virFileMakePath(tmp->dst) < 0 ||
DB> +            mount(src, tmp->dst, NULL, MS_BIND, NULL) < 0) {
DB> +            VIR_FREE(src);
DB> +            lxcError(NULL, NULL, VIR_ERR_INTERNAL_ERROR,
DB> +                     _("failed to mount %s at %s for container: %s"),
DB> +                     tmp->src, tmp->dst, strerror(errno));
DB> +            return -1;
DB> +        }
DB> +        VIR_FREE(src);
DB> +    }
DB> +    return -1;
Shouldn't this be "return 0"? AFAICT, as written the function always
fails, which means any domain with a root filesystem target will fail to
start. If I change this to "return 0", I'm able to start such guests
with properly pivoted roots.
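For reference, the only change in my local copy is the final return
(everything above it exactly as posted, with the closing brace restored):

        VIR_FREE(src);
    }
    return 0;
}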
On a more general note, there seem to be a lot of places where a failure
triggers a "return -1" that rolls all the way up the stack without any
error information being logged. Since you have the excellent
per-container logging functionality, can we increase the verbosity a
little so that there is some way to diagnose where things are failing?
So far I've just been sprinkling fprintf()s into the code until I manage
to narrow things down. I'd be glad to help with that after this goes in.
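As a rough sketch (the error code and message here are just placeholders
I picked, not a definitive choice), each silent failure site could report
itself through the existing lxcError() before unwinding, e.g. for the
asprintf() case quoted above:

    if (asprintf(&src, "/.oldroot/%s", tmp->src) < 0) {
        /* Log the failure point rather than silently unwinding;
         * I'm assuming errno is meaningful (ENOMEM) here. */
        lxcError(NULL, NULL, VIR_ERR_INTERNAL_ERROR,
                 _("failed to build mount source path for %s: %s"),
                 tmp->src, strerror(errno));
        return -1;
    }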
Thanks!
--
Dan Smith
IBM Linux Technology Center
Open Hypervisor Team
email: danms@us.ibm.com