On 03/06/2018 12:31 PM, Michal Privoznik wrote:
On 03/02/2018 04:54 PM, John Ferlan wrote:
>
>
> On 02/21/2018 01:11 PM, Michal Privoznik wrote:
>> Before we exec() qemu we have to spawn pr-helper processes for
>> all managed reservations (well, technically there can only one).
>
> "can be only one"
>
> Since there can only be one why do we bother w/ plurality?
Yeah, this is a leftover from v1 where the code was prepared for multiple
managed helpers per one domain.
>
>> The only caveat there is that we should place the process into
>> the same namespace and cgroup as qemu (so that it shares the same
>> view of the system). But we can do that only after we've forked.
>> That means calling the setup function between fork() and exec().
>
> Seems like a good note/comment in the code where
> qemuProcessSetupPRDaemons is called so that someone doesn't have to go
> back to the commit message in order to find out why the call was placed
> where it was.
Okay, I will add it. But we need some rules on this, because sometimes I
add a comment and the reviewer says putting the info in the commit message
is enough. Or vice versa.
Agreed - it would be nice, but it's reviewer dependent. I look at it
this way: is the code really self-documenting? Is there something
being done that we should note in a code comment rather than forcing a
reader to "find" the commit that added the function (needle, meet
haystack) or the upstream review where even more relevant discussion may
have taken place.
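For reference, the placement constraint the commit message describes can be shown with a minimal, self-contained sketch. This is plain POSIX, not libvirt's virCommand API, and `pre_exec_hook`/`spawn_with_hook` are made-up names: the point is only that per-process state (namespace and cgroup membership) of the helper can be adjusted solely in the window between fork() and exec().

```c
#include <assert.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-in for a pre-exec hook like qemuProcessSetupOnePRDaemonHook:
 * it runs in the child, after fork() but before exec(), which is the
 * only place where per-process properties of the soon-to-be helper
 * (namespaces, cgroup membership) can still be changed. */
static int pre_exec_hook(void *opaque)
{
    (void) opaque;
    /* Real code would call setns() / attach to the cgroup here. */
    return 0;
}

static pid_t spawn_with_hook(const char *binary, void *opaque)
{
    pid_t child = fork();

    if (child < 0)
        return -1;
    if (child == 0) {
        if (pre_exec_hook(opaque) < 0)
            _exit(127);                 /* hook failed: never exec */
        execlp(binary, binary, (char *) NULL);
        _exit(127);                     /* exec itself failed */
    }
    return child;
}
```

Once exec() has happened the child's address space is replaced, so anything that must observe "the same view of the system" as qemu has to run in this hook.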
>
>>
>> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
>> ---
>> src/qemu/qemu_process.c | 164 ++++++++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 164 insertions(+)
>>
>> diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
>> index dc33eb7bc..33e0ad30c 100644
>> --- a/src/qemu/qemu_process.c
>> +++ b/src/qemu/qemu_process.c
>> @@ -2551,6 +2551,164 @@ qemuProcessResctrlCreate(virQEMUDriverPtr driver,
>> ret = 0;
>> cleanup:
>> virObjectUnref(caps);
>> +
>> + return ret;
>> +}
>> +
>> +
>> +static void
>> +qemuProcessKillPRDaemons(virDomainObjPtr vm)
>
> Daemon*s* is a misnomer
>
>> +{
>> + qemuDomainObjPrivatePtr priv = vm->privateData;
>> +
>> + if (priv->prPid == (pid_t) -1)
>> + return;
>> +
>> + virProcessKillPainfully(priv->prPid, true);
>> + priv->prPid = (pid_t) -1;
>> +}
>> +
>> +
>> +static int
>> +qemuProcessSetupOnePRDaemonHook(void *opaque)
>> +{
>> + virDomainObjPtr vm = opaque;
>> + size_t i, nfds = 0;
>> + int *fds = NULL;
>> + int ret = -1;
>> +
>> + if (virProcessGetNamespaces(vm->pid, &nfds, &fds) < 0)
>> + return ret;
>
> <sigh> another false positive for Coverity since it for some reason
> believes "virProcessGetNamespaces" allocates memory that is stored into
> "fds" when there's an error returned.
>
> Strangely, setting *nfdlist = 0 in the (ret < 0) part of cleanup: in
> virProcessGetNamespaces and going to cleanup here magically clears the
> error...
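The convention that silences this class of Coverity warning is to leave the out-parameters in a defined state on every error path, so a caller-side cleanup cannot free garbage. A sketch with a hypothetical helper (not the real virProcessGetNamespaces):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for virProcessGetNamespaces: on failure the
 * out-parameters are guaranteed to be (0, NULL), so the caller's
 * unconditional cleanup loop + VIR_FREE-style free() is always safe. */
static int get_namespace_fds(int want_failure, size_t *nfds, int **fds)
{
    *nfds = 0;
    *fds = NULL;            /* defined state before any early return */

    if (want_failure)
        return -1;          /* error: caller still sees NULL / 0 */

    *fds = calloc(3, sizeof(**fds));
    if (!*fds)
        return -1;
    (*fds)[0] = 10;         /* fake fd numbers for the sketch */
    (*fds)[1] = 11;
    (*fds)[2] = 12;
    *nfds = 3;
    return 0;
}
```

With that contract, the caller may `goto cleanup` on any failure and run the close/free loop unconditionally, which is exactly why setting `*nfdlist = 0` in the error branch "magically" clears the Coverity report.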
>
>> +
>> + if (nfds > 0 &&
>> + virProcessSetNamespaces(nfds, fds) < 0)
>> + goto cleanup;
>> +
>> + ret = 0;
>> + cleanup:
>> + for (i = 0; i < nfds; i++)
>> + VIR_FORCE_CLOSE(fds[i]);
>> + VIR_FREE(fds);
>> + return ret;
>> +}
>> +
>> +
>
> If we returned:
>
> -1 error, 0 started, and 1 unnecessary, then our caller could be a bit
> smarter about needing to run through the whole ndisks list.
This might help if there are thousands of disks, but since we're far
from that I think it's unnecessary. But I can change it if you want me
to :-)
You know what happens now that you've said that? Think about domain
stats and excessive disk counts... Sure, for a loop of < 50 it's
probably no big deal, and perhaps the norm is guests with fewer disks,
but then you run into the one that isn't and have to redo/rework things.
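The tri-state return John suggests could look roughly like this (illustrative names, not the patch's actual code): -1 for error, 0 for "helper started", 1 for "nothing to do", letting the caller stop scanning the disk list once the single helper is up.

```c
#include <assert.h>
#include <stddef.h>

struct disk { int managed_pr; };   /* simplified stand-in for the disk def */

/* -1 = error, 0 = helper started, 1 = nothing to do */
static int setup_one_pr_daemon(int *helper_running, const struct disk *d)
{
    if (*helper_running || !d->managed_pr)
        return 1;              /* helper exists or disk isn't managed */
    *helper_running = 1;       /* pretend we spawned the helper here */
    return 0;
}

static int setup_pr_daemons(const struct disk *disks, size_t ndisks)
{
    int helper_running = 0;
    size_t i;

    for (i = 0; i < ndisks; i++) {
        int rc = setup_one_pr_daemon(&helper_running, &disks[i]);
        if (rc < 0)
            return -1;
        if (rc == 0)
            break;             /* only one helper can exist: stop early */
    }
    return 0;
}
```

With a plain -1/0 return the caller cannot distinguish "started" from "skipped" and has to iterate all ndisks entries every time.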
>
>> +static int
>> +qemuProcessSetupOnePRDaemon(virDomainObjPtr vm,
>> + virDomainDiskDefPtr disk)
>> +{
>> + qemuDomainObjPrivatePtr priv = vm->privateData;
>> + virQEMUDriverPtr driver = priv->driver;
>> + virQEMUDriverConfigPtr cfg;
>> + qemuDomainStorageSourcePrivatePtr srcPriv;
>> + qemuDomainDiskPRDPtr prd;
>> + char *pidfile = NULL;
>> + pid_t cpid = -1;
>> + virCommandPtr cmd = NULL;
>> + virTimeBackOffVar timebackoff;
>> + const unsigned long long timeout = 500000; /* ms */
>> + int ret = -1;
>> +
>> + if (priv->prPid != (pid_t) -1 ||
>> + !virStoragePRDefIsManaged(disk->src->pr))
>> + return 0;
>
> Ah... so this is where we ensure there is only ever one pr-helper... and
> this !managed usage still doesn't help my earlier confusion.
>
> In one mode we have:
>
> -object pr-manager-helper,id=pr-helper-sdb,\
> path=/path/to/qemu-pr-helper.sock
>
> and the other we have:
>
> -object pr-manager-helper,id=pr-helper0,\
> path=/tmp/lib/domain--1-QEMUGuest1/pr-helper0.sock
>
> But for "everything" (or both) we still have the qemu-pr-helper process.
Question is, who starts the process. Let's forget about PR for a while
and look at <hostdev/>-s. They also have a @managed attribute. The
question there is who detaches the hostdev on domain startup. The
command line is generated the same in both cases, but if @managed =
'yes' then libvirt takes the extra step and detaches the hostdev from
the host. Otherwise it relies on the user (which is equal to admin for
the sake of this conversation) to have detached the hostdev themselves.
This is the bit I'm really struggling with - who starts the process. To
some degree, does it matter who or what the process is if all we're
providing is some functionality to either start and connect to, or just
connect to, some process via some socket? I think I noted before that
we're focused on pr-manager now, but what's next? Someone else will copy
that code in qemu and ask libvirt to support it.
> For both the alias/id is added to the -drive command line using
> "file.pr-manager=$alias".
Sure. We have to.
>
> In one mode we would start the qemu-pr-helper, but in the other we're
> not. So, if none of the lun's used the second mode, then what do they
> connect to?
To a socket that's created by a user-spawned qemu-pr-helper.
Alternatively, there was some discussion whether systemd's socket
activation should be implemented for qemu-pr-helper. In that case, it's
going to be systemd that starts the process once qemu tries to connect.
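(For reference, such socket activation would look roughly like the following pair of units. The unit names, paths, and the assumption that qemu-pr-helper can accept an inherited listening socket are purely illustrative; nothing like this exists at the time of this thread.)

```ini
# /etc/systemd/system/qemu-pr-helper.socket  (hypothetical)
[Socket]
ListenStream=/run/qemu-pr-helper.sock

[Install]
WantedBy=sockets.target

# /etc/systemd/system/qemu-pr-helper.service  (hypothetical)
[Service]
ExecStart=/usr/bin/qemu-pr-helper
```

systemd would own the socket and spawn the service on the first connection attempt from qemu, so neither libvirt nor the admin has to pre-start the helper.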
Oh sure, great, let's bring systemd into the mix and really confuse
me }-<... I'm glad you understand this - just look both ways before you
cross the street please, we don't need you getting hit by a tram!
John
>
> Furthermore, the managed mode helps decide which alias to use and of
> course which socket will be used. With the alias, I had the earlier
> question/confusion over how many objects would be created using the
> "other" mode in the above examples since "theoretically speaking"
of
> course the id= should be unique. Then the path is just a client path to
> the socket which in the former mode is user defined and the latter mode
> is static or the same for every lun using that mode. So can we have more
> than one lun using the same client socket path?
Theoretically we can. But why bother? We could try to find unique socket
paths and add each of them just once, but then we'd have to:
a) allocate a unique alias for each pr-manager-helper,
b) track a use count on hot(un-)plug.
IMO it only complicates things. But sure, in the long run we can switch
to that approach.
>
> I don't know if the above helps explain my confusion or makes it even
> harder to understand. I hope it helps.
>
>
>> +
>> + cfg = virQEMUDriverGetConfig(driver);
>> + srcPriv = QEMU_DOMAIN_STORAGE_SOURCE_PRIVATE(disk->src);
>> + prd = srcPriv->prd;
>> +
>> + if (!virFileIsExecutable(cfg->prHelperName)) {
>> + virReportSystemError(errno, _("'%s' is not a suitable pr helper"),
>> + cfg->prHelperName);
>> + goto cleanup;
>> + }
>> +
>> + if (!(pidfile = virPidFileBuildPath(cfg->stateDir, prd->alias)))
>> + goto cleanup;
>> +
>> + if (unlink(pidfile) < 0 &&
>> + errno != ENOENT) {
>> + virReportSystemError(errno,
>> + _("Cannot remove stale PID file %s"),
>> + pidfile);
>> + goto cleanup;
>> + }
>> +
>> + if (!(cmd = virCommandNewArgList(cfg->prHelperName,
>> + "-k", prd->path,
>> + NULL)))
>> + goto cleanup;
>> +
>> + virCommandDaemonize(cmd);
>> + virCommandSetPidFile(cmd, pidfile);
>> + virCommandSetPreExecHook(cmd, qemuProcessSetupOnePRDaemonHook, vm);
>> +
>> + if (virCommandRun(cmd, NULL) < 0)
>> + goto cleanup;
>> +
>> + if (virPidFileReadPath(pidfile, &cpid) < 0) {
>> + virReportError(VIR_ERR_INTERNAL_ERROR,
>> + _("pr helper %s didn't show up"), prd->alias);
>> + goto cleanup;
>> + }
>> +
>> + if (virTimeBackOffStart(&timebackoff, 1, timeout) < 0)
>> + goto cleanup;
>> + while (virTimeBackOffWait(&timebackoff)) {
>> + if (virFileExists(prd->path))
>> + break;
>> +
>> + if (virProcessKill(cpid, 0) == 0)
>> + continue;
>> +
>> + virReportSystemError(errno,
>> + _("pr helper %s died unexpectedly"), prd->alias);
>> + goto cleanup;
>> + }
>> +
>> + if (priv->cgroup &&
>> + virCgroupAddMachineTask(priv->cgroup, cpid) < 0)
>> + goto cleanup;
>> +
>> + if (qemuSecurityDomainSetPathLabel(driver->securityManager,
>> + vm->def, prd->path, true) < 0)
>> + goto cleanup;
>> +
>> + priv->prPid = cpid;
>> + ret = 0;
>> + cleanup:
>> + if (ret < 0) {
>> + virCommandAbort(cmd);
>> + virProcessKillPainfully(cpid, true);
>> + }
>> + virCommandFree(cmd);
>> + if (unlink(pidfile) < 0 &&
>
> I thought I pointed this out in the previous series, but Coverity gets
> grumpy here since @pidfile could be NULL if we jump to cleanup before
> it's created when virFileIsExecutable of prHelperName fails.
Ah okay.
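The guard Coverity wants is simply to not call unlink() on a pointer that may never have been allocated. A tiny sketch of the cleanup-path pattern (the helper name is illustrative, not from the patch):

```c
#include <errno.h>
#include <unistd.h>

/* Safe to call from a cleanup: path even when pidfile was never built
 * (e.g. we jumped to cleanup before virPidFileBuildPath ran). */
static int remove_pidfile(const char *pidfile)
{
    if (!pidfile)                        /* never allocated: no-op */
        return 0;
    if (unlink(pidfile) < 0 && errno != ENOENT)
        return -1;                       /* real failure, not "already gone" */
    return 0;
}
```

The same effect can be had inline with `if (pidfile && unlink(pidfile) < 0 && errno != ENOENT)`, which is probably what the final patch would do.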
Michal