On Mon, Jun 17, 2019 at 10:14:37PM +0200, Ilias Stamatis wrote:
> On Mon, Jun 17, 2019 at 9:52 AM Erik Skultety <eskultet(a)redhat.com> wrote:
> >
> > On Fri, Jun 14, 2019 at 12:59:11PM +0200, Ilias Stamatis wrote:
> > > On Fri, Jun 14, 2019 at 10:07 AM Erik Skultety <eskultet(a)redhat.com> wrote:
> > > >
> > > > On Thu, Jun 13, 2019 at 02:20:22PM +0200, Ilias Stamatis wrote:
> > > > > On Thu, Jun 13, 2019 at 10:20 AM Erik Skultety <eskultet(a)redhat.com> wrote:
> > > > > >
> > > > > > On Tue, Jun 04, 2019 at 03:17:43PM +0200, Ilias Stamatis wrote:
> > > > > > > Only succeed when @pid_value is 1, since according to the docs this is
> > > > > >
> > > > > > Why do we need to restrict ourselves to @pid 1 in the test driver? This
> > > > > > restriction exists in LXC for a reason, but why in the test driver?
> > > > > > a) it's only a check with @pid not being actually used
> > > > > > b) each guest OS will have their own limit in /proc/sys/kernel/pid_max for the
> > > > > >    max number of PIDs, so restricting it from the top doesn't make sense
> > > > > > c) -@pid is used for process groups, so again, this will be handled within the
> > > > > >    guest OS in reality
> > > > >
> > > > > To my understanding, if there's no process with such a pid in the guest,
> > > > > this API will fail. If we succeed for any pid, that would mean that
> > > > > the test domain has 2^8 processes running and I wasn't sure if we
> > > > > wanted that. But yes, as I wrote as a "comment" in the initial patch,
> > > > > this could as well make sense within the test driver's scope so it's fine
> > > > > to remove the check.
> > > >
> > > > Well, 2^8 isn't that bad, is it? That's realistic, but long long is 64-bit,
> > > > so 2^63-1, that's a lot of PIDs, so you're right, it would feel weird. On the
> > > > other hand, looking at it from the perspective of an app developer, requiring
> > > > pid==1 is restrictive, if you allow any pid, you give them flexibility - PID
> > > > being filled from a dynamic data structure, so eventually, they'd just simply
> > > > switch the test driver for the real API, but that is my perspective, I don't
> > > > want to force it on upstream, we can wait for another opinion, one can say that
> > > > if they want to test with dynamic PIDs, use the real API.
> > > >
> > > > Erik
> > >
> > > Of course I meant 2^64, but I had 8 bytes in my mind so I got confused.
> > >
> > > This scenario you describe makes sense, even though currently only the
> > > LXC driver implements this API.
> >
> > Yes, and I sincerely doubt any other hypervisor would ever implement this, since
> > this should be completely in the guest OS' scope, e.g. in the case of QEMU, most
> > likely qemu-guest-agent would have to be involved, which means that good old ssh
> > will work just as nicely + you'd more probably want to terminate services by
> > their name rather than pid (and even more probably you want to terminate service
> > units, not individual pids).
> >
> > >
> > > So in conclusion I would say that the possible options are a) only
> > > succeed for pid 1, b) succeed for ANY pid, c) succeed for any pid up to
> > > a pid_limit.
> >
> > a) using this for the test driver is okay, because it doesn't do anything, but
> > then again, from a logical POV, it doesn't make any sense to me, init
> > ignores/masks almost all the signals, and even if it didn't, the user most
> > likely just panicked the guest's kernel, great, virDomainDestroy would have
> > served much better here.
> >
> > b) phrdina complained that we lose the error path if we succeed for every PID
> > and that's true, but it gives more flexibility
> >
> > c) I guess I'd personally prefer this one, but what should the arbitrary limit
> > be? (e.g. succeed for < 1024 and fail otherwise, dunno...)
> >
> > Whichever option we choose, we need to state it very clearly in the
> > documentation, so I'll leave the choice to you as long as there's a doc string
> > documenting the behaviour of the API in v2.
> >
> > Erik
>
> I'm up for option c with 1024 as a limit. Tbh, I think the specific
> limit value is not super important, nor should we spend much more
> time thinking about it.
> So I'll send a new patch with 1024 as the limit and an update of the docs.

I think that even this limit is pointless; it will most likely be used
only in tests to validate that you can integrate with libvirt and can
handle both cases when it fails or succeeds, and the specific PID is
not relevant. But feel free to implement the 1024 limit as well if it's
properly documented.
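
Just for illustration, a rough sketch of how such a check could look in the
test driver. This is only a sketch under my assumptions: the helpers used
(testDomObjFromDomain, virDomainObjCheckActive, virDomainObjEndAPI) follow the
usual patterns in src/test/test_driver.c, and the 1024 value is merely the
number discussed above, so treat none of it as final:

/* Sketch only: conceptually lives in src/test/test_driver.c, where the
 * helpers and error macros used below are already available. */
static int
testDomainSendProcessSignal(virDomainPtr dom,
                            long long pid_value,
                            unsigned int signum,
                            unsigned int flags)
{
    int ret = -1;
    virDomainObjPtr vm = NULL;

    virCheckFlags(0, -1);

    if (pid_value < 1 || pid_value > 1024) {
        /* Pretend only PIDs 1-1024 exist in the test domain (the limit
         * discussed above, purely illustrative). */
        virReportError(VIR_ERR_INVALID_ARG,
                       _("no process with pid %lld found in the test domain"),
                       pid_value);
        return -1;
    }

    if (signum >= VIR_DOMAIN_PROCESS_SIGNAL_LAST) {
        virReportError(VIR_ERR_INVALID_ARG,
                       _("signum value %u is out of range"), signum);
        return -1;
    }

    if (!(vm = testDomObjFromDomain(dom)))
        goto cleanup;

    if (virDomainObjCheckActive(vm) < 0)
        goto cleanup;

    /* There are no real processes to signal, so simply report success. */
    ret = 0;

 cleanup:
    virDomainObjEndAPI(&vm);
    return ret;
}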
Pavel