On Mon, Sep 13, 2010 at 03:49:38PM +0200, Saggi Mizrahi wrote:
> On Mon, 2010-09-13 at 14:29 +0100, Daniel P. Berrange wrote:
> > On Mon, Sep 13, 2010 at 03:20:13PM +0200, Saggi Mizrahi wrote:
> > > On Mon, 2010-09-13 at 13:35 +0100, Daniel P. Berrange wrote:
> > > > >
> > > > > Overall, this looks workable to me. As proposed, this assumes a 1:1
> > > > > relation between LockManager process and managed VMs. But I guess you
> > > > > can still have a central manager process that manages all the VMs, by
> > > > > having the lock manager plugin spawn a simple shim process that does
> > > > > all the communication with the central lock manager.
> > > >
> > > > I could have designed it such that it didn't assume the presence of an
> > > > angel process around each VM, but I think it is easier to be able to
> > > > presume that there is one. It can be an incredibly thin stub if desired,
> > > > so I don't think it'll be too onerous on implementations.
> > >
> > > We are looking into the possibility of not having a process manage a
> > > VM but rather having the sync_manager process register with a central
> > > daemon and exec into qemu (or anything else), so assuming there is a
> > > process per VM is essentially false. But the verb could be used for
> > > "unregistering" the current instance with the main manager, so the verb
> > > does have its use.
> > >
> > > Furthermore, even if we decide to leave the current 'sync_manager
> > > process per child process' system as is for now, the general direction
> > > is a central daemon per host managing all the leases and guarding
> > > all processes. So be sure to keep that in mind while assembling the API.
> >
> > Having a single daemon per host that exec's the VMs is explicitly *not*
> > something we intend to support, because the QEMU process needs to inherit
> > its process execution state from libvirtd. It is fundamental to the
> > security architecture that processes are completely isolated the moment
> > that libvirtd has spawned them. We don't want to offload all the security
> > driver setup onto a central lock manager daemon. Aside from this, we also
> > pass open file descriptors down from libvirtd to the QEMU process.
> My explanation might have been confusing or ill phrased, so I'll try again.
> The suggestion was: instead of libvirt running sync_manager, which would
> fork off and run qemu, libvirt would run a sync_manager wrapper that
> registers with the central daemon, waits for it to acquire the leases and
> then execs into qemu (in process). From that moment the central daemon
> monitors the process and, when it quits, frees its leases.
> This way we still keep all the context stuff from libvirt and have only
> one process managing the leases.
> But, as I said, this is only a suggestion and is still in very early
> stages. We might not implement it in the initial version and instead keep
> the current forking method.
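
If I've followed that correctly, the wrapper boils down to something like the
sketch below. To be clear, the socket path, the one-line "protocol" and the
daemon itself are all invented here; it is only meant to illustrate the
register / wait-for-lease / exec-in-place sequence you describe:

/* Purely illustrative wrapper: register a lease with a (hypothetical)
 * central daemon, block until it is granted, then exec into qemu
 * in-process so the security context, cgroup placement and any open
 * fds inherited from libvirtd are preserved. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char reply[64];
    int fd;

    if (argc < 3) {
        fprintf(stderr, "usage: wrapper LEASE-ID QEMU [ARGS...]\n");
        return 1;
    }

    /* 1. Register this VM's lease with the central daemon
     *    (socket path and message format are made up). */
    strncpy(addr.sun_path, "/var/run/lease-managerd.sock",
            sizeof(addr.sun_path) - 1);
    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("lease daemon");
        return 1;
    }
    dprintf(fd, "ACQUIRE %s\n", argv[1]);

    /* 2. Block until the daemon confirms the lease is held. */
    if (read(fd, reply, sizeof(reply)) <= 0 || strncmp(reply, "OK", 2) != 0) {
        fprintf(stderr, "lease not granted\n");
        return 1;
    }

    /* 3. exec into qemu in-process; the daemon keeps watching this pid
     *    and frees the lease when it exits. */
    execvp(argv[2], &argv[2]);
    perror("execvp");
    return 1;
}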
That is probably possible with the current security driver implementations,
but more generally I think it will still hit some trouble. Specifically,
one of the items on our todo list is a new security driver that makes use
of Linux container namespace functionality to isolate the VMs, so they
can't even see other resources / processes on the host. This may well
prevent the sync_manager wrapper talking to a central sync manager process.
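
For instance, if that wrapper ends up inside a private network namespace,
even an abstract AF_UNIX socket that a host-side daemon listens on is no
longer reachable. A contrived little demo of the effect (the daemon name is
made up, and it needs CAP_SYS_ADMIN to run):

/* Demo only: abstract AF_UNIX sockets are scoped to a network namespace,
 * so once a process is moved into its own namespace it can no longer
 * reach a daemon listening in the host namespace. */
#define _GNU_SOURCE
#include <sched.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int connect_abstract(const char *name)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0)
        return -1;
    /* Abstract socket: sun_path starts with a NUL byte */
    strncpy(addr.sun_path + 1, name, sizeof(addr.sun_path) - 2);
    if (connect(fd, (struct sockaddr *)&addr,
                offsetof(struct sockaddr_un, sun_path) + 1 + strlen(name)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    /* "lockmanagerd" is an invented name for a central lease daemon */
    printf("host namespace: %s\n",
           connect_abstract("lockmanagerd") >= 0 ? "reachable" : "not running");

    /* Enter a fresh network namespace, roughly what a container-style
     * security driver would do to the VM's process... */
    if (unshare(CLONE_NEWNET) < 0) {
        perror("unshare(CLONE_NEWNET)");
        return 1;
    }

    /* ...after which the same abstract socket is simply not visible. */
    printf("after unshare:  %s\n",
           connect_abstract("lockmanagerd") >= 0 ? "reachable" : "unreachable");
    return 0;
}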
The general rule we aim for is that once libvirtd has spawned a VM, it is
completely isolated, with the exception of any disks marked with <shareable/>.
In other words, any communications channels must be initiated/established
by the mgmt layer to the VM process, with nothing to be established in the
reverse direction.
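
eg the shape of what we do today is that the management layer creates the
channel itself and the spawned process merely inherits one end across exec,
roughly along these lines (the "vm-shim" program and the fd-number
convention are invented for the example):

/* Sketch of "channels are established by the management layer": the
 * parent builds the socketpair and the child only ever uses the end it
 * inherited, so the isolated process never has to connect out to anything. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    pid_t child;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }

    child = fork();
    if (child == 0) {
        /* Child: put the inherited end on fd 3 (an arbitrary convention
         * for this example) and exec the isolated program. */
        close(sv[0]);
        if (dup2(sv[1], 3) < 0)
            _exit(1);
        execlp("./vm-shim", "vm-shim", (char *)NULL);  /* invented binary */
        _exit(1);
    }

    /* Parent (the mgmt layer) keeps its end and drives the child over it. */
    close(sv[1]);
    dprintf(sv[0], "monitor protocol would run over this fd\n");
    close(sv[0]);
    waitpid(child, NULL, 0);
    return 0;
}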
Daniel
--
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
|: http://autobuild.org   -o-   http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|