On Wed, Nov 20, 2019 at 3:40 PM Christian Ehrhardt
<christian.ehrhardt@canonical.com> wrote:
>
> On Tue, Nov 19, 2019 at 10:12 PM Jamie Strandboge <jamie@canonical.com> wrote:
> >
> > On Wed, 16 Oct 2019, Christian Ehrhardt wrote:
> >
> > > There are currently broken use cases, e.g. snapshotting more than one
> > > disk at once like:
> > > $ virsh snapshot-create-as --domain eoan --disk-only --atomic
> > > --diskspec vda,snapshot=no --diskspec vdb,snapshot=no
> > > --diskspec vdc,file=/test/disk1.snapshot1.qcow,snapshot=external
> > > --diskspec vdd,file=/test/disk2.snapshot1.qcow,snapshot=external
> > > The command above will iterate from qemuDomainSnapshotCreateDiskActive
> > > and eventually add /test/disk1.snapshot1.qcow first (it appears in the
> > > rules), to then later add /test/disk2.snapshot1.qcow, throwing away the
> > > former rule while doing so and causing the snapshot to fail.
> > >
> > > All other calls to (re)load_profile already use append=true when adding
> > > rules; append=false is only used when restoring rules [1].
> > >
> > > Fix this by letting AppArmorSetSecurityImageLabel use append=true as
> > > well.
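> > >
> > > (Illustration, not part of the change: the append flag maps to which
> > > option virt-aa-helper is invoked with, roughly as below; the uuid is
> > > a placeholder.)
> > >
> > > # append=false: regenerate the rules from the definition plus just this file
> > > virt-aa-helper -r -u libvirt-<uuid> -f /test/disk2.snapshot1.qcow
> > > # append=true: keep the existing rules and add this file
> > > virt-aa-helper -r -u libvirt-<uuid> -F /test/disk1.snapshot1.qcow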
> > >
> > > Bugs:
> > > https://bugs.launchpad.net/libvirt/+bug/1845506
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1746684
> > >
> > > [1]: https://bugs.launchpad.net/libvirt/+bug/1845506/comments/13
> > >
> > > Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
> > > ---
> > >  src/security/security_apparmor.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/src/security/security_apparmor.c b/src/security/security_apparmor.c
> > > index 320d69e52a..4dd7ba20b4 100644
> > > --- a/src/security/security_apparmor.c
> > > +++ b/src/security/security_apparmor.c
> > > @@ -812,7 +812,7 @@ AppArmorSetSecurityImageLabel(virSecurityManagerPtr mgr,
> > >          return -1;
> > >      }
> > >
> > > -    return reload_profile(mgr, def, src->path, false);
> > > +    return reload_profile(mgr, def, src->path, true);
> >
> > src/security/security_manager.c shows that
> > virSecurityManagerSetImageLabel() is called to label 'a storage image
> > with the configured security label'. Based on your report, it seems that
> > in addition to the underlying disk (vda in this case), it is also called
> > for each additional disk (ie, vdc and vdd).
>
> Yes, in the "transaction" for e.g. a snapshot on multiple disks there
> will be calls for both disks before any of them becomes part of the
> VM definition. That is where the second call "removes" the first rule
> and then things fail.
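>
> For illustration (a sketch; the uuid in the path is a placeholder),
> the dynamically added rules live in the per-domain file maintained by
> virt-aa-helper, and without the fix the second call rewrites it so
> that only the latest path survives:
>
> $ cat /etc/apparmor.d/libvirt/libvirt-<uuid>.files
>   ...
>   "/test/disk1.snapshot1.qcow" rwk,  # present after the first call,
>                                      # dropped again by the second one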
>
> To be clear, I didn't add any "disk" in the broken use case; it adds
> further backing image chain elements to existing disks as part of the
> snapshot action. So if I snapshot vdb and vdc in one shot, it is called
> for each of those, adding the new paths for the respective backing
> chain as it grows.
>
> If you ever power cycle the guest, that will be part of the definition
> and virt-aa-helper resolves backing chains as needed.
>
> > While your patch to append
> > makes sense for this scenario, what happens if you update the vm
> > definition to use 'vde' instead of 'vda', is the vda access removed
> > and vde added with vdc and vdd remaining?
>
> Hmm - not sure what exactly you mean, but let's try to break it ...
>
> This splits into three major paths to consider:
>
> a) adding/removing disks while the guest is off
>    This isn't interesting, as that way the disk will (or won't) be part
>    of the definition when the guest starts. The guest will get initial
>    rules based on that definition on start, nothing special.
>
> b) adding/removing disks at runtime of a guest
>    b1) you can edit a live definition in XML, but that will only
>        add/modify the disk on the next start. Even if you then add
>        another disk via proper hot-add later, the first will not be
>        added, as it isn't really part of the active definition yet.
>    b2) you can hot add/remove a disk; those will be properly labelled
>        and added/removed via their labelling calls on that action
>        (see the sketch after this list)
>
> c) snapshots at runtime of the guest (that was the broken case)
>    As mentioned above, it will need to add new elements to the
>    backing chain.
>    c1) snapshotting one disk will work; the call to
>        AppArmorSetSecurityImageLabel will add the new rule needed
>    c2) snapshotting multiple disks at once will fail without the fix
>        here; the second call to AppArmorSetSecurityImageLabel will
>        revoke the former rule
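>
> A minimal sketch of such a b2 hot add/remove check (the extra image
> path is made up for the example, the uuid is a placeholder):
>
> $ virsh attach-disk eoan /var/lib/libvirt/images/extra.qcow2 vdd --live
> $ grep extra.qcow2 /etc/apparmor.d/libvirt/libvirt-<uuid>.files
>   "/var/lib/libvirt/images/extra.qcow2" rwk,
> $ virsh detach-disk eoan vdd --live
> $ grep extra.qcow2 /etc/apparmor.d/libvirt/libvirt-<uuid>.files
> (no output - the rule is gone again)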
>
> Sorry, I can't find the "update the vm definition to use 'vde'
> instead of 'vda'" case you meant in any of the paths a/b/c above.
> I'll try to reach you on IRC later on to sort this out ...
>
> After discussing on IRC (thanks) and understanding that it wasn't so
> much about the new change in this patch but a general "are rules
> revoked as expected", I came up with these tests:
>
> Setup:
> vda (root)
> vdb (helper disk, not used here)
> vdc /var/lib/uvtool/libvirt/images/focal-test2.qcow (present at guest start)
> vdd /var/lib/uvtool/libvirt/images/focal-test1.qcow (hot-added after boot)
>
> Then I remove one or the other and ensure that the rules are gone.
> Furthermore, I then call a snapshot on vda to ensure that it doesn't,
> e.g., add a rule back from some unexpected source (a command sketch
> follows the results below).
> - Initially there was no rule (obviously)
> - after guest start vdc had a rule
> - after attaching disk2 it has vdc+vdd rules
> - after detaching disk1 it has only the vdd rule
> - after detaching disk2 it had none of the rules
> - snapshot vda to a new path: the new path is added, no vdc/vdd
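>
> Each of these checks boils down to grepping the generated rules after
> the respective action, roughly like this (a sketch; guest name and
> uuid are placeholders):
>
> $ virsh snapshot-create-as --domain <guest> --disk-only \
>     --diskspec vda,file=/test/vda.snapshot1.qcow,snapshot=external
> $ grep vda.snapshot1 /etc/apparmor.d/libvirt/libvirt-<uuid>.files
>   "/test/vda.snapshot1.qcow" rwk,
>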
> Thereby the checks you wondered about are done; thanks for helping me
> to make sure this is fine.

Thanks! LGTM. Feel free to add some language to the commit message to
summarize your testing, but not blocking on that.
--
Jamie Strandboge