On Thu, Jul 09, 2020 at 02:15:50PM -0600, Chris Murphy wrote:
On Thu, Jul 9, 2020 at 12:27 PM Daniel P. Berrangé
<berrange(a)redhat.com> wrote:
>
> On Thu, Jul 09, 2020 at 08:30:14AM -0600, Chris Murphy wrote:
> > It's generally recommended by upstream Btrfs development to set
> > 'nodatacow' using 'chattr +C' on the containing directory for VM
> > images. By setting it on the containing directory it means new VM
> > images inherit the attribute, including images copied to this
> > location.
> >
> > But 'nodatacow' also implies no compression and no data checksums.
> > Could there be use cases in which it's preferred to do compression and
> > checksumming on VM images? In that case the fragmentation problem can
> > be mitigated by periodic defragmentation.
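For concreteness, the quoted recommendation boils down to something like the
following sketch. The directory path here is made up, and chattr/lsattr come
from e2fsprogs; note the attribute only affects files created (or copied in)
after it is set:

```shell
# Demo of the attribute-inheritance behaviour described above.
# The path is hypothetical; outside btrfs the attribute is not applicable.
DIR=/tmp/vm-images-demo
mkdir -p "$DIR"
if [ "$(stat -f -c %T "$DIR")" = "btrfs" ]; then
    chattr +C "$DIR"       # files created/copied here inherit No_COW
    lsattr -d "$DIR"       # the 'C' flag should now appear
else
    echo "not on btrfs here, so nothing to set"
fi
```

For the compress-and-checksum alternative mentioned above, the periodic
defragmentation would be something like a scheduled
'btrfs filesystem defragment -r' run over the images directory.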
>
> Setting nodatacow makes sense particularly when using qcow2, since
> qcow2 is already potentially performing copy-on-write.
>
> Skipping compression and data checksums is reasonable in many cases,
> as the OS inside the VM is often going to be able to do this itself,
> so we don't want double checksums or double compression as it just
> wastes performance.
Good point. I am open to suggestion/recommendation by Felipe Borges
about GNOME Boxes' ability to do installs by injecting kickstarts. It
might sound nutty, but it's quite sane to consider having the guest do
something like plain XFS (no LVM). But all of my VMs are as you
describe: guest is btrfs with the checksums and compression, on a raw
file with chattr +C set.
GNOME Boxes / virt-install both use libosinfo for doing the automated
kickstart installs. In the case of Fedora that's driven by a template
common across all versions. We already had to cope with the switch
from ext3 to ext4 way back in Fedora 10. If new Fedora decides on
btrfs by default, we'll need to update the kickstart to follow that
recommendation too:
https://gitlab.com/libosinfo/osinfo-db/-/blob/master/data/install-script/...
> > Is this something libvirt can and should do? Possibly by default with
> > a way for users to opt out?
>
> The default /var/lib/libvirt/images directory is created by RPM
> at install time. Not sure if there's a way to get RPM to set
> attributes on the dir at this time?
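If RPM itself can't set attributes, a %post scriptlet could in principle,
since it is plain shell. This is a hypothetical sketch, not what the libvirt
package actually ships:

```shell
# Hypothetical %post scriptlet body; not current libvirt packaging.
DIR=/var/lib/libvirt/images
if [ -d "$DIR" ] && [ "$(stat -f -c %T "$DIR")" = "btrfs" ]; then
    # Best-effort: don't fail the install if the attribute can't be set.
    chattr +C "$DIR" 2>/dev/null || :
    status="nodatacow set on $DIR"
else
    status="skipped: $DIR absent or not btrfs"
fi
echo "$status"
```

The downside, as noted below, is that this only covers the RPM-installed
system, not every install scenario.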
For lives it's rsync today. I'm not certain whether rsync carries over
file attributes; tar does not. Also not sure whether squashfs and
unsquashfs do either. So this might mean an Anaconda post-install
script is a more reliable way to go, since Anaconda can support rsync
(Live) and rpm (netinstall, DVD) installs. And there is a proposal
dangling in the wind to use plain squashfs (no nested ext4 as today).
Hmm, tricky, so many different scenarios to consider - traditional
Anaconda install, install from live CD, plain RPM post-install,
and pre-built disk image.
> Libvirt's storage pool APIs have support for setting "nodatacow"
> on a per-file basis when creating the images. This was added for
> btrfs benefit, but this is opt in and I'm not sure any mgmt tools
> actually use this right now. So in practice it probably doesn't
> have any out of the box benefit.
>
> The storage pool APIs don't have any feature to set nodatacow
> for the directory as a whole, but probably we should add this.
systemd-journald checks whether /var/log/journal is on btrfs and sets
+C on it if so. This can be inhibited by
'touch /etc/tmpfiles.d/journal-nocow.conf'.
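A declarative variant of the same trick could in principle cover the libvirt
images directory too. This tmpfiles.d fragment is purely hypothetical, no
package ships it, but it uses the same attribute-setting line type as
journald's shipped journal-nocow.conf:

```
# /etc/tmpfiles.d/libvirt-images-nocow.conf (hypothetical)
# Type 'h' applies file attributes; the argument '+C' requests nodatacow.
# On a non-btrfs filesystem systemd-tmpfiles just logs a warning.
h /var/lib/libvirt/images - - - - +C
```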
> > Another option is to have the installer set 'chattr +C' in the short
> > term. But this doesn't help GNOME Boxes, since the user home isn't
> > created at installation time.
> >
> > Three advantages of libvirt awareness of Btrfs:
> >
> > (a) GNOME Boxes, Cockpit, and other users of libvirt can then use this
> > same mechanism and apply it to their VM image locations.
> >
> > (b) Create the containing directory as a subvolume instead of a directory
> > (1) btrfs snapshots are not recursive, therefore making this a
> > subvolume would prevent it from being snapshotted, and thus prevent
> > (temporarily) resuming datacow.
> > (2) in heavy workloads there can be lock contention on file
> > btrees; a subvolume is a dedicated file btree, so this would reduce
> > the tendency for lock contention in heavy workloads (this is not likely
> > desktop/laptop use case)
>
> Being able to create subvolumes sounds like a reasonable idea. We already
> have a ZFS specific storage driver that can do the ZFS equivalent.
>
> Again though we'll also need mgmt tools modified to take advantage of
> this. Not sure how we would make this all work out of the box, with
> the way we let RPM pre-create /var/lib/libvirt/images, as we'd need
> different behaviour depending on what filesystem you install the RPM
> onto.
It seems reasonable to me that libvirtd can "own"
/var/lib/libvirt/images and make these decisions; i.e. if it's empty
and on btrfs, then delete it and recreate it as a subvolume, and also
chattr +C.
Deleting & recreating as a subvolume is a bit more adventurous than
I would like to be for something done transparently to the user.
I think we would need that to be an explicit decision somewhere, tied
to the libvirt pool build APIs. Perhaps virt-install/virt-manager
can do this though.
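If a tool did make that explicit decision, the underlying operation might
look roughly like the shell sketch below. The path, the guard conditions,
and the idea of doing this at pool-build time are all assumptions, not
existing libvirt behaviour:

```shell
# Hypothetical, explicit admin/tool action; libvirt does not do this today.
POOL=/var/lib/libvirt/images
if [ -d "$POOL" ] && [ -z "$(ls -A "$POOL")" ] && \
   [ "$(stat -f -c %T "$POOL")" = "btrfs" ]; then
    rmdir "$POOL"                     # only safe because the pool is empty
    btrfs subvolume create "$POOL"    # dedicated file btree; parent snapshots skip it
    chattr +C "$POOL"                 # new images inherit nodatacow
    result="converted $POOL to a nodatacow subvolume"
else
    result="skipped: $POOL missing, non-empty, or not btrfs"
fi
echo "$result"
```

The emptiness check matters: an existing image file can't gain nodatacow
retroactively, since +C only takes effect for files created after it is set.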
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|