On 8/16/24 17:00, Peter Krempa wrote:
On Wed, Aug 14, 2024 at 15:28:57 +0200, Denis V. Lunev wrote:
> On 8/12/24 16:46, Peter Krempa wrote:
>> On Mon, Aug 12, 2024 at 12:04:01 +0200, Denis V. Lunev wrote:
>>> On 8/12/24 10:36, Peter Krempa wrote:
>>>> On Mon, Aug 12, 2024 at 09:26:08 +0200, Peter Krempa wrote:
>>>>> On Sun, Aug 11, 2024 at 17:34:45 +0300, Nikolai Barybin via Devel wrote:
>>>>>> There are use cases when existing disks (i.e. LVM) are wanted
>>>>>> to be used with advanced features. For this purpose QEMU offers
>>>>>> the data-file feature for qcow2 files: metadata is kept in the qcow2
>>>>>> file as usual, but guest data is written to an external file.
>>>>>> These patches enable support for this feature in libvirt.
>>>>> So this feature was once attempted to be added, but it was never
>>>>> finished and the committed bits were eventually reverted. (I've
>>>>> purged my local archive so I don't have the link handy, but I can
>>>>> look if you want the links to the old posting.)
>>>>>
>>>>> It was deemed that this doesn't really add any performance benefit
>>>>> over storing the data inside the qcow2 itself. The qcow2 with data
>>>>> can be stored inside an LV, or any other block device for that
>>>>> matter, and thus can provide all the features that are necessary.
>>>>> The data-file feature also makes the management of the metadata and
>>>>> data much more complex, for a very bad trade-off. At this point,
>>>>> with 'qemu-img measure' it's easy to query the necessary size to
>>>>> have a fully allocated qcow2 inside the block device.
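For reference, the sizing step mentioned above can be sketched like this (the virtual size is illustrative; the reported "fully allocated size" is the worst-case space the block device must hold):

```shell
# Ask qemu-img how much space a fully allocated 100G qcow2 would need;
# size the LV / block device to the reported "fully allocated size".
qemu-img measure -O qcow2 --size 100G
```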
>>>>>
>>>>> Based on the history of this, I'd like to ask you to summarize the
>>>>> justifications and reasons for adding this feature before continuing.
>>>> Based on the current state of the series and what would be required to
>>>> make it viable to be accepted, I very strongly suggest re-thinking whether
>>>> you really need this feature, especially given the caveats above.
>>>>
>>> Let me clarify a bit.
>>>
>>> The QCOW2 data-file mode uses QCOW2 purely as metadata storage for an
>>> ordinary block device. This is a feature of QEMU, and it would be
>>> quite natural to have a representation of it in libvirt, as without
>>> libvirt's help QEMU could not even start with such a configuration
>>> due to namespaces.
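For concreteness, a qcow2 file with an external data file can be created like this (all paths are illustrative; in practice the data file would be an LV such as /dev/vg0/vm0):

```shell
# qcow2 metadata lives in the small .qcow2 file; guest data goes to the
# external raw file named by data_file. data_file_raw=on keeps the data
# file readable as a plain raw image. (Paths are illustrative.)
qemu-img create -f qcow2 \
    -o data_file=/tmp/vm0-data.raw,data_file_raw=on \
    /tmp/vm0-meta.qcow2 1G
qemu-img info /tmp/vm0-meta.qcow2    # lists the data file reference
```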
>>>
>>> LVM or not LVM: how is it better in comparison with plain QCOW2?
>> Historically, when this was considered for the incremental backup
>> feature in oVirt, a similar set of advantages was picked as the
>> justification. Later on, after discussing it for a bit, it became
>> obvious that the advantages are not great enough to justify the extra
>> effort:
>>
>> - extra management complexity (need to carry over the qcow2 as well as
>>   the data file)
>> - possibility to desynchronize the state (a write into the 'data_file'
>>   invalidates the qcow2 "overlay") without being able to see that it
>>   was desync'd (writing into the data file doesn't touch the overlay,
>>   so the qcow2 driver doesn't see that)
>> - basically no performance benefit on top of qcow2
>> - (perhaps others on the oVirt side ... it was a long time ago so I
>>   don't remember any more)
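The desynchronization point can be sketched in miniature (illustrative paths, with a plain file standing in for the LV):

```shell
# Writing to the data file behind qemu's back changes guest-visible data,
# yet a check of the qcow2 metadata file cannot notice it.
qemu-img create -f qcow2 \
    -o data_file=/tmp/lv-data.raw,data_file_raw=on /tmp/lv-meta.qcow2 1G
dd if=/dev/urandom of=/tmp/lv-data.raw bs=1M count=1 conv=notrunc
qemu-img check /tmp/lv-meta.qcow2    # still reports no errors
```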
> Yes. And we will definitely have this extra complexity along with the
> extra functionality. Right now, for our product, we have to support
> backups of VM data residing on LVM volumes. This has shipped in
> production, and my options are to keep this downstream only or to
> submit it upstream.
Fair enough. As long as you are willing to properly implement data-file
support, I'm fine with that. I just wanted to note that one prior
attempt was deemed not worth it and abandoned, so that you can
understand what you are taking on.
> The problem is that if we say that libvirt is not going
> this way, we should clearly indicate in:
> * the QCOW2 documentation
> * the qemu-img man page
> that the option of using a data file for VM metadata is deprecated
> and will not get further development. That would at least be fair.
I have no idea how the qemu project approaches this or what they think
about the status of data files.
For libvirt, if qemu supports it, it's fair game to add the feature if
somebody is willing to spend the effort.
> We made the decision that this scenario should be supported,
> based on the availability of this option and its presence in
> the public docs.
>
>
>>> First of all, there are historical setups where the customer
>>> uses LVM for virtual machines and does not want to reformat
>>> his hard disks to a file system. This makes sense as we
>> Yes, such setups would not be possible. Users would need to convert to
>> qcow2, but that can be done transparently to the VM (it briefly
>> requires twice the storage, though).
> That is a BIG problem. Customers do not want to change
> their disk layout. Each time we try to force them, we get
> problems. Real big problems.
>
>
>>> are avoiding two levels of metadata, i.e. QCOW2 metadata over
>>> local FS metadata. This makes the setup a little bit more
>> As stated above, you don't really need to use a filesystem. You can
>> make a block device (whether a whole disk, a partition, or an LV) into
>> a qcow2 image simply by formatting it with qemu-img.
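For example (the LV path is illustrative; a regular file behaves the same way for demonstration):

```shell
# Turn an existing block device into a qcow2 image directly, with no
# filesystem in between. '/dev/vg0/vm0' is an illustrative LV path.
qemu-img create -f qcow2 /dev/vg0/vm0 100G
qemu-img info /dev/vg0/vm0    # file format: qcow2
```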
> We use LVM for big data (TBs in size) and QCOW2 for metadata,
> namely CBT. This is how libvirt currently distinguishes EFI QCOW2
> from disk QCOW2.
I don't quite follow what you mean here.
>>> reliable. It should also be noted that QCOW2 in very specific
>>> setups is at least 2 times slower (when the L2 cache does not fit
>>> into the dedicated amount of RAM while the disk itself is
>>> quite big; I would admit that this problem can be seen even
>>> for not-so-big 1T disks).
>> Doesn't a QCOW2 with a 'data file' behave the same as a fully-allocated
>> qcow2? I don't really see how this is more reliable/performant than a
>> plain fully-allocated qcow2.
> No real disk data resides in the QCOW2. It is metadata storage only:
> CBT, plus the memory state in the case of snapshots.
>
>
>>> On top of that, LVM is quite useful once we start talking about
>>> clustered LVM setups. Clustered LVM is a good alternative
>>> to Ceph, at least for some users.
>>>
>>> Thus this data-file setup is a way to provide backups, VM
>>> state snapshots and other features.
>> Backups (changed block tracking) are the only thing you'd gain with
>> this. Snapshots and other features are already possible with external
>> snapshots.
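For instance, an external disk-only snapshot needs only virsh (the domain name and overlay path are illustrative):

```shell
# Create an external disk-only snapshot: guest writes from now on go
# into the new overlay while the original image becomes its read-only
# backing file.
virsh snapshot-create-as demo-vm snap1 --disk-only \
    --diskspec vda,file=/var/lib/libvirt/images/demo-overlay.qcow2
```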
>>
>> While based on the above I don't really see the need for the
>> 'data_file' qcow2 feature, especially given the complexity of adding
>> it properly, I'm not opposed as long as it is implemented properly.
>>
>> I suggest having a look at Cole's patches from the last attempt as they
>> were far more complete than this posting.
>>
> External snapshots are good, but if the customer wants internal
> ones with the memory state kept, data files would be useful.
Well, that is fair enough I'd say, but ... it throws your argument
about data files being a magical fix for the qcow2 L2-table performance
issues completely out of the window: if you take an internal snapshot,
you will definitely store data, not only metadata, in the qcow2 file,
because the blocks which were actually changed need to be overlaid
somewhere to allow going back.
> We will take a look at Cole's patches; maybe that will help.
> Anyway, if the denial is firm, we should clearly indicate
> in QEMU that this option is deprecated.
Once again, I'm not saying I'm against this feature. As to whether the
qcow2 data-file feature is considered supported by qemu and whether it
properly works with internal snapshots, I'll have to refer you to the
qemu mailing list, as I have no idea and honestly don't care about it.
I care only that the implementation is done properly and doesn't break
in the future, when it's very likely that I'll be the one fixing it.
OK. In general, we will work on this more carefully, looking
into the previous work, and will come back with something better.

Thank you for pointing this out,
Den