Hard-disk via virtio-blk under Windows (discard_granularity=0)

Hi,

A few years ago the virtio-blk device showed up as a hard disk under Windows. In recent years the driver changed so that the device shows up as a thin-provisioned disk. The change is good for SSDs, but not so good for raw hard disks.

Under Windows Server 2022 the default virtio-blk situation is quite bad: SSD TRIM is very slow, and defragmenting a bigger volume such as a 1TB hard disk always fails with "memory not enough", even when the volume is empty.

I found discussions about changing "discard_granularity" to make TRIM happy again, and libvirt supports syntax like this:

  <blockio discard_granularity='2097152'/>

I also found that if I set "discard_granularity" to zero, Windows recognizes the device as a "traditional hard drive" again and won't issue unnecessary TRIM to it. I have wanted to do this for years, but couldn't find a way to set it up like virtio-scsi's rotational parameter.

The sad part is that if I set it up under RHEL 9.4 with libvirt 10.0 like this:

  <blockio discard_granularity='0'/>

the line just disappears when I close "virsh edit", so I can only use the more complex "<qemu:override>" format to set "discard_granularity='0'" (a sketch of that workaround follows below).

I wonder whether libvirt could be changed to accept "discard_granularity='0'" so that the traditional hard disk can be recognized under Windows again. Or is there a better way to distinguish hard disk/SSD/thin disk for virtio-blk now?
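Here is a rough sketch of the override I mean; the "ua-disk0" alias and the image path are just placeholders, and the qemu XML namespace has to be declared on the domain element:

  <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    ...
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/disk0.img'/>
      <target dev='vda' bus='virtio'/>
      <!-- "ua-disk0" is a placeholder user alias; the override below matches on it -->
      <alias name='ua-disk0'/>
    </disk>
    ...
    <qemu:override>
      <qemu:device alias='ua-disk0'>
        <qemu:frontend>
          <qemu:property name='discard_granularity' type='unsigned' value='0'/>
        </qemu:frontend>
      </qemu:device>
    </qemu:override>
  </domain>

Regards,
tbskyd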

On Thu, Nov 14, 2024 at 01:02:02AM +0800, d tbsky wrote:
> Hi: a few years ago the virtio-blk device showed up as a hard disk under Windows. In recent years the driver changed so that the device shows up as a thin-provisioned disk. The change is good for SSDs, but not so good for raw hard disks.
>
> Under Windows Server 2022 the default virtio-blk situation is quite bad: SSD TRIM is very slow, and defragmenting a bigger volume such as a 1TB hard disk always fails with "memory not enough", even when the volume is empty.
>
> I found discussions about changing "discard_granularity" to make TRIM happy again, and libvirt supports syntax like this:
>
>   <blockio discard_granularity='2097152'/>
>
> I also found that if I set "discard_granularity" to zero, Windows recognizes the device as a "traditional hard drive" again and won't issue unnecessary TRIM to it. I have wanted to do this for years, but couldn't find a way to set it up like virtio-scsi's rotational parameter.
>
> The sad part is that if I set it up under RHEL 9.4 with libvirt 10.0 like this:
>
>   <blockio discard_granularity='0'/>
>
> the line just disappears when I close "virsh edit"
That is because we treat '0' as the default. Unfortunately that value does not mean the same thing in QEMU as it does for us: QEMU defaults to the logical block size, so discard_granularity=0 actually changes the behaviour.
> so I can only use the more complex "<qemu:override>" format to set "discard_granularity='0'"
>
> I wonder whether libvirt could be changed to accept "discard_granularity='0'" so that the traditional hard disk can be recognized under Windows again.
That should be possible, yes. Given the behaviour described above it should be done (although it feels like there is something else worth fixing here, but I don't know what and where). Could you file an upstream (or downstream) issue so that we do not lose track of this?

Upstream: https://gitlab.com/libvirt/libvirt/-/issues/new
Downstream: https://issues.redhat.com/secure/CreateIssue!default.jspa
> or is there a better way to distinguish hard disk/SSD/thin disk for virtio-blk now?
Depending on whether the time is being spent in the guest or in the host, you could theoretically try setting <driver name='qemu' discard='ignore'/>, but I guess that would not help. Other than that I don't know of any other option.
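Roughly something like this in the disk definition (the image path here is just a placeholder):

  <disk type='file' device='disk'>
    <!-- discard='ignore' tells QEMU to drop discard requests coming from the guest -->
    <driver name='qemu' type='raw' discard='ignore'/>
    <source file='/var/lib/libvirt/images/disk0.img'/>
    <target dev='vda' bus='virtio'/>
  </disk>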
> Regards,
> tbskyd

Martin Kletzander <mkletzan@redhat.com> wrote:
> > I wonder whether libvirt could be changed to accept "discard_granularity='0'" so that the traditional hard disk can be recognized under Windows again.
>
> That should be possible, yes. Given the behaviour described above it should be done (although it feels like there is something else worth fixing here, but I don't know what and where).
Thanks a lot for the good news. That would make it a lot easier to set up.
> Could you file an upstream (or downstream) issue so that we do not lose track of this?
>
> Upstream: https://gitlab.com/libvirt/libvirt/-/issues/new
> Downstream: https://issues.redhat.com/secure/CreateIssue!default.jspa
I will do that ASAP.
> Depending on whether the time is being spent in the guest or in the host, you could theoretically try setting <driver name='qemu' discard='ignore'/>, but I guess that would not help.
No, it won't help; I had tried that before. I tried many parameters and also tried to set the "media type" manually in Windows (that function seems to be designed only for storage pools, not for normal devices). "discard_granularity=0" is the first parameter that works, and I am glad that it does. Otherwise I could only stay with very old drivers or replace virtio-blk with virtio-scsi.

Regards,
tbskyd

On Sat, Nov 16, 2024 at 12:57:33AM +0800, d tbsky wrote:
> > Depending on whether the time is being spent in the guest or in the host, you could theoretically try setting <driver name='qemu' discard='ignore'/>, but I guess that would not help.
>
> No, it won't help; I had tried that before. I tried many parameters and also tried to set the "media type" manually in Windows (that function seems to be designed only for storage pools, not for normal devices). "discard_granularity=0" is the first parameter that works, and I am glad that it does. Otherwise I could only stay with very old drivers or replace virtio-blk with virtio-scsi.
Based on your description of the situation, it honestly sounds like setting discard_granularity=0 is a workaround, and what we should really have is a supported way to control whether the device shows up as an HDD or an SSD. I'm not sure whether this would have to be an option passed to the QEMU virtio-blk device or some tunable in the driver.

I'd also be surprised if this only affected Windows. Wouldn't Linux guests likely see a similar change in how the device is presented?
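For reference, the virtio-scsi knob mentioned at the start of the thread is the rotation_rate attribute on the disk's <target> element. A minimal sketch, with a placeholder image path, assuming the disk sits on a virtio-scsi controller:

  <disk type='file' device='disk'>
    <driver name='qemu' type='raw'/>
    <source file='/var/lib/libvirt/images/disk0.img'/>
    <!-- rotation_rate='1' reports non-rotational storage (SSD);
         an RPM value such as 7200 reports a spinning disk -->
    <target dev='sda' bus='scsi' rotation_rate='7200'/>
  </disk>

--
Andrea Bolognani / Red Hat / Virtualization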

Andrea Bolognani <abologna@redhat.com> wrote:
> I'd also be surprised if this only affected Windows. Wouldn't Linux guests likely see a similar change in how the device is presented?
I don't understand all of the impact under Windows, but defrag is the obvious part. By default Windows does disk optimization weekly. TRIM was fast when I was using Windows Server 2019 with the virtio-blk drivers of that time, and you could manually run a traditional defrag on the device, although it would also do an unnecessary TRIM after the defrag. But recent Windows Server 2019/2022 with a recent virtio-blk driver has very slow TRIM, and defragmenting a bigger disk such as a 1TB thin device just fails with "memory not enough" (but NTFS really needs defrag). I don't know what changed the behavior, or when. "discard_granularity" seems to be the savior in both cases.

When I use Linux I don't care, since it won't TRIM automatically: I need to mount with options or run fstrim on the device, and there is no need to defrag a hard disk. Since there is no automatic part, I don't know whether distinguishing HDD/SSD/thin device is useful under Linux.

On Sat, Nov 16, 2024 at 02:28:24AM +0800, d tbsky wrote:
> Andrea Bolognani <abologna@redhat.com> wrote:
> > I'd also be surprised if this only affected Windows. Wouldn't Linux guests likely see a similar change in how the device is presented?
>
> I don't understand all of the impact under Windows, but defrag is the obvious part. By default Windows does disk optimization weekly. TRIM was fast when I was using Windows Server 2019 with the virtio-blk drivers of that time, and you could manually run a traditional defrag on the device, although it would also do an unnecessary TRIM after the defrag. But recent Windows Server 2019/2022 with a recent virtio-blk driver has very slow TRIM, and defragmenting a bigger disk such as a 1TB thin device just fails with "memory not enough" (but NTFS really needs defrag). I don't know what changed the behavior, or when. "discard_granularity" seems to be the savior in both cases.
>
> When I use Linux I don't care, since it won't TRIM automatically: I need to mount with options or run fstrim on the device, and there is no need to defrag a hard disk. Since there is no automatic part, I don't know whether distinguishing HDD/SSD/thin device is useful under Linux.
At least on Fedora, TRIM should be performed periodically by default:

https://fedoraproject.org/wiki/Changes/EnableFSTrimTimer

Probably this doesn't hit the same awful performance issues as Windows, but it might still not make sense to do it at all if the underlying storage is not flash-based. Or maybe it does! I'm far from an expert when it comes to storage :)

--
Andrea Bolognani / Red Hat / Virtualization

On Fri, Nov 15, 2024 at 02:17:49PM -0500, Andrea Bolognani wrote:
> On Sat, Nov 16, 2024 at 02:28:24AM +0800, d tbsky wrote:
> > Andrea Bolognani <abologna@redhat.com> wrote:
> > > I'd also be surprised if this only affected Windows. Wouldn't Linux guests likely see a similar change in how the device is presented?
> >
> > I don't understand all of the impact under Windows, but defrag is the obvious part. By default Windows does disk optimization weekly. TRIM was fast when I was using Windows Server 2019 with the virtio-blk drivers of that time, and you could manually run a traditional defrag on the device, although it would also do an unnecessary TRIM after the defrag. But recent Windows Server 2019/2022 with a recent virtio-blk driver has very slow TRIM, and defragmenting a bigger disk such as a 1TB thin device just fails with "memory not enough" (but NTFS really needs defrag). I don't know
This sounds like a virtio driver issue.
> > what changed the behavior, or when. "discard_granularity" seems to be the savior in both cases.
> >
> > When I use Linux I don't care, since it won't TRIM automatically: I need to mount with options or run fstrim on the device, and there is no need to defrag a hard disk. Since there is no automatic part, I don't know whether distinguishing HDD/SSD/thin device is useful under Linux.
> At least on Fedora, TRIM should be performed periodically by default:
>
> https://fedoraproject.org/wiki/Changes/EnableFSTrimTimer
>
> Probably this doesn't hit the same awful performance issues as Windows, but it might still not make sense to do it at all if the underlying storage is not flash-based.
Which would make sense if there are different drivers involved. Just a thought, though.
> Or maybe it does! I'm far from an expert when it comes to storage :)
>
> --
> Andrea Bolognani / Red Hat / Virtualization