Do you have any comparison of IO performance on a thin pool vs. a qcow2
file on a filesystem?
In my case each VM would have its own thin volume; I just want to
overcommit disk space.
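
If you do not have numbers at hand, I can try to measure it myself.
This is only a rough sketch of what I would run (fio inside two
otherwise identical guests, one backed by a thin LV and one by a qcow2
file; /dev/vdb and the 60s runtime are just placeholders):

  fio --name=randwrite --filename=/dev/vdb --direct=1 \
      --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 \
      --numjobs=1 --time_based --runtime=60 --group_reporting
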
Regards,
Jan
On 2017-11-07 13:16 +0300, Vasiliy Tolstov wrote:
Please don't use LVM thin for VMs. At our hosting operation in Russia we
have 100-150 VPSs on each node with an LVM thin pool on SSD, and we ran
into locks, slowdowns and other bad things because of COW. After
switching to qcow2 files on a plain SSD ext4 filesystem we are happy =).
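
Roughly what our setup looks like now, just as a sketch (the image
path, guest name and size below are placeholders, not our real values):

# create a sparse qcow2 image on the ext4 filesystem
qemu-img create -f qcow2 /var/lib/libvirt/images/vm01.qcow2 100G
# attach it to the guest as a qcow2-backed virtio disk
virsh attach-disk vm01 /var/lib/libvirt/images/vm01.qcow2 vdb \
    --driver qemu --subdriver qcow2 --persistent
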
2017-11-04 23:21 GMT+03:00 Jan Hutař <jhutar(a)redhat.com>:
> Hello,
> as usual, I'm a few years behind the trends, so I have only recently
> learned about LVM thin volumes. I especially like that the volumes can
> be "sparse" - that you can have a 1TB thin volume on a 250GB VG/thin
> pool.
>
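> To be concrete, what I have in mind is something like this (the VG,
> pool and volume names and sizes are only an example, not my real
> setup):
>
> # lvcreate --type thin-pool -L 200G -n pool0 vg0
> # lvcreate -V 1T --thinpool vg0/pool0 -n big
>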
> Is it somehow possible to use that with libvirt?
>
> I have found this post from 2014:
>
>
> https://www.redhat.com/archives/libvirt-users/2014-August/msg00010.html
>
> which says you should be able to create a "sparse" volume with `virsh
> vol-create ...`, or that libvirt should be able to see thin volumes you
> create yourself, but neither of those works for me
> (libvirt-3.2.1-6.fc26.x86_64).
>
> This is how I try to create a new one:
>
> # vgs storage
>   VG      #PV #LV #SN Attr   VSize   VFree
>   storage   1   1   0 wz--n- 267.93g    0
> # lvs storage -a
>   LV              VG      Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   [lvol0_pmspare] storage ewi-------  68.00m
>   lvol1           storage twi-aotz-- 267.80g              0.00   0.44
>   [lvol1_tdata]   storage Twi-ao---- 267.80g
>   [lvol1_tmeta]   storage ewi-ao----  68.00m
> # virsh pool-dumpxml storage
> <pool type='logical'>
>   <name>storage</name>
>   <uuid>f523aed2-a7e4-4dc2-88db-0193a7337704</uuid>
>   <capacity unit='bytes'>287687311360</capacity>
>   <allocation unit='bytes'>287687311360</allocation>
>   <available unit='bytes'>0</available>
>   <source>
>     <device path='/dev/nvme0n1p3'/>
>     <name>storage</name>
>     <format type='lvm2'/>
>   </source>
>   <target>
>     <path>/dev/storage</path>
>   </target>
> </pool>
> # cat /tmp/big.xml
> <volume>
>   <name>big</name>
>   <capacity>1073741824</capacity>
>   <allocation>1048576</allocation>
>   <target>
>     <path>/dev/storage/big</path>
>   </target>
> </volume>
> # virsh vol-create storage /tmp/big.xml
> error: Failed to create vol from /tmp/big.xml
> error: internal error: Child process (/usr/sbin/lvcreate --name big -L
> 1024K --type snapshot --virtualsize 1048576K storage) unexpected exit
> status 5: Volume group "storage" has insufficient free space (0
> extents): 1 required.
>
> When I create a thin volume manually, I do not see it:
>
> # lvcreate -n big -V 500G --thinpool storage/lvol1
> Using default stripesize 64.00 KiB.
> WARNING: Sum of all thin volume sizes (500.00 GiB) exceeds the size of
> thin pool storage/lvol1 and the size of whole volume group (267.93 GiB)!
> For thin pool auto extension activation/thin_pool_autoextend_threshold
> should be below 100.
> Logical volume "big" created.
> # lvs storage -a
>   LV              VG      Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   big             storage Vwi-a-tz-- 500.00g lvol1        0.00
>   [lvol0_pmspare] storage ewi-------  68.00m
>   lvol1           storage twi-aotz-- 267.80g              0.00   0.45
>   [lvol1_tdata]   storage Twi-ao---- 267.80g
>   [lvol1_tmeta]   storage ewi-ao----  68.00m
> # virsh vol-list storage
> Name Path
> ------------------------------------------------------------------------------
>
>
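> Even though the pool does not list it, I assume I could still hand the
> thin LV to the guest directly as a block device, bypassing the storage
> pool (the guest name "vm1" below is just a placeholder):
>
> # virsh attach-disk vm1 /dev/storage/big vdb --sourcetype block --persistent
>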
> Do I understand the concept incorrectly, or is there something else to
> configure?
>
> In the end, I want to get maximum IO performance with the possibility
> to "overcommit disk space". I know I can use the "dir" storage type,
> but I thought there might be something faster?
>
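> For completeness, what I mean by the "dir" option is roughly this
> (the pool name and target path are just placeholders):
>
> # virsh pool-define-as images dir --target /var/lib/libvirt/images
> # virsh pool-start images
> # virsh vol-create-as images big.qcow2 1T --allocation 0 --format qcow2
>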
> Thank you very much for any response,
> Jan
>
>
>
> --
> Jan Hutar Systems Management QA
> jhutar(a)redhat.com Red Hat, Inc.
>
> _______________________________________________
> libvirt-users mailing list
> libvirt-users(a)redhat.com
> https://www.redhat.com/mailman/listinfo/libvirt-users
--
Vasiliy Tolstov,
e-mail: v.tolstov(a)selfip.ru
--
Jan Hutar Systems Management QA
jhutar(a)redhat.com Red Hat, Inc.