I do not think so. I have asked the Ceph developers, but got no answer.
Here is the exact info for my storage. Ceph's data dir is in /home:
1. df -lh (on ceph1)
/dev/mapper/vg_ceph1-lv_home
143G 23G 114G 17% /home
2. df -lh (on ceph2)
/dev/mapper/vg_ceph2-lv_home
144G 23G 114G 17% /home
23G + 23G = 46G of actual usage, so an Allocation of 489G is impossible.
'rados df -p cloud' does give the right stats.
Referring to its implementation may be the right approach.
2012-11-01
libvirt
From: Wido den Hollander
Sent: 2012-10-29 22:40
Subject: Re: [libvirt] libvirt can not get right stats of a rbd pool
To: "Daniel P. Berrange"<berrange(a)redhat.com>
Cc: "yue"<libvirt@163.com>,"libvirt"<libvir-list(a)redhat.com>
On 10/29/2012 03:33 PM, Daniel P. Berrange wrote:
On Fri, Oct 26, 2012 at 11:04:05AM +0800, yue wrote:
> Allocation exceed Capacity ,but Available is not 0.
>
> #virsh pool-info 2361a6d4-0edc-3534-87ae-e7ee09199921
> Name: 2361a6d4-0edc-3534-87ae-e7ee09199921
> UUID: 2361a6d4-0edc-3534-87ae-e7ee09199921
> State: running
> Persistent: yes
> Autostart: no
> Capacity: 285.57 GiB
> Allocation: 489.89 GiB
> Available: 230.59 GiB
Hmm, these values do look a little bit suspect, but I don't know
enough about RBD to suggest what might be going wrong. I'm copying
Wido who wrote this code originally & thus might have an idea.
I think I know where this is coming from, a little background about RBD.
RBD is a disk device striped over 4MB RADOS objects inside a Ceph
cluster. RBD devices are sparse, which means that (RADOS) objects only get
created when a write comes in.
When a read comes in for a non-existent object, 4MB of zeroes is returned.
However, when you do: $ rbd info disk1
you will see that the image COULD be 100GB, but that doesn't mean it
actually occupies 100GB of disk space.
The problem is that you can't (at this point) find out how much space a
RBD device actually occupies. Yes, it can be done, but that should not
be done in the libvirt driver and it is pretty heavy for the Ceph cluster.
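One way it *can* be done (outside libvirt, and at some cost to the cluster) is to walk the image's allocated extents with `rbd diff` and sum their lengths. A minimal sketch, assuming `rbd diff <image> --format json` emits a list of extents with `offset`, `length`, and `exists` fields (the image name `disk1` and the sample data below are illustrative, not from the thread):

```python
import json

def used_bytes(diff_json: str) -> int:
    """Sum extent lengths reported by `rbd diff --format json` to
    estimate how much data an image actually occupies (before
    replication is taken into account)."""
    extents = json.loads(diff_json)
    return sum(e["length"] for e in extents if e.get("exists") != "false")

# Assumed output shape from: rbd diff disk1 --format json
sample = ('[{"offset": 0, "length": 4194304, "exists": "true"},'
          ' {"offset": 8388608, "length": 4194304, "exists": "true"}]')

print(used_bytes(sample) / (1024 * 1024), "MiB used")  # two 4MB objects -> 8.0
```

This is exactly the kind of per-image scan Wido describes as too heavy to run from the libvirt driver: it touches every allocated object of every image in the pool.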
Also, a RBD device of 100GB could take up 300GB of space when your
replication is set to 3x.
What you are seeing there is that you over provisioned your Ceph cluster
by creating images which exceed 285GB in total.
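The arithmetic behind that, using the numbers reported earlier in the thread (the 3x replication figure is the example from above, not necessarily this cluster's actual setting):

```python
# Numbers from the virsh pool-info output in this thread:
capacity_gib = 285.57     # reported Capacity of the pool
provisioned_gib = 489.89  # sum of all image sizes => reported Allocation
replication = 3           # example replication factor (assumed, not confirmed)

# Because images are sparse (thin-provisioned), the sum of image sizes
# can legitimately exceed the pool capacity:
print(provisioned_gib > capacity_gib)  # True: the pool is over-provisioned

# Worst case, if every image were fully written, raw usage with
# replication would be:
print(round(provisioned_gib * replication, 2))  # 1469.67 GiB of raw space
```

So the "Allocation > Capacity" output is not a stats bug per se: Allocation reflects provisioned size, while Capacity and Available reflect what the cluster physically has.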
Wido
Daniel