Hi,
take a look with:
rados df
rbd -p <pool> ls
and with the -l option for long output, e.g.:
rbd -p rbd ls -l
NAME            SIZE PARENT FMT PROT LOCK
vm-127-disk-1 35000M          2
vm-131-disk-1  8192M          2
vm-135-disk-1  8192M          2
...
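
ceph df is also worth a look; it shows per-pool usage next to the raw cluster totals (just a sketch, the exact columns depend on your release):

ceph df

For what it's worth, your numbers below already add up: roughly 1001G used on each of the 109 OSD filesystems comes to about 106 TB, and as far as I know the "used" value in ceph -s is simply the sum of what df reports on the OSD mounts (journals, filesystem overhead, any pre-existing files), not only RADOS object data.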
Udo
On 19.06.2014 09:42, wsnote wrote:
> Lewis, thanks very much!
> With your suggestion, I have installed ceph successfully and started
> it normally.
> Now I have another question: when I start the ceph cluster, it already
> shows 106 TB used.
> What is taking up so much space?
>
> command: ceph -s
> [root@yun4 ~]# ceph -s
> cluster cbf88224-1c9f-4d90-9e78-ef72d63ddce6
> health HEALTH_OK
> monmap e1: 10 mons at
> {1=222.186.55.4:6789/0,10=222.186.55.13:6789/0,2=222.186.55.5:6789/0,3=222.186.55.6:6789/0,4=222.186.55.7:6789/0,5=222.186.55.8:6789/0,6=222.186.55.9:6789/0,7=222.186.55.10:6789/0,8=222.186.55.11:6789/0,9=222.186.55.12:6789/0},
> election epoch 8, quorum 0,1,2,3,4,5,6,7,8,9 1,2,3,4,5,6,7,8,9,10
> osdmap e63: 109 osds: 109 up, 109 in
> pgmap v462: 21312 pgs, 3 pools, 0 bytes data, 0 objects
> 106 TB used, 289 TB / 396 TB avail
> 21312 active+clean
>
>
> command: df -h
> On one server of the ceph cluster it shows:
> [root@yun4 ~]# df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda3 1.8T 25G 1.8T 2% /
> tmpfs 16G 0 16G 0% /dev/shm
> /dev/sda1 194M 60M 125M 33% /boot
> /dev/sdb1 3.7T 1001G 2.7T 27% /data/osd1
> /dev/sdk1 3.7T 1001G 2.7T 27% /data/osd10
> /dev/sdl1 3.7T 1001G 2.7T 27% /data/osd11
> /dev/sdc1 3.7T 1001G 2.7T 27% /data/osd2
> /dev/sdd1 3.7T 1001G 2.7T 27% /data/osd3
> /dev/sde1 3.7T 1001G 2.7T 27% /data/osd4
> /dev/sdf1 3.7T 1001G 2.7T 27% /data/osd5
> /dev/sdg1 3.7T 1001G 2.7T 27% /data/osd6
> /dev/sdh1 3.7T 1001G 2.7T 27% /data/osd7
> /dev/sdi1 3.7T 1001G 2.7T 27% /data/osd8
> /dev/sdj1 3.7T 1001G 2.7T 27% /data/osd9
>
> Every disk shows 27% space used.
> What could be the reason?
>
>