Forgot to reply to the list!

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, January 17, 2019 8:32 AM, David Young 
<funkypeng...@protonmail.com> wrote:

> Thanks David,
>
> "ceph osd df" looks like this:
>
> ---------
> root@node1:~# ceph osd df
> ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL    %USE  VAR  PGS
> 9   hdd 7.27698  1.00000 7.3 TiB 6.3 TiB 1008 GiB 86.47 1.22 122
> 10   hdd 7.27698  1.00000 7.3 TiB 4.9 TiB  2.4 TiB 66.90 0.94  94
> 11   hdd 7.27739  0.90002 7.3 TiB 5.4 TiB  1.9 TiB 74.29 1.05 104
> 12   hdd 7.27698  0.95001 7.3 TiB 5.8 TiB  1.5 TiB 79.64 1.12 115
> 13   hdd       0        0     0 B     0 B      0 B     0    0  18
> 40   hdd 7.27739  1.00000 7.3 TiB 6.1 TiB  1.2 TiB 83.32 1.17 120
> 41   hdd 7.27739  0.90002 7.3 TiB 5.6 TiB  1.7 TiB 76.88 1.08 113
> 42   hdd 7.27739  0.80005 7.3 TiB 6.3 TiB  1.0 TiB 85.98 1.21 123
> 43   hdd       0        0     0 B     0 B      0 B     0    0  32
> 44   hdd 7.27739        0     0 B     0 B      0 B     0    0  27
> 45   hdd 7.27739  1.00000 7.3 TiB 5.1 TiB  2.2 TiB 69.44 0.98  98
> 46   hdd       0        0     0 B     0 B      0 B     0    0  38
> 47   hdd 7.27739  1.00000 7.3 TiB 4.4 TiB  2.9 TiB 60.24 0.85  84
> 48   hdd 7.27739  1.00000 7.3 TiB 4.5 TiB  2.8 TiB 61.66 0.87  85
> 49   hdd 7.27739  1.00000 7.3 TiB 4.7 TiB  2.5 TiB 65.07 0.92  90
> 50   hdd 7.27739  1.00000 7.3 TiB 4.7 TiB  2.6 TiB 64.39 0.91  87
> 51   hdd 7.27739  1.00000 7.3 TiB 5.1 TiB  2.2 TiB 70.22 0.99  95
> 52   hdd 7.27739  1.00000 7.3 TiB 4.9 TiB  2.4 TiB 66.69 0.94  98
> 53   hdd 7.27739  1.00000 7.3 TiB 4.8 TiB  2.5 TiB 66.33 0.93  97
> 54   hdd 7.27739  1.00000 7.3 TiB 4.3 TiB  3.0 TiB 59.20 0.83  82
> 0   hdd 7.27699  1.00000 7.3 TiB 3.8 TiB  3.5 TiB 52.34 0.74  71
> 1   hdd 7.27699  1.00000 7.3 TiB 4.9 TiB  2.4 TiB 67.62 0.95  89
> 2   hdd 7.27699  0.90002 7.3 TiB 4.9 TiB  2.4 TiB 66.69 0.94  81
> 3   hdd 7.27699  1.00000 7.3 TiB 4.7 TiB  2.5 TiB 65.21 0.92  88
> 4   hdd 7.27699  0.90002 7.3 TiB 4.9 TiB  2.4 TiB 67.25 0.95  93
> 5   hdd 7.27739  0.95001 7.3 TiB 4.2 TiB  3.0 TiB 58.39 0.82  78
> 6   hdd 7.27739  1.00000 7.3 TiB 5.7 TiB  1.6 TiB 78.35 1.10 105
> 7   hdd 7.27739  0.95001 7.3 TiB 5.2 TiB  2.1 TiB 71.65 1.01  98
> 8   hdd 7.27739  1.00000 7.3 TiB 5.1 TiB  2.2 TiB 69.92 0.98  94
> 14   hdd 7.27739  0.95001 7.3 TiB 5.3 TiB  2.0 TiB 72.46 1.02 100
> 15   hdd 7.27739  0.85004 7.3 TiB 6.0 TiB  1.2 TiB 82.93 1.17 119
> 16   hdd 7.27739  1.00000 7.3 TiB 6.3 TiB  1.0 TiB 86.11 1.21 117
> 17   hdd 7.27739  0.85004 7.3 TiB 5.2 TiB  2.1 TiB 71.48 1.01 103
> 18   hdd 7.27739  1.00000 7.3 TiB 5.2 TiB  2.1 TiB 71.43 1.00 100
> 19   hdd 7.27739  1.00000 7.3 TiB 5.2 TiB  2.0 TiB 72.14 1.01 103
> 20   hdd 7.27739  1.00000 7.3 TiB 5.7 TiB  1.6 TiB 78.13 1.10 110
> 21   hdd 7.27739  1.00000 7.3 TiB 6.2 TiB  1.0 TiB 85.58 1.20 125
> 22   hdd 7.27739  1.00000 7.3 TiB 5.2 TiB  2.1 TiB 71.71 1.01 103
> 23   hdd 7.27739  0.95001 7.3 TiB 6.0 TiB  1.2 TiB 83.04 1.17 110
> 24   hdd       0  1.00000 7.3 TiB 831 GiB  6.5 TiB 11.15 0.16  13
> 25   hdd 7.27739  1.00000 7.3 TiB 6.3 TiB  978 GiB 86.87 1.22 121
> 26   hdd 7.27739  1.00000 7.3 TiB 5.2 TiB  2.1 TiB 70.86 1.00 100
> 27   hdd 7.27739  1.00000 7.3 TiB 5.9 TiB  1.4 TiB 80.92 1.14 115
> 28   hdd 7.27739  1.00000 7.3 TiB 6.5 TiB  826 GiB 88.91 1.25 121
> 29   hdd 7.27739  1.00000 7.3 TiB 5.2 TiB  2.1 TiB 70.99 1.00  95
> 30   hdd       0  1.00000 7.3 TiB 2.0 TiB  5.3 TiB 26.99 0.38  33
> 31   hdd 7.27739  1.00000 7.3 TiB 4.6 TiB  2.7 TiB 62.61 0.88  90
> 32   hdd 7.27739  0.90002 7.3 TiB 5.5 TiB  1.8 TiB 75.65 1.06 107
> 33   hdd 7.27739  1.00000 7.3 TiB 5.7 TiB  1.6 TiB 77.99 1.10 111
> 34   hdd 7.27739        0     0 B     0 B      0 B     0    0  10
> 35   hdd 7.27739  1.00000 7.3 TiB 5.3 TiB  2.0 TiB 73.16 1.03 106
> 36   hdd 7.27739  0.95001 7.3 TiB 6.6 TiB  694 GiB 90.68 1.28 126
> 37   hdd 7.27739  1.00000 7.3 TiB 5.5 TiB  1.8 TiB 75.83 1.07 106
> 38   hdd 7.27739  0.95001 7.3 TiB 6.2 TiB  1.1 TiB 85.02 1.20 115
> 39   hdd 7.27739  1.00000 7.3 TiB 4.9 TiB  2.4 TiB 67.16 0.94  94
>                     TOTAL 400 TiB 266 TiB  134 TiB 71.08
> MIN/MAX VAR: 0.16/1.28  STDDEV: 13.96
> root@node1:~#
> ------------
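>
> The spread above is wide (MIN/MAX VAR 0.16/1.28, STDDEV 13.96). As a minimal sketch (not commands I've run here; the 120 threshold is just an example, and upmap assumes all clients are Luminous or newer), the imbalance could be reduced with either reweight-by-utilization or the mgr balancer module:
>
> ---------
> # dry run: preview which OSDs above 1.2x the average utilization would be reweighted
> root@node1:~# ceph osd test-reweight-by-utilization 120
> # apply the same adjustment if the preview looks sane
> root@node1:~# ceph osd reweight-by-utilization 120
> # or hand it over to the balancer module (upmap mode needs luminous-or-newer clients)
> root@node1:~# ceph balancer mode upmap
> root@node1:~# ceph balancer on
> ---------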
>
> The drives weighted zero are "out", pending recovery of the remaining degraded objects after an OSD failure (a rough way to monitor that recovery is sketched after the status output below):
>
> -----------
>   data:
>     pools:   2 pools, 1028 pgs
>     objects: 52.15 M objects, 197 TiB
>     usage:   266 TiB used, 134 TiB / 400 TiB avail
>     pgs:     477114/260622045 objects degraded (0.183%)
>              10027396/260622045 objects misplaced (3.847%)
> ----------------------
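>
> A rough way to keep an eye on that recovery (a sketch of read-only commands, not output from this cluster):
>
> -----------
> # overall cluster and recovery status
> root@node1:~# ceph -s
> # which PGs are degraded/undersized and why
> root@node1:~# ceph health detail
> # list PGs stuck in the degraded state
> root@node1:~# ceph pg dump_stuck degraded
> ----------------------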
>
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Thursday, January 17, 2019 7:23 AM, David C <dcsysengin...@gmail.com> 
> wrote:
>
>> On Wed, 16 Jan 2019, 02:20 David Young <funkypeng...@protonmail.com> wrote:
>>
>>> Hi folks,
>>>
>>> My ceph cluster is used exclusively for cephfs, as follows:
>>>
>>> ---
>>> root@node1:~# grep ceph /etc/fstab
>>> node2:6789:/ /ceph ceph auto,_netdev,name=admin,secretfile=/root/ceph.admin.secret
>>> root@node1:~#
>>> ---
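>>>
>>> (The equivalent one-off mount with the kernel client would be something like this sketch, reusing the monitor and secret file from the fstab entry above:)
>>>
>>> ---
>>> root@node1:~# mount -t ceph node2:6789:/ /ceph -o name=admin,secretfile=/root/ceph.admin.secret
>>> ---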
>>>
>>> "rados df" shows me the following:
>>>
>>> ---
>>> root@node1:~# rados df
>>> POOL_NAME          USED  OBJECTS CLONES    COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED    RD_OPS      RD    WR_OPS      WR
>>> cephfs_metadata 197 MiB    49066      0     98132                  0       0        0   9934744  55 GiB  57244243 232 GiB
>>> media           196 TiB 51768595      0 258842975                  0       1   203534 477915206 509 TiB 165167618 292 TiB
>>>
>>> total_objects    51817661
>>> total_used       266 TiB
>>> total_avail      135 TiB
>>> total_space      400 TiB
>>> root@node1:~#
>>> ---
>>>
>>> But "df" on the mounted cephfs volume shows me:
>>>
>>> ---
>>> root@node1:~# df -h /ceph
>>> Filesystem          Size  Used Avail Use% Mounted on
>>> 10.20.30.22:6789:/  207T  196T   11T  95% /ceph
>>> root@node1:~#
>>> ---
>>>
>>> And ceph -s shows me:
>>>
>>> ---
>>>   data:
>>>     pools:   2 pools, 1028 pgs
>>>     objects: 51.82 M objects, 196 TiB
>>>     usage:   266 TiB used, 135 TiB / 400 TiB avail
>>> ---
>>>
>>> "media" is an EC pool with size of 5 (4+1), so I can expect 1TB of data to 
>>> consume 1.25TB raw space.
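>>>
>>> As a quick sanity check on that math (just a sketch using the numbers above):
>>>
>>> ---
>>> root@node1:~# echo "scale=1; 196 * 5 / 4" | bc
>>> 245.0
>>> ---
>>>
>>> So ~196 TiB of data at 4+1 should account for roughly 245 TiB of raw space before any per-object overhead, which is broadly in line with the 266 TiB used reported above.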
>>>
>>> My question is, why does "df" show me I have 11TB free, when "rados df" 
>>> shows me I have 135TB (raw) available?
>>
>> Probably because your OSDs are quite unbalanced. What does your 'ceph osd df' look like?
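>>
>> For context, as far as I understand it, the available space df reports for a CephFS mount comes from the data pool's MAX AVAIL, and Ceph derives that estimate from the fullest OSDs backing the pool, so a few nearly full OSDs will drag it well below the raw free space. A quick way to see the per-pool figures (a sketch, not output from your cluster):
>>
>> ---
>> # per-pool USED / MAX AVAIL, plus raw totals
>> root@node1:~# ceph df detail
>> # per-OSD utilization grouped by host, to spot the outliers
>> root@node1:~# ceph osd df tree
>> ---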
>>
>>> Thanks!
>>> D
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com