Hello Patrick, 

The file system was created around 4 months ago. We are running Ceph version 14.2.3.
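For reference, a quick way to confirm those details from any node with an admin keyring (illustrative commands only, not necessarily the exact ones used here):

# show per-daemon Ceph versions across the cluster
ceph versions
# show the file system creation timestamp from the FSMap
ceph fs dump | grep created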

[root@knode25 /]# ceph fs dump
dumped fsmap epoch 577
e577
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs01' (1)
fs_name cephfs01
epoch   577
flags   32
created 2019-10-18 23:59:29.610249
modified        2020-02-22 03:13:09.425905
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
min_compat_client       -1 (unspecified)
last_failure    0
last_failure_osd_epoch  1608
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in      0
up      {0=2981519}
failed
damaged
stopped
data_pools      [2]
metadata_pool   1
inline_data     disabled
balancer
standby_count_wanted    1
2981519:        [v2:10.131.16.30:6808/3209191719,v1:10.131.16.30:6809/3209191719] 'cephfs01-b' mds.0.572 up:active seq 22141
2998684:        [v2:10.131.16.89:6832/54557615,v1:10.131.16.89:6833/54557615] 'cephfs01-a' mds.0.0 up:standby-replay seq 2


[root@knode25 /]# ceph fs status
cephfs01 - 290 clients
========
+------+----------------+------------+---------------+-------+-------+
| Rank |     State      |    MDS     |    Activity   |  dns  |  inos |
+------+----------------+------------+---------------+-------+-------+
|  0   |     active     | cephfs01-b | Reqs:  333 /s | 2738k | 2735k |
| 0-s  | standby-replay | cephfs01-a | Evts:  795 /s | 1368k | 1363k |
+------+----------------+------------+---------------+-------+-------+
+-------------------+----------+-------+-------+
|        Pool       |   type   |  used | avail |
+-------------------+----------+-------+-------+
| cephfs01-metadata | metadata | 2193M | 78.1T |
|   cephfs01-data0  |   data   |  753G | 78.1T |
+-------------------+----------+-------+-------+