Hi,
Is there any recommendation for mds_cache_memory_limit? Like a percentage of
the total RAM or something?
Thanks.
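(For illustration, a minimal sketch of setting the limit at runtime with the
Luminous-era CLI; the 4 GiB value and the mds.ceph-n1 daemon name are
assumptions, not a recommendation from this thread. Note the limit applies to
the MDS cache only, so the daemon's total memory use will run somewhat higher.)

ceph tell mds.* injectargs '--mds_cache_memory_limit=4294967296'  # apply 4 GiB (assumed value) to all running MDS daemons
ceph daemon mds.ceph-n1 config get mds_cache_memory_limit         # verify via the admin socket (hypothetical daemon name)
# to persist it, set mds_cache_memory_limit = 4294967296 under [mds] in ceph.conf on the MDS hosts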
Hi,
I am currently running a Ceph cluster with CephFS on 3 nodes; each has 6
OSDs except one that has 5. I have 3 MDS daemons (2 active and 1 standby) and
3 mons.
[root@ceph-n1 ~]# ceph -s
  cluster:
    id:     1d97aa70-2029-463a-b6fa-20e98f3e21fb
    health: HEALTH_WARN
            3 clie
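(As an aside, a quick way to confirm that MDS layout; these are standard
status commands from the same CLI, shown here purely as an illustration.)

ceph mds stat     # one-line summary of active/standby MDS daemons
ceph fs status    # per-filesystem view with ranks, clients, and pools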
Hi,
I'm not sure whether it's normal, but each time I add a new OSD with
ceph-deploy osd create --data /dev/sdg ceph-n1,
it adds 1 GB to my global used data. I just formatted the drive, so it should
be at 0, right?
So with 6 OSDs in my cluster it already shows 6 GiB used.
[root@ceph-n1 ~]# ceph -s
c
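(A short illustration of where to look; the commands are standard, and the
per-OSD overhead figure is an assumption based on BlueStore reserving roughly
1 GiB per OSD for its internal metadata, which is reported as used space.)

ceph df           # cluster-wide raw usage versus per-pool usage
ceph osd df tree  # per-OSD used space; empty BlueStore OSDs still show ~1 GiB used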
Hi,
I'm new at the company and I took over the Ceph project.
I'm still a newbie on the subject, and I'm trying to understand what the
previous guy was trying to do.
Is there any reason someone would install radosgw alongside CephFS?
If not, how can I remove all the radosgw configuration without restarti
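(A minimal sketch of one way to retire an unused radosgw, assuming a systemd
deployment and an instance named rgw.ceph-n1; the unit name and pool names are
assumptions, and deleting pools is irreversible, so verify they are unused
first.)

systemctl stop ceph-radosgw@rgw.ceph-n1     # stop the gateway (assumed instance name)
systemctl disable ceph-radosgw@rgw.ceph-n1
ceph auth del client.rgw.ceph-n1            # remove its cephx key
ceph osd pool ls | grep rgw                 # list the pools radosgw created
ceph osd pool delete .rgw.root .rgw.root --yes-i-really-really-mean-it
# pool deletion requires mon_allow_pool_delete=true on the monitors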
Hi,
I want to know whether there are any dependencies between the Ceph admin node
and the other nodes.
Can I delete my Ceph admin node, create a new one, and link it to my OSD
nodes?
Or can I take all the existing OSDs in a node from cluster "A" and transfer
them to cluster "B"?
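(On the first point, a minimal sketch assuming a ceph-deploy workflow: the
admin node mainly holds ceph.conf and the keyrings, which can be regenerated
on a fresh box from any monitor. mon1 and admin-new are placeholder
hostnames.)

ceph-deploy config pull mon1    # fetch the existing ceph.conf from a monitor
ceph-deploy gatherkeys mon1     # re-collect the cluster keyrings
ceph-deploy admin admin-new     # push config and admin keyring to the new node
ceph -s                         # confirm the new admin node can reach the cluster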
Dear Ceph Experts,
I have recently deleted a very big directory on my CephFS, and a few minutes
later my dashboard started yelling:
Overall status: HEALTH_ERR
MDS_DAMAGE: 1 MDSs report damaged metadata
So I immediately logged in to my Ceph admin node and did a ceph -s:
cluster:
id: 472
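(A minimal sketch of how such damage can be inspected, assuming a
Luminous-era admin socket and a daemon named mds.ceph-n1, both placeholders;
the scrub/repair step is a commonly suggested follow-up, not a guaranteed fix,
so take a backup of the metadata pool before repairing.)

ceph daemon mds.ceph-n1 damage ls                      # list recorded damage entries (run on the MDS host)
ceph daemon mds.ceph-n1 scrub_path / recursive repair  # scrub from the root and attempt repairs
ceph daemon mds.ceph-n1 damage rm <id>                 # clear an entry once resolved; <id> comes from damage ls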