[ceph-users] Re: Fstab entry for mounting specific ceph fs?

2022-09-25 Thread Sagittarius-A Black Hole
No, I actually included the ceph fstype, just not in my example (the initial post). The key is really mds_namespace for specifying the filesystem; this should be included in the documentation. Thanks, Daniel
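A minimal sketch of such an fstab entry, for reference (the monitor addresses, mount point, and filesystem name below are placeholders, not values from this thread):

    # /etc/fstab - mount the CephFS filesystem named "myfs" as client.admin (all values hypothetical)
    192.168.0.1:6789,192.168.0.2:6789:/  /mnt/myfs  ceph  name=admin,mds_namespace=myfs,noatime,_netdev  0  0

On recent kernels the same option is also accepted as fs=<name>; mds_namespace is the older spelling.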

[ceph-users] Re: HA cluster

2022-09-25 Thread Eugen Block
Hi, do both nodes have the MON and OSD roles? If there's only one MON and you shut it down, the cluster is down, of course. If the maps don't change too quickly, it's possible that your clients still communicate with their respective OSDs, so they don't immediately notice failed MONs. This
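A quick way to confirm that lost MON quorum is the cause is to query a surviving monitor; a sketch (the monitor id is a placeholder):

    # Works only while the cluster still has quorum:
    ceph quorum_status --format json-pretty
    # If quorum is lost the CLI hangs, so ask a monitor directly over its admin socket:
    ceph daemon mon.<id> mon_status

With two MONs, a majority of two is two, so losing either monitor stops the cluster; three MONs are the usual minimum to survive one failure.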

[ceph-users] HA cluster

2022-09-25 Thread Murilo Morais
Hello guys. I have a question regarding HA. I set up two hosts with cephadm, created the pools, and set up an NFS, everything working so far. I turned off the second host and the first one continued to work without problems, but if I turn off the first, the second is totally unresponsive. What

[ceph-users] Re: Fstab entry for mounting specific ceph fs?

2022-09-25 Thread Dominique Ramaekers
Hi Daniel, I also needed to add the mds_namespace in my definition... But did you also forget to specify the fs-type "ceph"? This is my entry in fstab: 10.3.1.23:6789,10.3.1.26:6789,10.3.1.28:6789:/ /srv/poolVMS ceph name=admin,mds_namespace=poolVMS,noatime,_netdev 0
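To test those options before committing them to fstab, the equivalent one-off mount might look like this sketch (same addresses as the entry above; this assumes the admin keyring is already in place under /etc/ceph):

    # Mount the CephFS filesystem "poolVMS" by hand
    mount -t ceph 10.3.1.23:6789,10.3.1.26:6789,10.3.1.28:6789:/ /srv/poolVMS \
        -o name=admin,mds_namespace=poolVMS,noatime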

[ceph-users] 2-Layer CRUSH Map Rule?

2022-09-25 Thread duluxoz
Hi Everybody (Hi Dr. Nick), TL/DR: Is it possible to have a "2-Layer" CRUSH map? I think it is (although I'm not sure how to set it up). My issue is that we're using 4+2 erasure coding on our OSDs, with 7 OSDs per OSD node (yes, the cluster is handling things AOK - we're running at
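A "2-layer" rule for a k=4, m=2 profile typically picks hosts first and then OSDs within each host. A sketch in CRUSH map syntax (the rule name and id are made up, and it assumes at least 3 OSD nodes):

    # Two-layer CRUSH rule sketch for k=4, m=2 (rule name/id hypothetical)
    rule ec42_two_layer {
        id 2
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        # layer 1: pick 3 distinct hosts
        step choose indep 3 type host
        # layer 2: pick 2 OSDs on each chosen host (3 x 2 = 6 chunks)
        step choose indep 2 type osd
        step emit
    }

After compiling and injecting the edited map, a pool can be pointed at the rule with: ceph osd pool set <pool> crush_rule ec42_two_layer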