Re: [ceph-users] (yet another) multi active mds advise needed

2018-05-18 Thread Daniel Baumann
On 05/19/2018 01:13 AM, Webert de Souza Lima wrote: > New question: will it make any difference in the balancing if instead of > having the MAIL directory in the root of cephfs and the domains' > subtrees inside it, I discard the parent dir and put all the subtrees right > in cephfs root? the
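
For reference, independent of whether the domain directories live under MAIL/ or directly in the cephfs root, subtrees can also be pinned to a rank explicitly via the ceph.dir.pin xattr (Luminous and later). A minimal sketch, with example mount paths and ranks:

    # pin each mail domain's directory to a fixed MDS rank (paths and ranks are examples)
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/MAIL/domain-a.example
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/MAIL/domain-b.example
    # -v -1 hands a directory back to the automatic balancer
    setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/MAIL/domain-a.example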

Re: [ceph-users] in retrospect get OSD for "slow requests are blocked" ? / get detailed health status via librados?

2018-05-18 Thread Brad Hubbard
On Thu, May 17, 2018 at 6:06 PM, Uwe Sauter wrote: > Brad, > > thanks for the bug report. This is exactly the problem I am having (log-wise). You don't give any indication what version you are running, but see https://tracker.ceph.com/issues/23205
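
For reference, a sketch of how the affected OSDs can be identified while the slow requests are still happening, assuming a Luminous-era cluster (the OSD id is an example; the ceph daemon commands must run on the host carrying that OSD):

    ceph health detail                      # names the OSDs with blocked/slow requests
    ceph daemon osd.12 dump_ops_in_flight   # operations currently stuck on that OSD
    ceph daemon osd.12 dump_historic_ops    # recently completed ops, including slow ones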

Re: [ceph-users] Multi-MDS Failover

2018-05-18 Thread Scottix
So we have been testing this quite a bit. Having the failure domain be only partially available is OK for us, but odd, since we don't know what will be down; with a single MDS we at least know everything will be blocked. It would be nice to have an option to have all IO blocked if it hits a degraded
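
For reference, there is no built-in switch that blocks all IO once a rank fails, but which ranks (and therefore which subtrees) are affected can at least be checked with standard commands; a sketch:

    ceph fs status    # per-rank state and which daemon serves each rank
    ceph mds stat     # compact summary of ranks up / failed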

Re: [ceph-users] (yet another) multi active mds advise needed

2018-05-18 Thread Webert de Souza Lima
Hi Patrick On Fri, May 18, 2018 at 6:20 PM Patrick Donnelly wrote: > Each MDS may have multiple subtrees they are authoritative for. Each > MDS may also replicate metadata from another MDS as a form of load > balancing. OK, it's good to know that it actually does some load
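
For reference, the subtrees a given MDS is currently authoritative for (and the ones it only replicates) can be inspected through its admin socket; a sketch, with the daemon name as an example and run on the host serving that daemon:

    ceph daemon mds.mds-a get subtrees | less
    # each entry lists a directory plus which rank is authoritative for it
    # and whether this MDS merely holds a replica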

Re: [ceph-users] Help/advice with crush rules

2018-05-18 Thread Gregory Farnum
On Thu, May 17, 2018 at 9:05 AM Andras Pataki wrote: > I've been trying to wrap my head around crush rules, and I need some > help/advice. I'm thinking of using erasure coding instead of > replication, and trying to understand the possibilities for planning for >
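
For reference, a sketch of how an erasure-code profile drives the crush rule that gets generated with the pool, with example values for k/m, PG count and failure domain:

    # 4 data + 2 coding chunks, at most one chunk per host
    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    # creating an EC pool from the profile also generates a matching crush rule,
    # typically named after the pool
    ceph osd pool create ecpool 128 128 erasure ec42
    ceph osd crush rule dump ecpool      # inspect the generated rule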

Re: [ceph-users] Ceph MeetUp Berlin – May 28

2018-05-18 Thread Gregory Farnum
Is there any chance of sharing those slides when the meetup has finished? It sounds interesting! :) On Fri, May 18, 2018 at 6:53 AM Robert Sander wrote: > Hi, > > we are organizing a bi-monthly meetup in Berlin, Germany and invite any > interested party to join us

Re: [ceph-users] Multi-MDS Failover

2018-05-18 Thread Gregory Farnum
On Fri, May 18, 2018 at 11:56 AM Webert de Souza Lima wrote: > Hello, > > > On Mon, Apr 30, 2018 at 7:16 AM Daniel Baumann > wrote: > >> additionally: if rank 0 is lost, the whole FS stands still (no new >> client can mount the fs; no existing

Re: [ceph-users] Kubernetes/Ceph block performance

2018-05-18 Thread Gregory Farnum
You're doing 4K direct IOs on a distributed storage system and then comparing it to what the local device does with 1GB blocks? :) Try feeding Ceph with some larger IOs and check how it does. -Greg On Fri, May 18, 2018 at 1:22 PM Rhugga Harper wrote: > > We're evaluating
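
For reference, a sketch of repeating the test with larger IOs using fio directly against the RBD-backed block device (device path, block sizes and queue depth are examples; writing to the raw device destroys its data):

    # 4K random writes vs. 4M sequential writes, both direct IO
    fio --name=randwrite-4k --filename=/dev/rbd0 --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based
    fio --name=seqwrite-4m --filename=/dev/rbd0 --rw=write --bs=4m --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based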

Re: [ceph-users] (yet another) multi active mds advise needed

2018-05-18 Thread Daniel Baumann
On 05/18/2018 11:19 PM, Patrick Donnelly wrote: > So, you would want to have a standby-replay > daemon for each rank or just have normal standbys. It will likely > depend on the size of your MDS (cache size) and available hardware. jftr, having 3 active mds and 3 standby-replay resulted
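
For reference, a sketch of how standby-replay was typically configured per daemon in the Luminous era (section names and ranks are examples; these mds_standby_* options were later replaced by a per-filesystem setting):

    [mds.mds-a]
    mds_standby_replay = true
    mds_standby_for_rank = 0

    [mds.mds-b]
    mds_standby_replay = true
    mds_standby_for_rank = 1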

Re: [ceph-users] (yet another) multi active mds advise needed

2018-05-18 Thread Patrick Donnelly
Hello Webert, On Fri, May 18, 2018 at 1:10 PM, Webert de Souza Lima wrote: > Hi, > > We're migrating from a Jewel / filestore based cephfs architecture to a > Luminous / bluestore based one. > > One MUST HAVE is multiple Active MDS daemons. I'm still lacking knowledge of >

[ceph-users] (yet another) multi active mds advise needed

2018-05-18 Thread Webert de Souza Lima
Hi, We're migrating from a Jewel / filestore based cephfs architecture to a Luminous / bluestore based one. One MUST HAVE is multiple Active MDS daemons. I'm still lacking knowledge of how it actually works. After reading the docs and ML we learned that they work by sort of dividing the
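
For reference, a sketch of enabling multiple active MDS daemons on a Luminous filesystem (filesystem name and rank count are examples; depending on the exact release the allow_multimds flag may still need to be set first, and it disappeared in later versions):

    ceph fs set cephfs allow_multimds true
    ceph fs set cephfs max_mds 3      # ranks 0-2 go active, remaining daemons stay standby
    ceph fs get cephfs | grep max_mds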

Re: [ceph-users] Multi-MDS Failover

2018-05-18 Thread Webert de Souza Lima
Hello, On Mon, Apr 30, 2018 at 7:16 AM Daniel Baumann wrote: > additionally: if rank 0 is lost, the whole FS stands still (no new > client can mount the fs; no existing client can change a directory, etc.). > > my guess is that the root of a cephfs (/; which is always

Re: [ceph-users] ceph osd status output

2018-05-18 Thread John Spray
On Fri, May 18, 2018 at 9:55 AM, Marc Roos wrote: > > Should ceph osd status not be stdout? Oops, that's a bug. http://tracker.ceph.com/issues/24175 https://github.com/ceph/ceph/pull/22089 John > So I can do something like this > > [@ ~]# ceph osd status |grep c01 > >

[ceph-users] ceph osd status output

2018-05-18 Thread Marc Roos
Shouldn't the output of ceph osd status go to stdout, so I can do something like this: [@ ~]# ceph osd status |grep c01 and don't need to do this: [@ ~]# ceph osd status 2>&1 |grep c01

Re: [ceph-users] Increasing number of PGs by not a factor of two?

2018-05-18 Thread Bryan Banister
+1 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Kai Wagner Sent: Thursday, May 17, 2018 4:20 PM To: David Turner Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Increasing number of PGs by not a factor of two? Great summary David.
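
For reference, the increase itself is a two-step change, since pgp_num has to follow pg_num before data actually moves; a sketch with an example pool name and target count:

    ceph osd pool set rbd pg_num 256
    ceph osd pool set rbd pgp_num 256   # triggers the actual remapping/backfill
    ceph -s                             # watch backfill progress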

[ceph-users] Ceph MeetUp Berlin – May 28

2018-05-18 Thread Robert Sander
Hi, we are organizing a bi-monthly meetup in Berlin, Germany and invite any interested party to join us for the next one on May 28: https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/ The presented topic is "High available (active/active) NFS and CIFS exports upon CephFS". Kindest Regards

Re: [ceph-users] Poor CentOS 7.5 client performance

2018-05-18 Thread Ilya Dryomov
On Fri, May 18, 2018 at 3:25 PM, Donald "Mac" McCarthy wrote: > Ilya, > Your recommendation worked beautifully. Thank you! > > Is this something that is expected behavior or is this something that should > be filed as a bug. > > I ask because I have just enough

Re: [ceph-users] Poor CentOS 7.5 client performance

2018-05-18 Thread Donald "Mac" McCarthy
Ilya, Your recommendation worked beautifully. Thank you! Is this expected behavior, or something that should be filed as a bug? I ask because I have just enough experience with ceph at this point to be very dangerous and not enough history to know if this was

Re: [ceph-users] [PROBLEM] Fail in deploy do ceph on RHEL

2018-05-18 Thread Jacob DeGlopper
Hi Antonio - you need to set !requiretty in your sudoers file.  This is documented here: http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/   but it appears that section may not have been copied into the current docs. You can test this by running 'ssh sds@node1 sudo whoami' from your
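
For reference, one way to scope the sudoers change to the deploy user, plus the suggested test, using the username and host from this thread:

    # on node1, added via visudo:
    Defaults:sds !requiretty
    # then, from the admin node:
    ssh sds@node1 sudo whoami      # should print "root" without a tty-related error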

Re: [ceph-users] [PROBLEM] Fail in deploy do ceph on RHEL

2018-05-18 Thread David Turner
That error is a sudo error, not an SSH error. Making root login possible without password doesn't affect this at all. ceph-deploy is successfully logging in as sds to node01, but is failing to be able to execute sudo commands without a password. To fix that you need to use `visudo` to give the
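
For reference, a sketch of the passwordless-sudo entry, following the ceph-deploy quick-start convention of a dedicated drop-in file (username from this thread):

    # on node01, as root:
    echo "sds ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/sds
    chmod 0440 /etc/sudoers.d/sds
    # or add the same line via visudo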

[ceph-users] [PROBLEM] Fail in deploy do ceph on RHEL

2018-05-18 Thread Antonio Novaes
I tried to create a new Ceph cluster, but on my first command I received the error shown in blue. I searched Google for this error, but I believe it is an SSH error, not a Ceph one. I tried: alias ssh="ssh -t" on the admin node. I modified the file: Host node01 Hostname
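
For reference, the ~/.ssh/config stanza being quoted here usually looks like this on the admin node (the Hostname value is an example; User is the deploy user from the thread):

    Host node01
        Hostname node01.example.com
        User sds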

[ceph-users] (no subject)

2018-05-18 Thread Don Doerner
unsubscribe ceph-users