Re: [ceph-users] Ceph Monitor cephx issues

2017-01-07 Thread Shinobu Kinjo
Good to know. Sorry for such an inconvenience for you, anyway. Regards, On Sun, Jan 8, 2017 at 2:19 PM, Alex Evonosky wrote: > Since this was a test lab, I totally purged the whole cluster and > re-deployed; it's working well now, thank you. > > Alex F. Evonosky

Re: [ceph-users] Ceph Monitor cephx issues

2017-01-07 Thread Alex Evonosky
Since this was a test lab, I totally purged the whole cluster and re-deployed; it's working well now, thank you. Alex F. Evonosky On Sat, Jan 7, 2017 at 9:14 PM, Alex Evonosky wrote: > Thank
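As a rough sketch of what such a purge and re-deploy typically involves with ceph-deploy (the hostnames node1..node3 are placeholders, not taken from this thread):

  ceph-deploy purge node1 node2 node3        # remove ceph packages and data from each host
  ceph-deploy purgedata node1 node2 node3    # clear /var/lib/ceph and /etc/ceph leftovers
  ceph-deploy forgetkeys                     # drop keyrings cached in the admin working directory
  ceph-deploy new node1 node2 node3          # generate a fresh ceph.conf and initial monitor keyring
  ceph-deploy install node1 node2 node3
  ceph-deploy mon create-initial

Starting from a freshly generated monitor keyring presumably sidesteps whatever cephx mismatch the old monitors had accumulated.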

Re: [ceph-users] Ceph Monitor cephx issues

2017-01-07 Thread Alex Evonosky
Thank you. After sending the post, I completely removed the mon and rebuilt it with ceph-deploy. In the logs now: 2017-01-07 21:12:38.113534 7fa9613fd700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption 2017-01-07 21:12:38.113546 7fa9613fd700 0 --
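One thing worth checking when this error survives a rebuild (an assumption on my part, not something suggested in the thread) is clock skew and monitor membership, since cephx authentication is time-sensitive:

  ceph status        # health output flags "clock skew detected" between monitors
  ceph mon dump      # confirm the rebuilt mon appears in the monmap with the expected address
  timedatectl        # or: ntpq -p, to verify all mon hosts agree on the time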

Re: [ceph-users] Ceph Monitor cephx issues

2017-01-07 Thread Shinobu Kinjo
Using ``ceph-deploy`` will save your life: # https://github.com/ceph/ceph/blob/master/doc/start/quick-ceph-deploy.rst * Please look at: Adding Monitors If you are using CentOS or similar, the latest package is available here: #
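For reference, the ceph-deploy version of "adding a monitor" is essentially a one-liner (the hostname mon3 is a placeholder):

  ceph-deploy mon add mon3
  ceph quorum_status --format json-pretty   # verify the new monitor joined the quorum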

Re: [ceph-users] Ceph Monitor cephx issues

2017-01-07 Thread Alex Evonosky
Thank you for the reply! I followed this article: http://docs.ceph.com/docs/jewel/rados/operations/add-or-rm-mons/ Under the section: ADDING A MONITOR (MANUAL) Alex F. Evonosky On Sat, Jan 7, 2017 at 6:36 PM,
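For context, the manual procedure on that page boils down to roughly the following (the mon id "mon3", the IP, and the temp paths are placeholders; the authoritative steps and flags are in the linked documentation):

  mkdir -p /var/lib/ceph/mon/ceph-mon3
  ceph auth get mon. -o /tmp/mon.keyring
  ceph mon getmap -o /tmp/monmap
  ceph-mon -i mon3 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  ceph-mon -i mon3 --public-addr 10.10.10.x:6789

If the keyring handed to --mkfs does not match the cluster's current mon. key, decrypt errors like the ones quoted above are a common symptom.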

Re: [ceph-users] Ceph Monitor cephx issues

2017-01-07 Thread Shinobu Kinjo
How did you add a third MON? Regards, On Sun, Jan 8, 2017 at 7:01 AM, Alex Evonosky wrote: > Anyone see this before? > > > 2017-01-07 16:55:11.406047 7f095b379700 0 cephx: verify_reply couldn't > decrypt with error: error decoding block for decryption > 2017-01-07

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of kevin parrikar Sent: 07 January 2017 13:11 To: Lionel Bouton Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas

Re: [ceph-users] cephfs AND rbds

2017-01-07 Thread Nick Fisk
Technically I think there is no reason why you couldn’t do this, but I think it is inadvisable. There was a similar thread a while back where somebody had done this, and it caused problems when he was trying to do maintenance/recovery further down the line. I’m assuming you want to do this

[ceph-users] Ceph Monitor cephx issues

2017-01-07 Thread Alex Evonosky
Anyone see this before? 2017-01-07 16:55:11.406047 7f095b379700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption 2017-01-07 16:55:11.406053 7f095b379700 0 -- 10.10.10.138:6789/0 >> 10.10.10.252:6789/0 pipe(0x55cf8d028000 sd=11 :47548 s=1 pgs=0 cs=0 l=0
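A hedged first check for this particular error (not something stated in the thread): make sure every monitor host is actually carrying the same mon. secret, since "verify_reply couldn't decrypt" between two mon addresses usually means the two ends disagree on the key:

  ceph auth get mon.                                        # the key the cluster expects
  sudo cat /var/lib/ceph/mon/ceph-$(hostname -s)/keyring    # run on each mon host and compare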

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread Jake Young
I use 2U servers with 9x 3.5" spinning disks in each. This has scaled well for me, in both performance and budget. I may add 3 more spinning disks to each server at a later time if I need to maximize storage, or I may add 3 SSDs for journals/cache tier if we need better performance. Another

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread Maged Mokhtar
Adding more nodes is best if you have an unlimited budget :) You should add more OSDs per node until you start hitting CPU or network bottlenecks. Use a perf tool like atop or sysstat to know when this happens. Original message From: kevin parrikar
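As a concrete example of the kind of checks being suggested (commands assume the sysstat package is installed; atop shows the same data interactively):

  sar -u 1 5        # CPU utilisation, five one-second samples
  sar -n DEV 1 5    # per-interface network throughput
  iostat -x 1 5     # per-device utilisation and await, to spot saturated OSD disks

If CPU and the NICs still have headroom while the disks sit near 100% utilised, more OSDs per node is the cheaper upgrade; once CPU or the network saturates, adding nodes is what helps.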

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread Lionel Bouton
On 07/01/2017 at 14:11, kevin parrikar wrote: > Thanks for your valuable input. > We were using these SSDs in our NAS box (Synology) and it was giving > 13k IOPS for our fileserver in RAID 1. We had a few spare disks which we > added to our Ceph nodes hoping that it will give good performance same

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread kevin parrikar
Thanks for your valuable input. We were using these SSDs in our NAS box (Synology) and it was giving 13k IOPS for our fileserver in RAID 1. We had a few spare disks which we added to our Ceph nodes, hoping they would give good performance, the same as that of the NAS box. (I am not comparing NAS with Ceph

Re: [ceph-users] RBD mirroring

2017-01-07 Thread Klemen Pogacnik
Yes, disaster recovery can be solved at the application layer, but I think it would be a nice OpenStack feature too, especially since the replication is already solved by Ceph. I'll ask on other forums whether anything is being done on that feature. Thanks again for pointing me in the right direction. Kemo On

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread kevin parrikar
Thanks, Maged, for your suggestion. I have executed rbd bench and here is the result; please have a look at it: rbd bench-write image01 --pool=rbd --io-threads=32 --io-size 4096 --io-pattern rand --rbd_cache=false bench-write io_size 4096 io_threads 32 bytes 1073741824 pattern rand SEC
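For comparison, the same 4k random-write workload is often run through fio's rbd engine; a sketch, assuming fio was built with rbd support and reusing the pool and image names from the command above:

  fio --name=rbd-randwrite --ioengine=rbd --pool=rbd --rbdname=image01 \
      --clientname=admin --rw=randwrite --bs=4k --iodepth=32 \
      --direct=1 --runtime=60 --time_based

Running both tools against the same image gives a rough cross-check of the rbd bench-write numbers.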