Re: [ceph-users] cluster network down

2019-10-26 Thread Michel Raabe
On 10/1/19 8:20 AM, Lars Täuber wrote:
> Mon, 30 Sep 2019 15:21:18 +0200, Janne Johansson ==> Lars Täuber:
>>> I don't remember where I read it, but it was said that the cluster
>>> migrates its complete traffic over to the public network when the
>>> cluster network goes down. So th…
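
(For reference, the public/cluster split the thread is talking about is configured in ceph.conf roughly like this; the subnets below are made-up examples, not taken from the thread:)

    [global]
        public network  = 192.0.2.0/24     # client-facing "front" traffic
        cluster network = 198.51.100.0/24  # replication/recovery traffic between OSDs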

Re: [ceph-users] Nautilus : ceph dashboard ssl not working

2019-09-17 Thread Michel Raabe
Hi Muthu,

On 16.09.19 11:30, nokia ceph wrote:
> Hi Team,
> In Ceph 14.2.2, ceph dashboard does not have set-ssl-certificate. We are
> trying to enable the Ceph dashboard, and when using the SSL certificate
> and key it is not working.
> cn5.chn5au1c1.cdn ~# ceph dashboard set-ssl-certificate -i das…
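
(For anyone hitting the same thing: on releases where "ceph dashboard set-ssl-certificate" is available, the certificate and key are set with two separate commands; on older releases the documented fallback was the config-key store. The file names below are examples only:)

    # Nautilus-style commands
    ceph dashboard set-ssl-certificate -i dashboard.crt
    ceph dashboard set-ssl-certificate-key -i dashboard.key

    # older fallback via the config-key store
    ceph config-key set mgr/dashboard/crt -i dashboard.crt
    ceph config-key set mgr/dashboard/key -i dashboard.key

    # restart the dashboard module so it picks up the new certificate
    ceph mgr module disable dashboard
    ceph mgr module enable dashboard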

Re: [ceph-users] Nautilus, RBD-Mirroring & Cluster Names

2019-07-15 Thread Michel Raabe
Hi,

On 15.07.19 22:42, dhils...@performair.com wrote:
> Paul;
> If I understand you correctly: I will have 2 clusters, each named "ceph"
> (internally). As such, each will have a configuration file at:
> /etc/ceph/ceph.conf
> I would copy the other cluster's configuration file to something like:
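
(A sketch of the usual layout on the mirroring site when both clusters are internally named "ceph"; the name "remote" is only an example. The --cluster option simply selects which /etc/ceph/<name>.conf gets read:)

    /etc/ceph/ceph.conf                         # local cluster
    /etc/ceph/remote.conf                       # copy of the other cluster's ceph.conf
    /etc/ceph/remote.client.admin.keyring       # key for the remote cluster

    # talk to the remote cluster by config name
    ceph --cluster remote -s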

Re: [ceph-users] Invalid metric type, prometheus module with rbd mirroring

2019-07-05 Thread Michel Raabe
Hi Brett!

FYI, it was fixed last month:
https://github.com/ceph/ceph/commit/425c5358fed9376939cff8a922c3ce1186d6b9e2

HTH,
Michel

Re: [ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-05-30 Thread Michel Raabe
Hi Mike,

On 30.05.19 02:00, Mike Cave wrote:
> I'd like as little friction for the cluster as possible as it is in heavy
> use right now. I'm running Mimic (13.2.5) on CentOS. Any suggestions on
> best practices for this?

You can limit the recovery, for example:
* max backfills
* recovery max act…
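
(Those knobs set cluster-wide on Mimic; the values are examples only, tune to your workload:)

    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1

    # or injected at runtime into the running OSDs
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'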

Re: [ceph-users] inconsistent number of pools

2019-05-24 Thread Michel Raabe
On 20.05.19 13:04, Lars Täuber wrote:
> Mon, 20 May 2019 10:52:14 +, Eugen Block ==> ceph-users@lists.ceph.com:
>> Hi, have you tried 'ceph health detail'?
> No, I hadn't. Thanks for the hint.

You can also try

$ rados lspools
$ ceph osd pool ls

and verify that with the pgs

$ ceph pg ls --fo…
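
(The idea is to cross-check the pool list from several angles; a minimal sketch of commands that should all agree:)

    rados lspools
    ceph osd pool ls detail      # includes pool IDs, flags and application tags
    ceph pg dump pools           # per-pool PG stats; pool IDs should match the above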

Re: [ceph-users] Intel P4600 3.2TB U.2 form factor NVMe firmware problems causing dead disks

2019-02-23 Thread Michel Raabe
On Monday, February 18, 2019 16:44 CET, David Turner wrote:
> Has anyone else come across this issue before? Our current theory is that
> Bluestore is accessing the disk in a way that is triggering a bug in the
> older firmware version that isn't triggered by more traditional
> filesystems. We…
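
(Not from the thread, but a quick way to see which firmware a drive is actually running, using nvme-cli; the device name is an example:)

    nvme list                 # model, serial and firmware revision per drive
    nvme fw-log /dev/nvme0    # firmware slot information for one controller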

Re: [ceph-users] RBD default pool

2019-02-02 Thread Michel Raabe
> On 2. Feb 2019, at 01:25, Carlos Mogas da Silva wrote:
>
>> On 01/02/2019 22:40, Alan Johnson wrote:
>> Confirm that no pools are created by default with Mimic.
>
> I can confirm that. Mimic deploy doesn't create any pools.

https://ceph.com/community/new-luminous-pool-tags/

Yes, and that's…
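
(So on Mimic the default pool has to be created by hand before rbd can use it; a minimal sketch, the PG count is just an example:)

    ceph osd pool create rbd 128
    rbd pool init rbd          # tags the pool with the "rbd" application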

Re: [ceph-users] Restoring keyring capabilities

2018-02-16 Thread Michel Raabe
[osd] allow *
client.bootstrap-mds

root@ceph-mon1:/# cat /var/lib/ceph/mon/ceph-ceph-mon1/keyring
[mon.]
    key = AQD1y3RapVDCNxAAmInc8D3OPZKuTVeUcNsPug==
    caps mon = "allow *"

> Michel Raabe writes:
> > On 02/16/18 @ 18:21, Nico Schottelius wrote:
> >> on a test…
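
(For the archives: with the mon. key shown above you can still talk to the cluster even though client.admin is broken, and restore its caps. A sketch of that recovery path, run on the monitor host, using the keyring path from this thread:)

    ceph -n mon. -k /var/lib/ceph/mon/ceph-ceph-mon1/keyring \
        auth caps client.admin \
        mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'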

Re: [ceph-users] Restoring keyring capabilities

2018-02-16 Thread Michel Raabe
On 02/16/18 @ 18:21, Nico Schottelius wrote:
> on a test cluster I issued a few seconds ago:
>
> ceph auth caps client.admin mgr 'allow *'
>
> instead of what I really wanted to do
>
> ceph auth caps client.admin mgr 'allow *' mon 'allow *' osd 'allow *' \
>     mds allow
>
> Now any access t…

Re: [ceph-users] Bluestore Hardwaresetup

2018-02-16 Thread Michel Raabe
Hi Peter,

On 02/15/18 @ 19:44, Jan Peters wrote:
> I want to evaluate Ceph with BlueStore, so I need some hardware/configuration
> advice from you.
>
> My setup should be:
>
> 3-node cluster, each with:
>
> - Intel Gold SP 5118 processor, 12 cores / 2.30 GHz
> - 64 GB RAM
> - 6 x 7.2k 4 TB SAS…
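
(With 7.2k SAS data disks, the usual BlueStore advice is to put block.db on flash; a minimal ceph-volume sketch, device names are made up:)

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1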