Re: [ceph-users] Looking for experience

2020-01-09 Thread Joachim Kraftmayer
I would try to scale horizontally with smaller Ceph nodes. That way you have the advantage of being able to choose an EC profile that does not require too much overhead, and you can use failure domain "host". Joachim. On 09.01.2020 at 15:31, Wido den Hollander wrote: On 1/9/20 2:27 PM, Stefan
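
A minimal sketch of such a setup (profile name, pool name, k/m values, and PG counts are illustrative, not from the thread):

    # EC profile with moderate overhead (m/k = 50%) and failure domain host;
    # needs at least k+m = 6 hosts
    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec42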

Re: [ceph-users] rgw: multisite support

2019-10-04 Thread Joachim Kraftmayer
Maybe this will help you: https://docs.ceph.com/docs/master/radosgw/multisite/#migrating-a-single-site-system-to-multi-site -- Clyso GmbH. On 03.10.2019 at 13:32, M Ranga Swami Reddy wrote: Thank you. Do we have a quick document to do this migration? Thanks
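
The linked procedure boils down to roughly the following radosgw-admin steps (realm/zonegroup/zone names are placeholders; endpoints and system-user keys are omitted here, so treat this as a sketch and follow the linked document for the full sequence):

    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=us
    radosgw-admin zone rename --rgw-zone default --zone-new-name us-east --rgw-zonegroup=us
    radosgw-admin zonegroup modify --rgw-realm=myrealm --rgw-zonegroup=us --master --default
    radosgw-admin zone modify --rgw-realm=myrealm --rgw-zonegroup=us --rgw-zone=us-east --master --default
    radosgw-admin period update --commit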

Re: [ceph-users] Blocked ops after change from filestore on HDD to bluestore on SSD

2019-02-27 Thread Joachim Kraftmayer
Hi Uwe, I can only recommend the use of enterprise SSDs. We've tested many consumer SSDs in the past, including your SSDs. Many of them are not suitable for long-term use, and some wore out within 6 months. Cheers, Joachim. Homepage: https://www.clyso.com On 27.02.2019 at 10:24
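
A common way to screen an SSD for this kind of use is a single-threaded synchronous 4k write test with fio (device path is a placeholder; the test overwrites the device):

    # enterprise SSDs with power-loss protection typically sustain thousands
    # of IOPS here; many consumer SSDs drop to a few hundred or less
    fio --name=synctest --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based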

Re: [ceph-users] Commercial support

2019-01-24 Thread Joachim Kraftmayer
Hi Ketil, We also offer independent Ceph consulting, and have operated production clusters for more than 4 years, with up to 2,500 OSDs. You can meet many of us in person at the next Cephalocon in Barcelona (https://ceph.com/cephalocon/barcelona-2019/). Regards, Joachim. Clyso GmbH Homepage: https

Re: [ceph-users] Ceph monitors overloaded on large cluster restart

2018-12-20 Thread Joachim Kraftmayer
another cluster with 3 mons. We found in the OSD logs that the OSDs were not getting updates from the monitors fast enough. At the moment we use 5 monitors for large clusters, on dedicated hardware. Joachim -- Clyso GmbH. On 20.12.2018 at 00:47
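
A sketch of what a 5-monitor quorum looks like in ceph.conf (hostnames and addresses are placeholders):

    [global]
    mon_initial_members = mon1, mon2, mon3, mon4, mon5
    mon_host = 10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5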

Re: [ceph-users] RE: ceph pg backfill_toofull

2018-12-12 Thread Joachim Kraftmayer
In such a situation, we noticed a performance drop (caused by the filesystem) and soon had no free inodes left. -- Clyso GmbH. On 12.12.2018 at 09:24, Klimenko, Roman wrote: Ok, I'll try these params. thx!
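
Checking for inode exhaustion on filestore OSDs is straightforward (default filestore mount points assumed):

    # IUse% close to 100 means the OSD filesystem is running out of inodes
    df -i /var/lib/ceph/osd/ceph-*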

[ceph-users] Change of Monitor IP Addresses

2013-07-11 Thread Joachim . Tork
Hi folks, I face the difficulty that I have to change the IP addresses of the monitors in the public network. What needs to be done besides the change to ceph.conf? Best regards, Joachim Tork
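
The usual answer is that the monmap has to be updated as well; a sketch for one monitor (monitor name "a" and the new address are placeholders, and the monitor must be stopped before injecting):

    ceph mon getmap -o /tmp/monmap
    monmaptool --rm a /tmp/monmap
    monmaptool --add a 192.168.1.10:6789 /tmp/monmap
    service ceph stop mon.a
    ceph-mon -i a --inject-monmap /tmp/monmap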

[ceph-users] POSIX ACLs

2013-07-02 Thread Joachim . Tork
Hi, will there be POSIX ACLs in the Ceph filesystem? Best regards, Joachim
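
For reference, POSIX ACL support did later land in the CephFS kernel client; once mounted with the acl option, the standard tools work (mount options, names, and paths are illustrative):

    mount -t ceph mon1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,acl
    setfacl -m u:alice:rw /mnt/cephfs/shared
    getfacl /mnt/cephfs/shared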

[ceph-users] invalid argument in ceph osd create

2013-06-27 Thread Joachim . Tork
like this: root@ceph-test4:/etc/ceph# myceph osd create 71 (22) Invalid argument Unfortunately that doesn't seem to have worked. What's wrong? Best regards, Joachim
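
The likely explanation: `ceph osd create` takes an optional UUID as its argument, not an OSD id, so a bare number like 71 is rejected with EINVAL (22). A sketch of the intended usage:

    # let the cluster pick the next free OSD id (the new id is printed)
    ceph osd create
    # or pass a UUID, which makes a retried create idempotent
    ceph osd create $(uuidgen)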

[ceph-users] Re: Replication between 2 datacenters

2013-06-26 Thread Joachim . Tork
Hi, yes exactly, synchronous replication is OK. The distance between the datacenters is only 15 km. How do I configure this in the crushmap? Best regards, Joachim. From: Sage Weil To: joachim.t...@gad.de Cc: ceph-users@lists.ceph.com Date: 25.06.2013 17:39 Subject: Re
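
A sketch of a crush rule that places two replicas in each of two datacenters (bucket names dc1/dc2 and the ruleset number are placeholders, using the decompiled crushmap syntax of that era):

    rule replicated_2dc {
            ruleset 1
            type replicated
            min_size 4
            max_size 4
            # two copies in the primary DC ...
            step take dc1
            step chooseleaf firstn 2 type host
            step emit
            # ... and two more in the backup DC
            step take dc2
            step chooseleaf firstn 2 type host
            step emit
    }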

[ceph-users] Replication between 2 datacenters

2013-06-25 Thread Joachim . Tork
Hi folks, I have a question concerning data replication using the crushmap. Is it possible to write a crushmap that achieves a 2-times-2 replication, in the sense that a pool is replicated within one data center and all of that is replicated again in the backup datacenter? Best regards, Joachim
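
With a rule like the one sketched in the reply above compiled into the crushmap, the pool itself then needs four replicas and that ruleset (pool name and ruleset number are placeholders; crush_ruleset is the pre-Luminous name of the setting):

    ceph osd pool set mypool size 4
    ceph osd pool set mypool crush_ruleset 1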

[ceph-users] increase pg num for .rgw.buckets

2013-04-03 Thread Joachim
three options, so I would like to know what my best strategy is. I'm currently on 0.56.4, but I'm willing to upgrade to solve this. I can wait a while, as my OSDs haven't filled up yet, but I would like to fix this in the coming days. Any advice i
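
For the record, raising the PG count of an existing pool is a two-step operation (the target count is a placeholder, and on 0.56.x splitting PGs of a pool with data was still considered risky, which is what the upgrade question is about):

    ceph osd pool set .rgw.buckets pg_num 1024
    # once the new PGs are created, let data rebalance into them
    ceph osd pool set .rgw.buckets pgp_num 1024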