I would try to scale horizontally with smaller Ceph nodes. That way you have
the advantage of being able to choose an EC profile that does not
require too much overhead, and you can use host as the failure domain.
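As a rough illustration (the profile name and the k/m values below are only examples, not a sizing recommendation for your cluster), such a profile could be created like this:

  ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
  ceph osd erasure-code-profile get ec-4-2
  ceph osd pool create ecpool 128 128 erasure ec-4-2

With k=4 and m=2 the raw-space overhead is 1.5x instead of 3x for 3-way replication, but you need at least k+m = 6 hosts so that every chunk can land on a different host.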
Joachim
On 09.01.2020 at 15:31, Wido den Hollander wrote:
On 1/9/20 2:27 PM, Stefan
Maybe this will help you:
https://docs.ceph.com/docs/master/radosgw/multisite/#migrating-a-single-site-system-to-multi-site
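Roughly, the first steps from that document look like this (the realm, zonegroup and zone names here are just placeholders; follow the linked page for the complete procedure):

  radosgw-admin realm create --rgw-realm=myrealm --default
  radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=us
  radosgw-admin zone rename --rgw-zone default --zone-new-name us-east-1 --rgw-zonegroup=us
  radosgw-admin period update --commit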
___
Clyso GmbH
On 03.10.2019 at 13:32, M Ranga Swami Reddy wrote:
Thank you. Do we have a quick document to do this migration?
Thanks
Hi Uwe,
I can only recommend the use of enterprise SSDs. We've tested many
consumer SSDs in the past, including your SSDs. Many of them are not
suitable for long-term use, and some wore out within 6 months.
Cheers, Joachim
Homepage: https://www.clyso.com
On 27.02.2019 at 10:24
Hi Ketil,
We also offer independent Ceph consulting and
have operated production clusters for more than 4 years, with up to 2500 OSDs.
You can meet many in person at the next Cephalocon in Barcelona.
(https://ceph.com/cephalocon/barcelona-2019/)
Regards, Joachim
Clyso GmbH
Homepage: https://www.clyso.com
another cluster with 3 mons. We found in the osd logs that the osds were
not getting updates from the monitors fast enough.
At the moment we use 5 monitors for large clusters, running on dedicated
hardware.
Joachim
___
Clyso GmbH
On 20.12.2018 at 00:47
In such a situation, we noticed a performance drop (caused by the
filesystem) and soon had no free inodes left.
___
Clyso GmbH
On 12.12.2018 at 09:24, Klimenko, Roman wrote:
Ok, I'll try these params. thx!
---
Hi folks,
I am facing the difficulty that I have to change the IP addresses of the
monitors in the public network.
What needs to be done besides changing ceph.conf?
Best regards
Joachim Tork
Hi,
Will there be POSIX ACLs in the Ceph filesystem?
Best regards
Joachim
like this.
root@ceph-test4:/etc/ceph# myceph osd create 71
(22) Invalid argument
Unfortunately, that doesn't seem to have worked.
What's wrong?
Best regards
Joachim
Hi,
Yes, exactly. Synchronous replication is OK. The distance between the
datacenters is only 15 km.
How do I configure this in the crushmap?
Best regards
Joachim
From: Sage Weil
To: joachim.t...@gad.de
Cc: ceph-users@lists.ceph.com
Date: 25.06.2013 17:39
Subject: Re
Hi folks,
I have a question concerning data replication using the crushmap.
Is it possible to write a crushmap that achieves a 2 times 2 replication,
so that I have a pool replication in one data center and an overall
replication of this in the backup datacenter?
Best regards
Joachim
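One way to express such a 2 times 2 layout (a rough sketch; the bucket names dc1 and dc2, the rule name, and the exact keywords are assumptions that depend on your crush hierarchy and Ceph version) is a rule that picks two hosts in each datacenter:

  rule replicated_2x2 {
      ruleset 1
      type replicated
      min_size 4
      max_size 4
      step take dc1
      step chooseleaf firstn 2 type host
      step emit
      step take dc2
      step chooseleaf firstn 2 type host
      step emit
  }

A pool that uses this rule with size 4 would then keep two copies in dc1 and two copies in dc2.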
three options, so I would like to know what
my best strategy is. I'm currently on 0.56.4, but I'm willing to upgrade to
solve this.
I can wait a while, as my OSDs haven't filled up yet, but I would like to fix
this in the coming days.
Any advice i