On Thu, Nov 29, 2018 at 11:48:35PM -0500, Michael Green wrote:
Hello collective wisdom,
Ceph neophyte here, running v13.2.2 (mimic).
Question: what tools are available to monitor IO stats at the RBD level?
That is, IOPS, throughput, in-flight IOs and so on?
There is some brand new code for r
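(One approach that already works on mimic is to enable an admin socket on
the RBD client and read the librbd perf counters from it; a rough sketch,
where the socket path and client name are placeholders:

    ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.asok perf dump

The dump contains a per-image librbd section with counters such as rd,
rd_bytes, wr and wr_bytes, from which IOPS and throughput can be derived.)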
Hi~
I want to turn on the CEPH_OSD_FLAG_BALANCE_READS flag to optimize read
performance. Do I just need to set the flag via the librados API, or are
there other problems to watch out for?
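For reference, a minimal librados C sketch of what I have in mind
(untested; error handling omitted, pool and object names are placeholders):

    /* balance_reads.c - read one object with BALANCE_READS set */
    #include <rados/librados.h>
    #include <stdio.h>

    int main(void) {
        rados_t cluster;
        rados_ioctx_t ioctx;
        char buf[4096];
        size_t bytes_read = 0;
        int rval = 0;

        rados_create(&cluster, NULL);          /* connect as client.admin */
        rados_conf_read_file(cluster, NULL);   /* default ceph.conf locations */
        rados_connect(cluster);
        rados_ioctx_create(cluster, "rbd", &ioctx);  /* "rbd" pool assumed */

        rados_read_op_t op = rados_create_read_op();
        rados_read_op_read(op, 0, sizeof(buf), buf, &bytes_read, &rval);

        /* BALANCE_READS lets the read be served by any replica,
         * not only the primary OSD. */
        rados_read_op_operate(op, ioctx, "some-object",
                              LIBRADOS_OPERATION_BALANCE_READS);
        rados_release_read_op(op);

        printf("read %zu bytes (rc=%d)\n", bytes_read, rval);

        rados_ioctx_destroy(ioctx);
        rados_shutdown(cluster);
        return 0;
    }

My understanding is that this relaxes the default read-from-primary
behaviour, so it is usually suggested only where slightly stale reads are
acceptable, hence the question about other problems.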
Hi Mark,
just taking the liberty to follow up on this one, as I'd really like to
get to the bottom of this.
On 28/11/2018 16:53, Florian Haas wrote:
> On 28/11/2018 15:52, Mark Nelson wrote:
>> Option("bluestore_default_buffered_read", Option::TYPE_BOOL,
>> Option::LEVEL_ADVANCED)
>> .set_default(true)
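(For what it's worth, on mimic the option can be flipped via the
centralized config store to compare behaviour, e.g.

    ceph config set osd bluestore_default_buffered_read false

though depending on the exact version an OSD restart may be needed for it
to take effect.)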
Sun, 2 Dec 2018, 20:38 Paul Emmerich paul.emmer...@croit.io:
> 10 copies for a replicated setup seems... excessive.
>
I'm trying to create a golang package for a simple key-value store that
uses the ceph crushmap to distribute data. A ceph crushmap rule is
attached to each namespace.
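As far as I can tell, a crush rule is bound to a pool rather than to a
rados namespace, so in practice that mapping would be one pool per rule,
e.g. (pool and rule names are placeholders):

    ceph osd pool create kvstore 64 64 replicated rule1
    # or, for an existing pool:
    ceph osd pool set kvstore crush_rule rule1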
10 copies for a replicated setup seems... excessive.
The rules are quite simple, for example rule 1 could be:
take default
choose firstn 5 type datacenter # picks 5 datacenters
chooseleaf firstn 2 type host # 2 different hosts in each datacenter
emit
rule 2 is the same but type region and first
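Written out in full crushmap syntax, rule 1 would look roughly like this
(rule name and id are arbitrary):

    rule replicated_5dc_2host {
            id 1
            type replicated
            min_size 10
            max_size 10
            step take default
            step choose firstn 5 type datacenter
            step chooseleaf firstn 2 type host
            step emit
    }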
Hi, I need help with a crushmap.
I have
3 regions - r1 r2 r3
5 dc - dc1 dc2 dc3 dc4 dc5
dc1 dc2 dc3 in r1
dc4 in r2
dc5 in r3
Each dc has 3 nodes with 2 disks
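(For context, a hierarchy like that can be declared with the standard
bucket commands, assuming the default root and the built-in region and
datacenter bucket types, e.g.:

    ceph osd crush add-bucket r1 region
    ceph osd crush move r1 root=default
    ceph osd crush add-bucket dc1 datacenter
    ceph osd crush move dc1 region=r1
    # ...and likewise for r2, r3, dc2..dc5, then the hosts under each dc)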
I need to have 3 rules:
rule1 to have 2 copies on two nodes in each dc - 10 copies total,
failure domain dc
rule2 to have 2 copies on two nodes