Re: [ceph-users] bluestore write iops calculation

2019-08-07 Thread vitalif
I can add RAM and is there a way to increase rocksdb caching, can I increase bluestore_cache_size_hdd to a higher value to cache rocksdb? In recent releases it's governed by the osd_memory_target parameter. In previous releases it's bluestore_cache_size_hdd. Check release notes to know for
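A rough sketch of how one might budget osd_memory_target per OSD when adding RAM; the 24-OSDs-per-node figure comes from this thread, while the OS reserve and the helper function itself are illustrative assumptions, not a recommendation from the list.

# Illustrative only: split a node's RAM across its OSDs to pick an
# osd_memory_target value (recent releases size the BlueStore/rocksdb
# caches from this parameter).
def per_osd_memory_target(node_ram_gib: float, osds_per_node: int,
                          os_reserve_gib: float = 8.0) -> int:
    """Return a per-OSD memory target in bytes, leaving some RAM for the OS."""
    usable_gib = max(node_ram_gib - os_reserve_gib, 0)
    return int(usable_gib / osds_per_node * 1024 ** 3)

# Example: a 128 GiB node with the 24 OSDs mentioned in this thread.
print(per_osd_memory_target(128, 24))  # 5 GiB per OSD, expressed in bytes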

Re: [ceph-users] bluestore write iops calculation

2019-08-06 Thread nokia ceph
On Mon, Aug 5, 2019 at 6:35 PM wrote: > > Hi Team, > > @vita...@yourcmc.ru, thank you for the information and could you please > > clarify on the below queries as well, > > > > 1. Average object size we use will be 256KB to 512KB, will there be > > a deferred write queue? > > With the default

Re: [ceph-users] bluestore write iops calculation

2019-08-05 Thread vitalif
Hi Team, @vita...@yourcmc.ru, thank you for the information and could you please clarify on the below queries as well, 1. Average object size we use will be 256KB to 512KB, will there be a deferred write queue? With the default settings, no (bluestore_prefer_deferred_size_hdd = 32KB) Are you
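A small sketch of the deferral decision described here, assuming bluestore_prefer_deferred_size_hdd = 32 KiB and EC 4+1 (so each OSD receives roughly object_size / 4 per shard); the function name and the shard arithmetic are simplifications for illustration only.

# Illustrative check, following the thread's description: with EC 4+1 each
# data shard is ~object_size / k, and a shard write is deferred when it is
# bluestore_prefer_deferred_size_hdd (32 KiB by default on HDD) or smaller.
PREFER_DEFERRED_HDD = 32 * 1024  # bytes

def is_deferred(object_size_bytes: int, k: int = 4,
                threshold: int = PREFER_DEFERRED_HDD) -> bool:
    shard_size = object_size_bytes // k
    return shard_size <= threshold

for size_kib in (64, 128, 256, 512):
    mode = "deferred" if is_deferred(size_kib * 1024) else "direct"
    print(size_kib, "KiB object ->", mode)
# 256-512 KiB objects give 64-128 KiB shards, so they go straight to the data
# partition, matching the "with the default settings, no" answer above.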

Re: [ceph-users] bluestore write iops calculation

2019-08-05 Thread nokia ceph
Hi Team, @vita...@yourcmc.ru, thank you for the information and could you please clarify on the below queries as well, 1. Average object size we use will be 256KB to 512KB, will there be a deferred write queue? 2. Share the link of the existing rocksdb ticket which does 2 writes + syncs. 3. Any

Re: [ceph-users] bluestore write iops calculation

2019-08-02 Thread Nathan Fish
Any EC pool with m=1 is fragile. By default, min_size = k+1, so you'd immediately stop IO the moment you lose a single OSD. min_size can be lowered to k, but that can cause data loss and corruption. You should set m=2 at a minimum. 4+2 doesn't take much more space than 4+1, and it's far safer. On
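A quick sketch of the availability point above: with the default min_size = k + 1, a k+1 profile cannot tolerate even one missing OSD without halting IO, while k+2 can. The helper below is just illustrative arithmetic, not Ceph code.

# Illustrative: how many OSD failures an EC pool can absorb while still
# serving IO, assuming the default min_size = k + 1.
def failures_tolerated(k: int, m: int) -> int:
    min_size = k + 1          # Ceph default for EC pools
    shards = k + m
    return shards - min_size  # OSDs you can lose before IO stops

print(failures_tolerated(4, 1))  # 0 -> any single OSD loss halts IO
print(failures_tolerated(4, 2))  # 1 -> one OSD can fail and IO continues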

Re: [ceph-users] bluestore write iops calculation

2019-08-02 Thread Maged Mokhtar
On 02/08/2019 08:54, nokia ceph wrote: Hi Team, Could you please help us in understanding the write IOPS inside the Ceph cluster. There seems to be a mismatch in IOPS between the theoretical value and what we see in the disk statistics. Our platform: a 5 node cluster with 120 OSDs, each node having 24 HDD disks (

Re: [ceph-users] bluestore write iops calculation

2019-08-02 Thread vitalif
where small means 32 KB or smaller going to BlueStore, so <= 128 KB writes from the client. Also: please don't do 4+1 erasure coding, see older discussions for details. Can you point me to the discussion about the problems of 4+1? It's not easy to google :) -- Vitaliy Filippov

Re: [ceph-users] bluestore write iops calculation

2019-08-02 Thread Paul Emmerich
On Fri, Aug 2, 2019 at 2:51 PM wrote: > > > 1. For 750 object write requests, data is written directly into the data > > partition and since we use EC 4+1 there will be 5 IOPS across the > > cluster for each object write. This makes 750 * 5 = 3750 IOPS > > don't forget about the metadata and the

Re: [ceph-users] bluestore write iops calculation

2019-08-02 Thread vitalif
1. For 750 object write requests, data is written directly into the data partition and since we use EC 4+1 there will be 5 IOPS across the cluster for each object write. This makes 750 * 5 = 3750 IOPS. Don't forget about the metadata and the deferring of small writes: deferred write queue + metadata,
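A back-of-the-envelope sketch of the accounting being discussed: the 750 * 5 = 3750 figure counts only the raw data writes, and each shard write also carries at least a rocksdb metadata commit (plus, for small writes, a deferred WAL write and the later flush). The per-shard extra-IO counts below are assumptions for illustration, not measured values.

# Illustrative back-of-the-envelope: backend write ops for 750 client object
# writes on an EC 4+1 pool, counting more than just the raw data writes.
def backend_write_ops(client_ops: int, k: int = 4, m: int = 1,
                      metadata_ops_per_shard: int = 1,
                      deferred: bool = False) -> int:
    shards = k + m
    ops_per_shard = 1 + metadata_ops_per_shard      # data write + rocksdb commit
    if deferred:
        ops_per_shard += 1                          # WAL write now, flush later
    return client_ops * shards * ops_per_shard

print(backend_write_ops(750))                  # 7500 vs the naive 750 * 5 = 3750
print(backend_write_ops(750, deferred=True))   # higher still for small writes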

[ceph-users] bluestore write iops calculation

2019-08-02 Thread nokia ceph
Hi Team, Could you please help us in understanding the write IOPS inside the Ceph cluster. There seems to be a mismatch in IOPS between the theoretical value and what we see in the disk statistics. Our platform: a 5 node cluster with 120 OSDs, each node having 24 HDD disks (data, rocksdb and rocksdb.WAL all reside in
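For context, a minimal sketch of the theoretical per-disk write IOPS this question is comparing against, using the 750 object writes/s and EC 4+1 figures mentioned elsewhere in the thread and spreading the shard writes evenly over the 120 OSDs; metadata, WAL and compaction traffic are deliberately ignored here.

# Illustrative: naive expected write IOPS per HDD, spreading EC k+m shard
# writes evenly across all OSDs (ignoring metadata, WAL and compaction).
def expected_iops_per_disk(client_write_iops: float, k: int, m: int,
                           num_osds: int) -> float:
    return client_write_iops * (k + m) / num_osds

# 750 object writes/s on the 120-OSD, EC 4+1 cluster described above:
print(expected_iops_per_disk(750, 4, 1, 120))  # ~31 write IOPS/disk before overheads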