I can add RAM, and is there a way to increase rocksdb caching? Can I
increase bluestore_cache_size_hdd to a higher value to cache rocksdb?
In recent releases it's governed by the osd_memory_target parameter; in
previous releases it was bluestore_cache_size_hdd. Check the release notes
for your version to know for sure.
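For what it's worth, the usual way to size it after adding RAM is simply to
split a node's memory across its OSDs and leave some headroom for the OS.
Here is a minimal sizing sketch with assumed numbers (the 192 GiB per node
and 16 GiB headroom are placeholders; only the 24 OSDs per node comes from
this thread):

    # Back-of-the-envelope osd_memory_target sizing (Python, assumed numbers)
    node_ram_bytes = 192 * 2**30    # hypothetical RAM per node after the upgrade
    os_headroom_bytes = 16 * 2**30  # leave room for the OS, page cache, other daemons
    osds_per_node = 24              # 24 HDD OSDs per node, as in this thread

    osd_memory_target = (node_ram_bytes - os_headroom_bytes) // osds_per_node
    print(osd_memory_target)        # ~7.3 GiB per OSD

As far as I understand, the OSD then auto-tunes its BlueStore/rocksdb caches
to grow into that target, so raising osd_memory_target is the supported way
to give rocksdb more cache on recent releases.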
On Mon, Aug 5, 2019 at 6:35 PM wrote:
> > Hi Team,
> > @vita...@yourcmc.ru , thank you for the information. Could you please
> > clarify the below queries as well:
> >
> > 1. The average object size we use will be 256KB to 512KB. Will there be
> > a deferred write queue?
> > 2. Please share the link to the existing rocksdb ticket about the 2
> > writes + syncs.
> > 3. Any configurat
>
> With the default settings, no (bluestore_prefer_deferred_size_hdd = 32KB).
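To make the "no" concrete: with EC 4+1 each object is split into k=4 data
chunks, so a 256KB-512KB object produces 64KB-128KB writes per OSD, which is
above the 32KB deferral threshold. A rough sketch of that arithmetic (it
treats the threshold as inclusive, following the "32kb or smaller"
explanation later in the thread, and ignores striping details):

    # Would an object write be deferred on an HDD OSD? (illustrative only)
    PREFER_DEFERRED_SIZE_HDD = 32 * 1024      # default, per this thread

    def chunk_is_deferred(object_size, k=4):
        chunk = object_size // k              # EC 4+1 splits data into k=4 chunks
        return chunk <= PREFER_DEFERRED_SIZE_HDD

    print(chunk_is_deferred(256 * 1024))      # False: 64KB chunks go straight to the data partition
    print(chunk_is_deferred(512 * 1024))      # False: 128KB chunks
    print(chunk_is_deferred(128 * 1024))      # True: <= 128KB client writes give <= 32KB chunks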
Any EC pool with m=1 is fragile. By default, min_size = k+1, so you'd
immediately stop IO the moment you lose a single OSD. min_size can be
lowered to k, but that can cause data loss and corruption. You should
set m=2 at a minimum. 4+2 doesn't take much more space than 4+1, and
it's far safer.
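As a rough illustration of the space vs. safety trade-off (raw-space factor
and the default min_size rule only, not exact on-disk numbers):

    # Raw-space factor and default min_size for an EC profile (illustrative)
    def ec_profile(k, m):
        raw_per_byte = (k + m) / k    # raw space written per byte of client data
        min_size = k + 1              # default min_size for EC pools
        return raw_per_byte, min_size, k + m

    print(ec_profile(4, 1))   # (1.25, 5, 5): min_size equals the shard count, one OSD down stops IO
    print(ec_profile(4, 2))   # (1.5, 5, 6):  20% more raw space, but IO survives a single OSD failure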
On 02/08/2019 08:54, nokia ceph wrote:
Hi Team,
Could you please help us in understanding the write iops inside the ceph
cluster. There seems to be a mismatch between the theoretical iops and what
we see in disk status.
Our platform is a 5 node cluster with 120 OSDs, each node having 24 HDDs
(data, rocksdb and rocksdb.WAL all reside on the same disk).
where small means 32kb or smaller going to BlueStore, so <= 128kb
writes
from the client.
Also: please don't do 4+1 erasure coding, see older discussions for
details.
Can you point me to the discussion about the problems of 4+1? It's not
easy to google :)
--
Vitaliy Filippov
On Fri, Aug 2, 2019 at 2:51 PM wrote:
>
> > 1. For 750 object write requests, data is written directly into the data
> > partition, and since we use EC 4+1 there will be 5 iops across the
> > cluster for each object write. This makes 750 * 5 = 3750 iops.
>
> Don't forget about the metadata and the deferring of small writes
> (deferred write queue + metadata).
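To put numbers on "don't forget the metadata": the 3750 figure only counts
the EC data-chunk writes, but every shard write also commits at least one
rocksdb transaction, so the disks see more operations than the theoretical
count. A rough sketch; the one-metadata-write-per-shard factor below is an
assumption for illustration, not a measured value:

    # Theoretical EC data writes vs. what the disks actually see (illustrative)
    client_object_writes = 750
    ec_shards = 5                        # EC 4+1: 4 data + 1 parity shard per object

    data_writes = client_object_writes * ec_shards       # 3750, the theoretical figure
    metadata_writes_per_shard = 1        # ASSUMPTION: >= 1 rocksdb commit per shard write
    backend_writes = data_writes * (1 + metadata_writes_per_shard)

    print(data_writes, backend_writes)   # 3750 data writes vs. ~7500 total ops on the HDDs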