t; due to high load it may cause you some further issues with lost objects
> etc. that weren't fully replicated.
>
> So adding further OSDs will be at your own risk.
>
> On Sun, Apr 28, 2019 at 9:40 PM Nikhil R wrote:
>
>> Thanks Paul,
>> Coming back to my q
//croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Sun, Apr 28, 2019 at 6:57 AM Nikhil R wrote:
> >
> > Hi,
> > I have set noout, noscrub and nodeep-scrub, and the last time we added
> OSDs we a
2 active+recovering+degraded+remapped
client io 11894 kB/s rd, 105 kB/s wr, 981 op/s rd, 72 op/s wr
So, is it a good option to add new OSDs on a new node with SSDs as
journals?
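Before committing to an expansion, it helps to confirm how much recovery is still outstanding and which OSDs are driving the MAX AVAIL limit. A minimal sketch against a live cluster (standard commands; output will of course differ per cluster):

```shell
# Overall health, plus degraded/misplaced object counts
ceph -s
# Per-OSD utilisation, to spot the near-full OSDs behind MAX AVAIL
ceph osd df
# Follow recovery/backfill progress live
ceph -w
```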
in.linkedin.com/in/nikhilravindra
On Sun, Apr 28, 2019 at 6:05 AM Erik McCormick wrote:
> On Sat, Apr
This is when iostat shows the disk utilised up to 100%.
Appreciate your help David
On Sun, 28 Apr 2019 at 00:46, David C wrote:
>
>
> On Sat, 27 Apr 2019, 18:50 Nikhil R wrote:
>
>> Guys,
>> We now have a total of 105 osd’s on 5 baremetal nodes each hosting 21
Guys,
We now have a total of 105 OSDs on 5 baremetal nodes, each hosting 21 OSDs
on 7 TB HDDs, with journals on HDD too. Each journal is about 5 GB.
We expanded our cluster last week and added 1 more node with 21 HDDs and
journals on the same disks.
Our client I/O is too heavy and we are not able
We hit MAX AVAIL when one of our OSDs was 92% full; we immediately
reweighted the OSD from 1 to 0.95 and added new OSDs to the cluster.
But our backfilling is too slow and the ingestion into Ceph is too high.
Currently we have stopped the OSD and are trying to export-remove a few PGs
to free up some s
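For anyone following the export-remove step: with the OSD stopped, this is typically done with ceph-objectstore-tool. A sketch only; the OSD path and PG id below are hypothetical, and the export should be verified before anything is removed:

```shell
# OSD must be stopped first; path and pgid are illustrative only
OSD=/var/lib/ceph/osd/ceph-21
PG=1.2f

# Export the PG to a file, then remove it from the OSD to free space
ceph-objectstore-tool --data-path $OSD --journal-path $OSD/journal \
    --pgid $PG --op export --file /backup/$PG.export
ceph-objectstore-tool --data-path $OSD --journal-path $OSD/journal \
    --pgid $PG --op remove
```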
Team,
Is there a way to force backfill a PG in Ceph Jewel? I know this is
available in Mimic. Is it available in Jewel?
I tried ceph pg backfill but no luck.
Any help would be appreciated as we have a prod issue.
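For reference: the explicit command landed in Luminous and later, so it is not expected to work on Jewel. A sketch with a hypothetical PG id:

```shell
# Luminous (12.2) and newer only; not available in Jewel
ceph pg force-backfill 1.2f          # 1.2f is a hypothetical PG id
ceph pg cancel-force-backfill 1.2f   # drop the priority boost again
```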
The issue we have is large leveldbs. Do we have any setting to disable
compaction of leveldb on OSD start?
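The ceph.conf quoted later in this thread already carries the two options aimed at this; whether they are honoured depends on the release in use:

```
[osd]
osd_compact_leveldb_on_mount = false
leveldb_compact_on_mount = false
```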
On Fri, Mar 29, 2019 at 7:44 PM Nikhil R wrote:
> Any help on this would be much appreciated as our prod has been down for a
> day and each osd
Any help on this would be much appreciated as our prod has been down for a
day and each OSD restart is taking 4-5 hours.
On Fri, Mar 29, 2019 at 7:43 PM Nikhil R wrote:
> We have maxed out the files per dir. Ceph is trying to do an online split
> due to
1 leveldb: Delete type=2 #1029109
Is there a way I can skip this?
On Fri, Mar 29, 2019 at 11:32 AM huang jun wrote:
> Nikhil R wrote on Fri, Mar 29, 2019 at 1:44 PM:
> >
> > if I comment out filestore_split_multiple = 72 and filestore_merge_threshold =
> 480 in
lit settings result in the problem,
> what about commenting those settings out, then see if it still takes that
> long to restart?
> From a quick search in the code, these two settings
> filestore_split_multiple = 72
> filestore_merge_threshold = 480
> do not support online change.
>
>
Was the time really spent on the leveldb compact operation?
> Or you can turn on debug_osd=20 to see what happens.
> What about the disk util during start?
>
> Nikhil R wrote on Thu, Mar 28, 2019 at 4:36 PM:
> >
> > Ceph OSD restarts are taking too long.
> > Below is my ceph.conf:
> >
Ceph OSD restarts are taking too long.
Below is my ceph.conf:
[osd]
osd_compact_leveldb_on_mount = false
leveldb_compact_on_mount = false
leveldb_cache_size = 1073741824
leveldb_compression = false
osd_mount_options_xfs = "rw,noatime,inode64,logbsize=256k"
osd_max_backfills = 1
osd_recovery_max_