Hello,
same here. My fix was to examine the keyring file on the misbehaving
server and compare it to the one on a different server. I found the file
had the key but was missing:
caps mgr = "profile crash"
caps mon = "profile crash"
I added those back in and now it's OK.
/var/lib/ceph/./crash.node1/keyring
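In case it helps anyone, the same fix can be done via the CLI instead of
editing the file directly. A sketch, assuming the crash client on that host
is named client.crash.node1 (check ceph auth ls for the actual entity name):

  ceph auth get client.crash.node1
  ceph auth caps client.crash.node1 mon 'profile crash' mgr 'profile crash'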
Thanks for the help. That page had sort of the answer. I had tried "ceph osd
crush tunables optimal" earlier but I got an error:
"Error EINVAL: new crush map requires client version jewel but
require_min_compat_client is firefly"
The links for help on that page are dead, but I did end up finding the fix.
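For the archives, the usual fix for that EINVAL is to raise the minimum
client compat level first. A sketch, assuming no pre-jewel clients are still
connected (verify with ceph features before changing it):

  ceph features
  ceph osd set-require-min-compat-client jewel
  ceph osd crush tunables optimal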
Since it's not clear from your email, I'm assuming you've also already done
ceph osd require-osd-release octopus
and fully enabled msgr2?
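If not, a quick check is something like this (each mon should list a v2
address in its addrvec), followed by the enable command if they don't:

  ceph mon dump
  ceph mon enable-msgr2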
Also, did the new Octopus omap conversion complete already? There were
threads earlier about it using loads of memory.
(see ceph config set osd bluestore_fsck_quick_fix_on_mount)
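i.e., roughly:

  ceph config get osd bluestore_fsck_quick_fix_on_mount
  # set to false to defer the conversion to a planned window
  ceph config set osd bluestore_fsck_quick_fix_on_mount false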
> On 5 Jul 2020 at 15:26, Wout van Heeswijk wrote the following:
>
> Good point, we've looked at that, but can't see any message regarding OOM
> Killer:
>
I should add here that we looked at changing osd_memory_target as well, but
that did not make a difference.
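For reference, the change was along these lines (the value here is only an
example, not what we actually used):

  ceph config set osd osd_memory_target 2147483648  # e.g. 2 GiB per OSD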
tcmalloc seems
Hi all.
I see this sentence on many sites. Does anyone know why?
> Then turn off print continue. If you have it set to true, you may encounter
> problems with PUT operations
I use nginx in front of my RGW and proxy-pass the Expect header through it.
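For context, the relevant nginx line is roughly this (the rest of the
location block omitted):

  proxy_set_header Expect $http_expect;

and I assume the setting the quoted advice refers to is the rgw option:

  rgw print continue = false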
Thanks.
Good point, we've looked at that, but can't see any message regarding
OOM Killer:
root@st0:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
root@st0:~# grep -i "out of memory" /var/log/kern.log
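(For completeness, the same check can also be run against the kernel ring
buffer and the journal, e.g.:

  dmesg -T | grep -i 'out of memory'
  journalctl -k | grep -i oom
)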
On 5/07/2020 10:43 pm, Wout van Heeswijk wrote:
After unsetting the norecover and nobackfill flags, some OSDs started
crashing every few minutes. The OSD log, even with high debug
settings, doesn't seem to reveal anything; it just stops logging mid log
line.
POOMA U, but could the OOM Killer be at work?
Hi All,
A customer of ours has upgraded the cluster from Nautilus to Octopus
after experiencing issues with OSDs not being able to connect to each
other or to clients/mons/mgrs. The connectivity issues were related to
msgrV2 and the require_osd_release setting not being set to nautilus.
After fixing
Dumb question - in what log file are the RocksDB spillover warnings posted?
Thanks.
--
Lindsay
On 5/07/2020 8:16 pm, Lindsay Mathieson wrote:
But from what you are saying, the 500GB disk would have been gaining
no benefit? I would be better off allocating 30GB (or 62GB) for each
disk?
Edit: 30GB or 62GB (it's a 127GB SSD)
--
Lindsay
Hi all.
There are high iops on my bucket index pool when there are about 1K PUT
requests/s.
Is there any way I can debug why there are so many iops on the bucket
index pool?
Thanks.
On 5/07/2020 7:38 pm, Alexander E. Patrakov wrote:
If the wal location is not explicitly specified, it goes together with
the db. So it is on the SSD.
Conversely, what happens with the block.db if I place the wal with
--block.wal
The db then stays with the data.
Ah, so my 2nd reading was correct.
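To make that concrete, a sketch of the two layouts (assuming the SSD is
/dev/sde; substitute your actual device):

  # db on the SSD; the wal implicitly goes with it:
  ceph-volume lvm create --bluestore --data /dev/sdd --block.db /dev/sde
  # wal alone on the SSD; the db stays with the data on /dev/sdd:
  ceph-volume lvm create --bluestore --data /dev/sdd --block.wal /dev/sde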
On Sun, Jul 5, 2020 at 6:57 AM Lindsay Mathieson wrote:
>
> Nautilus install.
>
> Documentation seems a bit ambiguous to me - this is for a spinner + SSD,
> using ceph-volume
>
> If I put the block.db on the SSD with
>
> "ceph-volume lvm create --bluestore --data /dev/sdd --block.db
> /dev/sd
On Sun, 5 July 2020 at 00:15, Anthony D'Atri wrote:
> min_compat is a different thing entirely.
> You need to set the tunables as a group. This will cause data to move, so
> you may wish to throttle recovery, model the PG movement ahead of time, use
> the upmap trick to control movement etc.
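A sketch of the recovery throttling mentioned above (deliberately
conservative values; tune for your cluster):

  ceph config set osd osd_max_backfills 1
  ceph config set osd osd_recovery_max_active 1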