[ceph-users] Re: librados documentation has gone

2020-10-12 Thread Daniel Mezentsev
If somebody is looking, I found the docs here: https://www.bookstack.cn/read/ceph-en Thanks for the reply. Unfortunately this is the case. In Google you can use the "Show page in cache" option to get your desired site. I guess there was a change from our documentation guys and they forgot to set a

[ceph-users] Ceph test cluster, how to estimate performance.

2020-10-12 Thread Daniel Mezentsev
Hi Ceph users, I'm working on a Common Lisp client utilizing the rados library. Got some results, but I don't know how to judge whether I'm getting correct performance. I'm running a test cluster from a laptop - 2 OSDs, each a VM with 4 GB RAM and 4 vCPUs; monitors and mgr are running from the same VM(s). As
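
A sanity baseline to compare a custom client against is the bundled rados bench tool; a minimal sketch, assuming a test pool named "testpool" already exists:

  # Write for 10 seconds with the default 16 concurrent ops; keep the objects for the read test
  rados bench -p testpool 10 write --no-cleanup
  # Sequential reads of the objects written above
  rados bench -p testpool 10 seq
  # Remove the benchmark objects afterwards
  rados -p testpool cleanup

If the Lisp client lands in the same ballpark as rados bench on the same pool, the bindings are probably not the bottleneck.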

[ceph-users] Re: Bluestore migration: per-osd device copy

2020-10-12 Thread Chris Dunlop
Hi Anthony, Thanks for looking into this and opening the ticket - I'll keep an eye on it. For prepping the LVMs etc. I was thinking I could probably use 'ceph-volume lvm prepare' and then fix up the relevant LV tags with the appropriate values from the origin OSD. Cheers, Chris On Mon, Oct
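
For reference, a rough sketch of that idea, assuming a volume group/LV named ceph-vg/osd-lv and with the fsid values as placeholders to be copied from the origin OSD's LVs:

  # Prepare a new bluestore OSD on the target LV
  ceph-volume lvm prepare --bluestore --data ceph-vg/osd-lv
  # Inspect the tags ceph-volume wrote
  lvs -o lv_name,lv_tags ceph-vg/osd-lv
  # Swap a tag for the value carried over from the origin OSD
  lvchange --deltag "ceph.osd_fsid=<new-fsid>" ceph-vg/osd-lv
  lvchange --addtag "ceph.osd_fsid=<origin-fsid>" ceph-vg/osd-lv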

[ceph-users] Re: MONs are down, the quorum is unable to resolve.

2020-10-12 Thread Gaël THEROND
I'm not using Rook, although I think it will probably help a lot with that recovery, as Rook is container-based too! Thanks a lot! On Tue, Oct 13, 2020 at 00:19, Brian Topping wrote: > I see, maybe you want to look at these instructions. I don't know if you > are running Rook, but the point

[ceph-users] Re: MONs are down, the quorum is unable to resolve.

2020-10-12 Thread Brian Topping
I see, maybe you want to look at these instructions. I don't know if you are running Rook, but the point about keeping the container alive by using `sleep` is important. Then you can get into the container with `exec` and do what you need to.
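
A sketch of that pattern for a Rook-managed mon, assuming the usual rook-ceph namespace and a deployment named rook-ceph-mon-a (scale the operator down first, or it will revert the patch):

  # Stop the operator so it does not undo the change
  kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0
  # Set the mon entrypoint to sleep so the container stays up without ceph-mon running
  kubectl -n rook-ceph patch deployment rook-ceph-mon-a --type=json \
    -p='[{"op":"add","path":"/spec/template/spec/containers/0/command","value":["sleep","infinity"]}]'
  # Open a shell inside the container and work on the monitor store
  kubectl -n rook-ceph exec -it deploy/rook-ceph-mon-a -- bash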

[ceph-users] Re: MONs are down, the quorum is unable to resolve.

2020-10-12 Thread Brian Topping
Hi there! This isn’t a difficult problem to fix. For purposes of clarity, the monmap is just a part of the monitor database. You generally have all the details correct though. Have you looked at the process in
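
For anyone following along, the usual shape of that process is to edit the monmap offline; a sketch, with the mon IDs and temp path as placeholders (the monitor must be stopped first):

  # Extract the monmap from the surviving monitor
  ceph-mon -i mon-a --extract-monmap /tmp/monmap
  # Inspect it, then remove the dead monitors from the map
  monmaptool /tmp/monmap --print
  monmaptool /tmp/monmap --rm mon-b
  # Inject the edited map back and restart the monitor
  ceph-mon -i mon-a --inject-monmap /tmp/monmap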

[ceph-users] MONs are down, the quorum is unable to resolve.

2020-10-12 Thread Gaël THEROND
Hi everyone, Because of unfortunate events, I have a container-based Ceph cluster (Nautilus) in bad shape. It is a lab cluster made of only 2 nodes as control plane (I know it's bad :-)); each of these nodes runs a mon, a mgr and a rados-gw containerized ceph_daemon. They were

[ceph-users] Re: librados documentation has gone

2020-10-12 Thread ceph
Unfortunately this is the case. In Google you can use the "Show page in cache" option to get your desired site. I guess there was a change from our documentation guys and they forgot to set a proper redirect/rewrite rule for "older" sites which are already crawled via Google. But I am not sure...

[ceph-users] Re: Bluestore migration: per-osd device copy

2020-10-12 Thread Anthony D'Atri
Poking through the source I *think* the doc should indeed refer to the "dup" function, vs. "copy". That said, arguably we shouldn't have a section in the docs that says "there's this thing you can do but we aren't going to tell you how". Looking at the history / blame info, which only seems
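
For the record, the dup invocation looks roughly like this; the paths are placeholders, and --target-data-path is quoted from memory of the tool's help output, so verify with ceph-objectstore-tool --help before relying on it:

  # The source (filestore) OSD must be stopped; the target is a freshly prepared bluestore data dir
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --target-data-path /var/lib/ceph/osd/ceph-0-new \
    --op dup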

[ceph-users] Re: Ubuntu 20 with octopus

2020-10-12 Thread Seena Fallah
If everything is stable, isn't it good to update this doc? https://docs.ceph.com/en/latest/start/os-recommendations/ On Mon, Oct 12, 2020 at 12:56 PM Burkhard Linke < burkhard.li...@computational.bio.uni-giessen.de> wrote: > Hi, > > On 10/12/20 2:31 AM, Seena Fallah wrote: > > Hi all, > > > >

[ceph-users] Long heartbeat ping times

2020-10-12 Thread Frank Schilder
Dear all, occasionally I find messages like "Health check update: Long heartbeat ping times on front interface seen, longest is 1043.153 msec (OSD_SLOW_PING_TIME_FRONT)" in the cluster log. Unfortunately, I seem to be unable to find out which OSDs were affected (a posteriori). I cannot find
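
On Nautilus and later, one way to dig those numbers out after the fact is the OSD admin socket; a sketch, with osd.0 as a placeholder and the threshold argument in milliseconds:

  # Current slow-ping details as seen by the cluster
  ceph health detail
  # Ping times recorded by this OSD that exceed 1000 ms
  ceph daemon osd.0 dump_osd_network 1000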

[ceph-users] Re: Ubuntu 20 with octopus

2020-10-12 Thread Stefan Kooman
On 2020-10-12 09:28, Robert Sander wrote: > Hi, > > On 12.10.20 at 02:31, Seena Fallah wrote: >> >> Does anyone have any production cluster with Ubuntu 20 (Focal), or any >> suggestion, or any bugs that prevent deploying Ceph Octopus on Ubuntu 20? > > The underlying distribution does not matter

[ceph-users] Re: Ubuntu 20 with octopus

2020-10-12 Thread Stefan Kooman
On 2020-10-12 08:58, Seena Fallah wrote: > I've seen this PR that reverts the latest Ubuntu version from 20.04 to > 18.04 because of some failures! > Are there any updates on this? > https://github.com/ceph/ceph/pull/35110 Apparently there have been attempts to get Ceph built on Focal. I did not

[ceph-users] Re: Bluestore migration: per-osd device copy

2020-10-12 Thread Eugen Block
I really should read these emails more carefully... Sorry, thanks for pointing that out. I haven't done the filestore migration per OSD. I created a filestore OSD in my lab setup to play around with ceph-objectstore-tool, and I couldn't find anything except for '--op dup', but it's not

[ceph-users] Re: Cluster under stress - flapping OSDs?

2020-10-12 Thread Burkhard Linke
Hi, On 10/12/20 12:05 PM, Kristof Coucke wrote: Diving into the different logs and searching for answers, I came across the following: PG_DEGRADED Degraded data redundancy: 2101057/10339536570 objects degraded (0.020%), 3 pgs degraded, 3 pgs undersized pg 1.4b is stuck undersized for
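
To see which OSDs a stuck PG maps to and what it is waiting for, something along these lines (pg 1.4b taken from the log quoted above):

  # List all PGs currently stuck undersized
  ceph pg dump_stuck undersized
  # Show the up/acting sets and recovery state of the affected PG
  ceph pg 1.4b query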

[ceph-users] Re: Cluster under stress - flapping OSDs?

2020-10-12 Thread Kristof Coucke
I'll answer it myself: when CRUSH fails to find enough OSDs to map to a PG, it will show as 2147483647, which is ITEM_NONE, i.e. no OSD found.
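
One way to confirm CRUSH genuinely cannot place the PG, rather than this being transient peering, is to replay the mappings offline; a sketch, with rule 0 and a pool size of 3 as assumptions:

  # Export the in-use CRUSH map and test it for unmappable inputs
  ceph osd getcrushmap -o crush.bin
  crushtool -i crush.bin --test --rule 0 --num-rep 3 --show-bad-mappings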

[ceph-users] Re: Cluster under stress - flapping OSDs?

2020-10-12 Thread Kristof Coucke
Diving into the different logs and searching for answers, I came across the following: PG_DEGRADED Degraded data redundancy: 2101057/10339536570 objects degraded (0.020%), 3 pgs degraded, 3 pgs undersized pg 1.4b is stuck undersized for 63114.227655, current state

[ceph-users] Re: Ubuntu 20 with octopus

2020-10-12 Thread Burkhard Linke
Hi, On 10/12/20 2:31 AM, Seena Fallah wrote: Hi all, Does anyone have any production cluster with Ubuntu 20 (Focal), or any suggestion, or any bugs that prevent deploying Ceph Octopus on Ubuntu 20? We are running our new Ceph cluster on Ubuntu 20.04 and the Octopus release. Packages are

[ceph-users] Cluster under stress - flapping OSDs?

2020-10-12 Thread Kristof Coucke
Hi all, We've been having trouble with our Ceph cluster for over a week now. Short info regarding our situation: - Original cluster had 10 OSD nodes, each having 16 OSDs - Expansion was necessary, so another 6 nodes have been added - Version: 14.2.11 Last week we saw heavily loaded OSD servers, after

[ceph-users] Re: Ubuntu 20 with octopus

2020-10-12 Thread Robert Sander
Hi, On 12.10.20 at 02:31, Seena Fallah wrote: > > Does anyone have any production cluster with Ubuntu 20 (Focal), or any > suggestion, or any bugs that prevent deploying Ceph Octopus on Ubuntu 20? The underlying distribution does not matter any more as long as you get cephadm bootstrapped on one
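
The bootstrap step referred to is roughly the following, with the MON IP as a placeholder (the download URL is the one the Octopus docs point at):

  # Fetch the standalone cephadm script and bootstrap the first node
  curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
  chmod +x cephadm
  ./cephadm bootstrap --mon-ip 192.168.0.10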

[ceph-users] Re: Ubuntu 20 with octopus

2020-10-12 Thread Seena Fallah
I've seen this PR that reverts the latest Ubuntu version from 20.04 to 18.04 because of some failures! Are there any updates on this? https://github.com/ceph/ceph/pull/35110 On Mon, Oct 12, 2020 at 4:11 AM Robert Ruge wrote: > I am using Ubuntu 20.04 LTS for a five-node 1 PB CephFS setup with