[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-05-02 Thread Ilya Dryomov
On Sun, May 2, 2021 at 11:15 PM Magnus Harlander wrote:
>
> Hi,
>
> I know there is a thread about problems with mounting cephfs with 5.11
> kernels. I tried everything that's mentioned there, but I still can not
> mount a cephfs from an octopus node.
>
> I verified:
>
> - I can not mount with

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-05-03 Thread Ilya Dryomov
On Mon, May 3, 2021 at 9:20 AM Magnus Harlander wrote:
>
> On 03.05.21 at 00:44, Ilya Dryomov wrote:
>> On Sun, May 2, 2021 at 11:15 PM Magnus Harlander wrote:
>>> Hi,
>>>
>>> I know there is a thread about problems with mounting cephfs with 5.11
>>> kernels.
>> ...
>>
>> Hi Magnus,
>>
>> What is the o

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-05-03 Thread Ilya Dryomov
On Mon, May 3, 2021 at 12:00 PM Magnus Harlander wrote:
>
> On 03.05.21 at 11:22, Ilya Dryomov wrote:
>> max_osd 12
>
> I never had more than 10 osds on the two osd nodes of this cluster.
>
> I was running a 3 osd-node cluster earlier with more than 10
> osds, but the current cluster has been se

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-05-03 Thread Ilya Dryomov
On Mon, May 3, 2021 at 12:27 PM Magnus Harlander wrote:
>
> On 03.05.21 at 12:25, Ilya Dryomov wrote:
>> ceph osd setmaxosd 10
>
> Bingo! Mount works again.
>
> Very strange things are going on here (-:
>
> Thanx a lot for now!! If I can help to track it down, please let me know.

Good to kno
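
The fix in this message boils down to: shrink max_osd back to (highest OSD id + 1). A minimal sketch of that logic, with the live `ceph` invocations shown only as comments since they need a running cluster; the 10-OSD id list is an assumption matching Magnus's description, not real output:

```shell
# Live-cluster commands (not runnable here, shown for context):
#   ceph osd getmaxosd       # e.g. "max_osd = 12 in epoch ..."
#   ceph osd ls              # lists the actual OSD ids
#   ceph osd setmaxosd 10    # the workaround that made the mount work
#
# The value to pass to setmaxosd is (highest OSD id + 1):
ids="0 1 2 3 4 5 6 7 8 9"    # hypothetical `ceph osd ls` output, flattened
max_id=-1
for i in $ids; do
  if [ "$i" -gt "$max_id" ]; then max_id=$i; fi
done
setmaxosd=$((max_id + 1))
echo "ceph osd setmaxosd $setmaxosd"
```

With contiguous ids 0-9 this prints `ceph osd setmaxosd 10`, matching the command that fixed the mount above.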

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-05-03 Thread Ilya Dryomov
On Mon, May 3, 2021 at 12:24 PM Magnus Harlander wrote:
>
> On 03.05.21 at 11:22, Ilya Dryomov wrote:
>
> There is a 6th osd directory on both machines, but it's empty
>
> [root@s0 osd]# ll
> total 0
> drwxrwxrwt. 2 ceph ceph 200  2. Mai 16:31 ceph-1
> drwxrwxrwt. 2 ceph ceph 200  2. Mai 16:31 ce

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-05-11 Thread Konstantin Shalygin
Hi Ilya,

> On 3 May 2021, at 14:15, Ilya Dryomov wrote:
>
> I don't think empty directories matter at this point. You may not have
> had 12 OSDs at any point in time, but the max_osd value appears to have
> gotten bumped when you were replacing those disks.
>
> Note that max_osd being greater

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-05-11 Thread Ilya Dryomov
On Tue, May 11, 2021 at 10:50 AM Konstantin Shalygin wrote:
>
> Hi Ilya,
>
>> On 3 May 2021, at 14:15, Ilya Dryomov wrote:
>>
>> I don't think empty directories matter at this point. You may not have
>> had 12 OSDs at any point in time, but the max_osd value appears to have
>> gotten bumped when you

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-05-11 Thread Konstantin Shalygin
> On 11 May 2021, at 14:24, Ilya Dryomov wrote:
>
> No, as mentioned above max_osds being greater is not a problem per se.
> Having max_osds set to 1 when you only have a few dozen is going to
> waste a lot of memory and network bandwidth, but if it is just slightly
> bigger it's not someth

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-06-15 Thread Dan van der Ster
Hi Ilya,

We're now hitting this on CentOS 8.4. The "setmaxosd" workaround fixed
access to one of our clusters, but isn't working for another, where we
have gaps in the osd ids, e.g.

# ceph osd getmaxosd
max_osd = 553 in epoch 691642
# ceph osd tree | sort -n -k1 | tail
541 ssd 0.87299

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-06-15 Thread Dan van der Ster
Replying to own mail...

On Tue, Jun 15, 2021 at 7:54 PM Dan van der Ster wrote:
>
> Hi Ilya,
>
> We're now hitting this on CentOS 8.4.
>
> The "setmaxosd" workaround fixed access to one of our clusters, but
> isn't working for another, where we have gaps in the osd ids, e.g.
>
> # ceph osd getmax
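
One plausible reading of the gap problem Dan describes: with sparse ids, `ceph osd ls | wc -l` undercounts relative to the highest id, so feeding the count to `setmaxosd` would cut the map below existing OSDs. A sketch under that assumption (the sparse id list is hypothetical; only the 553-slot figure comes from the output above):

```shell
# Hypothetical sparse `ceph osd ls` output on a cluster with id gaps;
# the real cluster above reports max_osd = 553.
ids="0 1 2 3 541 545 552"
count=0
max_id=-1
for i in $ids; do
  count=$((count + 1))
  if [ "$i" -gt "$max_id" ]; then max_id=$i; fi
done
echo "osd count: $count"                  # far smaller than the id span
echo "minimum safe max_osd: $((max_id + 1))"
```

Here the count is 7 but the highest id is 552, so max_osd cannot go below 553; the plain "setmaxosd = number of OSDs" recipe only works when ids are contiguous.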

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-06-15 Thread Ackermann, Christoph
Dan,

sorry, we have no gaps in osd numbering:

isceph@ceph-deploy:~$ sudo ceph osd ls | wc -l; sudo ceph osd tree | sort -n -k1 | tail
76
[..]
73  ssd  0.28600  osd.73  up  1.0  1.0
74  ssd  0.27689  osd.74  up  1.0  1.0

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-06-15 Thread Dan van der Ster
Hi Christoph,

What about the max osd? If "ceph osd getmaxosd" is not 76 on this
cluster, then set it: `ceph osd setmaxosd 76`.

-- dan

On Tue, Jun 15, 2021 at 8:54 PM Ackermann, Christoph wrote:
>
> Dan,
>
> sorry, we have no gaps in osd numbering:
> isceph@ceph-deploy:~$ sudo ceph osd ls |wc -l
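
Dan's advice can be folded into a small helper that decides whether `setmaxosd` is needed. This is a hypothetical sketch, not a ceph tool; `check_maxosd` and the example numbers are made up for illustration, and on a real cluster its inputs would come from `ceph osd getmaxosd` and `ceph osd ls`:

```shell
# check_maxosd CURRENT_MAX_OSD HIGHEST_OSD_ID
# Prints the setmaxosd command if the map has more slots than needed.
check_maxosd() {
  current=$1                 # value from `ceph osd getmaxosd`
  highest=$2                 # highest id seen in `ceph osd ls`
  wanted=$((highest + 1))    # minimum map size that still covers all OSDs
  if [ "$current" -gt "$wanted" ]; then
    echo "ceph osd setmaxosd $wanted"
  else
    echo "max_osd already minimal"
  fi
}

check_maxosd 80 75    # prints: ceph osd setmaxosd 76
```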

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-06-15 Thread Ackermann, Christoph
Hi Dan,

Thanks for the hint, I'll try this tomorrow with a test bed first. This
evening I had to fix some Bareos client systems to get a quiet sleep. ;-)
Will give you feedback asap.

Best regards,
Christoph

On Tue, Jun 15, 2021 at 21:03, Dan van der Ster <d...@vanderster.com> wrote:
>

[ceph-users] Re: cephfs mount problems with 5.11 kernel - not a ipv6 problem

2021-06-16 Thread Ackermann, Christoph
Good morning Dan,

Adjusting "ceph osd setmaxosd 76" solved the problem so far. :-)

Thanks and best regards,
Christoph

On Tue, Jun 15, 2021 at 21:14, Ackermann, Christoph <c.ackerm...@infoserve.de> wrote:
> Hi Dan,
>
> Thanks for the hint, I'll try this tomorrow with a test bed first. T