[ceph-users] Re: rbd-nbd crashes Error: failed to read nbd request header: (33) Numerical argument out of domain

2021-05-18 Thread Zhi Zhang
On Wed, May 19, 2021 at 11:19 AM Zhi Zhang wrote:
> On Tue, May 18, 2021 at 10:58 PM Mykola Golub wrote:
> > Could you please provide the full rbd-nbd log? If it is too large for
> > the attachment then maybe via some public url?
> ceph.rbd-client.log.bz2

[ceph-users] Re: rbd-nbd crashes Error: failed to read nbd request header: (33) Numerical argument out of domain

2021-05-18 Thread Zhi Zhang
On Tue, May 18, 2021 at 10:58 PM Mykola Golub wrote:
> Could you please provide the full rbd-nbd log? If it is too large for
> the attachment then maybe via some public url?

ceph.rbd-client.log.bz2

[ceph-users] Ceph increase RBD Pool Size not change

2021-05-18 Thread codignotto
I am increasing the size of my pool from 24TiB to 28TiB. I make the change via the Ceph portal and it shows that everything is OK, but the real value does not change; it stays at 24TiB. Could this be a bug? I was able to increase it up to 24TiB. Can I increase it to 34TB, which is the size
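
In case it helps to cross-check what the dashboard reports, a minimal CLI sketch for inspecting and setting a pool quota; this assumes the "size" being changed is the pool's quota, and "mypool" is a placeholder name:

    # Show the current quota for the pool (placeholder name "mypool")
    ceph osd pool get-quota mypool
    # Set the quota to 28 TiB, expressed in bytes
    ceph osd pool set-quota mypool max_bytes $((28 * 1024**4))
    # Per-pool usage and quota columns
    ceph df detail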

[ceph-users] Re: rbd-nbd crashes Error: failed to read nbd request header: (33) Numerical argument out of domain

2021-05-18 Thread Mykola Golub
Could you please provide the full rbd-nbd log? If it is too large for the attachment then maybe via some public url?

--
Mykola Golub

On Tue, May 18, 2021 at 03:04:51PM +0800, Zhi Zhang wrote:
> Hi guys,
>
> We are recently testing rbd-nbd using ceph N version. After map rbd
> image, mkfs and
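
A minimal sketch of how client-side logging could be enabled before reproducing the problem; the log path and debug levels below are assumptions, not taken from the thread:

    # ceph.conf on the client running rbd-nbd
    [client]
        log file = /var/log/ceph/ceph.rbd-client.log
        debug rbd = 20
        debug rados = 20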

[ceph-users] Re: MDS rank 0 damaged after update to 14.2.20

2021-05-18 Thread Dan van der Ster
On Tue, May 18, 2021 at 4:00 PM Eugen Block wrote:
> Hi,
>
> sorry for not responding, our mail server was affected, too, I got
> your response after we got our CephFS back online.

Glad to hear it's back online!

> > Do you have the mds log from the initial crash?
>
> I would need to take a
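
For reference, a hedged sketch of where the requested log usually lives and how recorded metadata damage can be listed; the daemon name is a placeholder:

    # Default MDS log location on the host running the daemon
    less /var/log/ceph/ceph-mds.<name>.log
    # Ask the MDS what it has marked as damaged (needs a running MDS for that rank)
    ceph tell mds.<name> damage ls
    # Recent daemon crashes recorded by the crash module, if enabled
    ceph crash ls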

[ceph-users] Re: MDS rank 0 damaged after update to 14.2.20

2021-05-18 Thread Eugen Block
Hi,

sorry for not responding, our mail server was affected, too, I got your response after we got our CephFS back online.

> Do you have the mds log from the initial crash?

I would need to take a closer look but we're currently dealing with the affected clients to get everything back in

[ceph-users] image + snapshot remove

2021-05-18 Thread Szabo, Istvan (Agoda)
Hi, a pool has been deleted which contained an image related to another pool, and now I can't remove the image because it has a snapshot. Can't list snapshots, can't purge, can't flatten, can't really do anything. Is it possible to remove the image without deleting the pool? 2021-05-18 14:55:26.710
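
For context, the usual removal sequence looks roughly like the sketch below (pool, image, and snapshot names are placeholders); the poster reports these steps failing here, presumably because the clone relationship points at the deleted pool:

    rbd snap ls mypool/myimage                 # list snapshots
    rbd children mypool/myimage@mysnap         # check for clones depending on the snapshot
    rbd snap unprotect mypool/myimage@mysnap   # only needed if the snapshot is protected
    rbd snap purge mypool/myimage              # remove all snapshots
    rbd rm mypool/myimage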

[ceph-users] Force processing of num_strays in mds

2021-05-18 Thread Mark Schouten
Hi, I have a 12.2.13 cluster that I want to upgrade. However, there are a whole bunch of stray files/inodes(?) which I would like to have processed, also because I get a lot of 'No space left on device' messages. I started a 'find . -ls' in the root of the CephFS filesystem, but that causes overload
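
A small sketch of how the stray counters can be watched while such a scan runs; the MDS daemon name is a placeholder, and the recursive find is generally meant to make the MDS revisit those inodes:

    # Stray-related perf counters on the active MDS
    ceph daemon mds.<name> perf dump | grep -i stray
    # MDS state at a glance
    ceph daemon mds.<name> status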

[ceph-users] remove host from cluster for re-installing it

2021-05-18 Thread mabi
Hello, On my Octopus cluster with 6 nodes (3 mon/mgr, 3 OSD), I would like to re-install the operating system of the first mon/mgr node. For that purpose I tried "ceph host rm mynode" but then I got the following two health warnings: 2 stray daemon(s) not managed by cephadm
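
For what it's worth, a hedged sketch of the cephadm-style sequence; the daemon names below are placeholders, and the orchestrator form of the command is `ceph orch host rm` rather than `ceph host rm`:

    # See which daemons cephadm still has on the host
    ceph orch ps mynode
    # Remove or redeploy those daemons first, e.g.
    ceph orch daemon rm mon.mynode --force
    # Then remove the host from the orchestrator's inventory
    ceph orch host rm mynode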

[ceph-users] Re: Pool has been deleted before snaptrim finished

2021-05-18 Thread Szabo, Istvan (Agoda)
Thank you Igor for your help. I've done it on the smashed SSDs and it seems like the cluster has finally come back to normal. How can I avoid this situation? Should I use buffered_io or not? Istvan Szabo Senior Infrastructure Engineer --- Agoda Services Co., Ltd.
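
Assuming the buffered_io in question is the `bluefs_buffered_io` OSD option, a minimal sketch for checking and changing it:

    # Current value (the default has changed between releases)
    ceph config get osd bluefs_buffered_io
    # Enable it for all OSDs; OSDs may need a restart for it to take effect
    ceph config set osd bluefs_buffered_io true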

[ceph-users] Re: MDS rank 0 damaged after update to 14.2.20

2021-05-18 Thread Dan van der Ster
Hi,

Do you have the mds log from the initial crash? Also, I don't see the new global_id warnings in your status output -- did you change any settings from the defaults during this upgrade?

Cheers, Dan

On Tue, May 18, 2021 at 10:22 AM Eugen Block wrote:
> Hi *,
>
> I tried a minor update
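
For reference, a sketch of how the global_id settings and warnings added in 14.2.20 (the CVE-2021-20288 fix) can be inspected:

    # Is insecure global_id reclaim still allowed?
    ceph config get mon auth_allow_insecure_global_id_reclaim
    # Should the cluster warn while it is allowed / while old clients are connected?
    ceph config get mon mon_warn_on_insecure_global_id_reclaim_allowed
    ceph config get mon mon_warn_on_insecure_global_id_reclaim
    # Any AUTH_INSECURE_GLOBAL_ID_* warnings currently raised?
    ceph health detail | grep -i global_id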

[ceph-users] MDS rank 0 damaged after update to 14.2.20

2021-05-18 Thread Eugen Block
Hi *, I tried a minor update (14.2.9 --> 14.2.20) on our ceph cluster today and got into a damaged CephFS. It's rather urgent since no one can really work right now, so any quick help is highly appreciated. As for the update process, I followed the usual update procedure; when all MONs
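
A cautious sketch of the usual first-look commands for a damaged rank; the filesystem name "cephfs" is a placeholder, and marking a rank repaired should only be done once the underlying metadata problem is understood:

    ceph health detail
    ceph fs status
    # Inspect the rank 0 journal without modifying it
    cephfs-journal-tool --rank=cephfs:0 journal inspect
    # Only after the damage is understood and addressed, clear the damaged flag:
    # ceph mds repaired cephfs:0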

[ceph-users] Re: Process for adding a separate block.db to an osd

2021-05-18 Thread Boris Behrens
One more question: how do I get rid of the bluestore spillover message?

    osd.68 spilled over 64 KiB metadata from 'db' device (13 GiB used of 50 GiB) to slow device

I tried an offline compaction, which did not help.

On Mon, May 17, 2021 at 15:56, Boris Behrens wrote:
> I have no
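
One commonly suggested approach is to move the spilled-over data back onto the db device with ceph-bluestore-tool while the OSD is stopped; a rough sketch (paths and the OSD id are placeholders), with the option of simply silencing the warning:

    # With osd.68 stopped:
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 \
        --devs-source /var/lib/ceph/osd/ceph-68/block \
        --dev-target /var/lib/ceph/osd/ceph-68/block.db \
        bluefs-bdev-migrate
    # Online compaction, as opposed to the offline one already tried:
    ceph tell osd.68 compact
    # Or suppress the health warning entirely:
    ceph config set osd bluestore_warn_on_bluefs_spillover false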

[ceph-users] logrotation in ceph 16.2.4

2021-05-18 Thread Fabrice Bacchella
I have a ceph cluster with 6 OSD servers. 2 are running 16.2.4, and logrotate failed with this message:

    /etc/cron.daily/logrotate: error: Compressing program wrote following message to stderr when compressing log /var/log/ceph/ceph-osd.37.log-20210518:
    gzip: stdin: file size changed while zipping
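
The logrotate file shipped with Ceph sends SIGHUP to the daemons so they reopen their logs after rotation; below is a sketch of roughly what it looks like, with `delaycompress` added as an assumption so gzip never compresses a file a daemon may still be writing to:

    /var/log/ceph/*.log {
        rotate 7
        daily
        compress
        delaycompress
        sharedscripts
        postrotate
            killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd ceph-fuse radosgw rbd-mirror || true
        endscript
        missingok
        notifempty
        su root ceph
    }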

[ceph-users] rbd-nbd crashes Error: failed to read nbd request header: (33) Numerical argument out of domain

2021-05-18 Thread Zhi Zhang
Hi guys, we have recently been testing rbd-nbd on Ceph N (Nautilus). After mapping an rbd image, running mkfs, and mounting the nbd device, rbd-nbd and dmesg show the following errors during read/write testing. rbd-nbd log:

    2021-05-18 11:35:08.034 7efdb8ff9700 20 []rbd-nbd: reader_entry: waiting for nbd
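
The reproduction steps described, as a rough sketch; the image name, size, and mount point are placeholders:

    rbd create mypool/myimage --size 100G
    rbd-nbd map mypool/myimage        # prints the device, e.g. /dev/nbd0
    mkfs.ext4 /dev/nbd0
    mount /dev/nbd0 /mnt/test
    # ... run read/write tests against /mnt/test ...
    umount /mnt/test
    rbd-nbd unmap /dev/nbd0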