[ceph-users] Redeploy iSCSI Gateway fail - 167 returned from docker run

2021-06-01 Thread Paul Giralt (pgiralt)
CEPH 16.2.4. I was having an issue where, after I put a server into maintenance mode, the containers for the iSCSI gateway were not running, so I decided to do a redeploy of the service. This caused all the servers running iSCSI to get into a state where it looks like ceph orch was
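
For reference, a minimal sketch of the cephadm commands involved, assuming a Pacific cluster managed by the orchestrator; the host name "ceph-node1" and the service name "iscsi.iscsi" are placeholders, and on some releases the redeploy may need to be done per daemon with "ceph orch daemon redeploy" instead:

# ceph orch host maintenance enter ceph-node1    (stops the cephadm-managed daemons on that host)
# ceph orch host maintenance exit ceph-node1     (bring the host back into service)
# ceph orch ps --daemon-type iscsi               (check whether the gateway containers are running again)
# ceph orch redeploy iscsi.iscsi                 (recreate the gateway containers from the service spec)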

[ceph-users] Re: cephadm removed mon. key when adding new mon node

2021-06-01 Thread Bryan Stillwell
I was able to determine that the mon. key was not removed. My mon nodes were stuck in a peering state because the new mon node was trying to use the 15.2.8 image instead of the 16.2.4 image. This caused a problem because during a recent Octopus upgrade I set
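
A hedged sketch of checking for a pinned container image; the truncated message doesn't name the exact setting, so this assumes it was a container_image override left over from the Octopus upgrade, with "quay.io/ceph/ceph:v16.2.4" as the stock Pacific image:

# ceph config get mon container_image                                (see what image new mon daemons will use)
# ceph config rm mon container_image                                 (drop a stale per-daemon-type override)
# ceph config set global container_image quay.io/ceph/ceph:v16.2.4   (pin the whole cluster to one image)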

[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-06-01 Thread Marco Pizzolo
Quick update: Unfortunately, it seems like we are still having issues. "ceph orch apply osd --all-available-devices" now enumerates through all 60 available 10TB drives in the host, and the OSDs don't flap, allowing all 60 to be defined and marked in. Once all 60 complete, the OSDs begin to
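
For context, a minimal sketch of checking device and OSD state around that apply, assuming cephadm on Pacific:

# ceph orch device ls --refresh                  (list the devices cephadm considers available)
# ceph orch apply osd --all-available-devices    (create OSDs on every available device)
# ceph orch ps --daemon-type osd                 (watch the OSD containers come up)
# ceph osd tree                                  (confirm all 60 OSDs end up up and in)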

[ceph-users] time duration of radosgw-admin

2021-06-01 Thread Rok Jaklič
Hi, is it normal that radosgw-admin user info --uid=user ... takes around 3s or more? Other radosgw-admin commands are also taking quite a lot of time. Kind regards, Rok
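
One hedged way to quantify this and rule out cluster-wide slowness (the uid is the placeholder from the original message):

# time radosgw-admin user info --uid=user    (measure the admin call itself)
# ceph -s                                    (check for slow ops, recovery or other cluster-level causes)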

[ceph-users] cephadm removed mon. key when adding new mon node

2021-06-01 Thread Bryan Stillwell
This morning I tried adding a mon node to my home Ceph cluster with the following command: ceph orch daemon add mon ether This seemed to work at first, but then it decided to remove it fairly quickly, which broke the cluster because the mon. keyring was also removed:
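
With cephadm, a daemon added by hand can be removed again if it doesn't match the managed mon service spec, so here is a hedged sketch of the placement-based alternative; "ether" is the host from the original message, while the label name and "host1,host2" are placeholders:

# ceph orch host label add ether mon
# ceph orch apply mon --placement="label:mon"        (let the orchestrator place mons on labelled hosts)
or pin an explicit host list:
# ceph orch apply mon --placement="host1,host2,ether"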

[ceph-users] Can we deprecate FileStore in Quincy?

2021-06-01 Thread Neha Ojha
Hello everyone, Given that BlueStore has been the default and more widely used objectstore for quite some time, we would like to understand whether we can consider deprecating FileStore in our next release, Quincy, and remove it in the R release. There is also a proposal [0] to add a health
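
For anyone wanting to check their own clusters, a hedged sketch of counting OSDs per objectstore backend (the OSD id is a placeholder):

# ceph osd count-metadata osd_objectstore        (counts of bluestore vs filestore OSDs)
# ceph osd metadata 12 | grep osd_objectstore    (check one specific OSD)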

[ceph-users] Re: Ceph Month June Schedule Now Available

2021-06-01 Thread Mike Perez
Thanks to our speakers for providing presentations for week 1 of Ceph Month! You can find the recordings on the Ceph Month 2021 Youtube playlist: https://www.youtube.com/playlist?list=PLrBUGiINAakM4ttoGHGUQlVI9VkrZq_hX We continue with week 2 on June 10th with a RGW Update, and 2 BoF sessions:

[ceph-users] CentOS 7 dependencies for diskprediction module

2021-06-01 Thread Michal Strnad
Hi, Did anyone get the diskprediction-local plugin working on CentOS 7.9? When we enable the plugin with v14.2.19 we get the following error: Module 'diskprediction_local' has failed: No module named sklearn.svm.classes If the package is installed it brings in several dependencies, but apparently
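
A hedged sketch of one possible workaround: the module imports scikit-learn, and sklearn.svm.classes was removed in newer scikit-learn releases, so installing a version that still ships that module into the interpreter the mgr uses may help. The version pin and the use of pip3 are assumptions, not a verified fix:

# pip3 install 'scikit-learn<0.23' scipy numpy
# ceph mgr module disable diskprediction_local
# ceph mgr module enable diskprediction_local    (re-enable the module so it re-imports sklearn)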

[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-06-01 Thread Marco Pizzolo
Peter and fellow Ceph users, I just wanted to update this forum on our interim findings so far, but first and foremost, a HUGE thank you to David Orman for all of his help. We're in the process of staging our testing on bare metal now, but I wanted to confirm that for us at least, the showstopper

[ceph-users] Unable to delete disk from iSCSI target

2021-06-01 Thread Paul Giralt (pgiralt)
I’m trying to delete a disk from an iSCSI target so that I can remove the image, but running into an issue. If I try to delete it from the CEPH dashboard, I just get an error saying that the DELETE timed out after 45 seconds. If I try to do it from gwcli, the command never returns:

[ceph-users] Re: Cephfs metadta pool suddenly full (100%) ! [SOLVED but no explanation at this time!]

2021-06-01 Thread Hervé Ballans
Thank you Dan and Sebastian for trying to help me. We managed to get back to a normal situation, but we still don't understand how the problem happened... How did we get back to an optimal situation? "Fortunately", we had 3 other "spare" NVMe drives on the cluster that we hadn't used yet. We added
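
A hedged sketch of what adding the spare NVMe devices as OSDs could look like on a non-cephadm Nautilus node; the device path and OSD id are placeholders:

# ceph-volume lvm create --data /dev/nvme3n1     (create a new bluestore OSD on the spare device)
# ceph osd crush set-device-class nvme osd.42    (only if the device class isn't detected automatically)
# ceph osd df tree                               (watch the metadata-pool OSDs drop back below the full ratio)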

[ceph-users] Re: Ceph Month June Schedule Now Available

2021-06-01 Thread Mike Perez
Hi everyone, In ten minutes, join us for the start of the Ceph Month June event! The schedule and meeting link can be found on this etherpad: https://pad.ceph.com/p/ceph-month-june-2021 On Tue, May 25, 2021 at 11:56 AM Mike Perez wrote: > > Hi everyone, > > The Ceph Month June schedule is now

[ceph-users] HEALTH_WARN and osd zero size

2021-06-01 Thread julien lenseigne
Dear all, I have a problem with a ceph cluster: reconstruction is stopped and the status remains at HEALTH_WARN. Strangely enough, when I do a 'ceph osd df' some OSDs return zero in size. Maybe it is related to my reconstruction problem. ceph status:
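
A hedged set of commands that can help narrow down why an OSD reports zero size (the OSD id is a placeholder):

# ceph osd df tree    (see which OSDs report 0 and where they sit in the CRUSH tree)
# ceph osd metadata 7 | grep -E 'bluestore_bdev_size|osd_objectstore'
# ceph daemon osd.7 status    (run on the OSD's host; shows whether the daemon is really up)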

[ceph-users] Re: Cephfs metadta pool suddenly full (100%) !

2021-06-01 Thread Dan van der Ster
Hi, I've never encountered this before. To troubleshoot you could try to identify if this was caused by the MDS writing to the metadata pool (e.g. maybe the mds log?), or if it was some operation in the OSD which consumed too much space (e.g. something like compaction?). Can you find any unusual
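
A hedged sketch of commands that can help attribute the growth, assuming the metadata pool is named cephfs_metadata and the active MDS is mds.a:

# rados df                               (per-pool object counts and bytes, including cephfs_metadata)
# ceph fs status                         (MDS states and per-pool usage for the filesystem)
# ceph daemon mds.a perf dump mds_log    (on the MDS host; journal event/segment counters)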

[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-06-01 Thread David Orman
ormandj/ceph:v16.2.4-mgrfix <-- pushed to dockerhub. Try bootstrap with: --image "docker.io/ormandj/ceph:v16.2.4-mgrfix" if you want to give it a shot, or you can set CEPHADM_IMAGE. We think these should both work during any cephadm command, even if the documentation doesn't make it clear. On
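
For reference, a hedged sketch of both forms for a fresh bootstrap; the mon IP is a placeholder:

# cephadm --image docker.io/ormandj/ceph:v16.2.4-mgrfix bootstrap --mon-ip 10.0.0.1
or, via the environment variable:
# CEPHADM_IMAGE=docker.io/ormandj/ceph:v16.2.4-mgrfix cephadm bootstrap --mon-ip 10.0.0.1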

[ceph-users] Re: Cephfs metadta pool suddenly full (100%) !

2021-06-01 Thread Sebastian Knust
Hi Hervé, On 01.06.21 14:00, Hervé Ballans wrote: I'm aware of your points, and maybe I was not really clear in my previous email (written in a hurry!) The problematic pool is the metadata one. All its OSDs (x3) are full. The associated data pool is OK and no OSD is full on the data pool.

[ceph-users] Re: Cephfs metadta pool suddenly full (100%) !

2021-06-01 Thread Hervé Ballans
Hi Sebastian, Thank you for your quick answer. I'm aware of your points, and maybe I was not really clear in my previous email (written in a hurry!) The problematic pool is the metadata one. All its OSDs (x3) are full. The associated data pool is OK and no OSD is full on the data pool.

[ceph-users] Re: Cephfs metadta pool suddenly full (100%) !

2021-06-01 Thread Sebastian Knust
Hi Hervé, On 01.06.21 13:15, Hervé Ballans wrote:
# ceph status
  cluster:
    id:     838506b7-e0c6-4022-9e17-2d1cf9458be6
    health: HEALTH_ERR
            1 filesystem is degraded
            3 full osd(s)
            1 pool(s) full
            1 daemons have recently crashed
You

[ceph-users] Re: Cephfs metadta pool suddenly full (100%) !

2021-06-01 Thread Hervé Ballans
Hi again, Sorry, I realize that I didn't include the output of some useful ceph commands.
# ceph status
  cluster:
    id:     838506b7-e0c6-4022-9e17-2d1cf9458be6
    health: HEALTH_ERR
            1 filesystem is degraded
            3 full osd(s)
            1 pool(s) full
            1 daemons

[ceph-users] Cephfs metadta pool suddenly full (100%) !

2021-06-01 Thread Hervé Ballans
Hi all, Ceph Nautilus 14.2.16. We have encountered a strange and critical problem since this morning. Our cephfs metadata pool suddenly grew from 2.7% to 100% (in less than 5 hours!) while there was no significant activity on the data OSDs! Here are some numbers:
# ceph df
RAW STORAGE:
    CLASS

[ceph-users] local mirror from quay.ceph.io

2021-06-01 Thread Seba chanel
Hi, Can you show me how to mirror the Ceph docker images that are available from quay.ceph.io? Our Ceph clusters are on a private network, and they are installed from a local repository because they cannot directly access the internet. With Octopus, I need to be able to create a local
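
A hedged sketch of the usual pull/tag/push workflow from a host that does have internet access, plus pointing the cluster at the local registry; the registry name, tag, and upstream image path are placeholders/assumptions:

# docker pull quay.io/ceph/ceph:v15.2.13
# docker tag quay.io/ceph/ceph:v15.2.13 registry.local:5000/ceph/ceph:v15.2.13
# docker push registry.local:5000/ceph/ceph:v15.2.13
# ceph config set global container_image registry.local:5000/ceph/ceph:v15.2.13    (make cephadm pull from the mirror)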

[ceph-users] Re: Fwd: Re: Ceph osd will not start.

2021-06-01 Thread David Orman
I do not believe it was in 16.2.4. I will build another patched version of the image tomorrow based on that version. I do agree, I feel this breaks new deploys as well as existing, and hope a point release will come soon that includes the fix. > On May 31, 2021, at 15:33, Marco Pizzolo wrote:

[ceph-users] Re: [Suspicious newsletter] Re: The always welcomed large omap

2021-06-01 Thread Szabo, Istvan (Agoda)
Here is the command output:
ID  CLASS  WEIGHT     REWEIGHT  SIZE     RAW USE  DATA    OMAP    META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
-1         530.89032         -  531 TiB   29 TiB  11 TiB  31 GiB  338 GiB  502 TiB  5.43  1.00    -           root default
-5