Ceph 16.2.4. I was having an issue where I put a server into maintenance mode,
and after doing so the containers for the iSCSI gateway were not running, so I
decided to redeploy the service. This caused all the servers running
iSCSI to get into a state where it looks like ceph orch was
I was able to determine that the mon. key was not removed. My mon nodes were
stuck in a peering state because the new mon node was trying to use the 15.2.8
image instead of the 16.2.4 image. This caused a problem because during a
recent Octopus upgrade I set
Quick update:
Unfortunately, it seems like we are still having issues.
"ceph orch apply osd --all-available-devices" now enumerates through all 60
available 10TB drives in the host, and the OSDs don't flap, allowing all 60
to be defined and marked in. Once all 60 complete the OSDs begin to
Hi,
is it normal that radosgw-admin user info --uid=user ... takes around 3s or
more?
Also, other radosgw-admin commands are taking quite a lot of time.
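For reference, this is roughly how I'm measuring it (the uid is just an
example), and the debug run is only my assumption about where to look for
the slow part:
# time radosgw-admin user info --uid=testuser
# radosgw-admin user info --uid=testuser --debug-rgw=20 --debug-ms=1 2>&1 | tail -50
The first just confirms the ~3s wall time; the second should show in the
debug output roughly which step the time is spent on.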
Kind regards,
Rok
This morning I tried adding a mon node to my home Ceph cluster with the
following command:
ceph orch daemon add mon ether
This seemed to work at first, but then it decided to remove it fairly quickly,
which broke the cluster because the mon. keyring was also removed:
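For the record, my understanding (from the cephadm docs, not yet re-tested
here; "ether" is just my host name) is that manually placed mons should only
be added after telling the orchestrator not to manage mon placement itself:
# ceph orch apply mon --unmanaged
# ceph orch daemon add mon ether
Otherwise cephadm will reconcile the mon service back to its previous spec
and remove the daemon again.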
Hello everyone,
Given that BlueStore has been the default and more widely used
objectstore for quite some time, we would like to understand whether
we can consider deprecating FileStore in our next release, Quincy, and
remove it in the R release. There is also a proposal [0] to add a
health
Thanks to our speakers for providing presentations for week 1 of Ceph
Month! You can find the recordings on the Ceph Month 2021 YouTube
playlist:
https://www.youtube.com/playlist?list=PLrBUGiINAakM4ttoGHGUQlVI9VkrZq_hX
We continue with week 2 on June 10th with an RGW Update, and 2 BoF
sessions:
Hi,
Did anyone get the diskprediction-local plugin working on CentOS 7.9?
When we enable the plugin with v14.2.19 we get the following error.
Module 'diskprediction_local' has failed: No module named
sklearn.svm.classes
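For what it's worth, I believe sklearn.svm.classes was removed in newer
scikit-learn releases (around 0.24), so this smells like a version mismatch
rather than a missing package. A quick check of what the mgr's python
actually sees (python3 here is an assumption, use whatever interpreter
ceph-mgr runs under):
# python3 -c "import sklearn; print(sklearn.__version__)"
# python3 -c "import sklearn.svm.classes"
If the second one fails with the same error, it's the scikit-learn version
rather than the plugin packaging.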
If the package is installed, it brings in several dependencies, but
apparently
Peter and fellow Ceph users,
I just wanted to update this forum on our interim findings so far, but
first and foremost, a HUGE thank you to David Orman for all of his help.
We're in the process of staging our testing on bare metal now, but I wanted
to confirm that for us at least, the showstopper
I’m trying to delete a disk from an iSCSI target so that I can remove the
image, but am running into an issue. If I try to delete it from the Ceph
dashboard, I just get an error saying that the DELETE timed out after 45
seconds.
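My first guess is a stale watcher on the backing RBD image; something like
the following (pool and image names are placeholders) should list any clients
still holding the image open, though I haven't confirmed that's what is
blocking the delete here:
# rbd status iscsi-pool/disk_1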
If I try to do it from gwcli, the command never returns:
Thank you Dan and Sebastian for trying to help me.
We managed to get back to a normal situation, but we still don't
understand how the problem happened...
How do we get back to an optimal situation?
"Fortunately", we had 3 other "spare" NVMe drives on the cluster that we
hadn't used yet. We added
Hi everyone,
In ten minutes, join us for the start of the Ceph Month June event!
The schedule and meeting link can be found on this etherpad:
https://pad.ceph.com/p/ceph-month-june-2021
On Tue, May 25, 2021 at 11:56 AM Mike Perez wrote:
>
> Hi everyone,
>
> The Ceph Month June schedule is now
Dear all, I have a problem with a Ceph cluster: reconstruction has
stopped. The status remains at HEALTH_WARN. Strangely enough, when I do
a 'ceph osd df' I have some OSDs which report zero size. Maybe
it is related to my reconstruction problem.
ceph status:
Hi,
I've never encountered this before.
To troubleshoot you could try to identify if this was caused by the
MDS writing to the metadata pool (e.g. maybe the mds log?), or if it
was some operation in the OSD which consumed too much space (e.g.
something like compaction?).
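Concretely (assuming a Nautilus-era cluster, pool names as placeholders),
I'd start by comparing:
# ceph df detail
# ceph osd df
If STORED/OBJECTS for the metadata pool barely moved but the OMAP/META
columns on its OSDs exploded, that points at the OSD side (RocksDB /
compaction debris) rather than the MDS actually writing that much data.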
Can you find any unusual
ormandj/ceph:v16.2.4-mgrfix <-- pushed to dockerhub.
Try bootstrap with: --image "docker.io/ormandj/ceph:v16.2.4-mgrfix" if
you want to give it a shot, or you can set CEPHADM_IMAGE. We think
these should both work during any cephadm command, even if the
documentation doesn't make it clear.
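i.e. something like (mon IP is a placeholder, and note that --image goes
before the subcommand):
# cephadm --image docker.io/ormandj/ceph:v16.2.4-mgrfix bootstrap --mon-ip 192.0.2.10
or
# CEPHADM_IMAGE=docker.io/ormandj/ceph:v16.2.4-mgrfix cephadm bootstrap --mon-ip 192.0.2.10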
On
Hi Hervé,
On 01.06.21 14:00, Hervé Ballans wrote:
I'm aware of your points, and maybe I was not really clear in my
previous email (written in a hurry!)
The problematic pool is the metadata one. All its OSDs (x3) are full.
The associated data pool is OK and no OSD is full on the data pool.
Hi Sebastian,
Thank you for your quick answer.
I'm aware of your points, and maybe I was not really clear in my
previous email (written in a hurry!)
The problematic pool is the metadata one. All its OSDs (x3) are full.
The associated data pool is OK and no OSD is full on the data pool.
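(One idea I had, but have not dared to try: temporarily raising the full
threshold, e.g. "ceph osd set-full-ratio 0.97", just to let the MDS and the
deletes proceed, then lowering it back afterwards. Is that a reasonable
stopgap here, or asking for trouble?)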
Hi Hervé,
On 01.06.21 13:15, Hervé Ballans wrote:
# ceph status
  cluster:
    id:     838506b7-e0c6-4022-9e17-2d1cf9458be6
    health: HEALTH_ERR
            1 filesystem is degraded
            3 full osd(s)
            1 pool(s) full
            1 daemons have recently crashed
You
Hi again,
Sorry, I realize that I didn't include the output of some useful ceph commands.
# ceph status
  cluster:
    id:     838506b7-e0c6-4022-9e17-2d1cf9458be6
    health: HEALTH_ERR
            1 filesystem is degraded
            3 full osd(s)
            1 pool(s) full
            1 daemons
Hi all,
Ceph Nautilus 14.2.16.
We have been encountering a strange and critical problem since this morning.
Our CephFS metadata pool suddenly grew from 2.7% to 100% (in less than
5 hours), while there is no significant activity on the data OSDs!
Here are some numbers:
# ceph df
RAW STORAGE:
CLASS
Hi,
Can you show me how to mirror the Ceph docker images that are
available from quay.ceph.io?
Our Ceph clusters are on a private network, and they are installed
from a local repository because they cannot directly access the
internet.
With Octopus, I need to be able to create a local
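For what it's worth, the approach I would expect to work (registry name,
image path and tag below are placeholders, not something I have verified):
# podman pull quay.ceph.io/ceph-ci/ceph:v15
# podman tag quay.ceph.io/ceph-ci/ceph:v15 registry.local:5000/ceph/ceph:v15
# podman push registry.local:5000/ceph/ceph:v15
and then point cephadm at the local copy, e.g. at bootstrap time:
# cephadm --image registry.local:5000/ceph/ceph:v15 bootstrap --mon-ip 192.0.2.10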
I do not believe it was in 16.2.4. I will build another patched version of the
image tomorrow based on that version. I do agree; I feel this breaks new
deploys as well as existing ones, and I hope a point release that includes
the fix will come soon.
> On May 31, 2021, at 15:33, Marco Pizzolo wrote:
Here is the command output:
ID  CLASS  WEIGHT     REWEIGHT  SIZE     RAW USE  DATA    OMAP    META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
-1         530.89032         -  531 TiB   29 TiB  11 TiB  31 GiB  338 GiB  502 TiB  5.43  1.00    -          root default
-5