Just an FYI, in case anyone else runs into this.
It appears that the version listed in the dashboard is the lowest
version level of any OSD on any given host. I have a few OSDs that were down
during the last two upgrades (I am planning on taking them out in the long run)
and so
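If the dashboard really does report the lowest OSD version per host, the effect can be sketched like this. Everything below is hypothetical (a made-up helper and made-up data, not the output of any ceph command), just to illustrate how one stale daemon drags a whole host's reported version down:

```python
# Sketch of the dashboard behaviour described above: if a host's version
# column reflects the *lowest* version among its OSDs, one daemon that
# missed an upgrade makes the whole host look out of date.
# Data shape and function are hypothetical, for illustration only.

def min_version_per_host(osd_versions):
    """osd_versions: {host: {osd_name: (major, minor, patch)}}"""
    return {host: min(vers.values()) for host, vers in osd_versions.items()}

osd_versions = {
    "ceph-node1": {"osd.0": (15, 2, 13), "osd.1": (15, 2, 13)},
    # osd.7 was down during the last two upgrades, still on Nautilus:
    "ceph-node2": {"osd.6": (15, 2, 13), "osd.7": (14, 2, 16)},
}

print(min_version_per_host(osd_versions))
# ceph-node2 shows 14.2.16 even though osd.6 is already on 15.2.13
```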
Hello,
I recently upgraded our Luminous Ceph cluster to Nautilus. Everything seemed
to go well.
Today I started upgrading from Nautilus to Octopus. I am midway through the
upgrade, and I noticed that although "ceph versions" shows the correct
version information for each OSD, the
Hi Peter
Please remember to include the list address in your reply.
I will not trim, so people on the list can read your answer.
On 29.07.2021 12:43, Peter Childs wrote:
On Thu, 29 Jul 2021 at 10:37, Kai Stian Olstad wrote:
A little disclaimer, I have never used multipath with Ceph.
On
Hello.
I'm trying to delete buckets, but object deletion is extremely slow
compared with put operations.
I use this command for the delete operation: "radosgw-admin bucket rm
--bucket= --bypass-gc --purge-objects"
My cluster has 190 HDDs for the EC pool and 30 SSDs for the index.
Ceph version: 14.2.16
The current bucket policy, as seen from the RGW side (newlines removed to
shorten it):

{"prefix_map": {"": {
    "status": true,
    "dm_expiration": false,
    "expiration": 2,
    "noncur_expiration": 0,
    "mp_expiration": 1,
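For reference, here is a sketch of what the S3-side lifecycle configuration behind a prefix_map like the one above might look like, in the JSON form accepted by "aws s3api put-bucket-lifecycle-configuration". The field mapping (expiration → Expiration.Days, mp_expiration → AbortIncompleteMultipartUpload.DaysAfterInitiation) is my reading of the dump, not something verified against this cluster, and the rule ID is made up; noncur_expiration is 0, so no NoncurrentVersionExpiration action appears:

```json
{
  "Rules": [
    {
      "ID": "example-rule",
      "Filter": {"Prefix": ""},
      "Status": "Enabled",
      "Expiration": {"Days": 2},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
    }
  ]
}
```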
Hi!
I started deleting the aborted multipart files (about 1200 objects); so far
it has deleted 3/4 of them, and the bucket usage has dropped by 2000 GB from
about 3000 GB total usage!
So it looks like 95% of the bucket size was in aborted multipart files!
Now, the legit question is why the
A little disclaimer, I have never used multipath with Ceph.
On 28.07.2021 20:19, Peter Childs wrote:
I have a number of disk trays, each with 25 SSDs in them. These are attached
to my servers via a pair of SAS cables, so multipath is used to join them
together again and maximize speed etc.
Hi,
I saw a couple of discussions on the mailing list about this topic, so is it
working properly or not? The Ceph documentation says Octopus needs kernel 4.
Thank you
ceph pg ls-by-osd
k
> On 28 Jul 2021, at 12:46, Manuel Holtgrewe wrote:
>
> How can I find out which pgs are actually on osd.0?
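The one-liner above can also be consumed programmatically. Below is a minimal sketch that extracts PG IDs from the JSON form of the command ("ceph pg ls-by-osd <id> --format json"); the "pg_stats"/"pgid" field names are my assumption about the JSON shape, and the sample string is stand-in data, not real cluster output:

```python
import json

# Hypothetical sketch: pull the PG IDs for one OSD out of the JSON
# emitted by `ceph pg ls-by-osd <id> --format json`. The "pg_stats" and
# "pgid" keys are assumptions about the output shape, not verified here.

def pgs_on_osd(ls_by_osd_json):
    data = json.loads(ls_by_osd_json)
    return [pg["pgid"] for pg in data.get("pg_stats", [])]

# Stand-in output for demonstration (not real cluster data):
sample = '{"pg_stats": [{"pgid": "1.0"}, {"pgid": "2.3f"}]}'
print(pgs_on_osd(sample))  # ['1.0', '2.3f']
```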
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to