[ceph-users] Re: Planning: Ceph User Survey 2020

2021-01-27 Thread Mike Perez
Hey Alexandre,

Sorry for the late reply here. I believe Anthony can give you a response on why we chose a matrix rating scale type instead of rank.

--
Mike Perez (thingee)

On Wed, Nov 25, 2020 at 8:27 AM Alexandre Marangone wrote:
> Hi Mike,
>
> For some of the multiple answer questions

[ceph-users] Re: Where has my capacity gone?

2021-01-27 Thread George Yil
May I ask if enabling pool compression helps with future space amplification?

> George Yil wrote (27 Jan 2021 18:57):
>
> Thank you. This helps a lot.
>
>> Josh Baergen wrote (27 Jan 2021 17:08):
>>
>> On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
>>> May I ask if
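A minimal sketch of what enabling per-pool BlueStore compression looks like, assuming a pool named "mypool" (the pool name, algorithm and mode are placeholders, and compression only applies to data written after it is enabled):

    ceph osd pool set mypool compression_algorithm snappy
    ceph osd pool set mypool compression_mode aggressive
    # the compression columns here show how much data actually got compressed
    ceph df detail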

[ceph-users] Re: CEPHFS - MDS gracefull handover of rank 0

2021-01-27 Thread Stefan Kooman
On 1/27/21 3:51 PM, Konstantin Shalygin wrote:
> Martin, also before restart - issue a cache drop command to the active MDS.

Don't do this if you have a large cache. It will make your MDS unresponsive, and it will be replaced by a standby if one is available. There is a PR to fix this:
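For reference, a rough sketch of the command being discussed, plus a way to check the cache size first (the MDS name "mds.a" is a placeholder):

    # inspect current cache usage on the active MDS
    ceph daemon mds.a cache status
    # the cache drop command in question; use with care on a large cache
    ceph tell mds.a cache drop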

[ceph-users] Re: PG inconsistent with empty inconsistent objects

2021-01-27 Thread Richard Bade
Thanks Dan and Anthony, your suggestions have pointed me in the right direction. Looking back through the logs at when the first error was detected, I found this:

ceph-osd: 2021-01-24 01:04:55.905 7f0c17821700 -1 log_channel(cluster) log [ERR] : 17.7ffs0 scrub : stat mismatch, got 112867/112868
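A hedged sketch of the usual follow-up once a scrub reports a stat mismatch (the PG id 17.7ff is taken from the log line above; only run a repair once the cause is understood):

    rados list-inconsistent-obj 17.7ff --format=json-pretty
    ceph pg deep-scrub 17.7ff
    ceph pg repair 17.7ff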

[ceph-users] Re: Where has my capacity gone?

2021-01-27 Thread Josh Baergen
On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
> May I ask if it can be dynamically changed and whether any disadvantages should be expected?

Unless there's some magic I'm unaware of, there is no way to dynamically change this. Each OSD must be recreated with the new min_alloc_size setting. In
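As a rough illustration of what recreating an OSD with a new min_alloc_size involves (the OSD id, device path and 4K value are placeholders; the option is only read at OSD creation time):

    # set the allocation size that future HDD OSDs will be created with
    ceph config set osd bluestore_min_alloc_size_hdd 4096
    # then destroy and re-provision each OSD in turn, e.g. with ceph-volume
    ceph osd destroy 12 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdX --destroy
    ceph-volume lvm create --osd-id 12 --data /dev/sdX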

[ceph-users] Re: radosgw not working - upgraded from mimic to octopus

2021-01-27 Thread Youzhong Yang
Anyone running octopus (v15)? Can you please share your experience with radosgw-admin performance? A simple 'radosgw-admin user list' took 11 minutes; if I use a v13.2.4 radosgw-admin, it finishes in a few seconds. This sounds like a performance regression to me. I've already filed a bug
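A simple way to reproduce the comparison, assuming an older radosgw-admin binary is available on the same host (the path is a placeholder):

    time radosgw-admin user list
    time /path/to/v13.2.4/radosgw-admin user list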

[ceph-users] Re: Balancing with upmap

2021-01-27 Thread Francois Legrand
Nope!

On 27/01/2021 at 17:40, Anthony D'Atri wrote:
> Do you have any override reweights set to values less than 1.0? The REWEIGHT column when you run `ceph osd df`
>
> On Jan 27, 2021, at 8:15 AM, Francois Legrand wrote:
>> Hi all, I have a cluster with 116 disks (24 new disks of 16TB added in
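For anyone checking the same thing, this is the column in question (if no override reweights are set, REWEIGHT should read 1.00000 on every OSD):

    ceph osd df tree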

[ceph-users] Re: RGW Bucket notification troubleshooting

2021-01-27 Thread Yuval Lifshitz
On Wed, Jan 27, 2021 at 5:34 PM Schoonjans, Tom (RFI,RAL,-) <tom.schoonj...@rfi.ac.uk> wrote:
> Looks like there’s already a ticket open for AMQP SSL support:
> https://tracker.ceph.com/issues/42902 (you opened it ;-))
>
> I will give it a try myself if I have some time, but don’t hold your breath

[ceph-users] Balancing with upmap

2021-01-27 Thread Francois Legrand
Hi all, I have a cluster with 116 disks (24 new disks of 16TB added in December and the rest of 8TB) running nautilus 14.2.16. I moved (8 months ago) from crush_compat to upmap balancing. But the cluster does not seem well balanced, with the number of pgs on the 8TB disks varying from 26 to 52! And
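As a starting point for this kind of imbalance, a hedged sketch of the usual upmap balancer checks (the deviation value is only an example; upmap_max_deviation defaults to a fairly loose 5 in nautilus, if I recall correctly):

    ceph balancer status
    # ask the balancer to keep PG counts closer to the mean
    ceph config set mgr mgr/balancer/upmap_max_deviation 1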

[ceph-users] Re: Where has my capacity gone?

2021-01-27 Thread George Yil
Thank you. This helps a lot.

> Josh Baergen wrote (27 Jan 2021 17:08):
>
> On Wed, Jan 27, 2021 at 12:24 AM George Yil wrote:
>> May I ask if it can be dynamically changed and whether any disadvantages should be expected?
>
> Unless there's some magic I'm unaware of, there is no way to

[ceph-users] Re: PG inconsistent with empty inconsistent objects

2021-01-27 Thread Dan van der Ster
Usually the ceph.log prints the reason for the inconsistency when it is first detected by scrubbing.

-- dan

On Wed, Jan 27, 2021 at 12:41 AM Richard Bade wrote:
>
> Hi Everyone,
> I have also seen this "inconsistent" state with empty output when you do
> list-inconsistent-obj
>
> $ sudo ceph health detail
>
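A minimal sketch of pulling that reason out of the cluster log on a monitor host (the log path is the common default and the PG id is a placeholder):

    grep '17.7ff' /var/log/ceph/ceph.log
    zgrep '17.7ff' /var/log/ceph/ceph.log.*.gz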

[ceph-users] Re: CEPHFS - MDS gracefull handover of rank 0

2021-01-27 Thread Konstantin Shalygin
Martin, also before restart - issue a cache drop command to the active MDS.

k
Sent from my iPhone

> On 27 Jan 2021, at 11:58, Dan van der Ster wrote:
>
> In our experience failovers are largely transparent if the mds has:
>
>    mds session blacklist on timeout = false
>    mds session blacklist

[ceph-users] Re: "ceph orch restart mgr" command creates mgr restart loop

2021-01-27 Thread Jens Hyllegaard (Soft Design A/S)
Hi Chris,

Having also recently started exploring Ceph, I too happened upon this problem. I found that terminating the command with ctrl-c seemed to stop the looping, which by the way also happens on all the other mgr instances in the cluster.

Regards
Jens

-----Original Message-----
From: Chris Read
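For anyone else hitting this, a rough way to watch whether the mgr daemons are still cycling after the ctrl-c, assuming a cephadm-managed cluster (which the use of `ceph orch` implies):

    ceph orch ps --daemon-type mgr
    # follow the cephadm log channel while the restarts happen
    ceph -W cephadm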

[ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses

2021-01-27 Thread Adam Boyhan
Doing some more testing. I can demote the rbd image on the primary, promote it on the secondary, and the image looks great. I can map it, mount it, and it looks just like it should. However, the rbd snapshots are still unusable on the secondary even when promoted. I went as far as taking a 2nd
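For context, a sketch of the demote/promote sequence being described and the snapshot listing that can then be checked on the secondary (pool and image names are placeholders):

    # on the primary site
    rbd mirror image demote mypool/myimage
    # on the secondary site
    rbd mirror image promote mypool/myimage
    rbd snap ls mypool/myimage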

[ceph-users] Re: CEPHFS - MDS gracefull handover of rank 0

2021-01-27 Thread Dan van der Ster
Hi,

In our experience failovers are largely transparent if the mds has:

    mds session blacklist on timeout = false
    mds session blacklist on evict = false

And clients have

    client reconnect stale = true

Cheers, Dan

On Wed, Jan 27, 2021 at 9:09 AM Martin Hronek wrote:
>
> Hello
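For completeness, one way to apply the MDS-side settings from the monitors instead of ceph.conf (a sketch; the client option would normally go in the [client] section of ceph.conf on the client hosts):

    ceph config set mds mds_session_blacklist_on_timeout false
    ceph config set mds mds_session_blacklist_on_evict false
    # on the clients, in ceph.conf:
    #   [client]
    #   client_reconnect_stale = true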

[ceph-users] CEPHFS - MDS gracefull handover of rank 0

2021-01-27 Thread Martin Hronek
Hello fellow Ceph users,

Currently we are updating our Ceph (14.2.16) cluster and making changes to some config settings. TL;DR: is there a way to do a graceful shutdown of an active MDS node without losing the caps, open files and client connections? Something like handing over the active state, promote standby