[ceph-users] Re: Ceph upgrade advice - Luminous to Pacific with OS upgrade

2022-12-06 Thread Wolfpaw - Dale Corse
ers, D. -Original Message- From: David C [mailto:dcsysengin...@gmail.com] Sent: Tuesday, December 6, 2022 8:56 AM To: Wolfpaw - Dale Corse Cc: ceph-users Subject: [SPAM] [ceph-users] Re: [SPAM] Ceph upgrade advice - Luminous to Pacific with OS upgrade Hi Wolfpaw, thanks for the resp

[ceph-users] Re: [SPAM] Ceph upgrade advice - Luminous to Pacific with OS upgrade

2022-12-06 Thread Wolfpaw - Dale Corse
We did this (over a longer timespan).. it worked OK. A couple of things I'd add: - I'd upgrade to Nautilus on CentOS 7 before moving to EL8. We then used AlmaLinux ELevate to move from 7 to 8 without a reinstall. Rocky has a similar path, I think. - you will need to move those filestore OSDs to
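For context on the truncated last point above (moving the filestore OSDs, presumably to BlueStore): a minimal sketch of the usual drain/destroy/recreate cycle, one OSD at a time, using only standard ceph and ceph-volume commands. The OSD id and device path are placeholders, not values from this thread.

#!/usr/bin/env python3
"""Minimal sketch of converting one filestore OSD to BlueStore by draining,
destroying and recreating it. OSD_ID and DATA_DEV are placeholders."""
import subprocess
import time

OSD_ID = 12            # hypothetical OSD to convert
DATA_DEV = "/dev/sdX"  # hypothetical backing device for that OSD

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Drain the OSD and wait until Ceph confirms it is safe to destroy.
run("ceph", "osd", "out", str(OSD_ID))
while subprocess.run(["ceph", "osd", "safe-to-destroy", str(OSD_ID)]).returncode != 0:
    time.sleep(60)  # backfill is still moving data off this OSD

# 2. Stop the daemon and destroy the old filestore OSD; this keeps the id,
#    CRUSH position and cephx key reusable.
run("systemctl", "stop", f"ceph-osd@{OSD_ID}")
run("ceph", "osd", "destroy", str(OSD_ID), "--yes-i-really-mean-it")

# 3. Wipe the device and recreate the OSD as BlueStore with the same id.
run("ceph-volume", "lvm", "zap", DATA_DEV, "--destroy")
run("ceph-volume", "lvm", "create", "--bluestore", "--data", DATA_DEV,
    "--osd-id", str(OSD_ID))

Reusing the id via ceph osd destroy (rather than purge) keeps the CRUSH map essentially unchanged, so only this one OSD's data has to be rewritten.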

[ceph-users] Re: PGs stuck down

2022-11-29 Thread Wolfpaw - Dale Corse
, 2022 1:49 AM To: Wolfpaw - Dale Corse ; 'ceph-users' Subject: [ceph-users] Re: PGs stuck down Hi Dale, > we thought we had set it up to prevent.. and with size = 4 and > min_size set = 1 I'm afraid this is exactly what you didn't. Firstly, min_size=1 is always a bad idea. Secondly, if you
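For anyone finding this thread later: a minimal sketch of auditing replicated pools for min_size=1 and raising it back to 2 with the standard ceph osd pool commands. The output parsing is simplistic and erasure-coded pools would need separate treatment, so treat it as a starting point rather than a finished tool.

#!/usr/bin/env python3
"""Sketch: find pools running with min_size=1 and bump them back to 2.
Assumes the usual "min_size: N" text output of `ceph osd pool get`."""
import subprocess

def ceph(*args):
    return subprocess.run(["ceph", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

for pool in ceph("osd", "pool", "ls").splitlines():
    size = int(ceph("osd", "pool", "get", pool, "size").split(":")[1])
    min_size = int(ceph("osd", "pool", "get", pool, "min_size").split(":")[1])
    if min_size < 2 and size >= 2:
        print(f"{pool}: size={size} min_size={min_size} -> raising min_size to 2")
        ceph("osd", "pool", "set", pool, "min_size", "2")
    else:
        print(f"{pool}: size={size} min_size={min_size} (ok)")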

[ceph-users] PGs stuck down

2022-11-28 Thread Wolfpaw - Dale Corse
Hi All, We had a fiber cut tonight between 2 data centers, and a Ceph cluster didn't do very well :( We ended up with 98% of PGs down. This setup has 2 data centers defined, with 4 copies across both, and a min_size of 1. We have 1 mon/mgr in each DC, with one in a 3rd data
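One way to sanity-check a layout like this (not from the thread, just a sketch): walk the CRUSH tree and confirm every PG's acting set actually spans both data centers, since a rule that does not guarantee two copies per DC can leave PGs with no surviving replica after a fiber cut. It assumes CRUSH buckets of type "datacenter", and the exact JSON layout of ceph pg dump varies a little between releases, so the unwrapping below is best-effort.

#!/usr/bin/env python3
"""Sketch: report PGs whose acting set does not span both data centers."""
import json
import subprocess

def ceph_json(*args):
    out = subprocess.run(["ceph", *args, "-f", "json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

# Map every OSD id to the datacenter bucket it sits under.
tree = ceph_json("osd", "tree")
nodes = {n["id"]: n for n in tree["nodes"]}
osd_to_dc = {}
for node in tree["nodes"]:
    if node["type"] == "datacenter":
        stack = list(node.get("children", []))
        while stack:
            child = nodes.get(stack.pop())
            if child is None:
                continue
            if child["type"] == "osd":
                osd_to_dc[child["id"]] = node["name"]
            else:
                stack.extend(child.get("children", []))

# Flag any PG whose acting set lives entirely inside one DC (or unknown).
dump = ceph_json("pg", "dump")
pg_stats = dump.get("pg_map", dump).get("pg_stats", []) if isinstance(dump, dict) else dump
for pg in pg_stats:
    dcs = {osd_to_dc.get(osd, "unknown") for osd in pg["acting"]}
    if len(dcs) < 2:
        print(f"PG {pg['pgid']} only has copies in {sorted(dcs)}")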

[ceph-users] Re: loosing one node from a 3-node cluster

2022-04-04 Thread Wolfpaw - Dale Corse
Hi Felix, Where are your monitors located? Do you have one on each node? Dale Corse CEO/CTO Cell: 780-504-1756 24/7 NOC: 888-965-3729 www.wolfpaw.com
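A quick way to answer that question on a running cluster (a small sketch, assuming recent-ish key names in the ceph quorum_status JSON):

#!/usr/bin/env python3
"""Sketch: list the monitors and which of them are currently in quorum."""
import json
import subprocess

status = json.loads(subprocess.run(
    ["ceph", "quorum_status", "-f", "json"],
    capture_output=True, text=True, check=True).stdout)

in_quorum = set(status["quorum_names"])
for mon in status["monmap"]["mons"]:
    state = "in quorum" if mon["name"] in in_quorum else "OUT of quorum"
    print(f"mon.{mon['name']:10s} addr={mon.get('addr', '?'):25s} {state}")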

[ceph-users] Re: Even number of replicas?

2022-03-25 Thread Wolfpaw - Dale Corse
. -Original Message- From: Nico Schottelius [mailto:nico.schottel...@ungleich.ch] Sent: Friday, March 25, 2022 12:58 PM To: Wolfpaw - Dale Corse Cc: ceph-users@ceph.io Subject: [ceph-users] Re: Even number of replicas? Hey Dale, are you distributing your clusters over 4 DCs via dark fiber

[ceph-users] Re: Even number of replicas?

2022-03-25 Thread Wolfpaw - Dale Corse
Hi George, We use 4/2 for our deployment and it works fine - but it's a huge waste of space :) Our reason is that we want to be able to lose a data center and still have Ceph running. You could accomplish that with size=1 on an emergency basis, but we didn't like the redundancy loss.
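Rough numbers for a 4/2 layout like this, with the raw capacity figure made up purely for illustration:

# Back-of-the-envelope numbers for a 4/2 replicated layout split across two DCs.
raw_tib = 400                    # hypothetical total raw capacity across both DCs
size, min_size = 4, 2            # 4 copies, 2 required for I/O
copies_per_dc = size // 2        # 2 copies kept in each data center

usable_tib = raw_tib / size                    # 100 TiB usable -> the "huge waste of space"
survives_dc_loss = copies_per_dc >= min_size   # True: 2 surviving copies >= min_size
print(f"usable: {usable_tib} TiB, survives losing a DC: {survives_dc_loss}")

With two copies in each DC, losing a whole data center still leaves min_size copies so I/O keeps flowing; the price is that only a quarter of the raw capacity is usable.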

[ceph-users] Anyone using Crimson in production?

2022-03-02 Thread Wolfpaw - Dale Corse
Hi All, Just curious if anyone is using Crimson-OSD in production, or has any detail on how far from being considered stable it might be? Any input is appreciated, thank you :) Cheers, -Dale