[ceph-users] OSD upgrades

2020-06-01 Thread Brent Kennedy
We are rebuilding servers, and before Luminous our process was: 1. Reweight the OSD to 0. 2. Wait for the rebalance to complete. 3. Out the OSD. 4. crush remove osd. 5. auth del osd. 6. ceph osd rm # It seems the Luminous documentation says that you should: 1.
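For reference, the pre-Luminous sequence described above maps roughly to commands like the following (a sketch, not the poster's exact commands; osd.12 is a placeholder, and "reweight to 0" could mean either the crush weight or the override reweight). Luminous also added ceph osd purge, which folds the last three steps into one.

    # Drain the OSD and wait for the rebalance (watch "ceph -s" until no misplaced objects)
    ceph osd crush reweight osd.12 0
    ceph osd out 12
    systemctl stop ceph-osd@12
    # Remove it from the crush map, the auth database and the OSD map
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12
    # Since Luminous the last three steps can be collapsed into:
    ceph osd purge 12 --yes-i-really-mean-it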

[ceph-users] 15.2.3 Crush Map Viewer problem.

2020-06-01 Thread Marco Pizzolo
Hello Everyone, We're working on a new cluster and seeing some oddities. The crush map viewer is not showing all hosts or OSDs. The cluster is NVMe with 4 hosts, each having 8 NVMe drives, using 2 OSDs per NVMe and encryption, with a max size of 3 and min size of 2: [image: image.png] All OSDs appear to exist
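The dashboard view can be cross-checked from the CLI; a minimal sketch of the commands that show the same hierarchy the crush map viewer renders:

    # Hosts, OSDs and weights as the cluster sees them
    ceph osd tree
    ceph osd crush tree
    # Full crush map dump, useful for spotting OSDs missing from a host bucket
    ceph osd crush dump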

[ceph-users] Thread::try_create(): pthread_create failed

2020-06-01 Thread 展荣臻(信泰)
Hi all, We have a Hammer ceph cluster with 3 monitors and 324 OSDs. OSD daemons and KVM are collocated on the nodes; the cluster has been running for 2 years. Recently we added ~700 OSDs to the cluster, using this process: 1. ceph osd create 2. mkdir -p /var/lib/ceph/osd/ceph-$osd 3. mkfs.xfs -f /dev/$disk 4. mount -o
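pthread_create() failing on nodes this dense is usually a kernel or per-user thread limit rather than something Ceph-specific; a hedged sketch of the usual checks (the values shown are illustrative, not recommendations):

    # Per-user process/thread cap for the user running the OSDs
    ulimit -u
    # System-wide limits that OSD threads and memory mappings count against
    sysctl kernel.pid_max kernel.threads-max vm.max_map_count
    # Illustrative bump, persisted via /etc/sysctl.conf on the OSD nodes
    sysctl -w kernel.pid_max=4194303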

[ceph-users] Re: Deploy Ceph on the secondary datacenter for DR

2020-06-01 Thread Nghia Viet Tran
Hi Wido, It is a Java application that uses librados to connect directly to the cluster. -- Nghia Viet Tran (Mr) mgm technology partners Vietnam Co. Ltd 7 Phan Châu Trinh Đà Nẵng, Vietnam +84 935905659 nghia.viet.t...@mgm-tp.com www.mgm-tp.com Visit us on LinkedIn and Facebook! Innovation

[ceph-users] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130

2020-06-01 Thread Gencer W . Genç
Hi All, I am trying to upgrade Ceph 15.2.1 to 15.2.3. I have a two-node setup in a small environment, for testing only. I ran the following commands: $ ceph mon ok-to-stop mon.vx-rg23-rk65-u43-130 >> quorum should be preserved (vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1) >>after stopping
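For reference, the usual way to check this before stopping a monitor looks something like the sketch below (the mon name is taken from the message above; bear in mind that monitor quorum needs a strict majority, so with exactly two monitors neither one can be stopped without losing quorum):

    # Current monitor membership and quorum
    ceph mon stat
    ceph quorum_status -f json-pretty
    # Ask whether quorum would survive stopping this particular mon
    ceph mon ok-to-stop mon.vx-rg23-rk65-u43-130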

[ceph-users] Ceph Orchestrator 2020-06-01 Meeting recording

2020-06-01 Thread Mike Perez
Hi everyone, Our Ceph Orchestrator meeting recording for 2020-06-01 is now available: https://www.youtube.com/watch?v=4oGb86RNPRs=youtu.be -- Mike Perez He/Him Ceph Community Manager Red Hat Los Angeles thin...@redhat.com M:

[ceph-users] Re: Deploy Ceph on the secondary datacenter for DR

2020-06-01 Thread Wido den Hollander
On 6/1/20 6:46 AM, Nghia Viet Tran wrote: > Hi everyone, > Currently, our client application and Ceph cluster are running in the primary datacenter. We're planning to deploy Ceph in the secondary datacenter for DR. The secondary datacenter is in standby mode. If something

[ceph-users] Re: Using Ceph-ansible for a luminous -> nautilus upgrade?

2020-06-01 Thread Michał Nasiadka
Hi, I've been through this path recently, and using the rolling_upgrade playbook from stable-4.0 worked just fine. Kind regards, Michal > On 1 Jun 2020, at 15:21, Matthew Vernon wrote: > Hi, > For previous Ceph version upgrades, we've used the rolling_upgrade playbook from Ceph-ansible
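For context, a ceph-ansible rolling upgrade is normally driven like this (a sketch; the inventory path is a placeholder, group_vars is assumed to already point at the target release, and the playbook file is rolling_update.yml under infrastructure-playbooks/ in recent branches):

    git clone -b stable-4.0 https://github.com/ceph/ceph-ansible.git
    cd ceph-ansible
    # group_vars/all.yml should set the target release, e.g. ceph_stable_release: nautilus
    ansible-playbook -i <inventory> infrastructure-playbooks/rolling_update.yml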

[ceph-users] Using Ceph-ansible for a luminous -> nautilus upgrade?

2020-06-01 Thread Matthew Vernon
Hi, For previous Ceph version upgrades, we've used the rolling_upgrade playbook from Ceph-ansible - for example, the stable-3.0 branch supports both Jewel and Luminous, so we used it to migrate our clusters from Jewel to Luminous. As I understand it, upgrading direct from Luminous to
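Whichever tool drives the upgrade, it is worth checking the running daemon versions before and after; a minimal sketch (nautilus being the target release discussed here):

    # Confirm which release each daemon type is actually running
    ceph versions
    # Once every OSD is upgraded, raise the minimum required OSD release
    ceph osd require-osd-release nautilus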