[ceph-users] Unexpected IOPS Ceph Benchmark Result

2019-04-20 Thread Muhammad Fakhri Abdillah
Hey everyone. Currently running a 4-node Proxmox cluster with an external Ceph cluster (Ceph on CentOS 7). Ceph OSDs are installed on 4 nodes; each node has the following specification: - 8-core Intel Xeon processor - 32GB RAM - 2 x 600GB SAS HDD for CentOS (RAID1 as the system disks) - 9 x 1200GB SAS HDD for data (R
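For this kind of IOPS measurement, a common baseline is rados bench against a throwaway pool; a minimal sketch (the pool name, PG count, block size, thread count, and runtimes below are illustrative assumptions, not from the original message):

    # create a test pool and run a 60-second 4KB write benchmark
    ceph osd pool create bench-test 128
    rados bench -p bench-test 60 write -b 4096 -t 16 --no-cleanup
    # measure random reads against the objects written above
    rados bench -p bench-test 60 rand -t 16
    # remove the test pool when finished
    ceph osd pool delete bench-test bench-test --yes-i-really-really-mean-it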

[ceph-users] Ceph Deploy issues

2019-04-20 Thread Sp, Madhumita
Hi All, can anyone please help here? I have tried installing Ceph on a physical server as a single-node cluster. Steps followed:
    rpm --import 'https://download.ceph.com/keys/release.asc'
    yum install http://download.ceph.com/rpm-mimic/el7/noarch/ceph-deploy-2.0.0-0.noarch.rpm
    ceph-deploy new ho
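The truncated last step above is presumably "ceph-deploy new <hostname>". For reference, a typical ceph-deploy 2.0 single-node sequence looks roughly like this (the hostname node1 and data disk /dev/sdb are illustrative assumptions):

    ceph-deploy new node1
    # a single-node cluster also needs replication across OSDs rather than
    # hosts, e.g. "osd crush chooseleaf type = 0" in ceph.conf before deploying
    ceph-deploy install --release mimic node1
    ceph-deploy mon create-initial
    ceph-deploy admin node1
    ceph-deploy mgr create node1
    ceph-deploy osd create --data /dev/sdb node1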

Re: [ceph-users] SOLVED: Multi-site replication speed

2019-04-20 Thread Brian Topping
Follow-up: this seems to be solved; thanks again for your help. I did have some issues with the replication that may have been solved by getting the metadata init/run finished first. I haven’t replicated that back to the production servers yet, but I’m a lot more comfortable with the behaviors by sett
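For reference, the "metadata init/run" mentioned above corresponds to the radosgw-admin metadata sync commands, run on the secondary zone; a minimal sketch (no zone names assumed):

    # restart metadata sync from scratch on the secondary zone
    radosgw-admin metadata sync init
    radosgw-admin metadata sync run
    # check replication progress afterwards
    radosgw-admin sync status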

Re: [ceph-users] unable to turn on pg_autoscale

2019-04-20 Thread Daniele Riccucci
For future reference, I solved it by running: ceph osd require-osd-release nautilus and then setting pg_autoscale_mode to "on" on each pool. Daniele On 05/04/19 17:25, Daniele Riccucci wrote: Hello, I'm running a (very) small cluster and I'd like to turn on pg_autoscale. In the documentation here > h
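Spelled out per pool, the fix above looks roughly like this (the pool name rbd is illustrative; on Nautilus the pg_autoscaler mgr module may also need enabling first):

    # make sure the cluster accepts Nautilus-only features
    ceph osd require-osd-release nautilus
    # enable the autoscaler module and turn it on for one pool
    ceph mgr module enable pg_autoscaler
    ceph osd pool set rbd pg_autoscale_mode on
    # verify the result
    ceph osd pool autoscale-status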

[ceph-users] Were the CephFS lock-ups fixed when it runs on nodes with OSDs?

2019-04-20 Thread Igor Podlesny
I remember seeing reports in this regard, but it's been a while now. Can anyone tell? -- End of message. Next message?