[ceph-users] Re: small cluster HW upgrade

2020-02-02 Thread Marc Roos
This is a natural condition of bonding; it has little to do with ceph-osd. Make sure your hash policy is set appropriately, so that you even have a chance of using both links.
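
The hash policy here is the bond's xmit_hash_policy. A minimal sketch of an LACP bond with layer3+4 hashing, assuming a Debian-style /etc/network/interfaces and two hypothetical interfaces eno1/eno2:

    # /etc/network/interfaces -- bond0 over eno1/eno2 (names are placeholders)
    auto bond0
    iface bond0 inet static
        address 192.168.10.11/24
        bond-slaves eno1 eno2
        bond-mode 802.3ad              # LACP; the switch ports must be configured to match
        bond-miimon 100                # link monitoring interval in ms
        bond-xmit-hash-policy layer3+4 # hash on IP+port, so distinct TCP flows can land on different links

With the default layer2 policy the hash covers only MAC addresses, so all traffic between two hosts is pinned to one slave; layer3+4 at least gives separate OSD connections a chance to spread across both links.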

[ceph-users] Re: small cluster HW upgrade

2020-02-02 Thread Anthony D'Atri
> Hi Philipp, > > More nodes is better: more availability, more CPU and more RAM. But I agree that your 1GbE link will be the most limiting factor, especially if there are some SSDs. I suggest you upgrade your networking to 10GbE.

[ceph-users] Re: small cluster HW upgrade

2020-02-02 Thread Marc Roos
OSDs do not even use bonding efficiently. If they could use two links concurrently it would be a lot better. https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
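
Whether a bond is actually spreading traffic is easy to check from the bond state and per-slave counters; a small sketch (bond and interface names are placeholders):

    # Show bonding mode, hash policy, and per-slave link state
    cat /proc/net/bonding/bond0

    # Per-slave byte counters; if one slave carries nearly all the traffic,
    # the hash policy is pinning the flows to a single link
    ip -s link show eno1
    ip -s link show eno2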

[ceph-users] Re: small cluster HW upgrade

2020-02-01 Thread mrxlazuardin
Hi Philipp, More nodes is better: more availability, more CPU and more RAM. But I agree that your 1GbE link will be the most limiting factor, especially if there are some SSDs. I suggest you upgrade your networking to 10GbE (or 25GbE, since it will cost you nearly the same as 10GbE). Upgrading you
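
Before spending on hardware it is worth confirming that the 1GbE link really is the bottleneck; one quick check is an iperf3 run between two nodes (hostnames are placeholders):

    # On node-b: start an iperf3 server
    iperf3 -s

    # On node-a: push traffic for 30 s over 4 parallel streams; ~940 Mbit/s
    # means the 1GbE link is saturated -- a single SATA SSD (~500 MB/s,
    # roughly 4 Gbit/s) can already outrun it
    iperf3 -c node-b -t 30 -P 4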