[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread lin yunfan
...these bigger chassis can also be deeper to get rid of the compromises. > The reality with an OSD node is you don't need that many slots or network ports.

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread Richard Hesketh
On Thu, 2020-04-23 at 09:08 +0200, Janne Johansson wrote: > On Thu 23 Apr 2020 at 08:49, Darren Soothill <darren.sooth...@suse.com> wrote: > > If you want the lowest cost per TB then you will be going with larger nodes in your cluster, but it does mean your minimum cluster size is

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread Martin Verges
...OSD node is you don't need that many slots or network ports.

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread Darren Soothill
...On Thu 23 Apr 2020 at 08:49, Darren Soothill <darren.sooth...@suse.com> wrote: If you want the lowest cost per TB th

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread Janne Johansson
On Thu 23 Apr 2020 at 08:49, Darren Soothill <darren.sooth...@suse.com> wrote: > If you want the lowest cost per TB then you will be going with larger nodes in your cluster, but it does mean your minimum cluster size is going to be many PBs in size. > Now the question is what is the tax
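A rough back-of-the-envelope sketch of that minimum-cluster-size point; the bay count, drive size, and node count below are illustrative assumptions, not figures from the thread:

# Illustrative only: why dense "lowest cost per TB" nodes imply a multi-PB minimum cluster.
bays_per_node = 60
drive_tb = 8
min_nodes = 7                                    # assumed: enough hosts for replication/EC plus failure headroom

node_raw_tb = bays_per_node * drive_tb           # 480 TB raw per node
cluster_raw_pb = min_nodes * node_raw_tb / 1000  # ~3.4 PB raw
print(f"{cluster_raw_pb:.1f} PB raw before the cluster is even at its minimum size")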

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-23 Thread Darren Soothill
...to go and will give you the lowest price point. > From all our calculations of clusters, going

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Martin Verges
From all our calculations of clusters, going with smaller systems reduced the TCO because of much cheaper hardware. Having 100 Ceph nodes is not an issue, so you can scale small and large clusters with the exact same hardware. But please, prove me wrong. I would love to see a way to reduce

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread lin . yunfan
Big nodes are mostly for HDD clusters, and with a 40G or 100G NIC I don't think the network would be the bottleneck.
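A quick sanity-check sketch of that claim; the per-drive throughput and drive count are rough assumptions, not numbers from the thread:

# Can one dense HDD node saturate a 40G/100G NIC? Illustrative figures only.
hdd_count = 60          # drives in a dense chassis (assumption)
hdd_mb_s = 150          # rough sustained sequential MB/s per HDD (assumption)
nic_gbit = 100

aggregate_gbit = hdd_count * hdd_mb_s * 8 / 1000   # ~72 Gbit/s of raw disk bandwidth
print(f"disks ~{aggregate_gbit:.0f} Gbit/s vs NIC {nic_gbit} Gbit/s")
# Random or mixed I/O rarely reaches the sequential figure, so in practice the NIC has even more headroom.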

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Jarett DeAngelis
Well, for starters, "more network" = "faster cluster." On Wed, Apr 22, 2020 at 11:18 PM lin.yunfan wrote: > I have seen a lot of people saying not to go with big nodes. What is the exact reason for that? I can understand that if the cluster is not big enough then the total node count

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread lin . yunfan
I have seen a lot of people saying not to go with big nodes. What is the exact reason for that? I can understand that if the cluster is not big enough then the total node count could be too small to withstand a node failure, but if the cluster is big enough
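One way to make the failure concern concrete; a minimal sketch assuming data is spread evenly and the cluster rebalances onto the surviving nodes (the 480 TB node capacity is borrowed from the 60-bay, 8 TB example elsewhere in the thread):

# Simplified model of recovery load after losing one full node.
# Real CRUSH placement, pool rules, and backfill throttling are more nuanced.
def rebuild_share_tb(total_nodes: int, node_capacity_tb: float) -> float:
    """TB each surviving node must absorb when one node fails."""
    return node_capacity_tb / (total_nodes - 1)

for nodes in (4, 10, 40):
    print(f"{nodes} nodes: ~{rebuild_share_tb(nodes, 480.0):.0f} TB onto each survivor")

With only a handful of big nodes, each survivor also needs enough free capacity to absorb its share, which is the usual argument against small clusters of very large nodes.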

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Brian Topping
Great set of suggestions, thanks! One to consider: > On Apr 22, 2020, at 4:14 PM, Jack wrote: > > I use 32GB flash-based SATADOM devices as the root device. They are basically SSDs and do not take front slots. As they never burn out, we never replace them. Ergo, the need to "open" the

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Jack
Hi, On 4/22/20 11:47 PM, cody.schm...@iss-integration.com wrote: > Example 1: 8x 60-bay (8TB) storage nodes (480x 8TB SAS drives). Storage node spec: 2x 32C 2.9GHz AMD EPYC. - Documentation mentions 0.5 cores per OSD for throughput-optimized. Are they talking about 0.5 physical cores
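A rough sketch of the core budget being asked about, assuming the 0.5 figure means physical cores per HDD OSD (the interpretation the question is trying to confirm):

# Core budget for Example 1, assuming 0.5 *physical* cores per HDD OSD.
# If the guideline actually counts hyperthreads, the physical-core need is roughly half.
osds_per_node = 60
cores_per_osd = 0.5
physical_cores = 2 * 32            # 2x 32C AMD EPYC

needed = osds_per_node * cores_per_osd
print(f"need ~{needed:.0f} cores per node, have {physical_cores} physical cores")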