Hi Stefan - 

Thanks once again for taking the time to explain.

Regards
Radha Krishnan S
TCS Enterprise Cloud Practice
Tata Consultancy Services
Cell:- +1 848 466 4870
Mailto: radhakrishnan...@tcs.com
Website: http://www.tcs.com


-----"Stefan Kooman" <ste...@bit.nl> wrote: -----
To: "Radhakrishnan2 S" <radhakrishnan...@tcs.com>
From: "Stefan Kooman" <ste...@bit.nl>
Date: 12/31/2019 07:41AM
Cc: ceph-users@lists.ceph.com, "ceph-users" <ceph-users-boun...@lists.ceph.com>
Subject: Re: [ceph-users] Architecture - Recommendations

"External email. Open with Caution"

Quoting Radhakrishnan2 S (radhakrishnan...@tcs.com):
> In addition, putting all kinds of drives in one box was done for two
> reasons:
> 
> 1. Avoid CPU choking
This depends only on what kind of hardware you select and how you
configure it. You can (if need be) restrict the number of CPUs the Ceph
daemons get, with cgroups for example ... (or use containers).
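
A rough sketch of the cgroup route via a systemd drop-in (the quota
value and file name are just examples, not a recommendation from this
thread):

  # Cap every OSD daemon on the node at roughly 4 cores (example value)
  mkdir -p /etc/systemd/system/ceph-osd@.service.d
  printf '[Service]\nCPUQuota=400%%\n' \
      > /etc/systemd/system/ceph-osd@.service.d/cpu.conf
  systemctl daemon-reload
  systemctl restart ceph-osd.target
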
Radha: Containers are definitely in the pipeline, with just ephemeral and no 
persistent storage, but we want to take baby steps and not do everything at 
once. At the moment we have dual sockets on the OSD nodes, a single socket on 
the monitor nodes and again dual sockets on the gateway nodes, with 18 cores 
per socket. I'm planning to increase that to 20 cores per socket in our 
production deployment, as our NVMe drives are going to carry 4 OSDs per drive.
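
For reference, splitting an NVMe drive into several OSDs is usually done
with ceph-volume's batch mode; a sketch with a placeholder device name:

  # Four OSDs on one NVMe device; repeat per device
  ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1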

> 2. Example: if my cluster has 20 nodes in total, then all 20 nodes
> will have NVMe, SSD and NL-SAS; this way I'll get more capacity and
> performance compared to homogeneous nodes. If I have to break the 20
> nodes into 5 NVMe-based, 5 SSD-based and the remaining 10
> spindle-based with NVMe acting as bcache, then I'm restricting the
> drive count and thereby the IO density / performance. Please advise
> in detail based on your production deployments.

The drawback of all types of disk in one box is that all pools in your
cluster are affected when one node goes down.

If your storage needs change in the future then it does not make sense
to buy similar boxes. I.e. it's cheaper to buy dedicated boxes for,
say, spinners only, if you end up needing that (lower CPU requirements,
cheaper boxes). You need to decide if you want max performance or max
capacity.
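
As an aside, the "NVMe as bcache in front of spinners" setup mentioned
in the quoted question is usually assembled roughly like this (device
names and the writeback choice are placeholders, not from this thread):

  # Format the backing HDD and the NVMe caching partition
  make-bcache -B /dev/sdb
  make-bcache -C /dev/nvme0n1p1
  # Attach the cache set to the new bcache device and enable writeback
  bcache-super-show /dev/nvme0n1p1 | grep cset.uuid
  echo <cset-uuid> > /sys/block/bcache0/bcache/attach   # fill in the uuid
  echo writeback > /sys/block/bcache0/bcache/cache_mode
  # /dev/bcache0 is then handed to ceph-volume as the OSD device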

Radha: Performance is important from the block storage offering perspective. 
We have Rep3 pools for NVMe and SSDs each, while EC 6+4 is used for the 
spinning-media pool with NVMe as bcache. The spinning-media pool will host 
both S3 and a Tier 3 block storage target. With that said, performance is 
extremely important for the NVMe and SSD pools, while it's equally needed for 
the Tier 3 / S3 pool as well. I'm not against going to a homogeneous set of 
nodes, just worried about the reduction in IO density. Our current model is 
4 NVMe, 10 SSD and 12 HDD per node.
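
For what it's worth, that kind of pool layout is normally pinned to
drive types with CRUSH device classes, so each pool only touches one
class; a sketch along those lines (pool/rule names and PG counts are
made up, and it assumes the bcache-backed OSDs carry the hdd class):

  # Replicated (size 3) pools on NVMe and SSD
  ceph osd crush rule create-replicated rep-nvme default host nvme
  ceph osd crush rule create-replicated rep-ssd  default host ssd
  ceph osd pool create block-nvme 256 256 replicated rep-nvme
  ceph osd pool create block-ssd  256 256 replicated rep-ssd
  ceph osd pool set block-nvme size 3
  ceph osd pool set block-ssd  size 3

  # EC 6+4 pool for the S3 / Tier 3 data on the HDD class
  ceph osd erasure-code-profile set ec-6-4 k=6 m=4 \
      crush-failure-domain=host crush-device-class=hdd
  ceph osd pool create tier3-s3 512 512 erasure ec-6-4

With rules like these each pool maps to one device class, which is also
why a mixed node going down touches all of the pools at once.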

More, smaller nodes mean the overall impact when one node fails is much
smaller. Just check what your budget allows you to buy with
"all-in-one" boxes versus "dedicated" boxes.

Are you planning on dedicated monitor nodes (I would definitely do
that)?
Radha: Yes, we have 3 physical nodes dedicated to monitors, and we plan to 
increase the monitor count to 5 once the cluster grows beyond 50 nodes.
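
Adding the two extra monitors later is a small step; a sketch assuming
a ceph-deploy managed cluster and placeholder hostnames:

  # From the deploy/admin node; mon04 and mon05 are placeholders
  ceph-deploy mon add mon04
  ceph-deploy mon add mon05
  # Check that all five monitors form quorum
  ceph quorum_status --format json-pretty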

Gr. Stefan

-- 
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / i...@bit.nl


