Hi All—
Thanks for the responses. I am mainly curious about the performance impact on 
read/write workloads associated with metadata updates as the number of nodes 
increases. Any commentary on the performance impact for various read/write and 
random/sequential I/O scenarios as the scale increases? I'm not particularly 
worried about the restart/reboot condition, as that is an edge case for us.


Thanks,
Mayur



From: Atin Mukherjee [mailto:amukh...@redhat.com]
Sent: Wednesday, November 1, 2017 8:53 PM
To: Mayur Dewaikar <mdewai...@commvault.com>; gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster Scale Limitations


On Tue, 31 Oct 2017 at 03:32, Mayur Dewaikar <mdewai...@commvault.com> wrote:
Hi all,
Are there any scale limitations in terms of how many nodes can be in a single 
Gluster Cluster or how much storage capacity can be managed in a single 
cluster? What are some of the large deployments out there that you know of?

The current design of GlusterD is not capable of handling very large numbers of 
nodes in a cluster, especially in the node restart/reboot scenario. We have 
heard of deployments with ~100-150 nodes where things are stable, but on a node 
reboot some tuning of parameters such as network.listen-backlog is required so 
that the TCP listen queue does not overflow and cause the connections between 
the bricks and glusterd to fail. The GlusterD2 project will definitely address 
this aspect of the problem.
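
For anyone who runs into this, the tweak is roughly along the following lines. 
This is only a sketch: the volume name and the value are placeholders, and the 
exact place to set the backlog can differ between releases, so please verify it 
for your version:

    # raise the listen backlog used for the brick <-> glusterd connections
    # ("myvol" and 1024 are placeholders, not recommendations)
    gluster volume set myvol network.listen-backlog 1024

    # the kernel's somaxconn limit caps any listen backlog, so it may need
    # to be raised to match
    sysctl -w net.core.somaxconn=1024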

Also, since the directory layout is replicated on all the bricks of a volume, 
mkdir, unlink, and other directory operations are costly, and with a larger 
number of bricks this hurts latency. We're also working on a project called RIO 
to address this issue.
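
A quick way to see why this cost grows with the brick count: a single mkdir on 
the client mount has to be carried out on every brick of the volume. The paths 
below are purely illustrative:

    # on a client with the volume mounted at /mnt/glustervol
    mkdir /mnt/glustervol/newdir

    # on the servers, the new directory now exists on every brick
    # backing the volume (brick paths are examples)
    ls -d /bricks/brick*/newdir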


Thanks,
Mayur


--
- Atin (atinm)
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
