On Wed, Jun 7, 2017 at 1:55 PM, Xie Changlong <xiechanglon...@gmail.com> wrote:
> On 6/7/2017 3:08 PM, Pranith Kumar Karampuri wrote:
>
>> On Tue, Jun 6, 2017 at 8:14 AM, Xie Changlong
>> <xiechanglon...@gmail.com> wrote:
>>
>>> On 6/5/2017 6:30 PM, Pranith Kumar Karampuri wrote:
>>>
>>>> I meant: what are you using gluster for? VM workloads, image/video
>>>> file creation, lots of small files, etc.?
>>>
>>> 1) We use glusterfs for general purposes, not limited to image/video
>>> file creation or small files.
>>
>> Okay, this is good. What is the cluster size?
>> Is it replica 3, replica 2, arbiter, or an EC volume?
>
> I use replica 2 in my tests.
> But in the real world, "we have deployed more than 100 glusterfs nodes
> in production (20 nodes for the biggest single cluster)", per Liu Yuan.
> Replica 2/3 or EC are all options.
>
>> Please don't mind me asking so many details. I am delighted to see you
>> guys active in the community, because I have seen Xiubo Li's work on
>> tcmu-runner; he is also from chinamobile, and his work is pretty good
>> :-).
>
> You are always welcome! And I would be very pleased to convey your
> praise. :)
>
>>> 2) We just want a way to calculate each brick's iops/bandwidth for
>>> the upper-layer management app, with low performance impact. BTW, is
>>> there any other way to get iops/bandwidth for each brick besides
>>> profile?
>>
>> At the moment we have these stats via the profile commands. As per the
>> Facebook patches, they enable JSON capture of io-stats on their
>> volumes and measure these. They have it enabled always. If that is not
>> enough for you, then we
>
> I think you mean the FB guys gather data based on io-stats with profile
> always on, am I right?
> BTW, are these patches open to everyone, so I can dig into them?

Yes, they do. The patches I gave earlier provide this facility in
io-stats. Do let us know if you have any doubts.

>> should probably look at enhancing things. Do send patches if you think
>> something would make things better :-).
>
> --
> Thanks
> -Xie

--
Pranith
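
For anyone reading this thread later: the profile approach discussed
above can be driven entirely from the gluster CLI. A minimal sketch,
with "testvol" as a placeholder volume name; per-brick bandwidth can be
derived from the Data Read / Data Written and Duration fields of the
info output, and iops from the fop hit counts:

    # Enable per-brick statistics collection on the volume (this adds
    # some bookkeeping overhead in the io-stats translator).
    gluster volume profile testvol start

    # Print cumulative and interval stats for every brick: fop counts,
    # latencies, and bytes read/written since the last query.
    gluster volume profile testvol info

    # Stop collection once measurements are done.
    gluster volume profile testvol stop

Polling "info" periodically and dividing the interval byte counts by
the interval duration gives a per-brick bandwidth figure; the fop
counts over the same window give iops.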
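
As for the always-on JSON capture of io-stats mentioned above, the
knobs ended up looking roughly like the following in later upstream
releases. The option names and the dump path here are assumptions based
on that later work, not a description of the exact patch series Pranith
referenced, so verify them against the patches themselves:

    # Assumed option names; confirm against the actual io-stats patches.
    gluster volume set testvol diagnostics.latency-measurement on
    gluster volume set testvol diagnostics.count-fop-hits on

    # Dump stats periodically (every 5 seconds here) instead of
    # requiring an explicit "profile info" call.
    gluster volume set testvol diagnostics.stats-dump-interval 5

    # The dumps land as files on each server; this path is what some
    # later releases use and may differ in the patch series.
    ls /var/lib/glusterd/stats/

This keeps the "profile always on" overhead model while making the
numbers machine-readable for an upper-layer management app.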