On Thu, Jul 28, 2016 at 05:58:15PM +0530, Mohammed Rafi K C wrote:
>
> On 07/27/2016 04:33 PM, Raghavendra G wrote:
> >
> > On Wed, Jul 27, 2016 at 10:29 AM, Mohammed Rafi K C
> > <rkavu...@redhat.com> wrote:
> >
> >     Thanks for your feedback.
> >
> >     In fact the meta xlator is loaded only on fuse mounts. Is there any
> >     particular reason not to use the meta-autoload xlator for the nfs
> >     server and libgfapi?
> >
> >
> > I think it's because of lack of resources. I am not aware of any
> > technical reason for not using it on the NFSv3 server and gfapi.
>
> Cool. I will try to see how we can implement the meta-autoload feature
> for the nfs-server and libgfapi. Once we have the feature in place, I
> will implement the cache memory display/flush feature using the meta
> xlator.
In case you plan to have this ready in a month (before the end of
August), you should propose it as a 3.9 feature. Click the "Edit this
page on GitHub" link on the bottom of
https://www.gluster.org/community/roadmap/3.9/ :)

Thanks,
Niels

> Thanks for your valuable feedback.
>
> Rafi KC
>
> > Regards
> >
> > Rafi KC
> >
> > On 07/26/2016 04:05 PM, Niels de Vos wrote:
> >> On Tue, Jul 26, 2016 at 12:43:56PM +0530, Kaushal M wrote:
> >>> On Tue, Jul 26, 2016 at 12:28 PM, Prashanth Pai <p...@redhat.com> wrote:
> >>>> +1 to option (2), which is similar to echoing into
> >>>> /proc/sys/vm/drop_caches
> >>>>
> >>>> -Prashanth Pai
> >>>>
> >>>> ----- Original Message -----
> >>>>> From: "Mohammed Rafi K C" <rkavu...@redhat.com>
> >>>>> To: "gluster-users" <gluster-us...@gluster.org>,
> >>>>>     "Gluster Devel" <gluster-devel@gluster.org>
> >>>>> Sent: Tuesday, 26 July, 2016 10:44:15 AM
> >>>>> Subject: [Gluster-devel] Need a way to display and flush gluster cache ?
> >>>>>
> >>>>> Hi,
> >>>>>
> >>>>> The gluster stack has its own caching mechanism, mostly on the
> >>>>> client side. But there is no concrete method to see how much memory
> >>>>> is being consumed by gluster for caching, and if needed there is no
> >>>>> way to flush the cache memory.
> >>>>>
> >>>>> So my first question is: do we need to implement these two features
> >>>>> for the gluster cache?
> >>>>>
> >>>>> If so, I would like to discuss some of our thoughts towards it.
> >>>>>
> >>>>> (If you are not interested in the implementation discussion, you
> >>>>> can skip this part :)
> >>>>>
> >>>>> 1) Implement a virtual xattr on root: on setxattr, flush all the
> >>>>> cache, and on getxattr, print the aggregated cache size.
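[Editor's note: a rough sketch of how option 1 could work. The xattr names
(glusterfs.cache.size, glusterfs.cache.flush) are hypothetical, not an
existing gluster interface, and the class is a toy stand-in for a
client-side caching translator intercepting getxattr/setxattr on the root.]

```python
# Sketch of option 1: a virtual xattr pair on the root of the mount.
# Both names below are illustrative assumptions, not real gluster xattrs.

CACHE_SIZE_XATTR = "glusterfs.cache.size"    # getxattr -> aggregated cache size
CACHE_FLUSH_XATTR = "glusterfs.cache.flush"  # setxattr -> drop all cached data

class CachingXlator:
    """Toy stand-in for a client-side caching translator."""

    def __init__(self):
        self._cache = {}  # path -> cached file contents

    def cache_put(self, path, data):
        self._cache[path] = data

    def getxattr(self, path, name):
        # Intercept the virtual size key on the root instead of
        # passing the request further down the translator stack.
        if path == "/" and name == CACHE_SIZE_XATTR:
            return str(sum(len(v) for v in self._cache.values())).encode()
        raise OSError(f"no such attribute: {name}")

    def setxattr(self, path, name, value):
        # Any setxattr on the flush key drops the whole cache.
        if path == "/" and name == CACHE_FLUSH_XATTR:
            self._cache.clear()
            return
        raise OSError(f"no such attribute: {name}")
```

[From a client this would then look like
`getfattr -n glusterfs.cache.size /mnt/vol` and
`setfattr -n glusterfs.cache.flush -v 1 /mnt/vol`, again with hypothetical
names.]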
> >>>>>
> >>>>> 2) Currently the gluster native client supports a .meta virtual
> >>>>> directory to get metadata information, analogous to /proc. We can
> >>>>> implement a virtual file inside the .meta directory to read the
> >>>>> cache size. We can also flush the cache using a special write into
> >>>>> the file (similar to echoing into a proc file). This approach may
> >>>>> be difficult to implement in other clients.
> >>> +1 for making use of the meta-xlator. We should be making more use
> >>> of it.
> >> Indeed, this would be nice. Maybe this can also expose the memory
> >> allocations like /proc/slabinfo.
> >>
> >> The io-stats xlator can dump some statistics to
> >> /var/log/glusterfs/samples/ and /var/lib/glusterd/stats/ . That seems
> >> to be acceptable too, and allows getting statistics from server-side
> >> processes without involving any clients.
> >>
> >> HTH,
> >> Niels
> >>
> >>>>> 3) A cli command to display and flush the data, with ip and port
> >>>>> as arguments. GlusterD would need to send the op to the client from
> >>>>> the connected client list. But this approach would be difficult to
> >>>>> implement for libgfapi-based clients. For me, it doesn't seem to be
> >>>>> a good option.
> >>>>>
> >>>>> Your suggestions and comments are most welcome.
> >>>>>
> >>>>> Thanks to Talur and Poornima for their suggestions.
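[Editor's note: usage of option 2 could look like the sketch below,
assuming a hypothetical `.meta/cache_size` file. The `.meta` directory
does exist on fuse mounts, but no such cache file is implemented today;
reading reports the size, and writing flushes, mirroring the
`echo 3 > /proc/sys/vm/drop_caches` convention mentioned above.]

```python
# Sketch of option 2: a virtual file under the .meta directory of a mount.
# The file name ".meta/cache_size" is a hypothetical example.

def read_cache_size(mount):
    """Read the aggregated client-side cache size (bytes) from the
    virtual file."""
    with open(f"{mount}/.meta/cache_size") as f:
        return int(f.read())

def flush_cache(mount):
    """Flush the cache via a special write, analogous to echoing into
    /proc/sys/vm/drop_caches."""
    with open(f"{mount}/.meta/cache_size", "w") as f:
        f.write("1\n")
```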
> >>>>>
> >>>>> Regards
> >>>>>
> >>>>> Rafi KC
> >>>>>
> >>>>> _______________________________________________
> >>>>> Gluster-devel mailing list
> >>>>> Gluster-devel@gluster.org
> >>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
> >>> _______________________________________________
> >>> Gluster-users mailing list
> >>> gluster-us...@gluster.org
> >>> http://www.gluster.org/mailman/listinfo/gluster-users
> >
> > --
> > Raghavendra G