On Wed, Sep 6, 2017 at 4:49 PM, Raghavendra G
wrote:
>
>
> On Wed, Sep 6, 2017 at 11:16 AM, Csaba Henk wrote:
>
>> Thanks Du, nice bit of info! It made me wonder about the following:
>>
>> - Could it be then the default answer we give to "glusterfs client
>> high memory usage" type of complaints to set vfs_cache_pressure
>> to 100 + x?
>>
>> - And then x = ? Was there proper performance testing done to see how
>> performance /
On Wed, Sep 6, 2017 at 10:27 AM, Raghavendra G
wrote:
> Also we've an article for sysadmins which has a section:
>
> With GlusterFS, many users with a lot of storage and many small files
> easily end up using a lot of RAM on the server side

I think this article speaks about bricks. We ca
Another parallel effort could be trying to configure the number of
inodes/dentries cached by the kernel VFS using the /proc/sys/vm
interface. Quoting the kernel documentation (Documentation/sysctl/vm.txt):
==
vfs_cache_pressure
------------------
This percentage value controls the tendency of the kernel to reclaim
the memory which is used for caching of directory and inode objects.
At the default value of vfs_cache_pressure=100 the kernel will attempt
to reclaim dentries and inodes at a "fair" rate with respect to
pagecache and swapcache reclaim. Increasing vfs_cache_pressure beyond
100 causes the kernel to prefer to reclaim dentries and inodes.
==
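For concreteness, this is how one would inspect and raise that knob (a
sketch only; 200 is just an arbitrary example for the "100 + x" discussed
above, not a tested recommendation):

```shell
# Show the current value (the kernel default is 100)
cat /proc/sys/vm/vfs_cache_pressure

# Raise it so the kernel prefers reclaiming dentry/inode caches;
# needs root. 200 is an example stand-in for "100 + x".
sysctl -w vm.vfs_cache_pressure=200

# To persist across reboots, add to /etc/sysctl.conf:
#   vm.vfs_cache_pressure = 200
```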
+gluster-devel

Ashish just spoke to me about the need for GC of inodes, due to some
state in the inode that is being proposed in EC. Hence adding more
people to the conversation.
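To make the GC discussion concrete, here is a toy sketch (hypothetical
Python, not GlusterFS code; the names InodeTable/ref/unref are
illustrative) of an inode table where inodes whose refcount drops to
zero move to an LRU list and are collected once a size cap is exceeded:

```python
from collections import OrderedDict

class InodeTable:
    """Toy LRU inode table: inodes with refcount 0 become GC
    candidates; the oldest are evicted once the table exceeds
    max_entries. Purely illustrative."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.active = {}          # gfid -> refcount (> 0)
        self.lru = OrderedDict()  # gfids with refcount 0, oldest first

    def ref(self, gfid):
        # An LRU inode that gets referenced again becomes active.
        if gfid in self.lru:
            self.lru.pop(gfid)
        self.active[gfid] = self.active.get(gfid, 0) + 1

    def unref(self, gfid):
        self.active[gfid] -= 1
        if self.active[gfid] == 0:
            del self.active[gfid]
            self.lru[gfid] = None  # now a GC candidate
            self._gc()

    def _gc(self):
        # Evict the oldest unreferenced inodes beyond the cap.
        while len(self.active) + len(self.lru) > self.max_entries and self.lru:
            self.lru.popitem(last=False)
```

The point of the sketch: eviction only ever touches the zero-refcount
list, so any "state in the inode" that must survive while the inode is
held (the EC case above) is safe as long as a ref is kept on it.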
> > On 4 September 2017 at 12:34, Csaba Henk wrote:
> >
> > > I don't know, depends on how sophisticated GC we need/want/can get