Re: [Gluster-devel] inode lru limit

2014-06-03 Thread Raghavendra Gowdappa
> >> Hi,
> >>
> >> But as of now the inode table is bound to bound_xl which is associated
> >> with
> >> the client_t object for the client being connected. As part of fops we can
> >> get the bound_xl (thus the inode table) from the rpc request
> >> (req->trans->xl_private). But in reconfigure we get just the xlator
> >> pointer
> >> of protocol/server and dict containing new options.
> >>
> >> So what I am planning is this. If the xprt_list (the transport list
> >> corresponding to the clients mounted) is empty, then just set the private
> >> structure's variable for the lru limit (which will be used to create the
> >> inode table when a client mounts). If the xprt_list of protocol/server's
> >> private structure is not empty, then get one of the transports from that
> >> list and get the client_t object corresponding to the transport, from
> >> which bound_xl is obtained (all the client_t objects share the same inode
> >> table). Then the pointer to the inode table is obtained from bound_xl,
> >> its lru limit variable is also set to the value specified via the CLI,
> >> and inode_table_prune is called to purge the extra inodes.
> > In the above proposal, if there are no active clients, the lru limit of the
> > itable is not reconfigured. Here are two options to improve the correctness
> > of your proposal.


> If there are no active clients, then there will not be any itable. The
> itable will be created when the 1st client connects to the brick. And while
> creating the itable we use the inode_lru_limit variable present in
> protocol/server's private structure, and the inode table that is created
> also saves the same value.

A current client count of zero doesn't mean that itables are absent in bound_xl.
There can be previous connections which resulted in itable creation.

> > 1. On a successful handshake, you check whether the lru_limit of the itable
> > is equal to the configured value. If not equal, set it to the configured
> > value and prune the itable. The cost is that you check the inode table's
> > lru limit on every client connection.
> On a successful handshake, for the 1st client the inode table will be
> created with the lru_limit value saved in protocol/server's private. For
> further handshakes, since the inode table is already there, new inode tables
> will not be created. So instead of waiting for a new handshake to happen to
> set the lru_limit and purge the inode table, I think it's better to do it as
> part of reconfigure itself.
> >
> > 2. Traverse through the list of all xlators (since there is no easy way of
> > finding potential candidates for bound_xl other than peeking into options
> > specific to authentication) and if there is an itable associated with that
> > xlator, set its lru limit and prune it. The cost here is traversing the
> > list of xlators. However, our xlator list in a brick process is relatively
> > small, so this shouldn't have too much performance impact.
> >
> > Comments are welcome.
> 
> Regards,
> Raghavendra Bhat
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] inode lru limit

2014-06-02 Thread Raghavendra Bhat

On Tuesday 03 June 2014 07:00 AM, Raghavendra Gowdappa wrote:


- Original Message -

From: "Raghavendra Bhat" 
To: gluster-devel@gluster.org
Cc: "Anand Avati" 
Sent: Monday, June 2, 2014 6:41:30 PM
Subject: Re: [Gluster-devel] inode lru limit

On Monday 02 June 2014 11:06 AM, Raghavendra G wrote:





On Fri, May 30, 2014 at 2:24 PM, Raghavendra Bhat < rab...@redhat.com >
wrote:



Hi,

Currently the lru-limit of the inode table in brick processes is 16384. There
is an option to configure it to some other value. The protocol/server uses the
inode_lru_limit variable present in its private structure while creating the
inode table (whose default value is 16384). When the option is reconfigured
via a volume set option, the protocol/server's inode_lru_limit variable present
in its private structure is changed. But the actual size of the inode table
still remains the same as the old one. Only when the brick is restarted does
the newly set value come into effect. Is that OK? Should we change the inode
table's lru_limit variable also as part of reconfigure? If so, then probably
we might have to remove the extra inodes present in the lru list by calling
inode_table_prune.

Yes, I think we should change the inode table's lru limit too and call
inode_table_prune. From what I know, I don't think this change would cause
any problems.


But as of now the inode table is bound to bound_xl, which is associated with
the client_t object for the client being connected. As part of fops we can
get the bound_xl (and thus the inode table) from the rpc request
(req->trans->xl_private). But in reconfigure we get just the xlator pointer
of protocol/server and a dict containing the new options.

So what I am planning is this. If the xprt_list (the transport list
corresponding to the clients mounted) is empty, then just set the private
structure's variable for the lru limit (which will be used to create the inode
table when a client mounts). If the xprt_list of protocol/server's private
structure is not empty, then get one of the transports from that list and get
the client_t object corresponding to the transport, from which bound_xl is
obtained (all the client_t objects share the same inode table). Then the
pointer to the inode table is obtained from bound_xl, its lru limit variable
is also set to the value specified via the CLI, and inode_table_prune is
called to purge the extra inodes.
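
Concretely, something along these lines is what I have in mind inside
protocol/server's reconfigure (a rough sketch only, not tested: the helper
name is made up, the struct and field names follow the description above,
the assumption that a transport's xl_private leads to the client_t comes from
the fops path described earlier, and inode_table_prune() is assumed to be
callable from here or wrapped in an equivalent libglusterfs helper):

/* Sketch, not tested.  Assumes the usual GlusterFS internal headers,
 * that xprt->xl_private points at the client_t, and that
 * inode_table_prune() (or an equivalent helper) is visible here. */
static void
server_reconf_inode_lru_limit (xlator_t *this, uint32_t new_limit)
{
        server_conf_t   *conf   = this->private;
        rpc_transport_t *xprt   = NULL;
        client_t        *client = NULL;
        inode_table_t   *itable = NULL;

        /* Always remember the new limit; it is used when the itable is
         * created on the first client connect. */
        conf->inode_lru_limit = new_limit;

        if (list_empty (&conf->xprt_list))
                return;

        /* Any one connected transport will do: all clients of this brick
         * share the same bound_xl and hence the same inode table. */
        list_for_each_entry (xprt, &conf->xprt_list, list) {
                client = xprt->xl_private;   /* assumption: client_t here */
                break;
        }

        if (!client || !client->bound_xl || !client->bound_xl->itable)
                return;

        itable = client->bound_xl->itable;
        itable->lru_limit = new_limit;
        inode_table_prune (itable);          /* purge inodes above the limit */
}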

In the above proposal, if there are no active clients, the lru limit of the
itable is not reconfigured. Here are two options to improve the correctness of
your proposal.

If there are no active clients, then there will not be any itable. The itable
will be created when the 1st client connects to the brick. And while creating
the itable we use the inode_lru_limit variable present in protocol/server's
private structure, and the inode table that is created also saves the same
value.

1. On a successful handshake, you check whether the lru_limit of the itable is
equal to the configured value. If not equal, set it to the configured value
and prune the itable. The cost is that you check the inode table's lru limit
on every client connection.

On a successful handshake, for the 1st client the inode table will be created
with the lru_limit value saved in protocol/server's private. For further
handshakes, since the inode table is already there, new inode tables will not
be created. So instead of waiting for a new handshake to happen to set the
lru_limit and purge the inode table, I think it's better to do it as part of
reconfigure itself.


2. Traverse through the list of all xlators (since there is no easy way of
finding potential candidates for bound_xl other than peeking into options
specific to authentication) and if there is an itable associated with that
xlator, set its lru limit and prune it. The cost here is traversing the list
of xlators. However, our xlator list in a brick process is relatively small,
so this shouldn't have too much performance impact.

Comments are welcome.
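
If option 2 were chosen, the reconfigure-side change would look roughly like
this (a sketch only, not tested: the helper name is made up, it assumes the
brick's xlators can be walked from protocol/server via the ->next pointers,
and that inode_table_prune() or an equivalent helper is reachable here):

/* Sketch, not tested: walk every xlator in the brick graph and, wherever
 * an inode table exists, apply the new lru limit and prune it. */
static void
server_reconf_itables_by_walk (xlator_t *this, uint32_t new_limit)
{
        xlator_t *trav = NULL;

        for (trav = this; trav != NULL; trav = trav->next) {
                if (!trav->itable)
                        continue;

                trav->itable->lru_limit = new_limit;
                inode_table_prune (trav->itable);
        }
}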


Regards,
Raghavendra Bhat

Does it sound OK?

Regards,
Raghavendra Bhat

Regards,
Raghavendra Bhat






Please provide feedback


Regards,
Raghavendra Bhat
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel



--
Raghavendra G


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] inode lru limit

2014-06-02 Thread Raghavendra Gowdappa


- Original Message -
> From: "Raghavendra Bhat" 
> To: gluster-devel@gluster.org
> Cc: "Anand Avati" 
> Sent: Monday, June 2, 2014 6:41:30 PM
> Subject: Re: [Gluster-devel] inode lru limit
> 
> On Monday 02 June 2014 11:06 AM, Raghavendra G wrote:
> 
> 
> 
> 
> 
> On Fri, May 30, 2014 at 2:24 PM, Raghavendra Bhat < rab...@redhat.com >
> wrote:
> 
> 
> 
> Hi,
> 
> Currently the lru-limit of the inode table in brick processes is 16384. There
> is an option to configure it to some other value. The protocol/server uses the
> inode_lru_limit variable present in its private structure while creating the
> inode table (whose default value is 16384). When the option is reconfigured
> via a volume set option, the protocol/server's inode_lru_limit variable present
> in its private structure is changed. But the actual size of the inode table
> still remains the same as the old one. Only when the brick is restarted does
> the newly set value come into effect. Is that OK? Should we change the inode
> table's lru_limit variable also as part of reconfigure? If so, then probably we
> might have to remove the extra inodes present in the lru list by calling
> inode_table_prune.
> 
> Yes, I think we should change the inode table's lru limit too and call
> inode_table_prune. From what I know, I don't think this change would cause
> any problems.
> 
> 
> But as of now the inode table is bound to bound_xl, which is associated with
> the client_t object for the client being connected. As part of fops we can
> get the bound_xl (and thus the inode table) from the rpc request
> (req->trans->xl_private). But in reconfigure we get just the xlator pointer
> of protocol/server and a dict containing the new options.
> 
> So what I am planning is this. If the xprt_list (the transport list
> corresponding to the clients mounted) is empty, then just set the private
> structure's variable for the lru limit (which will be used to create the inode
> table when a client mounts). If the xprt_list of protocol/server's private
> structure is not empty, then get one of the transports from that list and get
> the client_t object corresponding to the transport, from which bound_xl is
> obtained (all the client_t objects share the same inode table). Then the
> pointer to the inode table is obtained from bound_xl, its lru limit variable is
> also set to the value specified via the CLI, and inode_table_prune is called to
> purge the extra inodes.

In the above proposal, if there are no active clients, the lru limit of the
itable is not reconfigured. Here are two options to improve the correctness of
your proposal.

1. On a successful handshake, you check whether the lru_limit of the itable is
equal to the configured value. If not equal, set it to the configured value and
prune the itable. The cost is that you check the inode table's lru limit on
every client connection (a rough sketch follows below).

2. Traverse through the list of all xlators (since there is no easy way of
finding potential candidates for bound_xl other than peeking into options
specific to authentication) and if there is an itable associated with that
xlator, set its lru limit and prune it. The cost here is traversing the list of
xlators. However, our xlator list in a brick process is relatively small, so
this shouldn't have too much performance impact.

Comments are welcome.
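
For option 1, the check would boil down to something like this in the
post-handshake path, once bound_xl is known (a sketch only, not tested: the
helper name is made up, the exact hook point in the handshake code is an
assumption, and inode_table_prune() is assumed to be callable from
protocol/server or wrapped in an equivalent helper):

/* Sketch, not tested: run after a client handshake succeeds, once bound_xl
 * (and therefore its itable) is known.  Field names follow the discussion
 * above. */
static void
server_check_itable_lru_limit (xlator_t *bound_xl, server_conf_t *conf)
{
        inode_table_t *itable = bound_xl ? bound_xl->itable : NULL;

        if (!itable)
                return;

        if (itable->lru_limit != conf->inode_lru_limit) {
                itable->lru_limit = conf->inode_lru_limit;
                inode_table_prune (itable);
        }
}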

> 
> Does it sound OK?
> 
> Regards,
> Raghavendra Bhat
> 
> Regards,
> Raghavendra Bhat
> 
> 
> 
> 
> 
> 
> Please provide feedback
> 
> 
> Regards,
> Raghavendra Bhat
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
> 
> 
> 
> --
> Raghavendra G
> 
> 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] inode lru limit

2014-06-02 Thread Raghavendra Bhat

On Monday 02 June 2014 11:06 AM, Raghavendra G wrote:



On Fri, May 30, 2014 at 2:24 PM, Raghavendra Bhat wrote:



Hi,

Currently the lru-limit of the inode table in brick processes is
16384. There is an option to configure it to some other value. The
protocol/server uses the inode_lru_limit variable present in its
private structure while creating the inode table (whose default
value is 16384). When the option is reconfigured via a volume set
option, the protocol/server's inode_lru_limit variable present in
its private structure is changed. But the actual size of the inode
table still remains the same as the old one. Only when the brick is
restarted does the newly set value come into effect. Is that OK?
Should we change the inode table's lru_limit variable also as part
of reconfigure? If so, then probably we might have to remove the
extra inodes present in the lru list by calling inode_table_prune.


Yes, I think we should change the inode table's lru limit too and call 
inode_table_prune. From what I know, I don't think this change would 
cause any problems.




But as of now the inode table is bound to bound_xl, which is associated
with the client_t object for the client being connected. As part of fops
we can get the bound_xl (and thus the inode table) from the rpc request
(req->trans->xl_private). But in reconfigure we get just the xlator
pointer of protocol/server and a dict containing the new options.


So what I am planning is this. If the xprt_list (the transport list
corresponding to the clients mounted) is empty, then just set the
private structure's variable for the lru limit (which will be used to
create the inode table when a client mounts). If the xprt_list of
protocol/server's private structure is not empty, then get one of the
transports from that list and get the client_t object corresponding to
the transport, from which bound_xl is obtained (all the client_t objects
share the same inode table). Then the pointer to the inode table is
obtained from bound_xl, its lru limit variable is also set to the value
specified via the CLI, and inode_table_prune is called to purge the
extra inodes.


Does it sound OK?

Regards,
Raghavendra Bhat

Regards,
Raghavendra Bhat



Please provide feedback


Regards,
Raghavendra Bhat
___
Gluster-devel mailing list
Gluster-devel@gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-devel




--
Raghavendra G



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] inode lru limit

2014-06-01 Thread Raghavendra G
On Fri, May 30, 2014 at 2:24 PM, Raghavendra Bhat  wrote:

>
> Hi,
>
> Currently the lru-limit of the inode table in brick processes is 16384.
> There is an option to configure it to some other value. The protocol/server
> uses the inode_lru_limit variable present in its private structure while
> creating the inode table (whose default value is 16384). When the option is
> reconfigured via a volume set option, the protocol/server's inode_lru_limit
> variable present in its private structure is changed. But the actual size
> of the inode table still remains the same as the old one. Only when the
> brick is restarted does the newly set value come into effect. Is that OK?
> Should we change the inode table's lru_limit variable also as part of
> reconfigure? If so, then probably we might have to remove the extra inodes
> present in the lru list by calling inode_table_prune.
>

Yes, I think we should change the inode table's lru limit too and call
inode_table_prune. From what I know, I don't think this change would cause
any problems.


>
> Please provide feedback
>
>
> Regards,
> Raghavendra Bhat
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] inode lru limit

2014-05-30 Thread Raghavendra Bhat


Hi,

Currently the lru-limit of the inode table in brick processes is 16384.
There is an option to configure it to some other value. The
protocol/server uses the inode_lru_limit variable present in its private
structure while creating the inode table (whose default value is 16384).
When the option is reconfigured via a volume set option, the
protocol/server's inode_lru_limit variable present in its private
structure is changed. But the actual size of the inode table still
remains the same as the old one. Only when the brick is restarted does
the newly set value come into effect. Is that OK? Should we change the
inode table's lru_limit variable also as part of reconfigure? If so, then
probably we might have to remove the extra inodes present in the lru list
by calling inode_table_prune.
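
In other words, on top of updating the private structure, reconfigure would
need to do something like the following (a sketch only, not tested: the helper
name is made up, how exactly reconfigure gets hold of the itable is precisely
the open point and is shown here as already resolved, and inode_table_prune()
is assumed to be callable from protocol/server or via an equivalent helper):

/* Sketch, not tested: the extra step being asked about for
 * protocol/server's reconfigure(). */
static void
apply_new_lru_limit (inode_table_t *itable, uint32_t new_limit)
{
        if (!itable)
                return;

        itable->lru_limit = new_limit;   /* update the live table, not only  */
                                         /* the private structure            */
        inode_table_prune (itable);      /* drop lru inodes above the limit  */
}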


Please provide feedback


Regards,
Raghavendra Bhat
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel