Re: [Gluster-devel] decoupling network.ping-timeout and transport.tcp-user-timeout

2017-02-28 Thread Raghavendra G
There is a patch for this [1]. Reviews from a wider audience would be
helpful before we merge the patch.

[1] https://review.gluster.org/#/c/16731/

regards,
Raghavendra


On Wed, Jan 11, 2017 at 4:19 PM, Milind Changire wrote:

> +gluster-users
>
> Milind
>
>
> On 01/11/2017 03:21 PM, Milind Changire wrote:
>
>> The management connection uses network.ping-timeout to time out and
>> retry the connection to a different server when the existing connection
>> endpoint becomes unreachable from the client.
>> Because of how the parameters of the TCP/IP stack interact, the other
>> network connections have to be controlled through the socket-level
>> tunables:
>> * SO_KEEPALIVE
>> * TCP_KEEPIDLE
>> * TCP_KEEPINTVL
>> * TCP_KEEPCNT
>>
>> So, I'd like to decouple network.ping-timeout and
>> transport.tcp-user-timeout, since they tune different aspects of the
>> gluster application: network.ping-timeout monitors brick/node level
>> responsiveness, while transport.tcp-user-timeout is one of the
>> attributes used to manage the state of the socket.
>>
>> That said, we could do away with network.ping-timeout altogether and
>> stick with transport.tcp-user-timeout for all types of sockets, since
>> it becomes increasingly difficult to work with different tunables
>> across gluster.
>>
>> I believe there have not been many cases in which the community has
>> found the existing socket timeout defaults unusable. So we could stick
>> with the system defaults and add the following socket-level tunables,
>> making them open for configuration:
>> * client.tcp-user-timeout
>>  which sets transport.tcp-user-timeout
>> * client.keepalive-time
>>  which sets transport.socket.keepalive-time
>> * client.keepalive-interval
>>  which sets transport.socket.keepalive-interval
>> * client.keepalive-count
>>  which sets transport.socket.keepalive-count
>> * server.tcp-user-timeout
>>  which sets transport.tcp-user-timeout
>> * server.keepalive-time
>>  which sets transport.socket.keepalive-time
>> * server.keepalive-interval
>>  which sets transport.socket.keepalive-interval
>> * server.keepalive-count
>>  which sets transport.socket.keepalive-count
>>
>> However, these settings would affect all sockets in gluster. In cases
>> where aggressive timeouts are needed, the community can use the gluster
>> options above, which map 1:1 to the socket-level options documented in
>> tcp(7).
>>
>> Please share your thoughts about the risks or effectiveness of the
>> decoupling.
>>
>



-- 
Raghavendra G
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] decoupling network.ping-timeout and transport.tcp-user-timeout

2017-01-11 Thread Milind Changire

+gluster-users

Milind

On 01/11/2017 03:21 PM, Milind Changire wrote:

The management connection uses network.ping-timeout to time out and
retry the connection to a different server when the existing connection
endpoint becomes unreachable from the client.
Because of how the parameters of the TCP/IP stack interact, the other
network connections have to be controlled through the socket-level
tunables (a sketch of how they map to setsockopt(2) follows the list):
* SO_KEEPALIVE
* TCP_KEEPIDLE
* TCP_KEEPINTVL
* TCP_KEEPCNT
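
For reference, here is a minimal sketch of how these tunables are applied
to a connected TCP socket through the plain socket API, as documented in
socket(7) and tcp(7). This is not gluster code; the numeric values are
illustrative only:

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  /* Enable and tune TCP keepalive probing on a connected socket 'fd'.
   * Returns 0 on success, -1 on failure (errno is set). */
  static int
  enable_keepalive (int fd)
  {
          int on    = 1;   /* SO_KEEPALIVE: turn probing on                 */
          int idle  = 20;  /* TCP_KEEPIDLE: idle seconds before first probe */
          int intvl = 5;   /* TCP_KEEPINTVL: seconds between probes         */
          int cnt   = 3;   /* TCP_KEEPCNT: failed probes before reset       */

          if (setsockopt (fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof (on)))
                  return -1;
          if (setsockopt (fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof (idle)))
                  return -1;
          if (setsockopt (fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof (intvl)))
                  return -1;
          return setsockopt (fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof (cnt));
  }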

So, I'd like to decouple network.ping-timeout and
transport.tcp-user-timeout, since they tune different aspects of the
gluster application: network.ping-timeout monitors brick/node level
responsiveness, while transport.tcp-user-timeout is one of the
attributes used to manage the state of the socket.

That said, we could do away with network.ping-timeout altogether and
stick with transport.tcp-user-timeout for all types of sockets, since
it becomes increasingly difficult to work with different tunables
across gluster.

I believe there have not been many cases in which the community has
found the existing socket timeout defaults unusable. So we could stick
with the system defaults and add the following socket-level tunables,
making them open for configuration (a hypothetical usage example
follows the list):
* client.tcp-user-timeout
 which sets transport.tcp-user-timeout
* client.keepalive-time
 which sets transport.socket.keepalive-time
* client.keepalive-interval
 which sets transport.socket.keepalive-interval
* client.keepalive-count
 which sets transport.socket.keepalive-count
* server.tcp-user-timeout
 which sets transport.tcp-user-timeout
* server.keepalive-time
 which sets transport.socket.keepalive-time
* server.keepalive-interval
 which sets transport.socket.keepalive-interval
* server.keepalive-count
 which sets transport.socket.keepalive-count
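
If these options are accepted, setting them would presumably follow the
existing volume-set syntax. A hypothetical example; the option names are
the ones proposed above, while the volume name and the values are
placeholders:

  # hypothetical usage of the proposed options; values are illustrative
  gluster volume set myvol client.tcp-user-timeout 42
  gluster volume set myvol server.keepalive-time 20
  gluster volume set myvol server.keepalive-interval 5
  gluster volume set myvol server.keepalive-count 3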

However, these settings would affect all sockets in gluster. In cases
where aggressive timeouts are needed, the community can use the gluster
options above, which map 1:1 to the socket-level options documented in
tcp(7).
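
As an illustration of that 1:1 mapping, the socket-level counterpart of
transport.tcp-user-timeout is TCP_USER_TIMEOUT from tcp(7), available on
Linux 2.6.37 and later. A minimal, non-gluster sketch; the value is in
milliseconds, per tcp(7):

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  /* Abort the connection if transmitted data stays unacknowledged for
   * longer than 'timeout_ms' milliseconds; see tcp(7). */
  static int
  set_user_timeout (int fd, unsigned int timeout_ms)
  {
          return setsockopt (fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                             &timeout_ms, sizeof (timeout_ms));
  }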

Please share your thoughts about the risks or effectiveness of the
decoupling.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel