Client failures due to failover get handled seamlessly by retries,
so you need not worry about that.
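For context, that client-side retry behaviour comes from the failover proxy
provider configured for the nameservice. A minimal hdfs-site.xml sketch (the
nameservice name "mycluster" is just a placeholder):

  <!-- Clients fail over and retry against the other NameNode
       through this proxy provider. -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>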
And by increasing ha.health-monitor.rpc-timeout.ms to a slightly larger
value, you are just avoiding an unnecessary failover when the namenode is
busy processing other client/service requests.
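For illustration, this timeout lives in core-site.xml (the default is
45000 ms); the 60000 below is only an example value, not a recommendation:

  <!-- ZKFC health-check RPC timeout; raised slightly so a busy NN
       does not trigger a spurious failover. -->
  <property>
    <name>ha.health-monitor.rpc-timeout.ms</name>
    <value>60000</value>
  </property>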
1. Is service-rpc configured in the namenode?
(dfs.namenode.servicerpc-address creates another RPC server listening on a
separate port (say 8021) that handles all service (non-client) requests, so
the default RPC address (say 8020) handles only client requests.)
By configuring it this way, you keep service traffic (datanode heartbeats,
block reports, ZKFC health checks) off the client RPC queue, so a burst of
client load cannot delay it.
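A sketch of that split in hdfs-site.xml (hostname and port 8021 are just
examples; in an HA setup the property is suffixed per namenode, e.g.
dfs.namenode.servicerpc-address.<nameservice>.<nn-id>):

  <!-- Separate RPC server for service (non-client) requests such as
       datanode heartbeats, block reports and ZKFC health checks. -->
  <property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>nn1.example.com:8021</value>
  </property>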
Sandeep,
Can you please share more information on which Hadoop version you are using,
and also the size of the cluster in terms of fsimage size or file/block
count? Also, what is the threshold set for RPC latency?
There is very little chance of the standby NN hitting RPC latency unless
there has been a recent issue; you may want to have a look at *HDFS-10301*.
>
> --Brahma Reddy Battula
>
> *From:* Chackravarthy Esakkimuthu [mailto:chaku.mi...@gmail.com]
> *Sent:* 03 May 2016 18:10
> *To:* Gokul
> *Cc:* user@hadoop.apache.org
> *Subject:* R
in 6 hours in our cluster / when disk failure happens). Is it OK to
reduce the lock granularity?
Please give suggestions on the same, and correct me if I am wrong.
Thanks,
Chackra
On Mon, May 2, 2016 at 2:12 PM, Gokul <gokulakanna...@gmail.com> wrote:
> *bump*
>
> On Fri, Apr 2
Hi,
Is there any recommendation or guideline for setting the number of RPC
handlers in the Namenode based on cluster size (number of datanodes)?
Cluster details :
No of datanodes - 1200
NN hardware - 74G heap allocated to the NN process, 40-core machine
Total blocks - 80M+
Total Files/Directories - 60M+
Total
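For what it's worth, one commonly cited heuristic (it appears in several
vendor tuning guides) sizes the handler count at roughly 20 * ln(number of
datanodes). Treat the values below as a starting point for a 1200-node
cluster, not a hard rule:

  <!-- 20 * ln(1200) ≈ 20 * 7.09 ≈ 142 -->
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>142</value>
  </property>
  <!-- If a separate service RPC server is enabled, its handler pool
       is sized independently. -->
  <property>
    <name>dfs.namenode.service.handler.count</name>
    <value>142</value>
  </property>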