Hi All,
HDFS HA (based on QJM), 5 journalnodes, Apache 2.5.0 on Redhat 6.5 with
JDK 1.7.
We put 1P+ of data into HDFS, with an FSImage of about 10G; as we keep making more
requests to this HDFS, the namenodes fail over frequently. I want to know the
following:
1. ANN (active namenode)
1. Is service-rpc configured on the namenode?
Not yet. I had considered configuring the service RPC port, but I was also thinking
about the possible disadvantages.
When a failover happens because of too many waiting RPCs, if ZKFC gets a
normal response from another port, is it possible that the clien
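For reference, the separate service RPC port is normally enabled in hdfs-site.xml; a minimal sketch is below. The nameservice/namenode IDs, hostnames, and port are placeholder assumptions, not values from this cluster, and note that ZKFC typically needs its ZooKeeper state reinitialized (`hdfs zkfc -formatZK`) after this address changes.

```xml
<!-- hdfs-site.xml: dedicated RPC port for ZKFC/DataNode traffic, so heavy
     client load cannot starve health checks. All values are examples only. -->
<property>
  <name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
  <value>nn1.example.com:8040</value>
</property>
<property>
  <name>dfs.namenode.servicerpc-address.mycluster.nn2</name>
  <value>nn2.example.com:8040</value>
</property>
```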
Hi All,
HDFS Federation with PB+ of data at rest (the single name service is HA, based on
QJM), Apache 2.7.3 on Redhat 6.5 with JDK 1.7.
1. We plan to deploy the NN on servers (32 cores, 512G). Any advice on the
JVM opts? If we set the heap size to about 400G with the CMS GC collector, are there any obvious
pr
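For concreteness, NameNode JVM opts usually go into hadoop-env.sh; a sketch of CMS-oriented flags follows. The heap size and CMS thresholds here are illustrative assumptions for the sketch, not tuned recommendations, and very large heaps (hundreds of GB) are known to make CMS pause behavior hard to control.

```shell
# hadoop-env.sh -- illustrative CMS settings for a large NameNode heap.
# The 200g heap and the CMS thresholds are assumptions for this sketch only.
export HADOOP_NAMENODE_OPTS="-Xms200g -Xmx200g \
  -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+ParallelRefProcEnabled \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  $HADOOP_NAMENODE_OPTS"
```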
Hi All,
There are default values for the configs in hdfs-default.xml and
core-default.xml, and I am wondering which situations they are intended for. Are they
closer to lab use, or closer to a real production environment?
Maybe it depends on the individual configs, so I have questions about these
c
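As one illustration of the lab-vs-production gap: dfs.namenode.handler.count ships as 10 in hdfs-default.xml, which suits a small test cluster, and sites override it in hdfs-site.xml for larger deployments. The 128 below is just an example value, not a recommendation.

```xml
<!-- hdfs-site.xml: site settings override hdfs-default.xml.
     dfs.namenode.handler.count defaults to 10 (lab-scale);
     128 here is an illustrative value for a bigger cluster. -->
<property>
  <name>dfs.namenode.handler.count</name>
  <value>128</value>
</property>
```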
Hi All,
As an application over Hadoop, is it recommended to use "org.apache.hadoop.fs
Class FileContext" rather than "org.apache.hadoop.fs Class FileSystem"? And
why, or why not?
Besides, my target version will be Apache Hadoop v2.7.3, and the application
will be running over both HDFS HA and Federation.
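For comparison, a minimal sketch of the two entry points is below. It assumes fs.defaultFS in the loaded configuration points at the HA nameservice, and the path is a placeholder; this is not a recommendation for either API, just the shape of each call.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsApiSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site/hdfs-site
        Path p = new Path("/tmp/example");        // placeholder path

        // Older API: instances are cached per (scheme, authority, user);
        // most existing Hadoop tooling is written against FileSystem.
        FileSystem fs = FileSystem.get(conf);
        boolean viaFileSystem = fs.exists(p);

        // Newer API (HADOOP-4952): no instance cache, per-call defaults,
        // and stricter, better-specified semantics (e.g. rename options).
        FileContext fc = FileContext.getFileContext(conf);
        boolean viaFileContext = fc.util().exists(p);

        System.out.println(viaFileSystem + " " + viaFileContext);
    }
}
```

Either API resolves the HA logical URI the same way, via the client failover proxy configured in hdfs-site.xml, so the HA/Federation setup itself does not force the choice.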