Thanks much,
Cheers,
Usman
You can change the value of hadoop.root.logger in
conf/log4j.properties to change the log level globally. See also the
section "Custom Logging levels" in the same file to set levels on a
per-component basis.
You can also use hadoop daemonlog to set log levels on a temporary
basis (they are reset when the daemon restarts).
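As a sketch of both approaches (the host:port, logger name, and levels
below are placeholders; adjust for your own cluster):

```shell
# Global default: edit this line in conf/log4j.properties, e.g.
#   hadoop.root.logger=INFO,console
# and restart the daemons for it to take effect.

# Temporary, per-component: raise one logger's level on a running
# namenode via its HTTP port (50070 by default).
hadoop daemonlog -setlevel namenode-host:50070 \
    org.apache.hadoop.dfs.StateChange DEBUG

# Check what level a logger is currently at.
hadoop daemonlog -getlevel namenode-host:50070 \
    org.apache.hadoop.dfs.StateChange
```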
Hi Tom,
Thanks for the trick :).
I tried setting the replication to 3 in hadoop-default.xml, but then
the namenode log file in /var/log/hadoop started filling up with
messages like the one below:
2009-06-24 14:39:06,338 INFO org.apache.hadoop.dfs.StateChange: STATE*
SafeModeInfo.leav
Hi Usman,
Before the rebalancer was introduced one trick people used was to
increase the replication on all the files in the system, wait for
re-replication to complete, then decrease the replication to the
original level. You can do this using hadoop fs -setrep.
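A sketch of that trick (the path and replication factors are examples;
use your cluster's actual values):

```shell
# Bump replication on everything under / from 2 to 3.
# -R recurses into directories; -w waits until each file actually
# reaches the requested replication before returning.
hadoop fs -setrep -R -w 3 /

# Once re-replication has finished, drop back to the original factor.
# The namenode will delete the excess replicas, preferring the
# fullest datanodes, which is what evens out disk usage.
hadoop fs -setrep -R 2 /
```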
Cheers,
Tom
On Thu, Jun 25, 2009
Hi,
One of our test clusters is running Hadoop 0.15.3 with the replication
level set to 2. The datanodes are not balanced at all:
Datanode_1: 52%
Datanode_2: 82%
Datanode_3: 30%
15.3 does not have the rebalancer capability; we are planning to
upgrade, but not right now.
If I take out Datanode_1 fro