Title: DFS submit client params override final params on cluster
------------------------------------------------------------------
Key: HADOOP-2270
URL: https://issues.apache.org/jira/browse/HADOOP-2270
Project: Hadoop
Issue Type: Bug
Components: conf
Affects Versions: 0.15.1
Reporter: Karam Singh
HDFS client params override the params set as final on the HDFS cluster nodes.
Default values from the client-side hadoop-site.xml override the final
parameters in the cluster's hadoop-site.xml.
Observed the following cases:
1. dfs.trash.root=/recycle, dfs.trash.interval=10 and dfs.replication=2 marked
final under hadoop-site.xml on hdfs cluster.
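For reference, a minimal sketch of a cluster-side hadoop-site.xml marking these
parameters final (values taken from this report; the <final>true</final> element
is Hadoop's mechanism for declaring a parameter non-overridable):

```xml
<!-- cluster-side hadoop-site.xml: params the client should not be able to override -->
<configuration>
  <property>
    <name>dfs.trash.root</name>
    <value>/recycle</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.trash.interval</name>
    <value>10</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <final>true</final>
  </property>
</configuration>
```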
When the FsShell command "hadoop dfs -put local_dir dest" is fired from the
submission host, files still get replicated 3 times (the default) instead of
honoring the final dfs.replication=2.
Similarly, when "hadoop dfs -rmr dfs_dir" or "hadoop dfs -rm file_path" is fired
from the submit client, the file/directory is deleted directly without being
moved to /recycle.
Here the hadoop-site.xml on the submit client does not specify dfs.trash.root,
dfs.trash.interval, or dfs.replication.
The same happens when we submit a mapred job from the client: job.xml displays
the default values, which override the cluster values.
2. dfs.trash.root=/recycle, dfs.trash.interval=10 and dfs.replication=2 marked
final under hadoop-site.xml on hdfs cluster.
And
dfs.trash.root=/rubbish, dfs.trash.interval=2 and dfs.replication=5 under
hadoop-site.xml on submit client.
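A minimal sketch of the conflicting client-side hadoop-site.xml for this case
(values taken from this report; note the absence of any final markers, so these
should lose to the cluster's final params but do not):

```xml
<!-- client-side hadoop-site.xml on the submit host -->
<configuration>
  <property>
    <name>dfs.trash.root</name>
    <value>/rubbish</value>
  </property>
  <property>
    <name>dfs.trash.interval</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>5</value>
  </property>
</configuration>
```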
When the FsShell command "hadoop dfs -put local_dir dest" is fired from the
submit client, files get replicated 5 times instead of the final
dfs.replication=2.
Similarly, when "hadoop dfs -rmr dfs_dir" or "hadoop dfs -rm file_path" is fired
from the submit client, the file/directory is moved to /rubbish instead
of /recycle.
The same happens when we submit a mapred job from the client; job.xml displays
the following values:
dfs.trash.root=/rubbish, dfs.trash.interval=2 and dfs.replication=5