Probably the user hdfs does not have access to /root on your local file system.
I would copy the file to /tmp, give it appropriate permissions, and then try
again.
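A minimal sketch of that workaround (the file name is hypothetical). The local-permissions half is runnable anywhere; the final put needs a live cluster, so it is left commented out:

```shell
# Stage the file somewhere user hdfs can read it; /root is usually mode 700.
f=$(mktemp /tmp/hdfs-upload.XXXXXX)      # stand-in for the real file
echo "sample data" > "$f"
chmod 644 "$f"                           # world-readable, so user hdfs can open it
stat -c %a "$f"                          # prints 644
# sudo -u hdfs hadoop fs -put "$f" /user/hdfs/   # needs a live cluster
```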
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Oct 9, 2012, at 11:40 AM, Bai Shen wrote:
> I have a CDH3 cluster
Make sure "dfs.webhdfs.enabled" is set to true on the namenode and the
datanodes.
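For reference, that property lives in hdfs-site.xml; a sketch of the relevant stanza on a typical setup:

```xml
<!-- hdfs-site.xml on the namenode and each datanode -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
```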
Also take a look at
http://hadoop.apache.org/common/docs/r1.0.3/webhdfs.html#Authentication and use
the appropriate call based on whether you are on a secure cluster or not.
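On an insecure cluster the call just carries a user.name query parameter. A sketch (host, port, and file path are hypothetical) that builds such a URL; the actual curl is commented out since it needs a live cluster:

```shell
NN_HOST=namenode.example.com    # hypothetical namenode host
NN_PORT=50070                   # default namenode HTTP/WebHDFS port in Hadoop 1.x
URL="http://${NN_HOST}:${NN_PORT}/webhdfs/v1/tmp/file.txt?op=OPEN&user.name=hdfs"
echo "$URL"
# curl -L "$URL"                # secure clusters authenticate via SPNEGO/delegation token instead
```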
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
JAVA_HOME is set for user root but not for user hdfs. Make sure JAVA_HOME is
set for user hdfs and then try the command again.
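One way to make that stick for every daemon user is hadoop-env.sh. A sketch using a temp directory as a stand-in for the real conf dir (the JDK path is an assumption; adjust to your install):

```shell
HADOOP_CONF=$(mktemp -d)        # stand-in for /etc/hadoop/conf
echo 'export JAVA_HOME=/usr/lib/jvm/java-6-sun' >> "$HADOOP_CONF/hadoop-env.sh"
grep -c JAVA_HOME "$HADOOP_CONF/hadoop-env.sh"   # prints 1
```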
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Jul 5, 2012, at 9:42 AM, Ying Huang wrote:
> Hello,
>I am installing hadoop according to thi
According to the logs, the namespace ID in the datanode data directories is
incompatible. Since you formatted the namenode, these IDs no longer match. Clean
up the contents of the data dir (/app/hadoop/tmp/dfs/data) and then start the
datanode.
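A sketch of that cleanup, using a stand-in directory instead of the real /app/hadoop/tmp/dfs/data so it is safe to run anywhere; the restart at the end needs a cluster and is commented out:

```shell
DATA_DIR=$(mktemp -d)                     # stand-in for /app/hadoop/tmp/dfs/data
mkdir -p "$DATA_DIR/current" && touch "$DATA_DIR/current/VERSION"
rm -rf "$DATA_DIR"/*                      # wipe the stale namespaceID
ls -A "$DATA_DIR" | wc -l                 # prints 0
# bin/hadoop-daemon.sh start datanode     # then restart the datanode
```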
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Mar 30, 2012, at 2:33 PM, Jay Vyas wrote:
> Hi guys !
>
> This is very strange - I have formatted my namenode (pseudo-distributed
> mode) and now I'm getting some kind of namespace error.
>
> Without furt
It is initiated by the slave. If you have defined files stating which slaves can
talk to the namenode (using the dfs.hosts property) and which hosts cannot
(using the dfs.hosts.exclude property), then you would need to edit these files
and issue the refresh command.
On Mar 1, 2012, at 5:35 PM, Mohit Anchlia w
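A sketch of those two properties in hdfs-site.xml (the include/exclude file paths are hypothetical):

```xml
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.include</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```

After editing the listed files, `hadoop dfsadmin -refreshNodes` applies the change without restarting the namenode.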
export HADOOP_HOME_WARN_SUPPRESS=1
will get rid of the warning. The warning shows up if the HADOOP_HOME env var is
set; you can use HADOOP_PREFIX instead.
--
Arpit
On Jan 26, 2012, at 10:16 PM, Sambit Tripathy wrote:
> Hi,
>
> I recently upgraded to hadoop 1.0 and I am seeing warning messages lik
On the new nodes you are trying to add, make sure the dfs/data directories are
empty. You probably have a VERSION file from an older deploy, which causes the
incompatible namespaceID error.
--
Arpit
ar...@hortonworks.com
On Dec 20, 2011, at 5:35 AM, Sloot, Hans-Peter wrote:
>
>
> But I
The hdfs scheme should work, but you will have to change the port. To find the
correct port, look for the fs.default.name property in core-site.xml; the
namenode UI also states the port.
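A sketch of pulling the port out of core-site.xml; the file written here is a stand-in for the real /etc/hadoop/conf/core-site.xml, and the host/port values are hypothetical:

```shell
cs=$(mktemp)                    # stand-in for core-site.xml
cat > "$cs" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
EOF
# Strip the port number out of the hdfs:// value line.
sed -n 's/.*<value>hdfs:.*:\([0-9]*\)<\/value>.*/\1/p' "$cs"   # prints 8020
```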
--
Arpit
On Oct 27, 2011, at 10:52 PM, Jay Vyas wrote:
> I found a way to connect to hadoop via hftp, and it
The reason you are getting multiple prompts is that you have multiple dirs
defined in dfs.name.dir.
A simple expect script would take care of this:
#!/usr/bin/expect -f
spawn bin/hadoop namenode -format
expect "Re-format filesystem in"
send "Y\n"
expect "Re-format filesystem in"
send "Y\n"
expect eof
Alternatively, you could try
echo yes | bin/hadoop namenode -format
--
Arpit
ar...@hortonworks.com
On Sep 22, 2011, at 2:43 PM, wrote:
> Hello,
>
> I am trying to automate formatting an HDFS volume. Is there any way to do
> this without the interaction (and using expect)?
>
> Cheers,
> Ivan