Re: copyFromLocal: File does not exist.

2012-10-09 Thread Arpit Gupta
Probably user hdfs does not have access to /root on your local file system. I would copy the file to /tmp, give it appropriate permissions, and then try again. -- Arpit Gupta Hortonworks Inc. http://hortonworks.com/ On Oct 9, 2012, at 11:40 AM, Bai Shen wrote: > I have a CDH3 cluster
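The workaround above can be sketched as a short shell sequence. The filename `input.txt` is hypothetical, and the actual `copyFromLocal` call (commented out) requires a running cluster; /root is typically mode 700, so user hdfs cannot even traverse into it.

```shell
# Stand-in for: cp /root/input.txt /tmp/  (hypothetical filename)
echo "sample data" > /tmp/input.txt
chmod 644 /tmp/input.txt              # world-readable so user hdfs can open it
# Then upload as the hdfs user (requires a running cluster):
# sudo -u hdfs hadoop fs -copyFromLocal /tmp/input.txt /user/hdfs/input.txt
```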

Re: REST API returning op not supported

2012-07-18 Thread Arpit Gupta
webhdfs.enabled" is set to true on the name node and the datanodes. Also take a look at http://hadoop.apache.org/common/docs/r1.0.3/webhdfs.html#Authentication and use the appropriate call based on whether you are on a secure cluster or not. -- Arpit Gupta Hortonworks Inc. http://hortonw
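The setting referenced above can be written as an hdfs-site.xml fragment. The full property name `dfs.webhdfs.enabled` is an assumption based on the branch-1 docs (the snippet is truncated); verify it against your Hadoop version.

```xml
<!-- hdfs-site.xml on the namenode and every datanode; the full property
     name dfs.webhdfs.enabled is assumed from branch-1 docs -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
```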

Re: JAVA_HOME is not set

2012-07-05 Thread Arpit Gupta
JAVA_HOME is set for user root but not for user hdfs. Make sure JAVA_HOME is set for user hdfs and then try the command again. -- Arpit Gupta Hortonworks Inc. http://hortonworks.com/ On Jul 5, 2012, at 9:42 AM, Ying Huang wrote: > Hello, >I am installing hadoop according to thi
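A minimal sketch of the fix: export JAVA_HOME in the hdfs user's environment (or once for all daemons in conf/hadoop-env.sh). The JDK path below is hypothetical; substitute your own installation.

```shell
# Make JAVA_HOME visible to user hdfs, not just root.
# /usr/lib/jvm/java-6-sun is a hypothetical path; substitute your JDK location.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
# Persist it by appending the same export line to ~hdfs/.bashrc, or set it
# once for every Hadoop daemon in conf/hadoop-env.sh.
```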

Re: Data Node is not Started

2012-04-06 Thread Arpit Gupta
According to the logs, the namespace ID in the datanode data directories is incompatible. Since you formatted the namenode, these IDs do not match. Clean up the contents of the data dir (/app/hadoop/tmp/dfs/data) and then start the datanode. -- Arpit Gupta Hortonworks Inc.
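The cleanup above can be sketched as follows. A temp directory stands in for the real data dir so the sketch is safe to run; on a real node this deletes all block data, so point it at your actual `dfs.data.dir` only if that is what you intend.

```shell
# Stand-in for your real dfs.data.dir, e.g. /app/hadoop/tmp/dfs/data
DATA_DIR=$(mktemp -d)
touch "$DATA_DIR/VERSION"        # simulate stale metadata carrying the old namespaceID
rm -rf "${DATA_DIR:?}"/*         # clear it; :? aborts if the variable is empty
# Then restart the datanode (requires the cluster scripts):
# bin/hadoop-daemon.sh start datanode
```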

Re: namespace error after formatting namenode (pseudo distr mode).

2012-03-30 Thread Arpit Gupta
t the datanode. -- Arpit Gupta Hortonworks Inc. http://hortonworks.com/ On Mar 30, 2012, at 2:33 PM, Jay Vyas wrote: > Hi guys! > > This is very strange - I have formatted my namenode (pseudo-distributed > mode) and now I'm getting some kind of namespace error. > > Without furt

Re: Adding nodes

2012-03-01 Thread Arpit Gupta
It is initiated by the slave. If you have defined files to state which slaves can talk to the namenode (using the dfs.hosts property) and which hosts cannot (using dfs.hosts.exclude), then you would need to edit these files and issue the refresh command. On Mar 1, 2012, at 5:35 PM, Mohit Anchlia w
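The edit-then-refresh step can be sketched as below. The include-file path and hostname are hypothetical (a temp file stands in for the file named by `dfs.hosts`), and the `dfsadmin` call requires a running namenode, so it is shown commented out.

```shell
# Stand-in for the file your dfs.hosts property points at
INCLUDE=$(mktemp)
echo "new-node.example.com" >> "$INCLUDE"   # hypothetical new slave hostname
# Then make the namenode re-read the include/exclude files without a restart:
# bin/hadoop dfsadmin -refreshNodes
```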

Re: How to get rid of deprecated messages

2012-01-26 Thread Arpit Gupta
Setting export HADOOP_HOME_WARN_SUPPRESS=1 will get rid of the warning. The warning shows up if the HADOOP_HOME env var is set. Instead, you can use HADOOP_PREFIX. -- Arpit On Jan 26, 2012, at 10:16 PM, Sambit Tripathy wrote: > Hi, > > I recently upgraded to hadoop 1.0 and I am seeing warning messages lik
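Both options from the reply, side by side. The install path in the alternative is hypothetical; either line would typically go in your shell profile or hadoop-env.sh.

```shell
export HADOOP_HOME_WARN_SUPPRESS=1        # silences "HADOOP_HOME is deprecated"
# Alternatively, migrate to the new variable and drop HADOOP_HOME entirely:
# export HADOOP_PREFIX=/usr/local/hadoop  # hypothetical install path
# unset HADOOP_HOME
```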

Re: Desperate!!!! Expanding,shrinking cluster or replacing failed nodes.

2011-12-20 Thread Arpit Gupta
On the new nodes you are trying to add, make sure the dfs/data directories are empty. You probably have a VERSION file from an older deploy, which causes the incompatible namespaceID error. -- Arpit ar...@hortonworks.com On Dec 20, 2011, at 5:35 AM, Sloot, Hans-Peter wrote: > > > But I
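A sketch of checking a reused node for leftover state before starting the daemon. A temp directory with a simulated stale VERSION file stands in for the real dfs/data directory; the namespaceID value is made up for illustration.

```shell
# Stand-in for your real dfs/data directory on the new node
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/current"
echo "namespaceID=123456789" > "$DATA_DIR/current/VERSION"  # simulated stale file
if [ -e "$DATA_DIR/current/VERSION" ]; then
  rm -rf "${DATA_DIR:?}"/*      # clear stale state so the node registers cleanly
fi
```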

Re: writing to hdfs via java api

2011-10-27 Thread Arpit Gupta
The hdfs scheme should work, but you will have to change the port. To find the correct port number, look for the fs.default.name property in core-site.xml; the namenode UI should also state the port. -- Arpit On Oct 27, 2011, at 10:52 PM, Jay Vyas wrote: > I found a way to connect to hadoop via hftp, and it
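The property the reply points at looks like this in core-site.xml. The host and port below are hypothetical; 8020 is a common namenode RPC port, but check your own config or the namenode web UI.

```xml
<!-- core-site.xml: an hdfs:// URI must use the host and RPC port given here.
     namenode.example.com:8020 is a hypothetical example value. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```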

Re: formatting hdfs without user interaction

2011-09-22 Thread Arpit Gupta
The reason you are getting multiple prompts is that you have multiple directories defined in dfs.name.dir. A simple expect script would take care of this:

#!/usr/bin/expect -f
spawn /bin/hadoop namenode -format
expect "Re-format filesystem in"
send Y\n
expect "Re-format filesystem in"
send Y\n

Re: formatting hdfs without user interaction

2011-09-22 Thread Arpit Gupta
You could try echo yes | bin/hadoop namenode -format -- Arpit ar...@hortonworks.com On Sep 22, 2011, at 2:43 PM, wrote: > Hello, > > I am trying to automate formatting an HDFS volume. Is there any way to do > this without the interaction (and using expect)? > > Cheers, > Ivan
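A note on the one-liner above: `echo yes` feeds a single answer, while the format command asks once per configured dfs.name.dir, and on some versions the prompt is case sensitive and wants a capital Y. The repeating `yes` utility covers any number of prompts; the sketch below captures its output instead of running hadoop.

```shell
# Stand-in for: yes Y | bin/hadoop namenode -format
# (yes repeats "Y" forever; head simulates two confirmation prompts)
answers=$(yes Y | head -n 2)
echo "$answers"
```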