Issues while decommissioning node

2014-04-29 Thread divye sheth
pointers to solve this issue. Thanks Divye Sheth

Re: Map Reduce Error

2014-04-29 Thread divye sheth
http://blog.cloudera.com/blog/2012/03/hbase-hadoop-xceivers/ Also make sure you do not have a leak in your code where there are open file descriptors. Thanks Divye Sheth On Mon, Apr 28, 2014 at 9:09 PM, Wright, Eric ewri...@soleocommunications.com wrote: Currently, we are working on setting up a production hadoop
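
A quick way to check the second point is to count the descriptors the DataNode process is holding; a minimal sketch, assuming a single DataNode per box (the pgrep pattern is illustrative):

    # Count open file descriptors of the DataNode; a steadily growing
    # count across runs suggests a descriptor leak rather than normal
    # xceiver load.
    DN_PID=$(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode | head -1)
    ls /proc/$DN_PID/fd | wc -l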

Re: Resetting dead datanodes list

2014-04-14 Thread divye sheth
this server as dead, though no corresponding values (DFS Used, Non-DFS Used, etc.) would be shown. Thanks Divye Sheth On Mon, Apr 14, 2014 at 12:39 PM, Sandeep Nemuri nhsande...@gmail.com wrote: You can add the hostname/IP in the exclude file and run this command: hadoop dfsadmin -refreshNodes. On Mon
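
A minimal sketch of that procedure; the exclude-file path and hostname are illustrative, and dfs.hosts.exclude in hdfs-site.xml must already point at the exclude file:

    # Mark the node for exclusion, then tell the namenode to re-read the list
    echo "dn3.example.com" >> /etc/hadoop/conf/excludes
    hadoop dfsadmin -refreshNodes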

Re: can't copy between hdfs

2014-04-14 Thread divye sheth
the ulimit set to a high number, as hadoop tends to open too many files. Check ulimit -n; if the value is 1024, increase it to a higher number, usually 32K or 64K. Again, a restart of the cluster would be required. Thanks Divye Sheth On Mon, Apr 14, 2014 at 11:10 AM, Stanley Shi s...@gopivotal.com
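
For example, assuming the daemons run as an hdfs user (a common but not universal setup), the limit can be raised persistently like this:

    # Check the current open-file limit for the shell running the daemons
    ulimit -n
    # Raise soft and hard limits to the 32K suggested above;
    # the daemons must be restarted to pick up the change.
    echo "hdfs soft nofile 32768" >> /etc/security/limits.conf
    echo "hdfs hard nofile 32768" >> /etc/security/limits.conf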

Re: Exception in hadoop jobtracker OOM

2014-04-14 Thread divye sheth
-default.xml for the exact property name. Sent from my phone. Thanks Divye Sheth On Apr 12, 2014 8:33 PM, Viswanathan J jayamviswanat...@gmail.com wrote: Hi, I'm using Hadoop v1.2.1 and it has been running fine for a long time (3 months) without any issues. Suddenly I got the below error in the Jobtracker and jobs

Re: Exception in hadoop jobtracker OOM

2014-04-14 Thread divye sheth
Sorry for the error. Did not have a proper look at the logs. This seems to be a JT issue. Ignore the previous email. Thanks Divye Sheth On Apr 14, 2014 6:06 PM, divye sheth divs.sh...@gmail.com wrote: This usually occurs when the task takes more memory and exceeds its heap space. You can
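
For a JobTracker heap problem on Hadoop 1.x, the usual knob is HADOOP_HEAPSIZE in hadoop-env.sh; a minimal sketch, with an illustrative value not taken from the thread:

    # hadoop-env.sh is sourced shell; HADOOP_HEAPSIZE (in MB) sizes the
    # daemon JVMs, including the JobTracker. Restart the JT afterwards.
    export HADOOP_HEAPSIZE=2048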

Re: Running MRV1 code on YARN

2014-04-08 Thread divye sheth
to run my WordCount MRV1 example but not the code that I have written for a use case. Thanks Divye Sheth On Tue, Apr 8, 2014 at 6:12 PM, Kavali, Devaraj devaraj.kav...@intel.com wrote: As per the given exception stack trace, it is trying to use the local file system. Can you check whether you have

Re: Running MRV1 code on YARN

2014-04-08 Thread divye sheth
fairly new to the coding aspect of map-reduce. Thanks Divye Sheth On Tue, Apr 8, 2014 at 7:43 PM, divye sheth divs.sh...@gmail.com wrote: Hi, I saw that pretty much right after sending the email. I verified the properties file and it has all the correct properties, even the mapred.framework.name is
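
One quick client-side check for the local-filesystem fallback described above; the config path is the common default, and note the YARN-era key is mapreduce.framework.name:

    # "yarn" means jobs are submitted to the ResourceManager; a missing or
    # "local" value makes the client fall back to the LocalJobRunner.
    grep -A1 "mapreduce.framework.name" /etc/hadoop/conf/mapred-site.xml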

Re: HADOOP_MAPRED_HOME not found!

2014-03-28 Thread divye sheth
You can execute this command on any machine where you have set HADOOP_MAPRED_HOME. Thanks Divye Sheth On Fri, Mar 28, 2014 at 12:31 PM, Avinash Kujur avin...@gmail.com wrote: Can we execute the above command anywhere, or do I need to execute it in any particular directory? Thanks

Re: HADOOP_MAPRED_HOME not found!

2014-03-28 Thread divye sheth
Hi Avinash, you can execute the export command on any one machine in the cluster for now. Once you have executed the export command, i.e. export HADOOP_MAPRED_HOME=/path/to/your/hadoop/installation, you can then run the mapred job -list command from that very same machine. Thanks Divye Sheth
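
Concretely (the path is illustrative; use wherever your hadoop tarball was untarred):

    # Point the mapred CLI at the installation, then list running jobs
    export HADOOP_MAPRED_HOME=/home/hadoop/hadoop-2.2.0
    mapred job -list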

Re: HADOOP_MAPRED_HOME not found!

2014-03-27 Thread divye sheth
Which version of hadoop are you using? AFAIK the hadoop mapred home is the directory where hadoop is installed, or in other words untarred. Thanks Divye Sheth On Mar 28, 2014 10:43 AM, Avinash Kujur avin...@gmail.com wrote: Hi, when I am trying to execute this command: hadoop job -history ~/1

Re: GC overhead limit exceeded

2014-03-12 Thread divye sheth
Hi Haihong, please check out the link below; I believe it should solve your problem. http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits Thanks Divye Sheth On Wed, Mar 12, 2014 at 11:33 AM, haihong lu ung3...@gmail.com wrote: Thanks, even if I had added
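
The settings that answer adjusts can be inspected in one step; a sketch assuming the common config path, with the usual rule of thumb that the task JVM heap (java.opts) stays below the container size (memory.mb):

    # Show container sizes and task JVM heaps side by side
    grep -A1 -E "mapreduce.(map|reduce).(memory.mb|java.opts)" \
        /etc/hadoop/conf/mapred-site.xml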

Issues with Decommissioning Machine

2014-03-07 Thread divye sheth
here. Thanks Divye Sheth

Re: Issues with Decommissioning Machine

2014-03-07 Thread divye sheth
settled. Thanks Divye Sheth On Fri, Mar 7, 2014 at 6:33 PM, divye sheth divs.sh...@gmail.com wrote: Hi, I am using hadoop 0.20.2 and have decommissioned one of my servers. It has around 300G of data which, if I am correct, will be distributed across the other machines and slowly all the blocks
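
While the drain is running, progress can be watched from dfsadmin; a minimal sketch:

    # Each node's section shows "Decommission in progress" until its blocks
    # are re-replicated, then "Decommissioned" -- only then is it safe to stop.
    hadoop dfsadmin -report | grep -A6 "Name:"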

Re: Question on DFS Balancing

2014-03-05 Thread divye sheth
this? Data loss is a NO-NO for me. Thanks Divye Sheth On Wed, Mar 5, 2014 at 1:28 PM, Azuryy Yu azury...@gmail.com wrote: Hi, that will probably break something if you apply the patch from 2.x to 0.20.x, but it depends. AFAIK, the Balancer had a major refactor in HDFSv2, so you'd better fix

Re: Question on DFS Balancing

2014-03-05 Thread divye sheth
background: our cluster is not balanced, the load balancer is very slow, so I wrote this tool to move blocks from one node to another node. On Wed, Mar 5, 2014 at 4:06 PM, divye sheth divs.sh...@gmail.com wrote: I won't be in a position to fix that depending on HDFS-1804 as we are upgrading to CDH4

Question on DFS Balancing

2014-03-04 Thread divye sheth
such a scenario, is it not hadoop's job to balance the disk utilization between multiple disks on a single datanode? Thanks Divye Sheth
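
Worth noting: the stock balancer evens out utilization across datanodes, not across the disks inside one node (that per-disk skew is what HDFS-1804 addresses). Running it looks like this; the threshold is a percent deviation from the cluster average:

    # Even out utilization across datanodes to within 10% of the mean
    hadoop balancer -threshold 10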

RE: Need help: fsck FAILs, refuses to clean up corrupt fs

2014-03-04 Thread divye sheth
You can force the namenode to leave safemode: hadoop dfsadmin -safemode leave. Then run the hadoop fsck. Thanks Divye Sheth On Mar 4, 2014 10:03 PM, John Lilley john.lil...@redpoint.net wrote: More information from the NameNode log. I don't understand... it is saying that I cannot delete
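
A minimal sketch of that sequence (note that -delete permanently removes files with corrupt blocks):

    hadoop dfsadmin -safemode leave   # force the namenode out of safemode
    hadoop fsck / -delete             # then let fsck clean up corrupt files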

Re: Question on DFS Balancing

2014-03-04 Thread divye sheth
Sheth On Wed, Mar 5, 2014 at 11:28 AM, Harsh J ha...@cloudera.com wrote: You're probably looking for https://issues.apache.org/jira/browse/HDFS-1804 On Tue, Mar 4, 2014 at 5:54 AM, divye sheth divs.sh...@gmail.com wrote: Hi, I am new to the mailing list. I am using Hadoop 0.20.2

Re: Exceptions in Hadoop and Hbase log files

2013-10-18 Thread divye sheth
I would recommend you stop the cluster and then start the daemons one by one:
1. stop-dfs.sh
2. hadoop-daemon.sh start namenode
3. hadoop-daemon.sh start datanode
This will show startup errors, if any, and also verify whether the datanode is able to communicate with the namenode. Thanks Divye Sheth
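
Acting on that advice, with a log check between steps; this assumes a tarball install with logs under $HADOOP_HOME/logs (the path varies by distribution):

    stop-dfs.sh
    hadoop-daemon.sh start namenode
    tail -n 50 $HADOOP_HOME/logs/*namenode*.log   # look for bind or fsimage errors
    hadoop-daemon.sh start datanode
    tail -n 50 $HADOOP_HOME/logs/*datanode*.log   # confirm registration with the namenode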