Re: using -libjars in Hadoop 2.2.1

2014-04-16 Thread Abdelrahman Shettia
Hi Kim, It looks like it is pointing to an HDFS location. Can you create the HDFS dir and put the jar there? Hope this helps. Thanks, Rahman On Apr 16, 2014, at 8:39 AM, Rahul Singh smart.rahul.i...@gmail.com wrote: any help...all are welcome? On Wed, Apr 16, 2014 at 1:13 PM, Rahul Singh
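A minimal sketch of doing that from the shell (the directory and jar name here are hypothetical):

    hdfs dfs -mkdir -p /user/kim/lib
    hdfs dfs -put myjob-deps.jar /user/kim/lib/
    hdfs dfs -ls /user/kim/lib

so that the HDFS path the job is resolving actually exists.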

Re: using -libjars in Hadoop 2.2.1

2014-04-16 Thread Abdelrahman Shettia
tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) Therefore, the question is how do I figure out where the ResourceManager is running? TIA Kim On Wed, Apr 16, 2014 at 8:43 AM, Abdelrahman Shettia ashet...@hortonworks.com wrote: Hi
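One way to find where the ResourceManager runs, assuming the client configs live in /etc/hadoop/conf (an assumed path), is to check the yarn.resourcemanager.* properties:

    grep -A1 'yarn.resourcemanager' /etc/hadoop/conf/yarn-site.xml

The value of yarn.resourcemanager.address (or yarn.resourcemanager.hostname) is the host the client should be contacting.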

Re: using -libjars in Hadoop 2.2.1

2014-04-16 Thread Abdelrahman Shettia
, Abdelrahman Shettia ashet...@hortonworks.com wrote: Hi Kim, You can try to grep on the RM java process by running the following command: ps aux | grep On Wed, Apr 16, 2014 at 10:31 AM, Kim Chew kchew...@gmail.com wrote: Thanks Rahman, I have mixed things up a little bit in my mapred-site.xml
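The grep pattern is cut off in the quote above; an assumed form of the command is:

    ps aux | grep -i [r]esourcemanager

The [r] keeps the grep process itself out of the output.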

Re: HDFS file system size issue

2014-04-15 Thread Abdelrahman Shettia
is 58GB and the namenode is reporting DFS Used as 1.46TB. Pardon me for making the mail dirty with a lot of copy-pastes; hope it's still readable. -- Saumitra S. Shahapure On Tue, Apr 15, 2014 at 2:57 AM, Abdelrahman Shettia ashet...@hortonworks.com wrote: Hi Biswa, Are you sure

Re: Find the task and it's datanode which is taking the most time in a cluster

2014-04-15 Thread Abdelrahman Shettia
Hi Shashi, I am assuming that you are running Hadoop 1.x. There is an option to see the failed tasks on the JobTracker UI. Please replace the JobTracker host with the actual host, click on the following link, and look for the task failures.
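The link itself is truncated here; as an assumption, the JobTracker web UI usually listens on port 50030, so the starting point would look like:

    http://<jobtracker-host>:50030/jobtracker.jsp

From the job's page, the failed-tasks count links through to the individual task failures.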

Re: HDFS file system size issue

2014-04-14 Thread Abdelrahman Shettia
Hi Biswa, Are you sure that the replication factor of the files is three? Please run ‘hadoop fsck / -blocks -files -locations’ and see the replication factor for each file. Also, post the configuration of <name>dfs.datanode.du.reserved</name> and please check the real space presented by a
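A sketch of those two checks (the config path is an assumption):

    hadoop fsck / -blocks -files -locations | grep 'repl='
    grep -A1 dfs.datanode.du.reserved /etc/hadoop/conf/hdfs-site.xml

Each block line in the fsck output shows its replication factor as repl=N.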

Re: Tasktracker not running with LinuxTaskController

2013-11-12 Thread Abdelrahman Shettia
Hi, If you are using the Linux TaskController, you need to build the executable. Instructions for doing so can be found in the following document: http://hadoop.apache.org/docs/r1.0.4/cluster_setup.html Thanks -Abdelrahman On Nov 12, 2013, at 1:41 AM, rab ra rab...@gmail.com wrote: Hi I
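In Hadoop 1.x the task-controller binary is, to my understanding, built with Ant from the source tree; a rough sketch (the conf dir is an assumption):

    cd $HADOOP_HOME
    ant task-controller -Dhadoop.conf.dir=/etc/hadoop/conf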

Re: Auto clean DistCache?

2013-03-26 Thread Abdelrahman Shettia
Let me clarify: if there are lots of files or directories, up to 32K (depending on the OS's per-user file limits), in those distributed cache dirs, the OS will not be able to create any more files/dirs, so M-R jobs won't get initiated on those TaskTracker machines. Hope this helps. Thanks
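A quick way to see how close a TaskTracker is to such a limit, assuming mapred.local.dir points at /data/mapred/local (a hypothetical path):

    find /data/mapred/local/taskTracker -type d | wc -l
    find /data/mapred/local/taskTracker -type f | wc -l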

Re: About running a simple wordcount mapreduce

2013-03-22 Thread Abdelrahman Shettia
Hi Redwane, It is possible that the hosts which are running tasks do not have enough space. Those dirs are configured in mapred-site.xml. On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui reduno1...@googlemail.com wrote: -- Forwarded message -- From: Redwane
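A sketch of checking both the configured dirs and the free space behind them (the path shown is hypothetical):

    grep -A1 mapred.local.dir /etc/hadoop/conf/mapred-site.xml
    df -h /data/mapred/local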

Re: About running a simple wordcount mapreduce

2013-03-22 Thread Abdelrahman Shettia
hard disk. Is there a way to see how much space is in HDFS without the web UI? Sent from Samsung Mobile Serge Blazhievsky hadoop...@gmail.com wrote: Check the web UI for how much space you have on HDFS??? Sent from my iPhone On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia ashet
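For reference, either of these reports HDFS capacity and usage from the command line, without the web UI:

    hadoop dfsadmin -report
    hadoop fs -df /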

Re: DataXceiver error processing WRITE_BLOCK operation src: /x.x.x.x:50373 dest: /x.x.x.x:50010

2013-03-08 Thread Abdelrahman Shettia
Hi, If the open-files limits for all of the relevant users (hbase and hdfs) are set to more than 30K, please change dfs.datanode.max.xcievers to more than the value below. <property> <name>dfs.datanode.max.xcievers</name> <value>2096</value> <description>PRIVATE CONFIG VARIABLE</description>
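To confirm the open-files limits for those users before raising the xcievers value, something like the following can be run (a sketch):

    su - hdfs -c 'ulimit -n'
    su - hbase -c 'ulimit -n'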