Hi Kim,
It looks like it is pointing to an HDFS location. Can you create the HDFS directory and
put the jar there? Hope this helps.
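For example, a minimal sketch (Hadoop 2.x shell commands; the jar name and target path are hypothetical):

  hdfs dfs -mkdir -p /user/kim/lib
  hdfs dfs -put myjob.jar /user/kim/lib/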
Thanks,
Rahman
On Apr 16, 2014, at 8:39 AM, Rahul Singh smart.rahul.i...@gmail.com wrote:
Any help? All suggestions are welcome.
On Wed, Apr 16, 2014 at 1:13 PM, Rahul Singh
tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Therefore, the question is how do I figure out where the ResourceManager
is running?
TIA
Kim
On Wed, Apr 16, 2014 at 8:43 AM, Abdelrahman Shettia
ashet...@hortonworks.com wrote:
Hi Kim,
You can try to grep for the RM java process by running the following
command:
ps aux | grep -i resourcemanager
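If the process listing does not help, the configured RM address can also be read from yarn-site.xml; a sketch, assuming the config lives under /etc/hadoop/conf:

  grep -A1 'yarn.resourcemanager' /etc/hadoop/conf/yarn-site.xml

Look for yarn.resourcemanager.hostname or yarn.resourcemanager.address in the output.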
On Wed, Apr 16, 2014 at 10:31 AM, Kim Chew kchew...@gmail.com wrote:
Thanks Rahman, I have mixed things up a little bit in my mapred-site.xml.
…is 58GB and the namenode is reporting DFS Used as 1.46TB.
Pardon me for cluttering the mail with a lot of copy-pastes; I hope it's still
readable.
-- Saumitra S. Shahapure
On Tue, Apr 15, 2014 at 2:57 AM, Abdelrahman Shettia
ashet...@hortonworks.com wrote:
Hi Shashi,
I am assuming that you are running Hadoop 1.x. There is an option to see
the failed tasks in the JobTracker UI. Please replace the JobTracker host
with the actual host in the following link and look for the task
failures.
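For reference, a typical Hadoop 1.x JobTracker failed-tasks URL looks like the sketch below; this assumes the default UI port 50030, and the host and job id are placeholders:

  http://<jobtracker-host>:50030/jobfailures.jsp?jobid=<job-id>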
Hi Biswa,
Are you sure that the replication factor of the files is three? Please run
'hadoop fsck / -blocks -files -locations' and check the replication factor of
each file. Also, post the configuration of
dfs.datanode.du.reserved and please check the real space presented
by a
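If fsck shows files replicated below three, the factor can be raised from the shell; a minimal sketch (the path is hypothetical):

  hadoop fs -setrep -w 3 /user/biswa/data

The -w flag waits until the target replication is reached.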
Hi,
If you are using the LinuxTaskController, you need to build the executable.
Instructions for doing so can be found in the following document:
http://hadoop.apache.org/docs/r1.0.4/cluster_setup.html
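As a rough sketch, assuming a Hadoop 1.x source checkout (see the linked document for the authoritative steps; the conf dir value here is an assumption):

  ant task-controller -Dhadoop.conf.dir=/etc/hadoop/conf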
Thanks
-Abdelrahman
On Nov 12, 2013, at 1:41 AM, rab ra rab...@gmail.com wrote:
Hi,
Let me clarify: if there are lots of files or directories, up to roughly 32K
(depending on the user's file limits and the OS configuration), in
those distributed cache dirs, the OS will not be able to create any more
files/dirs, so M-R jobs won't get initiated on those TaskTracker machines.
Hope this helps.
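To check whether a TaskTracker is near that limit, one can count the entries under its distributed cache dir; a sketch, with a hypothetical mapred.local.dir and user:

  find /data/mapred/local/taskTracker/hadoop/distcache -maxdepth 1 | wc -l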
Thanks
Hi Redwane ,
It is possible that the hosts which are running tasks do not have
enough space. Those dirs are configured in mapred-site.xml.
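A quick way to check both; a sketch, assuming the config lives under /etc/hadoop/conf and a hypothetical local dir:

  grep -A1 'mapred.local.dir' /etc/hadoop/conf/mapred-site.xml
  df -h /data/mapred/local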
On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui
reduno1...@googlemail.com wrote:
-- Forwarded message --
From: Redwane
…hard disc. Is there a way to see how much space is in
HDFS without the web UI?
Sent from Samsung Mobile
Serge Blazhievsky hadoop...@gmail.com wrote:
Check the web UI for how much space you have on HDFS.
Sent from my iPhone
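Since the original question asked for a non-web-UI option: HDFS usage can also be read from the command line (Hadoop 1.x syntax):

  hadoop dfsadmin -report

This prints the configured capacity, DFS used, and remaining space, overall and per datanode.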
On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia
ashet...@hortonworks.com wrote:
Hi,
If all of the open-file limits (for the hbase and hdfs users) are set to
more than 30K, please change dfs.datanode.max.xcievers to more than
the value below.
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>2096</value>
  <description>PRIVATE CONFIG VARIABLE</description>
</property>
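To verify the per-user limits, a sketch (assumes root access; the hdfs/hbase user names follow the common convention and may differ on your cluster):

  su - hdfs  -c 'ulimit -n'
  su - hbase -c 'ulimit -n'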