Can someone please let me know the root cause here?
15/06/05 21:21:31 INFO mapred.ClientServiceDelegate: Application state is
completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history
server
15/06/05 21:21:40 INFO mapred.ClientServiceDelegate: Application state is
completed.
(DataXceiver.java:221)
On Tue, May 26, 2015 at 8:29 PM, Ted Yu yuzhih...@gmail.com wrote:
bq. All datanodes 112.123.123.123:50010 are bad. Aborting...
How many datanodes do you have ?
Can you check the datanode and namenode logs?
Cheers
On Tue, May 26, 2015 at 5:00 PM, S.L simpleliving...@gmail.com wrote:
Hi All,
I am on Apache YARN 2.3.0, and lately I have been seeing these exceptions
happen frequently. Can someone tell me the root cause of this issue?
I have set the property in mapred-site.xml as follows; is there any
other property that I need to set as well?
<property>
30, 2014 2:08 AM
What I meant is to expand the hostname to its IP address.
It worked for me.
On 9/30/14, S.L simpleliving...@gmail.com wrote:
The host name is fully qualified, meaning there is nothing more that I can
add; it just seems the ports might be messed up, but I don't know which
ones.
in host name:port
Copy and paste the link into a browser and expand the host name.
I set up the host names in the Windows etc/hosts file, but it still
could not resolve them.
On 9/29/14, S.L simpleliving...@gmail.com wrote:
Hi All,
I am running a 3-node Apache Hadoop YARN 2.3.0 cluster. After a job is
submitted, when I access the application master from the UI I get the
following exception and am unable to connect to the Application Master
from the UI. Can someone let me know what I need to look at?
HTTP
Does anyone know what the issue could be? Is there any property setting
that is causing this behavior?
On Wed, Sep 17, 2014 at 3:11 PM, S.L simpleliving...@gmail.com wrote:
I don't see any hs_err file in the current directory, so I don't think
that is the case.
On Wed, Sep 17, 2014 at 2
Hi All,
I am running an MRV1 job on a Hadoop YARN 2.3.0 cluster. The problem is that
when I submit this job, the application running in YARN is marked as
complete even though on the console it is reported as only 58% complete.
I have confirmed that it is also not printing the log statements. Is it
getting killed while the
YARN application finishes as usual on the cluster in the background?
+Vinod
On Wed, Sep 17, 2014 at 9:29 AM, S.L simpleliving...@gmail.com wrote:
On Wed, Sep 17, 2014 at 2:21 PM, S.L simpleliving...@gmail.com wrote:
I am not sure. I am running a sequence of MRV1 jobs using a bash script;
this seems to happen in the 4th iteration consistently. How do
Hi All,
I am running an MRV1 job on a Hadoop YARN 2.3.0 cluster. The problem is that
when I submit this job, YARN creates multiple applications for the submitted
job, and the last application running in YARN is marked as complete
even though on the console it is reported as only 58% complete. I have
Hi Folks,
I was not able to find a clear answer to this. I know that on the master
node we need to have a slaves file listing all the slaves, but do the
slave nodes need to have a masters file listing the single name node (I
am not using a secondary name node)? I only have the slaves
in a single block in one node itself.
These are a few hints which might help you.
regards
rab
On Sat, Aug 23, 2014 at 12:26 PM, S.L simpleliving...@gmail.com wrote:
Hi All,
I have a failure in one of the applications consistently after my Nutch job
runs for about an hour. Can someone please suggest why this error is
occurring, based on the exception message below?
Diagnostics:
Application application_1408512952691_0017 failed 2 times due to AM
Container for
Hi All,
When we submit a job using the bin/hadoop script on the resource manager node,
what if we need a VM argument, like 'target-env' in my case, to be passed
to our job? How do I do that, and will this argument be passed to all the
node managers on the different nodes?
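For reference, one common way to pass a JVM system property to every task (a sketch; the '-Dtarget-env=prod' value and the heap size are placeholders) is via the task JVM options in mapred-site.xml, which each node manager applies when launching task containers:

```xml
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m -Dtarget-env=prod</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1024m -Dtarget-env=prod</value>
</property>
```

These can also be set per job on the command line with -D if the job implements Tool.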
/05/2014 12:21 AM, S.L wrote:
The contents are
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
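For illustration (hypothetical IPs and hostnames), a hosts file that resolves the cluster nodes would also contain entries like:

```
192.168.1.10   master.example.com   master
192.168.1.11   slave1.example.com   slave1
192.168.1.12   slave2.example.com   slave2
```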
On Sun, Aug 3, 2014 at 11:21 PM, Ritesh Kumar Singh
riteshoneinamill
if there is any
On Aug 4, 2014 3:28 AM, S.L simpleliving...@gmail.com wrote:
Hi All,
I am trying to set up an Apache Hadoop 2.3.0 cluster. I have a master and
three slave nodes; the slave nodes are listed in the
$HADOOP_HOME/etc/hadoop/slaves file, and I can telnet from the slaves to the
Master Name node
/hosts' file
On Mon, Aug 4, 2014 at 3:27 AM, S.L simpleliving...@gmail.com wrote:
Hi All,
I am trying to set up an Apache Hadoop 2.3.0 cluster. I have a master and
three slave nodes; the slave nodes are listed in the
$HADOOP_HOME/etc/hadoop/slaves file, and I can telnet from the slaves to the
Master Name node on port 9000. However, when I start the datanode on any of
the slaves
On Fri, May 2, 2014 at 12:20 PM, S.L simpleliving...@gmail.com wrote:
I am using Hadoop 2.3. The problem is that my disk runs out of space
(80GB) and then I reboot my machine, which causes my /tmp data to be
deleted and frees up space. I then resubmit the job, assuming that since
Can I do this while the job is still running? I know I can't delete the
directory, but I just want confirmation that the data Hadoop writes into
/tmp/hadoop-df/nm-local-dir (df being my user name) can be discarded while
the job is being executed.
On Wed, Apr 30, 2014 at 6:40 AM, unmesha
Hi Folks,
I am running a map-only (no reduce) job on Hadoop 2.3.0, and the
/tmp/hadoop-df/nm-local-dir (df being my user name) directory is growing
exponentially, causing me to run out of my 80GB of disk space.
I would like to know if there is any mechanism (automatic or manual) by
which I can free up
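One knob that may be related (a sketch, not a confirmed fix: it caps only the NodeManager's localized-resource cache, not intermediate task output, and the value here is a placeholder) is the localizer cache target size in yarn-site.xml:

```xml
<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <value>10240</value>
</property>
```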
Hi All,
I am using Hadoop 2.3.0 and have installed it as a single-node cluster
(pseudo-distributed mode) on a CentOS 6.4 Amazon EC2 instance with
420GB of instance storage and 7.5GB of RAM. My understanding is that the
Spill Failed exception only occurs when the node runs out of disk
space
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
b) If you still want to control the pmem/vmem, do you restart YARN
after making the change in the XML file?
Regards./g
*From:* S.L [mailto:simpleliving...@gmail.com]
*Sent:* Wednesday, January 01, 2014 9:51 PM
of shells under your mapper,
and YARN's NodeManager is detecting that the total virtual memory usage is
14.5GB. You may want to reduce that number of shells, lest the OS itself
kill your tasks, depending on the system configuration.
Thanks,
+Vinod
On Jan 1, 2014, at 7:50 PM, S.L simpleliving
Hello Folks,
I am running Hadoop 2.2 in pseudo-distributed mode on a laptop with 8GB
of RAM.
Whenever I submit a job, I get an error that says that the virtual
memory usage was exceeded, like the one below.
I have changed the ratio yarn.nodemanager.vmem-pmem-ratio in yarn-site.xml
to 10; however
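For reference, that setting in yarn-site.xml looks like the following (the value 10 mirrors what is described above):

```xml
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>10</value>
</property>
```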
Hi folks,
I am running Hadoop 2.2 in pseudo-distributed mode and I have an i5 processor
with 8 GB of RAM running on CentOS 6.4. However, my Nutch job fails a
few minutes into execution with an OOM exception. I have increased
HADOOP_HEAPSIZE from 1000MB to 4GB, but I still face the issue.
The Hadoop document below suggests that the following variables be set
in order for Hadoop to prioritize the client jars over the Hadoop jars;
however, I am not sure how to set them. Can someone please tell me how to
set these?
*HADOOP_USER_CLASSPATH_FIRST=true* and *HADOOP_CLASSPATH*
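These are ordinary environment variables, so one way to set them (a sketch; the jar path is a placeholder you would point at your own jars) is to export them in the shell before invoking hadoop:

```shell
# Make Hadoop prepend user-supplied jars to its classpath
export HADOOP_USER_CLASSPATH_FIRST=true
# Placeholder path: point this at the jar(s) your job needs loaded first
export HADOOP_CLASSPATH="/path/to/httpclient-4.x.jar"
echo "HADOOP_USER_CLASSPATH_FIRST=$HADOOP_USER_CLASSPATH_FIRST"
```

They could also be added to hadoop-env.sh so they apply to every invocation.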
Resending my message, as I'm not sure if it went through the first time... any
help would be greatly appreciated.
I am consistently running into this exception. Googling tells me that
this is because of a jar mismatch between Hadoop and the Nutch job, with
Hadoop 2.2 using an older version of the
Hello experts,
I am consistently running into this exception. Googling tells me that
this is because of a jar mismatch between Hadoop and the Nutch job, with
Hadoop 2.2 using an older version of HttpClient.
I am not able to figure out how I could make Hadoop 2.2 pick up the jars
from the