Hi there,
I've been working with Pipes for some months and I've finally managed to
get it working as I wanted with some legacy code I had. However, I ran into
many issues, not only with my implementation (it had to be adapted in
several ways to fit Pipes, which is very restrictive) but with Pipes
Yes, I installed it.
mvn clean install -DskipTests was successful. Only the import into Eclipse is
failing.
On Tue, Mar 4, 2014 at 12:51 PM, Azuryy Yu azury...@gmail.com wrote:
Have you installed protobuf on your computer?
https://code.google.com/p/protobuf/downloads/list
On Tue, Mar 4, 2014 at
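For reference, the usual sequence looks roughly like this (assuming this Hadoop
version expects protobuf 2.5.0, and that the Eclipse project files are generated
with the maven-eclipse-plugin rather than imported via m2e):
$ protoc --version                   # should report libprotoc 2.5.0
$ mvn clean install -DskipTests      # build all modules
$ mvn eclipse:eclipse -DskipTests    # generate .project/.classpath before importing into Eclipse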
Our cluster has a node that reboots randomly. So I went to Ambari,
decommissioned its HDFS service, stopped all services, and deleted the node
from the cluster. I expected an fsck to immediately show under-replicated
blocks, but everything comes up fine. How do I tell the cluster that
OK, after restarting all services, fsck now shows under-replication. Was it
the NameNode restart?
John
From: John Lilley [mailto:john.lil...@redpoint.net]
Sent: Tuesday, March 04, 2014 5:47 AM
To: user@hadoop.apache.org
Subject: decommissioning a node
Our cluster has a node that reboots randomly. So
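A quick way to check the replication state after removing a node, as a rough
sketch (both are standard HDFS commands; the grep pattern just picks out the
fsck summary line):
$ hdfs dfsadmin -report                        # live/dead datanodes and per-node usage
$ hdfs fsck / | grep -i 'under-replicated'     # count of under-replicated blocks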
Thank you for the reply, I got it to work.
[hduser@vm38 ~]$ /usr/lib/hadoop-yarn/bin/yarn version
Hadoop 2.2.0.2.0.6.0-101
Subversion g...@github.com:hortonworks/hadoop.git -r
b07b2906c36defd389c8b5bd22bebc1bead8115b
Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source
I have a file system with some missing/corrupt blocks. However, running hdfs
fsck -delete also fails with errors. How do I get around this?
Thanks
John
[hdfs@metallica yarn]$ hdfs fsck -delete
/rpdm/tmp/ProjectTemp_461_40/TempFolder_4/data00012_00.dld
Connecting to namenode via
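Before deleting anything, it may help to check safe mode and see what fsck
considers corrupt; a rough sketch using the same path as above:
$ hdfs dfsadmin -safemode get                  # is the NameNode in safe mode?
$ hdfs fsck / -list-corruptfileblocks          # files with corrupt or missing blocks
$ hdfs fsck /rpdm/tmp/ProjectTemp_461_40/TempFolder_4/data00012_00.dld -files -blocks -locations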
Hi,
I am new to the mailing list.
I am using Hadoop 0.20.2 with the append branch (r1056497). The question I
have is related to balancing. I have a 5-datanode cluster and each node has
2 disks attached to it. The second disk was added when the first disk was
reaching its capacity.
Now the
Hi,
I am running an application on a 2-node cluster, which tries to acquire
all the containers that are available on one of those nodes and the remaining
containers from the other node in the cluster. When I run this application
continuously in a loop, either the NM or the RM gets killed at a
Hello list,
I'm currently debugging my Hadoop MR application and I have some general
questions about the messages in the log and the debugging process.
- What does "Container killed by the ApplicationMaster. Container killed on
request. Exit code is 143" mean? What does 143 stand for?
- I also see
Outside hadoop: avro-1.7.6
Inside hadoop: avro-mapred-1.7.6-hadoop2
From: Stanley Shi s...@gopivotal.com
Reply-To: user@hadoop.apache.org
Date: Monday, March 3, 2014 at 8:30 PM
To:
More information from the NameNode log. I don't understand... it is saying
that I cannot delete the corrupted file until the NameNode leaves safe mode,
but it won't leave safe mode until the file system is no longer corrupt. How
do I get there from here?
Thanks
john
2014-03-04 06:02:51,584
Ah... found the answer. I had to manually leave safe mode to delete the
corrupt files.
john
From: John Lilley [mailto:john.lil...@redpoint.net]
Sent: Tuesday, March 04, 2014 9:33 AM
To: user@hadoop.apache.org
Subject: RE: Need help: fsck FAILs, refuses to clean up corrupt fs
More information
You can force the NameNode to leave safe mode:
hadoop dfsadmin -safemode leave
Then run the hadoop fsck again.
Thanks
Divye Sheth
On Mar 4, 2014 10:03 PM, John Lilley john.lil...@redpoint.net wrote:
More information from the NameNode log. I don't understand... it is
saying that I cannot delete the
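Putting that together with the earlier fsck, the sequence would look roughly
like this (run as the HDFS superuser; -delete permanently removes the files
whose blocks are missing):
$ hdfs dfsadmin -safemode leave
$ hdfs fsck / -list-corruptfileblocks          # confirm what will be removed
$ hdfs fsck / -delete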
On Fri, Oct 11, 2013 at 10:28 PM, Viswanathan J
jayamviswanat...@gmail.com wrote:
Hi,
I'm running a 14-node Hadoop cluster with TaskTrackers running on all
nodes.
I have set the JobTracker default memory size in hadoop-env.sh:
*HADOOP_HEAPSIZE=1024*
Have set the
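For reference, a minimal hadoop-env.sh sketch of that setting (the value is in
MB and applies to the Hadoop daemons started with that environment, unless
overridden per daemon):
# hadoop-env.sh
export HADOOP_HEAPSIZE=1024    # daemon JVM heap, in MB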
I remember you asking this question before. Check if your OS's OOM killer is
killing it.
+Vinod
On Mar 4, 2014, at 6:53 AM, Krishna Kishore Bonagiri write2kish...@gmail.com
wrote:
Hi,
I am running an application on a 2-node cluster, which tries to acquire all
the containers that are
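A rough way to check for the OOM killer (the grep patterns and the process name
are just illustrative; adjust them for how the NM/RM show up in ps on your box):
$ dmesg | grep -iE 'oom-killer|killed process'           # kernel messages left by the OOM killer
$ grep -i 'out of memory' /var/log/messages              # same, on syslog-based distros
$ cat /proc/$(pgrep -f NodeManager | head -1)/oom_score  # how attractive the NM looks to the OOM killer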
That explains a lot. Thanks for the information. I appreciate your help.
On Mon, Mar 3, 2014 at 7:47 PM, Jian He j...@hortonworks.com wrote:
You said, there are no job logs generated on the server that is
running the job..
that was quoting your previous sentence and answering your question.
bq. Container killed by the ApplicationMaster. Container killed on request.
Exit code is 143 mean? What does 143 stand for?
It's a diagnostic message generated by YARN, which indicates the
container was killed by MR's ApplicationMaster. 143 is an exit code of a
YARN container, which indicates
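For what it's worth, 143 follows the usual shell convention of 128 + signal
number, and SIGTERM is 15, so it simply means the container JVM was terminated
with SIGTERM. A quick bash illustration:
$ sleep 60 & kill -TERM $!; wait $!; echo $?
143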
I’ve been trying to benchmark some of the Hive enhancements in Hadoop 2.0 using
the HDP Sandbox.
I took one of their example queries and executed it with the tables stored as
TEXTFILE, RCFILE, and ORC. I also tried enabling vectorized execution
and predicate pushdown.
SELECT
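A rough sketch of toggling those settings per run, assuming a Hive build that
already has vectorization (the query file name is just a placeholder):
$ hive --hiveconf hive.vectorized.execution.enabled=true \
       --hiveconf hive.optimize.ppd=true \
       -f example_query.sql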
Yes Vinod, I was asking this question some time back, and I have come back to
resolve the issue again.
I tried to see if the OOM killer is killing it, but it is not. I have checked
the free swap space on my box while my test is going on, but it doesn't seem
to be the issue. Also, I have verified that the OOM score is
You're probably looking for https://issues.apache.org/jira/browse/HDFS-1804
On Tue, Mar 4, 2014 at 5:54 AM, divye sheth divs.sh...@gmail.com wrote:
Hi,
I am new to the mailing list.
I am using Hadoop 0.20.2 with the append branch (r1056497). The question I
have is related to balancing. I have
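For anyone already on a 2.x release that contains HDFS-1804, the round-robin
volume choice can be swapped for the available-space policy in hdfs-site.xml,
roughly like this (property and class names as of 2.1.0+; double-check against
your release):
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>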
Which version of Hadoop are you using?
There's a possibility that the Hadoop environment already has an avro*.jar
in place, which caused the jar conflict.
Regards,
*Stanley Shi,*
On Tue, Mar 4, 2014 at 11:25 PM, John Pauley john.pau...@threattrack.com wrote:
Outside hadoop: avro-1.7.6
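One way to confirm the conflict is to compare the Avro jars the cluster already
puts on the classpath with what the job pulls in; a rough sketch:
$ hadoop classpath | tr ':' '\n' | grep -i avro      # avro jars shipped by the cluster
$ mvn dependency:tree -Dincludes=org.apache.avro     # avro versions the job depends on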
Thanks Harsh. The JIRA is fixed in version 2.1.0, whereas I am using Hadoop
0.20.2 (we are in the process of upgrading). Is there a short-term workaround
to balance the disk utilization? If the patch from the JIRA is applied to the
version that I am using, will it break anything?
Thanks
Divye
Hi,
That would probably break something if you apply the patch from 2.x to 0.20.x,
but it depends.
AFAIK, the Balancer had a major refactor in HDFS v2, so you'd better fix it
yourself based on HDFS-1804.
On Wed, Mar 5, 2014 at 3:47 PM, divye sheth divs.sh...@gmail.com wrote:
Thanks Harsh. The
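As a stop-gap on 0.20.x, the old manual approach is to stop the datanode and
move block files between the data directories by hand; a heavily hedged sketch
(paths and block IDs are illustrative; each blk_* file must move together with
its .meta file, and only while the datanode is down):
$ mv /disk1/dfs/data/current/blk_1234567890 \
     /disk1/dfs/data/current/blk_1234567890_1001.meta \
     /disk2/dfs/data/current/
# restart the datanode; it rescans its data directories on startup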