On 06/19/12 20:42, Raj Vishwanathan wrote:
You probably have a very low somaxconn parameter (the default on CentOS is
128, if I remember correctly). You can check the value under
/proc/sys/net/core/somaxconn
Aha! Excellent, it does seem it's at the default, and that particular
sysctl
Hi,
Apparently HiBench has been very popular, and Dropbox just suspended the
download of HiBench-Data as it generated too much traffic :( The suspension is
only temporary and downloads will resume in 3 days; in addition, the
HiBench-Data package is needed only for nutchindexing and bayes, and the other
eight workloads
You probably have a very low somaxconn parameter (the default on CentOS is
128, if I remember correctly). You can check the value under
/proc/sys/net/core/somaxconn
Can you also check the value of ulimit -n? It could be low.
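For example, a quick check and bump could look like this (the new values
below are illustrative starting points, not tuned recommendations):

  # Current listen-backlog cap (CentOS default is 128)
  cat /proc/sys/net/core/somaxconn
  # Current per-user open-file limit
  ulimit -n
  # Raise both for testing
  sudo sysctl -w net.core.somaxconn=4096
  ulimit -n 65536   # per-session; persist it in /etc/security/limits.conf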
Raj
>
> From: Ellis H. W
Hi,
With CDH3u4 and Cloudera Manager, I am running a Hive query to repartition
all our tables. I'm reducing the number of partitions from 5 to 2, because the
performance benefit of a smaller mapred.input.dir is significant, which I only
realized as our tables have grown in size, and there
Hi Amit,
I doubt that it is a problem with a specific version of Hadoop. Could you
please post the stack trace if it's available, along with the output of hadoop
version? How did you set up your cluster? Did you set up mapred and hdfs
users? It seems to me more likely that the user trying to run the job,
On 06/19/12 13:38, Vinod Kumar Vavilapalli wrote:
Replies/more questions inline.
I'm using Hadoop 0.23 on 50 machines, each connected with gigabit ethernet and
each having solely a single hard disk. I am getting the following error
repeatably for the TeraSort benchmark. TeraGen runs without error, but
TeraSort runs predictably until
Hello,
sorry, my mistake. Problem solved.
On 06/19/2012 03:40 PM, Devaraj k wrote:
Can you share the exception stack trace and the piece of code where you are
trying to create it?
Thanks
Devaraj
From: Ondřej Klimpera [klimp...@fit.cvut.cz]
Sent: Tuesday, June 19, 2012 6:03 PM
Hi Soham,
I think you should update your hadoop-env.sh file and set JAVA_HOME. Let me know
if you still face this issue.
Thanks,
~Sri.
Sent from my Windows Phone
From: Harsh J
Sent: 6/19/2012 2:14 PM
To: cdh-u...@cloudera.org
Cc: sohamsardart...@gmail.com
Subject:
On 06/19/12 14:11, Minh Duc Nguyen wrote:
Take a look at slide 25:
http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
It describes a similar error so hopefully this will help you.
I appreciate your prompt response, Minh, but as you will notice at the
end of my ve
Hi,
This may be CDH4 related, so I am moving it to cdh-u...@cloudera.org
(https://groups.google.com/a/cloudera.org/forum/?fromgroups#!forum/cdh-user),
bcc'd common-user@ and CC'd you in case you aren't subscribed to the
cdh-user lists.
For your JAVA_HOME issue, could you try setting it in your ~/.ba
Replies/more questions inline.
> I'm using Hadoop 0.23 on 50 machines, each connected with gigabit ethernet
> and each having solely a single hard disk. I am getting the following error
> repeatably for the TeraSort benchmark. TeraGen runs without error, but
> TeraSort runs predictably until
Hi Vseslava.
This part of the ASF FAQ explains everything in this regard, I think:
https://www.apache.org/foundation/license-faq.html#Translation
In other words "Sure!" ;)
Cos
On Tue, Jun 19, 2012 at 04:34AM, vseslava.kavch...@gmail.com wrote:
> Hey there,
>
> I am a student at the Department of Foreign Languages
Take a look at slide 25:
http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera
It describes a similar error so hopefully this will help you.
~ Minh
On Tue, Jun 19, 2012 at 10:27 AM, Ellis H. Wilson III wrote:
> Hi all,
>
> This is my first email to the list, so feel free to be candid in your
> complaints if I'm doing something canonically uncouth in my requests for
> assistance.
Just try the steps which I mentioned in your earlier question, and this
should go away.
Vseslava, the Hadoop project and its documentation are licensed under the
Apache 2 license.
http://www.apache.org/licenses/LICENSE-2.0.html
I'm not a lawyer, but this sounds to me like a "derivative work" which you are
free to create and distribute.
- Tim.
The quick solution to this is to create a *hadoop-env.sh* file in your
configuration folder (the same place where you have your hdfs-site.xml etc.)
and add the following to it:
export JAVA_HOME=/home/hadoop/software/java/jdk1.6.0_31
Just change the path above to your correct JAVA_HOME.
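A quick sanity check before restarting the daemons, reusing the example
path above (yours will differ):

  ls /home/hadoop/software/java/jdk1.6.0_31/bin/java
  /home/hadoop/software/java/jdk1.6.0_31/bin/java -version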
Just write back if it doe
Hello,
The documentation is released under the Apache 2.0 license.
You are free to modify and translate it.
Just go through the *Redistribution* section below as well:
http://www.apache.org/licenses/LICENSE-2.0.html
Regards,
Jagat Singh
On Tue, Jun 19, 2012 at 10:04 AM, wrote:
> Hey there,
>
> I am a student
When I am trying to start the namenode there is an error at startup,
and when I check the hadoop-env.sh file I get an error
message:
JAVA_HOME not set properly
resourcemanager running as process 6675. Stop it first.
hduser@localhost's password:
localhost: Error: JAVA_HOME is not set.
Hey there,
I am a student at the Department of Foreign Languages and at the same time
a volunteer at an organization named “Translation for Education”. I love
surfing the Internet and staying informed about the latest happenings
around me. Unfortunately, most of my fellow citizens don't kn
If your case isn't the HA NameNode feature from Apache Hadoop 2.0,
then there can't be a split-brain situation in your
not-exactly-failover solution, unless your VIP is also messed up
between nodes (i.e., some clients/DNs resolve the NN hostname as A and
the others as B).
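A quick way to check for that kind of skew is to run the same hostname
lookup on every client/DataNode and compare the answers; a minimal sketch
(the hostname is a placeholder):

  # Should print the same address on every node if the VIP/DNS is consistent
  getent hosts namenode.example.com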
One way this is already pr
Hello Michael, thanks for responding. At the bottom of the email, I have
given the following scenario. This is my understanding of split brain and I
am trying to simulate it, which is where I am getting problems.
My understanding is that split brain happens because of timeouts on the
main namenode
Hi all,
This is my first email to the list, so feel free to be candid in your
complaints if I'm doing something canonically uncouth in my requests for
assistance.
I'm using Hadoop 0.23 on 50 machines, each connected with gigabit
ethernet and each having solely a single hard disk. I am getting the
following error repeatably for the TeraSort benchmark.
Hi,
> But creating an instance of Reader for each reduce() call creates a big
> slowdown.
It wouldn't, if you wrapped it properly:
if (reader == null) {
  // First call only: open the MapFile once and reuse it afterwards
  reader = new MapFile.Reader(…);
}
This way it only initializes/reads/etc. once.
On Tue, Jun 19, 2012 at 6:03 PM, Ondřej K
Can you share the exception stack trace and the piece of code where you are
trying to create it?
Thanks
Devaraj
From: Ondřej Klimpera [klimp...@fit.cvut.cz]
Sent: Tuesday, June 19, 2012 6:03 PM
To: common-user@hadoop.apache.org
Subject: Creating MapFile.Reader
In your example, you only have one active Name Node. So how would you encounter
a 'split brain' scenario?
Maybe it would be better if you defined what you mean by a split brain?
-Mike
On Jun 18, 2012, at 8:30 PM, hdev ml wrote:
> All hadoop contributors/experts,
>
> I am trying to simulate split brain
Hi Ravi,
Yes, I was using a local path instead of HDFS.
I corrected this.
But now I am getting the following issue:
12/06/19 14:38:08 INFO gridmix.SubmitterUserResolver: Current user resolver
is SubmitterUserResolver
12/06/19 14:38:08 WARN gridmix.Gridmix: Resource null ignored
12/06/19 14
Hello, I'm trying to use a MapFile (stored on HDFS) in my reduce task,
which processes some text data.
When I try to initialize MapFile.Reader in the reducer's configure() method,
the app throws a NullPointerException, whereas the same approach works when
used in each reduce() method call with the same parameters, eve
Hi Amit,
In your command, the iopath directory (~/Desktop/test_gridmix_output)
doesn't seem to be an HDFS location. I believe it needs to be on HDFS.
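For example, a GridMix3-style run with the iopath on HDFS might look like
this (the jar name, HDFS path, and trace file are placeholders):

  hadoop jar hadoop-gridmix.jar org.apache.hadoop.mapred.gridmix.Gridmix \
    -generate 1g hdfs:///user/amit/gridmix-io rumen-trace.json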
HTH
Ravi.
On Mon, Jun 18, 2012 at 11:16 AM, sangroya wrote:
> Hello Ravi,
>
> Thanks for your response.
>
> I got started by running Rumen and ge