Hi Brian,
Thanks for your response. My Java version is 6, and I am on Hadoop 0.20.2.
Regards,
Deepti
-Original Message-
From: Brian Bockelman [mailto:bbock...@cse.unl.edu]
Sent: Friday, October 14, 2011 8:48 PM
To: common-user@hadoop.apache.org
Subject: Re: FUSE CRASHING
Hi Deepti,
That appear
As a quick workaround, you can try not starting as user 'root' but as
another user instead. You can revisit this issue later when you
require security.
On Sun, Oct 16, 2011 at 11:02 PM, Majid Azimi wrote:
> I have tested both 0.20.204.0 and 0.20.203.0. But the problem is still not solved.
> I'm going to
Ahh, ok... I think I understand now. I added the default constructor (just
initializing all values to 0) and now it seems to be running. :-) Thanks
for your help.
On Sun, Oct 16, 2011 at 9:43 PM, Brock Noland wrote:
> Hi,
>
> Inline..
>
> On Sun, Oct 16, 2011 at 9:40 PM, Keith Thompson wrote:
Hi,
Inline..
On Sun, Oct 16, 2011 at 9:40 PM, Keith Thompson wrote:
> Thanks. I went back and changed to WritableComparable instead of just
> Comparable. So, I added the readFields and write methods. I also took
> care of the typo in the constructor. :P
>
> Now I am getting this error:
>
> 1
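For illustration, a minimal sketch of the kind of key class being discussed: a custom WritableComparable with the no-arg constructor that Hadoop needs when it deserializes keys reflectively. The class name and fields below are hypothetical, not the actual code from this thread.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// Hypothetical composite key; names are illustrative only.
public class PairKey implements WritableComparable<PairKey> {
    private int first;
    private int second;

    // The no-arg constructor is required: Hadoop instantiates the key
    // reflectively during deserialization and then calls readFields().
    public PairKey() {
        this(0, 0);
    }

    public PairKey(int first, int second) {
        this.first = first;
        this.second = second;
    }

    // Serialize the fields in a fixed order...
    public void write(DataOutput out) throws IOException {
        out.writeInt(first);
        out.writeInt(second);
    }

    // ...and read them back in exactly the same order.
    public void readFields(DataInput in) throws IOException {
        first = in.readInt();
        second = in.readInt();
    }

    // Sort by first, then second (written for Java 6, which lacks
    // Integer.compare).
    public int compareTo(PairKey other) {
        if (first != other.first) {
            return first < other.first ? -1 : 1;
        }
        if (second != other.second) {
            return second < other.second ? -1 : 1;
        }
        return 0;
    }

    // Keep hashCode/equals consistent with compareTo so partitioning
    // and grouping behave predictably.
    public int hashCode() {
        return 31 * first + second;
    }

    public boolean equals(Object o) {
        if (!(o instanceof PairKey)) {
            return false;
        }
        PairKey p = (PairKey) o;
        return first == p.first && second == p.second;
    }
}

Without the no-arg constructor, ReflectionUtils.newInstance fails at runtime when the framework tries to instantiate the key during the shuffle, which matches the symptom described above.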
Thanks. I went back and changed to WritableComparable instead of just
Comparable. So, I added the readFields and write methods. I also took
care of the typo in the constructor. :P
Now I am getting this error:
11/10/16 21:34:08 INFO mapred.JobClient: Task Id :
attempt_201110162105_0002_m_0
Hi Jessica,
Sorry for the delay. I don't know of a pre-built version of the LZO
libraries that has the fix. I also couldn't quite tell which source
versions might have it. The easiest thing to do would be to pull the
source from github, make any changes, and build it locally:
https://github.com/k
Check the bin/hadoop script and search for the -jvm option that is getting
passed to the datanode launch command. Removing it should get around this issue.
I am not aware of the significance of this flag, though.
On Oct 16, 2011 12:32 PM, "Majid Azimi" wrote:
> I have tested both 0.20.204.0 and 0.20.203.0. B
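If it helps to locate the flag quickly, something like the following should work from the Hadoop install directory (assuming a standard tarball layout; the '--' stops grep from reading the leading-dash pattern as an option):

grep -n -- '-jvm' bin/hadoop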
I have tested both 0.20.204.0 and 0.20.203.0. But the problem is still not
solved. I'm going to test another JVM; I'm using OpenJDK now.
On Sun, Oct 16, 2011 at 2:53 PM, Uma Maheswara Rao G 72686 <
mahesw...@huawei.com> wrote:
> Which version of Hadoop are you using?
>
> Please check the recent discuss
Yes, the two nodes work as tasktrackers.
On 16 October 2011 22:20, Uma Maheswara Rao G 72686 wrote:
> I mean, the two nodes here are tasktrackers.
>
> - Original Message -
> From: Humayun gmail
> Date: Sunday, October 16, 2011 7:38 pm
> Subject: Re: Too much fetch failure
> To: common-user@hado
Arun,
This was fixed a week ago or so. Here's the infra ticket.
https://issues.apache.org/jira/browse/INFRA-3960
You should be able to add new contributors now.
thanks
mahadev
On Sun, Oct 16, 2011 at 9:36 AM, Arun C Murthy wrote:
> I've tried, and failed, many times recently to add 'contribut
I've tried, and failed, many times recently to add 'contributors' to the Hadoop
projects - something to do with the new UI they rolled out.
Let me try and follow up with the ASF folks; thanks for being patient!
Arun
On Oct 16, 2011, at 9:32 AM, Jon Allen wrote:
> I've been doing some work on a
I've been doing some work on a Jira and want to assign it to myself but
there doesn't seem to be an option to do this. I believe I need to be
assigned the contributor role before I can have issues assigned to me. Is
this correct, and if so, how do I get this role?
Thanks,
Jon
I mean, the two nodes here are tasktrackers.
- Original Message -
From: Humayun gmail
Date: Sunday, October 16, 2011 7:38 pm
Subject: Re: Too much fetch failure
To: common-user@hadoop.apache.org
> Yes, we can ping every node (both master and slave).
>
> On 16 October 2011 19:52, Uma Maheswara
You are welcome. *smile*
One of the greatest advantages of open-source software is that you can look at
the code while scratching your head in the corner; this way you gain a better
understanding of the system and we, the project, will hopefully gain another
valuable contributor... hint, hint. ;-)
Good
No. In my config files I mention it as master.
On 16 October 2011 20:20, praveenesh kumar wrote:
> Why are you formatting the namenode again?
> 1. Just stop the cluster.
> 2. Just comment out the 127.0.0.1 localhost line.
> 3. Restart the cluster.
>
> How have you defined your Hadoop config files...
Why are you formatting the namenode again?
1. Just stop the cluster.
2. Just comment out the 127.0.0.1 localhost line (see the sketch below).
3. Restart the cluster.
How have you defined your Hadoop config files?
Have you mentioned localhost there?
Thanks,
Praveenesh
On Sun, Oct 16, 2011 at 7:42 PM, Humayun gmail
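For reference, a sketch of what each node's /etc/hosts might look like after step 2. The hostnames and addresses below are made up for illustration; use whatever hostnames your cluster's config files actually reference:

# 127.0.0.1   localhost     <- commented out so the node's hostname
#                              no longer resolves to the loopback address
192.168.56.101  master
192.168.56.102  slave1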
Commenting out the 127.0.0.1 line in /etc/hosts is not working. If I format
the namenode, this line is automatically added back.
Any other solution?
On 16 October 2011 19:13, praveenesh kumar wrote:
> Try commenting out the 127.0.0.1 localhost line in your /etc/hosts, then
> restart the cluster and then t
Yes, we can ping every node (both master and slave).
On 16 October 2011 19:52, Uma Maheswara Rao G 72686 wrote:
> Are you able to ping the other node with the configured hostnames?
>
> Make sure that you are able to ping the other machine with the hostname
> configured in the /etc/hosts files.
Are you able to ping the other node with the configured hostnames?
Make sure that you are able to ping the other machine with the hostname
configured in the /etc/hosts files.
Regards,
Uma
- Original Message -
From: praveenesh kumar
Date: Sunday, October 16, 2011 6:46 pm
Subject: Re:
Try commenting out the 127.0.0.1 localhost line in your /etc/hosts, then
restart the cluster and try again.
Thanks,
Praveenesh
On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail wrote:
> We are using Hadoop on VirtualBox. When it is a single node it works
> fine for big datasets larger than the
Hi all,
Any idea when Hadoop 0.20.205 is officially going to be released?
Is Hadoop 0.20.205 RC2 stable enough to put into production?
I am using hadoop-0.20-append with HBase 0.90.3 now and want to switch to 205,
but I am looking for some valuable suggestions/recommendations.
Thanks,
Praveenesh
Which version of Hadoop are you using?
Please check the recent discussion, which should help you with this problem.
http://search-hadoop.com/m/PPgvNPUoL2&subj=Re+Starting+Datanode
Regards,
Uma
- Original Message -
From: Majid Azimi
Date: Sunday, October 16, 2011 2:22 am
Subject: Un
We are using Hadoop on VirtualBox. When it is a single node it works
fine for big datasets larger than the default block size, but in the case of a
multinode cluster (2 nodes) we are facing some problems.
When the input dataset is smaller than the default block size (64 MB)
it works fine, b
Hi Arun,
Your answer sheds extra bright light while I am scratching my head in the corner.
A million thanks for the answer and the document. I will post back the result.
Thanks again,
P
On Sat, Oct 15, 2011 at 10:32 PM, Arun C Murthy wrote:
>
> Hi Patrick,
>
> It's hard to diagnose CDH since I don't know