configure hadoop-0.22 fairscheduler

2012-09-06 Thread Jameson Li
2012-09-06 Thread Jameson Li
I want to test hadoop-0.22, but I ran into trouble configuring the fair scheduler: it does not take effect. I have configured these items in mapred-site.xml, and I have also copied the fair scheduler jar into $HADOOP_HOME/lib: property
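
For reference, a minimal mapred-site.xml sketch for enabling the fair scheduler on the MR1 line; the property names below are assumed from the 0.20/0.22-era fair scheduler documentation and the allocation file path is a placeholder, so verify both against your release:

    <!-- Hand the JobTracker the fair scheduler instead of the default FIFO scheduler -->
    <property>
      <name>mapred.jobtracker.taskScheduler</name>
      <value>org.apache.hadoop.mapred.FairScheduler</value>
    </property>
    <!-- Optional: per-pool allocations (placeholder path) -->
    <property>
      <name>mapred.fairscheduler.allocation.file</name>
      <value>/path/to/fair-scheduler.xml</value>
    </property>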

Re: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException in trunk version

2012-09-06 Thread Sherif Akoush
It works now if the number of reducers is limited (4 in my case). However, I am not sure why it sometimes fails when the number of reducers is increased. I tried increasing the user's open-file limit, as suggested by a blog, but it still fails for large numbers of reducers.
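
For anyone retracing that step, a sketch of how the per-user open-file limit is usually checked and raised on Linux; the user name and limit value here are illustrative, not the poster's actual settings:

    # Check the current soft limit for the user running the Hadoop daemons
    ulimit -n

    # Raise it persistently via /etc/security/limits.conf (illustrative values)
    hadoop  soft  nofile  32768
    hadoop  hard  nofile  32768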

Is there a way to get notification when a job fails?

2012-09-06 Thread WangRamon
Hi guys, is there some third-party monitoring tool I can use to monitor the Hadoop cluster, in particular one that sends a notification/email when a job fails? Thanks for any suggestions. Cheers, Ramon

Re: Is there a way to get notification when a job fails?

2012-09-06 Thread Julien Muller
Hi, we use Oozie for this kind of notification. It is not really a monitoring tool but a workflow system. http://incubator.apache.org/oozie/docs/3.1.3/docs/DG_EmailActionExtension.html Julien 2012/9/6 WangRamon ramon_w...@hotmail.com: Hi guys, is there some third-party monitoring tool I can use
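
As a sketch of what the linked extension looks like in practice (the action name, addresses, and transitions here are hypothetical; the linked page has the authoritative schema), an email action wired to a workflow's error path:

    <action name="notify-failure">
        <email xmlns="uri:oozie:email-action:0.1">
            <to>ops-team@example.com</to>
            <subject>Workflow ${wf:id()} failed</subject>
            <body>Failed at node ${wf:lastErrorNode()}.</body>
        </email>
        <ok to="end"/>
        <error to="fail"/>
    </action>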

RE: Is there a way to get notification when a job fails?

2012-09-06 Thread WangRamon
Hi Hemanth, does it support Hadoop version 1.0.0? Thanks. Cheers, Ramon Date: Thu, 6 Sep 2012 14:43:45 +0530 Subject: Re: Is there a way to get notification when a job fails? From: yhema...@thoughtworks.com To: user@hadoop.apache.org Hi, There is a provision to get job end notifications for a
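
The provision being quoted is presumably the job-end notification callback, which is present in the 1.x MR1 line; a mapred-site.xml (or per-job) sketch with a hypothetical endpoint:

    <property>
      <name>job.end.notification.url</name>
      <!-- Hadoop substitutes $jobId and $jobStatus when the job completes -->
      <value>http://monitor.example.com/notify?jobId=$jobId&amp;status=$jobStatus</value>
    </property>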

Re: Is there a way to get notification when a job fails?

2012-09-06 Thread Rajiv Chittajallu
Notifications are sequential and don't have timeouts; see MAPREDUCE-1688. Not sure why it was closed as a duplicate of a YARN feature. (Hey Hemanth, welcome back.) From: Hemanth Yamijala yhema...@thoughtworks.com To: user@hadoop.apache.org Sent: Thursday, September 6,

Re: Legal Matter

2012-09-06 Thread Michael Segel
Why can't we use our ninjas? They are sitting on the bench. On Sep 6, 2012, at 7:52 AM, Russell Jurney russell.jur...@gmail.com wrote: Also there is a copy fee of $80 per page, and this thread is already ten pages, including headers; duplicated in triplicate, that comes to $2,400 plus a

Re: Legal Matter

2012-09-06 Thread Russell Jurney
HR is giving us crap over our use of pirates for business development. Russell Jurney http://datasyndrome.com On Sep 6, 2012, at 6:02 AM, Michael Segel michael_se...@hotmail.com wrote: Why can't we use our ninjas? They are sitting on the bench. On Sep 6, 2012, at 7:52 AM, Russell Jurney

Re: Integrating hadoop with java UI application deployed on tomcat

2012-09-06 Thread Visioner Sadak
Thanks, experts, for your help. I finally found the issue: I was getting the error org.apache.hadoop.ipc.RemoteException: Server IPC version 5 cannot communicate with client version 4 because the libraries in Tomcat were a different version than those of the Hadoop installation. Thanks a ton for
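
For others who hit the same IPC mismatch from a webapp, the fix is to make the client jar match the cluster's exact release; a sketch assuming a Maven-built webapp against a 1.x cluster (the version number is illustrative):

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <!-- Must match the version the cluster is running -->
      <version>1.0.3</version>
    </dependency>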

Out of memory in identity mapper?

2012-09-06 Thread JOAQUIN GUANTER GONZALBEZ
Hello hadoopers! In a reduce-only Hadoop job, input files are handled by the identity mapper and sent to the reducers without modification. In one of my jobs I was surprised to see the job failing in the map phase with an Out of memory error and GC overhead limit exceeded. In my understanding, a
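
For context, a minimal new-API sketch of such a reduce-only job, where no mapper class is set and the base Mapper and Reducer act as identity pass-throughs (a generic illustration, not the poster's actual job):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ReduceOnly {
        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "reduce-only");
            job.setJarByClass(ReduceOnly.class);
            // No setMapperClass(): the base Mapper is an identity map,
            // forwarding each (offset, line) record unchanged to the shuffle.
            job.setReducerClass(Reducer.class); // base Reducer is identity too
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }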

Re: [Cosmos-dev] Out of memory in identity mapper?

2012-09-06 Thread Harsh J
Protobuf involvement makes me more suspicious that this is possibly corruption or an issue with serialization as well. Perhaps if you can share some stack traces, people can help better. If it is reliably reproducible, then I'd also check the count of records processed up to the point where this occurs, and see

Cannot browse job.xml while running the job

2012-09-06 Thread Gaurav Dasgupta
Hi users, I have an 11-node Hadoop cluster. I am facing a strange situation while trying to view the job.xml of a running job. In the JobTracker Web UI, while a job is running, if I click on the job.xml link, it fails to retrieve the file and gives the following error: Failed to retrieve job

RE: Cannot browse job.xml while running the job

2012-09-06 Thread Jeffrey Buell
Does /var/log/hadoop have write permission for the hadoop user? From: Gaurav Dasgupta [mailto:gdsay...@gmail.com] Sent: Thursday, September 06, 2012 9:45 AM To: user@hadoop.apache.org Subject: Cannot browse job.xml while running the job Hi users, I have an 11-node Hadoop cluster. I am facing a
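
A quick way to verify that (the user and group names below are assumptions; substitute whatever account your JobTracker runs as):

    # Inspect ownership and permissions on the local log directory
    ls -ld /var/log/hadoop

    # If the account running the daemons cannot write there, fix ownership
    chown -R hadoop:hadoop /var/log/hadoop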

Re: build failure - trying to build hadoop trunk checkout

2012-09-06 Thread Steve Loughran
On 6 September 2012 14:11, Michael Segel michael_se...@hotmail.com wrote: On Sep 6, 2012, at 6:24 AM, Steve Loughran ste...@hortonworks.com wrote: How is this breaking RFC-952? It's not. There is a bug. Under RFC-952, the restrictions deal with 'label' length, 1-63 characters, and that the
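
For concreteness, a sketch of the label rule under discussion (1-63 characters; letters, digits, and interior hyphens only, per the RFC-952 family as relaxed by RFC-1123); this is an illustration of the rule, not code from the thread:

    import java.util.regex.Pattern;

    public class HostLabel {
        // One DNS label: starts and ends alphanumeric, hyphens allowed
        // inside, 63 characters maximum
        private static final Pattern LABEL =
            Pattern.compile("^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$");

        public static boolean isValid(String label) {
            return LABEL.matcher(label).matches();
        }

        public static void main(String[] args) {
            System.out.println(isValid("node-01"));  // true
            System.out.println(isValid("bad_host")); // false: underscore not allowed
        }
    }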

Re: [Cosmos-dev] Out of memory in identity mapper?

2012-09-06 Thread SEBASTIAN ORTEGA TORRES
There is no trace to check on the task; I get n/a instead of the links to the traces in the web UI. Some of the maps are retried successfully while others fail again, until one of them fails four times in a row and the job is automatically terminated. Is this compatible with protobuf corruption? In

Re: build failure - trying to build hadoop trunk checkout

2012-09-06 Thread Vinod Kumar Vavilapalli
Never mind filing. I recalled that we debugged this issue a long time back and tracked it down to problems with Kerberos. See https://issues.apache.org/jira/browse/HADOOP-7988. Given that, Tony, changing your hostname seems to be the only option. Thanks, +Vinod On Sep 6, 2012, at 4:24 AM,

Re: One petabyte of data loading into HDFS with in 10 min.

2012-09-06 Thread Gulfie
Back up for a second. Why would you want to do this, and where does the data come from? Is this a new PB of data every time, or is it a PB total with some new and some old? Only migrating the deltas could help. Can the data migration/load have its latency hidden? Is the PB of data

Re: Error using hadoop in non-distributed mode

2012-09-06 Thread Pat Ferrel
Thanks! You nailed it. Mahout was using the cache, but fortunately there was an easy way to tell it not to, and now the jobs run locally and therefore in a debugging setup. On Sep 4, 2012, at 9:22 PM, Hemanth Yamijala yhema...@thoughtworks.com wrote: Hi, The path
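
For anyone reproducing that debugging arrangement, a sketch of the 1.x-era settings that force jobs to run in-process on the local filesystem (property names from the classic configuration; later versions renamed them):

    <!-- Run MapReduce in a single local process instead of via a JobTracker -->
    <property>
      <name>mapred.job.tracker</name>
      <value>local</value>
    </property>
    <!-- Use the local filesystem instead of HDFS -->
    <property>
      <name>fs.default.name</name>
      <value>file:///</value>
    </property>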

Re: [Cosmos-dev] Out of memory in identity mapper?

2012-09-06 Thread Hemanth Yamijala
Harsh, could IsolationRunner be used here? I'd put up a patch for HADOOP-8765; after applying it, IsolationRunner works for me. Maybe we could use it to re-run the map task that's failing and debug. Thanks hemanth On Thu, Sep 6, 2012 at 9:42 PM, Harsh J ha...@cloudera.com wrote: Protobuf
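
For reference, the classic IsolationRunner recipe from the old MapReduce tutorial (the working-directory layout varies by version, so the path here is abbreviated):

    # 1. Re-run the job with keep.failed.task.files=true so the failed
    #    attempt's files are retained on the task tracker node.
    # 2. On that node, cd into the failed attempt's working directory under
    #    the task tracker's local storage, then re-run the task in isolation:
    hadoop org.apache.hadoop.mapred.IsolationRunner ../job.xml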

RE: Legal Matter

2012-09-06 Thread sathyavageeswaran
That is why it would have been better to use our services. From: Russell Jurney [mailto:russell.jur...@gmail.com] Sent: 06 September 2012 18:39 To: user@hadoop.apache.org Subject: Re: Legal Matter HR is giving us crap over our use of pirates for business development. Russell

RE: Legal Matter

2012-09-06 Thread sathyavageeswaran
Yeah, that would be great! From: Fabio Pitzolu [mailto:fabio.pitz...@gr-ci.com] Sent: 06 September 2012 18:59 To: user@hadoop.apache.org Subject: Re: Legal Matter Upscale the Pastries would be a great name for a BigData blog. Fabio 2012/9/6 Michael Segel michael_se...@hotmail.com You