"Legal Disclaimer: This electronic message and all contents contain information from Cybage Software Private Limited which may be privileged, confidential, or otherwise protected from disclosure. The information is intended to be for the addressee(s) only. If you are not an addressee, any disclo
Felix,
> I'm using the new Job class:
>
> http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapreduce/Job.html
>
> There is a way to set the number of reduce tasks:
>
> setNumReduceTasks(int tasks)
>
> However, I don't see how to set the number of MAP tasks?
>
> I tried to set it
Hi, everyone
Is there a mistake in the display of the completion percentage, or did the
job not complete successfully? This is the job details page:
User: root
Job Name: MyMRJob
Job File: hdfs://hadoop-master:9000/hadoop/dfs/mapred/system/job_201006171158_0002/job.xml
Job Setup:
Sam:
From your counters, it looks like you're not outputting any records
from the mapper. My guess is that you're printing output records to
stderr or stdout and not using the output collector. Check out
http://www.cloudera.com/videos/programming_with_hadoop to learn about
the basics.
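For reference, here is a minimal old-API mapper sketch that emits records
through the output collector instead of printing them (the class and the
counts are illustrative, not from your job):

  import java.io.IOException;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.MapReduceBase;
  import org.apache.hadoop.mapred.Mapper;
  import org.apache.hadoop.mapred.OutputCollector;
  import org.apache.hadoop.mapred.Reporter;

  public class MyMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, LongWritable> {
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, LongWritable> output,
                    Reporter reporter) throws IOException {
      // System.out.println(value) only ends up in the task's stdout log;
      // only output.collect(...) feeds records to the reducers and counters.
      output.collect(value, new LongWritable(1));
    }
  }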
Hope that
Hello friends,
I'm new to the hadoop environment. My application uses
a small text file (I know that hadoop is not designed for smaller files, but
I am just using it for testing purposes) and generates an intermediate file
which is passed as an input to the mapper function.
Before the new hardware is ready, I suggest you configure jobtracker to
retain fewer jobs in memory - as Todd mentioned.
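For example, something like this in mapred-site.xml on the jobtracker (a
sketch; the property name is from 0.20 and the value is only illustrative):

  <property>
    <!-- cap the number of completed jobs the jobtracker keeps per user -->
    <name>mapred.jobtracker.completeuserjobs.maximum</name>
    <value>10</value>
  </property>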
On Mon, Jun 21, 2010 at 12:49 PM, Bobby Dennett wrote:
> Thanks all for your suggestions (please note that Tan is my co-worker;
> we are both working to try and resolve this is
Good luck Bobby. I hope that when you get this problem licked you’ll post your
solutions to help us all learn some more stuff as well :)
Cheers
James.
On 2010-06-21, at 1:49 PM, Bobby Dennett wrote:
> Thanks all for your suggestions (please note that Tan is my co-worker;
> we are both working
Thanks all for your suggestions (please note that Tan is my co-worker;
we are both working to try and resolve this issue)... we experienced
another hang this weekend and increased the HADOOP_HEAPSIZE setting to
6000 (MB) as we do periodically see "java.lang.OutOfMemoryError: Java
heap space" errors
I'm using the new Job class:
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapreduce/Job.html
There is a way to set the number of reduce tasks:
setNumReduceTasks(int tasks)
However, I don't see how to set the number of MAP tasks?
I tried to set it through mapred-site.xml:
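(For context, a sketch of how this is usually handled with the new API: the
number of map tasks follows the number of input splits, so it is tuned
indirectly through split size, and mapred.map.tasks is only a hint. Names
below are from the 0.20 mapreduce API; the values are illustrative.)

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

  Configuration conf = new Configuration();
  Job job = new Job(conf, "my job");
  // a smaller maximum split size yields more splits, hence more map tasks
  FileInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024); // 64 MB
  // this is only a hint to the InputFormat, not a hard setting
  job.getConfiguration().setInt("mapred.map.tasks", 10);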
Looks like the mounted file system /mnt/namenode-backup does not support
locking.
It should, otherwise hdfs cannot guarantee that only one name-node updates the
directory.
You might want to check with your sysadmins; maybe the mount point is
misconfigured.
Thanks,
--Konstantin
On 6/21/2010 1
Again, removing a bunch of CC:'es.
On Jun 21, 2010, at 2:26 AM, Bikash Singhal wrote:
>
> Hi Hadoopers,
>
> I have received a WARN in the hadoop cluster. Has anybody seen this? Any
> solution?
>
>
>> 2010-06-06 01:45:04,079 WARN org.apache.hadoop.conf.Configuration:
>> /var/lib/hadoop-0.20/
[Cutting the CC: line down to size ]
On Jun 18, 2010, at 3:37 AM, Bikash Singhal wrote:
> Hi folks ,
>
> I have received this error in the hadoop cluster. Has anybody seen this?
> Any solution?
Since you aren't picking anything out and you've shared a bunch of messages,
I'm going to
Hi All,
I just wanted to check if anybody had a comment on this query.
Thanks,
-Shrinivas
On Wed, Jun 16, 2010 at 9:50 PM, Shrinivas Joshi wrote:
> Sorry if this is repeat email for you, I did send this to common-dev list
> as well.
>
> Hello,
> I am trying to get profiles for a workload runnin
According to the hadoop tutorial on the Yahoo developer network and the hadoop
documentation on Apache, a simple way to achieve namenode backup and recovery
from a single-point namenode failure is to use a folder which is mounted on the
namenode machine but actually on a different machine to save dfs meta data as
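(A sketch of that setup in hdfs-site.xml, assuming the remote folder is
NFS-mounted at /mnt/namenode-backup as mentioned elsewhere in the thread;
paths are illustrative:)

  <property>
    <name>dfs.name.dir</name>
    <!-- comma-separated list: the namenode writes its image and edit log
         to every listed directory, local and remote alike -->
    <value>/hadoop/dfs/name,/mnt/namenode-backup</value>
  </property>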
Got time to work on this. So, is there some sample illustrating this, and
where should I deploy this filter class? Thanks for your help!
--- On Fri, 3/5/10, Jakob Homan wrote:
From: Jakob Homan
Subject: Re: user authentication: protect hdfs/job web interface from public
To: common-user@hadoop.apac
Minor correction:
job Id is in the tracking URL
:50030/jobdetails.jsp?jobid=job_201006171925_0106&refresh=0
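For example (a sketch; the regex and variable names are mine, not from the
API):

  import java.util.regex.Matcher;
  import java.util.regex.Pattern;

  // pull the job id out of the URL returned by Job.getTrackingURL()
  String url = job.getTrackingURL();
  Matcher m = Pattern.compile("jobid=(job_\\d+_\\d+)").matcher(url);
  if (m.find()) {
    String jobId = m.group(1); // e.g. job_201006171925_0106
  }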
On Mon, Jun 21, 2010 at 9:36 AM, Ted Yu wrote:
> In new API, info is a private member of Job. I haven't found an easy way to
> retrieve what you wanted.
>
> You can call getTrackingURL() a
In the new API, info is a private member of Job. I haven't found an easy way
to retrieve what you wanted.
You can call getTrackingURL() and try parsing the HTML contents of that
page. :-)
You can also file a JIRA about this.
FYI mapred is going to be undeprecated. I would suggest spending your t
On Mon, Jun 21, 2010 at 5:26 AM, Bikash Singhal wrote:
>
> Hi Hadoopers,
>
> I have received a WARN in the hadoop cluster. Has anybody seen this? Any
> solution?
>
>
> > 2010-06-06 01:45:04,079 WARN org.apache.hadoop.conf.Configuration:
> /var/lib/hadoop-0.20/cache/hadoop/mapred/local/taskTracker/
I still haven't figured out how to query the JobTracker for a specific running
job using the new API.
Is this at all possible? The old/new APIs are driving me crazy.
As far as I understand, the old API entailed using mapred.JobClient & JobConf:
JobConf jobConf = new JobConf(new Configuration());
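Something like this appears to work from there (a sketch; the job id is
illustrative):

  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobID;
  import org.apache.hadoop.mapred.RunningJob;

  JobClient client = new JobClient(jobConf);
  // look up a specific job by id, e.g. one parsed from a tracking URL
  RunningJob running = client.getJob(JobID.forName("job_201006171925_0106"));
  if (running != null) {
    System.out.println(running.getJobName() + ": " + running.mapProgress());
  }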
Hi,
In the default installation, the hadoop masters and slaves files contain
localhost.
Am I correct that the masters file contains the list of SECONDARY namenodes?
If so, will the localhost node try to start a secondary namenode even if it
already has one?
Moreover, will datanodes try to contact themselves in order to t
Hi Hadoopers,
I have received a WARN in the hadoop cluster. Has anybody seen this? Any
solution?
> 2010-06-06 01:45:04,079 WARN org.apache.hadoop.conf.Configuration:
> /var/lib/hadoop-0.20/cache/hadoop/mapred/local/taskTracker/jobcache/job_201006060025_0003/job.xml:a
> attempt to override fin
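(For context: that warning is typically Configuration refusing to let a job's
job.xml override a property marked final in the cluster config, i.e.
something of this shape in mapred-site.xml; the name and value here are
illustrative:)

  <property>
    <name>mapred.local.dir</name>
    <value>/var/lib/hadoop-0.20/cache/hadoop/mapred/local</value>
    <!-- final properties cannot be overridden by a job's own configuration -->
    <final>true</final>
  </property>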
Stas Oskin wrote:
Hi.
I will point you at this presentation by my colleague Johannes Kirschnick,
Making Hadoop HA,
http://www.slideshare.net/steve_l/high-availability-hadoop
There's a performance graph on that slideset for small, virtualised
clusters; results in large physical clusters may va