I changed dfs.datanode.max.xcievers to 10 and mapred.job.reuse.jvm.num.tasks
to 10. The job still fails with the same set of errors as before.
Sumanth
On Thu, Feb 16, 2012 at 7:41 PM, Srinivas Surasani wrote:
Sumanth,
For a quick check, try setting this to a much bigger value (1M), though
this is not good practice (the datanode may run out of memory).
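For reference, the property under discussion lives in hdfs-site.xml; a minimal sketch (4096 is the value Sumanth reports elsewhere in this thread, and the "xcievers" misspelling is the property's actual name in Hadoop):

```xml
<!-- hdfs-site.xml (sketch; value is the 4096 mentioned in this thread) -->
<property>
  <!-- note: deliberately spelled "xcievers" in Hadoop itself -->
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```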
On Thu, Feb 16, 2012 at 10:21 PM, Sumanth V wrote:
Hi Srinivas,
The *dfs.datanode.max.xcievers* value is set to 4096 in hdfs-site.xml.
Sumanth
On Thu, Feb 16, 2012 at 7:11 PM, Srinivas Surasani wrote:
Sumanth, I think Sreedhar is pointing to the "dfs.datanode.max.xceivers"
property in hdfs-site.xml. Try setting this property to a higher value.
On Thu, Feb 16, 2012 at 9:51 PM, Sumanth V wrote:
The ulimit values are set much higher than the defaults.
Here are the /etc/security/limits.conf contents -
* - nofile 64000
hdfs - nproc 32768
hdfs - stack 10240
hbase - nproc 32768
hbase - stack 10240
mapred - nproc 32768
ma
Sumanth,
You may want to check the ulimit setting for open files.
Set it to a higher value if it is at the default of 1024.
Regards,
Sreedhar
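A quick way to check what Sreedhar suggests (a sketch; 1024 is the usual Linux per-process default for open files):

```shell
# Print the soft and hard open-file limits for the current shell.
# 1024 is the common Linux default that the thread suggests raising.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "nofile soft=$soft hard=$hard"
if [ "$soft" != "unlimited" ] && [ "$soft" -le 1024 ]; then
  echo "consider raising nofile in /etc/security/limits.conf"
fi
```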
From: Sumanth V
To: common-user@hadoop.apache.org
Sent: Thursday, February 16, 2012 6:25 PM
Subject: ENOENT: No su
Sumanth,
What is the value set for the "mapred.job.reuse.jvm.num.tasks" property?
On Thu, Feb 16, 2012 at 9:25 PM, Sumanth V wrote:
> Hi,
>
> We have a 20 node hadoop cluster running CDH3 U2. Some of our jobs
> are failing with the following errors. We noticed that we are
> consistently hitting this
You ain't gotta like me, you just mad
Cause I tell it how it is, and you tell it how it might be
-Attributed to Puff Daddy
& Now apparently T. Lipcon
On Mon, Feb 13, 2012 at 2:33 PM, Todd Lipcon wrote:
>
> Hey Doug,
>
> Want to also run a comparison test with inter-cluster replication
> turned o
I will be out of the office starting 02/17/2012 and will not return until
02/20/2012.
I am out of the office and will reply to you when I am back.
For HAMSTER related things, you can contact Anthony(Fei Xiong/China/IBM)
For CFM related things, you can contact Daniel(Liang SH
Su/China/Contr/IBM)
For
Great! thanks a lot Srinivas !
Mark
On Thu, Feb 16, 2012 at 7:02 AM, Srinivas Surasani wrote:
1) Yes option 2 is enough.
2) Configuration variable "mapred.child.ulimit" can be used to control
the maximum virtual memory of the child (map/reduce) processes.
** the value of mapred.child.ulimit must be greater than the heap set in
mapred.child.java.opts
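A sketch of how the two properties might be set together in mapred-site.xml (the values below are illustrative, not from the thread; mapred.child.ulimit is specified in kilobytes):

```xml
<!-- mapred-site.xml (sketch; values are illustrative only) -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>   <!-- child JVM heap: 512 MB -->
</property>
<property>
  <name>mapred.child.ulimit</name>
  <!-- virtual-memory limit in KB; must exceed the heap above -->
  <value>1048576</value>    <!-- 1 GB -->
</property>
```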
On Thu, Feb 16, 2012 at 12:38 AM, Mark question wrote:
> Thanks for
These are the last entries of the JobTracker log related to the job:
2012-02-15 00:21:51,604 INFO org.apache.hadoop.mapred.JobInProgress: Task
'attempt_201201101557_0094_r_01_1' has completed
task_201201101557_0094_r_01 successfully.
2012-02-15 00:23:12,457 INFO org.apache.hadoop.mapred.Jo
Matthias,
Could you paste-bin a grep of the job ID (if the job ID is
job_&lt;ID&gt;, grep just for &lt;ID&gt;) from your JobTracker log
and send the link across?
The error you point to is
https://issues.apache.org/jira/browse/MAPREDUCE-5 and should be
harmless / not the real issue here.
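The grep being requested might look like this (the log path and its contents are made up for illustration; the job ID reuses the one from the JobTracker excerpt above):

```shell
# Hypothetical illustration of pulling one job's entries from a
# JobTracker log; path and log lines are fabricated for the example.
log=/tmp/jobtracker-sample.log
printf '%s\n' \
  '2012-02-15 00:21:51,604 INFO org.apache.hadoop.mapred.JobInProgress: Task attempt_201201101557_0094_r_01_1 has completed' \
  '2012-02-15 00:22:00,000 INFO org.apache.hadoop.mapred.JobTracker: unrelated entry' \
  > "$log"
grep '201201101557_0094' "$log"
```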
On Thu, Feb 16, 2012 at 4:2