Re: ENOENT: No such file or directory

2012-02-16 Thread Sumanth V
I changed dfs.datanode.max.xcievers to 10 and mapred.job.reuse.jvm.num.tasks to 10. The job still fails with the same set of errors as before. Sumanth On Thu, Feb 16, 2012 at 7:41 PM, Srinivas Surasani wrote: > Sumanth, > > For quick check, try setting this to much bigger value(

Re: ENOENT: No such file or directory

2012-02-16 Thread Srinivas Surasani
Sumanth, For a quick check, try setting this to a much bigger value (1M), though this is not good practice (the datanode may run out of memory). On Thu, Feb 16, 2012 at 10:21 PM, Sumanth V wrote: > Hi Srinivas, > > The *dfs.datanode.max.xcievers* value is set to 4096 in hdfs-site.xml. > > > Suman
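For reference, the property the thread is discussing lives in hdfs-site.xml on each datanode; a minimal sketch of the setting follows. The 1048576 value is only the quick-check figure Srinivas suggests above, not a recommended production setting, and note that in this Hadoop version the property name really is spelled "xcievers".

```xml
<!-- hdfs-site.xml: cap on concurrent data-transfer threads per datanode.
     The property name is genuinely misspelled ("xcievers") in this Hadoop
     version. 1048576 (~1M) is only the quick-check value from the thread;
     each active xceiver thread consumes datanode memory, so very large
     values risk out-of-memory, as Srinivas warns. -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>1048576</value>
</property>
```

A datanode restart is needed for the change to take effect.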

Re: ENOENT: No such file or directory

2012-02-16 Thread Sumanth V
Hi Srinivas, The *dfs.datanode.max.xcievers* value is set to 4096 in hdfs-site.xml. Sumanth On Thu, Feb 16, 2012 at 7:11 PM, Srinivas Surasani wrote: > Sumanth, I think Sreedhar is pointing to "dfs.datanode.max.xceivers" > property in hdfs-site.xml. Try setting this property to higher value

Re: ENOENT: No such file or directory

2012-02-16 Thread Srinivas Surasani
Sumanth, I think Sreedhar is pointing to the "dfs.datanode.max.xceivers" property in hdfs-site.xml. Try setting this property to a higher value. On Thu, Feb 16, 2012 at 9:51 PM, Sumanth V wrote: > ulimit values are set to much higher values than the default values > Here is the /etc/security/limits.c

Re: ENOENT: No such file or directory

2012-02-16 Thread Sumanth V
ulimit values are set much higher than the defaults. Here are the /etc/security/limits.conf contents -
* - nofile 64000
hdfs - nproc 32768
hdfs - stack 10240
hbase - nproc 32768
hbase - stack 10240
mapred - nproc 32768
ma

Re: ENOENT: No such file or directory

2012-02-16 Thread Sree K
Sumanth, You may want to check the ulimit setting for open files. Set it to a higher value if it is at the default value of 1024. Regards, Sreedhar From: Sumanth V To: common-user@hadoop.apache.org Sent: Thursday, February 16, 2012 6:25 PM Subject: ENOENT: No su
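A quick way to verify Sreedhar's suggestion on a node is with the shell's `ulimit` builtin; a minimal sketch (the 1024 figure is the common stock default, not a value taken from this cluster):

```shell
# Show the current shell's open-file limit; the stock default is often 1024
ulimit -n

# Show the soft and hard open-file limits separately; the soft limit is
# what a process actually hits, and it cannot exceed the hard limit
ulimit -Sn
ulimit -Hn
```

Note that limits set in /etc/security/limits.conf apply to new login sessions, so a long-running daemon must be restarted from a session that has the new limits before they take effect.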

Re: ENOENT: No such file or directory

2012-02-16 Thread Srinivas Surasani
Sumanth, what is the value set for the "mapred.job.reuse.jvm.num.tasks" property? On Thu, Feb 16, 2012 at 9:25 PM, Sumanth V wrote: > Hi, > > We have a 20 node hadoop cluster running CDH3 U2. Some of our jobs > are failing with the following errors. We noticed that we are > consistently hitting this
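For context, the JVM-reuse property Srinivas asks about is set in mapred-site.xml (or per job with -D); a sketch, where 10 is only an illustrative value:

```xml
<!-- mapred-site.xml: number of tasks a child JVM runs before being replaced.
     1 (the default) disables reuse; -1 means unlimited reuse for tasks of
     the same job; 10 here is only an illustrative value. -->
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>10</value>
</property>
```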

Re: Addendum to Hypertable vs. HBase Performance Test (w/ mslab enabled)

2012-02-16 Thread Edward Capriolo
You ain't gotta like me, you just mad Cause I tell it how it is, and you tell it how it might be -Attributed to Puff Daddy & Now apparently T. Lipcon On Mon, Feb 13, 2012 at 2:33 PM, Todd Lipcon wrote: > > Hey Doug, > > Want to also run a comparison test with inter-cluster replication > turned o

Yuan Jin is out of the office.

2012-02-16 Thread Yuan Jin
I will be out of the office starting 02/17/2012 and will not return until 02/20/2012. I am out of the office, and will reply to you when I am back. For HAMSTER related things, you can contact Anthony(Fei Xiong/China/IBM) For CFM related things, you can contact Daniel(Liang SH Su/China/Contr/IBM) For

Re: memory of mappers and reducers

2012-02-16 Thread Mark question
Great! thanks a lot Srinivas ! Mark On Thu, Feb 16, 2012 at 7:02 AM, Srinivas Surasani wrote: > 1) Yes option 2 is enough. > 2) Configuration variable "mapred.child.ulimit" can be used to control > the maximum virtual memory of the child (map/reduce) processes. > > ** value of mapred.child.ulimi

Re: memory of mappers and reducers

2012-02-16 Thread Srinivas Surasani
1) Yes, option 2 is enough. 2) The configuration variable "mapred.child.ulimit" can be used to control the maximum virtual memory of the child (map/reduce) processes. ** value of mapred.child.ulimit > value of mapred.child.java.opts On Thu, Feb 16, 2012 at 12:38 AM, Mark question wrote: > Thanks for
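A sketch of the two properties together in mapred-site.xml; the values are illustrative, and note the units differ: mapred.child.ulimit is in kilobytes, while the heap in mapred.child.java.opts is given as a JVM flag.

```xml
<!-- mapred-site.xml: illustrative values only. mapred.child.ulimit is the
     maximum virtual memory of a child task in kilobytes and must be larger
     than the heap requested in mapred.child.java.opts, as noted above --
     otherwise tasks are killed on launch. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
<property>
  <name>mapred.child.ulimit</name>
  <value>1048576</value> <!-- 1 GB in KB, comfortably above the 512 MB heap -->
</property>
```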

Re: tasktracker Error with mortbay

2012-02-16 Thread Matthias Zengler
These are the last entries of the JobTracker log related to the job:
2012-02-15 00:21:51,604 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201201101557_0094_r_01_1' has completed task_201201101557_0094_r_01 successfully.
2012-02-15 00:23:12,457 INFO org.apache.hadoop.mapred.Jo

Re: tasktracker Error with mortbay

2012-02-16 Thread Harsh J
Matthias, Could you paste-bin a grep of the job ID (If job ID is job__, grep just for _) from your JobTracker log and send the link across? The error you point to is https://issues.apache.org/jira/browse/MAPREDUCE-5 and should be harmless; it is likely not the real issue here. On Thu, Feb 16, 2012 at 4:2
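Harsh's grep can be sketched like this. The log path, file contents, and attempt IDs below are hypothetical stand-ins (the real JobTracker log lives in the JobTracker's log directory); the job-ID digits 201201101557_0094 come from the log excerpt earlier in the thread, and grepping for just the digits catches the job, task, and attempt IDs in one pass since they all embed the same string.

```shell
# Create a tiny hypothetical stand-in for a JobTracker log
cat > /tmp/jobtracker.log <<'EOF'
2012-02-15 00:21:51,604 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201201101557_0094_r_000001_1' has completed successfully.
2012-02-15 00:23:12,457 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201201101557_0094_r_000001_1'
EOF

# If the job ID is job_201201101557_0094, grep just for its numeric part:
# this matches job_..., task_..., and attempt_... lines alike
grep '201201101557_0094' /tmp/jobtracker.log
```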