Re: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory

2008-11-26 Thread Comfuzed


I found that I had accidentally disabled my swap partition in my fstab.

Typing free, I saw that I had no swap space.

Then I followed this guide http://www.linux.com/feature/121916 and all was
well.
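
In case it saves the next person a search, the checks and the fix look
roughly like this (the device name is only an example; use whatever your
fstab actually lists):

  free -m          # the Swap: line should show a non-zero total
  swapon -s        # lists active swap devices; empty output means swap is off
  # re-enable the commented-out swap line in /etc/fstab, e.g.
  #   /dev/sda2  none  swap  sw  0  0
  sudo swapon -a   # activate everything listed in fstab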

hth
m
-- 
View this message in context: 
http://www.nabble.com/Cannot-run-program-%22bash%22%3A-java.io.IOException%3A-error%3D12%2C-Cannot-allocate-memory-tp19891450p20712473.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.



RE: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory

2008-11-18 Thread Xavier Stevens
I'm still seeing this problem on a cluster using Hadoop 0.18.2.  I tried
dropping the max number of map tasks per node from 8 to 7.  I still get
the error although it's less frequent.  But I don't get the error at all
when using Hadoop 0.17.2.

Anyone have any suggestions?


-Xavier

-Original Message-
From: [EMAIL PROTECTED] On Behalf Of Edward J. Yoon
Sent: Thursday, October 09, 2008 2:07 AM
To: core-user@hadoop.apache.org
Subject: Re: Cannot run program bash: java.io.IOException: error=12,
Cannot allocate memory

Thanks Alexander!!

On Thu, Oct 9, 2008 at 4:49 PM, Alexander Aristov
[EMAIL PROTECTED] wrote:
 I received such errors when I overloaded data nodes. You may increase
 swap space or run fewer tasks.

 Alexander

 2008/10/9 Edward J. Yoon [EMAIL PROTECTED]

 Hi,

 I received the message below. Can anyone explain this?

 08/10/09 11:53:33 INFO mapred.JobClient: Task Id :
 task_200810081842_0004_m_00_0, Status : FAILED
 java.io.IOException: Cannot run program "bash": java.io.IOException:
 error=12, Cannot allocate memory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
        at org.apache.hadoop.util.Shell.run(Shell.java:134)
        at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:296)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
        at org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:107)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:734)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:694)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:220)
        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)
 Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
        ... 10 more

 --
 Best regards, Edward J. Yoon
 [EMAIL PROTECTED]
 http://blog.udanax.org




 --
 Best Regards
 Alexander Aristov




--
Best regards, Edward J. Yoon
[EMAIL PROTECTED]
http://blog.udanax.org




Re: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory

2008-11-18 Thread Brian Bockelman

Hey Xavier,

Don't forget, the Linux kernel reserves the memory; how much of the heap
is currently in use is disregarded.  How much heap space do your datanode
and tasktracker get?  (PS: overcommit ratio is disregarded if
overcommit_memory=2).
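
A quick way to see how close a node is getting to the kernel's commit
limit while tasks are running:

  grep -E 'CommitLimit|Committed_AS' /proc/meminfo
  # with overcommit_memory=2, fork()/clone() of a task JVM starts failing
  # with ENOMEM once Committed_AS approaches CommitLimit, even if free
  # memory still looks healthy in top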


You also have to remember that there is some overhead from the OS, the
Java code cache, and a bit from running the JVM.  Add at least 64 MB per
JVM for the code cache and for running the JVM itself, and we get about
400 MB of memory left for the OS and any other processes running.
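
Spelled out with the figures quoted below (treating the 64 MB per JVM as a
rough estimate):

  # 16 GB node, 8 map tasks at -Xmx1536m, datanode/tasktracker VIRT of 1408m/1439m
  echo $(( 16000 - 8 * (1536 + 64) - 1408 - 1439 ))   # => 353 MB left over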


You're definitely running out of memory.  Either allow overcommitting  
(which will mean Java is no longer locked out of swap) or reduce  
memory consumption.


Brian

On Nov 18, 2008, at 4:57 PM, Xavier Stevens wrote:

1) It doesn't look like I'm out of memory but it is coming really  
close.

2) overcommit_memory is set to 2, overcommit_ratio = 100

As for the JVM, I am using Java 1.6.

**Note of Interest**: The virtual memory I see allocated in top for  
each

task is more than what I am specifying in the hadoop job/site configs.

Currently each physical box has 16 GB of memory.  I see the datanode  
and

tasktracker using:

            RES    VIRT
Datanode    145m   1408m
Tasktracker 206m   1439m

When idle.

So taking that into account I do 16000 MB - (1408+1439) MB which would
leave me with 13200 MB.  In my old settings I was using 8 map tasks   
so

13200 / 8 = 1650 MB.

My mapred.child.java.opts is -Xmx1536m which should leave me a little
head room.

When running though I see some tasks reporting 1900m.


-Xavier


-Original Message-
From: Brian Bockelman [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 18, 2008 2:42 PM
To: core-user@hadoop.apache.org
Subject: Re: Cannot run program bash: java.io.IOException: error=12,
Cannot allocate memory

Hey Xavier,

1) Are you out of memory (dumb question, but doesn't hurt to ask...)?
What does Ganglia tell you about the node?
2) Do you have /proc/sys/vm/overcommit_memory set to 2?

Telling Linux not to overcommit memory on Java 1.5 JVMs can be very
problematic.  Java 1.5 asks for min heap size + 1 GB of reserved, non-swap
memory on Linux systems by default.  The 1 GB of reserved, non-swap
memory is used for the JIT to compile code; this bug wasn't fixed until
later Java 1.5 updates.

Brian

On Nov 18, 2008, at 4:32 PM, Xavier Stevens wrote:


I'm still seeing this problem on a cluster using Hadoop 0.18.2.  I
tried
dropping the max number of map tasks per node from 8 to 7.  I still
get
the error although it's less frequent.  But I don't get the error at
all
when using Hadoop 0.17.2.

Anyone have any suggestions?


-Xavier

-Original Message-
From: [EMAIL PROTECTED] On Behalf Of Edward J. Yoon
Sent: Thursday, October 09, 2008 2:07 AM
To: core-user@hadoop.apache.org
Subject: Re: Cannot run program bash: java.io.IOException:  
error=12,

Cannot allocate memory

Thanks Alexander!!

On Thu, Oct 9, 2008 at 4:49 PM, Alexander Aristov
[EMAIL PROTECTED] wrote:
I received such errors when I overloaded data nodes. You may  
increase

swap space or run less tasks.

Alexander

2008/10/9 Edward J. Yoon [EMAIL PROTECTED]


Hi,

I received below message. Can anyone explain this?

08/10/09 11:53:33 INFO mapred.JobClient: Task Id :
task_200810081842_0004_m_00_0, Status : FAILED
java.io.IOException: Cannot run program "bash": java.io.IOException:
error=12, Cannot allocate memory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
        at org.apache.hadoop.util.Shell.run(Shell.java:134)
        at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:296)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
        at org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:107)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:734)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:694)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:220)
        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
        ... 10 more

--
Best regards, Edward J. Yoon
[EMAIL PROTECTED]
http://blog.udanax.org





--
Best Regards
Alexander Aristov





--
Best regards, Edward J. Yoon
[EMAIL PROTECTED]
http://blog.udanax.org








RE: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory

2008-11-18 Thread Xavier Stevens
1) It doesn't look like I'm out of memory but it is coming really close.
2) overcommit_memory is set to 2, overcommit_ratio = 100

As for the JVM, I am using Java 1.6.

**Note of Interest**: The virtual memory I see allocated in top for each
task is more than what I am specifying in the hadoop job/site configs.

Currently each physical box has 16 GB of memory.  I see the datanode and
tasktracker using: 

            RES    VIRT
Datanode    145m   1408m
Tasktracker 206m   1439m

When idle.

So taking that into account, I do 16000 MB - (1408 + 1439) MB, which would
leave me with roughly 13200 MB.  In my old settings I was using 8 map tasks,
so 13200 / 8 = 1650 MB.

My mapred.child.java.opts is -Xmx1536m, which should leave me a little
headroom.

When running, though, I see some tasks reporting 1900m.


-Xavier


-Original Message-
From: Brian Bockelman [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 18, 2008 2:42 PM
To: core-user@hadoop.apache.org
Subject: Re: Cannot run program bash: java.io.IOException: error=12,
Cannot allocate memory

Hey Xavier,

1) Are you out of memory (dumb question, but doesn't hurt to ask...)?   
What does Ganglia tell you about the node?
2) Do you have /proc/sys/vm/overcommit_memory set to 2?

Telling Linux not to overcommit memory on Java 1.5 JVMs can be very
problematic.  Java 1.5 asks for min heap size + 1 GB of reserved, non-
swap memory on Linux systems by default.  The 1 GB of reserved, non-swap
memory is used for the JIT to compile code; this bug wasn't fixed until
later Java 1.5 updates.

Brian

On Nov 18, 2008, at 4:32 PM, Xavier Stevens wrote:

 I'm still seeing this problem on a cluster using Hadoop 0.18.2.  I  
 tried
 dropping the max number of map tasks per node from 8 to 7.  I still  
 get
 the error although it's less frequent.  But I don't get the error at  
 all
 when using Hadoop 0.17.2.

 Anyone have any suggestions?


 -Xavier

 -Original Message-
 From: [EMAIL PROTECTED] On Behalf Of Edward J. Yoon
 Sent: Thursday, October 09, 2008 2:07 AM
 To: core-user@hadoop.apache.org
 Subject: Re: Cannot run program bash: java.io.IOException: error=12,
 Cannot allocate memory

 Thanks Alexander!!

 On Thu, Oct 9, 2008 at 4:49 PM, Alexander Aristov
 [EMAIL PROTECTED] wrote:
 I received such errors when I overloaded data nodes. You may increase
 swap space or run less tasks.

 Alexander

 2008/10/9 Edward J. Yoon [EMAIL PROTECTED]

 Hi,

 I received below message. Can anyone explain this?

 08/10/09 11:53:33 INFO mapred.JobClient: Task Id :
 task_200810081842_0004_m_00_0, Status : FAILED
 java.io.IOException: Cannot run program "bash": java.io.IOException:
 error=12, Cannot allocate memory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
        at org.apache.hadoop.util.Shell.run(Shell.java:134)
        at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:296)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
        at org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:107)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:734)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:694)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:220)
        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)
 Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
        ... 10 more

 --
 Best regards, Edward J. Yoon
 [EMAIL PROTECTED]
 http://blog.udanax.org




 --
 Best Regards
 Alexander Aristov




 --
 Best regards, Edward J. Yoon
 [EMAIL PROTECTED]
 http://blog.udanax.org






RE: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory

2008-11-18 Thread Koji Noguchi


We had a similar issue before with Secondary Namenode failing with 

2008-10-09 02:00:58,288 ERROR org.apache.hadoop.dfs.NameNode.Secondary:
java.io.IOException:
javax.security.auth.login.LoginException: Login failed: Cannot run
program "whoami": java.io.IOException:
error=12, Cannot allocate memory

In our case, simply increasing the swap space fixed our problem.

http://hudson.gotdns.com/wiki/display/HUDSON/IOException+Not+enough+space
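
If a node has no spare partition to enable, the swap increase can also be
done with a swap file, something like (size and path are just examples):

  sudo dd if=/dev/zero of=/swapfile bs=1M count=16384   # 16 GB file
  sudo mkswap /swapfile
  sudo swapon /swapfile
  # and add it to /etc/fstab so it survives a reboot:
  #   /swapfile  none  swap  sw  0  0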

When checking with strace, it was failing at 

[pid  7927] clone(child_stack=0,
flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
child_tidptr=0x4133c9f0) = -1 ENOMEM (Cannot allocate memory)


The clone was being made without CLONE_VM.  From the clone man page:

 If  CLONE_VM  is not set, the child process runs in a separate copy of
the memory space of the calling process
at the time of clone.  Memory writes or file mappings/unmappings
performed by one of the processes do not affect the 
other,  as with fork(2). 
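
For anyone who wants to check the same thing on their own nodes, attaching
strace to the daemon and watching for the failing clone is roughly
(substitute the tasktracker or namenode pid):

  strace -f -e trace=clone -p <daemon pid> 2>&1 | grep ENOMEM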

Koji


-Original Message-
From: Brian Bockelman [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 18, 2008 3:12 PM
To: core-user@hadoop.apache.org
Subject: Re: Cannot run program bash: java.io.IOException: error=12,
Cannot allocate memory

Hey Xavier,

Don't forget, the Linux kernel reserves the memory; current heap space  
is disregarded.  How much heap space does your data node and  
tasktracker get?  (PS: overcommit ratio is disregarded if  
overcommit_memory=2).

You also have to remember that there is some overhead from the OS, the  
Java code cache, and a bit from running the JVM.  Add at least 64 MB  
per JVM for code cache and running, and we get 400MB of memory left  
for the OS and any other process running.

You're definitely running out of memory.  Either allow overcommitting  
(which will mean Java is no longer locked out of swap) or reduce  
memory consumption.

Brian

On Nov 18, 2008, at 4:57 PM, Xavier Stevens wrote:

 1) It doesn't look like I'm out of memory but it is coming really  
 close.
 2) overcommit_memory is set to 2, overcommit_ratio = 100

 As for the JVM, I am using Java 1.6.

 **Note of Interest**: The virtual memory I see allocated in top for  
 each
 task is more than what I am specifying in the hadoop job/site configs.

 Currently each physical box has 16 GB of memory.  I see the datanode  
 and
 tasktracker using:

             RES    VIRT
 Datanode    145m   1408m
 Tasktracker 206m   1439m

 When idle.

 So taking that into account I do 16000 MB - (1408+1439) MB which would
 leave me with 13200 MB.  In my old settings I was using 8 map tasks   
 so
 13200 / 8 = 1650 MB.

 My mapred.child.java.opts is -Xmx1536m which should leave me a little
 head room.

 When running though I see some tasks reporting 1900m.


 -Xavier


 -Original Message-
 From: Brian Bockelman [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, November 18, 2008 2:42 PM
 To: core-user@hadoop.apache.org
 Subject: Re: Cannot run program bash: java.io.IOException: error=12,
 Cannot allocate memory

 Hey Xavier,

 1) Are you out of memory (dumb question, but doesn't hurt to ask...)?
 What does Ganglia tell you about the node?
 2) Do you have /proc/sys/vm/overcommit_memory set to 2?

 Telling Linux not to overcommit memory on Java 1.5 JVMs can be very
 problematic.  Java 1.5 asks for min heap size + 1 GB of reserved, non-
 swap memory on Linux systems by default.  The 1GB of reserved, non-  
 swap
 memory is used for the JIT to compile code; this bug wasn't fixed  
 until
 later Java 1.5 updates.

 Brian

 On Nov 18, 2008, at 4:32 PM, Xavier Stevens wrote:

 I'm still seeing this problem on a cluster using Hadoop 0.18.2.  I
 tried
 dropping the max number of map tasks per node from 8 to 7.  I still
 get
 the error although it's less frequent.  But I don't get the error at
 all
 when using Hadoop 0.17.2.

 Anyone have any suggestions?


 -Xavier

 -Original Message-
 From: [EMAIL PROTECTED] On Behalf Of Edward J. Yoon
 Sent: Thursday, October 09, 2008 2:07 AM
 To: core-user@hadoop.apache.org
 Subject: Re: Cannot run program bash: java.io.IOException:  
 error=12,
 Cannot allocate memory

 Thanks Alexander!!

 On Thu, Oct 9, 2008 at 4:49 PM, Alexander Aristov
 [EMAIL PROTECTED] wrote:
 I received such errors when I overloaded data nodes. You may  
 increase
 swap space or run less tasks.

 Alexander

 2008/10/9 Edward J. Yoon [EMAIL PROTECTED]

 Hi,

 I received below message. Can anyone explain this?

 08/10/09 11:53:33 INFO mapred.JobClient: Task Id :
 task_200810081842_0004_m_00_0, Status : FAILED
 java.io.IOException: Cannot run program bash:  
 java.io.IOException:
 error=12, Cannot allocate memory
  at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
  at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
  at org.apache.hadoop.util.Shell.run(Shell.java:134)
  at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:296)

Re: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory

2008-11-18 Thread Brian Bockelman

Hey Koji,

Possibly won't work here (but possibly will!).  When overcommitting is
turned off, Java locks its VM memory into non-swap (this request is
additionally ignored when overcommitting is turned on...).


The problem occurs when spawning a bash process and not a JVM, so
there's a fighting chance that the process can be launched more in
swap, but you aren't exactly solving the problem...
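
For reference, the switch being discussed is /proc/sys/vm/overcommit_memory;
going back to the kernel's default heuristic overcommit looks like this, with
the usual caveat that allowing overcommit trades the ENOMEM failures for the
possibility of the OOM killer stepping in:

  cat /proc/sys/vm/overcommit_memory      # 0 = heuristic, 1 = always, 2 = never overcommit
  sudo sysctl -w vm.overcommit_memory=0   # back to the usual default
  # to keep it across reboots, add to /etc/sysctl.conf:
  #   vm.overcommit_memory = 0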


Brian

On Nov 18, 2008, at 5:32 PM, Koji Noguchi wrote:




We had a similar issue before with Secondary Namenode failing with

2008-10-09 02:00:58,288 ERROR  
org.apache.hadoop.dfs.NameNode.Secondary:

java.io.IOException:
javax.security.auth.login.LoginException: Login failed: Cannot run
program whoami: java.io.IOException:
error=12, Cannot allocate memory

In our case, simply increasing the swap space fixed our problem.

http://hudson.gotdns.com/wiki/display/HUDSON/IOException+Not+enough+spac
e

When checking with strace, it was failing at

[pid  7927] clone(child_stack=0,
flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
child_tidptr=0x4133c9f0) = -1 ENOMEM (Cannot allocate memory)


Without CLONE_VM. In the clone man page,

If  CLONE_VM  is not set, the child process runs in a separate copy  
of

the memory space of the calling process
at the time of clone.  Memory writes or file mappings/unmappings
performed by one of the processes do not affect the
other,  as with fork(2). 

Koji


-Original Message-
From: Brian Bockelman [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 18, 2008 3:12 PM
To: core-user@hadoop.apache.org
Subject: Re: Cannot run program bash: java.io.IOException: error=12,
Cannot allocate memory

Hey Xavier,

Don't forget, the Linux kernel reserves the memory; current heap space
is disregarded.  How much heap space does your data node and
tasktracker get?  (PS: overcommit ratio is disregarded if
overcommit_memory=2).

You also have to remember that there is some overhead from the OS, the
Java code cache, and a bit from running the JVM.  Add at least 64 MB
per JVM for code cache and running, and we get 400MB of memory left
for the OS and any other process running.

You're definitely running out of memory.  Either allow overcommitting
(which will mean Java is no longer locked out of swap) or reduce
memory consumption.

Brian

On Nov 18, 2008, at 4:57 PM, Xavier Stevens wrote:


1) It doesn't look like I'm out of memory but it is coming really
close.
2) overcommit_memory is set to 2, overcommit_ratio = 100

As for the JVM, I am using Java 1.6.

**Note of Interest**: The virtual memory I see allocated in top for
each
task is more than what I am specifying in the hadoop job/site  
configs.


Currently each physical box has 16 GB of memory.  I see the datanode
and
tasktracker using:

            RES    VIRT
Datanode    145m   1408m
Tasktracker 206m   1439m

When idle.

So taking that into account I do 16000 MB - (1408+1439) MB which  
would

leave me with 13200 MB.  In my old settings I was using 8 map tasks
so
13200 / 8 = 1650 MB.

My mapred.child.java.opts is -Xmx1536m which should leave me a little
head room.

When running though I see some tasks reporting 1900m.


-Xavier


-Original Message-
From: Brian Bockelman [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 18, 2008 2:42 PM
To: core-user@hadoop.apache.org
Subject: Re: Cannot run program bash: java.io.IOException:  
error=12,

Cannot allocate memory

Hey Xavier,

1) Are you out of memory (dumb question, but doesn't hurt to ask...)?
What does Ganglia tell you about the node?
2) Do you have /proc/sys/vm/overcommit_memory set to 2?

Telling Linux not to overcommit memory on Java 1.5 JVMs can be very
problematic.  Java 1.5 asks for min heap size + 1 GB of reserved,  
non-

swap memory on Linux systems by default.  The 1GB of reserved, non-
swap
memory is used for the JIT to compile code; this bug wasn't fixed
until
later Java 1.5 updates.

Brian

On Nov 18, 2008, at 4:32 PM, Xavier Stevens wrote:


I'm still seeing this problem on a cluster using Hadoop 0.18.2.  I
tried
dropping the max number of map tasks per node from 8 to 7.  I still
get
the error although it's less frequent.  But I don't get the error at
all
when using Hadoop 0.17.2.

Anyone have any suggestions?


-Xavier

-Original Message-
From: [EMAIL PROTECTED] On Behalf Of Edward J. Yoon
Sent: Thursday, October 09, 2008 2:07 AM
To: core-user@hadoop.apache.org
Subject: Re: Cannot run program bash: java.io.IOException:
error=12,
Cannot allocate memory

Thanks Alexander!!

On Thu, Oct 9, 2008 at 4:49 PM, Alexander Aristov
[EMAIL PROTECTED] wrote:

I received such errors when I overloaded data nodes. You may
increase
swap space or run less tasks.

Alexander

2008/10/9 Edward J. Yoon [EMAIL PROTECTED]


Hi,

I received below message. Can anyone explain this?

08/10/09 11:53:33 INFO mapred.JobClient: Task Id :
task_200810081842_0004_m_00_0, Status : FAILED
java.io.IOException: Cannot run program "bash": java.io.IOException:
error=12, Cannot allocate memory

Re: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory

2008-11-18 Thread Edward J. Yoon
Hmm. In my experience, it often occurs on a commodity PC cluster. A small
PiEstimator job also throws this error on a PC cluster.

 But I don't get the error at all when using Hadoop 0.17.2.

Yes, I was wondering about this. :)

On Wed, Nov 19, 2008 at 7:32 AM, Xavier Stevens [EMAIL PROTECTED] wrote:
 I'm still seeing this problem on a cluster using Hadoop 0.18.2.  I tried
 dropping the max number of map tasks per node from 8 to 7.  I still get
 the error although it's less frequent.  But I don't get the error at all
 when using Hadoop 0.17.2.

 Anyone have any suggestions?


 -Xavier

 -Original Message-
 From: [EMAIL PROTECTED] On Behalf Of Edward J. Yoon
 Sent: Thursday, October 09, 2008 2:07 AM
 To: core-user@hadoop.apache.org
 Subject: Re: Cannot run program bash: java.io.IOException: error=12,
 Cannot allocate memory

 Thanks Alexander!!

 On Thu, Oct 9, 2008 at 4:49 PM, Alexander Aristov
 [EMAIL PROTECTED] wrote:
 I received such errors when I overloaded data nodes. You may increase
 swap space or run less tasks.

 Alexander

 2008/10/9 Edward J. Yoon [EMAIL PROTECTED]

 Hi,

 I received below message. Can anyone explain this?

 08/10/09 11:53:33 INFO mapred.JobClient: Task Id :
 task_200810081842_0004_m_00_0, Status : FAILED
 java.io.IOException: Cannot run program "bash": java.io.IOException:
 error=12, Cannot allocate memory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
        at org.apache.hadoop.util.Shell.run(Shell.java:134)
        at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:296)
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
        at org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:107)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:734)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:694)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:220)
        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)
 Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
        ... 10 more

 --
 Best regards, Edward J. Yoon
 [EMAIL PROTECTED]
 http://blog.udanax.org




 --
 Best Regards
 Alexander Aristov




 --
 Best regards, Edward J. Yoon
 [EMAIL PROTECTED]
 http://blog.udanax.org






-- 
Best Regards, Edward J. Yoon @ NHN, corp.
[EMAIL PROTECTED]
http://blog.udanax.org


Re: Cannot run program "bash": java.io.IOException: error=12, Cannot allocate memory

2008-10-09 Thread Alexander Aristov
I received such errors when I overloaded data nodes. You may increase swap
space or run fewer tasks.
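
The "run fewer tasks" knob is the per-node task maximum in the job/site
config, and the per-task heap is mapred.child.java.opts; a quick way to
check what a node is currently using (the conf path is only an example,
adjust for your install):

  grep -A 2 'mapred.tasktracker.map.tasks.maximum' $HADOOP_HOME/conf/hadoop-site.xml
  grep -A 2 'mapred.child.java.opts' $HADOOP_HOME/conf/hadoop-site.xml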

Alexander

2008/10/9 Edward J. Yoon [EMAIL PROTECTED]

 Hi,

 I received the message below. Can anyone explain this?

 08/10/09 11:53:33 INFO mapred.JobClient: Task Id :
 task_200810081842_0004_m_00_0, Status : FAILED
 java.io.IOException: Cannot run program "bash": java.io.IOException:
 error=12, Cannot allocate memory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
at org.apache.hadoop.util.Shell.run(Shell.java:134)
at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
at
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:296)
at
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
at
 org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:107)
at
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:734)
at
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:694)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:220)
at
 org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)
 Caused by: java.io.IOException: java.io.IOException: error=12, Cannot
 allocate memory
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
at java.lang.ProcessImpl.start(ProcessImpl.java:65)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
... 10 more

 --
 Best regards, Edward J. Yoon
 [EMAIL PROTECTED]
 http://blog.udanax.org




-- 
Best Regards
Alexander Aristov