On 02/04/11 21:23, Bill Shannon wrote:
Edward Martinez wrote on 02/04/2011 07:22 PM:
On 02/04/11 16:19, Bill Shannon wrote:
Edward Martinez wrote on 02/ 4/11 04:11 PM:
On 02/04/11 14:07, Bill Shannon wrote:
Edward Martinez wrote on 02/ 4/11 01:52 PM:
On 02/04/11 13:06, Bill Shannon wrote:
Bart Smaalders wrote on 02/ 4/11 11:49 AM:
On 02/04/11 11:33, Bill Shannon wrote:
This isn't really specific to OpenSolaris since it also happens on
Solaris 10, but maybe someone here can give me some ideas?

I have a java program that is failing because it does something that calls
fork1(), which fails with ENOMEM (confirmed using truss):

27010/2: fork1() Err#12 ENOMEM

I've used both the 32-bit and 64-bit versions of the JVM. Using the 64-bit
version, I see the heap expand to just over 4GB before it forks:

27010/30: brk(0x107DE3D90) = 0

I have almost 160 GB of swap space:

$ swap -s
total: 806336k bytes allocated + 167872k reserved = 974208k used,
159746184k available

It doesn't seem like it can possibly be running out of swap space.

What other reasons would cause fork1() to fail with ENOMEM?
_______________________________________________

The vm system is rather fond of using ENOMEM as a generic error bucket. If you have a way of reproducing the problem, a bit of DTrace will quickly turn up the reason. I assume it's only the 32-bit version that is failing to fork, right?

Sorry, I wasn't clear. The 64-bit version fails as well. With the 32-bit version I limit the Java heap to 2GB. It doesn't run out of heap, but the fork fails.

I'm sure dtrace will solve every problem I have, but I don't know how
to use it to solve this problem! :-( Hints?
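One way to start is a short D script that reports fork-family syscalls failing with ENOMEM. This is only a sketch: the probe names are release-dependent (on some Solaris releases the underlying syscall is forksys, on others fork1), so adjust to whatever `dtrace -ln 'syscall::fork*:return'` lists on your system, and run it with root privileges while reproducing the failure:

```d
/* Sketch: catch fork-family syscalls returning with errno ENOMEM (12).
 * Probe names vary by release; check dtrace -ln 'syscall::fork*:return'. */
syscall::forksys:return
/errno == 12/
{
    printf("%s pid %d: fork failed with ENOMEM\n", execname, pid);
    stack();    /* kernel stack at the point of return */
}
```

Running this system-wide (e.g. `pfexec dtrace -s forkfail.d`) while the Java program reproduces the failure should show which kernel path is handing back the error.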
_______________________________________________
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org


Hi,

Please excuse me for getting into the discussion.
I think in Solaris 10, /tmp is placed in swap by default; if /tmp is full, it may give that error.
I found an article on that theory.

http://nilesh-joshi.blogspot.com/2010/03/tmp-file-system-full-swap-space-limit.html



/tmp isn't full:

$ df -h /tmp
Filesystem size used avail capacity Mounted on
swap 152G 384K 152G 1% /tmp

It's hard to tell if it runs out of space during the execution of the process, but I doubt it. I don't get any console messages saying the filesystem is full.


Hi,

Interesting... it may be running out of "heap" space. I think limits can be checked with ulimit -a.

$ ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 256
vmemory(kbytes) unlimited


Hi,

I think the stack limit may be low. On my Solaris 10, the defaults are:
# ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 10240
coredump(blocks) unlimited
nofiles(descriptors) 256
memory(kbytes) unlimited

Why would that cause fork to fail?




Hi,

I just had a theory: the address space is structured as blocks of data called /segments/, and each contains a certain type of data, e.g. the heap. Since it appears your program is running out of heap space, raising the data or stack limits may help out, but again, it was only a theory. :-)


--
Regards,
Edward
