Have you looked at the dump I linked to before?
https://dl.dropboxusercontent.com/u/122806/jvm8_gc2.zip
Of course it could be a memory leak in my code, but by far the largest
number of instances is held by lambda forms. It could be that they take
very little memory, so it might not be
Am 2014-03-06 14:23, schrieb Tal Liron:
Have you looked at the dump I linked to before?
https://dl.dropboxusercontent.com/u/122806/jvm8_gc2.zip
Yes, this is the dump I looked at.
Of course it could be a memory leak in my code, but by far the largest
number of instances is held by lambda
I've been away for a month. Has anyone with the know-how followed up on
this? The issue is still present.
On 01/18/2014 02:51 PM, Tal Liron wrote:
I have a new dump that will hopefully be more useful:
https://dl.dropboxusercontent.com/u/122806/jvm8_gc2.zip
From what I can tell, indeed lambda
Just run the Prudence example applications. There's the default
example that comes with the distribution, but it's not data-driven.
You can try the MongoVision application to test a MongoDB backend. Or
the Stickstick demo to test relational databases (comes with H2 built
in, but can be easily
Hi,
Haven't had a chance yet to look at the zip. But I plan to look at it
before EOD.
-Sundar
On Saturday 18 January 2014 12:21 PM, Tal Liron wrote:
I have a new dump that will hopefully be more useful:
https://dl.dropboxusercontent.com/u/122806/jvm8_gc2.zip
From what I can tell, indeed
On 2014-01-09 16:29, Kirk Pepperdine wrote:
Hi Marcus,
Looks like some of the details have been chopped off. Is there a GC log
available? If there is a problem with MethodHandle, a workaround might be
as simple as expanding perm.. but wait, this is metaspace now and it
should grow as long as
Hi,
The heap dump doesn't contain much info. When I tried to open it with the
'jhat' tool, I saw only basic JDK core classes (Class, ClassLoader, etc.)
and nothing else. jmap with the -F flag uses the HotSpot serviceability
agent to dump the heap, i.e., it is read from outside the process. Such a
dump is done
Thanks! I've restarted everything with more flags, so hopefully we'll
get more data next time.
In the meantime, I've also learned about this Ubuntu-specific issue with
ptrace that affects jmap use:
http://blog.thecodingmachine.com/fr/content/fixing-java-memory-leaks-ubuntu-1104-using-jmap
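The Ubuntu restriction referred to above is the Yama LSM's ptrace scoping, which blocks jmap -F from attaching to a non-child process. A sketch of the commonly cited workaround (requires root; the temporary change lasts until reboot):

```shell
# Check the current Yama ptrace restriction (1 = restricted, the Ubuntu default)
cat /proc/sys/kernel/yama/ptrace_scope

# Temporarily allow attaching to arbitrary same-user processes
# so the serviceability agent (jmap -F) can read the target JVM
sudo sysctl -w kernel.yama.ptrace_scope=0
```

Note this loosens a security restriction machine-wide, which is part of why Tal calls the workaround unacceptable in many deployment environments.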
This almost certainly stems from MethodHandle combinators being
implemented as lambda forms backed by anonymous Java classes. One of the
things being done for 8u20 is to drastically reduce the number of lambda
forms created. I don't know of any workaround at the
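To make the point above concrete, here is a minimal, self-contained sketch (the class name `CombinatorDemo` and the example method are mine, not from the thread) of the kind of MethodHandle combinator use whose adaptations the JVM backs with LambdaForm classes. Running it with `-verbose:class` on an affected JDK 8 build shows `java.lang.invoke.LambdaForm$...` anonymous classes being loaded:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class CombinatorDemo {
    // Adapts String.concat with a combinator. Each distinct adaptation
    // like this can cause the JVM to spin up a new LambdaForm, which is
    // materialized as an anonymous class - the population the heap dump
    // discussion in this thread is about.
    static String greet() throws Throwable {
        MethodHandle concat = MethodHandles.lookup().findVirtual(
                String.class, "concat",
                MethodType.methodType(String.class, String.class));
        // insertArguments is one such combinator: it binds the receiver,
        // producing a new handle of type (String)String.
        MethodHandle greet = MethodHandles.insertArguments(concat, 0, "hello, ");
        return (String) greet.invokeExact("world");
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(greet());
    }
}
```

A dynamic-language runtime like Nashorn builds many such adapted handles per call site, which is why the count grows so much faster than in hand-written Java.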
Regarding OOME, it's expected in this situation.
If you look at the end of the log, you'll see a set of consecutive Full
GCs. It means the Java heap is almost full and has reached its maximum
size. The application is almost halted - the VM collects the whole heap
over and over again (98% of application
Unfortunately, this workaround is unacceptable in many deployment
environments. I would thus consider this a showstopping bug for Nashorn,
and I hope it can be escalated.
(I understand that this is not the Nashorn project's fault, but the
bottom line is that Nashorn cannot be used in
Tal,
I've been throwing requests at the Prudence test app for the last 20
minutes or so. I do see that it uses a lot of metaspace, close to 50M in
my case. The test app seems to load/unload 2 classes per request with
Rhino compared to 4 classes per request with Nashorn, which is probably
due
Indeed, scripts are reused in this case, though I can't guarantee that
there isn't a bug somewhere on my end.
I'm wondering if it might be triggered by another issue: Prudence
supports an internal crontab-like feature (based on cron4j), and these
are again Nashorn scripts being run, once a
Hi Marcus,
Looks like some of the details have been chopped off. Is there a GC log
available? If there is a problem with MethodHandle, a workaround might be
as simple as expanding perm.. but wait, this is metaspace now and it
should grow as long as your system has memory to give to the
Heap dumps enable post-mortem analysis of OOMs.
Pass -XX:+HeapDumpOnOutOfMemoryError to the VM and it'll dump the heap
before exiting, or use jmap (-dump:live,format=b,file=name pid) or
VisualVM to take a snapshot of a running process.
There are a number of tools to browse the contents.
Best
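The capture options described above, spelled out as command lines (a sketch: the jar name, dump paths, and `<pid>` are placeholders, not from the thread):

```shell
# Dump the heap automatically when an OutOfMemoryError is thrown
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/app.hprof -jar app.jar

# Or snapshot a running JVM on demand; "live" keeps only objects
# reachable from GC roots, which shrinks the dump considerably
jmap -dump:live,format=b,file=heap.hprof <pid>
```

The resulting .hprof file can then be opened in jhat, VisualVM, or Eclipse MAT.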
It happened again, and here's the gc.log: http://pastebin.com/DFA7CYC1
Interestingly enough, the application kept working, though I was getting
intermittent 100% CPU use.
On 01/06/2014 01:57 PM, Benjamin Sieffert wrote:
Hi everyone,
we have been observing similar symptoms from 7u40 onwards
Hi everyone,
we have been observing similar symptoms from 7u40 onwards (using
nashorn-backport with j7 -- j8 has the same problems as 7u40 and 7u45...
7u25 is the last version that works fine) and suspect the cause to be the
JSR-292 changes that took place there. Iirc I already asked over on
Can you be more specific?
What kind of errors? What are your current GC flags, etc.? How have you
determined that you don't have a memory leak, or what the correct size of
your working set is? etc.
Thanks,
Ben
On Sat, Jan 4, 2014 at 6:58 AM, Tal Liron tal.li...@threecrickets.com wrote:
I've
If this is a serverside application, then presumably you have at least the
minimum GC logging flags on?
-Xloggc:pathtofile -XX:+PrintGCDetails -XX:+PrintTenuringDistribution
I would regard these as the absolute minimum information for tools to be
able to help you - no JVM server process (and
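Putting the suggested minimum flags on a server launch line might look like this (JDK 8 era syntax; the jar name and log path are placeholders, and -XX:+PrintGCDateStamps is a common companion flag I've added, not one the message lists):

```shell
java -Xloggc:/var/log/app/gc.log \
     -XX:+PrintGCDetails \
     -XX:+PrintTenuringDistribution \
     -XX:+PrintGCDateStamps \
     -jar app.jar
```

The resulting log is what makes symptoms like the consecutive Full GCs mentioned earlier in the thread visible.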
Thanks! I didn't know of these. I'm not sure how to read the log, but
this doesn't look so good. I get a lot of allocation failures that
look like this:
Java HotSpot(TM) 64-Bit Server VM (25.0-b63) for linux-amd64 JRE
(1.8.0-ea-b121), built on Dec 19 2013 17:29:18 by java_re with gcc
4.3.0
I've been getting GC errors for long-running Prudence/Nashorn processes.
Is this a known issue, perhaps JVM-related and not specific to Nashorn?