On 05/28/14 06:26 PM, Sergiu Dumitriu wrote:
The standalone .zip package is not designed to hold many pages. It uses
an in-memory database that requires as much heap space as the amount of
data that you have (plus all the other memory that XWiki normally
requires). I thought there was a bigger warning on the download page
that clarified that the standalone package is only supposed to be used
for small tests...
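(As a rough back-of-the-envelope illustration, assuming an average of ~10 KB of stored data per page -- an assumption, not a measured figure -- 100,000 pages would already need on the order of 1 GB of heap just for the in-memory database, on top of XWiki's own caches and working memory.)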

Unfortunately I haven't been able to judge what is meant for small tests and what is meant for bigger deployments based on the description on the download page here: http://enterprise.xwiki.org/xwiki/bin/view/Main/Download -- it just divides the various installers by how experienced the user is with XWiki and says nothing about scalability at all.

The pgsql package should behave better, though, since it separates the
database from the live objects, except that you need to make sure
Tomcat's default memory is increased.

Indeed, I've installed tomcat/pgsql/xwiki 6.0.1 and used exactly the CATALINA_OPTS from xwiki.org. With this I've been able to create and fetch the 100k empty pages I'm testing here, but I still get OOM errors. I've tested twice, and both times it hit the RMI connection to JConsole, so JConsole disconnects with this message thrown to a separate window:

May 29, 2014 5:02:20 AM ClientCommunicatorAdmin Checker-run
WARNING: Failed to check the connection: java.net.SocketTimeoutException: Read timed out
May 29, 2014 5:03:17 AM ClientNotifForwarder NotifFetcher-run
SEVERE: Failed to fetch notification, stopping thread. Error is: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
        java.io.EOFException
java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
        java.io.EOFException
        at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:191)
        at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
        at javax.management.remote.rmi.RMIConnectionImpl_Stub.fetchNotifications(Unknown Source)
        at javax.management.remote.rmi.RMIConnector$RMINotifClient.fetchNotifs(RMIConnector.java:1337)
        at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.fetchNotifs(ClientNotifForwarder.java:587)
        at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:470)
        at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:451)
        at com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:107)
Caused by: java.io.EOFException
        at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2571)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1315)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
        at sun.rmi.server.UnicastRef.unmarshalValue(UnicastRef.java:324)
        at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:173)
        ... 7 more

May 29, 2014 6:04:09 PM ClientCommunicatorAdmin Checker-run
WARNING: Failed to check the connection: java.net.SocketTimeoutException: Read timed out



and on the Tomcat console I see messages like:
Exception in thread "RMI TCP Connection(idle)" java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)" java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)" java.lang.OutOfMemoryError: Java heap space
(the same message repeated several more times)


My test still runs, but honestly speaking the state of the server itself doesn't inspire much trust. I even uncommented the *cache* lines in xwiki.cfg and left the default values there, but that doesn't help either.
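
For reference, these are the lines I mean; the capacity values below are just illustrative numbers to experiment with, not necessarily the shipped defaults (and the *.capacity property names are from memory, so please correct me if they are wrong):

  xwiki.store.cache=1
  xwiki.store.cache.capacity=100
  xwiki.render.cache=1
  xwiki.render.cache.capacity=100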

As I said, this is when running with: CATALINA_OPTS="-server -Xms800m -Xmx800m -XX:MaxPermSize=196m -Dfile.encoding=utf-8 -Djava.awt.headless=true -XX:+UseParallelGC -XX:MaxGCPauseMillis=100"
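
In case it matters where this is applied: with a plain Tomcat install the usual place is a small bin/setenv.sh, and with the Debian tomcat7 package the options normally go into JAVA_OPTS in /etc/default/tomcat7 (I'm not 100% sure that is the only place the package reads). For example:

  # $CATALINA_HOME/bin/setenv.sh (plain Tomcat install)
  CATALINA_OPTS="-server -Xms800m -Xmx800m -XX:MaxPermSize=196m \
    -Dfile.encoding=utf-8 -Djava.awt.headless=true \
    -XX:+UseParallelGC -XX:MaxGCPauseMillis=100"
  export CATALINA_OPTS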


Thanks!
Karel


On 05/28/2014 12:02 PM, Karel Gardas wrote:

Thomas,

thanks for your fast response. My comments are below.

On 05/28/14 05:27 PM, Thomas Mortagne wrote:
You are mixing several things here. Hitting out of memory errors does not necessarily mean that you have a memory leak; it can simply mean, for example, that the document cache is too big for the memory you allocated. You can modify the document cache size in xwiki.cfg.

I see two caches in that file:

xwiki.store.cache
xwiki.render.cache

both seem to be commented out.

How much memory did you allocate? XWiki is not a small beast and it requires a certain minimum to work. See
http://platform.xwiki.org/xwiki/bin/view/AdminGuide/Performances#HMemory.

To prevent any misunderstanding: I've not set any memory limit myself. I simply use xwiki-enterprise-jetty-hsqldb-6.0.1.zip as distributed on xwiki.org. This distribution has a 512MB RAM cap which, according to your link above, should be good for medium installs.

The question is: if the cache values above are commented out in xwiki.cfg, what are the actual default values used in the xwiki-enterprise-jetty-hsqldb-6.0.1.zip distro? Just so I know from which value I should go lower...

Thanks!
Karel


On Wed, May 28, 2014 at 4:58 PM, Karel
Gardas<[email protected]>   wrote:

Folks,

I'm testing the scalability of XWiki with a simple benchmark which creates N pages in a loop (one page at a time, it's not a parallel run!) and then, when this loop finishes, fetches all the pages back from the server in another loop (again serially, one page at a time). For page creation we use the REST API; for getting the pages we use the regular browsable URL (/xwiki/bin/view/...).
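
To make the setup concrete, the create/read loops look roughly like the sketch below. This is a simplified illustration, not the exact benchmark code; the host, credentials, space name and page naming are made up for the example:

#!/usr/bin/env python3
# Simplified sketch of the benchmark: create N pages via the REST API,
# then fetch each of them back through the normal view URL.
import base64
import urllib.request

BASE = "http://localhost:8080/xwiki"               # assumed host/port
AUTH = base64.b64encode(b"Admin:admin").decode()   # assumed credentials
N = 100000

def create_page(i):
    # PUT an XML <page> representation to the XWiki REST API.
    url = "%s/rest/wikis/xwiki/spaces/Bench/pages/Page%06d" % (BASE, i)
    xml = ('<page xmlns="http://www.xwiki.org">'
           '<title>Page %d</title><content></content></page>') % i
    req = urllib.request.Request(url, data=xml.encode("utf-8"), method="PUT")
    req.add_header("Content-Type", "application/xml")
    req.add_header("Authorization", "Basic " + AUTH)
    urllib.request.urlopen(req).close()

def read_page(i):
    # Fetch the page through the regular view URL, as a browser would.
    url = "%s/bin/view/Bench/Page%06d" % (BASE, i)
    urllib.request.urlopen(url).read()

for i in range(N):    # serial creation, one page at a time
    create_page(i)
for i in range(N):    # serial read-back, one page at a time
    read_page(i)
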
Now the problem is that if I attempt the creation of 100k pages, I hit Java out of memory errors and the server is unresponsive from that point on.
I've tested this on:

- xwiki-jetty-hsql-6.0.0
- xwiki-jetty-hsql-6.0.1
- xwiki-tomcat7-pgsql -- the Debian XWiki packages running on top of Debian 7.5

Of course I know how to increase Java's heap space. The problem is that this will not help here: if I do so and then create 100 million pages in one run, I will still hit the same issue, it will just take a lot longer.

I've googled a bit for Java memory leak issues and found an interesting recommendation to use the parallel GC, so I've changed start_xwiki.sh to include -XX:+UseParallelGC in XWIKI_OPTS.
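
Concretely, the relevant line in start_xwiki.sh now looks roughly like this (assuming the distro's default 512MB heap cap; the exact wording of the line in the script may differ):

  XWIKI_OPTS="-Xmx512m -XX:+UseParallelGC"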

Anyway, the situation still looks suspicious. I've connected JConsole to the XWiki Java process and the overall view looks like this:

https://app.box.com/s/udndu96pl2fvuz3igvor

This is an overview of the whole run, but it is perhaps even clearer in the last-two-hours view, which is here:

https://app.box.com/s/deuix33fzejra4uur941

Side note: this is all from debugging the xwiki-jetty-hsql-6.0.1 distro.

Now, what worries me a lot is the bottom cap (the heap-usage floor that remains after GC), which keeps growing. You can see it clearly in Heap Memory Usage from 15:15 onwards. In CPU Usage you can also see that around the same time the CPU consumption went up from ~15% to ~45%.

When I switch to the Memory tab in JConsole and click the "Perform GC" button several times, the bottom cap is still there and I cannot get the memory usage any lower. While this is going on I can also see the server failing after some time with an OOM error.

Any help with this is highly appreciated here.

Thanks!
Karel



_______________________________________________
devs mailing list
[email protected]
http://lists.xwiki.org/mailman/listinfo/devs
