Two more questions:
1. What is the optimal heap size for a production server? Apparently
512m is still not enough for 30,000 files in our case (see the
heap-check sketch after this list).
2. Does Hippo have a self-recovery process to detect and recover from
problems like this?
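
In case it helps, here is a minimal sketch of the check one can drop
into the webapp (or run standalone with the same JVM options) to
confirm the heap the JVM actually got, since container startup scripts
sometimes override -Xmx. It only uses the standard java.lang.management
API on Java 5, nothing Hippo-specific:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Prints the heap the running JVM actually received, to verify that
// the configured -Xmx value took effect.
public class HeapCheck {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        long mb = 1024L * 1024L;
        System.out.println("Heap used:      " + heap.getUsed() / mb + " MB");
        System.out.println("Heap committed: " + heap.getCommitted() / mb + " MB");
        System.out.println("Heap max:       " + heap.getMax() / mb + " MB");
    }
}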

Thanks
Jun
 
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ni, Jun
(FindLaw)
Sent: Tuesday, November 25, 2008 11:36 AM
To: [email protected]
Subject: [HippoCMS-dev] Hippo Repository crashes frequently under load

Hi guys,

We are running hippo-repository 1.2.15.1 on RedHat with JRE 1.5.0.6,
using the Oracle backend with a max heap size of 512m.

 

We have recently been trying to get 30,000+ small XMLs in and out of the
repository via WebDAV. The Hippo repository is slow and frequently dies
with out-of-memory errors. The upload speed is around 5-7 files per
second, while CPU usage on both the repository and the DB server is over
25%. When we try to list directory contents, the repository often runs
out of memory (the largest directories contain around 800-900 files).
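
For context, the load we generate is roughly the sketch below: one PUT
per file with a short pause in between, done here with plain
java.net.HttpURLConnection. The repository URL, folder names and the
pause are placeholders, not our real setup; it is only meant to
illustrate the traffic pattern:

import java.io.*;
import java.net.*;

// Minimal WebDAV-style upload loop: PUT each XML file to the
// repository, pausing briefly between requests.
public class WebdavUpload {
    public static void main(String[] args) throws Exception {
        // Placeholder location; the real repository URL and folder differ.
        String base = "http://repository.example.com/webdav/docs/";
        File dir = new File(args.length > 0 ? args[0] : "xml-input");
        File[] files = dir.listFiles();
        if (files == null) return;

        for (File f : files) {
            // Assumes simple file names that need no URL escaping.
            URL url = new URL(base + f.getName());
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml");

            OutputStream out = conn.getOutputStream();
            InputStream in = new FileInputStream(f);
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            in.close();
            out.close();

            System.out.println(f.getName() + " -> HTTP " + conn.getResponseCode());
            conn.disconnect();

            Thread.sleep(150); // roughly the 5-7 files/sec rate we observe
        }
    }
}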

 

I dug around in the logs, and the closest entry I found from our recent
fetch-all-documents action is this:

 

2008-11-25 10:42:35.144 ERROR fortress.cron.scheduler
Cron job name 'default-index-update' died.

java.lang.OutOfMemoryError: Java heap space
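
To get a warning in the logs before the hard OutOfMemoryError hits, a
usage-threshold listener on the heap pools can help. This is only a
sketch using the standard Java 5 JMX memory beans, not a Hippo
facility, and the 85% threshold is an arbitrary choice:

import java.lang.management.*;
import javax.management.*;

// Warns when any heap pool's usage crosses a threshold, giving an
// early signal before an OutOfMemoryError occurs.
public class HeapWatcher {
    public static void install(final double fraction) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * fraction));
                }
            }
        }
        // The platform MemoryMXBean is also a NotificationEmitter.
        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener(new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType())) {
                    System.err.println("WARN heap usage crossed "
                            + (int) (fraction * 100) + "% of max: " + n.getMessage());
                }
            }
        }, null, null);
    }

    public static void main(String[] args) {
        // In a webapp this would be called once at startup instead.
        install(0.85);
    }
}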

 

I know that I can speed up uploads by decreasing the index update
interval. But this time we are only doing reads, so what does the index
update have to do with reads?

 

Any suggestions are appreciated!

Thanks

Jun 

********************************************
Hippocms-dev: Hippo CMS development public mailinglist

Searchable archives can be found at:
MarkMail: http://hippocms-dev.markmail.org
Nabble: http://www.nabble.com/Hippo-CMS-f26633.html

