[
https://issues.apache.org/jira/browse/JCR-2832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Omid Milani updated JCR-2832:
-----------------------------
Attachment: jackrabbit-cluster-outofmem.patch
This is how I solved it, but it's not very clean and may cause side
effects. Enabling batchMode on the ConnectionHelper is required to talk to
PSQL in transactional mode, which can cause problems if
DatabaseJournal.conHelper is used for writing to the DB in the meantime.
The patch also sets the fetchSize explicitly in
ConnectionHelper.reallyExec, which is called for all database
interactions; I couldn't find a better way to set it only for the
connection used by doSync. It seems harmless, but I'm not sure, and the
fetch size I used (10000) may not be the best choice.
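For illustration, a minimal sketch of the idea (simplified stand-in for
ConnectionHelper.reallyExec; names like FETCH_SIZE are illustrative, this
is not the patch itself):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public final class JournalQuerySketch {

        // Illustrative constant; the patch uses 10000, which may not be optimal.
        private static final int FETCH_SIZE = 10000;

        public static ResultSet execute(Connection con, String sql, Object... params)
                throws SQLException {
            // The PSQL driver only honors fetchSize when autoCommit is off
            // (transactional mode); in the patch this corresponds to enabling
            // batchMode on the ConnectionHelper.
            con.setAutoCommit(false);
            PreparedStatement stmt = con.prepareStatement(sql);
            // Stream results in batches instead of buffering the whole journal.
            stmt.setFetchSize(FETCH_SIZE);
            for (int i = 0; i < params.length; i++) {
                stmt.setObject(i + 1, params[i]);
            }
            // Caller is responsible for closing the result set and statement.
            return stmt.executeQuery();
        }
    }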
> Crash when adding node to cluster with big journal on PSQL DB
> -------------------------------------------------------------
>
> Key: JCR-2832
> URL: https://issues.apache.org/jira/browse/JCR-2832
> Project: Jackrabbit Content Repository
> Issue Type: Bug
> Components: clustering, jackrabbit-core
> Affects Versions: 2.1.2
> Environment: Clustering with database journal using PSQL
> Reporter: Omid Milani
> Fix For: 2.1.3
>
> Attachments: jackrabbit-cluster-outofmem.patch
>
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> When adding a new node to a cluster with a big journal on a PSQL
> database, the application on the new node runs out of memory and crashes
> (no exception; the process exits with code 137).
> This happens because, with PSQL, when no fetchSize is specified, all the
> results of a query are loaded into memory before being passed to the
> application. Furthermore, specifying a fetchSize only works in
> transactional mode and has no effect if autoCommit is true. (Both are
> configured in ConnectionHelper.)
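> For reference, the driver behavior can be reproduced outside Jackrabbit
> with a small standalone JDBC check (connection details and the table
> name are placeholders):
>
>     import java.sql.Connection;
>     import java.sql.DriverManager;
>     import java.sql.ResultSet;
>     import java.sql.Statement;
>
>     public class FetchSizeDemo {
>         public static void main(String[] args) throws Exception {
>             // Placeholder URL/credentials; point this at any large PSQL table.
>             Connection con = DriverManager.getConnection(
>                     "jdbc:postgresql://localhost/repo", "user", "secret");
>             // With autoCommit=true the PSQL driver ignores fetchSize and
>             // materializes the entire result set in memory before returning.
>             con.setAutoCommit(false);
>             Statement stmt = con.createStatement();
>             stmt.setFetchSize(1000); // any value > 0 enables cursor-based fetching
>             ResultSet rs = stmt.executeQuery("SELECT * FROM journal");
>             long rows = 0;
>             while (rs.next()) {
>                 rows++; // rows now arrive in batches of fetchSize
>             }
>             System.out.println("read " + rows + " rows");
>             con.commit();
>             con.close();
>         }
>     }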
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.