[ https://issues.apache.org/jira/browse/DERBY-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891051#comment-16891051 ]

Marco edited comment on DERBY-7049 at 7/23/19 2:10 PM:
-------------------------------------------------------

Thanks a lot for your quick response!

[~bryanpendleton]

I tried to create a test case, but have not succeeded yet. In production, the 
problem occurs only after _days_ (not hours) and with much more complex queries 
than my test case generates so far. I'll give the test case another try, but I 
can only do this on the weekend (I have urgent work for a customer until then).

[~rhillegas]

=> 1) I did not profile the memory usage, because I have not yet been able to 
reproduce the problem myself. It occurs on a friend's machine 10,000 km away 
from me. Since he is a doctor, he handles very sensitive patient data (which is 
*very* *large* – the very reason this whole process takes so long and runs into 
the OOM error after more than a day), and the data is encrypted (that's the 
point of the project [subshare|http://subshare.org]), so I cannot easily 
reproduce the problem with his production data.

=> 2) I use [DataNucleus|http://datanucleus.org], which is a persistence 
layer, so I do not write or manage the PreparedStatements myself. However, 
according to its documentation, PreparedStatement instances are created and 
forgotten by default, unless configured differently.

=> 3) As mentioned above, this is managed by DataNucleus.

=> 2), 3) I tried multiple things:

a) I introduced code to shut down the Derby database roughly once every 20 
minutes. This code also closes the DataNucleus PersistenceManagerFactory, so 
nothing should be left holding any resources (see the sketch after b) below).

b) My friend tried enabling PreparedStatement caching in DataNucleus, but it 
had no effect. I have not (yet) dug into this further, and I don't know how 
long the PreparedStatements are cached or in which scopes (e.g. whether it is 
only within a transaction).
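
The periodic shutdown from a) boils down to roughly the following (a 
simplified sketch, not my actual code; the method and parameter names are 
made up):

{code:java}
import java.sql.DriverManager;
import java.sql.SQLException;
import javax.jdo.PersistenceManagerFactory;

public class PeriodicShutdownSketch {
    static void shutdownDatabase(PersistenceManagerFactory pmf, String connectionURL) {
        pmf.close(); // let DataNucleus release everything it still holds

        try {
            // ";shutdown=true" asks embedded Derby to shut this database down.
            DriverManager.getConnection(connectionURL + ";shutdown=true");
        } catch (SQLException e) {
            // Derby reports a *successful* database shutdown as an
            // SQLException with SQLState 08006; anything else is a real error.
            if (!"08006".equals(e.getSQLState()))
                throw new RuntimeException(e);
        }
    }
}
{code}

Regarding b), I assume the caching my friend enabled corresponds to a 
connection-pool property such as {{datanucleus.connectionPool.maxStatements}}, 
but I have not verified that, nor in which scope such a cache lives.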

In general, most of the statements are static, i.e. annotated on the 
entity classes; only a few are dynamic. But to say more about when statements 
are created or taken from a cache (by DataNucleus), I'd have to do more 
research. AFAIK, DataNucleus translates all JDO queries into 
SQL PreparedStatement instances using ?-parameters, i.e. it does _not_ use 
statements with inline parameters.
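
For example, a typical static query looks roughly like this (the entity and 
the query are invented for illustration; my real entity classes are annotated 
the same way):

{code:java}
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Query;

// Invented example entity with a static (named) JDOQL query attached to it.
@PersistenceCapable
@Query(name = "getPersonsByName",
       value = "SELECT FROM Person WHERE this.name == :name")
public class Person {
    String name;
}
{code}

At runtime such a query is obtained via pm.newNamedQuery(Person.class, 
"getPersonsByName"), and as far as I can tell DataNucleus compiles it into SQL 
along the lines of SELECT ... FROM PERSON WHERE NAME = ?, binding the value as 
a parameter.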


> OutOfMemoryError: Compressed class space
> ----------------------------------------
>
>                 Key: DERBY-7049
>                 URL: https://issues.apache.org/jira/browse/DERBY-7049
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.13.1.1
>            Reporter: Marco
>            Priority: Major
>
> After a few days of working with an embedded Derby database (currently 
> version 10.13.1.1 on Oracle Java 1.8.0_201-b09), the following error occurs:
> *java.lang.OutOfMemoryError: Compressed class space*
> {code:java}
> java.lang.OutOfMemoryError: Compressed class space
>     at java.lang.ClassLoader.defineClass1(Native Method) ~[na:1.8.0_201]
>     at java.lang.ClassLoader.defineClass(ClassLoader.java:763) ~[na:1.8.0_201]
>     at java.lang.ClassLoader.defineClass(ClassLoader.java:642) ~[na:1.8.0_201]
>     at org.apache.derby.impl.services.reflect.ReflectLoaderJava2.loadGeneratedClass(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.services.reflect.ReflectClassesJava2.loadGeneratedClassFromData(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.services.reflect.DatabaseClasses.loadGeneratedClass(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.services.bytecode.GClass.getGeneratedClass(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.sql.compile.ExpressionClassBuilder.getGeneratedClass(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.sql.compile.StatementNode.generate(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.<init>(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source) ~[derby-10.13.1.1.jar:na]
>     at org.datanucleus.store.rdbms.datasource.dbcp.DelegatingConnection.prepareStatement(DelegatingConnection.java:259) ~[datanucleus-rdbms-4.0.12.jar:na]
> {code}
> I tried to solve the problem by periodically shutting down the database, 
> because I read that the generated classes, as well as all other allocated 
> resources, should be released when the DB is shut down.
> I therefore run the following code roughly once every 20 minutes:
> {code:java}
> String shutdownConnectionURL = connectionURL + ";shutdown=true";
> try {
>     DriverManager.getConnection(shutdownConnectionURL);
> } catch (SQLException e) {
>     int errorCode = e.getErrorCode();
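>     // Note: these constants are defined elsewhere in my code. Presumably
>     // Derby's error code 45000 (SQLState 08006) means the database was shut
>     // down successfully, and a session-severity code of 40000 means the
>     // database was not running in the first place.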
>     if (DERBY_ERROR_CODE_SHUTDOWN_DATABASE_SUCCESSFULLY != errorCode &&
>             DERBY_ERROR_CODE_SHUTDOWN_DATABASE_WAS_NOT_RUNNING != errorCode) {
>         throw new RuntimeException(e);
>     }
> }
> {code}
> Unfortunately, this has no effect :( The OutOfMemoryError still occurs after 
> about 2 days. Do I assume correctly that the above code _should_ properly 
> shut down the database? And do I assume correctly that this shutdown should 
> release the generated classes?
> IMHO, it is already a bug in Derby that I need to shut down the database at 
> all to keep it from piling up generated classes. Shouldn't it release the 
> generated classes at the end of each transaction? But even if I really do 
> have to shut down the DB, it is certainly a bug that the classes are still 
> kept in the compressed class space even after the shutdown.
> I searched the release notes and the existing bugs (here in JIRA) and did 
> not find anything related to this {{OutOfMemoryError}}; hence I am opening 
> this bug report now.
> This issue was originally reported in 
> [subshare#74|https://github.com/subshare/subshare/issues/74], but it is IMHO 
> clearly a Derby bug.


