[ https://issues.apache.org/jira/browse/CAY-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrus Adamchik updated CAY-2642:
---------------------------------
    Description: 
h2. Background

Just ran across a scenario where a user can inadvertently introduce a memory 
leak in their app. This happens when an app uses query caching with 
JCacheQueryCache and an EhCache provider in the backend, and the cache key space 
is large or growing. The latter criterion is met when the local cache is in use 
and new DataContexts are created for new jobs/requests (each DataContext 
introduces its own key subspace). In this case cache entries (including their 
DataContexts) are retained in memory indefinitely, eventually causing the app 
to crash with an OutOfMemoryError.
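
A minimal sketch of the usage pattern that triggers this (the "Artist" entity, 
the service wrapper, and the "artists" cache group name are all made up for the 
example):

{code:java}
import java.util.List;

import org.apache.cayenne.ObjectContext;
import org.apache.cayenne.configuration.server.ServerRuntime;
import org.apache.cayenne.query.ObjectSelect;

public class ArtistService {

    private final ServerRuntime runtime;

    public ArtistService(ServerRuntime runtime) {
        this.runtime = runtime;
    }

    // Called once per job/request. Each call creates a new DataContext, so each
    // locally cached result lands in a fresh per-context key subspace that no
    // later request ever reuses or evicts.
    public List<Artist> loadArtists() {
        ObjectContext context = runtime.newContext();
        return ObjectSelect.query(Artist.class)
                .localCache("artists") // cache group with no explicit "ehcache.xml" entry
                .select(context);
    }
}
{code}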

h2. Cause

If there is a query with a cache group that does not have a cache configured 
explicitly in the backend (in "ehcache.xml"), JCacheQueryCache creates a new 
cache on the fly using JCacheDefaultConfigurationFactory. While 
JCacheDefaultConfigurationFactory has a default expiration of 10 minutes, it 
doesn't impose an upper limit on the number of entries (there's no API in JCache 
to set one), so such a cache is essentially unbounded.
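
In plain JCache API terms, the on-the-fly configuration amounts to roughly the 
following (a sketch, not the literal factory code; the String/List key and value 
types are an assumption):

{code:java}
import java.util.List;
import java.util.concurrent.TimeUnit;

import javax.cache.configuration.MutableConfiguration;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

public class DefaultCacheConfigSketch {

    // Entries expire 10 minutes after creation, but nothing caps the entry
    // count, so with an ever-growing key space the cache grows faster than
    // expiration can shrink it.
    static MutableConfiguration<String, List> create() {
        return new MutableConfiguration<String, List>()
                .setTypes(String.class, List.class)
                .setExpiryPolicyFactory(
                        CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 10)));
    }
}
{code}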

Since cache groups are assigned per query, and their number tends to grow as the 
app evolves, it is very easy to overlook the need for a matching <cache> 
configuration entry. So previously stable apps can quietly pick up such time 
bombs over time.
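
For reference, a matching entry in EhCache 3's XML format might look like this 
(a hypothetical example for an "artists" cache group; the heap bound is exactly 
the part the auto-created default cache lacks):

{code:xml}
<cache alias="artists">
    <expiry>
        <ttl unit="minutes">10</ttl>
    </expiry>
    <heap unit="entries">1000</heap>
</cache>
{code}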

h2. Solution

I wish we were able to create the default cache with fixed size bounds, but I 
don't see a way to do that through the JCache API. So a minimal possible 
solution would be to print a big warning in the logs whenever 
"JCacheQueryCache.createCache" is called.

In future versions we might replace the warning with an exception (?), or make 
this behavior - warn vs. exception - configurable via a property.

> EhCache memory leak due to misconfiguration
> -------------------------------------------
>
>                 Key: CAY-2642
>                 URL: https://issues.apache.org/jira/browse/CAY-2642
>             Project: Cayenne
>          Issue Type: Task
>    Affects Versions: 4.1.RC2
>            Reporter: Andrus Adamchik
>            Priority: Major
>             Fix For: 4.1.RC3, 4.2.M1
>


