[ https://issues.apache.org/jira/browse/CASSANDRA-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252452#comment-13252452 ]

Vijay commented on CASSANDRA-4141:
----------------------------------

initialCapacity here is the number of elements in the hashmap, not its size 
in bytes. Size should be controlled by maximumWeightedCapacity (yes, the 
names are confusing, though).
We were running OOM because we were trying to allocate a table for 
1500 * 1024 * 1024 (about 1.5 billion) elements.
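
Concretely, a minimal repro of that failure mode (the value type, the 
concurrency level of 64, and the class name are illustrative assumptions, 
not taken from the patch):

{code}
import java.util.concurrent.ConcurrentHashMap;

public class InitialCapacityOOM
{
    public static void main(String[] args)
    {
        // 1500 MB misread as an element count: ~1.57 billion slots
        int capacity = 1500 * 1024 * 1024;

        // Sizing the backing table for that many entries needs several GB of
        // heap just for the slot array, so this dies with
        // java.lang.OutOfMemoryError on any normally sized heap.
        ConcurrentHashMap<String, String> map =
            new ConcurrentHashMap<String, String>(capacity, 0.75f, 64);
        map.put("key", "value"); // first insert materializes the table if the constructor didn't
    }
}
{code}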

The initial capacity is used to create the backing ConcurrentHashMap:
{code}
new ConcurrentHashMap<K, Node>(builder.initialCapacity, 0.75f, concurrencyLevel);
{code}
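
For contrast, a minimal sketch of the intended split using the 
ConcurrentLinkedHashMap builder (illustrative, not the attached patch; the 
byte-array value type, the weigher, and the constant 64 are assumptions): 
initialCapacity stays a small element-count hint, while the 1500 MB budget 
goes to maximumWeightedCapacity.

{code}
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.Weigher;

public class WeightedCacheSketch
{
    public static void main(String[] args)
    {
        long capacityInBytes = 1500L * 1024 * 1024; // the 1500 MB budget from the report

        ConcurrentLinkedHashMap<String, byte[]> cache =
            new ConcurrentLinkedHashMap.Builder<String, byte[]>()
                // element-count hint for the backing ConcurrentHashMap; keep it small
                .initialCapacity(64)
                // the real bound: total weight across all entries, here in bytes
                .maximumWeightedCapacity(capacityInBytes)
                .concurrencyLevel(64)
                // weigh each entry by its serialized size so the bound is bytes
                .weigher(new Weigher<byte[]>()
                {
                    public int weightOf(byte[] value)
                    {
                        return value.length;
                    }
                })
                .build();

        cache.put("row-key", new byte[1024]); // consumes 1024 of the weighted capacity
    }
}
{code}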
                
> Looks like Serializing cache broken in 1.1
> ------------------------------------------
>
>                 Key: CASSANDRA-4141
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4141
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.1.0
>            Reporter: Vijay
>            Assignee: Vijay
>             Fix For: 1.1.0
>
>         Attachments: 0001-CASSANDRA-4141.patch
>
>
> I get the following error while setting the row cache to 1500 MB:
> INFO 23:27:25,416 Initializing row cache with capacity of 1500 MBs and 
> provider org.apache.cassandra.cache.SerializingCacheProvider
> java.lang.OutOfMemoryError: Java heap space
> Dumping heap to java_pid26402.hprof ...
> haven't spent a lot of time looking into the issue, but it looks like the SC 
> constructor has 
> .initialCapacity(capacity)
> .maximumWeightedCapacity(capacity)
> where capacity is 1500 MB
