[ https://issues.apache.org/jira/browse/CASSANDRA-6369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13825381#comment-13825381 ]

Lyuben Todorov commented on CASSANDRA-6369:
-------------------------------------------

LGTM.

> Fix prepared statement size computation
> ---------------------------------------
>
>                 Key: CASSANDRA-6369
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6369
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Sylvain Lebresne
>            Assignee: Sylvain Lebresne
>             Fix For: 1.2.12, 2.0.3
>
>         Attachments: 6369.txt
>
>
> When computing the size of a CQLStatement to limit the prepared statement 
> cache (CASSANDRA-6107), we overestimate the actual memory used because the 
> statement includes a reference to the table's CFMetaData, which measureDeep 
> counts. And as it happens, that reference is big: in a simple test preparing 
> a very trivial select statement, I was able to prepare only 87 statements 
> before some started to be evicted, because each statement was more than 93K 
> in size and more than 92K of that was the CFMetaData object. As it happens, 
> there is no reason to account for the CFMetaData object at all, since it is 
> in memory whether or not any statements are prepared.
> Attaching a simple (if not extremely elegant) patch to remove what we don't 
> care about from the computation. Another solution would be to use the 
> MemoryMeter.withTrackerProvider option as we do in Memtable, but in the 
> QueryProcessor case we currently use only one MemoryMeter, not one per CF, 
> so it didn't feel necessarily cleaner. We could create a one-shot MemoryMeter 
> object each time we need to measure a CQLStatement, but that doesn't feel 
> much simpler or cleaner either. That said, if someone feels strongly about 
> some other solution, I don't mind.
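
For illustration only, a minimal sketch of the idea of excluding the shared 
CFMetaData from the measurement. This is a hypothetical helper, not the 
attached 6369.txt patch; it assumes jamm's MemoryMeter.measureDeep and that 
the statement's CFMetaData is available to the caller:

    import org.github.jamm.MemoryMeter;

    public final class StatementSizeEstimator
    {
        // Requires the jamm java agent (-javaagent:jamm.jar) at runtime.
        private static final MemoryMeter meter = new MemoryMeter();

        // Hypothetical helper: estimate the retained size of a prepared
        // statement without charging it for the table's CFMetaData, which
        // stays in memory whether or not any statement is cached.
        public static long measuredSize(CQLStatement statement, CFMetaData metadata)
        {
            long withMetadata = meter.measureDeep(statement);
            // Approximate correction: subtract the deep size of the shared
            // metadata graph that measureDeep reached through the statement.
            return withMetadata - meter.measureDeep(metadata);
        }
    }

Note the subtraction is only an approximation when the statement and the 
metadata graphs happen to share other objects; the point is simply that the 
cache accounting should not be dominated by the shared CFMetaData.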



