[ https://issues.apache.org/jira/browse/CASSANDRA-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13066392#comment-13066392 ]

Daniel Doubleday commented on CASSANDRA-2868:
---------------------------------------------

Well, either it's environment-specific or (more likely) others didn't notice or 
care because they have enough memory and/or restart their nodes often enough.

We have 16GB of RAM and run Cassandra with a 3GB heap. Within one month we lose 
~3GB (13GB -> 10GB) of file system cache because of the memory leak. Looking at 
our graphs I can't really tell a difference performance-wise, so I guess only 
people with weaker servers (less memory headroom) will really notice. We only 
noticed because the OS OOM-killed a node on a cluster that isn't critical and 
which we didn't really monitor.
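
For reference, a minimal sketch (assuming Linux and /proc/meminfo; hypothetical, 
not something we actually run) that samples the page cache size once a minute, 
which is enough to graph the gradual ~3GB drop described above:

    // Prints the kernel page cache size (the "Cached:" line of /proc/meminfo)
    // once a minute so the loss of file system cache can be tracked over time.
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class WatchPageCache {
        public static void main(String[] args) throws Exception {
            while (true) {
                for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                    if (line.startsWith("Cached:")) {
                        System.out.println(System.currentTimeMillis() + " " + line);
                    }
                }
                Thread.sleep(60_000);  // sample once per minute
            }
        }
    }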

> Native Memory Leak
> ------------------
>
>                 Key: CASSANDRA-2868
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2868
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.7.6
>            Reporter: Daniel Doubleday
>            Priority: Minor
>         Attachments: 2868-v1.txt, low-load-36-hours-initial-results.png
>
>
> We have memory issues with long-running servers. These have been confirmed by 
> several users on the user list. That's why I'm reporting it.
> The memory consumption of the Cassandra Java process increases steadily until 
> it's killed by the OS because of OOM (with no swap).
> Our server is started with -Xmx3000M and has been running for around 23 days.
> pmap -x shows (values in kB):
> Total SST: 1961616 (memory-mapped data and index files)
> Anon  RSS: 6499640
> Total RSS: 8478376
> This shows that more than 3GB is 'overallocated'.
> We will use BRAF on one of our less important nodes to check whether it is 
> related to mmap and report back.
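
For anyone who wants to reproduce a breakdown like the pmap -x numbers quoted 
above, here's a minimal sketch. It assumes the Linux procps pmap -x column 
layout (RSS in the third column, values in kB) and matches the SSTable mmaps by 
their -Data.db / -Index.db suffixes; both are assumptions on my side, not 
something from the ticket.

    // Buckets per-mapping RSS from `pmap -x <pid>` into anonymous memory
    // (Java heap plus native allocations) and memory-mapped SSTable files.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class PmapRssBreakdown {
        public static void main(String[] args) throws Exception {
            String pid = args[0];  // Cassandra process id
            Process p = new ProcessBuilder("pmap", "-x", pid).start();
            long anonRss = 0, sstRss = 0, totalRss = 0;
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    line = line.trim();
                    String[] f = line.split("\\s+");
                    // Only per-mapping lines start with a hex address; skip header and total lines.
                    if (f.length < 3 || !f[0].matches("[0-9a-f]+")) continue;
                    long rss;
                    try { rss = Long.parseLong(f[2]); } catch (NumberFormatException e) { continue; }
                    totalRss += rss;
                    if (line.contains("anon"))
                        anonRss += rss;            // anonymous mappings: heap + native allocations
                    else if (line.endsWith("-Data.db") || line.endsWith("-Index.db"))
                        sstRss += rss;             // mmapped SSTable data and index files
                }
            }
            System.out.printf("Total SST: %d%nAnon  RSS: %d%nTotal RSS: %d%n", sstRss, anonRss, totalRss);
        }
    }

If the anonymous RSS stays far above -Xmx plus the usual JVM overhead (thread 
stacks, code cache, direct buffers), that is the 'overallocated' part.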

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
