[jira] [Comment Edited] (LUCENE-5928) WildcardQuery may has memory leak
[ https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14139919#comment-14139919 ]

Shawn Heisey edited comment on LUCENE-5928 at 9/19/14 3:18 AM:
---
I have seen this exact memory reporting issue on my Solr installs. Java appears to misreport memory usage. See how your SHR value is 37GB? If you subtract that from your RES value, I think that's a lot closer to how much memory it's actually using.

For what follows, reference the screenshot that I have attached. !idxb1-top-sorted-mem.png!

Solr on this machine has a max heap of 6144M. The machine has 64GB of RAM. Let's add up some numbers, each of which I would ordinarily consider to be absolute truth:

- 16GB: RES size of the Solr process.
- 9.8GB: The amount of memory listed as free.
- 44GB: The amount of memory used by the OS disk cache.
- 1.1GB: next largest java process.
- 0.3GB: next largest java process.

These numbers add up to more than 70GB ... but the machine only has 64GB total. But if you subtract the 11GB value in the SHR column, then it all fits, and the resulting number is also less than the max heap of 6144M.
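The arithmetic above can be checked with a short sketch. The figures are taken from the comment itself; the 11 GB SHR value is read from the attached screenshot, so treat it as illustrative:

```python
# Figures from the comment (GB); the SHR value is from the screenshot.
res_solr = 16.0     # RES of the Solr process
free = 9.8          # memory listed as free
disk_cache = 44.0   # OS disk cache
java2 = 1.1         # next largest java process
java3 = 0.3         # next largest java process
total_ram = 64.0    # physical RAM on the machine

naive_sum = res_solr + free + disk_cache + java2 + java3
print(naive_sum)    # 71.2 -- more than the 64 GB the machine has

shr_solr = 11.0     # SHR column for the Solr process (from the screenshot)
adjusted = naive_sum - shr_solr
print(adjusted)     # 60.2 -- subtracting SHR, everything fits in 64 GB

# And RES minus SHR drops below the 6144M max heap:
print(res_solr - shr_solr)  # 5.0
```

This is the whole point of the comment: SHR is counted inside RES, so summing RES with the page cache double-counts the mapped index pages.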
WildcardQuery may has memory leak
---
Key: LUCENE-5928
URL: https://issues.apache.org/jira/browse/LUCENE-5928
Project: Lucene - Core
Issue Type: Bug
Components: core/search
Affects Versions: 4.9
Environment: SSD 1.5T, RAM 256G, records 15*1*1
Reporter: Littlestar
Assignee: Uwe Schindler
Attachments: idxb1-top-sorted-mem.png, top_java.jpg

data 800G, records 15*1*1. One search thread.
Queries: content:??? content:* content:*1 content:*2 content:*3
jvm heap=96G, but the JVM memory usage is over 130G? Running more wildcard queries uses even more memory.
Does Lucene search/index use a lot of DirectMemory or native memory? I used -XX:MaxDirectMemorySize=4g, but it made nothing better. Thanks.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-5928) WildcardQuery may has memory leak
[ https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14126867#comment-14126867 ]

Uwe Schindler edited comment on LUCENE-5928 at 9/9/14 11:44 AM:
---
Hi, this is not an issue of WildcardQuery. This is also not related to used heap space. The difference you see is in most cases a common misunderstanding of these two terms:

- Virtual Memory (VIRT): This is allocated address space, *it is not allocated memory*. On 64-bit platforms this is free and is not limited by physical memory (it is not even related to it). If you use mmap, VIRT is something like RES + up to 2 times the size of all open indexes. Internally the whole index is seen like a swap file by the OS kernel.
- Resident Memory (RES): This is the size of heap space + size of direct memory. This is *allocated* memory, but may reside in swap, too. Please keep in mind that some operating systems also count memory there that was mmapped from the file system cache into the process, because it is resident. You can see this by looking at SHR (shared), which is memory shared with other processes (in that case the kernel). For Lucene this RES memory is also not a problem, because the file system cache is managed by the kernel and freed on request (SHR/RES goes down then).

By executing a wildcard like *:* you just access the whole term dictionary and all postings lists, so they are accessed on disk and therefore loaded into the file system cache. When using MMap, the space in the file system cache is also shown in VIRT of the process, because the Linux/Windows kernel maps the file system memory into the address space. But it does not waste memory.

For more information, see: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
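The mmap behavior described above can be demonstrated with a minimal sketch (not Lucene-specific; any mmapped file behaves this way): mapping a large file only reserves address space, so it inflates VIRT immediately, while pages count toward RES (and SHR, as shared page-cache memory) only once they are actually touched.

```python
# Minimal sketch: mmap a large sparse file. The whole file lands in the
# process's address space (VIRT) at once, but resident memory (RES) grows
# only page by page as bytes are actually read.
import mmap
import os
import tempfile

size = 256 * 1024 * 1024  # 256 MB of address space, not 256 MB of RAM

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(size)      # sparse file: no data blocks written yet
    path = f.name

with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # All 256 MB are now mapped (visible in VIRT), but reading one byte
    # faults in a single page, not the whole file:
    first_byte = m[0]
    print(len(m), first_byte)  # 268435456 0
    m.close()

os.remove(path)
```

This is why a 900G VIRT on an 800G index is expected and harmless: the kernel can drop those cached pages whenever memory pressure demands it.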
[jira] [Comment Edited] (LUCENE-5928) WildcardQuery may has memory leak
[ https://issues.apache.org/jira/browse/LUCENE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14127909#comment-14127909 ]

Littlestar edited comment on LUCENE-5928 at 9/10/14 1:45 AM:
---
Hi, when using the default MMapDirectory with jvm heap=96G, the java process RES is over 130G, not VIRT (VIRT=900G). See attachment, thanks. !top_java.jpg!
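When RES itself looks huge, as reported here, the numbers top shows can be read directly from /proc to see how much of RES is shared page-cache memory (the SHR portion Uwe describes). A hedged sketch, assuming Linux; top's VIRT corresponds to VmSize and RES to VmRSS, and field availability varies by kernel version:

```python
# Sketch: read top's VIRT (VmSize) and RES (VmRSS) for a process straight
# from /proc/<pid>/status. On newer Linux kernels, RssFile/RssShmem break
# out the shared (page-cache-backed) part of RES. Assumes Linux /proc.
def proc_mem_kb(pid="self"):
    fields = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in ("VmSize", "VmRSS", "RssFile", "RssShmem"):
                fields[key] = int(rest.split()[0])  # values are in kB
    return fields

mem = proc_mem_kb()
print(mem)
# For a Lucene process with mmapped indexes: the index counts toward
# VmSize immediately, but toward VmRSS only for pages currently cached,
# and that cached portion is reclaimable by the kernel on demand.
```

Subtracting the file-backed resident part from VmRSS gives the process's "own" memory, which is what should be compared against the 96G heap.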