[ https://issues.apache.org/jira/browse/SOLR-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14014530#comment-14014530 ]

Shalin Shekhar Mangar commented on SOLR-5808:
---------------------------------------------

I just ran into this as well: splitting a large segment in a 500M-doc index (12GB heap) 
took down the node. I'll investigate and try to reduce the memory requirements.

> collections?action=SPLITSHARD running out of heap space due to large segments
> -----------------------------------------------------------------------------
>
>                 Key: SOLR-5808
>                 URL: https://issues.apache.org/jira/browse/SOLR-5808
>             Project: Solr
>          Issue Type: Bug
>          Components: update
>    Affects Versions: 4.7
>            Reporter: Will Butler
>            Assignee: Shalin Shekhar Mangar
>              Labels: outofmemory, shard, split
>
> This issue is related to [https://issues.apache.org/jira/browse/SOLR-5214]. 
> Although memory issues due to merging have been resolved, we still run out of 
> memory when splitting a shard containing a large segment (created by 
> optimizing). The Lucene MultiPassIndexSplitter is able to split the index 
> without error.
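For reference, the MultiPassIndexSplitter mentioned above can also be invoked from the command line via its main class in the lucene-misc module. A sketch of such an invocation is below; the jar names/versions and the index paths are assumptions for a Lucene 4.7 setup, not taken from this report:

```shell
# Hypothetical invocation of Lucene's MultiPassIndexSplitter (lucene-misc).
# Splits the index at /path/to/shard/index into 2 sub-indexes under -out.
# Jar file names and paths below are illustrative assumptions.
java -cp lucene-core-4.7.0.jar:lucene-misc-4.7.0.jar \
  org.apache.lucene.index.MultiPassIndexSplitter \
  -out /path/to/split-output \
  -num 2 \
  /path/to/shard/index
```

Note that MultiPassIndexSplitter works by making multiple passes over the source index and deleting the documents that do not belong to each part, so it trades extra I/O for a smaller memory footprint than a single in-memory split.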



--
This message was sent by Atlassian JIRA
(v6.2#6252)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
