[ https://issues.apache.org/jira/browse/SOLR-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14680263#comment-14680263 ]

Guido commented on SOLR-5808:
-----------------------------

Hello. As a workaround, what if I create a replica of each shard and then 
split the replicas? If creating the replica completely rebuilds the index on 
the replica, each replica would end up with several smaller segments and the 
shard split would no longer run out of memory. Could that work as a 
workaround, or does creating a replica simply copy the same index, keeping 
the same number of segments? Thanks
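
Something like the calls below is what I have in mind (host, collection and 
shard names are only placeholders, and if I remember correctly ADDREPLICA 
needs Solr 4.8 or later):

  # add an extra replica of the shard to be split
  curl 'http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1'

  # once the new replica is active, try the split again
  curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1'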

> collections?action=SPLITSHARD running out of heap space due to large segments
> -----------------------------------------------------------------------------
>
>                 Key: SOLR-5808
>                 URL: https://issues.apache.org/jira/browse/SOLR-5808
>             Project: Solr
>          Issue Type: Bug
>          Components: update
>    Affects Versions: 4.7
>            Reporter: Will Butler
>            Assignee: Shalin Shekhar Mangar
>              Labels: outofmemory, shard, split
>
> This issue is related to [https://issues.apache.org/jira/browse/SOLR-5214]. 
> Although memory issues due to merging have been resolved, we still run out of 
> memory when splitting a shard containing a large segment (created by 
> optimizing). The Lucene MultiPassIndexSplitter is able to split the index 
> without error.
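
For reference, the MultiPassIndexSplitter mentioned above ships in the 
lucene-misc module and can be run from the command line. A rough sketch 
(jar versions, paths and the number of parts are placeholders; exact 
options may differ between Lucene versions):

  java -cp lucene-core-4.7.0.jar:lucene-misc-4.7.0.jar \
       org.apache.lucene.index.MultiPassIndexSplitter \
       -out /path/to/split-output -num 2 /path/to/shard/data/index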



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
