On 3/2/2015 6:12 AM, tuxedomoon wrote:
> Shawn, in light of Garth's response below
>
> "You can't just add a new core to an existing collection. You can add the
> new node to the cloud, but it won't be part of any collection. You're not
> going to be able to just slide it in as a 4th shard to an
--
View this message in context:
http://lucene.472066.n3.nabble.com/Does-shard-splitting-double-host-count-tp4189595p4190320.html
Sent from the Solr - User mailing list archive at Nabble.com.
I'd forgotten that -DzkHost refers to the ZooKeeper hosts, not the Solr hosts.
Thanks.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Does-shard-splitting-double-host-count-tp4189595p4189703.html
a new collection for the next set. Keep
the query alias updated to span the collections you're interested in.
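The alias approach described above can be sketched with the Collections API. This is only a sketch: the host, alias name, and collection names (logs_2015_01, logs_2015_02) are hypothetical placeholders, not anything from this thread.

```shell
# Sketch: roll a query alias forward to span a set of time-based collections.
# All names below are hypothetical examples.
SOLR_HOST="localhost:8983"              # assumed Solr node
ALIAS="logs"                            # alias the application queries
COLLECTIONS="logs_2015_01,logs_2015_02" # collections the alias should cover

# CREATEALIAS replaces an existing alias atomically, so queries against the
# alias never see a gap while the collection set changes underneath it.
URL="http://${SOLR_HOST}/solr/admin/collections?action=CREATEALIAS&name=${ALIAS}&collections=${COLLECTIONS}"
echo "$URL"
# Against a live cluster this would be issued as: curl "$URL"
```

Because the application only ever queries the alias, new collections can be added (and old ones retired) without touching client code.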
-----Original Message-----
From: tuxedomoon [mailto:dancolem...@yahoo.com]
Sent: Friday, February 27, 2015 12:43 PM
To: solr-user@lucene.apache.org
Subject: Re: Does shard splitting double host count
On 2/27/2015 11:42 AM, tuxedomoon wrote:
> What about adding one new leader/replica pair? It seems that would entail
>
> a) creating the r3.large instances and volumes
> b) adding 2 new Zookeeper hosts?
> c) updating my Zookeeper configs (new hosts, new ids, new SOLR config)
> d) restarting all ZK
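On the ZooKeeper side, step (c) above amounts to extending zoo.cfg on every ZK node. A minimal sketch, with hypothetical host names:

```
# zoo.cfg on every ZooKeeper node -- host names are hypothetical examples
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
# newly added nodes (step b); each new node also needs a matching myid file
server.4=zk4.example.com:2888:3888
server.5=zk5.example.com:2888:3888
```

Each server.N entry must agree with the N in that node's myid file, and every node needs the full list before the rolling restart in step (d).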
removing it from shard2.
I'm looking for a migration strategy that ends with 25% of the documents on
each shard. I would also consider deleting docs by date range from shards 1,
2, and 3 and reindexing them to redistribute evenly.
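The delete-by-date-range step could be done with Solr's XML update syntax. A minimal sketch, assuming a date field named indexed_date (the field name, collection name, and date bounds are hypothetical):

```shell
# Sketch: build a delete-by-query for a date range, so those documents can
# be reindexed elsewhere. Field and collection names are hypothetical.
SOLR_HOST="localhost:8983"
COLLECTION="mycollection"
QUERY='indexed_date:[2014-01-01T00:00:00Z TO 2014-06-30T23:59:59Z]'

# Solr's update handler accepts <delete><query>...</query></delete>.
BODY="<delete><query>${QUERY}</query></delete>"
echo "$BODY"
# Against a live cluster, roughly:
# curl "http://${SOLR_HOST}/solr/${COLLECTION}/update?commit=true" \
#      -H "Content-Type: text/xml" --data-binary "$BODY"
```

Deleting by query only marks documents as deleted; the space is reclaimed as segments merge, so memory relief is not immediate.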
On 2/27/2015 7:15 AM, tuxedomoon wrote:
> I currently have a SolrCloud with 3 shards + replicas; it is holding 130M
> documents, and the r3.large hosts are running out of memory. As it's on 4.2
> there is no shard splitting; I will have to reindex to a 4.3+ version.
>
> If I had that feature would
Sent: Friday, February 27, 2015 8:16 AM
To: solr-user@lucene.apache.org
Subject: Does shard splitting double host count
I currently have a SolrCloud with 3 shards + replicas; it is holding 130M
documents, and the r3.large hosts are running out of memory. As it's on 4.2
there is no shard splitting,
--
View this message in context:
http://lucene.472066.n3.nabble.com/Does-shard-splitting-double-host-count-tp4189595.html
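For completeness: once on 4.3+, the split discussed in this thread is a single Collections API call. A sketch, with hypothetical collection and shard names:

```shell
# Sketch: SPLITSHARD divides one shard into two sub-shards on the same node,
# roughly halving documents per shard without adding hosts up front.
# Collection and shard names are hypothetical examples.
SOLR_HOST="localhost:8983"
COLLECTION="mycollection"
SHARD="shard1"

URL="http://${SOLR_HOST}/solr/admin/collections?action=SPLITSHARD&collection=${COLLECTION}&shard=${SHARD}"
echo "$URL"
# curl "$URL"   # issued against a live cluster
```

Note the split runs on the node hosting the shard leader, so that host needs disk and heap headroom during the operation; the sub-shards can be moved to new hardware afterwards.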