Didier, I'm starting to look at SOLR-6399.
After the core was unloaded, it was absent from the collection list, as
if it never existed. On the other hand, re-issuing a CREATE call with the
same collection restored the collection, along with its data.
The collection is still in ZK though?
It would be a huge step forward if one could have several hundred Solr
collections, but only have a small portion of them opened/loaded at the
same time. This is similar to Elasticsearch's close index API, listed here:
I've tried a few variations, with 3 x ZK, 6 x nodes, Solr 4.10.3 and Solr 5.0,
without any success and no real difference. There is a tipping point at
around 3,000-4,000 cores (varies depending on hardware): below it I can
restart the cloud OK within ~4min, above it the cloud does not work and
continuous
On 3/4/2015 2:09 AM, Shawn Heisey wrote:
I've come to one major conclusion about this whole thing, even before
I reach the magic number of 4000 collections. Thousands of collections
is not at all practical with SolrCloud currently.
I've now encountered a new problem. I may have been hasty in
I'm running on Solaris x86, I have plenty of memory and no real limits
# plimit 15560
15560: /opt1/jdk/bin/java -d64 -server -Xss512k -Xms32G -Xmx32G
-XX:MaxMetasp
resource current maximum
time(seconds) unlimited unlimited
file(blocks) unlimited
On 3/4/2015 5:37 PM, Damien Kamerman wrote:
On 3/3/2015 9:22 PM, Damien Kamerman wrote:
I've done a similar thing to create the collections. You're going to need
more memory I think.
OK, so maxThreads limit on jetty could be causing a distributed dead-lock?
I don't know what the exact problems would be if maxThreads is reached.
It's
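The distributed-deadlock worry above can at least be inspected: if Jetty's request thread pool is capped low, N cores recovering at once can each hold a request thread while waiting on a peer node, starving the pool cluster-wide. A sketch for finding the cap, assuming the stock Jetty layouts that ship with Solr (paths vary by version and are an assumption here):

```shell
# Sketch, assuming stock install layouts: server/etc/jetty.xml in Solr 5.x,
# example/etc/jetty.xml in Solr 4.x. The thread pool cap looks roughly like
#   <Set name="maxThreads">10000</Set>
# A low value (e.g. 200) is the deadlock risk discussed above.
grep -n "maxThreads" server/etc/jetty.xml example/etc/jetty.xml 2>/dev/null
```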
On 3/4/2015 1:02 AM, Shawn Heisey wrote:
Even now, nearly three hours after startup, the Solr log is still
spitting out thousands of lines that look like this, so I don't think I
can call it stable:
INFO - 2015-03-04 07:35:51.166;
org.apache.solr.common.cloud.ZkStateReader; Updating data
About one minute after startup I sometimes see the
'org.apache.solr.cloud.ZkController; Timed out waiting to see all nodes
published as DOWN in our cluster state.'
And I see the 'Still seeing conflicting information about the leader of
shard' after about 5 minutes.
Thanks Shawn, I will create an
On 3/2/2015 12:54 AM, Damien Kamerman wrote:
I still see the same cloud startup issue with Solr 5.0.0. I created 4,000
collections from scratch and then attempted to stop/start the cloud.
I have been trying to duplicate your setup using the -e cloud example
included in the Solr 5.0 download and
On 4 March 2015 at 13:18, Shawn Heisey apa...@elyograg.org wrote:
On 3/3/2015 6:55 AM, Shawn Heisey wrote:
With a longer zkClientTimeout, does the failure happen on a later
collection? I had hoped that it would solve the problem, but I'm
curious about whether it was able to load more collections before it
finally died, or whether it made no difference...
On 3/3/2015 12:42 AM, Damien Kamerman wrote:
Still no luck starting solr with 40s zkClientTimeout. I'm not seeing any
expired sessions...
There must be a way to start Solr with many collections. It runs fine...
until a restart is required.
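The 40s zkClientTimeout mentioned here can be passed in at startup; a sketch, assuming the stock solr.xml wiring where the value is read from a system property (the exact hosts and the `-a` pass-through flag of the 5.0 `bin/solr` script are assumptions):

```shell
# Sketch: stock solr.xml typically contains
#   <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
# so the timeout can be overridden via a system property. 40000 ms = the 40s
# tried in this thread; the ZooKeeper servers must also allow sessions that
# long (maxSessionTimeout >= 40000 in zoo.cfg, default 20 * tickTime).
bin/solr start -c -z zk1:2181,zk2:2181,zk3:2181 -a "-DzkClientTimeout=40000"
```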
On 3 March 2015 at 03:33, Shawn Heisey apa...@elyograg.org wrote:
node1:
WARN - 2015-03-02 18:09:02.371;
org.eclipse.jetty.server.handler.RequestLogHandler; !RequestLog
WARN - 2015-03-02 18:10:07.196;
On 2/26/2015 11:14 PM, Damien Kamerman wrote:
I've run into an issue with starting my solr cloud with many collections.
My setup is:
3 nodes (solr 4.10.3 ; 64GB RAM each ; jdk1.8.0_25) running on a single
server (256GB RAM).
5,000 collections (1 x shard ; 2 x replica) = 10,000 cores
1 x Zookeeper 3.4.6
Java arg -Djute.maxbuffer=67108864 added
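The -Djute.maxbuffer arg used in this setup has to be applied consistently; a sketch, assuming the stock zkEnv.sh / solr.in.sh variable conventions (the file names are assumptions):

```shell
# Sketch: jute.maxbuffer caps the serialized size of a znode (default ~1 MiB)
# and must be raised on every ZooKeeper server AND every client JVM, or
# reads/writes of a large /clusterstate.json fail. 67108864 = 64 MiB, the
# value used in this thread.
SERVER_JVMFLAGS="-Djute.maxbuffer=67108864"       # ZooKeeper side (e.g. conf/java.env)
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=67108864"  # Solr side (e.g. bin/solr.in.sh)
```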
Oh, and I was wondering if 'leaderVoteWait' might help in Solr 4.
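For reference, leaderVoteWait is how long a starting node waits to see the other replicas of a shard before taking leadership anyway, so it directly affects full-cluster restart time. A sketch of overriding it, assuming solr.xml wires the value to a system property (that wiring, and the 10s value, are assumptions for illustration):

```shell
# Sketch: lowering leaderVoteWait can speed up a full-cluster restart at the
# cost of safety (a node may become leader before peers report in); raising
# it is safer for data. Assumes solr.xml contains something like
#   <int name="leaderVoteWait">${leaderVoteWait:180000}</int>
bin/solr start -c -a "-DleaderVoteWait=10000"
```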
On 27 February 2015 at 18:04, Damien Kamerman dami...@gmail.com wrote:
This is going to push SolrCloud beyond its limits. Is this just an
exercise to see how far you can push Solr, or are you looking at setting
up a production install with several thousand collections?
I'm looking towards production.
In Solr 4.x, the clusterstate is one giant JSON structure
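That single giant JSON structure is easy to see from ZooKeeper itself; a sketch, assuming the stock ZooKeeper CLI and a local server (host/port are assumptions):

```shell
# Sketch: in Solr 4.x all collections share the one /clusterstate.json znode,
# so every watcher re-reads this whole document on each state change. With
# thousands of collections its size is a rough proxy for the update cost.
bin/zkCli.sh -server localhost:2181 get /clusterstate.json 2>/dev/null | wc -c
```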