Hello folks, has anyone tried to use the autoscaling simulation framework to
simulate a lost node in a solr cluster? I was trying to do the following:
1.- Take a current production cluster state snapshot using bin/solr
autoscaling -save
2.- Modify the clusterstate and livenodes json files in th
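A rough sketch of that save-then-simulate loop, in case it helps (the flag names below are from memory of the Solr 8.x simulation tool and may differ on your version — verify with `bin/solr autoscaling -help` before relying on them):

```shell
# 1. Save a snapshot of the live cluster's autoscaling state
#    (-zkHost and -save are assumptions; check -help on your version)
bin/solr autoscaling -zkHost localhost:2181 -save /tmp/cluster-snapshot

# 2. Edit the saved JSON by hand, e.g. remove a node from the
#    live-nodes file to model the lost-node scenario

# 3. Re-run the tool against the edited snapshot in simulation mode
#    (again, the exact flag for loading a snapshot may differ)
bin/solr autoscaling -simulate -fromSnapshot /tmp/cluster-snapshot
```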
Hi,
I have use cases for features which require a query function plus some
extra math on top of the query function's result.
E.g. of a feature: the number of extra terms in the document relative to the input text
I am trying various ways of representing this feature but always getting an
exception
java.lang.Runtim
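One way I've seen "math on top of a query" expressed in LTR is a SolrFeature whose `q` is a function query wrapping `query(...)`. A sketch under stated assumptions — the store name, field name (`termCount_i`), and external feature info name (`user_text`) below are all hypothetical, only the class name is real:

```json
{
  "store": "myFeatureStore",
  "name": "extraTermCount",
  "class": "org.apache.solr.ltr.feature.SolrFeature",
  "params": {
    "q": "{!func}sub(termCount_i, query({!edismax qf=text v=${user_text}}))"
  }
}
```

At query time the external value would be passed as `efi.user_text=...` in the rerank parameters.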
Hi,
I also tried these two rules and I still end up with all replicas of all
shards of the collection created in a single zone
curl 'http://localhost:8983/api/cluster/autoscaling' -H
'Content-type:application/json' -d '{ "set-policy": { "policyzone": [
{"replica": "#EQUAL", "shard": "#EACH", "nodeset":[{
In a word, yes. G1GC still has pause spikes, and the larger the heap, the more
likely you are to encounter them. So having multiple JVMs rather than one large
JVM with a ginormous heap is still recommended.
I’ve seen some cases that used the Zing zero-pause product with very large
heaps, but they we
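As a concrete sketch, this advice usually lands in solr.in.sh: cap the heap below the roughly 32 GB CompressedOops cutoff and pass any G1 tuning explicitly. The values here are illustrative, not recommendations:

```shell
# solr.in.sh fragment: stay under the ~32 GB CompressedOops threshold
SOLR_HEAP="30g"
# Explicit G1 flags; the pause target is illustrative only
GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250"
```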
I was observing a severe degradation in performance when adding more features
to my Solr LTR model, even when the model complexity (number of trees, depth of
each tree) remains the same. I am using the MultipleAdditiveTreesModel.
Moreover, if model complexity increases while keeping the number of features constant,
performa
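For reference, a minimal MultipleAdditiveTreesModel definition looks like this (the store, model, and feature names are made up; only the class name and JSON shape follow the LTR docs). Every feature listed under "features" is extracted per reranked document before any tree is walked, which may be why feature count affects cost independently of tree count and depth:

```json
{
  "store": "myFeatureStore",
  "name": "myModel",
  "class": "org.apache.solr.ltr.model.MultipleAdditiveTreesModel",
  "features": [
    {"name": "someFeature"}
  ],
  "params": {
    "trees": [
      {
        "weight": "1.0",
        "root": {
          "feature": "someFeature",
          "threshold": "0.5",
          "left": {"value": "-1.0"},
          "right": {"value": "1.0"}
        }
      }
    ]
  }
}
```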
Not sure how SolrCloud works, but if you're still facing issues, you can try this:
1. Deploy the features and models as a _schema_feature-store.json
and _schema_model-store.json file in the right config set.
2. Either deploy to all nodes (works for me) or add these files
to confFiles in /replication
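If you go the confFiles route (option 2), note that this applies to the legacy master/follower replication handler in standalone mode, not SolrCloud. A sketch of the solrconfig.xml entry — the replicateAfter setting and surrounding handler config are assumed, only the confFiles line reflects the files above:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <!-- ship the LTR stores alongside the usual config files -->
    <str name="confFiles">_schema_feature-store.json,_schema_model-store.json</str>
  </lst>
</requestHandler>
```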
Hey folks,
I've always heard that it's preferred to have a SolrCloud setup with
many smaller instances, each with a heap under the CompressedOops limit,
instead of having larger instances with, say, 256GB worth of
heap space.
Does this recommendation still hold true with newer garbage collectors
Hi Matthew & all,
Why not? Try the code 'evenearlier' for a further discount! (Oh and we
extended the earlybird period for another week).
Cheers
Charlie
On 17/09/2020 21:00, matthew sporleder wrote:
Is there a friends-on-the-mailing list discount? I had a bit of sticker shock!
On Wed, Sep