Re: All shards placed on the same node

2020-04-06 Thread Kudrettin Güleryüz
Thank you Sandeep. Actually, there is a reason the core precision is set to
10: I didn't want cores to be the only criterion. Setting the precision to 1
would pretty much make it work that way.

I would prefer freedisk to be distributed evenly, but that doesn't seem to
work well when cores is missing from the preferences list or comes second in
it. I couldn't locate a good resource on the Autoscaling API.

Nevertheless, setting the precision to 3 seems to do what I intend.
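For the archives, here is a rough model of what I believe was happening with
the preference sort. The node names and numbers below are made up, and the
bucketed comparison is a simplified reading of how precision works, not
Solr's actual code:

```python
# Simplified model of Solr's cluster-preference sort (assumption: each
# preference compares values in buckets of size `precision`, falling
# through to the next preference on a tie).
def preference_key(node, precisions):
    core_p, disk_p = precisions
    return (node["cores"] // core_p,        # minimize cores (bucketed)
            -(node["freedisk"] // disk_p))  # maximize freedisk (bucketed)

# Hypothetical nodes: node1 is busy but has more free disk.
nodes = [
    {"name": "node1", "cores": 6, "freedisk": 900},
    {"name": "node2", "cores": 0, "freedisk": 700},
]

# With core precision 10, both nodes fall in core bucket 0, so the node
# with more free disk (node1) keeps winning and accumulates shards.
best = min(nodes, key=lambda n: preference_key(n, (10, 100)))
print(best["name"])  # node1

# With core precision 3, node2's lower core count separates the nodes.
best = min(nodes, key=lambda n: preference_key(n, (3, 100)))
print(best["name"])  # node2
```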

On Sun, Apr 5, 2020 at 9:49 PM Sandeep Dharembra <
sandeep.dharem...@gmail.com> wrote:

> Hey,
>
> Please change the precision in the cluster preference for cores to 1
> instead of 10 and then give it a try.
>
> With the current settings, two nodes are not treated as different until
> they differ by 10 cores.
>
> Thanks,
>


Re: All shards placed on the same node

2020-04-05 Thread Sandeep Dharembra
Hey,

Please change the precision in the cluster preference for cores to 1
instead of 10 and then give it a try.

With the current settings, two nodes are not treated as different until
they differ by 10 cores.
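To illustrate what precision does (a sketch of my understanding, not Solr's
actual comparison code):

```python
def same_bucket(a, b, precision):
    # Sketch of precision-based comparison: two metric values that fall
    # within `precision` of each other are treated as equal when sorting
    # nodes, so the tiebreak falls through to the next preference.
    return abs(a - b) < precision

# With precision=10, a node holding 0 cores and one holding 6 compare
# equal on cores, so freedisk decides the placement instead.
print(same_bucket(0, 6, precision=10))  # True  -> tie on cores
print(same_bucket(0, 6, precision=1))   # False -> fewer cores wins
```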

Thanks,


On Mon, Apr 6, 2020, 2:09 AM Kudrettin Güleryüz  wrote:

> Hi,
>
> Running Solr 7.3.1 on an 8-node SolrCloud cluster. Why would Solr create
> all 6 shards on the same node? I don't want to restrict Solr to creating at
> most x shards per node, but placing all shards on the same node doesn't
> look right to me.
>
> Will Solr use all the space on one node before using another? Here is my
> autoscaling configuration:
>
> {
>   "cluster-preferences":[
> {
>   "minimize":"cores",
>   "precision":10},
> {
>   "precision":100,
>   "maximize":"freedisk"},
> {
>   "minimize":"sysLoadAvg",
>   "precision":3}],
>   "cluster-policy":[{
>   "freedisk":"<10",
>   "replica":"0",
>   "strict":"true"}],
>   "triggers":{".auto_add_replicas":{
>   "name":".auto_add_replicas",
>   "event":"nodeLost",
>   "waitFor":120,
>   "actions":[
> {
>   "name":"auto_add_replicas_plan",
>   "class":"solr.AutoAddReplicasPlanAction"},
> {
>   "name":"execute_plan",
>   "class":"solr.ExecutePlanAction"}],
>   "enabled":true}},
>   "listeners":{".auto_add_replicas.system":{
>   "trigger":".auto_add_replicas",
>   "afterAction":[],
>   "stage":[
> "STARTED",
> "ABORTED",
> "SUCCEEDED",
> "FAILED",
> "BEFORE_ACTION",
> "AFTER_ACTION",
> "IGNORED"],
>   "class":"org.apache.solr.cloud.autoscaling.SystemLogListener",
>   "beforeAction":[]}},
>   "properties":{}}
>
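One more note on the cluster-policy in the config above: the rule
{"freedisk":"<10","replica":"0","strict":"true"} says a node whose freedisk
drops below 10 must hold zero replicas, and strict makes that a hard
constraint rather than a preference. A minimal sketch of that rule
(hypothetical helper, not Solr's implementation):

```python
def violates_policy(node_freedisk_gb, replica_count):
    # The rule {"freedisk":"<10","replica":"0","strict":"true"} means:
    # on any node whose freedisk is below 10, the replica count must be
    # 0; any placement that breaks this is rejected outright.
    return node_freedisk_gb < 10 and replica_count > 0

print(violates_policy(5, 1))   # True  -> placement rejected
print(violates_policy(50, 3))  # False -> allowed
```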