Hi,

I'm deploying Elasticsearch on a cluster with heterogeneous node sizes: some 
nodes have 32GB of memory and some have 16GB. I'd like more shards to be 
allocated to the nodes with more memory.

I've googled a bit and found settings that can exclude certain indices from 
certain nodes, but that isn't very convenient. So I'm wondering: is there a 
'weight' setting for individual nodes, or does ES already take node memory 
size into account when allocating shards?
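
For reference, this is roughly the kind of allocation filtering I mean (just 
a sketch from the docs; the attribute name 'box_type' is only an example I 
made up, and the exact syntax may differ between ES versions):

    # elasticsearch.yml on the 32GB nodes: tag them with a custom attribute
    node.box_type: large

    # then pin an index to those nodes via the index settings API
    curl -XPUT 'localhost:9200/my_index/_settings' -d '{
      "index.routing.allocation.include.box_type": "large"
    }'

That works per index, but it means tagging nodes and updating every index by 
hand, which is what I'd like to avoid.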

Thanks.
