2 Servers with 2 primary shards... optimization questions

2015-01-22 Thread Darin Hensley
With this setup:


   Server 1           Server 2
   ----------------   ----------------
   Primary Shard 1    Primary Shard 2
   Replica Shard 1    Replica Shard 2



1) If primary shard 1 failed, would replica shard 1 take over and become 
primary shard 1?
2) Is read performance optimized by having 2 primary shards on 2 separate 
servers?
3) The documentation states we would end up with two nodes having one 
shard each, and one node doing double the work with two shards. Regarding 
the one node doing double the work with two shards: do they mean 1 server 
having a primary shard and a replica shard?
--
Also,


   Server 1           Server 2
   ----------------   ----------------
   Primary Shard 1    Primary Shard 2
   Replica Shard 2    Replica Shard 1



1) Is this setup possible, and would it be beneficial? If so, would I have 
to manually assign the shards, or is this how ES allocates them by default 
with 2 servers and 2 shards? (See the sketch after these questions.)
2) With this setup, if it is even possible, is there a read performance 
benefit from having Primary Shard 1 on server 1 and Replica Shard 1 on 
server 2?
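
A minimal sketch of how to see the layout on a test cluster, using Python and 
the requests library (the local address and the index name test_index are 
assumptions, not from this thread):

    # Sketch only: node address and index name are assumptions.
    import requests

    ES = "http://localhost:9200"

    # 2 primary shards with 1 replica each = 4 shard copies spread
    # across the 2 servers.
    requests.put(ES + "/test_index", json={
        "settings": {"number_of_shards": 2, "number_of_replicas": 1}
    })

    # _cat/shards lists every shard copy, whether it is a primary (p)
    # or a replica (r), its state, and the node it was allocated to.
    print(requests.get(ES + "/_cat/shards/test_index?v").text)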




Re: 2 Servers with 2 primary shards... optimization questions

2015-01-22 Thread Kimbro Staken
Your second example is what Elasticsearch will do by default. It will never
allocate a primary and a replica of the same shard on the same node. In that
example, if one of the nodes went down, both primaries would end up on the
remaining node, the replicas would be unallocated, and the cluster would have
yellow status.

Kimbro
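
A quick way to verify this behaviour after stopping one of the two nodes (the 
node address and index name test_index are assumptions, not from the thread):

    # Sketch only: node address and index name are assumptions.
    import requests

    ES = "http://localhost:9200"

    health = requests.get(ES + "/_cluster/health").json()
    print(health["status"])             # "yellow" while replicas cannot be placed
    print(health["unassigned_shards"])  # the replica copies with nowhere to go

    # The shard table now shows both primaries on the surviving node and
    # the replica copies as UNASSIGNED.
    print(requests.get(ES + "/_cat/shards/test_index?v").text)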




Re: Optimization Questions

2014-08-19 Thread Michael McCandless
You could turn on TRACE logging for the lucene.iw component.  This will
give tons of details about what merges are being done.

Normally, if there are no writes going to the index at the same time, an
optimize with max_num_segments=1 really should get down to 1 segment in the
end ... not sure why it isn't in your case.  Was there a refresh after the
optimize?

Mike McCandless

http://blog.mikemccandless.com
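
As a quick check of the refresh question, one could refresh and then recount 
the segments each shard copy actually reports (the node address and index name 
below are assumptions, not from the thread):

    # Sketch only: node address and index name are assumptions.
    import requests

    ES = "http://localhost:9200"
    INDEX = "my_index"  # hypothetical

    # Force a refresh so the searchable segment list is up to date ...
    requests.post(ES + "/" + INDEX + "/_refresh")

    # ... then count the segments each shard copy reports.
    segments = requests.get(ES + "/" + INDEX + "/_segments").json()
    for shard_id, copies in segments["indices"][INDEX]["shards"].items():
        for copy in copies:
            role = "primary" if copy["routing"]["primary"] else "replica"
            print("shard", shard_id, role, "->", len(copy["segments"]), "segments")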






Re: Optimization Questions

2014-08-18 Thread Andrew Selden
Hi Greg,

I believe max_num_segments is technically a hint that can be overridden by the 
merge algorithm if it decides to. You might try simply re-running the optimize 
to get from ~25 segments down closer to 1. Sorry, but I don't know of any way 
to see when the optimize is finished; it's really just forcing a merge, so 
looking at merge stats is what you want.

Hope that helps.
Andrew
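
A minimal sketch of "watching the merge stats" (the node address and index name 
are assumptions, not from the thread): poll the index merge stats until no 
merges are in flight.

    # Sketch only: node address and index name are assumptions.
    import time
    import requests

    ES = "http://localhost:9200"
    INDEX = "my_index"  # hypothetical

    while True:
        stats = requests.get(ES + "/" + INDEX + "/_stats/merges").json()
        in_flight = stats["_all"]["total"]["merges"]["current"]
        print("merges in flight:", in_flight)
        if in_flight == 0:
            break
        time.sleep(10)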





Optimization Questions

2014-08-15 Thread Gregory Sutcliffe
Hey guys, 
We were doing some updates to our ES (1.3.1) clusters recently and had some 
questions about _optimize. We optimized with max_num_segments=1 and we're 
still seeing ~25 segments per shard. The index that was optimized had no 
writes going to it during that time; it was actually freshly re-opened after 
an upgrade. Also, are there any tricks to seeing when an optimize is done 
other than watching merge stats and disk IO? Maybe some data in Marvel? 

Thanks for your assistance, 
Greg
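
For reference, a minimal sketch of the calls in question on a 1.x cluster (the 
node address and index name are assumptions, not from the thread):

    # Sketch only: node address and index name are assumptions.
    import requests

    ES = "http://localhost:9200"
    INDEX = "my_index"  # hypothetical

    # ES 1.x forced-merge endpoint (renamed _forcemerge in later releases).
    requests.post(ES + "/" + INDEX + "/_optimize?max_num_segments=1")

    # Count the segments each shard copy reports afterwards.
    segments = requests.get(ES + "/" + INDEX + "/_segments").json()
    for shard_id, copies in segments["indices"][INDEX]["shards"].items():
        print("shard", shard_id, [len(c["segments"]) for c in copies])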
