Re: Disk changes forced resync

2014-05-12 Thread Duncan Innes
Not doing any monitoring yet - this is my dev cluster running on 3 
workstations.

I thought I was quick enough that the rebalance wouldn't have marched ahead 
and changed much - clearly my admin skills need sharpening!

Is there a way to get the cluster to avoid rebalancing when a node is 
removed from the cluster?  I wouldn't want a cluster rebalance starting 
just because I'm patching the OS and need a reboot.
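
[Editor's note: in Elasticsearch 1.x the usual way to do this is to switch shard allocation off via the cluster settings API before a planned restart, and back on afterwards. A sketch, assuming a node reachable on localhost:9200; the setting name is per the 1.x reference:]

```shell
# Before stopping the node: tell the master not to reallocate shards.
# "transient" means the setting resets on a full cluster restart.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}'

# ... stop elasticsearch on the node, patch the OS, reboot, start it again ...

# Once the node has rejoined: re-enable allocation so recovery can proceed.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}'
```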

Thanks

Duncan

-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/8eed4547-1ca7-4fc7-b291-529653618f94%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Disk changes forced resync

2014-05-09 Thread Duncan Innes
Apologies if this is a silly question.

I recently changed the disk layout on one of my ES nodes to put 
/var/lib/elasticsearch on its own disk partition.  Around 100GB of data was 
set to one side, the new partition created, then the data rsync'd across to 
its new partition.

As far as I can be certain, everything was the same before and after the 
operation - just that the data now sat on its own partition.  Ownership, 
permissions, ACLs and SELinux contexts were all preserved by the sync.

When I started elasticsearch back up again, however, the 100GB in the 
partition vanished and the node started to rebuild itself from scratch, 
copying data in from all the other nodes.

The time between stopping and restarting elasticsearch was around 15 
minutes.  I expected the data that I'd put back in place to be used first 
as most of it is historical and won't have changed.

Did I do something wrong in my procedure, or am I just expecting the wrong 
thing?

I'm hoping that doing disk maintenance like this in a production system 
doesn't trigger such a rebuild as my prod systems will have significantly 
more data.

Many thanks

Duncan

RHEL 6.5 x86_64
java-1.7.0-oracle-1.7.0.51-1jpp.1.el6_5.x86_64
elasticsearch-1.1.0-1.noarch



Re: Disk changes forced resync

2014-05-09 Thread Mark Walkom
In the past I have mounted the new partition under (eg) /mnt, then moved
/var/lib/elasticsearch/* to it, then remounted the new partition to
/var/lib/elasticsearch, and this has worked fine.
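
[Editor's note: that procedure can be sketched roughly as follows. Device and mount-point names are illustrative only; this assumes shard allocation has been disabled first and that the service stays stopped for the copy:]

```shell
# Hypothetical device name - adjust for your system.
service elasticsearch stop

mkdir -p /mnt/esdata
mount /dev/sdb1 /mnt/esdata

# -a preserves ownership, permissions and timestamps; -A and -X additionally
# carry over ACLs and extended attributes (which hold SELinux contexts).
rsync -aAX /var/lib/elasticsearch/ /mnt/esdata/

# Swap the new partition into place (also add it to /etc/fstab so it
# survives a reboot).
umount /mnt/esdata
mount /dev/sdb1 /var/lib/elasticsearch

# Belt-and-braces on RHEL: reapply the expected SELinux contexts.
restorecon -R /var/lib/elasticsearch

service elasticsearch start
```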

However in your case the cluster may have rebalanced itself in the
meantime, so when your node joined the master told it to ignore what data
it was holding. Are you using anything to monitor your cluster state that
could verify this?
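
[Editor's note: absent a dedicated monitoring tool, the cluster APIs are enough to check this by hand. A sketch against a node on localhost:9200; endpoints as in the 1.x documentation:]

```shell
# Overall state: did the cluster go yellow/red, how many shards are
# initializing or relocating?
curl 'http://localhost:9200/_cluster/health?pretty'

# Per-shard recovery detail: shows whether shards are being rebuilt
# from peers rather than reused from local disk.
curl 'http://localhost:9200/_cat/recovery?v'
```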

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


