Re: Reindex into another Elasticsearch

2015-06-01 Thread bitsofinfo . g
Also try this tool for more easily aggregating FS repo snapshots across a 
cluster for restoring on a different cluster. I had to make this tool for a 
similar scenario; it might help in your situation 
too: https://github.com/bitsofinfo/elasticsearch-snapshot-manager

On Thursday, May 14, 2015 at 5:50:33 PM UTC-6, Frederico Barnard wrote:
>
> I'm sorry for the long delay in answering.
> Every index is a folder inside the data folder. I simply compressed 
> those folders and sent them to S3.
> But now we've found an "answer": 
>
>    - we built another ES cluster (at another DC) and put those 
>    folders inside the data directory 
>    - this is the part that I didn't participate in (see the sketch after 
>    this list): 
>       - we had a Logstash instance querying that ES and outputting to our 
>       new ES cluster 
>    - that's our answer to what we were looking for
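>
> A minimal sketch of what that Logstash bridge could look like (hosts and 
> index pattern are hypothetical, and exact option names vary by Logstash 
> version):
>
> input {
>   elasticsearch {
>     hosts => "old-cluster:9200"     # source cluster (hypothetical)
>     index => "logstash-2015.04.*"   # indexes to pull (hypothetical)
>   }
> }
> output {
>   elasticsearch {
>     host => "new-cluster"           # destination cluster (hypothetical)
>     protocol => "http"
>   }
> }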
>
>
> Att
> Frederico Ferreira
> (21) 98714-1445
>
> 2015-04-27 18:28 GMT-03:00 Mark Walkom:
>
>> 1 shard per index doesn't make a lot of sense unless you have very small 
>> amounts of data. You'd be better off going back to the default, as you are 
>> solving the wrong problem there.
>>
>> What are these backup files you mention? How did you get them out of ES?
>>
>> On 27 April 2015 at 21:50, Frederico Ferreira wrote:
>>> This is my first e-mail, so if this problem has already been explained, 
>>> I'm sorry; I couldn't find where.
>>> I'm out of ideas. This is my question:
>>> I had an Elasticsearch cluster up and running with 1 replica, 5 shards, 1 master 
>>> (data false) and 10 slaves, with every index configured by day (from 
>>> logstash). After we changed to an hourly index, two weeks in, following a 
>>> needed maintenance reboot, Elasticsearch wasn't able to start properly. It 
>>> started assigning unassigned shards, and a lot of timeouts started happening.
>>>
>>> After 5 days trying to recover, we decided to change the configuration 
>>> of our cluster to 1 master (data false), 10 slaves, and 1-shard, 
>>> 2-replica indexes, from scratch, without any old index.
>>> My task now is to reindex those lost indexes. This is my problem:
>>> I have 10 backup files (up to 400 GB each) and I'm looking for ways to 
>>> reindex those indexes (little by little).
>>>
>>>
>>>- Should I copy those index folders to the new cluster's data folder?
>>>   - I don't need to change to a daily shard; I just need 
>>>   Elasticsearch to assign those indexes.
>>>- Is there any way to differentiate replica folders from 
>>>primary shard folders?
>>>
>>>
>>> We're using Elasticsearch 1.4.4, and each Elasticsearch node runs on a 
>>> dedicated 8-core, 16 GB RAM machine.
>>>
>>>  -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to elasticsearc...@googlegroups.com .
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/CAM0Xh3hG7BfiTwDgc0cCseTg4dVNFvav6LWvOmHS_-0Q3Ey0Tw%40mail.gmail.com
>>>  
>>> 
>>> .
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>
>



Re: 1.3.2 snapshot file system question

2015-06-01 Thread bitsofinfo . g
In case anyone else comes across this: I ended up making this to assist with 
my issue of aggregating fs snapshots 

https://github.com/bitsofinfo/elasticsearch-snapshot-manager

On Monday, January 5, 2015 at 1:08:05 PM UTC-7, Mark Walkom wrote:
>
> It won't work; the snapshot is run against every node that has shards of the 
> index, and data isn't funneled back to the node you ran the command on.
>
> On 6 January 2015 at 02:40, bitsofinfo . g wrote:
>
>> I have a cluster (1.3.2) of 10 data nodes and 5 master nodes.
>>
>> I want to take a snapshot of one index. 
>>
>> I'd like to configure a new "fs" snapshot repository, "mybackupdir", whose 
>> "location" is ONLY accessible from the node (a master node) I am issuing the 
>> snapshot creation PUT against. 
>>
>> Next, if I issue a snapshot PUT for "mybackupdir/backup1" against the 
>> master node where that location is indeed accessible, will this work? Does 
>> the node that gets the snapshot request pull all the shard data from the 
>> data nodes over to itself and write it to the snapshot dir on disk? Or 
>> does each data node responsible for a shard attempt to write to that 
>> same location (thereby requiring that the snapshot "location" be 
>> accessible by all 15 nodes...)?
>>
>> I ask this because I have a cluster that spans two data centers, and they 
>> don't all have access to a globally available NFS share where I could have 
>> a common mount path for the snapshots root.
>>
>> thanks
>>
>
>



java recovery api, stale shard information?

2015-04-30 Thread bitsofinfo . g
Hi,
I am using ES 1.5.2

For a given index I am trying to determine which ES node(s) hold the 
primary shards at any given time.

I am using the Java API.

I make a RecoveryRequest for a specific index against a node and get back 
a RecoveryResponse. I then do the following to build a simple list that 
only contains nodes which are primary for a given shard: I switch on 
getPrimary, then match the nodeId against a previously built map of the 
nodes I have.

// collect the nodes that currently hold a primary shard
response.shardResponses.values.foreach(srrList => {
  srrList.foreach(shardRecoveryResponse => {
    val recoveryState = shardRecoveryResponse.recoveryState
    if (recoveryState.getPrimary) {
      primaryNodes += nodeMap(recoveryState.getSourceNode.getId)
    }
  })
})
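
For reference, a sketch (untested, assuming an existing connected `client`) 
of deriving the same list from the cluster state routing table instead of 
the recovery API:

import org.elasticsearch.cluster.routing.ShardRoutingState
import scala.collection.JavaConverters._

// Ask the master for the routing table and keep the node ids of
// started primary shards for the index.
val state = client.admin.cluster.prepareState().setIndices("test").get.getState
val primaryNodeIds = state.getRoutingTable.index("test")
  .shardsWithState(ShardRoutingState.STARTED).asScala
  .filter(_.primary)
  .map(_.currentNodeId)
  .toSet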


The issue is this. 

a) I start up my cluster with a test index, 5 shards, and only a few 
documents. The cluster is 4 nodes. The shards are distributed as follows (via 
the head plugin) (all green):

node1: s(1)-replica, s(2)-primary, s(3)-primary
node2: s(2)-replica, s(4)-replica
node3: s(0)-replica, s(3)-replica, s(4)-primary
node4: s(0)-primary, s(1)-primary

b) I run my little bit of code as above and I get back what I would 
expect (node1, node3, and node4) as the only nodes in my list, because they 
are the only ones with primary shards.

c) I then shut down node1 (currently holds 2 primary shards)

d) The cluster now rebalances and looks like this (via the head plugin) (all 
green):

node2: s(1)-replica, s(4)-replica, s(2)-primary
node3: s(0)-replica, s(3)-primary, s(4)-primary
node4: s(0)-primary, s(1)-primary, s(2)-replica, s(3)-replica

e) I run my little bit of code again and I DON'T get back what I expect: the 
data within the RecoveryResponse states that the primary-shard-holding nodes 
are (node3 and node4). Node2 (according to data within the RecoveryResponse) 
does not hold any primary shards. Even after killing my client and 
completely re-connecting, I get the same response. 

f) The only way I can get a correct view of the cluster after a rebalance 
is by closing the index, then re-opening it. Once this is done, the 
data in the RecoveryResponse is correct (matches what the head plugin says).

Am I using this API wrong? Is this expected?

thanks



Re: java API, get "_all" snapshots in a given repository

2015-04-15 Thread bitsofinfo . g
Answered my own question here (just pass an empty array to snapshots)

https://github.com/elastic/elasticsearch/blob/4ab268bab2cfd7fc3cb4c4808f706d5049c1fae5/src/main/java/org/elasticsearch/rest/action/admin/cluster/snapshots/status/RestSnapshotsStatusAction.java
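
A minimal sketch of what that ends up looking like with the Java API 
(untested; `client` is assumed to be an existing connected Client and the 
repository name is hypothetical; passing no snapshot names behaves like "_all"):

import scala.collection.JavaConverters._

// No snapshot names given, so the request lists every snapshot in the repo.
val allSnapshotNames = client.admin.cluster
  .prepareGetSnapshots("myrepo")   // repository to list (hypothetical)
  .get()
  .getSnapshots.asScala
  .map(_.name)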

On Wednesday, April 15, 2015 at 4:16:00 PM UTC-6, bitsof...@gmail.com wrote:
>
> Hi,
> Via the REST API I can get a listing of all snapshots within a given 
> repository, such as 
>
> "_snapshot/myrepo/_all"
>
> How can I do this via the Java API? It appears I can only specify "known" 
> snapshots via "GetSnapshotsRequest". 
>
> Is there any way via the Java API to get a complete listing without any 
> prior knowledge of snapshot names?
>



java API, get "_all" snapshots in a given repository

2015-04-15 Thread bitsofinfo . g
Hi,
Via the REST API I can get a listing of all snapshots within a given 
repository, such as 

"_snapshot/myrepo/_all"

How can I do this via the Java API? It appears I can only specify "known" 
snapshots via "GetSnapshotsRequest". 

Is there any way via the Java API to get a complete listing without any 
prior knowledge of snapshot names?



java api question

2015-04-08 Thread bitsofinfo . g
Hi,
Just started doing some development with the Java API, and one thing I 
immediately noticed is things like this:

http://javadoc.kyubu.de/elasticsearch/v1.4.2/org/elasticsearch/common/collect/ImmutableOpenMap.html

Why does something like ImmutableOpenMap not implement "Map"? 
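
For anyone else hitting this, a sketch of how it gets consumed instead 
(HPPC-style cursors rather than Map.Entry; method names as I recall them 
from the 1.x source, untested):

import org.elasticsearch.common.collect.ImmutableOpenMap
import scala.collection.JavaConverters._

// Build a small map, then iterate its cursors; there is no entrySet().
val m = ImmutableOpenMap.builder[String, String]().fPut("k", "v").build()
m.iterator().asScala.foreach(c => println(c.key + " -> " + c.value))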



1.3.2 snapshot file system question

2015-01-05 Thread bitsofinfo . g
I have a cluster (1.3.2) of 10 data nodes and 5 master nodes.

I want to take a snapshot of one index. 

I'd like to configure a new "fs" snapshot repository, "mybackupdir", whose 
"location" is ONLY accessible from the node (a master node) I am issuing the 
snapshot creation PUT against. 

Next, if I issue a snapshot PUT for "mybackupdir/backup1" against the 
master node where that location is indeed accessible, will this work? Does 
the node that gets the snapshot request pull all the shard data from the 
data nodes over to itself and write it to the snapshot dir on disk? Or 
does each data node responsible for a shard attempt to write to that 
same location (thereby requiring that the snapshot "location" be 
accessible by all 15 nodes...)?
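
For concreteness, a sketch of the two calls I mean (the path is a 
hypothetical stand-in for the master-only location):

PUT _snapshot/mybackupdir
{
  "type": "fs",
  "settings": { "location": "/mnt/backups/es" }
}

PUT _snapshot/mybackupdir/backup1?wait_for_completion=true
{ "indices": "myindex" }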

I ask this because I have a cluster that spans two data centers, and they 
don't all have access to a globally available NFS share where I could have 
a common mount path for the snapshots root.

thanks



Kibana version 4 architecture

2015-01-05 Thread bitsofinfo . g
Hi 

Starting to experiment with Kibana 4. I see that there is now a server-side 
component that all client requests appear to proxy through?

What is the recommended topology for deploying this for HA? Is there any 
client session state maintained in this server-side process, or is it pretty 
much a stateless proxy?

What does the server-side component actually do now that was offloaded 
from the prior Kibana architecture, where everything was only in the UI?

thanks!



kibana4 architecture

2014-11-13 Thread bitsofinfo . g
Hi 

Starting to experiment with Kibana 4. I see that there is now a server-side 
component that all client requests appear to proxy through?

What is the recommended topology for deploying this for HA? Is there any 
client session state maintained in this server-side process, or is it pretty 
much a stateless proxy?

What does the server-side component actually do now that was offloaded 
from the prior Kibana architecture, where everything was only in the UI?

thanks!



es 1.3.2, filesystem snapshot question

2014-11-12 Thread bitsofinfo . g
I have a cluster (1.3.2) of 10 data nodes and 5 master nodes.

I want to take a snapshot of one index. 

I'd like to configure a new "fs" snapshot repository, "mybackupdir", whose 
"location" is ONLY accessible from the node (a master node) I am issuing the 
snapshot creation PUT against. 

Next, if I issue a snapshot PUT for "mybackupdir/backup1" against the 
master node where that location is indeed accessible, will this work? Does 
the node that gets the snapshot request pull all the shard data from the 
data nodes over to itself and write it to the snapshot dir on disk? Or 
does each data node responsible for a shard attempt to write to that 
same location (thereby requiring that the snapshot "location" be 
accessible by all 15 nodes...)?

I ask this because I have a cluster that spans two data centers, and they 
don't all have access to a globally available NFS share where I could have 
a common mount path for the snapshots root.

thanks



Re: validation failed, source is missing? what does this mean

2014-10-03 Thread bitsofinfo . g
Even if I do it like this, same thing:

DELETE /index/type/_query?term:value
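
(For reference, a sketch of request forms that should pass this validation 
on 1.x; the URI form wants q=, and the body form needs the body actually 
sent, e.g. via curl's -d. Index and field names are from the original post:)

DELETE /myIndex/type/_query?q=term:whatever

curl -XDELETE 'localhost:9200/myIndex/type/_query' -d '{
  "query": { "term": { "term": "whatever" } }
}'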

On Friday, October 3, 2014 11:58:18 AM UTC-6, bitsof...@gmail.com wrote:
>
> I'm doing this through the es-head plugin, not curl
>
> On Friday, October 3, 2014 10:05:42 AM UTC-6, vineeth mohan wrote:
>>
>> Hi , 
>>
>> Can you paste the complete curl query?
>> I see this when I forget to put the -d flag for the data.
>>
>> Thanks
>>Vineeth
>>
>> On Fri, Oct 3, 2014 at 9:04 PM, bitsofinfo . g wrote:
>>
>>> Hi,
>>> What does this error mean (es 1.3.2) when I do a delete by query?
>>>
>>> A DELETE such as /myIndex/type/_query
>>>
>>> {
>>>   "query": {
>>> "term": {
>>>   "term": "whatever"
>>> }
>>>   }
>>> }
>>>
>>> RESPONSE
>>>
>>> {
>>>   "error": "ActionRequestValidationException[Validation Failed: 1: source is missing;]",
>>>   "status": 500
>>> }
>>>
>>
>>



Re: validation failed, source is missing? what does this mean

2014-10-03 Thread bitsofinfo . g
I'm doing this through the es-head plugin, not curl

On Friday, October 3, 2014 10:05:42 AM UTC-6, vineeth mohan wrote:
>
> Hi , 
>
> Can you paste the complete curl query?
> I see this when I forget to put the -d flag for the data.
>
> Thanks
>Vineeth
>
> On Fri, Oct 3, 2014 at 9:04 PM, bitsofinfo . g wrote:
>
>> Hi,
>> What does this error mean (es 1.3.2) when I do a delete by query?
>>
>> A DELETE such as /myIndex/type/_query
>>
>> {
>>   "query": {
>> "term": {
>>   "term": "whatever"
>> }
>>   }
>> }
>>
>> RESPONSE
>>
>> {
>>   "error": "ActionRequestValidationException[Validation Failed: 1: source is missing;]",
>>   "status": 500
>> }
>>
>
>



validation failed, source is missing? what does this mean

2014-10-03 Thread bitsofinfo . g
Hi,
What does this error mean (es 1.3.2) when I do a delete by query?

A DELETE such as /myIndex/type/_query

{
  "query": {
"term": {
  "term": "whatever"
}
  }
}

RESPONSE

{
  "error": "ActionRequestValidationException[Validation Failed: 1: source is missing;]",
  "status": 500
}



Re: clarity for shard allocation disable/enable during upgrade

2014-08-12 Thread bitsofinfo . g
Also, Clinton, the upgrade page states the below. So what you are 
saying is that, re-enabling allocation after each node is restarted (going 
from 1.2.1 to 1.3.1), the incompatibility below *will not* apply, because 
shards would be going from 1.2.1 to 1.3.1 rather than the reverse. Correct?

"Running multiple versions of Elasticsearch in the same cluster for any 
length of time beyond that required for an upgrade is not supported, as 
shard replication from the more recent version to the previous versions 
will not work."

On Tuesday, August 12, 2014 4:04:28 AM UTC-4, Clinton Gormley wrote:
>
>
>
> On Monday, 11 August 2014 15:31:28 UTC+2, bitsof...@gmail.com wrote:
>>
>> I have 8 data nodes and 6 coordinator nodes in an active cluster running 
>> 1.2.1
>>
>> I want to upgrade to 1.3.1
>>
>> When reading the upgrade docs at 
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-upgrade.html 
>> am I correct to assume:
>>
>> a) disable shard allocation before doing anything
>>
>> b) proceed to upgrade each node to 1.3.1
>>
>> c) only after ALL nodes are @ 1.3.1 then I can re-enable shard allocation.
>>
>> My question is that at some point during the upgrade of all the data 
>> nodes, the shards on them will be "unassigned" and the cluster will not 
>> function... correct?
>>
>> So in other words running some nodes as 1.2.1 and others as 1.3.1 with 
>> shard allocation *enabled* is NOT advised and in general cluster 
>> un-availability is expected due to shards being in an unassigned state as 
>> each data node is upgraded.
>>
>> At least this is the behavior I see today (not during an upgrade): when I 
>> disable allocation and restart a node, those shards are unassigned until I 
>> re-enable allocation.
>>
>>
> No, the procedure outlined above is not correct and would indeed result in 
> unassigned shards, as you suspect.  Instead, you should:
>
> 1. Disable allocation
> 2. Upgrade ONE node
> 3. Reenable allocation
> 4. Wait for green
> 5. Repeat
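>
> In curl form, a minimal sketch of that loop (host hypothetical; the 
> cluster.routing.allocation.enable setting exists from 1.0 onward):
>
> curl -XPUT 'localhost:9200/_cluster/settings' -d '{
>   "transient": { "cluster.routing.allocation.enable": "none" }
> }'
> # upgrade and restart ONE node, then:
> curl -XPUT 'localhost:9200/_cluster/settings' -d '{
>   "transient": { "cluster.routing.allocation.enable": "all" }
> }'
> curl 'localhost:9200/_cluster/health?wait_for_status=green'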
>
> Even when following the above process, you will likely end up with shards 
> being copied over from one node to another (once allocation has been 
> reenabled).  After restart, a replica will only reuse the segments that are 
> exactly the same as those in the primary.  However, because primaries and 
> replicas refresh, flush, and merge at different times, shards diverge from 
> each other over time. The longer it has been since a replica was copied 
> over from the primary, the fewer identical segments they will have in 
> common.
>
>



Re: clarity for shard allocation disable/enable during upgrade

2014-08-12 Thread bitsofinfo . g
Mark - isn't the shard allocation all/none a cluster-wide setting? Hence 
why it applies to all nodes?

Clinton - what you said makes sense. However, if that procedure is incorrect, 
then the official upgrade page on the elasticsearch site should be changed, 
as it states:

"When the process is complete on all nodes, you can re-enable shard 
reallocation"


On Tuesday, August 12, 2014 4:04:28 AM UTC-4, Clinton Gormley wrote:
>
>
>
> On Monday, 11 August 2014 15:31:28 UTC+2, bitsof...@gmail.com wrote:
>>
>> I have 8 data nodes and 6 coordinator nodes in an active cluster running 
>> 1.2.1
>>
>> I want to upgrade to 1.3.1
>>
>> When reading the upgrade docs at 
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-upgrade.html 
>> am I correct to assume:
>>
>> a) disable shard allocation before doing anything
>>
>> b) proceed to upgrade each node to 1.3.1
>>
>> c) only after ALL nodes are @ 1.3.1 then I can re-enable shard allocation.
>>
>> My question is that at some point during the upgrade of all the data 
>> nodes, the shards on them will be "unassigned" and the cluster will not 
>> function... correct?
>>
>> So in other words running some nodes as 1.2.1 and others as 1.3.1 with 
>> shard allocation *enabled* is NOT advised and in general cluster 
>> un-availability is expected due to shards being in an unassigned state as 
>> each data node is upgraded.
>>
>> At least this is the behavior I see today (not during an upgrade): when I 
>> disable allocation and restart a node, those shards are unassigned until I 
>> re-enable allocation.
>>
>>
> No, the procedure outlined above is not correct and would indeed result in 
> unassigned shards, as you suspect.  Instead, you should:
>
> 1. Disable allocation
> 2. Upgrade ONE node
> 3. Reenable allocation
> 4. Wait for green
> 5. Repeat
>
> Even when following the above process, you will likely end up with shards 
> being copied over from one node to another (once allocation has been 
> reenabled).  After restart, a replica will only reuse the segments that are 
> exactly the same as those in the primary.  However, because primaries and 
> replicas refresh, flush, and merge at different times, shards diverge from 
> each other over time. The longer it has been since a replica was copied 
> over from the primary, the fewer identical segments they will have in 
> common.
>
>



clarity for shard allocation disable/enable during upgrade

2014-08-11 Thread bitsofinfo . g
I have 8 data nodes and 6 coordinator nodes in an active cluster running 
1.2.1

I want to upgrade to 1.3.1

When reading the upgrade docs at 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-upgrade.html 
am I correct to assume:

a) disable shard allocation before doing anything

b) proceed to upgrade each node to 1.3.1

c) only after ALL nodes are @ 1.3.1 then I can re-enable shard allocation.

My question is that at some point during the upgrade of all the data nodes, 
the shards on them will be "unassigned" and the cluster will not 
function... correct?

So in other words running some nodes as 1.2.1 and others as 1.3.1 with 
shard allocation *enabled* is NOT advised and in general cluster 
un-availability is expected due to shards being in an unassigned state as 
each data node is upgraded.

At least this is the behavior I see today (not during an upgrade): when I 
disable allocation and restart a node, those shards are unassigned until I 
re-enable allocation.




log index creation API requests

2014-07-30 Thread bitsofinfo . g
Hi - any tips on how I should configure the logging.yml file to get more 
verbose output, including the source IP address if possible, when an index 
is created?
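
A sketch of the kind of logging.yml entry I mean (my assumption: the 
"creating index" messages come from classes under 
org.elasticsearch.cluster.metadata, so the trimmed logger name would be 
cluster.metadata; I don't believe the caller's IP is part of that line):

logger:
  cluster.metadata: DEBUG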



does minimum_master_nodes include ones "self"?

2014-06-16 Thread bitsofinfo . g
running 1.2.1

If a cluster has 3 master-eligible nodes, and one node dies leaving nodeA 
and nodeB, with minimum_master_nodes = 2:

Does nodeA, when up, include itself when evaluating minimum_master_nodes?
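
For context, the setting in question as it would appear in elasticsearch.yml, 
set per the usual quorum formula, (master_eligible_nodes / 2) + 1, which is 2 
for 3 master-eligible nodes:

discovery.zen.minimum_master_nodes: 2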




Any fix timeline for split brain issue: 2488

2014-01-07 Thread bitsofinfo . g
Hi, is there any timeline on a fix 
for https://github.com/elasticsearch/elasticsearch/issues/2488 ?

thanks!



multi-datacenter and issue 2488

2014-01-06 Thread bitsofinfo . g
Hi - we are trying to get Elasticsearch to work in a multi-datacenter 
deployment, and our desired setup is pretty much what the OP describes (and 
we get the same split brain behavior) in this outstanding issue at 
https://github.com/elasticsearch/elasticsearch/issues/2488

What's the roadmap for resolving this? My use case is with logstash; we would 
prefer admins only have to go to one place to see log data across all of 
our DCs, rather than having separate indices/clusters for each DC's 
logstash data due to this split brain issue.

thanks

