Re: Unassigned Shards Of A Newer Elasticsearch Version

2014-12-01 Thread Umutcan
Yes, I am sure that there is enough disk space. Also, I have confirmed 
before that I can solve this by upgrading the other nodes.


On 01-12-2014 10:24, David Pilato wrote:

Are you sure you are not running low on disk space?

If you have less than 15% free, elasticsearch won’t allocate replicas.



--
*David Pilato* | /Technical Advocate/ | *Elasticsearch.com*
@dadoonet | @elasticsearchfr | @scrutmydocs




On 1 Dec 2014, at 09:21, Umutcan wrote:


Hi,

When I was doing some experiments on my Elasticsearch cluster, I 
found that the cluster cannot assign replicas of shards to the node 
that has a newer version of Elasticsearch (1.4.1). I have seen this 
before, and I can fix it by upgrading the old nodes. But I am wondering 
if there is a way to fix this issue without an upgrade.


Does anyone have an idea or a solution?

Thanks,

Umutcan

PS: Other nodes in the cluster have Elasticsearch 1.4.



Re: Unassigned Shards Of A Newer Elasticsearch Version

2014-12-01 Thread David Pilato
Are you sure you are not running low on disk space?

If you have less than 15% free, elasticsearch won’t allocate replicas.
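
That 15% figure corresponds to the default low watermark of the disk-based 
allocation decider in ES 1.x (cluster.routing.allocation.disk.watermark.low, 
85% used). A minimal sketch of how to check per-node disk use, and how to 
relax the watermark temporarily via a transient cluster setting, assuming a 
node on localhost:9200:

curl 'localhost:9200/_cat/allocation?v'

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : { "cluster.routing.allocation.disk.watermark.low" : "90%" }
}'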



-- 
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr | @scrutmydocs




> On 1 Dec 2014, at 09:21, Umutcan wrote:
> 
> Hi,
> 
> When I was doing some experiments on my Elasticsearch cluster, I found that 
> the cluster cannot assign replicas of shards to the node that has a newer 
> version of Elasticsearch (1.4.1). I have seen this before, and I can fix this 
> by upgrading the old nodes. But I am wondering if there is a way to fix 
> this issue without an upgrade.
> 
> Does anyone have an idea or a solution?
> 
> Thanks,
> 
> Umutcan
> 
> PS: Other nodes in the cluster have Elasticsearch 1.4.
> 


Unassigned Shards Of A Newer Elasticsearch Version

2014-12-01 Thread Umutcan

Hi,

When I was doing some experiments on my Elasticsearch cluster, I found 
that the cluster cannot assign replicas of shards to the node that has a 
newer version of Elasticsearch (1.4.1). I have seen this before, and I 
can fix it by upgrading the old nodes. But I am wondering if there is a 
way to fix this issue without an upgrade.


Does anyone have an idea or a solution?

Thanks,

Umutcan

PS: Other nodes in the cluster have Elasticsearch 1.4.
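
A quick way to confirm which node is running which version, assuming ES 1.x 
where the cat API is available:

curl 'localhost:9200/_cat/nodes?v&h=host,name,version'

Replica allocation is version-aware: a replica whose primary sits on a 1.4.1 
node will not be placed on a 1.4 node, since older nodes may not be able to 
read segments written by newer ones, which is why upgrading the remaining 
nodes resolves it.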



Issues with ES 1.4 Status Red, Unassigned shards, etc..

2014-11-10 Thread Jeremy Viteka
I recently began having issues with my ELK box after a rebuild to the 
latest Beta and I have not been receiving logs like normal. Here is my 
status and attempt to reassign a shard. Any help would be appreciated. 
Thanks!

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 101,
  "active_shards" : 101,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 111


curl -XPOST -d '{ "commands" : [ { "allocate" : { "index" : 
"logstash-2014.10.30", "shard" : 2, "node" : "My Node" } } ] }' 
http://localhost:9200/_cluster/reroute?pretty
{
  "error" : "ElasticsearchIllegalArgumentException[[allocate] allocation of 
[logstash-2014.10.30][2] on node 
[MyNode][Z-5tPNJWTvyyHXZ6n8uLBg][server.mydomain.us][inet[localhost/127.0.0.1:9300]]
 
is not allowed, reason: [NO(shard cannot be allocated on same node 
[Z-5tPNJWTvyyHXZ6n8uLBg] it already exists on)][YES(node passes 
include/exclude/require filters)][YES(primary is already active)][YES(below 
shard recovery limit of [2])][YES(allocation disabling is 
ignored)][YES(allocation disabling is ignored)][YES(no allocation awareness 
enabled)][YES(total shard limit disabled: [-1] <= 0)][YES(target node 
version [1.4.0.Beta1] is same or newer than source node version 
[1.4.0.Beta1])][YES(only a single node is present)][YES(shard not primary 
or relocation disabled)]]",
  "status" : 400
}
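
The decider output shows the primary for [logstash-2014.10.30][2] is already 
active on the node, so the copy being placed is a replica, and a replica can 
never share a node with its primary. On a one-node cluster the simplest fix is 
usually to drop replicas entirely; a sketch (this applies to all indices):

curl -XPUT 'localhost:9200/_settings' -d '{ "index" : { "number_of_replicas" : 0 } }'

For genuinely lost primaries (what keeps the status red rather than yellow), 
the 1.x reroute allocate command also accepts an allow_primary flag; note it 
can discard whatever data that shard held, so treat it as a last resort:

curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{ "commands" : [
  { "allocate" : { "index" : "logstash-2014.10.30", "shard" : 2,
    "node" : "My Node", "allow_primary" : true } } ] }'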



Re: Unassigned Shards and Upgrading Elasticsearch

2014-11-06 Thread Mark Walkom
You shouldn't run multiple versions unless you are upgrading, so it would
make sense to upgrade the other nodes ASAP.

The logs on your nodes should also shed some more light on the problem.

On 6 November 2014 19:29, Umutcan  wrote:

> Hi,
>
> Yesterday I added a new node to my ES cluster, and after that some of
> the shards started to remain unassigned. These shards are the replicas of
> the shards on the new node. When I inspected the reason, I found out that
> this new node has a different ES version. The new one is 1.3.4 and the
> older ones are 1.3.2.
>
> Does this difference cause the problem? If I upgrade the older ones, does
> this solve the issue?
>
>
> I installed ES via apt-get on Ubuntu. If I upgrade the current nodes via
> apt-get, does this cause problems with old indices?
>
> Thanks,
>
> Umutcan
>


Unassigned Shards and Upgrading Elasticsearch

2014-11-06 Thread Umutcan

Hi,

Yesterday I added a new node to my ES cluster, and after that some 
of the shards started to remain unassigned. These shards are the 
replicas of the shards on the new node. When I inspected the reason, I 
found out that this new node has a different ES version. The new one is 
1.3.4 and the older ones are 1.3.2.

Does this difference cause the problem? If I upgrade the older ones, 
does this solve the issue?

I installed ES via apt-get on Ubuntu. If I upgrade the current nodes via 
apt-get, does this cause problems with old indices?


Thanks,

Umutcan



Re: list Unassigned shards by index by node

2014-10-13 Thread PrasathRajan
Try this...

curl localhost:9200/_cat/shards
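
To list only the problem shards, the same output can be filtered; a sketch, 
assuming a node on localhost:

curl -s localhost:9200/_cat/shards | grep UNASSIGNED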





Indexing stops when too many unassigned shards

2014-10-11 Thread Veselin Kantsev
Hello,
I have a test cluster of 2 ES nodes (Elasticsearch 1.1.1-1).
I've noticed that whenever the number of unassigned shards increases past a 
threshold, the cluster stops accepting Write operations.


*Example1*:
 nodes: 2
 primary shards: 6
 replicas: 1

The cluster works as expected.

*Example2*:
 nodes: 2
 primary shards: 6
 replicas: 2

Cluster state: 6 unassigned shards. The cluster works as expected.

*Example3*:
 nodes: 2
 primary shards: 6
 replicas: 3

Cluster state: 12 unassigned shards. The cluster serves Read operations, 
but not Write. 
Error: {"error":"UnavailableShardsException[[myIndex][0] [4] shardIt, [2] 
active : Timeout waiting for [59.9s], request: index 
{[myIndex][myType][1G_yO1Y8TvGzaw9DH2Nbog], source[{ \"name\" : \"Earl\" 
}]}]","status":503}
Indexing resumes if I reduce the number of replicas back to 2.



So, anything above 2 replicas seems to cause the 2-node cluster to stop 
indexing.
Is this the expected behaviour?
Is there a max number of unassigned shards allowed?

Thank you,
Veselin
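
This looks like the default write consistency check in ES 1.x: an index 
operation waits for a quorum of shard copies, int((1 + number_of_replicas) / 2) + 1, 
to be active. With 2 replicas that is 2 of 3 copies, which two nodes can 
provide; with 3 replicas it is 3 of 4 copies, which they cannot, hence the 
"[4] shardIt, [2] active" timeout above. A sketch of a per-request workaround, 
reusing the index and document from the error message:

curl -XPUT 'localhost:9200/myIndex/myType/1?consistency=one' -d '{ "name" : "Earl" }'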



Unassigned shards after updating # of replica

2014-08-11 Thread Yaniv Yancovich


Hi,
I am using ES 0.90.13 with 10 clusters. Java Sun 1.7.53.

We found out that the number of replicas on our ES clusters was zero, 
so last week we increased the number of replicas on ES2 to 1. 
Since then the cluster has been flapping between yellow and green: it runs 
fine for several hours, then turns yellow (with X unassigned shards), and 
after a few hours it turns green again.

Most of the time it recovers to green, but from time to time it gets stuck 
on 2 unassigned shards.

>From the logs:

[index.shard.service ] [es2-s01] [ds7][3] suspect illegal state: trying 
to move shard from primary mode to replica mode

[2014-08-11 00:33:10,384][WARN ][cluster.action.shard ] [es2-s02] 
[ds6765431_1_s][1] sending failed shard for [ds6765431_1_s][1], 
node[M34ta1aGTv-S4ahkrXK5DA], [R], s[STARTED], indexUUID 
[4T8zV8r7RUu8UkvQYsoz3w], reason [master 
[es2-s03][4rJC0nkVR1SgZhAcOK2MZw][inet[/10.1.112.3:9300]]{data_center=ny4, 
max_local_storage_nodes=1, master=true} marked shard as started, but shard 
has not been created, mark shard as failed]

This is a production cluster. Please assist.
Yaniv
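
To pin down exactly which shards are stuck, the cluster health API can report 
per-shard detail; a sketch that should work on 0.90, assuming a node on 
localhost:

curl 'localhost:9200/_cluster/health?level=shards&pretty=true'

This lists per-index and per-shard state, which makes it easier to match the 
failed-shard warnings in the log to specific shards.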



Re: Red status unassigned shards help

2014-06-03 Thread Jason Weber
Mark, I appreciate the response; I will look into both!


On Fri, May 30, 2014 at 5:47 PM, Mark Walkom 
wrote:

> You can set the replicas for an index using the API (or kopf).
>
> As for your upgrade concerns, see
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-upgrade.html
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 31 May 2014 00:15, Jason Weber  wrote:
>
>> Thanks Mark and pawan,
>>
>> Here is my output from netstat:
>>
>> tcp6   0  0 :::9200 :::*
>> LISTEN  1155/java
>>
>> Mark are you talking about upgrading to the latest 0.9 or to 1.x.x?
>> Still waiting on a good method to go to the latest 1.x in ES without
>> messing up a bunch of stuff. Still in development but don't want to lose my
>> data.
>>
>> I think you are right about the replica set, I read about a setting I
>> need to change in elasticsearch.yml, I will see if I can find that doc.
>> Also will install kopf. Thanks again for the help!
>>
>>
>> On Friday, May 30, 2014 12:18:47 AM UTC-4, Mark Walkom wrote:
>>>
>>> It could also be the integrated elasticsearch output in LS, which adds
>>> the LS instance as a client node to the cluster.
>>> And you probably don't want to kill that.
>>>
>>> Regards,
>>> Mark Walkom
>>>
>>> Infrastructure Engineer
>>> Campaign Monitor
>>> email: ma...@campaignmonitor.com
>>> web: www.campaignmonitor.com
>>>
>>>
>>> On 30 May 2014 14:11, Pawan Sharma  wrote:
>>>
>>>> Another instance of elasticsearch has been started on the node, so the
>>>> solution is first to find the PID of the other instance of es:
>>>>
>>>>
>>>> *netstat -lnp | grep 920*
>>>> and kill that PID if another es has started on port 9201
>>>>
>>>> Thanks
>>>>
>>>>
>>>>> On Fri, May 30, 2014 at 4:03 AM, Mark Walkom wrote:
>>>>
>>>>> Install a visual monitoring plugin like kopf and ElasticHQ, you will
>>>>> be able to see which shards are unassigned.
>>>>> However I think you may have replicas set, which, given you only have
>>>>> one node, will always result in a yellow state as the cluster cannot assign
>>>>> replicas to another node.
>>>>>
>>>>> You should also upgrade ES to a newer version if you can :)
>>>>>
>>>>> Regards,
>>>>> Mark Walkom
>>>>>
>>>>> Infrastructure Engineer
>>>>> Campaign Monitor
>>>>> email: ma...@campaignmonitor.com
>>>>> web: www.campaignmonitor.com
>>>>>
>>>>>
>>>>> On 29 May 2014 23:45, Jason Weber  wrote:
>>>>>
>>>>>> I rebooted several times and I believe its collecting the correct
>>>>>> data now. I still show 520 unassigned shards, but its collecting all my
>>>>>> logs now. Is this something I can use the redirect command for to assign 
>>>>>> it
>>>>>> to a new index?
>>>>>>
>>>>>> Jason
>>>>>>
>>>>>> On Tuesday, May 27, 2014 11:39:49 AM UTC-4, Jason Weber wrote:
>>>>>>>
>>>>>>> Could someone walk me through getting my cluster up and running.
>>>>>>> Came in from long weekend and my cluster was red status, I am showing a 
>>>>>>> lot
>>>>>>> of unassigned shards.
>>>>>>>
>>>>>>> jmweber@MIDLOG01:/var/log/logstash$ curl localhost:9200/_cluster/
>>>>>>> health?pretty
>>>>>>> {
>>>>>>>   "cluster_name" : "midlogcluster",
>>>>>>>   "status" : "red",
>>>>>>>   "timed_out" : false,
>>>>>>>   "number_of_nodes" : 2,
>>>>>>>   "number_of_data_nodes" : 1,
>>>>>>>   "active_primary_shards" : 512,
>>>>>>>   "active_shards" : 512,
>>>>>>>   "relocating_shards" : 0,
>>>>>>>   "initializing_shards" : 0,
>>>>>>>   "unassigned_shards" : 520
>>>>>>> }

Re: Red status unassigned shards help

2014-05-30 Thread Mark Walkom
You can set the replicas for an index using the API (or kopf).
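
A minimal sketch of that settings call; the index name here is just a 
placeholder, and on a one-node cluster replicas can simply be set to 0:

curl -XPUT 'localhost:9200/logstash-2014.05.27/_settings' -d '
{ "index" : { "number_of_replicas" : 0 } }'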

As for your upgrade concerns, see
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-upgrade.html

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 31 May 2014 00:15, Jason Weber  wrote:

> Thanks Mark and pawan,
>
> Here is my output from netstat:
>
> tcp6   0  0 :::9200 :::*
> LISTEN  1155/java
>
> Mark are you talking about upgrading to the latest 0.9 or to 1.x.x? Still
> waiting on a good method to go to the latest 1.x in ES without messing up
> a bunch of stuff. Still in development but don't want to lose my data.
>
> I think you are right about the replica set, I read about a setting I need
> to change in elasticsearch.yml, I will see if I can find that doc. Also
> will install kopf. Thanks again for the help!
>
>
> On Friday, May 30, 2014 12:18:47 AM UTC-4, Mark Walkom wrote:
>>
>> It could also be the integrated elasticsearch output in LS, which adds
>> the LS instance as a client node to the cluster.
>> And you probably don't want to kill that.
>>
>> Regards,
>> Mark Walkom
>>
>> Infrastructure Engineer
>> Campaign Monitor
>> email: ma...@campaignmonitor.com
>> web: www.campaignmonitor.com
>>
>>
>> On 30 May 2014 14:11, Pawan Sharma  wrote:
>>
>>> Another instance of elasticsearch has been started on the node, so the
>>> solution is first to find the PID of the other instance of es:
>>>
>>>
>>> *netstat -lnp | grep 920*
>>> and kill that PID if another es has started on port 9201
>>>
>>> Thanks
>>>
>>>
>>> On Fri, May 30, 2014 at 4:03 AM, Mark Walkom 
>>> wrote:
>>>
>>>> Install a visual monitoring plugin like kopf and ElasticHQ, you will be
>>>> able to see which shards are unassigned.
>>>> However I think you may have replicas set, which, given you only have
>>>> one node, will always result in a yellow state as the cluster cannot assign
>>>> replicas to another node.
>>>>
>>>> You should also upgrade ES to a newer version if you can :)
>>>>
>>>> Regards,
>>>> Mark Walkom
>>>>
>>>> Infrastructure Engineer
>>>> Campaign Monitor
>>>> email: ma...@campaignmonitor.com
>>>> web: www.campaignmonitor.com
>>>>
>>>>
>>>> On 29 May 2014 23:45, Jason Weber  wrote:
>>>>
>>>>> I rebooted several times and I believe its collecting the correct data
>>>>> now. I still show 520 unassigned shards, but its collecting all my logs
>>>>> now. Is this something I can use the redirect command for to assign it to 
>>>>> a
>>>>> new index?
>>>>>
>>>>> Jason
>>>>>
>>>>> On Tuesday, May 27, 2014 11:39:49 AM UTC-4, Jason Weber wrote:
>>>>>>
>>>>>> Could someone walk me through getting my cluster up and running. Came
>>>>>> in from long weekend and my cluster was red status, I am showing a lot of
>>>>>> unassigned shards.
>>>>>>
>>>>>> jmweber@MIDLOG01:/var/log/logstash$ curl localhost:9200/_cluster/
>>>>>> health?pretty
>>>>>> {
>>>>>>   "cluster_name" : "midlogcluster",
>>>>>>   "status" : "red",
>>>>>>   "timed_out" : false,
>>>>>>   "number_of_nodes" : 2,
>>>>>>   "number_of_data_nodes" : 1,
>>>>>>   "active_primary_shards" : 512,
>>>>>>   "active_shards" : 512,
>>>>>>   "relocating_shards" : 0,
>>>>>>   "initializing_shards" : 0,
>>>>>>   "unassigned_shards" : 520
>>>>>> }
>>>>>>
>>>>>>
>>>>>> I am running ES 0.90.11
>>>>>>
>>>>>> LS and ES are on a single server, I only have 1 node, although it
>>>>>> shows 2, I get yellow status normally, it works fine with that. But I am
>>>>>> only collecting like 43 events per minute vs my usual 50K.
>>>>>>
>>>>>> I have seen several write ups but I seem to get a lot of no handler
>>>>>> found for uri statements when I try to run them.

Re: Red status unassigned shards help

2014-05-30 Thread Jason Weber
Thanks Mark and Pawan,

Here is my output from netstat:

tcp6   0  0 :::9200 :::*
LISTEN  1155/java  

Mark are you talking about upgrading to the latest 0.9 or to 1.x.x? Still 
waiting on a good method to go to the latest 1.x in ES without messing up 
a bunch of stuff. Still in development but don't want to lose my data.

I think you are right about the replica set, I read about a setting I need 
to change in elasticsearch.yml, I will see if I can find that doc. Also 
will install kopf. Thanks again for the help!


On Friday, May 30, 2014 12:18:47 AM UTC-4, Mark Walkom wrote:
>
> It could also be the integrated elasticsearch output in LS, which adds the 
> LS instance as a client node to the cluster.
> And you probably don't want to kill that.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>  
>
> On 30 May 2014 14:11, Pawan Sharma wrote:
>
>> Another instance of elasticsearch has been started on the node, so the 
>> solution is first to find the PID of the other instance of es:
>>
>>
>> *netstat -lnp | grep 920*
>> and kill that PID if another es has started on port 9201
>>
>> Thanks
>>
>>
>> On Fri, May 30, 2014 at 4:03 AM, Mark Walkom wrote:
>>
>>> Install a visual monitoring plugin like kopf and ElasticHQ, you will be 
>>> able to see which shards are unassigned.
>>> However I think you may have replicas set, which, given you only have 
>>> one node, will always result in a yellow state as the cluster cannot assign 
>>> replicas to another node.
>>>
>>> You should also upgrade ES to a newer version if you can :)
>>>
>>> Regards,
>>> Mark Walkom
>>>
>>> Infrastructure Engineer
>>> Campaign Monitor
>>> email: ma...@campaignmonitor.com 
>>> web: www.campaignmonitor.com
>>>  
>>>
>>> On 29 May 2014 23:45, Jason Weber wrote:
>>>
>>>> I rebooted several times and I believe its collecting the correct data 
>>>> now. I still show 520 unassigned shards, but its collecting all my logs 
>>>> now. Is this something I can use the redirect command for to assign it to 
>>>> a 
>>>> new index?
>>>>
>>>> Jason
>>>>
>>>> On Tuesday, May 27, 2014 11:39:49 AM UTC-4, Jason Weber wrote:
>>>>>
>>>>> Could someone walk me through getting my cluster up and running. Came 
>>>>> in from long weekend and my cluster was red status, I am showing a lot of 
>>>>> unassigned shards.
>>>>>
>>>>> jmweber@MIDLOG01:/var/log/logstash$ curl localhost:9200/_cluster/
>>>>> health?pretty
>>>>> {
>>>>>   "cluster_name" : "midlogcluster",
>>>>>   "status" : "red",
>>>>>   "timed_out" : false,
>>>>>   "number_of_nodes" : 2,
>>>>>   "number_of_data_nodes" : 1,
>>>>>   "active_primary_shards" : 512,
>>>>>   "active_shards" : 512,
>>>>>   "relocating_shards" : 0,
>>>>>   "initializing_shards" : 0,
>>>>>   "unassigned_shards" : 520
>>>>> }
>>>>>
>>>>>
>>>>> I am running ES 0.90.11
>>>>>
>>>>> LS and ES are on a single server, I only have 1 node, although it 
>>>>> shows 2, I get yellow status normally, it works fine with that. But I am 
>>>>> only collecting like 43 events per minute vs my usual 50K.
>>>>>
>>>>> I have seen several write ups but I seem to get a lot of no handler 
>>>>> found for uri statements when I try to run them.
>>>>>
>>>>> Thanks,
>>>>> Jason
>>>>>

Re: Red status unassigned shards help

2014-05-29 Thread Mark Walkom
It could also be the integrated elasticsearch output in LS, which adds the
LS instance as a client node to the cluster.
And you probably don't want to kill that.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 30 May 2014 14:11, Pawan Sharma  wrote:

> Another instance of elasticsearch has been started on the node, so the solution
> is first to find the PID of the other instance of es:
>
>
> *netstat -lnp | grep 920*
> and kill that PID if another es has started on port 9201
>
> Thanks
>
>
> On Fri, May 30, 2014 at 4:03 AM, Mark Walkom 
> wrote:
>
>> Install a visual monitoring plugin like kopf and ElasticHQ, you will be
>> able to see which shards are unassigned.
>> However I think you may have replicas set, which, given you only have one
>> node, will always result in a yellow state as the cluster cannot assign
>> replicas to another node.
>>
>> You should also upgrade ES to a newer version if you can :)
>>
>> Regards,
>> Mark Walkom
>>
>> Infrastructure Engineer
>> Campaign Monitor
>> email: ma...@campaignmonitor.com
>> web: www.campaignmonitor.com
>>
>>
>> On 29 May 2014 23:45, Jason Weber  wrote:
>>
>>> I rebooted several times and I believe its collecting the correct data
>>> now. I still show 520 unassigned shards, but its collecting all my logs
>>> now. Is this something I can use the redirect command for to assign it to a
>>> new index?
>>>
>>> Jason
>>>
>>> On Tuesday, May 27, 2014 11:39:49 AM UTC-4, Jason Weber wrote:
>>>>
>>>> Could someone walk me through getting my cluster up and running. Came
>>>> in from long weekend and my cluster was red status, I am showing a lot of
>>>> unassigned shards.
>>>>
>>>> jmweber@MIDLOG01:/var/log/logstash$ curl localhost:9200/_cluster/
>>>> health?pretty
>>>> {
>>>>   "cluster_name" : "midlogcluster",
>>>>   "status" : "red",
>>>>   "timed_out" : false,
>>>>   "number_of_nodes" : 2,
>>>>   "number_of_data_nodes" : 1,
>>>>   "active_primary_shards" : 512,
>>>>   "active_shards" : 512,
>>>>   "relocating_shards" : 0,
>>>>   "initializing_shards" : 0,
>>>>   "unassigned_shards" : 520
>>>> }
>>>>
>>>>
>>>> I am running ES 0.90.11
>>>>
>>>> LS and ES are on a single server, I only have 1 node, although it shows
>>>> 2, I get yellow status normally, it works fine with that. But I am only
>>>> collecting like 43 events per minute vs my usual 50K.
>>>>
>>>> I have seen several write ups but I seem to get a lot of no handler
>>>> found for uri statements when I try to run them.
>>>>
>>>> Thanks,
>>>> Jason
>>>>


Re: Red status unassigned shards help

2014-05-29 Thread Pawan Sharma
Another instance of elasticsearch has been started on the node, so the solution
is first to find the PID of the other instance of es:


*netstat -lnp | grep 920*
and kill that PID if another es has started on port 9201.

Thanks


On Fri, May 30, 2014 at 4:03 AM, Mark Walkom 
wrote:

> Install a visual monitoring plugin like kopf and ElasticHQ, you will be
> able to see which shards are unassigned.
> However I think you may have replicas set, which, given you only have one
> node, will always result in a yellow state as the cluster cannot assign
> replicas to another node.
>
> You should also upgrade ES to a newer version if you can :)
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 29 May 2014 23:45, Jason Weber  wrote:
>
>> I rebooted several times and I believe its collecting the correct data
>> now. I still show 520 unassigned shards, but its collecting all my logs
>> now. Is this something I can use the redirect command for to assign it to a
>> new index?
>>
>> Jason
>>
>> On Tuesday, May 27, 2014 11:39:49 AM UTC-4, Jason Weber wrote:
>>>
>>> Could someone walk me through getting my cluster up and running. Came in
>>> from long weekend and my cluster was red status, I am showing a lot of
>>> unassigned shards.
>>>
>>> jmweber@MIDLOG01:/var/log/logstash$ curl localhost:9200/_cluster/
>>> health?pretty
>>> {
>>>   "cluster_name" : "midlogcluster",
>>>   "status" : "red",
>>>   "timed_out" : false,
>>>   "number_of_nodes" : 2,
>>>   "number_of_data_nodes" : 1,
>>>   "active_primary_shards" : 512,
>>>   "active_shards" : 512,
>>>   "relocating_shards" : 0,
>>>   "initializing_shards" : 0,
>>>   "unassigned_shards" : 520
>>> }
>>>
>>>
>>> I am running ES 0.90.11
>>>
>>> LS and ES are on a single server, I only have 1 node, although it shows
>>> 2, I get yellow status normally, it works fine with that. But I am only
>>> collecting like 43 events per minute vs my usual 50K.
>>>
>>> I have seen several write ups but I seem to get a lot of no handler
>>> found for uri statements when I try to run them.
>>>
>>> Thanks,
>>> Jason
>>>


Re: Red status unassigned shards help

2014-05-29 Thread Mark Walkom
Install a visual monitoring plugin like kopf and ElasticHQ, you will be
able to see which shards are unassigned.
However I think you may have replicas set, which, given you only have one
node, will always result in a yellow state as the cluster cannot assign
replicas to another node.

You should also upgrade ES to a newer version if you can :)

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 29 May 2014 23:45, Jason Weber  wrote:

> I rebooted several times and I believe its collecting the correct data
> now. I still show 520 unassigned shards, but its collecting all my logs
> now. Is this something I can use the redirect command for to assign it to a
> new index?
>
> Jason
>
> On Tuesday, May 27, 2014 11:39:49 AM UTC-4, Jason Weber wrote:
>>
>> Could someone walk me through getting my cluster up and running. Came in
>> from long weekend and my cluster was red status, I am showing a lot of
>> unassigned shards.
>>
>> jmweber@MIDLOG01:/var/log/logstash$ curl localhost:9200/_cluster/
>> health?pretty
>> {
>>   "cluster_name" : "midlogcluster",
>>   "status" : "red",
>>   "timed_out" : false,
>>   "number_of_nodes" : 2,
>>   "number_of_data_nodes" : 1,
>>   "active_primary_shards" : 512,
>>   "active_shards" : 512,
>>   "relocating_shards" : 0,
>>   "initializing_shards" : 0,
>>   "unassigned_shards" : 520
>> }
>>
>>
>> I am running ES 0.90.11
>>
>> LS and ES are on a single server, I only have 1 node, although it shows
>> 2, I get yellow status normally, it works fine with that. But I am only
>> collecting like 43 events per minute vs my usual 50K.
>>
>> I have seen several write ups but I seem to get a lot of no handler found
>> for uri statements when I try to run them.
>>
>> Thanks,
>> Jason
>>


Re: Red status unassigned shards help

2014-05-29 Thread Jason Weber
I rebooted several times and I believe it's collecting the correct data now. 
I still show 520 unassigned shards, but it's collecting all my logs now. Is 
this something I can use the redirect command for, to assign them to a new 
index?

Jason

On Tuesday, May 27, 2014 11:39:49 AM UTC-4, Jason Weber wrote:
>
> Could someone walk me through getting my cluster up and running. Came in 
> from long weekend and my cluster was red status, I am showing a lot of 
> unassigned shards.
>
> jmweber@MIDLOG01:/var/log/logstash$ curl 
> localhost:9200/_cluster/health?pretty
> {
>   "cluster_name" : "midlogcluster",
>   "status" : "red",
>   "timed_out" : false,
>   "number_of_nodes" : 2,
>   "number_of_data_nodes" : 1,
>   "active_primary_shards" : 512,
>   "active_shards" : 512,
>   "relocating_shards" : 0,
>   "initializing_shards" : 0,
>   "unassigned_shards" : 520
> }
>
>
> I am running ES 0.90.11
>
> LS and ES are on a single server, I only have 1 node, although it shows 2, 
> I get yellow status normally, it works fine with that. But I am only 
> collecting like 43 events per minute vs my usual 50K.
>
> I have seen several write ups but I seem to get a lot of no handler found 
> for uri statements when I try to run them.
>
> Thanks,
> Jason
>



Red status unassigned shards help

2014-05-27 Thread Jason Weber
Could someone walk me through getting my cluster up and running? I came in 
from a long weekend and my cluster was in red status; I am showing a lot of 
unassigned shards.

jmweber@MIDLOG01:/var/log/logstash$ curl 
localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "midlogcluster",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 512,
  "active_shards" : 512,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 520
}


I am running ES 0.90.11

LS and ES are on a single server; I only have 1 node, although it shows 2. 
I get yellow status normally, and it works fine with that. But I am only 
collecting about 43 events per minute vs my usual 50K.

I have seen several write-ups, but I seem to get a lot of "no handler found 
for uri" errors when I try to run the commands in them.

Thanks,
Jason



Re: Unassigned Shards Problem

2014-05-23 Thread Brian Wilkins
I removed all the extra allocation stuff. When I did that, the shards were 
all propagated. Health is green again.

On Thursday, May 22, 2014 6:56:24 PM UTC-4, Brian Wilkins wrote:
>
> Went back and read the page again. So I made one master, workhorse, and 
> balancer with rackid of rack_two for testing. One master shows rackid of 
> rack_one. All nodes were restarted. The shards are still unassigned. 
> Also, the indices in ElasticHQ are empty.
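
For reference, the awareness attribute can also be cleared at runtime rather 
than by editing every elasticsearch.yml and restarting; a sketch, assuming the 
cluster from this thread (an empty value removes the rack constraint):

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "persistent" : { "cluster.routing.allocation.awareness.attributes" : "" }
}'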



Re: Unassigned Shards Problem

2014-05-22 Thread Brian Wilkins
Went back and read the page again. So I made one master, workhorse, and 
balancer with a rackid of rack_two for testing. One master shows a rackid of 
rack_one. All nodes were restarted. The shards are still unassigned. Also, the 
indices in ElasticHQ are empty.



Re: Unassigned Shards Problem

2014-05-22 Thread Brian Wilkins
Thanks for your reply. I set the node.rack to rack_one on all the nodes as 
a test. In ElasticHQ, on the right it shows no indices. It is empty. In my 
master, I see that the nodes are identifying with rack_one (all of them). 

Any other clues?

Thanks

Brian

On Thursday, May 22, 2014 5:10:25 PM UTC-4, Mark Walkom wrote:
>
> It does create an index, it says so in the log - [logstash-2014.05.22] 
> creating index - it's just not assigning things.
>
> You've set routing.allocation.awareness.attribute, but have you set the 
> node value, ie node.rack?
> See 
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-cluster.html#allocation-awareness
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>
>
> On 23 May 2014 02:22, Brian Wilkins wrote:
>
>> I have five nodes : Two Master Nodes, One Balancer Node, One Workhorse 
>> Node, and One Coordinator Node.
>>
>> I am shipping events from logstash, redis, to elasticsearch.
>>
>> At the moment, my cluster is RED. The shards are created but no index is 
>> created. I used to get an index like logstash.2014-05-22, but not anymore.
>>
>> I deleted all my data, Cluster health goes GREEN.
>>
>> However, as soon as data is sent from logstash -> redis -> elasticsearch, 
>> my cluster health goes RED. I end up with unassigned shards. In my 
>> /var/log/elasticsearch/logstash.log on my master, I see this in the log:
>>
>> [2014-05-22 12:03:20,599][INFO ][cluster.metadata ] [Bora] 
>> [logstash-2014.05.22] creating index, cause [auto(bulk api)], shards 
>> [5]/[1], mappings [_default_]
>>
>> On my master, this is the configuration:
>>
>> cluster:
>>   name: logstash
>>   routing:
>>     allocation:
>>       awareness:
>>         attributes: rack
>> node:
>>   data: true
>>   master: true
>>
>> curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
>> {
>>   "cluster_name" : "logstash",
>>   "status" : "red",
>>   "timed_out" : false,
>>   "number_of_nodes" : 5,
>>   "number_of_data_nodes" : 3,
>>   "active_primary_shards" : 0,
>>   "active_shards" : 0,
>>   "relocating_shards" : 0,
>>   "initializing_shards" : 0,
>>   "unassigned_shards" : 10
>> }
>>
>> Is there an incorrect setting? I also installed ElasticHQ. It tells me 
>> the same information.
>>
>> Brian
>>


Re: Unassigned Shards Problem

2014-05-22 Thread Mark Walkom
It does create an index, it says so in the log - [logstash-2014.05.22]
creating index - it's just not assigning things.

You've set routing.allocation.awareness.attributes, but have you set the
node value, i.e. node.rack?
See
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-cluster.html#allocation-awareness
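
A minimal sketch of the matching node-side value in each node's 
elasticsearch.yml (the rack names are only examples; what matters is that 
every data node carries the attribute named in the awareness setting):

node.rack: rack_one    # rack_two on the nodes in the other rack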

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 23 May 2014 02:22, Brian Wilkins  wrote:

> I have five nodes : Two Master Nodes, One Balancer Node, One Workhorse
> Node, and One Coordinator Node.
>
> I am shipping events from logstash, redis, to elasticsearch.
>
> At the moment, my cluster is RED. The shards are created but no index is
> created. I used to get an index like logstash.2014-05-22, but not anymore.
>
> I deleted all my data, Cluster health goes GREEN.
>
> However, as soon as data is sent from logstash -> redis -> elasticsearch,
> my cluster health goes RED. I end up with unassigned shards. In my
> /var/log/elasticsearch/logstash.log on my master, I see this in the log:
>
> [2014-05-22 12:03:20,599][INFO ][cluster.metadata ] [Bora]
> [logstash-2014.05.22] creating index, cause [auto(bulk api)], shards
> [5]/[1], mappings [_default_]
>
> On my master, this is the configuration:
>
> cluster:
>   name: logstash
>   routing:
>     allocation:
>       awareness:
>         attributes: rack
> node:
>   data: true
>   master: true
>
> curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
> {
>   "cluster_name" : "logstash",
>   "status" : "red",
>   "timed_out" : false,
>   "number_of_nodes" : 5,
>   "number_of_data_nodes" : 3,
>   "active_primary_shards" : 0,
>   "active_shards" : 0,
>   "relocating_shards" : 0,
>   "initializing_shards" : 0,
>   "unassigned_shards" : 10
> }
>
> Is there an incorrect setting? I also installed ElasticHQ. It tells me the
> same information.
>
> Brian
>


Unassigned Shards Problem

2014-05-22 Thread Brian Wilkins
I have five nodes: two master nodes, one balancer node, one workhorse 
node, and one coordinator node.

I am shipping events from logstash, redis, to elasticsearch.

At the moment, my cluster is RED. The shards are created but no index is 
created. I used to get an index like logstash.2014-05-22, but not anymore.

I deleted all my data, Cluster health goes GREEN.

However, as soon as data is sent from logstash -> redis -> elasticsearch, 
my cluster health goes RED. I end up with unassigned shards. In my 
/var/log/elasticsearch/logstash.log on my master, I see this in the log:

[2014-05-22 12:03:20,599][INFO ][cluster.metadata ] [Bora] 
[logstash-2014.05.22] creating index, cause [auto(bulk api)], shards 
[5]/[1], mappings [_default_]

On my master, this is the configuration:

cluster:
  name: logstash
  routing:
    allocation:
      awareness:
        attributes: rack
node:
  data: true
  master: true

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "logstash",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 10
}

Is there an incorrect setting? I also installed ElasticHQ. It tells me the 
same information.

Brian



Re: Unassigned shards, v2

2014-05-20 Thread abr
Hello,

I'm joining this topic because I'm having the same kind of issue on my
system.
I'm trying to build a log indexing engine based on Elasticsearch, and I
have:
an ES master node
an ES slave node
logstash

logstash outputs to the ES slave node:

output {
  elasticsearch {
    bind_host => "10.30.19.87"
    cluster => "ubiqube"
  }
}
 
The issue is that every time a new logstash index is created, it is
unassigned, but apart from that everything looks fine.

I did the test with curl -X PUT localhost:9200/mytest

and this new index is also created unassigned.
With the "head" plugin I can see each shard's status:

{

state: UNASSIGNED
primary: false
node: null
relocating_node: null
shard: 3
index: mytest

}

Any idea?

logstash and ES are both version 1.1.1


Antoine Brun
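
One thing worth checking in a case like this is whether shard allocation has 
been disabled cluster-wide, which produces exactly this symptom (every newly 
created shard stays UNASSIGNED); a sketch, assuming a node on localhost:

curl 'localhost:9200/_cluster/settings?pretty'

If that shows cluster.routing.allocation.disable_allocation set to true (or 
cluster.routing.allocation.enable set to none), re-enable allocation:

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : { "cluster.routing.allocation.disable_allocation" : false }
}'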






elasticsearch always leaves unassigned shards on startup

2014-05-19 Thread f13ts
Hi, 

Just getting started with ES. I'm integrating it into my Play! 2 
application using the scalastic scala wrapper for the java API.

I can happily index & retrieve documents, but ONLY if I delete and recreate 
my index every time the application starts, which is pretty useless. 

I have a Gist showing how I start and stop a node when my application boots 
up/shuts down:
https://gist.github.com/anonymous/cb60c8b5ad9d777c31ec

and here is what my Elasticsearch-head page looks like after starting up 
the app without recreating the index: 

http://imgur.com/eA6AF4u

This shows yellow status; however, curl -XGET 
'http://localhost:9200/_cluster/health' returns RED status, and that's also 
what my application reports!

Many thanks for any help/advice,

A




restart leave unassigned shards

2014-04-21 Thread suleman mubarik
Hi 
We have 5 data nodes and 3 masters in the cluster, and we are on ES 1.0.
Two weeks ago when we restarted it, it left 1 shard unassigned for some 
indices.
We checked the Lucene indices and they are fine. We created a test index, 
copied the Lucene index into it, and the shard came back.
But we don't know why a restart leaves shards unassigned. There is nothing 
helpful in the logs.
The segments.gen files are missing for those shards, and we have 2 data 
directories.
Any help will be appreciated.
Thanks



restart leave unassigned shards

2014-04-08 Thread suleman mubarik
Hi

I had to restart the elasticsearch cluster nodes because of some timeout 
exceptions.

I restarted the nodes one by one; there are 5 data nodes in the cluster.

But when I did that, it left some shards unassigned, and I have no replicas.

There is nothing helpful in the logs.

Can I get some help understanding why these shards remain unassigned, and is 
there any solution?

Thanks 



Re: permanent unassigned shards in latest logstash index

2014-04-03 Thread Alexander Gray II
OK, I stopped the entire cluster and started one ES node at a time, and 
that seemed to do the trick, even though that's one of the first things I 
did when things went awry.
I have no idea how it could have gotten into that state to begin with, but 
it's all good now.
We lost a ton of logs, but it looks like everything is OK now.
Alex

On Thursday, April 3, 2014 11:42:41 PM UTC-4, Alexander Gray II wrote:
>
> I installed elastichq.
> Interestingly enough, it doesn't even show that index, but 
> head/paramedic/bigdesk does. weird.
> All the diagnostics of elastichq shows mostly green. There are few 
> "yellows" under Index Activity for "Get Total", but it doesn't strike me as 
> something that is related to this.
> What I did find was that 2 of the shards have been in the "initializing" 
> state for quite some time.
> Sounds bad, but maybe I should wait. maybe it will just go away.
> Note that since this happened, we are pretty much dead in the water, since 
> no new logs are being ingested at all.
> I almost want to say things worked better with the m1.larges, instead 
> of the m2.xlarges, but I don't see how that is possible.
> I'm open for any wild suggestions to get this cluster back to a working 
> state.
>



Re: permanent unassigned shards in latest logstash index

2014-04-03 Thread Alexander Gray II
I installed elastichq.
Interestingly enough, it doesn't even show that index, but 
head/paramedic/bigdesk do. Weird.
All the diagnostics in elastichq show mostly green. There are a few 
"yellows" under Index Activity for "Get Total", but it doesn't strike me as 
something that is related to this.
What I did find was that 2 of the shards have been in the "initializing" 
state for quite some time.
Sounds bad, but maybe I should wait; maybe it will just go away.
Note that since this happened, we are pretty much dead in the water, since 
no new logs are being ingested at all.
I almost want to say things worked better with the m1.larges, instead 
of the m2.xlarges, but I don't see how that is possible.
I'm open to any wild suggestions to get this cluster back to a working 
state.



Re: permanent unassigned shards in latest logstash index

2014-04-03 Thread Mark Walkom
You should only need to do that if you set disable_allocation to true to begin with.

What version are you on? How many nodes, indexes, shards? Try installing a
plugin like elastichq or marvel to give you a better idea of what your
cluster status is. Bigdesk is good, but you only see what one individual
node is doing.
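
For reference, a sketch of installing those plugins with the bin/plugin 
script that ships with 1.x-era ES (the GitHub coordinates below are the 
ones the plugins usually document; adjust for your install layout):

# ElasticHQ, a site plugin; then browse to http://localhost:9200/_plugin/HQ/
bin/plugin -install royrusso/elasticsearch-HQ

# Marvel; unlike a site plugin, it needs a node restart to take effect
bin/plugin -i elasticsearch/marvel/latest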

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 4 April 2014 14:01, Alexander Gray II  wrote:

> Maybe I have to do this?
>
> http://stackoverflow.com/questions/19967472/elasticsearch-unassigned-shards-how-to-fix
> ie:
>
> curl -XPUT 'localhost:9200/<index>/_settings' -d ' 
> {"index.routing.allocation.disable_allocation": false}'
>
> I have no idea what that does, but it can't get any more broke than it is,
> so I'll give it a try...
>
>
> On Thursday, April 3, 2014 10:57:39 PM UTC-4, Alexander Gray II wrote:
>>
>> No love. I deleted the index via the head plugin, but the index is still
>> there, and the shards are still unassigned.
>> Nothing in the logs showed any errors either.
>> Maybe this is not the proper way to delete an index? (or maybe it got
>> deleted and re-created so fast that I missed it...)
>>



Re: permanent unassigned shards in latest logstash index

2014-04-03 Thread Alexander Gray II
Maybe I have to do this?
http://stackoverflow.com/questions/19967472/elasticsearch-unassigned-shards-how-to-fix
i.e.:

curl -XPUT 'localhost:9200/<index>/_settings' -d ' 
{"index.routing.allocation.disable_allocation": false}'

I have no idea what that does, but it can't get any more broken than it is, 
so I'll give it a try...
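
For what it's worth, that per-index setting controls whether ES may 
allocate the index's shards at all; PUTting false (re-)enables allocation. 
There is also a cluster-wide dynamic equivalent (a sketch, assuming the 
pre-1.0 setting name, which a 0.90.x cluster like this one should accept):

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : { "cluster.routing.allocation.disable_allocation" : false }
}'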


On Thursday, April 3, 2014 10:57:39 PM UTC-4, Alexander Gray II wrote:
>
> No love. I deleted the index via the head plugin, but the index is still 
> there, and the shards are still unassigned.
> Nothing in the logs showed any errors either.
> Maybe this is not the proper way to delete an index? (or maybe it got 
> deleted and re-created so fast that I missed it...)
>
>



Re: permanent unassigned shards in latest logstash index

2014-04-03 Thread Alexander Gray II
No love. I deleted the index via the head plugin, but the index is still 
there, and the shards are still unassigned.
Nothing in the logs showed any errors either.
Maybe this is not the proper way to delete an index? (or maybe it got 
deleted and re-created so fast that I missed it...)



Re: permanent unassigned shards in latest logstash index

2014-04-03 Thread Mohit Anchlia
Is there a way to fix this type of issue without having to delete an index?
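
One approach that appears later in this list is manually assigning a stuck 
shard with the reroute API instead of deleting the index (a sketch; the 
index, shard number, and node ID are placeholders, and allow_primary can 
end up creating an empty primary if the shard data is really gone):

curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands" : [ {
    "allocate" : {
      "index" : "logstash-2014.04.04",
      "shard" : 0,
      "node" : "<node-id>",
      "allow_primary" : true
    }
  } ]
}'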

On Thu, Apr 3, 2014 at 7:46 PM, Mark Walkom wrote:

> If it has data you're ok with losing, just delete the index and let it get
> recreated automatically.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com
> web: www.campaignmonitor.com
>
>
> On 4 April 2014 13:45, Alexander Gray II  wrote:
>
>> We have a 3 node m1.large ES/logstash cluster with version 0.90.3.
>> Everything has been running fine for a very long time.
>>
>> We needed to upgrade our cluster to use m2.xlarge, so we killed a node
>> and brought up a brand new m2.xlarge machine and let the cluster go
>> from yellow to green.
>>
>> But when we did the next machine, things went south.
>>
>> The latest logstash index has all of its primary and replica shards
>> unassigned, and they are stuck that way.
>> I'm attaching a screenshot of the head plugin of the last index (you can
>> see that we have replica = 1):
>>
>>
>> 
>>
>>
>> The only exception I see in the logs is:
>>
>> [2014-04-04 01:59:06,829][DEBUG][action.admin.indices.close] [Crimson
>> Dynamo] failed to close indices [logstash-2014.04.04]
>> org.elasticsearch.indices.IndexPrimaryShardNotAllocatedException:
>> [logstash-2014.04.04] primary not allocated post api
>> at
>> org.elasticsearch.cluster.metadata.MetaDataIndexStateService$1.execute(MetaDataIndexStateService.java:95)
>> at
>> org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:285)
>> at
>> org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:143)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:722)
>>
>> I've tried restarting the cluster, but no love.
>>
>> My cluster is no longer consuming any logs because of this.
>>
>> Any idea how I can even begin to troubleshoot this?
>>
>> Thanks,
>>
>> alex
>>
>>
>>
>
>



Re: permanent unassigned shards in latest logstash index

2014-04-03 Thread Mark Walkom
If it has data you're ok with losing, just delete the index and let it get
recreated automatically.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 4 April 2014 13:45, Alexander Gray II  wrote:

> We have a 3 node m1.large ES/logstash cluster with version 0.90.3.
> Everything has been running fine for a very long time.
>
> We needed to upgrade our cluster to use m2.xlarge, so we killed a node and
> brought up a brand new m2.xlarge machine and let the cluster go from
> yellow to green.
>
> But when we did the next machine, things went south.
>
> The latest logstash index has all of its primary and replica shards
> unassigned, and they are stuck that way.
> I'm attaching a screenshot of the head plugin of the last index (you can
> see that we have replica = 1):
>
>
> 
>
>
> The only exception I see in the logs is:
>
> [2014-04-04 01:59:06,829][DEBUG][action.admin.indices.close] [Crimson
> Dynamo] failed to close indices [logstash-2014.04.04]
> org.elasticsearch.indices.IndexPrimaryShardNotAllocatedException:
> [logstash-2014.04.04] primary not allocated post api
> at
> org.elasticsearch.cluster.metadata.MetaDataIndexStateService$1.execute(MetaDataIndexStateService.java:95)
> at
> org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:285)
> at
> org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:143)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
>
> I've tried restarting the cluster, but no love.
>
> My cluster is no longer consuming any logs because of this.
>
> Any idea how I can even begin to troubleshoot this?
>
> Thanks,
>
> alex
>
>
>



permanent unassigned shards in latest logstash index

2014-04-03 Thread Alexander Gray II
We have a 3 node m1.large ES/logstash cluster with version 0.90.3.
Everything has been running fine for a very long time.

We needed to upgrade our cluster to use m2.xlarge, so we killed a node and 
brought up a brand new m2.xlarge machine and let the cluster go from 
yellow to green.

But when we did the next machine, things went south.

The latest logstash index has all of its primary and replica shards 
unassigned, and they are stuck that way.
I'm attaching a screenshot of the head plugin of the last index (you can 
see that we have replica = 1):




The only exception I see in the logs is:

[2014-04-04 01:59:06,829][DEBUG][action.admin.indices.close] [Crimson 
Dynamo] failed to close indices [logstash-2014.04.04]
org.elasticsearch.indices.IndexPrimaryShardNotAllocatedException: 
[logstash-2014.04.04] primary not allocated post api
at 
org.elasticsearch.cluster.metadata.MetaDataIndexStateService$1.execute(MetaDataIndexStateService.java:95)
at 
org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:285)
at 
org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:143)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)

I've tried restarting the cluster, but no love.  

My cluster is no longer consuming any logs because of this.

Any idea how I can even begin to troubleshoot this?

Thanks,

alex
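
Since 0.90.x predates the _cat APIs, per-shard detail has to come from the 
health endpoint instead (a sketch, assuming default host/port):

# level=shards breaks the health report down per index and per shard
curl 'localhost:9200/_cluster/health?level=shards&pretty'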




Re: Unassigned shards

2014-02-19 Thread Tal Shemesh
OK, I found the problem.

The problematic server was running the wrong version of elasticsearch.
I hope it will help others :)

thanks.
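
A quick way to spot a version mismatch like that is the nodes info API, 
where every node reports its version (a sketch, assuming a 1.x cluster and 
default host/port):

# each node's entry carries a "version" field
curl -s 'localhost:9200/_nodes?pretty' | grep '"version"'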


On Wed, Feb 19, 2014 at 2:29 PM, Itamar Syn-Hershko wrote:

> There's a lot of info missing: how many servers, indexes, shards and
> replicas do you have? Have you made sure to set all the configuration
> correctly (quorum size, expected number of nodes, etc.)?
>
> You may be experiencing a split-brain situation (if your cluster state is
> red) or just a temporary hiccup (if it's yellow). But without more info it's
> hard to tell. Either way, restarting the entire cluster usually helps
> immediately, if this is an option.
>
> With regards to the response "is not allowed, reason: NO()" you are
> getting, there is an open issue about adding more feedback to the
> response, which I _think_ made it into v1 and maybe even 0.90.11.
>
> --
>
> Itamar Syn-Hershko
> http://code972.com | @synhershko 
> Freelance Developer & Consultant
> Author of RavenDB in Action 
>
>
> On Wed, Feb 19, 2014 at 2:22 PM, Tal Shemesh wrote:
>
>> Hi,
>> I have been using elasticsearch for a while and one morning I noticed I
>> have an unassigned shard.
>> I started to play with the cluster (restarting some nodes, adding more
>> nodes, trying to move/allocate the shards) and got some more unassigned shards.
>> I realized all the shards which are primary on one of the nodes have their
>> replicas unassigned.
>> Also, when I tried to allocate or move those shards, all I get is:
>>
>> org.elasticsearch.transport.RemoteTransportException:
>> [foa02][inet[/192.168.2.2:9300]][cluster/reroute]
>> Caused by: org.elasticsearch.ElasticSearchIllegalArgumentException:
>> [allocate] allocation of [logstash-2014.02.18][0] on node
>> [foa8][Rhmt4uF6Rf2_8WAw-WieZA][inet[192.168.2.8/192.168.2.8:9300]]{master=false}
>> is not allowed, reason: NO()
>> at
>> org.elasticsearch.cluster.routing.allocation.command.AllocateAllocationCommand.execute(AllocateAllocationCommand.java:188)
>> at
>> org.elasticsearch.cluster.routing.allocation.command.AllocationCommands.execute(AllocationCommands.java:116)
>> at
>> org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:125)
>> at
>> org.elasticsearch.action.admin.cluster.reroute.TransportClusterRerouteAction$1.execute(TransportClusterRerouteAction.java:112)
>> at
>> org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:300)
>> at
>> org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:724)
>> I attached an image from the "head" plugin
>>
>>
>> 
>>
>> Any idea what is wrong?
>>
>>
>
>



Re: Unassigned shards

2014-02-19 Thread Itamar Syn-Hershko
There's a lot of info missing: how many servers, indexes, shards and
replicas do you have? Have you made sure to set all the configuration
correctly (quorum size, expected number of nodes, etc.)?

You may be experiencing a split-brain situation (if your cluster state is
red) or just a temporary hiccup (if it's yellow). But without more info it's
hard to tell. Either way, restarting the entire cluster usually helps
immediately, if this is an option.

With regards to the response "is not allowed, reason: NO()" you are
getting, there is an open issue about adding more feedback to the
response, which I _think_ made it into v1 and maybe even 0.90.11.
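
The usual guard against split-brain is requiring a quorum of 
master-eligible nodes before a master can be elected (a sketch of the 
elasticsearch.yml setting, assuming 3 master-eligible nodes, so quorum 
is 2):

# (3 / 2) + 1 = 2; two isolated halves can no longer both elect a master
discovery.zen.minimum_master_nodes: 2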

--

Itamar Syn-Hershko
http://code972.com | @synhershko 
Freelance Developer & Consultant
Author of RavenDB in Action 


On Wed, Feb 19, 2014 at 2:22 PM, Tal Shemesh  wrote:

> Hi,
> I have been using elasticsearch for a while and one morning I noticed I have
> an unassigned shard.
> I started to play with the cluster (restarting some nodes, adding more nodes,
> trying to move/allocate the shards) and got some more unassigned shards.
> I realized all the shards which are primary on one of the nodes have their
> replicas unassigned.
> Also, when I tried to allocate or move those shards, all I get is:
>
> org.elasticsearch.transport.RemoteTransportException:
> [foa02][inet[/192.168.2.2:9300]][cluster/reroute]
> Caused by: org.elasticsearch.ElasticSearchIllegalArgumentException:
> [allocate] allocation of [logstash-2014.02.18][0] on node
> [foa8][Rhmt4uF6Rf2_8WAw-WieZA][inet[192.168.2.8/192.168.2.8:9300]]{master=false}
> is not allowed, reason: NO()
> at
> org.elasticsearch.cluster.routing.allocation.command.AllocateAllocationCommand.execute(AllocateAllocationCommand.java:188)
> at
> org.elasticsearch.cluster.routing.allocation.command.AllocationCommands.execute(AllocationCommands.java:116)
> at
> org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:125)
> at
> org.elasticsearch.action.admin.cluster.reroute.TransportClusterRerouteAction$1.execute(TransportClusterRerouteAction.java:112)
> at
> org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:300)
> at
> org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> I attached an image from the "head" plugin
>
>
> 
>
> Any idea what is wrong?
>
>



Unassigned shards

2014-02-19 Thread Tal Shemesh
Hi,
I have been using elasticsearch for a while, and one morning I noticed I 
had an unassigned shard.
I started to play with the cluster (restarting some nodes, adding more 
nodes, trying to move/allocate the shards) and got some more unassigned 
shards.
I realized all the shards which are primary on one of the nodes have their 
replicas unassigned. 
Also, when I tried to allocate or move those shards, all I get is:

org.elasticsearch.transport.RemoteTransportException: 
[foa02][inet[/192.168.2.2:9300]][cluster/reroute]
Caused by: org.elasticsearch.ElasticSearchIllegalArgumentException: 
[allocate] allocation of [logstash-2014.02.18][0] on node 
[foa8][Rhmt4uF6Rf2_8WAw-WieZA][inet[192.168.2.8/192.168.2.8:9300]]{master=false} 
is not allowed, reason: NO()
at 
org.elasticsearch.cluster.routing.allocation.command.AllocateAllocationCommand.execute(AllocateAllocationCommand.java:188)
at 
org.elasticsearch.cluster.routing.allocation.command.AllocationCommands.execute(AllocationCommands.java:116)
at 
org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:125)
at 
org.elasticsearch.action.admin.cluster.reroute.TransportClusterRerouteAction$1.execute(TransportClusterRerouteAction.java:112)
at 
org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:300)
at 
org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
I attached an image from the "head" plugin.

Any idea what is wrong?



Re: Unassigned Shards

2013-12-20 Thread Eric Luellen
I got the initial issue fixed and am getting data back again. However, I 
still don't understand how to fix the unassigned-shards issue and how to 
properly restart elasticsearch without it complaining.
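
A restart procedure that usually avoids the post-restart shard shuffle is 
to pause allocation around the restart (a sketch using the 0.90-era dynamic 
setting, assuming default host/port):

# pause shard allocation before stopping the node
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : { "cluster.routing.allocation.disable_allocation" : true }
}'

# restart the node, wait for it to rejoin the cluster, then resume
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : { "cluster.routing.allocation.disable_allocation" : false }
}'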

On Friday, December 20, 2013 9:28:53 AM UTC-5, Eric Luellen wrote:
>
> Mark,
>
> I used the rpm install. I'll take a look at the plugins. Thanks.
>
> On Thursday, December 19, 2013 5:07:53 PM UTC-5, Mark Walkom wrote:
>>
>> Did you install ES via a rpm/deb or using the zip? I ask because your 
>> data store directory is custom.
>>
>> Check out these plugins for monitoring - elastichq, kopf, bigdesk. They 
>> will give you an overview of your cluster and might give you insight into 
>> where your problem lies. The other best place to check is the ES logs.
>>
>> Regards,
>> Mark Walkom
>>
>> Infrastructure Engineer
>> Campaign Monitor
>> email: ma...@campaignmonitor.com
>> web: www.campaignmonitor.com
>>
>>
>> On 20 December 2013 08:52, Eric Luellen  wrote:
>>
>>> I think I made my situation even worse. I tried deleting the shards and 
>>> starting over and now elasticsearch isn't even creating the 
>>> /etc/elasticsearch/data/my-cluster/node folder.
>>>
>>>
>>> On Thursday, December 19, 2013 4:04:41 PM UTC-5, Eric Luellen wrote:
>>>>
>>>> Hello,
>>>>
>>>> Currently I have my syslog-ng --> logstash --> elasticsearch1 & 
>>>> elasticsearch2 setup working pretty well. It's accepting over 300 events 
>>>> per 
>>>> second and hasn't bogged the systems down at all. However I'm running into 
>>>> 2 issues that I don't quite understand. 
>>>>
>>>> 1. When viewing the information in Kibana, it appears to be anywhere 
>>>> from 15 min to an hr behind on the "all events" view. Sometimes when I 
>>>> search for new logs it shows up correctly but overall it seems like it's 
>>>> lagging behind trying to keep up with what logstash is sending it. That 
>>>> being said, I'm concerned that logs are being dropped and I don't know 
>>>> about it. Are there any commands I can use to validate this type of 
>>>> information or what I can do to make sure elasticsearch/Kibana is keeping 
>>>> up?
>>>>
>>>> 2. I've had to restart elasticsearch a few times and every time I do, 
>>>> it completely breaks things. Once it starts back up it doesn't continue to 
>>>> show the logs in Kibana correctly and when I run a health check, it says 
>>>> there are unassigned shards. I've not been able to fix this and in the 
>>>> past 
>>>> I've always just had to delete them and start from scratch again.
>>>>
>>>> Any idea what is going on with this or how I can more cleanly restart 
>>>> or reboot the servers and recover from it?
>>>>
>>>> Thanks,
>>>> Eric
>>>>
>>>
>>
>>



Re: Unassigned Shards

2013-12-20 Thread Eric Luellen
Mark,

I used the rpm install. I'll take a look at the plugins. Thanks.

On Thursday, December 19, 2013 5:07:53 PM UTC-5, Mark Walkom wrote:
>
> Did you install ES via a rpm/deb or using the zip? I ask because your data 
> store directory is custom.
>
> Check out these plugins for monitoring - elastichq, kopf, bigdesk. They 
> will give you an overview of your cluster and might give you insight into 
> where your problem lies. The other best place to check is the ES logs.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>
>
> On 20 December 2013 08:52, Eric Luellen 
> > wrote:
>
>> I think I made my situation even worse. I tried deleting the shards and 
>> starting over and now elasticsearch isn't even creating the 
>> /etc/elasticsearch/data/my-cluster/node folder.
>>
>>
>> On Thursday, December 19, 2013 4:04:41 PM UTC-5, Eric Luellen wrote:
>>>
>>> Hello,
>>>
>>> Currently I have my syslog-ng --> logstash --> elasticsearch1 & 
>>> elasticsearch2 setup working pretty well. It's accepting over 300 events per 
>>> second and hasn't bogged the systems down at all. However I'm running into 
>>> 2 issues that I don't quite understand. 
>>>
>>> 1. When viewing the information in Kibana, it appears to be anywhere 
>>> from 15 min to an hr behind on the "all events" view. Sometimes when I 
>>> search for new logs it shows up correctly but overall it seems like it's 
>>> lagging behind trying to keep up with what logstash is sending it. That 
>>> being said, I'm concerned that logs are being dropped and I don't know 
>>> about it. Are there any commands I can use to validate this type of 
>>> information or what I can do to make sure elasticsearch/Kibana is keeping 
>>> up?
>>>
>>> 2. I've had to restart elasticsearch a few times and every time I do, it 
>>> completely breaks things. Once it starts back up it doesn't continue to 
>>> show the logs in Kibana correctly and when I run a health check, it says 
>>> there are unassigned shards. I've not been able to fix this and in the past 
>>> I've always just had to delete them and start from scratch again.
>>>
>>> Any idea what is going on with this or how I can more cleanly restart or 
>>> reboot the servers and recover from it?
>>>
>>> Thanks,
>>> Eric
>>>
>>
>
>



Re: Unassigned shards, v2

2013-12-20 Thread Alexander Reelsen
Hey,

there must be some log files which contain the reason why a shard
suddenly could not be assigned anymore. Can you check whether you can dig
up any information in the logs on why your setup worked for so long and then
suddenly didn't?

Also, when you create an index manually (curl -X PUT node:9200/mytest) -
does this index get correctly created and assigned?


--Alex
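
A sketch of that test, assuming default host/port (mytest is just the 
throwaway index name from the message above):

# create a one-off test index, then ask for its health specifically
curl -XPUT 'localhost:9200/mytest'
curl 'localhost:9200/_cluster/health/mytest?pretty'

# clean up afterwards
curl -XDELETE 'localhost:9200/mytest'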


On Thu, Dec 19, 2013 at 5:39 PM, Honza  wrote:

> Hello,
> thank you for the answer.
> I didn't do any specific configuration or anything nonstandard. But I had
> the problem I mentioned, that 1 replica was set by default and I
> had only one node, which I resolved with a second node. Then I had some
> trouble with the too-many-open-files limit, which I solved by setting
> ulimit.
>
> Then it worked for a few weeks without problems, but then the shards started
> to become unassigned. I found it yesterday, and because indices are created
> by logstash every day, I know that shards older than 5 days are OK, but newer
> ones are not (I should mention that
> index.routing.allocation.total_shards_per_node is always -1).
>
> So I tried the thing with removing the replica and adding nodes, but it made
> it worse.
>
> But the point is that the shards seem to be all right. They are just not
> assigned. So I only need to assign them to the one node and I think
> it will be OK. Do you think that is possible, please?
>
> Thank you
>
> Dne čtvrtek, 19. prosince 2013 16:39:41 UTC+1 Alexander Reelsen napsal(a):
>>
>> Hey,
>>
>> did you do any allocation specific configuration? Disabling allocation at
>> all? Anything in your cluster settings or in your configuration? Did you do
>> any configuration before firing up your cluster or remember setting any
>> special option?
>>
>> Can you reproduce this when you set up a new system, so we could
>> reproduce this behaviour locally as well?
>>
>>
>> --Alex
>>
>>
>>
>> On Thu, Dec 19, 2013 at 3:17 PM, Honza  wrote:
>>
>>> Hi there,
>>> few weeks ago I had a problem with some unassigned shards, where I had
>>> same number of unassigned and assigned and I solved that thanks to an
>>> advice 
>>> (here<https://groups.google.com/forum/#!searchin/elasticsearch/unassigned$20shards/elasticsearch/Y2QQ-G0hICM/weIznt5PkKQJ>)
>>> with adding a new node.
>>> But now it appeared another problem. I had unassigned shards even 2
>>> nodes were running. So I decided to turn off the replica. That caused
>>> disappearing of half shards (of course), but some unassigned were still
>>> remaining.
>>>
>>> So I tried to add few new nodes. I ended on 10 nodes. Thanks to that
>>> some shards disappeared, but most of them did not. But if I turn them down
>>> and then start only one node again (it should be enough without replica),
>>> this is the health status:
>>>
>>> {
>>>   "cluster_name" : "elasticsearch",
>>>   "status" : "red",
>>>   "timed_out" : false,
>>>   "number_of_nodes" : 2,
>>>   "number_of_data_nodes" : 1,
>>>   "active_primary_shards" : 22,
>>>   "active_shards" : 22,
>>>   "relocating_shards" : 0,
>>>   "initializing_shards" : 0,
>>>   "unassigned_shards" : 198
>>> }
>>>
>>> So 1/10 was assigned and 9/10 was not. It seems the shards are still
>>> "connected" to old nodes and I have to reroute them to the only one node.
>>>
>>> But I couldn't do it.
>>> I used 
>>> this<http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html>
>>>  webpage
>>> and following code:
>>>
>>> curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
>>> "commands" : [ {
>>>   "allocate" : {
>>>   "index" : "logstash-2013.12.10",
>>>   "shard" : 0,
>>>   "node" : "Q6hyVtoPTrSxm_xIGTg3CQ",
>>>   "allow_primary": 1
>>>   }
>>> }
>>> ]
>>> }'
>>>
>>> (first without allow_primary, but it throws the error "trying to allocate a
>>> primary shard [logstash-2013.12.10][0]], which is disabled", so I used the
>>> allow_primary flag), but it also throws an exception:
>>>
>>> org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: 
>>> [logstash-2013.12.10][0] shard allocated for local recovery (post api), 
>>> should exists, but doesn't

Re: Unassigned Shards

2013-12-19 Thread Mark Walkom
Did you install ES via a rpm/deb or using the zip? I ask because your data
store directory is custom.

Check out these plugins for monitoring - elastichq, kopf, bigdesk. They
will give you an overview of your cluster and might give you insight into
where your problem lies. The other best place to check is the ES logs.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 20 December 2013 08:52, Eric Luellen  wrote:

> I think I made my situation even worse. I tried deleting the shards and
> starting over and now elasticsearch isn't even creating the
> /etc/elasticsearch/data/my-cluster/node folder.
>
>
> On Thursday, December 19, 2013 4:04:41 PM UTC-5, Eric Luellen wrote:
>>
>> Hello,
>>
>> Currently I have my syslog-ng --> logstash --> elasticsearch1 &
>> elasticsearch2 setup working pretty well. It's accepting over 300 events per
>> second and hasn't bogged the systems down at all. However I'm running into
>> 2 issues that I don't quite understand.
>>
>> 1. When viewing the information in Kibana, it appears to be anywhere from
>> 15 min to an hr behind on the "all events" view. Sometimes when I search
>> for new logs it shows up correctly but overall it seems like it's lagging
>> behind trying to keep up with what logstash is sending it. That being said,
>> I'm concerned that logs are being dropped and I don't know about it. Are
>> there any commands I can use to validate this type of information or what I
>> can do to make sure elasticsearch/Kibana is keeping up?
>>
>> 2. I've had to restart elasticsearch a few times and every time I do, it
>> completely breaks things. Once it starts back up it doesn't continue to
>> show the logs in Kibana correctly and when I run a health check, it says
>> there are unassigned shards. I've not been able to fix this and in the past
>> I've always just had to delete them and start from scratch again.
>>
>> Any idea what is going on with this or how I can more cleanly restart or
>> reboot the servers and recover from it?
>>
>> Thanks,
>> Eric
>>
>



Re: Unassigned Shards

2013-12-19 Thread Eric Luellen
I think I made my situation even worse. I tried deleting the shards and 
starting over and now elasticsearch isn't even creating the 
/etc/elasticsearch/data/my-cluster/node folder.

On Thursday, December 19, 2013 4:04:41 PM UTC-5, Eric Luellen wrote:
>
> Hello,
>
> Currently I have my syslog-ng --> logstash --> elasticsearch1 & 
> elasticsearch2 setup working pretty well. It's accepting over 300 events per 
> second and hasn't bogged the systems down at all. However I'm running into 
> 2 issues that I don't quite understand. 
>
> 1. When viewing the information in Kibana, it appears to be anywhere from 
> 15 min to an hr behind on the "all events" view. Sometimes when I search 
> for new logs it shows up correctly but overall it seems like it's lagging 
> behind trying to keep up with what logstash is sending it. That being said, 
> I'm concerned that logs are being dropped and I don't know about it. Are 
> there any commands I can use to validate this type of information or what I 
> can do to make sure elasticsearch/Kibana is keeping up?
>
> 2. I've had to restart elasticsearch a few times and every time I do, it 
> completely breaks things. Once it starts back up it doesn't continue to 
> show the logs in Kibana correctly and when I run a health check, it says 
> there are unassigned shards. I've not been able to fix this and in the past 
> I've always just had to delete them and start from scratch again.
>
> Any idea what is going on with this or how I can more cleanly restart or 
> reboot the servers and recover from it?
>
> Thanks,
> Eric
>



Unassigned Shards

2013-12-19 Thread Eric Luellen
Hello,

Currently I have my syslog-ng --> logstash --> elasticsearch1 & 
elasticsearch2 setup working pretty well. It's accepting over 300 events per 
second and hasn't bogged the systems down at all. However, I'm running into 
2 issues that I don't quite understand. 

1. When viewing the information in Kibana, it appears to be anywhere from 
15 min to an hour behind on the "all events" view. Sometimes when I search 
for new logs it shows up correctly, but overall it seems like it's lagging 
behind trying to keep up with what logstash is sending it. That being said, 
I'm concerned that logs are being dropped and I don't know about it. Are 
there any commands I can use to validate this type of information, or what 
can I do to make sure elasticsearch/Kibana is keeping up?

2. I've had to restart elasticsearch a few times, and every time I do, it 
completely breaks things. Once it starts back up it doesn't continue to 
show the logs in Kibana correctly, and when I run a health check, it says 
there are unassigned shards. I've not been able to fix this, and in the 
past I've always just had to delete them and start from scratch again.

Any idea what is going on with this, or how I can more cleanly restart or 
reboot the servers and recover from it?

Thanks,
Eric
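
On question 1, a crude way to check whether ES is keeping up is to sample 
document counts over time (a sketch, assuming default host/port; run each 
command twice, a minute or so apart, and compare the numbers):

# total documents visible to search across the cluster
curl 'localhost:9200/_count?pretty'

# per-index document counts and indexing stats
curl 'localhost:9200/_stats?pretty'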



Re: Unassigned shards, v2

2013-12-19 Thread Honza
Hello,
thank you for the answer.
I didn't do any specific configuration or anything nonstandard. But I had 
the problem I mentioned, that 1 replica was set by default and I had only 
one node, which I resolved with a second node. Then I had some trouble with 
the too-many-open-files limit, which I solved by setting ulimit. 

Then it worked for a few weeks without problems, but then the shards started 
to become unassigned. I found it yesterday, and because indices are created 
by logstash every day, I know that shards older than 5 days are OK, but 
newer ones are not (I should mention that 
index.routing.allocation.total_shards_per_node is always -1).

So I tried the thing with removing the replica and adding nodes, but it 
made it worse.

But the point is that the shards seem to be all right. They are just not 
assigned. So I only need to assign them to the one node and I think it will 
be OK. Do you think that is possible, please?

Thank you

Dne čtvrtek, 19. prosince 2013 16:39:41 UTC+1 Alexander Reelsen napsal(a):
>
> Hey,
>
> did you do any allocation specific configuration? Disabling allocation at 
> all? Anything in your cluster settings or in your configuration? Did you do 
> any configuration before firing up your cluster or remember setting any 
> special option?
>
> Can you reproduce this when you set up a new system, so we could reproduce 
> this behaviour locally as well?
>
>
> --Alex
>
>
>
> On Thu, Dec 19, 2013 at 3:17 PM, Honza >wrote:
>
>> Hi there,
>> a few weeks ago I had a problem with some unassigned shards, where I had 
>> the same number of unassigned and assigned shards, and I solved that 
>> thanks to advice 
>> (here<https://groups.google.com/forum/#!searchin/elasticsearch/unassigned$20shards/elasticsearch/Y2QQ-G0hICM/weIznt5PkKQJ>)
>> by adding a new node.
>> But now another problem appeared. I had unassigned shards even with 2 nodes 
>> running. So I decided to turn off the replica. That caused half the shards 
>> to disappear (of course), but some unassigned ones still remained.
>>
>> So I tried to add a few new nodes. I ended up at 10 nodes. Thanks to that, 
>> some shards disappeared, but most of them did not. But if I shut them down 
>> and then start only one node again (it should be enough without replicas), 
>> this is the health status:
>>
>> {
>>   "cluster_name" : "elasticsearch",
>>   "status" : "red",
>>   "timed_out" : false,
>>   "number_of_nodes" : 2,
>>   "number_of_data_nodes" : 1,
>>   "active_primary_shards" : 22,
>>   "active_shards" : 22,
>>   "relocating_shards" : 0,
>>   "initializing_shards" : 0,
>>   "unassigned_shards" : 198
>> }
>>
>> So 1/10 were assigned and 9/10 were not. It seems the shards are still 
>> "connected" to the old nodes and I have to reroute them to the only 
>> remaining node.
>>
>> But I couldn't do it.
>> I used 
>> this<http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html>
>>  webpage 
>> and the following code:
>>
>> curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
>> "commands" : [ {
>>   "allocate" : {
>>   "index" : "logstash-2013.12.10", 
>>   "shard" : 0, 
>>   "node" : "Q6hyVtoPTrSxm_xIGTg3CQ",
>>   "allow_primary": 1
>>   }
>> }
>> ]
>> }'
>>
>> (first without allow_primary, but it throws the error "trying to allocate a 
>> primary shard [logstash-2013.12.10][0]], which is disabled", so I used the 
>> allow_primary flag), but it also throws an exception:
>>
>> org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: 
>> [logstash-2013.12.10][0] shard allocated for local recovery (post api), 
>> should exists, but doesn't
>>
>> So I really don't know what I can do or whether I am taking the right steps. 
>> Can somebody give me some advice, please?
>>
>> Thank you
>>
>>
>
>



Re: Unassigned shards, v2

2013-12-19 Thread Alexander Reelsen
Hey,

did you do any allocation-specific configuration? Disabling allocation
entirely? Anything in your cluster settings or in your configuration? Did
you do any configuration before firing up your cluster, or remember setting
any special option?

Can you reproduce this when you set up a new system, so we could reproduce
this behaviour locally as well?


--Alex



On Thu, Dec 19, 2013 at 3:17 PM, Honza  wrote:

> Hi there,
> a few weeks ago I had a problem with some unassigned shards, where I had
> the same number of unassigned and assigned shards, and I solved that thanks
> to advice 
> (here<https://groups.google.com/forum/#!searchin/elasticsearch/unassigned$20shards/elasticsearch/Y2QQ-G0hICM/weIznt5PkKQJ>)
> by adding a new node.
> But now another problem appeared. I had unassigned shards even with 2 nodes
> running. So I decided to turn off the replica. That caused half the shards
> to disappear (of course), but some unassigned ones still remained.
>
> So I tried to add a few new nodes. I ended up at 10 nodes. Thanks to that,
> some shards disappeared, but most did not. But if I shut them down and
> then start only one node again (it should be enough without replicas), this
> is the health status:
>
> {
>   "cluster_name" : "elasticsearch",
>   "status" : "red",
>   "timed_out" : false,
>   "number_of_nodes" : 2,
>   "number_of_data_nodes" : 1,
>   "active_primary_shards" : 22,
>   "active_shards" : 22,
>   "relocating_shards" : 0,
>   "initializing_shards" : 0,
>   "unassigned_shards" : 198
> }
>
> So 1/10 were assigned and 9/10 were not. It seems the shards are still
> "connected" to the old nodes and I have to reroute them to the only
> remaining node.
>
> But I couldn't do it.
> I used 
> this<http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html>
>  webpage
> and the following code:
>
> curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
> "commands" : [ {
>   "allocate" : {
>   "index" : "logstash-2013.12.10",
>   "shard" : 0,
>   "node" : "Q6hyVtoPTrSxm_xIGTg3CQ",
>   "allow_primary": 1
>   }
> }
> ]
> }'
>
> (first without allow_primary, but it throws the error "trying to allocate a
> primary shard [logstash-2013.12.10][0]], which is disabled", so I used the
> allow_primary flag), but it also throws an exception:
>
> org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException:
> [logstash-2013.12.10][0] shard allocated for local recovery (post api),
> should exists, but doesn't
>
> So I really don't know what I can do or whether I am taking the right steps.
> Can somebody give me some advice, please?
>
> Thank you
>
>



Unassigned shards, v2

2013-12-19 Thread Honza
Hi there,
a few weeks ago I had a problem with some unassigned shards, where I had the 
same number of unassigned and assigned shards, and I solved that thanks to 
advice (
here<https://groups.google.com/forum/#!searchin/elasticsearch/unassigned$20shards/elasticsearch/Y2QQ-G0hICM/weIznt5PkKQJ>)
 
by adding a new node.
But now another problem appeared. I had unassigned shards even with 2 nodes 
running. So I decided to turn off the replica. That caused half the shards 
to disappear (of course), but some unassigned ones still remained.

So I tried to add a few new nodes. I ended up at 10 nodes. Thanks to that, 
some shards disappeared, but most of them did not. But if I shut them down 
and then start only one node again (it should be enough without replicas), 
this is the health status:

{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 22,
  "active_shards" : 22,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 198
}

So 1/10 were assigned and 9/10 were not. It seems the shards are still 
"connected" to the old nodes and I have to reroute them to the only 
remaining node.

But I couldn't do it.
I used 
this<http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html>
 webpage 
and the following code:

curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
"commands" : [ {
  "allocate" : {
  "index" : "logstash-2013.12.10", 
  "shard" : 0, 
  "node" : "Q6hyVtoPTrSxm_xIGTg3CQ",
  "allow_primary": 1
  }
}
]
}'

(first without allow_primary, but it throws the error "trying to allocate a 
primary shard [logstash-2013.12.10][0]], which is disabled", so I used the 
allow_primary flag), but it also throws an exception:

org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: 
[logstash-2013.12.10][0] shard allocated for local recovery (post api), 
should exists, but doesn't

So I really don't know what I can do or whether I am taking the right 
steps. 
Can somebody give me some advice, please?

Thank you
