Hello David,

I found this online, and it made my cluster go 'green':
http://blog.trifork.com/2013/10/24/how-to-avoid-the-split-brain-problem-in-elasticsearch/
I don't know for certain whether that was the whole problem, but there are
no longer any unassigned shards.
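For anyone else who lands here: the fix described in that article is, as I understand it, setting discovery.zen.minimum_master_nodes to (number of master-eligible nodes / 2) + 1, which for a two-node cluster works out to 2. A sketch of what that looks like, assuming ES 1.x and a node listening on localhost:9200 (this needs a live cluster to run against):

```shell
# Persist the quorum setting in elasticsearch.yml on every
# master-eligible node (takes effect after a restart):
#   discovery.zen.minimum_master_nodes: 2

# Or apply it at runtime through the cluster settings API:
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "persistent" : {
    "discovery.zen.minimum_master_nodes" : 2
  }
}'
```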

root@logstash:/# curl -XGET 'localhost:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "es-logstash",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 2792,
  "active_shards" : 5584,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}
root@logstash:/#
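One thing that still jumps out of that health output: 2792 primaries (5584 total shards) spread across only two nodes, which David flags below as a lot. A quick way to see how many shards and how much disk each node is carrying, assuming the _cat API is available (it shipped with ES 1.0), run against the live cluster:

```shell
# shards per node, disk used, and disk available
curl -XGET 'localhost:9200/_cat/allocation?v'
```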

However, the root of my problem still exists.  I did restart the
forwarders, and a tcpdump shows that traffic is indeed hitting the
server.  But my indices folder does not contain fresh data except for one
source.
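Rather than eyeballing the data directory, one way to confirm which indices actually received documents today; this assumes the usual daily logstash-YYYY.MM.DD index naming and a live cluster on localhost:9200:

```shell
# list the logstash indices with their doc counts and sizes;
# a healthy source should have an index named for today's date
curl -XGET 'localhost:9200/_cat/indices/logstash-*?v'

# or filter for indices stamped with today's date
curl -XGET 'localhost:9200/_cat/indices?v' | grep "$(date +%Y.%m.%d)"
```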

Don Pich | Jedi Master (aka System Administrator 2) | O: 701-952-5925

3320 Westrac Drive South, Suite A * Fargo, ND 58103

Facebook <http://www.facebook.com/RealTruck> | Youtube
<http://www.youtube.com/realtruckcom>| Twitter
<http://twitter.com/realtruck> | Google+ <https://google.com/+Realtruck> |
Instagram <http://instagram.com/realtruckcom> | Linkedin
<http://www.linkedin.com/company/realtruck> | Our Guiding Principles
<http://www.realtruck.com/our-guiding-principles/>
“If it goes on a truck we got it, if it’s fun we do it” – RealTruck.com
<http://realtruck.com/>

On Sun, Apr 19, 2015 at 10:04 PM, David Pilato <da...@pilato.fr> wrote:

> Are you using the exact same JVM version?
> Where do those logs come from, Logstash or Elasticsearch?
>
> Could you try the same with a clean Elasticsearch, i.e. with no data?
> My suspicion is that you have too many shards allocated on a single
> (tiny?) node.
>
> What is your node size BTW (memory / heap size)?
>
> David
>
> On 19 Apr 2015, at 23:09, Don Pich <dp...@realtruck.com> wrote:
>
> Thanks for taking the time to answer, David.
>
> Again, I've got my training wheels on with the ELK stack, so I will do my
> best to answer.
>
> Here is an example: the one index that is working has a fresh directory
> with today's date in the elasticsearch directory.  The ones that are not
> working do not have a directory.
>
> Logstash and Elasticsearch are running, and their logs are not producing
> much information that points to any error.
>
> log4j, [2015-04-19T13:41:44.723]  WARN: org.elasticsearch.transport.netty:
> [logstash-logstash-3170-2032] Message not fully read (request) for [2] and
> action [internal:discovery/zen/unicast_gte_1_4], resetting
> log4j, [2015-04-19T13:41:49.569]  WARN: org.elasticsearch.transport.netty:
> [logstash-logstash-3170-2032] Message not fully read (request) for [5] and
> action [internal:discovery/zen/unicast_gte_1_4], resetting
> log4j, [2015-04-19T13:41:54.572]  WARN: org.elasticsearch.transport.netty:
> [logstash-logstash-3170-2032] Message not fully read (request) for [10] and
> action [internal:discovery/zen/unicast_gte_1_4], resetting
>
>
>
>
> On Sun, Apr 19, 2015 at 2:38 PM, David Pilato <da...@pilato.fr> wrote:
>
>> From an Elasticsearch point of view, I don't see anything wrong.
>> You have way too many shards for sure, so you might hit OOM exceptions or
>> other trouble.
>>
>> So to answer your question: check your Elasticsearch logs, and if
>> nothing looks wrong, check Logstash.
>>
>> Just to add, Elasticsearch does not generate data itself, so you probably
>> meant that Logstash stopped generating data, right?
>>
>> HTH
>>
>> David
>>
>> On 19 Apr 2015, at 21:08, dp...@realtruck.com wrote:
>>
>> I am new to Elasticsearch and have a problem.  I have 5 indices.  At
>> first, all of them were running without issue.  However, over the last 2
>> weeks, all but one have stopped generating data.  I have run a tcpdump on
>> the logstash server and confirmed that logging packets are reaching the
>> server.  I have also looked into the server's health, issuing the
>> following to check on the cluster:
>>
>> root@logstash:/# curl -XGET 'localhost:9200/_cluster/health?pretty=true'
>> {
>>   "cluster_name" : "es-logstash",
>>   "status" : "yellow",
>>   "timed_out" : false,
>>   "number_of_nodes" : 1,
>>   "number_of_data_nodes" : 1,
>>   "active_primary_shards" : 2791,
>>   "active_shards" : 2791,
>>   "relocating_shards" : 0,
>>   "initializing_shards" : 0,
>>   "unassigned_shards" : 2791
>> }
>> root@logstash:/#
>>
>>
>> Can someone please point me in the right direction on troubleshooting
>> this?
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/df426052-4552-4360-a988-b5f39aeee2c0%40googlegroups.com
>> <https://groups.google.com/d/msgid/elasticsearch/df426052-4552-4360-a988-b5f39aeee2c0%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>  --
>
