Hi, I have a five-node cluster that I'm using as part of an ELK stack. Most
of the time it works great, but today we saw a spike in writes on one of
the nodes, and around the same time indexing on that node spiked too --
which makes sense: more writes mean more indexing. None of the other
servers were particularly taxed, though. We don't have anything writing to
Elasticsearch other than Logstash, and normally it does a pretty good job
of load balancing.

Any idea where I could start looking for clues? I was looking through the
logs, but there doesn't seem to be much information in there; most of it
is just debug-level errors like this:

[logstash-2015.02.13][3] failed to execute bulk item (index) index {[logstash-2015.02.13]

They show up pretty consistently, so they don't seem like anything to be
worried about. Where else could I look to see why only one server is
getting all the writes, and how do I mitigate it if it happens again? It
ended up making Elasticsearch unresponsive to the data Logstash was
sending.
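For anyone hitting the same thing, a few diagnostic commands I've been pointed at since. This is just a sketch of where to look, assuming the cluster's HTTP API is reachable on localhost:9200 (adjust the host for your setup):

```shell
# Which shards live on which node? If the current logstash index's
# primaries are concentrated on one node, that node takes the writes.
curl 'localhost:9200/_cat/shards/logstash-*?v'

# Shard counts and disk usage per node, to spot skewed allocation.
curl 'localhost:9200/_cat/allocation?v'

# Bulk thread-pool activity and rejections per node -- the
# "failed to execute bulk item" messages often line up with
# bulk.rejected climbing on the overloaded node.
curl 'localhost:9200/_cat/thread_pool?v&h=host,bulk.active,bulk.queue,bulk.rejected'

# What the busy node is actually spending CPU on.
curl 'localhost:9200/_nodes/hot_threads'
```

If the shard listing shows the hot index's primaries all on one node, rebalancing the shards (or raising the index's shard count so they spread out) would be the first thing I'd try; the hot_threads output should confirm whether the node is busy indexing or doing something else entirely.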
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/2570e228-1d3e-48e0-af8d-0946b7c58197%40googlegroups.com.