Hey Mark,
What are you calling a lot of resources? And how do you go about detecting
it?
Currently I'm using TTLs for rolling old logs out of my cluster. It's pretty
small at the moment (about 40GB of data), but as it gets bigger I want to
know if it will pose a problem.
Thanks
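For context, TTL-based expiry of the kind described above is switched on per type in the index mapping via the `_ttl` field (Elasticsearch 1.x); a minimal sketch, assuming a hypothetical `logs` type and a 24h default:

```json
{
  "logs": {
    "_ttl": {
      "enabled": true,
      "default": "24h"
    }
  }
}
```

Documents of that type then expire 24 hours after indexing unless a per-document `_ttl` overrides the default.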
On Wednesday, June
I thought I replied to this yesterday. Anyways, it was with Kibana. Thank
you for that.
On Wednesday, June 4, 2014 9:29:18 AM UTC-7, Antonio Augusto Santos wrote:
Hey There,
Did you remember to change the timestamping on Kibana so that it would
know you are using an hourly index? Go to the
It depends on a few factors: document size, index size, etc.
If you are using ES for logging data, then best practice is to use
timestamped indexes and then just drop old ones as needed using Curator.
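As a sketch of the timestamped-index approach (without Curator itself, whose command-line flags varied between releases): the index name for the cutoff day is computed, and the matching index can then be dropped over the ES HTTP API. Host and retention below are hypothetical, and the default `logstash-YYYY.MM.dd` naming is assumed.

```shell
#!/bin/sh
# Sketch: drop a timestamped Logstash index older than a retention window.
# Assumes GNU date and the default logstash-%Y.%m.%d index naming;
# ES_HOST and RETENTION_DAYS are hypothetical values.
ES_HOST="localhost:9200"
RETENTION_DAYS=7

CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%Y.%m.%d)
INDEX="logstash-${CUTOFF}"

# Dry run: print the delete request rather than issuing it.
echo "curl -XDELETE http://${ES_HOST}/${INDEX}"
```

Run it from cron once a day; remove the `echo` (dry run) once the printed request looks right.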
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email:
Hello All,
I have a question about hourly sharding with either Logstash or Fluentd.
Since we are, or will be, using a setup called FLEKZ, I am trying to
integrate both Logstash and Fluentd together, which work well with each
other. However, I have a business requirement for a rolling 24-hour
Hey There,
Did you remember to change the timestamping on Kibana so that it would
know you are using an hourly index? Go to the index configuration screen to
see that.
Also, if you have the requirement for a 24-hour rollout, did you try
enabling _ttl?
TTL isn't the best idea as it consumes a lot of resources. You're better
off getting your hourly indexes working.
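For what it's worth, getting hourly indexes working on the Logstash side is just a matter of adding the hour to the index pattern in the elasticsearch output; a sketch (parameter names as in Logstash 1.x, host is hypothetical):

```
output {
  elasticsearch {
    host  => "localhost"
    # One index per hour, named from each event's @timestamp
    index => "logstash-%{+YYYY.MM.dd.HH}"
  }
}
```

Old hours can then be dropped as whole indexes, which is far cheaper than TTL-driven per-document deletes.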
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 5 June 2014 02:29, Antonio Augusto Santos