Hello,

We have a 10-node Elasticsearch cluster that is receiving roughly 10k log 
lines per second from our application.

Each Elasticsearch node has 132 GB of memory, with a 48 GB heap. The disk 
subsystem is not great, but it seems to be keeping up. (This could be an 
issue, but I'm not sure that it is.)

The log path is: 

app server -> Redis (via Logstash) -> Logstash filters (3 dedicated boxes) 
-> elasticsearch_http 
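
For context, the Logstash shippers on the app servers just push events into 
Redis. A rough, simplified sketch (Logstash 1.x syntax; the file input, paths 
and hostnames here are placeholders, not our actual config):

# shipper.conf (one per app server)
input {
  file {
    path => "/var/log/app/*.log"
    type => "app"
  }
}
output {
  redis {
    host => "redis.example.com"
    data_type => "list"
    key => "logstash"
  }
}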

 
We currently bulk import from Logstash at 5k documents per flush to keep up 
with the incoming volume.
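
On the indexer side, the three filter boxes read from Redis, run the filters, 
and bulk-write to Elasticsearch via the elasticsearch_http output. Another 
simplified sketch (Logstash 1.x; our actual filters are omitted and the 
hostnames are placeholders) showing where the 5k flush size is set:

# indexer.conf (one per filter box)
input {
  redis {
    host => "redis.example.com"
    data_type => "list"
    key => "logstash"
  }
}
filter {
  # grok / date filters go here (omitted)
}
output {
  elasticsearch_http {
    host => "es01.example.com"
    flush_size => 5000   # documents per bulk request
  }
}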

Here are our non-default Elasticsearch settings:

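# Index buffer and translog tuning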
indices.memory.index_buffer_size: 50%
index.translog.flush_threshold_ops: 50000
# Refresh tuning.
index.refresh_interval: 15s
# Field Data cache tuning
indices.fielddata.cache.size: 24g
indices.fielddata.cache.expire: 10m
# Segment Merging Tuning
index.merge.policy.max_merged_segment: 15g
# Thread Tuning
threadpool:
    bulk:
        type: fixed
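        # -1 means the bulk queue is unbounded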
        queue_size: -1

We have not been able to keep this cluster up for more than a week, and it 
seems to crash for no obvious reason.

It seems like one node starts having issues and then takes the entire 
cluster down with it.

Does anyone from the community have any experience with this kind of setup?

Thanks in advance,
Rob


