I am looking for suggestions on cluster configuration.

I have 2 nodes (one master/data, one data-only), 544 indices, and about 800 million documents.

When I try to index more documents and create more indices, I get the 
error "too many open files".

My nodes' configuration:

CentOS 7
Intel(R) Xeon(R) CPU, 16 cores
62 GB RAM

# ulimit -n
100000
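
The shell's ulimit is not necessarily what the Elasticsearch process was 
actually started with. As a sanity check (a sketch, assuming a node 
listening on the default HTTP port 9200):

# curl 'localhost:9200/_nodes/process?pretty'
# curl 'localhost:9200/_nodes/stats/process?pretty'

The first call reports "max_file_descriptors", the limit the JVM is 
really running with; the second reports "open_file_descriptors", so you 
can see how close each node is to the ceiling.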

In the future I will have many more indices (about 2,000) and documents 
(~5 billion, maybe more).

How can I avoid the "too many open files" error?
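
One obvious workaround is raising the limit further. On CentOS 7 that 
would look roughly like this (a sketch, assuming Elasticsearch was 
installed from the RPM and runs as the "elasticsearch" user; exact file 
names may differ on other setups):

# /etc/security/limits.conf
elasticsearch  soft  nofile  131070
elasticsearch  hard  nofile  131070

# /etc/sysconfig/elasticsearch (read by the RPM's service scripts)
MAX_OPEN_FILES=131070

My understanding is that each shard is a separate Lucene index holding 
its own set of file handles, so with ~2,000 indices the number of shards 
per index will probably matter at least as much as the ulimit itself.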



