Re: "too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread joergpra...@gmail.com
The number of open files does not depend on the number of documents. A shard does not come for free: each shard can take around 150 open file descriptors (sockets, segment files), and up to 400-500 if it is actively being indexed into. Keep an eye on the number of shards; if you have 5 shards per index and 2000 indices …
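To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python using the per-shard figures above (~150 descriptors idle, 400-500 while indexing). The indexing_fraction parameter and the even-balance assumption are simplifications for illustration, not measurements:

    # Rough per-node file-descriptor demand for an evenly balanced cluster.
    # Per-shard costs are the figures quoted in this thread, not exact values.
    FD_PER_SHARD_IDLE = 150
    FD_PER_SHARD_INDEXING = 500

    def estimate_fds_per_node(indices, shards_per_index, replicas,
                              data_nodes, indexing_fraction=0.1):
        total_shards = indices * shards_per_index * (1 + replicas)
        shards_per_node = total_shards / data_nodes
        hot = shards_per_node * indexing_fraction * FD_PER_SHARD_INDEXING
        cold = shards_per_node * (1 - indexing_fraction) * FD_PER_SHARD_IDLE
        return int(hot + cold)

    # The numbers from this thread: 544 indices, default 5 shards plus
    # 1 replica, spread over 2 data nodes.
    print(estimate_fds_per_node(544, 5, 1, 2))  # ~503,000 descriptors

Roughly half a million descriptors per node dwarfs the usual default ulimit of 1024, and even the commonly recommended 65535, which is why the error shows up long before disk or RAM run out.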

Re: "too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread Ann Yablunovskaya
How to calculate the optimal number of shards?

On Friday, 1 May 2015 at 18:21:47 UTC+3, David Pilato wrote:
>
> Add more nodes or reduce the number of shards per node.
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
> On 1 May 2015 at 17:05, Ann Yablunovskaya …
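There is no single formula, but one way is to invert the file-descriptor estimate: start from the nofile limit a node is allowed, reserve headroom for sockets and merges, and see how many shards fit. A rough sketch; the 50% headroom and the 150-descriptors-per-shard figure are assumptions carried over from the numbers earlier in this thread, not an official rule:

    FD_PER_SHARD = 150  # idle cost per shard, from earlier in the thread

    def max_shards_per_node(fd_limit, headroom=0.5):
        # Keep half the limit free for sockets, translog and merges.
        return int(fd_limit * headroom / FD_PER_SHARD)

    def shards_per_index(fd_limit, data_nodes, indices, replicas=1):
        cluster_budget = max_shards_per_node(fd_limit) * data_nodes
        return max(1, cluster_budget // (indices * (1 + replicas)))

    print(max_shards_per_node(65536))        # ~218 shards per node
    print(shards_per_index(65536, 2, 544))   # -> 1 shard per index

With 544 indices on 2 nodes, even 1 shard per index plus a replica (1088 shards total, 544 per node) overshoots a 218-shard budget, which points back to David's first option: add more nodes, or consolidate indices.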

Re: "too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread David Pilato
Add more nodes or reduce the number of shards per node.

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

> On 1 May 2015 at 17:05, Ann Yablunovskaya wrote:
>
> I am looking for suggestions on cluster configuration.
>
> I have 2 nodes (master/data and data), 544 indices, about 800 mil documents …
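Reducing shards has to happen at index-creation time, since the number of primary shards of an existing index cannot be changed (you would reindex into a new index instead). A minimal sketch of creating a new index with one primary shard instead of the default five, using Python's requests library; the URL and index name are placeholders:

    import requests

    # Create a new index with 1 primary shard and 1 replica instead of
    # the default 5 primaries, via the create-index API.
    resp = requests.put(
        "http://localhost:9200/myindex-2015.05",
        json={
            "settings": {
                "number_of_shards": 1,
                "number_of_replicas": 1,
            }
        },
    )
    print(resp.status_code, resp.json())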

"too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread Ann Yablunovskaya
I am looking for suggestions on cluster configuration.

I have 2 nodes (master/data and data), 544 indices, and about 800 million documents. If I try to insert more documents and create more indices, I get the error "too many open files".

My node configuration:
CentOS 7
Intel(R) Xeon(R) CPU x16
RAM …
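Two quick checks help confirm the diagnosis before touching the cluster: the open-file limit the OS grants the process, and what each node itself reports. A sketch, assuming a node on the default local port (on CentOS 7 the persistent fix is a nofile entry for the elasticsearch user in /etc/security/limits.conf, or LimitNOFILE in the systemd unit):

    import resource
    import requests

    # Soft/hard open-file limits for the current process
    # (the same values "ulimit -n" reports for the shell).
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("local nofile: soft=%d hard=%d" % (soft, hard))

    # What each Elasticsearch node reports. The exact location of
    # max_file_descriptors varies between ES versions, hence .get().
    stats = requests.get("http://localhost:9200/_nodes/stats/process").json()
    for node in stats["nodes"].values():
        proc = node["process"]
        print(node["name"],
              "open:", proc["open_file_descriptors"],
              "max:", proc.get("max_file_descriptors"))

If the reported open_file_descriptors sits near the limit while the document count is still growing, the limit (or the shard count) is the bottleneck, not the data volume.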