Re: too many open files problems and suggestions on cluster configuration

2015-05-01 Thread Ann Yablunovskaya
@gmail.com wrote: I am looking for suggestions on cluster configuration. I have 2 nodes (master/data and data), 544 indices, and about 800 million documents. If I try to insert more documents and create more indices, I hit the error "too many open files". My node's

too many open files problems and suggestions on cluster configuration

2015-05-01 Thread Ann Yablunovskaya
I am looking for suggestions on cluster configuration. I have 2 nodes (master/data and data), 544 indices, and about 800 million documents. If I try to insert more documents and create more indices, I hit the error "too many open files". My node's configuration: CentOS 7, Intel(R) Xeon(R) CPU x16, RAM

Re: too many open files problems and suggestions on cluster configuration

2015-05-01 Thread joergpra...@gmail.com
more indices, I hit the error "too many open files". My node's configuration: CentOS 7, Intel(R) Xeon(R) CPU x16, RAM 62 GB, # ulimit -n 10 In the future I will have a lot of indices (about 2000) and a lot of documents (~5 billion or maybe more). How can I avoid the error "too many open

Re: too many open files problems and suggestions on cluster configuration

2015-05-01 Thread David Pilato
), 544 indices, and about 800 million documents. If I try to insert more documents and create more indices, I hit the error "too many open files". My node's configuration: CentOS 7, Intel(R) Xeon(R) CPU x16, RAM 62 GB, # ulimit -n 10 In the future I will have a lot of indices (about 2000

Too many open files issue

2015-01-30 Thread shashi kiran
Hi All, I am new to this group. In one of our projects we are using the Elasticsearch server. Under normal circumstances we do not face this issue, but in production we are seeing SocketException: Too many open files from Elasticsearch. A workaround found after searching the internet was to

Re: Too many open files issue

2015-01-30 Thread vineeth mohan
Hi, you will need to increase the maximum number of open file descriptors at the OS level. You can find more info here - http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html#file-descriptors Thanks, Vineeth Mohan, Elasticsearch
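Vineeth's advice can be sketched as a quick shell session. The 65535 value and the "elasticsearch" user name below are illustrative assumptions, not figures from the thread:

```shell
# Show the current soft limit on open file descriptors.
ulimit -Sn

# Show the hard limit (the ceiling a non-root user may raise the soft limit to).
ulimit -Hn

# Raise the soft limit for this shell session only (must not exceed the hard limit).
ulimit -Sn 65535

# To make the change persistent, add lines like these to
# /etc/security/limits.conf (the user name "elasticsearch" is an assumption):
#   elasticsearch soft nofile 65535
#   elasticsearch hard nofile 65535
```

Note that a session-level `ulimit` does not survive a restart, which is why persisting the limit (and then re-checking it from the Elasticsearch user's login shell) matters.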

Too many open files /var/lib/elasticsearch doesn't exist on the nodes

2014-08-13 Thread José Andrés
shard, message [IndexShardGatewayRecoveryException[[all][4] failed recovery]; nested: EngineCreationFailureException[[all][4] failed to open reader on writer]; nested: FileSystemException[/var/lib/elasticsearch/mycluster/nodes/0/indices/all/4/index/_m4bz_es090_0.tim: Too many open files]; ]]

Re: Too many open files /var/lib/elasticsearch doesn't exist on the nodes

2014-08-13 Thread Adrien Grand
It seems that this particular node is complaining about too many open files. This usually happens if you have very low limits on your operating system and/or if you have many shards on a single node. When this happens, things degrade pretty badly as there is no way to open new files anymore
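Adrien's "many shards on a single node" point can be made concrete with back-of-the-envelope arithmetic. Every figure below is an illustrative assumption; the real per-shard count depends on segment count and merge state:

```shell
# Back-of-the-envelope estimate of open-file demand (all figures assumed):
indices=544          # index count from the original post in this archive
shards_per_index=5   # Elasticsearch's historical default primary count
files_per_shard=50   # assumed open files per shard (segments, translog, ...)

total=$((indices * shards_per_index * files_per_shard))
echo "Estimated open files for index data alone: $total"
```

At these assumed numbers the estimate already lands above a typical 65k descriptor limit, before counting sockets, which is consistent with the failures described in these threads.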

too many open files

2014-07-17 Thread Seungjin Lee
(BasicBoltExecutor.java:43) at backtype.storm.daemon.executor$fn__5641$fn__5653.invoke(executor.clj:690) at backtype.storm.util$async_loop$fn__457.invoke(util.clj:429) at clojure.lang.AFn.run(AFn.java:24) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: too many open files

Re: too many open files

2014-07-17 Thread Andrew Selden
by: java.io.IOException: too many open files at sun.nio.ch.IOUtil.makePipe(Native Method) at sun.nio.ch.EPollSelectorImpl.&lt;init&gt;(EPollSelectorImpl.java:65) at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36) at java.nio.channels.Selector.open

Re: ElasticSearch giving FileNotFoundException: (Too many open files)

2014-04-29 Thread InquiringMind
Prateek, I've collected this from various sources and put it all together. It works fine for me, though I haven't yet dived into ELK: - You may verify the current soft limit by logging in as the user that runs the Elasticsearch JVM and issuing the following command: $ ulimit -Sn Finally,
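The verification step above can be extended to inspect the running JVM directly, since a fresh login shell and an already-running process can have different limits. The process pattern below is an assumption about a standard installation:

```shell
# Inspect the limits of the running Elasticsearch process, if any; the
# class name used as the pgrep pattern is an assumption.
pid=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n 1)
if [ -n "$pid" ]; then
  # The live limits of the JVM are what actually matter at runtime.
  grep 'Max open files' "/proc/$pid/limits"
else
  # No running node found: fall back to this shell's soft limit,
  # as the post suggests.
  ulimit -Sn
fi
```

Checking `/proc/<pid>/limits` catches the common case where limits.conf was edited but the service was never restarted, so the old limit is still in force.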

Re: Too Many Open Files

2014-03-04 Thread smonasco
, Shannon Monasco On Wednesday, January 22, 2014 10:09:42 AM UTC-7, Ivan Brusic wrote: The first thing to do is check if your limits are actually being persisted and used. The elasticsearch site has a good writeup: http://www.elasticsearch.org/tutorials/too-many-open-files/ Second

Re: Too Many Open Files

2014-01-22 Thread Ivan Brusic
The first thing to do is check if your limits are actually being persisted and used. The elasticsearch site has a good writeup: http://www.elasticsearch.org/tutorials/too-many-open-files/ Second, it might be possible that you are reaching the 128k limit. How many shards per node do you have? Do

Re: Too Many Open Files

2014-01-21 Thread smonasco
Sorry, wrong error message. [2014-01-18 06:47:06,232][WARN ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to accept a connection. java.io.IOException: Too many open files at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method

Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
Hi, my model is quite slow with just a few thousand documents. I realised that, when opening a node (node = builder.client(clientOnly).data(!clientOnly).local(local).node(); client = node.client();) from my Java program to ES with such a small model, ES automatically

Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
I guess my problem with an excessive number of sockets could also be a consequence of having 2 JVMs running ES, one embedded in Tomcat and a second embedded in another Java app, as said here:

Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
Happily, the problem of missing highlight records looks to be gone after a config change. * Initially I had 2 ES instances in 2 different apps (a Tomcat and a standalone) configured identically (both listening for incoming TransportClient requests on port 9300 and both opened with client(false)) and a
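One way to realise that kind of fix in configuration is to keep only one process as a data-holding node and make the embedded instance client-only on its own port. A hypothetical elasticsearch.yml fragment for the embedded (ES 1.x-era) node; every value here is illustrative:

```yaml
# Embedded (Tomcat) instance only -- illustrative ES 1.x-era settings:
node.client: true        # client-only node: joins the cluster but holds no data
node.data: false
transport.tcp.port: 9301 # avoid clashing with the standalone node on 9300
```

A client-only node keeps far fewer index files and sockets open than a second data node, which is the resource pressure described in this thread.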