Re: "too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread joergpra...@gmail.com
…create more indices, I will catch error "too many open files". My node's configuration: CentOS 7, Intel(R) Xeon(R) CPU x16, RAM 62 Gb. # ulimit -n 10… In future I will have a lot of indices (about 2000) and a lot of documents…

Re: "too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread Ann Yablunovskaya
…17:05, Ann Yablunovskaya wrote: I am looking for suggestions on cluster configuration. I have 2 nodes (master/data and data), 544 indices, about 800 mil documents. If I try to insert more documents and create more indices, I will catch error…

Re: "too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread David Pilato
…(master/data and data), 544 indices, about 800 mil documents. If I try to insert more documents and create more indices, I will catch error "too many open files". My node's configuration: CentOS 7, Intel(R) Xeon(R) CPU x16, RAM 62 Gb. # ulimit -n …

"too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread Ann Yablunovskaya
I am looking for suggestions on cluster configuration. I have 2 nodes (master/data and data), 544 indices, about 800 mil documents. If I try to insert more documents and create more indices, I will catch error "too many open files". My node's configuration: CentOS 7, Intel(R)…

Re: Too many open files issue

2015-01-30 Thread vineeth mohan
Hi, you will need to increase the maximum number of open file descriptors in the OS. You can find more info here: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html#file-descriptors Thanks, Vineeth Mohan, Elasticsearch consultant…
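The advice above boils down to checking the current limits and raising them persistently. A minimal sketch for Linux follows; the `elasticsearch` user name and the 65535 value are assumptions, not taken from the thread:

```shell
# Check the soft and hard open-file limits for the current shell.
ulimit -Sn   # soft limit: what the process actually gets
ulimit -Hn   # hard limit: the ceiling a non-root user may raise the soft limit to

# To raise the limit persistently for the user that runs Elasticsearch
# (assumed here to be "elasticsearch"), add to /etc/security/limits.conf:
#
#   elasticsearch  soft  nofile  65535
#   elasticsearch  hard  nofile  65535
#
# then start a fresh login session so PAM re-applies the limits.
```

Note that `limits.conf` only takes effect for sessions that go through PAM, which is exactly the pitfall the later messages in this archive discuss.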

Too many open files issue

2015-01-30 Thread shashi kiran
Hi All, I am new to this group. In one of our projects we are using the Elasticsearch server. Under normal circumstances we do not face this issue, but in production we are getting SocketException: Too many open files from Elasticsearch. The workaround found after searching the internet was to increase…

Re: Too many open files /var/lib/elasticsearch doesn't exist on the nodes

2014-08-13 Thread Adrien Grand
It seems that this particular node is complaining about too many open files. This usually happens if you have very low limits on your operating system and/or if you have many shards on a single node. When this happens, things degrade pretty badly as there is no way to open new files anymore
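Since many shards per node is one of the causes named above, one mitigation is to cap the shard count for newly created indices. A hypothetical sketch using the index-template API of that era's Elasticsearch (the template name, host, and values are examples, not from the thread):

```shell
# Apply a catch-all index template so future indices get fewer shards.
# Assumes a cluster listening on localhost:9200 (Elasticsearch 1.x-style API).
curl -XPUT 'http://localhost:9200/_template/fewer_shards' -d '{
  "template": "*",
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}'
```

Fewer shards means fewer Lucene segment files held open per node, which directly lowers file-descriptor pressure.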

Too many open files /var/lib/elasticsearch doesn't exist on the nodes

2014-08-13 Thread José Andrés
…shard, message [IndexShardGatewayRecoveryException[[all][4] failed recovery]; nested: EngineCreationFailureException[[all][4] failed to open reader on writer]; nested: FileSystemException[/var/lib/elasticsearch/mycluster/nodes/0/indices/all/4/index/_m4bz_es090_0.tim: Too many open files]; ]]…

Re: too many open files

2014-07-17 Thread Andrew Selden
…executor$fn__5641$fn__5653.invoke(executor.clj:690) at backtype.storm.util$async_loop$fn__457.invoke(util.clj:429) at clojure.lang.AFn.run(AFn.java:24) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: too many open files at sun.nio.ch.IOUtil.makePipe…

too many open files

2014-07-17 Thread Seungjin Lee
….prepare(BasicBoltExecutor.java:43) at backtype.storm.daemon.executor$fn__5641$fn__5653.invoke(executor.clj:690) at backtype.storm.util$async_loop$fn__457.invoke(util.clj:429) at clojure.lang.AFn.run(AFn.java:24) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: too many open files…

Re: ElasticSearch giving FileNotFoundException: (Too many open files)

2014-04-29 Thread InquiringMind
Prateek, I've collected this from various sources and put it all together. It works fine for me, though I haven't yet dived into ELK: You may verify the current soft limit by logging in as the user that runs the Elasticsearch JVM and issuing the following command: $ ulimit -Sn Finally, ver…

ElasticSearch giving FileNotFoundException: (Too many open files)

2014-04-29 Thread Prateek Lal
…reader on writer]; nested: FileNotFoundException[/usr/local/elasticsearch-0.90.9/data/elasticsearch/nodes/0/indices/logstash-2014.04.20/1/index/_f0r_es090_0.doc (Too many open files)]; ]] [2014-04-29 15:13:00,033][WARN ][cluster.action.shard ] [Whitemane, Aelfyre] [logstash-2014.04.20][1…

Re: Too Many Open Files

2014-03-04 Thread smonasco
…Thank you, Shannon Monasco. On Wednesday, January 22, 2014 10:09:42 AM UTC-7, Ivan Brusic wrote: The first thing to do is check if your limits are actually being persisted and used. The elasticsearch site has a good writeup: http://www.elasticsearch.org/tutorials/too-many-open-files/ …

Re: Too Many Open Files

2014-01-22 Thread Ivan Brusic
The first thing to do is check if your limits are actually being persisted and used. The elasticsearch site has a good writeup: http://www.elasticsearch.org/tutorials/too-many-open-files/ Second, it might be possible that you are reaching the 128k limit. How many shards per node do you have? Do
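To answer the "are your limits actually being used?" question directly from the cluster, the nodes stats API reports per-node descriptor usage. A sketch, assuming a cluster on localhost:9200 (the field names below match the process stats section of the API; verify against your version's docs):

```shell
# Ask each node how many file descriptors it currently has open,
# and what limit the JVM process actually ended up with.
# Requires a running Elasticsearch cluster on localhost:9200.
curl -s 'http://localhost:9200/_nodes/stats/process?pretty' \
  | grep -E 'open_file_descriptors|max_file_descriptors'
```

If `max_file_descriptors` comes back low despite your `limits.conf` changes, the limit is being lost somewhere between login and process start, which is the PAM/service-wrapper issue discussed in the 2013 thread below.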

Re: Too Many Open Files

2014-01-21 Thread smonasco
Sorry, wrong error message. [2014-01-18 06:47:06,232][WARN ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to accept a connection. java.io.IOException: Too many open files at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at…

Too Many Open Files

2014-01-21 Thread smonasco
…java.io.IOException: Too many open files at sun.nio.ch.IOUtil.makePipe(Native Method) at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65) at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36) at java.nio.channels.Selector.open…

Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
Happily, the problem of missing highlight records appears to be gone after a config change. Initially I had 2 ES instances in 2 different apps (a Tomcat and a standalone), configured identically (both listening for incoming TransportClient requests on port 9300 and both opened with client(false)), and a third…

Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
I guess my problem with an excessive number of sockets could also be a consequence of having 2 JVMs running ES, one embedded in Tomcat, a second embedded in another Java app, as said here: https://groups.google.com/forum/?hl=en-GB#!topicsearchin/elasticsearch/scale%7Csort:date%7Cspell:true/elasticsea…

Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
Hi, my model is quite slow with just a few thousand documents. I realised that, when opening a node = builder.client(clientOnly).data(!clientOnly).local(local).node(); client = node.client(); from my Java program to ES with such a small model, ES automatically…

Re: Too many open files: does a "service wrapper" end that issue?

2013-12-30 Thread joergpra...@gmail.com
The log says the kernel OOM killer has stopped your process because it got too big. You have to configure less memory for the JVM or put in more RAM. Jörg -- You received this message because you are subscribed to the Google Groups "elasticsearch" group. To unsubscribe from this group and…
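For the Elasticsearch versions of that era (0.90/1.x), the JVM heap was capped via an environment variable read by the startup script. A config fragment, with the 4g value as an example only; size it well below physical RAM so the OS page cache and other processes fit:

```shell
# Cap the Elasticsearch JVM heap so the process stays within physical
# RAM and the kernel OOM killer leaves it alone (ES 0.90/1.x era).
export ES_HEAP_SIZE=4g
bin/elasticsearch
```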

Re: Too many open files: does a "service wrapper" end that issue?

2013-12-30 Thread Jack Park
These lines in that log: Dec 30 03:37:11 bloomer kernel: [24733.355784] [] oom_kill_process+0x81/0x180 Dec 30 03:37:11 bloomer kernel: [24733.355786] [] __out_of_memory+0x58/0xd0 Dec 30 03:37:11 bloomer kernel: [24733.355788] [] out_of_memory+0x86/0x1c0 suggest something about out of memory. On…

Re: Too many open files: does a "service wrapper" end that issue?

2013-12-30 Thread joergpra...@gmail.com
Have you checked /var/log/messages for a "killed" message? Jörg

Re: Too many open files: does a "service wrapper" end that issue?

2013-12-30 Thread Jack Park
Running bare on various Ubuntu boxes. Interesting finding, still deep in testing: for Ubuntu, what *does* work is to add this line to elasticsearch.in.sh: ulimit -n 32000 (or whatever). Doing what the ES tutorial says seems necessary but insufficient on Ubuntu: even after doing…
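The workaround above amounts to raising the limit from inside the startup script itself, so it applies even when PAM never gets a chance to. A fragment of elasticsearch.in.sh, using the 32000 value from the post:

```shell
# Fragment to append to elasticsearch.in.sh: raise the open-file limit
# for this shell (and the JVM it launches) regardless of PAM settings.
# Only works if the hard limit (ulimit -Hn) already permits this value.
ulimit -n 32000
```

The caveat in the comment matters: a non-root process cannot raise its soft limit above the hard limit, so the hard limit still has to be set system-wide.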

Re: Too many open files: does a "service wrapper" end that issue?

2013-12-29 Thread joergpra...@gmail.com
Are you running ES on a virtual machine in a guest OS? The service wrapper https://github.com/elasticsearch/elasticsearch-servicewrapper works well, but it is not related to file descriptor limit settings. That is a matter for the host OS. Jörg

Re: Too many open files: does a "service wrapper" end that issue?

2013-12-29 Thread Jack Park
…Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs On 30 Dec 2013 at 03:01, Jack Park wrote: The tutorial at http://www.elasticsearch.org/tutorials/too-many-open-files/ says this: If sudo -u elasticsearch -s "ulimit -Sn"…

Re: Too many open files: does a "service wrapper" end that issue?

2013-12-29 Thread Jack Park
…current Max open files setting when starting elasticsearch using https://github.com/elasticsearch/elasticsearch/issues/483 HTH -- David ;-) Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs On 30 Dec 2013 at 03:01, Jack Park wrote:…

Re: Too many open files: does a "service wrapper" end that issue?

2013-12-29 Thread David Pilato
…/elasticsearch/issues/483 HTH -- David ;-) Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs On 30 Dec 2013 at 03:01, Jack Park wrote: The tutorial at http://www.elasticsearch.org/tutorials/too-many-open-files/ says this: If sudo -u elasticsearch -s "ulimit -Sn" shows 32000 but…

Too many open files: does a "service wrapper" end that issue?

2013-12-29 Thread Jack Park
The tutorial at http://www.elasticsearch.org/tutorials/too-many-open-files/ says this: If sudo -u elasticsearch -s "ulimit -Sn" shows 32000 but you still have a low limit when you run ElasticSearch, you’re probably running it through another program that doesn’t support PAM: a frequent…
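The PAM caveat the tutorial raises can be checked directly: /etc/security/limits.conf only applies to sessions that load pam_limits. A sketch for Debian/Ubuntu-style systems; the exact file names vary by distro, so treat the paths below as assumptions to verify locally:

```shell
# Check whether pam_limits is enabled for the session types that start
# Elasticsearch. On Debian/Ubuntu the relevant lines usually live in:
#   /etc/pam.d/common-session                ->  session required pam_limits.so
#   /etc/pam.d/common-session-noninteractive ->  session required pam_limits.so
# If no match appears, limits.conf is silently ignored for those sessions.
grep -R pam_limits /etc/pam.d/
```

If the process is started by an init script or service wrapper that bypasses PAM entirely, no limits.conf entry will help, which is why the in-script `ulimit -n` workaround from this thread works where the tutorial's advice alone does not.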