Re: too many open files problems and suggestions on cluster configuration

2015-05-01 Thread Ann Yablunovskaya
How do I calculate the best number of shards?

On Friday, 1 May 2015 at 18:21:47 UTC+3, David Pilato wrote:

 Add more nodes or reduce the number of shards per node.

 --
 David ;-)
 Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

 On 1 May 2015 at 17:05, Ann Yablunovskaya lad@gmail.com wrote:

 I am looking for suggestions on cluster configuration. [...]








too many open files problems and suggestions on cluster configuration

2015-05-01 Thread Ann Yablunovskaya
I am looking for suggestions on cluster configuration.

I have 2 nodes (master/data and data), 544 indices, and about 800 million documents.

If I try to insert more documents and create more indices, I hit the
"too many open files" error.

My node's configuration:

CentOS 7
Intel(R) Xeon(R) CPU x16
RAM 62 Gb

# ulimit -n
10

In the future I will have a lot of indices (about 2000) and a lot of documents
(~5 billion or maybe more).

How can I avoid the "too many open files" error?





Re: too many open files problems and suggestions on cluster configuration

2015-05-01 Thread joergpra...@gmail.com
The number of open files does not depend on the number of documents.

A shard does not come for free. Each shard can take around ~150 open file
descriptors (sockets, segment files), and up to 400-500 if it is actively
being indexed.

Take care with the number of shards: if you have 5 shards per index and 2000
indices per node, you would have to provide for 10k * 150 open file
descriptors. That is a challenge on a single RHEL 7 system, which provides
131072 file descriptors by default (cat /proc/sys/fs/file-max), so you would
have to change the system limits - and the default is already very high.
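For illustration, checking and raising those limits on RHEL/CentOS 7 looks
roughly like this (run as root; 2000000 is only an example value, size it for
your own workload):

$ cat /proc/sys/fs/file-max                          # system-wide maximum
$ sysctl -w fs.file-max=2000000                      # raise it for the running system
$ echo 'fs.file-max = 2000000' >> /etc/sysctl.conf   # persist across reboots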

I recommend using fewer shards and redesigning the application for fewer
indices (or even a single index) if you are limited to only 2 nodes. Have a
look at shard routing and index aliasing to see whether they help:

http://www.elastic.co/guide/en/elasticsearch/guide/master/kagillion-shards.html

http://www.elastic.co/guide/en/elasticsearch/guide/master/faking-it.html
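For example, a single shared index with one filtered alias per client, as in
the "kagillion shards" and "faking it" chapters above, can stand in for many
small indices (the index, alias, and field names here are made up for
illustration):

$ curl -XPOST 'localhost:9200/_aliases' -d '{
  "actions": [
    { "add": {
        "index":   "logs",
        "alias":   "logs_client1",
        "routing": "client1",
        "filter":  { "term": { "client": "client1" } }
    } }
  ]
}'

Searches against logs_client1 are then routed to a single shard and see only
that client's documents, so thousands of "logical" indices do not need
thousands of physical ones.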

Jörg



On Fri, May 1, 2015 at 5:05 PM, Ann Yablunovskaya lad.sh...@gmail.com
wrote:

 I am looking for suggestions on cluster configuration. [...]




Re: too many open files problems and suggestions on cluster configuration

2015-05-01 Thread David Pilato
Add more nodes or reduce the number of shards per node.

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

 On 1 May 2015 at 17:05, Ann Yablunovskaya lad.sh...@gmail.com wrote:
 
 I am looking for suggestions on cluster configuration. [...]



Too many open files issue

2015-01-30 Thread shashi kiran
Hi All,

I am new to this group. In one of our projects we are using the Elasticsearch
server. Under normal circumstances we do not face this issue, but in
production we get SocketException: Too many open files from Elasticsearch.
The workaround we found after searching the internet was to increase the
open-file count, but even then we face this issue. We use JestClient as the
client to connect to the Elasticsearch server, from a web service that
creates the client object and executes the Elasticsearch query. It would be
of great help if anyone could help with this issue; basically I am trying to
understand the actual root cause and solution.

Thanks in advance.
shashi



Re: Too many open files issue

2015-01-30 Thread vineeth mohan
Hi ,

You will need to increase the maximum number of open file descriptors at the
OS level.
You can find more info here -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html#file-descriptors
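For example, on 1.x you can check the limit each node actually ended up with,
and how many descriptors it currently holds open:

$ curl 'localhost:9200/_nodes/process?pretty'        # shows max_file_descriptors
$ curl 'localhost:9200/_nodes/stats/process?pretty'  # shows open_file_descriptors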

Thanks
 Vineeth Mohan,
 Elasticsearch consultant,
 qbox.io ( Elasticsearch service provider http://qbox.io)


On Fri, Jan 30, 2015 at 11:49 PM, shashi kiran shuklacs...@gmail.com
wrote:

 Hi All, [...]




Too many open files /var/lib/elasticsearch doesn't exist on the nodes

2014-08-13 Thread José Andrés
Can someone elaborate as to why the exception below is thrown?

[2014-08-11 11:27:41,189][WARN ][cluster.action.shard ] [mycluster]
[all][4] received shard failed for [all][4], node[MLNWasasasWi2V58hRA93mg],
[P], s[INITIALIZING], indexUUID [KSScFASsasas6yTEcK7HqlmA], reason [Failed
to start shard, message [IndexShardGatewayRecoveryException[[all][4] failed
recovery]; nested: EngineCreationFailureException[[all][4] failed to open
reader on writer]; nested:
FileSystemException[/var/lib/elasticsearch/mycluster/nodes/0/indices/all/4/index/_m4bz_es090_0.tim:
Too many open files]; ]]

Also, /var/lib/elasticsearch doesn't exist on the nodes.

Thank you very much for your feedback.



Re: Too many open files /var/lib/elasticsearch doesn't exist on the nodes

2014-08-13 Thread Adrien Grand
It seems that this particular node is complaining about too many open
files. This usually happens if you have very low limits on your operating
system and/or many shards on a single node. When this happens, things
degrade pretty badly, as there is no way to open new files anymore.

Please see
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html#file-descriptors
for more information.
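To see how close a node is to its limit, you can count the descriptors the
process currently holds (a sketch; adjust the pgrep pattern to match how you
start Elasticsearch):

$ ES_PID=$(pgrep -f elasticsearch)
$ ls /proc/$ES_PID/fd | wc -l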


On Wed, Aug 13, 2014 at 9:23 PM, José Andrés japmycr...@gmail.com wrote:

 Can someone elaborate as to why the exception below is thrown? [...]




-- 
Adrien Grand



too many open files

2014-07-17 Thread Seungjin Lee
Hello, I'm using Elasticsearch with Storm and the Java TransportClient.

I have a total of 128 threads across machines which communicate with the
Elasticsearch cluster.

From time to time, the error below occurs:



org.elasticsearch.common.netty.channel.ChannelException: Failed to create a selector.
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:100)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:52)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:143)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:81)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:39)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:33)
    at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:254)
    at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
    at org.elasticsearch.transport.TransportService.doStart(TransportService.java:92)
    at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
    at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:189)
    at org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:125)
    at com.naver.labs.nelo2.notifier.utils.ElasticSearchUtil.prepareElasticSearch(ElasticSearchUtil.java:30)
    at com.naver.labs.nelo2.notifier.bolt.PercolatorBolt.prepare(PercolatorBolt.java:48)
    at backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:43)
    at backtype.storm.daemon.executor$fn__5641$fn__5653.invoke(executor.clj:690)
    at backtype.storm.util$async_loop$fn__457.invoke(util.clj:429)
    at clojure.lang.AFn.run(AFn.java:24)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: too many open files
    at sun.nio.ch.IOUtil.makePipe(Native Method)
    at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
    at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
    at java.nio.channels.Selector.open(Selector.java:227)
    at org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.open(SelectorUtil.java:63)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:341)
    ... 22 more


The code itself is very simple.

I create the client as follows:

Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", clusterName)
        .put("client.transport.sniff", true)
        .build();
List<InetSocketTransportAddress> transportAddressList =
        new ArrayList<InetSocketTransportAddress>();
for (String host : ESHost) {
    transportAddressList.add(new InetSocketTransportAddress(host, ESPort));
}
return new TransportClient(settings)
        .addTransportAddresses(transportAddressList.toArray(
                new InetSocketTransportAddress[transportAddressList.size()]));

and on each execution it percolates as follows:

return client.preparePercolate()
        .setIndices(indexName)
        .setDocumentType(projectName)
        .setPercolateDoc(docBuilder().setDoc(log))
        .setRouting(projectName)
        .setPercolateFilter(FilterBuilders.termFilter("projects", projectName).cache(true))
        .execute().actionGet();


The ES cluster consists of 5 machines with almost default settings.

What can be the cause of this problem?



Re: too many open files

2014-07-17 Thread Andrew Selden

This is a fairly common problem and not necessarily specific to Elasticsearch.
It is simple to solve: on Linux you can increase the operating system's max
file descriptor limit, and other Unix-like operating systems have the same
concept. You can find out how to do this for your specific Linux distribution
with a little googling on "linux max file descriptor".
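As a sketch, on most Linux distributions that means adding lines like the
following to /etc/security/limits.conf ("esuser" below is a placeholder for
whatever account runs Elasticsearch), then logging in again as that user and
restarting the node:

esuser  soft  nofile  65536
esuser  hard  nofile  65536

$ ulimit -n    # run as esuser afterwards to verify the new limit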

Cheers.


On Jul 17, 2014, at 9:40 PM, Seungjin Lee sweetest0...@gmail.com wrote:

 Hello, I'm using Elasticsearch with Storm and the Java TransportClient. [...]

Re: ElasticSearch giving FileNotFoundException: (Too many open files)

2014-04-29 Thread InquiringMind
Prateek,

I've collected this from various sources and put it all together. Works 
fine for me, though I haven't yet dived into ELK:

-

You may verify the current soft limit by logging in as the user that runs
the Elasticsearch JVM and issuing the following command:

$ ulimit -Sn

Finally, verify that Elasticsearch is indeed able to open up to this number
of file handles by checking the max_file_descriptors value for each node via
the _nodes API:

$ curl 'localhost:9200/_nodes/process?pretty' && echo
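The output should look roughly like this (abbreviated; the node ID and values
below are illustrative):

{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "node-id-here" : {
      "process" : {
        "refresh_interval" : 1000,
        "id" : 12345,
        "max_file_descriptors" : 65536,
        "mlockall" : false
      }
    }
  }
}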

ON LINUX

Update the /etc/security/limits.conf file and ensure that it contains the
following two lines and all is well again:

username  hard  nofile  65537
username  soft  nofile  65536

Of course, replace the 'username' user name to reflect the user on your
own machines that is running Elasticsearch.

ON SOLARIS

In Solaris 9, the default limit of file descriptors per process was raised 
from 1024 to 65536.

ON MAC OS X

Create or edit /etc/launchd.conf and add the following line:

limit maxfiles 40 40

Then shut down OS X and restart the Mac. Verify the settings by opening a
new terminal and running either or both of the commands below:

$ launchctl limit maxfiles
$ ulimit -a

You should see maxfiles set to 40 in the output of both of those
commands.

-

Note that this is more of a Unix thing than an Elasticsearch thing, so if
you are still having issues you may wish to ask on a newsgroup that
specifically targets your operating system.

Also note that it's not good practice to run an application as root. Too
much chance to wipe out something from which you could never recover, and
all that. I remember that once our operations folks started ES as root, and
after that the data files were owned by root and the non-root ES user had
trouble starting, with locking errors all over the logs. I ended up
performing a recursive chown on the ES filesystem, and when restarted as the
non-root user all was well again.

Brian



Re: Too Many Open Files

2014-03-04 Thread smonasco
Sorry to have taken so long to reply. I went ahead and followed your link;
I'd been there before, but decided to give it a deeper look. What I found,
however, was that bigdesk told me the max open files the process was using,
and from there I was able to determine that my settings in limits.conf were
not being honored, even though if I switched to the context Elasticsearch
was running under I would get the appropriate limits.

I then dug into the service script and found someone had dropped a ulimit
statement into it that was overriding the limits.conf setting.
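For anyone hitting the same thing: the limit the running process actually got
(as opposed to what limits.conf says) can be read straight from /proc, which
exposes exactly this kind of override (a sketch; adjust the pgrep pattern to
match how your ES process is started):

$ grep 'Max open files' /proc/$(pgrep -f elasticsearch)/limits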

Thank you,
Shannon Monasco



On Wednesday, January 22, 2014 10:09:42 AM UTC-7, Ivan Brusic wrote:

 The first thing to do is check whether your limits are actually being
 persisted and used. [...]



 On Tue, Jan 21, 2014 at 7:42 AM, smonasco smon...@gmail.com wrote:

 Sorry, wrong error message. [...]

Re: Too Many Open Files

2014-01-22 Thread Ivan Brusic
The first thing to do is check whether your limits are actually being
persisted and used. The elasticsearch site has a good writeup:
http://www.elasticsearch.org/tutorials/too-many-open-files/

Second, it might be possible that you are reaching the 128k limit. How many
shards per node do you have? Do you have non-standard merge settings? You
can use the status API to find out how many open files you have. I do not
have a link since it might have changed since 0.19.
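On more recent releases the equivalent check is the nodes stats API, which
reports open_file_descriptors per node (the exact endpoint may differ on
0.19):

$ curl 'localhost:9200/_nodes/stats/process?pretty'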

Also, be aware that it is not possible to do rolling upgrades between nodes
that have different major versions of elasticsearch. The underlying data will
be fine and does not need to be upgraded, but the nodes will not be able to
communicate with each other.

Cheers,

Ivan



On Tue, Jan 21, 2014 at 7:42 AM, smonasco smona...@gmail.com wrote:

 Sorry, wrong error message. [...]



Re: Too Many Open Files

2014-01-21 Thread smonasco
Sorry, wrong error message.

[2014-01-18 06:47:06,232][WARN ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to accept a connection.
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:226)
    at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:227)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)


The other posted error message is newer and seems to follow the "too many
open files" error message.

--Shannon Monasco

On Tuesday, January 21, 2014 8:35:18 AM UTC-7, smonasco wrote:

 Hi,

 I am using version 0.19.3. I have the nofile limit set to 128K and am
 getting errors like:

 [2014-01-18 06:52:54,857][WARN ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to initialize an accepted socket.
 org.elasticsearch.common.netty.channel.ChannelException: Failed to create a selector.
     at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.start(AbstractNioWorker.java:154)
     at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.register(AbstractNioWorker.java:131)
     at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.registerAcceptedChannel(NioServerSocketPipelineSink.java:269)
     at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:231)
     at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
     at java.lang.Thread.run(Thread.java:722)
 Caused by: java.io.IOException: Too many open files
     at sun.nio.ch.IOUtil.makePipe(Native Method)
     at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
     at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
     at java.nio.channels.Selector.open(Selector.java:227)
     at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.start(AbstractNioWorker.java:152)
     ... 7 more

 I am aware that version 0.19.3 is old. We have been having trouble getting
 our infrastructure group to build out new nodes so we can do a rolling
 upgrade with testing for both versions. I am now setting the limit to
 1048576 as per
 http://stackoverflow.com/questions/1212925/on-linux-set-maximum-open-files-to-unlimited-possible;
 however, I'm concerned this may cause other issues.

 If anyone has any suggestions I'd love to hear them. I am using this as
 fuel for the "please pay attention and get us the support we need so we can
 upgrade" campaign.

 --Shannon Monasco




Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
Hi, my model is quite slow with just a few thousand documents.

I realised that, when opening a

node = 
builder.client(clientOnly).data(!clientOnly).local(local).node();
client = node.client();

from my Java program to ES with such a small model, ES automatically
creates 10 sockets. Coincidentally, I have 10 shards (?).

* Is this the expected behavior?
* Can I reduce the number of ES shards dynamically to reduce the number of
sockets, or should I redeploy my ES install?
* By opening other connections I eventually get up to 200 simultaneous open
sockets, and I am afraid that, when fetching highlight information, some of
the results are randomly being lost. Can these missing results somehow be a
consequence of too large a number of open sockets?

Thanks for your pointers.



Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
I guess my problem with an excessive number of sockets could also be a
consequence of having 2 JVMs running ES (one embedded in Tomcat, a second
embedded in another Java app), as said here:
https://groups.google.com/forum/?hl=en-GB#!topicsearchin/elasticsearch/scale%7Csort:date%7Cspell:true/elasticsearch/m9IWpGzoLLE

Is there any experience running a single embedded ES (as jar files), for
example in Tomcat's lib folder, consumed by several Tomcat apps and by other
standalone apps in different JVMs?

Any opinion on this configuration as a starting point?



Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
Happily, the problem of missing highlight records looks to be gone after
making a config change.

* Initially I had 2 ES nodes in 2 different apps (a Tomcat and a standalone)
configured identically (both listening for incoming TransportClient requests
on port 9300 and both opened with client(false)), and a third ES client
opened with new TransportClient() connecting to them to fetch highlighting
info. It looks like this third ES client was randomly losing highlighting
records. (?)

* What I did to fix it was a configuration change: have only one
client(false) ES node listening for TransportClients, and 2 new
TransportClient()s connecting to it.

It looks like this change fixes the issue, which was some kind of coupling
between the two client(false) ES nodes listening on port 9300.

Regards
