Re: "too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread joergpra...@gmail.com
The number of open files does not depend on the number of documents.

A shard does not come for free. Each shard can take around 150 open file
descriptors (sockets, segment files), and up to 400-500 if it is actively
being indexed.

Take care of the number of shards: with 5 shards per index and 2000
indices per node, you would have to provide for 10k shards * 150 open file
descriptors, i.e. around 1.5 million. That is a challenge on a single RHEL 7
system, which provides 131072 file descriptors by default (cat
/proc/sys/fs/file-max), so you would have to raise the system limits even
though the default is already very high.
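
As a rough sanity check of the arithmetic above (the 150 figure is only an
estimate, and these commands assume the node answers HTTP on
localhost:9200):

# shards currently allocated per node; multiply by ~150 for an estimate
$ curl -s 'localhost:9200/_cat/shards?h=node' | sort | uniq -c

# per-process limit of the ES user vs. the system-wide maximum
$ ulimit -n
$ cat /proc/sys/fs/file-max

# what the running node actually got
$ curl -s 'localhost:9200/_nodes/process?pretty' | grep max_file_descriptors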

I recommend using fewer shards and redesigning the application for fewer
indices (or even a single index) if you are limited to only 2 nodes. You
can look at shard routing and index aliasing to see if they help:

http://www.elastic.co/guide/en/elasticsearch/guide/master/kagillion-shards.html

http://www.elastic.co/guide/en/elasticsearch/guide/master/faking-it.html
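
As a sketch of the "faking it" approach (the index, alias, and field names
below are made up, not taken from your setup): keep one shared index and
give each logical tenant a filtered alias with its own routing value:

$ curl -XPOST 'localhost:9200/_aliases' -d '{
  "actions": [
    { "add": {
        "index":   "shared_data",
        "alias":   "tenant_1",
        "routing": "tenant_1",
        "filter":  { "term": { "tenant_id": "tenant_1" } }
    }}
  ]
}'

Clients then index and search through the alias as if it were a dedicated
index, while only one set of shards (and file descriptors) exists.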

Jörg



On Fri, May 1, 2015 at 5:05 PM, Ann Yablunovskaya 
wrote:

> I am looking for suggestions on cluster configuration.
>
> I have 2 nodes (master/data and data), 544 indices, about 800 mil
> documents.
>
> If I try to insert more documents and create more indices, I will
> catch error "too many open files".
>
> My node's configuration:
>
> CentOS 7
> Intel(R) Xeon(R) CPU x16
> RAM 62 Gb
>
> # ulimit -n
> 10
>
> In future I will have a lot of indices (about 2000) and a lot of documents
> (~5 bil or maybe more)
>
> How can I avoid the error "too many open files"?
>
>
>



Re: "too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread Ann Yablunovskaya
How do I calculate the best number of shards?

On Friday, 1 May 2015, 18:21:47 UTC+3, David Pilato wrote:
>
> Add more nodes or reduce the number of shards per node.
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
> On 1 May 2015, at 17:05, Ann Yablunovskaya wrote:
>
> I am looking for suggestions on cluster configuration.
>
> I have 2 nodes (master/data and data), 544 indices, about 800 mil 
> documents.
>
> If I try to insert more documents and create more indices, I will 
> catch error "too many open files".
>
> My node's configuration:
>
> CentOS 7
> Intel(R) Xeon(R) CPU x16
> RAM 62 Gb
>
> # ulimit -n
> 10
>
> In future I will have a lot of indices (about 2000) and a lot of documents 
> (~5 bil or maybe more)
>
> How can I avoid the error "too many open files"?
>
>
>



Re: "too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread David Pilato
Add more nodes or reduce the number of shards per node.
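
For new indices, one way to do the latter is an index template; this sketch
lowers the defaults for everything created afterwards (the template name and
the numbers are only examples, tune them to your data):

$ curl -XPUT 'localhost:9200/_template/fewer_shards' -d '{
  "template": "*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}'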

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

> On 1 May 2015, at 17:05, Ann Yablunovskaya wrote:
> 
> I am looking for suggestions on cluster configuration.
> 
> I have 2 nodes (master/data and data), 544 indices, about 800 mil documents.
> 
> If I try to insert more documents and create more indices, I will catch error 
> "too many open files".
> 
> My node's configuration:
> 
> CentOS 7
> Intel(R) Xeon(R) CPU x16
> RAM 62 Gb
> 
> # ulimit -n
> 10
> 
> In future I will have a lot of indices (about 2000) and a lot of documents 
> (~5 bil or maybe more)
> 
> How can I avoid the error "too many open files"?
> 
> 
> 



"too many open files" problems and suggestions on cluster configuration

2015-05-01 Thread Ann Yablunovskaya
I am looking for suggestions on cluster configuration.

I have 2 nodes (master/data and data), 544 indices, and about 800 million
documents.

If I try to insert more documents and create more indices, I get the error
"too many open files".

My node's configuration:

CentOS 7
Intel(R) Xeon(R) CPU x16
RAM 62 Gb

# ulimit -n
10

In the future I will have a lot more indices (about 2000) and a lot more
documents (~5 billion or more).

How can I avoid the error "too many open files"?







Re: Too many open files issue

2015-01-30 Thread vineeth mohan
Hi ,

You will need to increase the maximum number of open file descriptors at
the OS level.
You can find more info here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html#file-descriptors
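
A minimal sketch of the usual fix on Linux, assuming Elasticsearch runs as a
user named "elasticsearch" (adjust the user name and the numbers to your
environment), followed by a check that the node picked the limit up after a
restart:

$ cat <<'EOF' | sudo tee -a /etc/security/limits.conf
elasticsearch  soft  nofile  65536
elasticsearch  hard  nofile  65536
EOF

$ curl -s 'localhost:9200/_nodes/process?pretty' | grep max_file_descriptors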

Thanks
 Vineeth Mohan,
 Elasticsearch consultant,
 qbox.io ( Elasticsearch service provider )


On Fri, Jan 30, 2015 at 11:49 PM, shashi kiran 
wrote:

> Hi All,
>
> I am new to this group. In one of our project we are using ElasticSearch
> server. In normal circumstance we are not facing this issue, but in
> production, we are facing SocketException:Too many open files from Elastic
> Search, Work around found after surfing over the internet was to increase
> the file count, even then we face this issue. We are basically using
> JestClient as a client to connect to ElasticSearch Server. We have a web
> service which basically create client object and execute the Elastic Search
> Query. It would of great help, if any one can help on this issue, basically
> I am trying to understand actual root cause and solution.
>
> Thanks in advance.
> shashi
>



Too many open files issue

2015-01-30 Thread shashi kiran
Hi All,

I am new to this group. In one of our projects we are using an ElasticSearch 
server. Under normal circumstances we do not face this issue, but in 
production we are getting SocketException: Too many open files from 
ElasticSearch. The workaround found after searching the internet was to 
increase the file count, but even then we face this issue. We are using 
JestClient to connect to the ElasticSearch server, from a web service that 
creates the client object and executes the ElasticSearch query. It would be 
of great help if anyone could help with this issue; basically I am trying to 
understand the actual root cause and the solution.

Thanks in advance.
shashi



Re: Too many open files /var/lib/elasticsearch doesn't exist on the nodes

2014-08-13 Thread Adrien Grand
It seems that this particular node is complaining about too many open
files. This usually happens if you have very low limits on your operating
system and/or if you have many shards on a single node. When this happens,
things degrade pretty badly as there is no way to open new files anymore.

Please see
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html#file-descriptors
for more information.
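
Two quick checks before raising anything (assuming the cluster answers HTTP
on localhost:9200):

# how many shards each node is carrying
$ curl -s 'localhost:9200/_cat/allocation?v'

# how many descriptors each node is actually using right now
$ curl -s 'localhost:9200/_nodes/stats/process?pretty' | grep open_file_descriptors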


On Wed, Aug 13, 2014 at 9:23 PM, José Andrés  wrote:

> Can someone elaborate as to why the exception below is thrown?
>
> [2014-08-11 11:27:41,189][WARN ][cluster.action.shard ] [mycluster]
> [all][4] received shard failed for [all][4], node[MLNWasasasWi2V58hRA93mg],
> [P], s[INITIALIZING], indexUUID [KSScFASsasas6yTEcK7HqlmA], reason [Failed
> to start shard, message [IndexShardGatewayRecoveryException[[all][4] failed
> recovery]; nested: EngineCreationFailureException[[all][4] failed to open
> reader on writer]; nested:
> FileSystemException[/var/lib/elasticsearch/mycluster/nodes/0/indices/all/4/index/_m4bz_es090_0.tim: Too many open files]; ]] 2:11 /var/lib/elasticsearch
> doesn't exist on the nodes
>
> Thank you very much for your feedback.
>



-- 
Adrien Grand



Too many open files /var/lib/elasticsearch doesn't exist on the nodes

2014-08-13 Thread José Andrés
Can someone elaborate as to why the exception below is thrown?

[2014-08-11 11:27:41,189][WARN ][cluster.action.shard ] [mycluster] 
[all][4] received shard failed for [all][4], node[MLNWasasasWi2V58hRA93mg], 
[P], s[INITIALIZING], indexUUID [KSScFASsasas6yTEcK7HqlmA], reason [Failed 
to start shard, message [IndexShardGatewayRecoveryException[[all][4] failed 
recovery]; nested: EngineCreationFailureException[[all][4] failed to open 
reader on writer]; nested: 
FileSystemException[/var/lib/elasticsearch/mycluster/nodes/0/indices/all/4/index/_m4bz_es090_0.tim: Too many open files]; ]] 2:11 /var/lib/elasticsearch 
doesn't exist on the nodes

Thank you very much for your feedback.



Re: too many open files

2014-07-17 Thread Andrew Selden

This is a fairly common problem and not necessarily specific to Elasticsearch. 
It is simple to solve. In Linux you can increase the operating system's max 
file descriptor limit. Other Unix-like operating systems have the same concept. 
You can find how to do this for your specific Linux distribution from a little 
googling on "linux max file descriptor".
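
Note that there are two separate limits involved. A quick way to see both,
assuming a typical Linux box where the node runs as a user named
"elasticsearch":

# system-wide ceiling shared by every process
$ cat /proc/sys/fs/file-max

# per-process soft limit of the user that runs Elasticsearch
$ sudo -u elasticsearch bash -c 'ulimit -Sn'

The per-process limit is usually the one that needs raising, typically via
/etc/security/limits.conf.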

Cheers.


On Jul 17, 2014, at 9:40 PM, Seungjin Lee  wrote:

> hello, I'm using elasticsearch with storm, Java TransportClient.
> 
> I have total 128 threads across machines which communicate with elasticsearch 
> cluster.
> 
> From time to time, error below occurs
> 
> 
> 
> org.elasticsearch.common.netty.channel.ChannelException: Failed to create a 
> selector.
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.(AbstractNioSelector.java:100)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.(AbstractNioWorker.java:52)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.NioWorker.(NioWorker.java:45)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:143)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:81)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.(NioWorkerPool.java:39)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.(NioWorkerPool.java:33)
>   at 
> org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:254)
>   at 
> org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
>   at 
> org.elasticsearch.transport.TransportService.doStart(TransportService.java:92)
>   at 
> org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
>   at 
> org.elasticsearch.client.transport.TransportClient.(TransportClient.java:189)
>   at 
> org.elasticsearch.client.transport.TransportClient.(TransportClient.java:125)
>   at 
> com.naver.labs.nelo2.notifier.utils.ElasticSearchUtil.prepareElasticSearch(ElasticSearchUtil.java:30)
>   at 
> com.naver.labs.nelo2.notifier.bolt.PercolatorBolt.prepare(PercolatorBolt.java:48)
>   at 
> backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:43)
>   at 
> backtype.storm.daemon.executor$fn__5641$fn__5653.invoke(executor.clj:690)
>   at backtype.storm.util$async_loop$fn__457.invoke(util.clj:429)
>   at clojure.lang.AFn.run(AFn.java:24)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: too many open files
>   at sun.nio.ch.IOUtil.makePipe(Native Method)
>   at sun.nio.ch.EPollSelectorImpl.(EPollSelectorImpl.java:65)
>   at 
> sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
>   at java.nio.channels.Selector.open(Selector.java:227)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.open(SelectorUtil.java:63)
>   at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:341)
>   ... 22 more
> 
> 
> code itself is very simple
> 
> I get client as follows
> 
> Settings settings =
> ImmutableSettings.settingsBuilder().put("cluster.name", 
> clusterName).put("client.transport.sniff", "true").build();
> List transportAddressList = new  
> ArrayList();
> for (String host : ESHost) {
> transportAddressList.add(new InetSocketTransportAddress(host, 
> ESPort));
> }
> return new TransportClient(settings)
> .addTransportAddresses(transportAddressList.toArray(new 
> InetSocketTransportAddress[transportAddressList.size()]));
> 
> and for each execution, it percolates as follows
> 
> return 
> client.preparePercolate().setIndices(indexName).setDocumentType(projectName).setPercolateDoc(docBuilder().setDoc(log)).setRouting(projectName).setPercolateFilter(FilterBuilders.termFilter("projects",
>  projectName).cache(true)).execute().actionGet();
> 
> 
> ES cluster consists of 5 machines with almost default setting.
> 
> What can be the cause of this problem?

too many open files

2014-07-17 Thread Seungjin Lee
hello, I'm using elasticsearch with storm, Java TransportClient.

I have total 128 threads across machines which communicate with
elasticsearch cluster.

From time to time, the error below occurs



org.elasticsearch.common.netty.channel.ChannelException: Failed to create a
selector.
at
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:343)
at
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:100)
at
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:52)
at
org.elasticsearch.common.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45)
at
org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:45)
at
org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.createWorker(NioWorkerPool.java:28)
at
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.newWorker(AbstractNioWorkerPool.java:143)
at
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:81)
at
org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:39)
at
org.elasticsearch.common.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:33)
at
org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:254)
at
org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
at
org.elasticsearch.transport.TransportService.doStart(TransportService.java:92)
at
org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
at
org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:189)
at
org.elasticsearch.client.transport.TransportClient.<init>(TransportClient.java:125)
at
com.naver.labs.nelo2.notifier.utils.ElasticSearchUtil.prepareElasticSearch(ElasticSearchUtil.java:30)
at
com.naver.labs.nelo2.notifier.bolt.PercolatorBolt.prepare(PercolatorBolt.java:48)
at
backtype.storm.topology.BasicBoltExecutor.prepare(BasicBoltExecutor.java:43)
at backtype.storm.daemon.executor$fn__5641$fn__5653.invoke(executor.clj:690)
at backtype.storm.util$async_loop$fn__457.invoke(util.clj:429)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: too many open files
at sun.nio.ch.IOUtil.makePipe(Native Method)
at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
at
sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
at java.nio.channels.Selector.open(Selector.java:227)
at
org.elasticsearch.common.netty.channel.socket.nio.SelectorUtil.open(SelectorUtil.java:63)
at
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:341)
... 22 more


code itself is very simple

I get client as follows

Settings settings =
ImmutableSettings.settingsBuilder().put("cluster.name",
clusterName).put("client.transport.sniff", "true").build();
List<InetSocketTransportAddress> transportAddressList = new ArrayList<InetSocketTransportAddress>();
for (String host : ESHost) {
transportAddressList.add(new InetSocketTransportAddress(host,
ESPort));
}
return new TransportClient(settings)
.addTransportAddresses(transportAddressList.toArray(new
InetSocketTransportAddress[transportAddressList.size()]));

and for each execution, it percolates as follows

return
client.preparePercolate().setIndices(indexName).setDocumentType(projectName).setPercolateDoc(docBuilder().setDoc(log)).setRouting(projectName).setPercolateFilter(FilterBuilders.termFilter("projects",
projectName).cache(true)).execute().actionGet();


ES cluster consists of 5 machines with almost default setting.

What can be the cause of this problem?



Re: ElasticSearch giving FileNotFoundException: (Too many open files)

2014-04-29 Thread InquiringMind
Prateek ,

I've collected this from various sources and put it all together. Works 
fine for me, though I haven't yet dived into ELK:

-

You may verify the current soft limit by logging in as the user that runs 
the Elasticsearch JVM and issuing the following command:

$ ulimit -Sn

Finally, verify that Elasticsearch is indeed able to open up to this number 
of file handles from the max_file_descriptors value for each node via the 
_nodes API:

$ curl localhost:9200/_nodes/process?pretty && echo

ON LINUX

Update the */etc/security/limits.conf* file and ensure that it contains the 
following two lines and all is well again:

*username*  hard  nofile  65537
*username*  soft  nofile  65536

Of course, replace the '*username*' user name to reflect the user on your 
own machines that are running Elasticsearch.

ON SOLARIS

In Solaris 9, the default limit of file descriptors per process was raised 
from 1024 to 65536.

ON MAC OS X

Create or edit */etc/launchd.conf* and add the following line:

limit maxfiles 40 40

Then shutdown OS X and restart the Mac. Verify the settings by opening a 
new terminal, and running either or both commands below:

$ launchctl limit maxfiles
$ ulimit -a

You should see the maxfiles set to 40 from the output of both of those 
commands.

-

Note that this is more of a Unix thing than an Elasticsearch thing. So if 
you are still having issues you may wish to ask on a newsgroup that 
specifically targets your operating system.

Also note that it's not good practice to run an application as root. Too 
much chance to wipe out something from which you could never recover, and 
all that. I remember that once our operations folks started ES as root, and 
after that the data files were owned by root and the non-root ES user had 
trouble starting, with locking errors all over the logs. I ended up 
performing a recursive chown on the ES filesystem, and when it was restarted 
as the non-root user all was well again.

Brian



ElasticSearch giving FileNotFoundException: (Too many open files)

2014-04-29 Thread Prateek Lal
Hi Team,

I have implemented logstash using LogStash+Redis+ElasticSearch and Kibana

using kibana system is not showing logs from some hosts and this issue is 
coming very frequently. sometimes kibana is not showing logs of recent 
times at all from all hosts.

While debugging I seen some strange logs in elasticsearch log files. Which 
says that (Too many files open)  kind of things .

Please find logs from /var/log/elasticsearch.log file

*[2014-04-29 15:13:00,033][WARN ][cluster.action.shard ] [Whitemane, 
Aelfyre] [logstash-2014.04.20][1] sending failed shard for 
[logstash-2014.04.20][1], node[NTHTtK4DRIuCrm5RKgx30g], [P], 
s[INITIALIZING], indexUUID [silMCoFlSdWJf66yAgpybQ], reason [Failed to 
start shard, message 
[IndexShardGatewayRecoveryException[[logstash-2014.04.20][1] failed 
recovery]; nested: EngineCreationFailureException[[logstash-2014.04.20][1] 
failed to open reader on writer]; nested: 
FileNotFoundException[/usr/local/elasticsearch-0.90.9/data/elasticsearch/nodes/0/indices/logstash-2014.04.20/1/index/_f0r_es090_0.doc
 
(Too many open files)]; ]]*
*[2014-04-29 15:13:00,033][WARN ][cluster.action.shard ] [Whitemane, 
Aelfyre] [logstash-2014.04.20][1] received shard failed for 
[logstash-2014.04.20][1], node[NTHTtK4DRIuCrm5RKgx30g], [P], 
s[INITIALIZING], indexUUID [silMCoFlSdWJf66yAgpybQ], reason [Failed to 
start shard, message 
[IndexShardGatewayRecoveryException[[logstash-2014.04.20][1] failed 
recovery]; nested: EngineCreationFailureException[[logstash-2014.04.20][1] 
failed to open reader on writer]; nested: 
FileNotFoundException[/usr/local/elasticsearch-0.90.9/data/elasticsearch/nodes/0/indices/logstash-2014.04.20/1/index/_f0r_es090_0.doc
 
(Too many open files)]; ]]*
*[2014-04-29 15:13:00,039][WARN ][index.engine.robin   ] [Whitemane, 
Aelfyre] [logstash-2014.04.29][1] shard is locked, releasing lock*
*[2014-04-29 15:13:00,039][WARN ][indices.cluster  ] [Whitemane, 
Aelfyre] [logstash-2014.04.29][1] failed to start shard*
*org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: 
[logstash-2014.04.29][1] failed recovery*
* at 
org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:232)*
* at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)*
* at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)*
* at java.lang.Thread.run(Thread.java:722)*
*Caused by: org.elasticsearch.index.engine.EngineCreationFailureException: 
[logstash-2014.04.29][1] failed to create engine*
* at 
org.elasticsearch.index.engine.robin.RobinEngine.start(RobinEngine.java:256)*
* at 
org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryPrepareForTranslog(InternalIndexShard.java:660)*
* at 
org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:201)*
* at 
org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:174)*
* ... 3 more*
*Caused by: org.apache.lucene.store.LockReleaseFailedException: Cannot 
forcefully unlock a NativeFSLock which is held by another indexer 
component: 
/usr/local/elasticsearch-0.90.9/data/elasticsearch/nodes/0/indices/logstash-2014.04.29/1/index/write.lock*
* at 
org.apache.lucene.store.NativeFSLock.release(NativeFSLockFactory.java:295)*
* at org.apache.lucene.index.IndexWriter.unlock(IndexWriter.java:4458)*
* at 
org.elasticsearch.index.engine.robin.RobinEngine.createWriter(RobinEngine.java:1415)*
* at 
org.elasticsearch.index.engine.robin.RobinEngine.start(RobinEngine.java:254)*
* ... 6 more*
*[2014-04-29 15:13:00,041][WARN ][cluster.action.shard ] [Whitemane, 
Aelfyre] [logstash-2014.04.29][1] sending failed shard for 
[logstash-2014.04.29][1], node[NTHTtK4DRIuCrm5RKgx30g], [P], 
s[INITIALIZING], indexUUID [W2ZbxZCXQYecXw8Jjrabhg], reason [Failed to 
start shard, message 
[IndexShardGatewayRecoveryException[[logstash-2014.04.29][1] failed 
recovery]; nested: EngineCreationFailureException[[logstash-2014.04.29][1] 
failed to create engine]; nested: LockReleaseFailedException[Cannot 
forcefully unlock a NativeFSLock which is held by another indexer 
component: 
/usr/local/elasticsearch-0.90.9/data/elasticsearch/nodes/0/indices/logstash-2014.04.29/1/index/write.lock];
 
]]*
*[2014-04-29 15:13:00,041][WARN ][cluster.action.shard ] [Whitemane, 
Aelfyre] [logstash-2014.04.29][1] received shard failed for 
[logstash-2014.04.29][1], node[NTHTtK4DRIuCrm5RKgx30g], [P], 
s[INITIALIZING], indexUUID [W2ZbxZCXQYecXw8Jjrabhg], reason [Failed to 
start shard, message 
[IndexShardGatewayRecoveryException[[logstash-2014.04.29][1] failed 
recovery]; nested: EngineCreationFailureException[[logstash-2014.04.29][1] 
failed to create engine]; nested: LockReleaseFailedException[Cannot 
forcefully unlock a NativeFSLock which is held by another indexer 
component: 
/usr/local/elasticsearch-0.90.9/data/elasticsearch/nodes/0/indices/logstash-2014.04.29/1/index/write.lock

Re: Too Many Open Files

2014-03-04 Thread smonasco
Sorry to have taken so long to reply.  I went ahead and followed your 
link; I'd been there before, but decided to give it a deeper look.  What I 
actually found, however, was that bigdesk told me the max open files the 
process was running with, and from there I was able to determine that my 
settings in limits.conf were not being honored, even though if I switched 
to the context Elasticsearch was running under I would get the appropriate 
limits.

I then dug into the service script and found that someone had dropped a 
ulimit statement into it that was overriding the limits.conf setting.
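
For anyone hitting the same thing, the quickest way I know to see what the
running JVM actually ended up with (rather than what limits.conf says) is
something like this, assuming a single Elasticsearch process on the box:

$ ES_PID=$(pgrep -f elasticsearch | head -n 1)
$ grep 'Max open files' /proc/$ES_PID/limits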

Thank you,
Shannon Monasco



On Wednesday, January 22, 2014 10:09:42 AM UTC-7, Ivan Brusic wrote:
>
> The first thing to do is check if your limits are actually being persisted 
> and used. The elasticsearch site has a good writeup: 
> http://www.elasticsearch.org/tutorials/too-many-open-files/
>
> Second, it might be possible that you are reaching the 128k limit. How 
> many shards per node do you have? Do you have non standard merge settings? 
> You can use the status API to find out how many open files you have. I do 
> not have a link since it might have changed since 0.19.
>
> Also, be aware that it is not possible to do rolling upgrades with nodes 
> have different major versions of elasticsearch. The underlying data will be 
> fine and does not need to be upgraded, but nodes will not be able to 
> communicate with each other.
>
> Cheers,
>
> Ivan
>
>
>
> On Tue, Jan 21, 2014 at 7:42 AM, smonasco 
> > wrote:
>
>> Sorry wrong error message.
>>
>> [2014-01-18 06:47:06,232][WARN 
>> ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to accept a 
>> connection.
>> java.io.IOException: Too many open files
>> at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
>> at 
>> sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:226)
>> at 
>> org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:227)
>> at 
>> org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>> at java.lang.Thread.run(Thread.java:722)
>>
>>
>> The other posted error message is newer and seems to follow the too many 
>> open files error message.
>>
>> --Shannon Monasco
>>
>> On Tuesday, January 21, 2014 8:35:18 AM UTC-7, smonasco wrote:
>>>
>>> Hi,
>>>
>>> I am using version 0.19.3 I have the nofile limit set to 128K and am 
>>> getting errors like
>>>
>>> [2014-01-18 06:52:54,857][WARN 
>>> ][netty.channel.socket.nio.NioServerSocketPipelineSink] 
>>> Failed to initialize an accepted socket.
>>> org.elasticsearch.common.netty.channel.ChannelException: Failed to 
>>> create a selector.
>>> at org.elasticsearch.common.netty.channel.socket.nio.
>>> AbstractNioWorker.start(AbstractNioWorker.java:154)
>>> at org.elasticsearch.common.netty.channel.socket.nio.
>>> AbstractNioWorker.register(AbstractNioWorker.java:131)
>>> at org.elasticsearch.common.netty.channel.socket.nio.
>>> NioServerSocketPipelineSink$Boss.registerAcceptedChannel(
>>> NioServerSocketPipelineSink.java:269)
>>> at org.elasticsearch.common.netty.channel.socket.nio.
>>> NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.
>>> java:231)
>>> at org.elasticsearch.common.netty.util.internal.
>>> DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(
>>> ThreadPoolExecutor.java:1110)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>>> ThreadPoolExecutor.java:603)
>>> at java.lang.Thread.run(Thread.java:722)
>>> Caused by: java.io.IOException: Too many open files
>>> at sun.nio.ch.IOUtil.makePipe(Native Method)
>>> at sun.nio.ch.EPollSelectorImpl.(EPollSelectorImpl.java:
>>> 65)
>>> at sun.nio.ch.EPollSelectorProvider.openSelector(
>>> EPollSelectorProvider.java:36)
>>> at java.nio.channels.Selector.open(Selector.java:227)
>>> at org.elasticsearch.common.netty.channel.socket.nio.
>>> AbstractNioWorker.start(AbstractNioWorker.java:152)
>>> ... 7 more

Re: Too Many Open Files

2014-01-22 Thread Ivan Brusic
The first thing to do is check if your limits are actually being persisted
and used. The elasticsearch site has a good writeup:
http://www.elasticsearch.org/tutorials/too-many-open-files/

Second, it might be possible that you are reaching the 128k limit. How many
shards per node do you have? Do you have non-standard merge settings? You
can use the status API to find out how many open files you have. I do not
have a link since it might have changed since 0.19.
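
Failing that, counting descriptors from the OS side works on any version;
the pid lookup below assumes a single Elasticsearch process on the box:

$ lsof -p $(pgrep -f elasticsearch | head -n 1) | wc -l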

Also, be aware that it is not possible to do rolling upgrades with nodes
that have different major versions of elasticsearch. The underlying data will be
fine and does not need to be upgraded, but nodes will not be able to
communicate with each other.

Cheers,

Ivan



On Tue, Jan 21, 2014 at 7:42 AM, smonasco  wrote:

> Sorry wrong error message.
>
> [2014-01-18 06:47:06,232][WARN
> ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to accept a
> connection.
> java.io.IOException: Too many open files
> at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
> at
> sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:226)
> at
> org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:227)
> at
> org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
>
>
> The other posted error message is newer and seems to follow the too many
> open files error message.
>
> --Shannon Monasco
>
> On Tuesday, January 21, 2014 8:35:18 AM UTC-7, smonasco wrote:
>>
>> Hi,
>>
>> I am using version 0.19.3 I have the nofile limit set to 128K and am
>> getting errors like
>>
>> [2014-01-18 06:52:54,857][WARN 
>> ][netty.channel.socket.nio.NioServerSocketPipelineSink]
>> Failed to initialize an accepted socket.
>> org.elasticsearch.common.netty.channel.ChannelException: Failed to
>> create a selector.
>> at org.elasticsearch.common.netty.channel.socket.nio.
>> AbstractNioWorker.start(AbstractNioWorker.java:154)
>> at org.elasticsearch.common.netty.channel.socket.nio.
>> AbstractNioWorker.register(AbstractNioWorker.java:131)
>> at org.elasticsearch.common.netty.channel.socket.nio.
>> NioServerSocketPipelineSink$Boss.registerAcceptedChannel(
>> NioServerSocketPipelineSink.java:269)
>> at org.elasticsearch.common.netty.channel.socket.nio.
>> NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.
>> java:231)
>> at org.elasticsearch.common.netty.util.internal.
>> DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(
>> ThreadPoolExecutor.java:1110)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
>> ThreadPoolExecutor.java:603)
>> at java.lang.Thread.run(Thread.java:722)
>> Caused by: java.io.IOException: Too many open files
>> at sun.nio.ch.IOUtil.makePipe(Native Method)
>> at sun.nio.ch.EPollSelectorImpl.(EPollSelectorImpl.java:65)
>> at sun.nio.ch.EPollSelectorProvider.openSelector(
>> EPollSelectorProvider.java:36)
>> at java.nio.channels.Selector.open(Selector.java:227)
>> at org.elasticsearch.common.netty.channel.socket.nio.
>> AbstractNioWorker.start(AbstractNioWorker.java:152)
>> ... 7 more
>>
>> I am aware that version 0.19.3 is old.  We have been having trouble
>> getting our infrastructure group to build out new nodes so we can have a
>> rolling upgrade with testing for both versions going on.  I am now setting
>> the limit to 1048576 as per http://stackoverflow.com/
>> questions/1212925/on-linux-set-maximum-open-files-to-unlimited-possible,
>> however, I'm concerned this may cause other issues.
>>
>> If anyone has any suggestions I'd love to hear them.  I am using this as
>> fuel for the "please pay attention and get us the support we need so we can
>> upgrade" campaign.
>>
>> --Shannon Monasco
>>

Re: Too Many Open Files

2014-01-21 Thread smonasco
Sorry wrong error message.

[2014-01-18 06:47:06,232][WARN 
][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to accept a 
connection.
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:226)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:227)
at 
org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)


The other posted error message is newer and seems to follow the too many 
open files error message.

--Shannon Monasco

On Tuesday, January 21, 2014 8:35:18 AM UTC-7, smonasco wrote:
>
> Hi,
>
> I am using version 0.19.3 I have the nofile limit set to 128K and am 
> getting errors like
>
> [2014-01-18 06:52:54,857][WARN 
> ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to 
> initialize an accepted socket.
> org.elasticsearch.common.netty.channel.ChannelException: Failed to create 
> a selector.
> at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.start(AbstractNioWorker.java:154)
> at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.register(AbstractNioWorker.java:131)
> at 
> org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.registerAcceptedChannel(NioServerSocketPipelineSink.java:269)
> at 
> org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:231)
> at 
> org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>     at java.lang.Thread.run(Thread.java:722)
> Caused by: java.io.IOException: Too many open files
> at sun.nio.ch.IOUtil.makePipe(Native Method)
> at sun.nio.ch.EPollSelectorImpl.(EPollSelectorImpl.java:65)
> at 
> sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
> at java.nio.channels.Selector.open(Selector.java:227)
> at 
> org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.start(AbstractNioWorker.java:152)
> ... 7 more
>
> I am aware that version 0.19.3 is old.  We have been having trouble 
> getting our infrastructure group to build out new nodes so we can have a 
> rolling upgrade with testing for both versions going on.  I am now setting 
> the limit to 1048576 as per 
> http://stackoverflow.com/questions/1212925/on-linux-set-maximum-open-files-to-unlimited-possible,
>  
> however, I'm concerned this may cause other issues.
>
> If anyone has any suggestions I'd love to hear them.  I am using this as 
> fuel for the "please pay attention and get us the support we need so we can 
> upgrade" campaign.
>
> --Shannon Monasco
>



Too Many Open Files

2014-01-21 Thread smonasco
Hi,

I am using version 0.19.3 I have the nofile limit set to 128K and am 
getting errors like

[2014-01-18 06:52:54,857][WARN 
][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to 
initialize an accepted socket.
org.elasticsearch.common.netty.channel.ChannelException: Failed to create a 
selector.
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.start(AbstractNioWorker.java:154)
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.register(AbstractNioWorker.java:131)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.registerAcceptedChannel(NioServerSocketPipelineSink.java:269)
at 
org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:231)
at 
org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.IOException: Too many open files
at sun.nio.ch.IOUtil.makePipe(Native Method)
at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
at 
sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
at java.nio.channels.Selector.open(Selector.java:227)
at 
org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.start(AbstractNioWorker.java:152)
... 7 more

I am aware that version 0.19.3 is old.  We have been having trouble getting 
our infrastructure group to build out new nodes so we can have a rolling 
upgrade with testing for both versions going on.  I am now setting the 
limit to 1048576 as per 
http://stackoverflow.com/questions/1212925/on-linux-set-maximum-open-files-to-unlimited-possible;
however, I'm concerned this may cause other issues.

If anyone has any suggestions I'd love to hear them.  I am using this as 
fuel for the "please pay attention and get us the support we need so we can 
upgrade" campaign.

--Shannon Monasco



Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
Happily, the problem of missing highlight records looks to be gone after 
making a config change.

* Initially I had 2 ES nodes in 2 different apps (a Tomcat and a standalone) 
configured identically (both listening for incoming TransportClient requests 
on port 9300 and both opened with client(false)), and a third ES connecting 
to them, opened with new TransportClient(), to fetch highlighting info. It 
looks like this third ES was randomly losing highlighting records. (?)

* What I did to fix it was a configuration change to have only one 
client(false) ES listening for TransportClients and 2 new 
TransportClient()s connecting to it.

It looks like this change fixes the issue, which was some kind of coupling 
between both client(false) ESs listening on port 9300.

Regards



Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
I guess my problem with an excessive number of sockets could also be a 
consequence of having 2 JVMs running ES, one embedded in Tomcat and a second 
embedded in another Java app, as described here:

https://groups.google.com/forum/?hl=en-GB#!topicsearchin/elasticsearch/scale%7Csort:date%7Cspell:true/elasticsearch/m9IWpGzoLLE

Is there any experience of running a single embedded ES (as jar files), for 
example in Tomcat's lib folder, consumed by several Tomcat apps and other 
standalone apps in different JVMs?

Any opinion on this configuration as a starting point?



Re: Too many open files

2014-01-07 Thread Adolfo Rodriguez
Hi, my model is quite slow with just a few thousand documents.

I realised that, when opening a

node = 
builder.client(clientOnly).data(!clientOnly).local(local).node();
client = node.client();

from my Java program to ES with such a small model, ES automatically 
creates 10 sockets. Coincidentally, I have 10 shards (?).

* Is this the expected behavior?
* Can I reduce the number of ES shards dynamically to reduce the number of 
sockets, or should I redeploy my ES install?
* By opening other connections I finally get up to 200 simultaneous open 
sockets and, I am afraid, when fetching highlight information some of the 
results are randomly being lost. Can these missing results somehow be a 
consequence of too many open sockets?

Thanks for your pointers.

Thanks



Re: Too many open files: does a "service wrapper" end that issue?

2013-12-30 Thread joergpra...@gmail.com
The log says, the kernel OOM killer has stopped your process because of
getting too big. You have to configure less memory for the JVM or put in
more RAM.
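
For example, capping the heap before starting the node (the value here is
only an illustration; leave enough RAM free for the OS and its page cache):

$ export ES_HEAP_SIZE=2g
$ ./bin/elasticsearch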

Jörg



Re: Too many open files: does a "service wrapper" end that issue?

2013-12-30 Thread Jack Park
These lines in that log
Dec 30 03:37:11 bloomer kernel: [24733.355784]  []
oom_kill_process+0x81/0x180
Dec 30 03:37:11 bloomer kernel: [24733.355786]  []
__out_of_memory+0x58/0xd0
Dec 30 03:37:11 bloomer kernel: [24733.355788]  []
out_of_memory+0x86/0x1c0

suggest something about out of memory.
On that box, I was only able to use 4g. The log suggests perhaps
that's not enough.


On Mon, Dec 30, 2013 at 7:27 AM, joergpra...@gmail.com
 wrote:
> Have you checked /var/log/messages for a "killed" message?
>
> Jörg
>



Re: Too many open files: does a "service wrapper" end that issue?

2013-12-30 Thread joergpra...@gmail.com
Have you checked /var/log/messages for a "killed" message?

Jörg



Re: Too many open files: does a "service wrapper" end that issue?

2013-12-30 Thread Jack Park
Running bare on various Ubuntu boxes.
Interesting finding, still deep in testing:
For Ubuntu, what *does* work is this:
Add this line to elasticsearch.in.sh:
ulimit -n 32000 (or whatever)
The notion of doing what the es tutorial says seems required, but
insufficient when on Ubuntu: even after doing all of that, and
rebooting, es with -Des.max-open-files=true is still not "getting it".
One obscure comment somewhere on the web suggested adding that line in
the shell, more as an offhand comment -- no theory, no reason, and now
es boots with the ability to open near that number (31999 or
something). So, it is now running on two different platforms to see if
my import will survive. One failed overnight with the es console
saying "Killed" for no apparent reason. Just "Killed", which, of
course, hosed that import. The other one is still running.
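
For reference, the change boils down to this (the path and the number depend
on the install):

# added near the top of bin/elasticsearch.in.sh
ulimit -n 32000

and one way to check what the JVM actually sees after a restart:

$ curl -s 'localhost:9200/_nodes/process?pretty' | grep max_file_descriptors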


On Sun, Dec 29, 2013 at 11:53 PM, joergpra...@gmail.com
 wrote:
> Are you running ES on a virtual machine in a guest OS?
>
> The service wrapper
> https://github.com/elasticsearch/elasticsearch-servicewrapper works well but
> it is not related to file descriptor limit setting. This is a case for the
> host OS.
>
> Jörg
>



Re: Too many open files: does a "service wrapper" end that issue?

2013-12-29 Thread joergpra...@gmail.com
Are you running ES on a virtual machine in a guest OS?

The service wrapper
https://github.com/elasticsearch/elasticsearch-servicewrapper works well
but it is not related to file descriptor limit setting. This is a case for
the host OS.

Jörg



Re: Too many open files: does a "service wrapper" end that issue?

2013-12-29 Thread Jack Park
Something I did not notice earlier; in one of the trials, the system
came up saying it could open 32000 files. Now it is saying 9400.
ulimit -n says 32000, and the system was rebooted. Why would ES not be
picking up that limit?

Thanks.
Jack

On Sun, Dec 29, 2013 at 7:07 PM, Jack Park  wrote:
> Yes!
> -Des.max-open-files=true does show me it can take 32000 files.
>
> But,
> typing this in
>
> http://localhost:9200/_nodes/stats
> on that machine says "No such file or directory"
>
> That would be cool if there's an easy way to see what the stats are,
> and perhaps stop and optimize or something.
>
> Many thanks!
>
> On Sun, Dec 29, 2013 at 6:55 PM, David Pilato  wrote:
>> Have a look at:
>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html
>>
>> About service wrapper, I would recommend to use RPM, DEB packages.
>>
>> You can display current Max open files setting when starting elasticsearch
>> using https://github.com/elasticsearch/elasticsearch/issues/483
>>
>> HTH
>>
>> --
>> David ;-)
>> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>>
>>
>> On 30 Dec 2013, at 03:01, Jack Park wrote:
>>
>> The tutorial at http://www.elasticsearch.org/tutorials/too-many-open-files/
>> says this:
>>
>> If sudo -u elasticsearch -s "ulimit -Sn" shows 32000 but you
>> still have a low limit when you run ElasticSearch, you’re probably running
>> it
>> through another program that doesn’t support PAM: a frequent offender is
>> supervisord.
>>
>> [I have no way to know I'm running through supervisord, since I'm just
>> typing the ./elasticsearch -f into a terminal on Ubuntu]
>>
>> The only solution I know to this problem is to raise the nofile
>> limit for the user running supervisord, but this will obviously raise the limit
>> for all the processes running under supervisord, not an ideal situation.
>>
>> Consider using the ElasticSearch service wrapper instead.
>>
>> This is the service wrapper mentioned:
>> https://github.com/elasticsearch/elasticsearch-servicewrapper
>>
>> Is a service wrapper going to put this issue to bed?
>>
>> Is there a way to ask ElasticSearch how many open files Lucene is using?
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/CAH6s0fyjTm%2BYN69PhgPSwObyJo6bbZa1Ac%2BzLK2J4i5oxyO5Zw%40mail.gmail.com.
>> For more options, visit https://groups.google.com/groups/opt_out.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "elasticsearch" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to elasticsearch+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/elasticsearch/E15BDF75-6FA1-45BA-A75C-6F35A7956268%40pilato.fr.
>> For more options, visit https://groups.google.com/groups/opt_out.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAH6s0fy6798KH6ps8jQZsg-j9St%3D3SPSzZT%2BAp9v%2BchDAp0L1Q%40mail.gmail.com.
For more options, visit https://groups.google.com/groups/opt_out.


Re: Too many open files: does a "service wrapper" end that issue?

2013-12-29 Thread Jack Park
Yes!
-Des.max-open-files=true does show me it can take 32000 files.

But typing

http://localhost:9200/_nodes/stats

on that machine just says "No such file or directory".

It would be cool if there were an easy way to see what the stats are,
and perhaps stop and optimize or something.
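
Maybe that error is just my shell treating the URL as a file path rather
than anything coming from ES; fetching it over HTTP might be the way to do
it. A sketch (the exact field names may differ between versions):

# node stats; the process section should include an open file descriptor count
curl 'http://localhost:9200/_nodes/stats?pretty'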

Many thanks!

On Sun, Dec 29, 2013 at 6:55 PM, David Pilato  wrote:
> Have a look at:
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html
>
> As for the service wrapper, I would recommend using the RPM or DEB packages instead.
>
> You can display the current max open files setting when starting Elasticsearch;
> see https://github.com/elasticsearch/elasticsearch/issues/483
>
> HTH
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
>
> On 30 Dec 2013, at 03:01, Jack Park  wrote:
>
> The tutorial at http://www.elasticsearch.org/tutorials/too-many-open-files/
> says this:
>
> If sudo -u elasticsearch -s "ulimit -Sn" shows 32000 but you
> still have a low limit when you run ElasticSearch, you’re probably running it
> through another program that doesn’t support PAM: a frequent offender is
> supervisord.
>
> [I have no way to know I'm running through supervisord, since I'm just
> typing the ./elasticsearch -f into a terminal on Ubuntu]
>
> The only solution I know to this problem is to raise the nofile
> limit for the user running supervisord, but this will obviously raise the limit
> for all the processes running under supervisord, not an ideal situation.
>
> Consider using the ElasticSearch service wrapper instead.
>
> This is the service wrapper mentioned:
> https://github.com/elasticsearch/elasticsearch-servicewrapper
>
> Is a service wrapper going to put this issue to bed?
>
> Is there a way to ask ElasticSearch how many open files Lucene is using?
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/CAH6s0fyjTm%2BYN69PhgPSwObyJo6bbZa1Ac%2BzLK2J4i5oxyO5Zw%40mail.gmail.com.
> For more options, visit https://groups.google.com/groups/opt_out.
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to elasticsearch+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/E15BDF75-6FA1-45BA-A75C-6F35A7956268%40pilato.fr.
> For more options, visit https://groups.google.com/groups/opt_out.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAH6s0fwupvSXGXx%2BRj5HsQ5aEqteNO50tRaEmAnfFtf3C8O2Ag%40mail.gmail.com.
For more options, visit https://groups.google.com/groups/opt_out.


Re: Too many open files: does a "service wrapper" end that issue?

2013-12-29 Thread David Pilato
Have a look at: 
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html

As for the service wrapper, I would recommend using the RPM or DEB packages instead.

You can display the current max open files setting when starting Elasticsearch;
see https://github.com/elasticsearch/elasticsearch/issues/483
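
For example, a sketch: -Des.max-open-files=true is the flag mentioned
elsewhere in this thread, and the package file locations are from memory,
so they may vary by version:

# start in the foreground and have ES log the max open files it can use
./elasticsearch -f -Des.max-open-files=true

# with the DEB/RPM packages, the limit is typically set via MAX_OPEN_FILES
# in /etc/default/elasticsearch (or /etc/sysconfig/elasticsearch on RPM systems)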

HTH

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


On 30 Dec 2013, at 03:01, Jack Park  wrote:

The tutorial at http://www.elasticsearch.org/tutorials/too-many-open-files/
says this:

If sudo -u elasticsearch -s "ulimit -Sn" shows 32000 but you
still have a low limit when you run ElasticSearch, you’re probably running it
through another program that doesn’t support PAM: a frequent offender is
supervisord.

[I have no way to know I'm running through supervisord, since I'm just
typing the ./elasticsearch -f into a terminal on Ubuntu]

The only solution I know to this problem is to raise the nofile
limit for the user running supervisord, but this will obviously raise the limit
for all the processes running under supervisord, not an ideal situation.

Consider using the ElasticSearch service wrapper instead.

This is the service wrapper mentioned:
https://github.com/elasticsearch/elasticsearch-servicewrapper

Is a service wrapper going to put this issue to bed?

Is there a way to ask ElasticSearch how many open files Lucene is using?

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAH6s0fyjTm%2BYN69PhgPSwObyJo6bbZa1Ac%2BzLK2J4i5oxyO5Zw%40mail.gmail.com.
For more options, visit https://groups.google.com/groups/opt_out.

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/E15BDF75-6FA1-45BA-A75C-6F35A7956268%40pilato.fr.
For more options, visit https://groups.google.com/groups/opt_out.


Too many open files: does a "service wrapper" end that issue?

2013-12-29 Thread Jack Park
The tutorial at http://www.elasticsearch.org/tutorials/too-many-open-files/
says this:

If sudo -u elasticsearch -s "ulimit -Sn" shows 32000 but you
still have a low limit when you run ElasticSearch, you’re probably running it
through another program that doesn’t support PAM: a frequent offender is
supervisord.

[I have no way to know I'm running through supervisord, since I'm just
typing the ./elasticsearch -f into a terminal on Ubuntu]

The only solution I know to this problem is to raise the nofile
limit for the user running supervisord, but this will obviously raise the limit
for all the processes running under supervisord, not an ideal situation.

Consider using the ElasticSearch service wrapper instead.

This is the service wrapper mentioned:
https://github.com/elasticsearch/elasticsearch-servicewrapper

Is a service wrapper going to put this issue to bed?

Is there a way to ask ElasticSearch how many open files Lucene is using?
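
Since I am starting it from a plain terminal, I assume the process simply
inherits whatever that shell's limit is, so something like this might be
enough (a sketch; raising the hard limit may still need root or a
limits.conf change):

# raise the soft limit in this shell, then start ES so it inherits it
ulimit -n 32000
./elasticsearch -f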

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAH6s0fyjTm%2BYN69PhgPSwObyJo6bbZa1Ac%2BzLK2J4i5oxyO5Zw%40mail.gmail.com.
For more options, visit https://groups.google.com/groups/opt_out.