Re: OutOfMemoryError and Too many open files

2017-05-08 Thread Erick Erickson
Solr/Lucene really like having a bunch of files available, so bumping
the ulimit is often the right thing to do.

This assumes you don't have any custom code that is failing to close
searchers and the like.

Best,
Erick

On Mon, May 8, 2017 at 10:40 AM, Satya Marivada
 wrote:
> Hi,
>
> Started getting the errors/exceptions below. I have listed the resolution
> inline. Could you please see if I am headed right?
>
> The error below basically says that no more threads can be created because
> the limit has been reached. We have a big index, and I assume the threads
> are being created outside of the JVM and could not be created because of the
> low ulimit setting for nproc (4096). It has been increased to 131072. This
> number can be found with ulimit -u.
>
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
> at
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1368)
> at
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:214)
> at
> java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
> at
> org.apache.solr.common.cloud.SolrZkClient$3.process(SolrZkClient.java:268)
> at
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
> at
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
>
> The error below basically says that no more files can be opened because the
> limit has been reached. It has been increased to 65536 from 4096. This
> number can be found with ulimit -Hn and ulimit -Sn.
>
> java.io.IOException: Too many open files
> at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
> at
> sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
> at
> sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
> at
> org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:382)
> at
> org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:593)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
>
> Thanks,
> Satya


Re: OutOfMemoryError and Too many open files

2017-05-08 Thread Shawn Heisey
On 5/8/2017 11:40 AM, Satya Marivada wrote:
> Started getting the errors/exceptions below. I have listed the resolution
> inline. Could you please see if I am headed right?
>
> java.lang.OutOfMemoryError: unable to create new native thread

> java.io.IOException: Too many open files

I have never had any luck setting these limits with the ulimit command. 
On Linux, I have adjusted these in the /etc/security/limits.conf config
file.  This is what I added to the file:

solr  hard  nproc   61440
solr  soft  nproc   40960

solr  hard  nofile  65535
solr  soft  nofile  49151

A reboot shouldn't be needed to get the change to take effect, though
you may need to log out and back in again before attempting to restart
Solr.  A reboot would be the guaranteed option.
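
Once the entries are in place and you have logged back in, the effective limits
for that user can be read back - a sketch reusing the solr user from the entries
above (whether su picks up limits.conf depends on the PAM configuration):

    # print soft/hard nofile, then soft/hard nproc, as the solr user
    su - solr -c 'ulimit -Sn; ulimit -Hn; ulimit -Su; ulimit -Hu'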

For an OS other than Linux, the method for changing these limits is
probably going to be different.

Thanks,
Shawn



OutOfMemoryError and Too many open files

2017-05-08 Thread Satya Marivada
Hi,

Started getting the errors/exceptions below. I have listed the resolution
inline. Could you please see if I am headed right?

The error below basically says that no more threads can be created because the
limit has been reached. We have a big index, and I assume the threads are being
created outside of the JVM and could not be created because of the low ulimit
setting for nproc (4096). It has been increased to 131072. This number can be
found with ulimit -u.

java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1368)
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:214)
at
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at
org.apache.solr.common.cloud.SolrZkClient$3.process(SolrZkClient.java:268)
at
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

The error below basically says that no more files can be opened because the
limit has been reached. It has been increased to 65536 from 4096. This number
can be found with ulimit -Hn and ulimit -Sn.

java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at
org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:382)
at
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:593)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)

Thanks,
Satya
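
For reference, whether the raised limits actually apply to the running Solr
process (rather than just the current shell) can be checked from /proc - a
sketch, assuming a Jetty-based Solr started via start.jar:

    SOLR_PID=$(pgrep -f start.jar | head -n 1)
    grep -E 'Max (open files|processes)' /proc/"$SOLR_PID"/limits   # limits in effect for the JVM
    ls /proc/"$SOLR_PID"/fd   | wc -l                               # file descriptors currently open
    ls /proc/"$SOLR_PID"/task | wc -l                               # threads currently running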


Re: Solr node crashes while indexing - Too many open files

2016-06-30 Thread Toke Eskildsen
Mads Tomasgård Bjørgan  wrote:

> That's true, but I was hoping there would be another way to solve this issue 
> as it's not considered preferable in our situation.

What you are looking for might be
https://cwiki.apache.org/confluence/display/solr/IndexConfig+in+SolrConfig#IndexConfiginSolrConfig-CompoundFileSegments

> Is it normal behavior for Solr to open over 4000 files without closing them 
> properly?

Open, yes. Not closing them properly, no. The number of open file handles 
should match the number of files in the index folder.
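
One way to check that on a running node (a sketch; the PID lookup and index
path are placeholders, not details from this thread):

    # open handles the Solr JVM holds on the index vs. files on disk
    SOLR_PID=$(pgrep -f start.jar | head -n 1)     # assumes a Jetty-based Solr start
    INDEX_DIR=/path/to/core/data/index             # placeholder for the core's index directory
    lsof -p "$SOLR_PID" | grep -c "$INDEX_DIR"
    ls "$INDEX_DIR" | wc -l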

- Toke Eskildsen, State and University Library, Denmark


RE: Solr node crashes while indexing - Too many open files

2016-06-30 Thread Markus Jelsma
Yes, that is quite normal for a busy search engine, especially for cloud 
environments. We always start by increasing it to 64k minimum when provisioning 
machines.
Markus
 
-Original message-
> From:Mads Tomasgård Bjørgan 
> Sent: Thursday 30th June 2016 13:05
> To: solr-user@lucene.apache.org
> Subject: RE: Solr node crashes while indexing - Too many open files
> 
> That's true, but I was hoping there would be another way to solve this issue 
> as it's not considered preferable in our situation.
> 
> Is it normal behavior for Solr to open over 4000 files without closing them 
> properly? Is it for example possible to adjust the autoCommit settings in
> solrconfig.xml to force Solr to close the files?
> 
> Any help is appreciated :-)
> 
> -Original Message-
> From: Markus Jelsma [mailto:markus.jel...@openindex.io] 
> Sent: torsdag 30. juni 2016 11.41
> To: solr-user@lucene.apache.org
> Subject: RE: Solr node crashes while indexing - Too many open files
> 
> Mads, some distributions require different steps for increasing 
> max_open_files. Check how it works for CentOS specifically.
> 
> Markus
> 
>  
>  
> -Original message-
> > From:Mads Tomasgård Bjørgan 
> > Sent: Thursday 30th June 2016 10:52
> > To: solr-user@lucene.apache.org
> > Subject: Solr node crashes while indexing - Too many open files
> > 
> > Hello,
> > We're indexing a large set of files using Solr 6.1.0, running a SolrCloud 
> > by utilizing ZooKeeper 3.4.8.
> > 
> > We have two ensembles - and both clusters are running on three of their own 
> > respective VMs (CentOS 7). We first thought the error was due to CDCR - as 
> > we were trying to index a large number of documents which had to be 
> > replicated to the target cluster. However, we got the same error even after 
> > turning off CDCR - which indicates CDCR wasn't the problem after all.
> > 
> > After indexing between 20 000 and 35 000 documents to the source cluster, 
> > the file descriptor count reaches 4096 for one of the Solr nodes - and the 
> > respective node crashes. The count grows quite linearly over time. The 
> > remaining 2 nodes in the cluster are not affected at all, and their logs 
> > had no relevant entries. We found the following errors for the crashing node 
> > in its log:
> > 
> > 2016-06-30 08:23:12.459 ERROR 
> > (updateExecutor-2-thread-22-processing-https:10.0.106.168:443//solr//DIPS_shard3_replica1
> >  x:DIPS_shard1_replica1 r:core_node1 n:10.0.106.115:443_solr s:shard1 
> > c:DIPS) [c:DIPS s:shard1 r:core_node1 x:DIPS_shard1_replica1] 
> > o.a.s.u.StreamingSolrClients error
> > java.net.SocketException: Too many open files
> > (...)
> > 2016-06-30 08:23:12.460 ERROR 
> > (updateExecutor-2-thread-22-processing-https:10.0.106.168:443//solr//DIPS_shard3_replica1
> >  x:DIPS_shard1_replica1 r:core_node1 n:10.0.106.115:443_solr s:shard1 
> > c:DIPS) [c:DIPS s:shard1 r:core_node1 x:DIPS_shard1_replica1] 
> > o.a.s.u.StreamingSolrClients error
> > java.net.SocketException: Too many open files
> >         (...)
> > 2016-06-30 08:23:12.461 ERROR (qtp314337396-18) [c:DIPS s:shard1 
> > r:core_node1 x:DIPS_shard1_replica1] o.a.s.h.RequestHandlerBase 
> > org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
> >  2 Async exceptions during distributed update:
> > Too many open files
> > Too many open files
> > (...)
> > 2016-06-30 08:23:12.461 INFO  (qtp314337396-18) [c:DIPS s:shard1 
> > r:core_node1 x:DIPS_shard1_replica1] o.a.s.c.S.Request 
> > [DIPS_shard1_replica1]  webapp=/solr path=/update params={version=2.2} 
> > status=-1 QTime=5
> > 2016-06-30 08:23:12.461 ERROR (qtp314337396-18) [c:DIPS s:shard1 
> > r:core_node1 x:DIPS_shard1_replica1] o.a.s.s.HttpSolrCall 
> > null:org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
> >  2 Async exceptions during distributed update:
> > Too many open files
> > Too many open files
> > ()
> > 
> > 2016-06-30 08:23:12.461 WARN  (qtp314337396-18) [c:DIPS s:shard1 
> > r:core_node1 x:DIPS_shard1_replica1] o.a.s.s.HttpSolrCall invalid return 
> > code: -1
> > 2016-06-30 08:23:38.108 INFO  (qtp314337396-20) [c:DIPS s:shard1 
> > r:core_node1 x:DIPS_shard1_replica1] o.a.s.c.S.Request 
> > [DIPS_shard1_replica1]  webapp=/solr path=/select 
> > params={df=_text_&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=https://10.0.106.1

RE: Solr node crashes while indexing - Too many open files

2016-06-30 Thread Mads Tomasgård Bjørgan
That's true, but I was hoping there would be another way to solve this issue as 
it's not considered preferable in our situation.

Is it normal behavior for Solr to open over 4000 files without closing them 
properly? Is it for example possible to adjust the autoCommit settings in
solrconfig.xml to force Solr to close the files?

Any help is appreciated :-)

-Original Message-
From: Markus Jelsma [mailto:markus.jel...@openindex.io] 
Sent: torsdag 30. juni 2016 11.41
To: solr-user@lucene.apache.org
Subject: RE: Solr node crashes while indexing - Too many open files

Mads, some distributions require different steps for increasing max_open_files. 
Check how it works for CentOS specifically.

Markus

 
 
-Original message-
> From:Mads Tomasgård Bjørgan 
> Sent: Thursday 30th June 2016 10:52
> To: solr-user@lucene.apache.org
> Subject: Solr node crashes while indexing - Too many open files
> 
> Hello,
> We're indexing a large set of files using Solr 6.1.0, running a SolrCloud by 
> utilizing ZooKeeper 3.4.8.
> 
> We have two ensembles - and both clusters are running on three of their own 
> respective VMs (CentOS 7). We first thought the error was due to CDCR - as we 
> were trying to index a large number of documents which had to be replicated 
> to the target cluster. However, we got the same error even after turning off 
> CDCR - which indicates CDCR wasn't the problem after all.
> 
> After indexing between 20 000 and 35 000 documents to the source cluster, the 
> file descriptor count reaches 4096 for one of the Solr nodes - and the 
> respective node crashes. The count grows quite linearly over time. The 
> remaining 2 nodes in the cluster are not affected at all, and their logs had 
> no relevant entries. We found the following errors for the crashing node in 
> its log:
> 
> 2016-06-30 08:23:12.459 ERROR 
> (updateExecutor-2-thread-22-processing-https:10.0.106.168:443//solr//DIPS_shard3_replica1
>  x:DIPS_shard1_replica1 r:core_node1 n:10.0.106.115:443_solr s:shard1 c:DIPS) 
> [c:DIPS s:shard1 r:core_node1 x:DIPS_shard1_replica1] 
> o.a.s.u.StreamingSolrClients error
> java.net.SocketException: Too many open files
> (...)
> 2016-06-30 08:23:12.460 ERROR 
> (updateExecutor-2-thread-22-processing-https:10.0.106.168:443//solr//DIPS_shard3_replica1
>  x:DIPS_shard1_replica1 r:core_node1 n:10.0.106.115:443_solr s:shard1 c:DIPS) 
> [c:DIPS s:shard1 r:core_node1 x:DIPS_shard1_replica1] 
> o.a.s.u.StreamingSolrClients error
> java.net.SocketException: Too many open files
> (...)
> 2016-06-30 08:23:12.461 ERROR (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
> x:DIPS_shard1_replica1] o.a.s.h.RequestHandlerBase 
> org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
>  2 Async exceptions during distributed update:
> Too many open files
> Too many open files
> (...)
> 2016-06-30 08:23:12.461 INFO  (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
> x:DIPS_shard1_replica1] o.a.s.c.S.Request [DIPS_shard1_replica1]  
> webapp=/solr path=/update params={version=2.2} status=-1 QTime=5
> 2016-06-30 08:23:12.461 ERROR (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
> x:DIPS_shard1_replica1] o.a.s.s.HttpSolrCall 
> null:org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
>  2 Async exceptions during distributed update:
> Too many open files
> Too many open files
> ()
> 
> 2016-06-30 08:23:12.461 WARN  (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
> x:DIPS_shard1_replica1] o.a.s.s.HttpSolrCall invalid return code: -1
> 2016-06-30 08:23:38.108 INFO  (qtp314337396-20) [c:DIPS s:shard1 r:core_node1 
> x:DIPS_shard1_replica1] o.a.s.c.S.Request [DIPS_shard1_replica1]  
> webapp=/solr path=/select 
> params={df=_text_&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=https://10.0.106.115:443/solr/DIPS_shard1_replica1/&rows=10&version=2&q=*:*&NOW=1467275018057&isShard=true&wt=javabin&_=1467275017220}
>  hits=30218 status=0 QTime=1
> 
> Running netstat -n -p on the VM that yields the exceptions reveals that there 
> are at least 1 800 TCP connections (we have not counted how many - the netstat 
> command filled the entire PuTTY window, yielding 2 000 lines) waiting to be closed:
> tcp6  70  0 10.0.106.115:34531  10.0.106.114:443
> CLOSE_WAIT  21658/java
> We're running the SolrCloud on 443, and the IPs belong to the VMs. We also 
> tried adjusting the ulimit for the machine to 100 000 - without any results.
> 
> Greetings,
> Mads
> 


RE: Solr node crashes while indexing - Too many open files

2016-06-30 Thread Markus Jelsma
Mads, some distributions require different steps for increasing max_open_files. 
Check how it works for CentOS specifically.

Markus

 
 
-Original message-
> From:Mads Tomasgård Bjørgan 
> Sent: Thursday 30th June 2016 10:52
> To: solr-user@lucene.apache.org
> Subject: Solr node crashes while indexing - Too many open files
> 
> Hello,
> We're indexing a large set of files using Solr 6.1.0, running a SolrCloud by 
> utilizing ZooKeeper 3.4.8.
> 
> We have two ensembles - and both clusters are running on three of their own 
> respective VMs (CentOS 7). We first thought the error was due to CDCR - as we 
> were trying to index a large number of documents which had to be replicated 
> to the target cluster. However, we got the same error even after turning off 
> CDCR - which indicates CDCR wasn't the problem after all.
> 
> After indexing between 20 000 and 35 000 documents to the source cluster, the 
> file descriptor count reaches 4096 for one of the Solr nodes - and the 
> respective node crashes. The count grows quite linearly over time. The 
> remaining 2 nodes in the cluster are not affected at all, and their logs had 
> no relevant entries. We found the following errors for the crashing node in 
> its log:
> 
> 2016-06-30 08:23:12.459 ERROR 
> (updateExecutor-2-thread-22-processing-https:10.0.106.168:443//solr//DIPS_shard3_replica1
>  x:DIPS_shard1_replica1 r:core_node1 n:10.0.106.115:443_solr s:shard1 c:DIPS) 
> [c:DIPS s:shard1 r:core_node1 x:DIPS_shard1_replica1] 
> o.a.s.u.StreamingSolrClients error
> java.net.SocketException: Too many open files
> (...)
> 2016-06-30 08:23:12.460 ERROR 
> (updateExecutor-2-thread-22-processing-https:10.0.106.168:443//solr//DIPS_shard3_replica1
>  x:DIPS_shard1_replica1 r:core_node1 n:10.0.106.115:443_solr s:shard1 c:DIPS) 
> [c:DIPS s:shard1 r:core_node1 x:DIPS_shard1_replica1] 
> o.a.s.u.StreamingSolrClients error
> java.net.SocketException: Too many open files
> (...)
> 2016-06-30 08:23:12.461 ERROR (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
> x:DIPS_shard1_replica1] o.a.s.h.RequestHandlerBase 
> org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
>  2 Async exceptions during distributed update:
> Too many open files
> Too many open files
> (...)
> 2016-06-30 08:23:12.461 INFO  (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
> x:DIPS_shard1_replica1] o.a.s.c.S.Request [DIPS_shard1_replica1]  
> webapp=/solr path=/update params={version=2.2} status=-1 QTime=5
> 2016-06-30 08:23:12.461 ERROR (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
> x:DIPS_shard1_replica1] o.a.s.s.HttpSolrCall 
> null:org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
>  2 Async exceptions during distributed update:
> Too many open files
> Too many open files
> ()
> 
> 2016-06-30 08:23:12.461 WARN  (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
> x:DIPS_shard1_replica1] o.a.s.s.HttpSolrCall invalid return code: -1
> 2016-06-30 08:23:38.108 INFO  (qtp314337396-20) [c:DIPS s:shard1 r:core_node1 
> x:DIPS_shard1_replica1] o.a.s.c.S.Request [DIPS_shard1_replica1]  
> webapp=/solr path=/select 
> params={df=_text_&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=https://10.0.106.115:443/solr/DIPS_shard1_replica1/&rows=10&version=2&q=*:*&NOW=1467275018057&isShard=true&wt=javabin&_=1467275017220}
>  hits=30218 status=0 QTime=1
> 
> Running netstat -n -p on the VM that yields the exceptions reveals that there 
> are at least 1 800 TCP connections (we have not counted how many - the netstat 
> command filled the entire PuTTY window, yielding 2 000 lines) waiting to be closed:
> tcp6  70  0 10.0.106.115:34531  10.0.106.114:443
> CLOSE_WAIT  21658/java
> We're running the SolrCloud on 443, and the IPs belong to the VMs. We also 
> tried adjusting the ulimit for the machine to 100 000 - without any results.
> 
> Greetings,
> Mads
> 


Solr node crashes while indexing - Too many open files

2016-06-30 Thread Mads Tomasgård Bjørgan
Hello,
We're indexing a large set of files using Solr 6.1.0, running a SolrCloud by 
utilizing ZooKeeper 3.4.8.

We have two ensembles - and both clusters are running on three of their own 
respective VMs (CentOS 7). We first thought the error was due to CDCR - as we 
were trying to index a large number of documents which had to be replicated to 
the target cluster. However, we got the same error even after turning off CDCR - 
which indicates CDCR wasn't the problem after all.

After indexing between 20 000 and 35 000 documents to the source cluster, the 
file descriptor count reaches 4096 for one of the Solr nodes - and the 
respective node crashes. The count grows quite linearly over time. The 
remaining 2 nodes in the cluster are not affected at all, and their logs had no 
relevant entries. We found the following errors for the crashing node in its log:

2016-06-30 08:23:12.459 ERROR 
(updateExecutor-2-thread-22-processing-https:10.0.106.168:443//solr//DIPS_shard3_replica1
 x:DIPS_shard1_replica1 r:core_node1 n:10.0.106.115:443_solr s:shard1 c:DIPS) 
[c:DIPS s:shard1 r:core_node1 x:DIPS_shard1_replica1] 
o.a.s.u.StreamingSolrClients error
java.net.SocketException: Too many open files
(...)
2016-06-30 08:23:12.460 ERROR 
(updateExecutor-2-thread-22-processing-https:10.0.106.168:443//solr//DIPS_shard3_replica1
 x:DIPS_shard1_replica1 r:core_node1 n:10.0.106.115:443_solr s:shard1 c:DIPS) 
[c:DIPS s:shard1 r:core_node1 x:DIPS_shard1_replica1] 
o.a.s.u.StreamingSolrClients error
java.net.SocketException: Too many open files
(...)
2016-06-30 08:23:12.461 ERROR (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
x:DIPS_shard1_replica1] o.a.s.h.RequestHandlerBase 
org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
 2 Async exceptions during distributed update:
Too many open files
Too many open files
(...)
2016-06-30 08:23:12.461 INFO  (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
x:DIPS_shard1_replica1] o.a.s.c.S.Request [DIPS_shard1_replica1]  webapp=/solr 
path=/update params={version=2.2} status=-1 QTime=5
2016-06-30 08:23:12.461 ERROR (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
x:DIPS_shard1_replica1] o.a.s.s.HttpSolrCall 
null:org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
 2 Async exceptions during distributed update:
Too many open files
Too many open files
()

2016-06-30 08:23:12.461 WARN  (qtp314337396-18) [c:DIPS s:shard1 r:core_node1 
x:DIPS_shard1_replica1] o.a.s.s.HttpSolrCall invalid return code: -1
2016-06-30 08:23:38.108 INFO  (qtp314337396-20) [c:DIPS s:shard1 r:core_node1 
x:DIPS_shard1_replica1] o.a.s.c.S.Request [DIPS_shard1_replica1]  webapp=/solr 
path=/select 
params={df=_text_&distrib=false&fl=id&fl=score&shards.purpose=4&start=0&fsv=true&shard.url=https://10.0.106.115:443/solr/DIPS_shard1_replica1/&rows=10&version=2&q=*:*&NOW=1467275018057&isShard=true&wt=javabin&_=1467275017220}
 hits=30218 status=0 QTime=1

Running netstat -n -p on the VM that yields the exceptions reveals that there 
are at least 1 800 TCP connections (we have not counted how many - the netstat 
command filled the entire PuTTY window, yielding 2 000 lines) waiting to be closed:
tcp6  70  0 10.0.106.115:34531  10.0.106.114:443CLOSE_WAIT  
21658/java
We're running the SolrCloud on 443, and the IPs belong to the VMs. We also 
tried adjusting the ulimit for the machine to 100 000 - without any results.
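
For what it's worth, the half-closed connections can be summarised per remote
endpoint with standard tools - a sketch, not commands from the thread:

    # count CLOSE_WAIT connections per remote address
    netstat -tn | awk '$6 == "CLOSE_WAIT" {print $5}' | sort | uniq -c | sort -rn | head
    # or scoped to the java process above (21658 is the PID from the netstat output)
    lsof -a -p 21658 -i | grep -c CLOSE_WAIT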

Greetings,
Mads


IOFileUploadException(Too many open files) occurs while indexing using ExtractingRequestHandler

2012-11-29 Thread Shigeki Kobayashi
Hello everyone

I use ManifoldCF (a file crawler) to crawl and index file contents into
Solr 3.6.
ManifoldCF uses ExtractingRequestHandler to extract content from files.
Somehow an IOFileUploadException occurs, saying there are too many open
files.

Does Solr open temporary files under /var/tmp/ a lot? Are there any cases
in which those files remain open?

Also, after the IOFileUploadException occurs, a LockObtainFailedException
tends to happen a lot. Do you think this is related to the IOFileUploadException?
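
One way to check whether those temporary upload files are what is piling up
(a sketch; the process lookup is an assumption about the Tomcat install):

    # open upload_*.tmp handles still held by the servlet container's JVM
    lsof -p "$(pgrep -f tomcat | head -n 1)" | grep -c '/var/tmp/upload_'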


2012/11/30 04:11:19
ERROR[solr.servlet.SolrDispatchFilter]-[TP-Processor1962]-:org.apache.commons.fileupload.FileUploadBase$IOFileUploadException:
Processing of multipart/form-data request failed.
/var/tmp/upload_4f3502de_13b4ac3d1f6__8000_24519177.tmp (Too many open
files)
at
org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:367)
at
org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126)
at
org.apache.solr.servlet.MultipartRequestParser.parseParamsAndFillStreams(SolrRequestParsers.java:344)
at
org.apache.solr.servlet.StandardRequestParser.parseParamsAndFillStreams(SolrRequestParsers.java:397)
at
org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:115)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:244)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:122)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
at
org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:291)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:774)
at
org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:703)
at
org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:896)
at
org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.FileNotFoundException:
/var/tmp/upload_4f3502de_13b4ac3d1f6__8000_24519177.tmp (Too many open
files)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
at
org.apache.commons.io.output.DeferredFileOutputStream.thresholdReached(DeferredFileOutputStream.java:181)
at
org.apache.commons.io.output.ThresholdingOutputStream.checkThreshold(ThresholdingOutputStream.java:226)
at
org.apache.commons.io.output.ThresholdingOutputStream.write(ThresholdingOutputStream.java:130)
at org.apache.commons.fileupload.util.Streams.copy(Streams.java:101)
at org.apache.commons.fileupload.util.Streams.copy(Streams.java:64)
at
org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:362)
... 23 more






2012/11/30 06:11:08
ERROR[solr.servlet.SolrDispatchFilter]-[TP-Processor1940]-:org.apache.lucene.store.LockObtainFailedException:
Lock obtain timed out: NativeFSLock@/usr/local/solr/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1098)
at
org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:84)
at
org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:101)
at
org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:171)
at
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:219)
at
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:61)
at
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:115)
at
org.apache.solr.handler.extraction.ExtractingDocumentLoader.doAdd(ExtractingDocumentLoader.java:141)
at

Re: Open deleted index file failing jboss shutdown with Too many open files Error

2012-04-02 Thread Michael McCandless
Hmm, unless the ulimits are low, or the default mergeFactor was
changed, or you have many indexes open in a single JVM, or you keep
too many IndexReaders open, even in an NRT or frequent commit use
case, you should not run out of file descriptors.

Frequent commit/reopen should be perfectly fine, as long as you close
the old readers...

Mike McCandless

http://blog.mikemccandless.com

On Mon, Apr 2, 2012 at 8:35 AM, Erick Erickson  wrote:
> How often are you committing index updates? This kind of thing
> can happen if you commit too often. Consider setting
> commitWithin to something like, say, 5 minutes. Or doing the
> equivalent with the autoCommit parameters in solrconfig.xml
>
> If that isn't relevant, you need to provide some more details
> about what you're doing and how you're using Solr
>
> Best
> Erick
>
> On Sun, Apr 1, 2012 at 10:47 PM, Gopal Patwa  wrote:
>> I am using a Solr 4.0 nightly build with NRT and I often get this
>> error during auto commit: "Too many open files". I have searched this forum
>> and what I found is that it is related to the OS ulimit setting; please see my
>> ulimit settings below. I am not sure what ulimit setting I should have for open
>> files - ulimit -n unlimited?
>>
>> Even if I set it to a higher number, it will just delay the issue until it
>> reaches the new open file limit. What I have seen is that Solr keeps deleted
>> index files open in the java process, which causes an issue for our application
>> server (JBoss), preventing it from shutting down gracefully because of those open files.
>>
>> I have seen recently this issue was resolved in lucene, is it TRUE?
>>
>> https://issues.apache.org/jira/browse/LUCENE-3855
>>
>>
>> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3
>> - 15GB, with Single shard
>>
>> We update the index every 5 seconds, soft commit every 1 second and hard
>> commit every 15 minutes
>>
>> Environment: Jboss 4.2, JDK 1.6 64 bit, CentOS , JVM Heap Size = 24GB*
>>
>>
>> ulimit:
>>
>> core file size          (blocks, -c) 0
>>
>> data seg size           (kbytes, -d) unlimited
>>
>> scheduling priority             (-e) 0
>>
>> file size               (blocks, -f) unlimited
>>
>> pending signals                 (-i) 401408
>>
>> max locked memory       (kbytes, -l) 1024
>>
>> max memory size         (kbytes, -m) unlimited
>>
>> open files                      (-n) 4096
>>
>> pipe size            (512 bytes, -p) 8
>>
>> POSIX message queues     (bytes, -q) 819200
>>
>> real-time priority              (-r) 0
>>
>> stack size              (kbytes, -s) 10240
>>
>> cpu time               (seconds, -t) unlimited
>>
>> max user processes              (-u) 401408
>>
>> virtual memory          (kbytes, -v) unlimited
>>
>> file locks                      (-x) unlimited
>>
>>
>> ERROR:*
>>
>> *2012-04-01* *20:08:35*,*323* [] *priority=ERROR* *app_name=*
>> *thread=pool-10-thread-1* *location=CommitTracker* *line=93* *auto*
>> *commit* *error...:org.apache.solr.common.SolrException:* *Error*
>> *opening* *new* *searcher*
>>        *at* 
>> *org.apache.solr.core.SolrCore.openNewSearcher*(*SolrCore.java:1138*)
>>        *at* *org.apache.solr.core.SolrCore.getSearcher*(*SolrCore.java:1251*)
>>        *at* 
>> *org.apache.solr.update.DirectUpdateHandler2.commit*(*DirectUpdateHandler2.java:409*)
>>        *at* 
>> *org.apache.solr.update.CommitTracker.run*(*CommitTracker.java:197*)
>>        *at* 
>> *java.util.concurrent.Executors$RunnableAdapter.call*(*Executors.java:441*)
>>        *at* 
>> *java.util.concurrent.FutureTask$Sync.innerRun*(*FutureTask.java:303*)
>>        *at* *java.util.concurrent.FutureTask.run*(*FutureTask.java:138*)
>>        *at* 
>> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301*(*ScheduledThreadPoolExecutor.java:98*)
>>        *at* 
>> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run*(*ScheduledThreadPoolExecutor.java:206*)
>>        *at* 
>> *java.util.concurrent.ThreadPoolExecutor$Worker.runTask*(*ThreadPoolExecutor.java:886*)
>>        *at* 
>> *java.util.concurrent.ThreadPoolExecutor$Worker.run*(*ThreadPoolExecutor.java:908*)
>>        *at* *java.lang.Thread.run*(*Thread.java:662*)*Caused* *by:*
>> *java.io.FileNotFoundException:*
>> */opt/mci/data/srwp01mci001/inventory/index/_4q1y_0.tip* (*Too many
>> open files*)
>>        *at* *java.io.RandomAccessFile.open*(*Native* *Method*)
>&

Re: Open deleted index file failing jboss shutdown with Too many open files Error

2012-04-02 Thread Gopal Patwa
Here is solrconfig.xml. I am using Lucene NRT with soft commits; we
update the index every 5 seconds, soft commit every 1 second, and hard
commit every 15 minutes.

> SolrConfig.xml:
>
>
>
>false
>10
>2147483647
>1
>4096
>10
>1000
>1
>single
>
>
>  0.0
>  10.0
>
>
>
>  false
>  0
>
>
>
>
>
>
>1000
> 
>   90
>   false
> 
> 
>   ${inventory.solr.softcommit.duration:1000}
> 
>
>

On Sun, Apr 1, 2012 at 7:47 PM, Gopal Patwa  wrote:

> I am using a Solr 4.0 nightly build with NRT and I often get this
> error during auto commit: "Too many open files". I have searched this forum
> and what I found is that it is related to the OS ulimit setting; please see my
> ulimit settings below. I am not sure what ulimit setting I should have for open
> files - ulimit -n unlimited?
>
> Even if I set it to a higher number, it will just delay the issue until it
> reaches the new open file limit. What I have seen is that Solr keeps deleted
> index files open in the java process, which causes an issue for our application
> server (JBoss), preventing it from shutting down gracefully because of those open files.
>
> I have seen recently this issue was resolved in lucene, is it TRUE?
>
> https://issues.apache.org/jira/browse/LUCENE-3855
>
>
> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3
> - 15GB, with Single shard
>
> We update the index every 5 seconds, soft commit every 1 second and hard
> commit every 15 minutes
>
> Environment: Jboss 4.2, JDK 1.6 64 bit, CentOS , JVM Heap Size = 24GB*
>
>
> ulimit:
>
> core file size  (blocks, -c) 0
>
> data seg size   (kbytes, -d) unlimited
>
> scheduling priority (-e) 0
>
> file size   (blocks, -f) unlimited
>
> pending signals (-i) 401408
>
> max locked memory   (kbytes, -l) 1024
>
> max memory size (kbytes, -m) unlimited
>
> open files  (-n) 4096
>
> pipe size(512 bytes, -p) 8
>
> POSIX message queues (bytes, -q) 819200
>
> real-time priority  (-r) 0
>
> stack size  (kbytes, -s) 10240
>
> cpu time   (seconds, -t) unlimited
>
> max user processes  (-u) 401408
>
> virtual memory  (kbytes, -v) unlimited
>
> file locks  (-x) unlimited
>
>
> ERROR:*
>
> *2012-04-01* *20:08:35*,*323* [] *priority=ERROR* *app_name=* 
> *thread=pool-10-thread-1* *location=CommitTracker* *line=93* *auto* *commit* 
> *error...:org.apache.solr.common.SolrException:* *Error* *opening* *new* 
> *searcher*
>   *at* 
> *org.apache.solr.core.SolrCore.openNewSearcher*(*SolrCore.java:1138*)
>   *at* *org.apache.solr.core.SolrCore.getSearcher*(*SolrCore.java:1251*)
>   *at* 
> *org.apache.solr.update.DirectUpdateHandler2.commit*(*DirectUpdateHandler2.java:409*)
>   *at* 
> *org.apache.solr.update.CommitTracker.run*(*CommitTracker.java:197*)
>   *at* 
> *java.util.concurrent.Executors$RunnableAdapter.call*(*Executors.java:441*)
>   *at* 
> *java.util.concurrent.FutureTask$Sync.innerRun*(*FutureTask.java:303*)
>   *at* *java.util.concurrent.FutureTask.run*(*FutureTask.java:138*)
>   *at* 
> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301*(*ScheduledThreadPoolExecutor.java:98*)
>   *at* 
> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run*(*ScheduledThreadPoolExecutor.java:206*)
>   *at* 
> *java.util.concurrent.ThreadPoolExecutor$Worker.runTask*(*ThreadPoolExecutor.java:886*)
>   *at* 
> *java.util.concurrent.ThreadPoolExecutor$Worker.run*(*ThreadPoolExecutor.java:908*)
>   *at* *java.lang.Thread.run*(*Thread.java:662*)*Caused* *by:* 
> *java.io.FileNotFoundException:* 
> */opt/mci/data/srwp01mci001/inventory/index/_4q1y_0.tip* (*Too many open 
> files*)
>   *at* *java.io.RandomAccessFile.open*(*Native* *Method*)
>   *at* *java.io.RandomAccessFile.*<*init*>(*RandomAccessFile.java:212*)
>   *at* 
> *org.apache.lucene.store.FSDirectory$FSIndexOutput.*<*init*>(*FSDirectory.java:449*)
>   *at* 
> *org.apache.lucene.store.FSDirectory.createOutput*(*FSDirectory.java:288*)
>   *at* 
> *org.apache.lucene.codecs.BlockTreeTermsWri

Re: Open deleted index file failing jboss shutdown with Too many open files Error

2012-04-02 Thread Erick Erickson
How often are you committing index updates? This kind of thing
can happen if you commit too often. Consider setting
commitWithin to something like, say, 5 minutes. Or doing the
equivalent with the autoCommit parameters in solrconfig.xml
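
commitWithin can also be passed on a plain HTTP update request - a sketch with
a placeholder host, core and document, where 300000 ms matches the five minutes
suggested above:

    curl 'http://localhost:8983/solr/core1/update?commitWithin=300000' \
         -H 'Content-Type: application/json' \
         --data-binary '[{"id":"doc1"}]'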

If that isn't relevant, you need to provide some more details
about what you're doing and how you're using Solr

Best
Erick

On Sun, Apr 1, 2012 at 10:47 PM, Gopal Patwa  wrote:
> I am using a Solr 4.0 nightly build with NRT and I often get this
> error during auto commit: "Too many open files". I have searched this forum
> and what I found is that it is related to the OS ulimit setting; please see my
> ulimit settings below. I am not sure what ulimit setting I should have for open
> files - ulimit -n unlimited?
>
> Even if I set it to a higher number, it will just delay the issue until it
> reaches the new open file limit. What I have seen is that Solr keeps deleted
> index files open in the java process, which causes an issue for our application
> server (JBoss), preventing it from shutting down gracefully because of those open files.
>
> I have seen recently this issue was resolved in lucene, is it TRUE?
>
> https://issues.apache.org/jira/browse/LUCENE-3855
>
>
> I have 3 core with index size : core1 - 70GB, Core2 - 50GB and Core3
> - 15GB, with Single shard
>
> We update the index every 5 seconds, soft commit every 1 second and hard
> commit every 15 minutes
>
> Environment: Jboss 4.2, JDK 1.6 64 bit, CentOS , JVM Heap Size = 24GB*
>
>
> ulimit:
>
> core file size          (blocks, -c) 0
>
> data seg size           (kbytes, -d) unlimited
>
> scheduling priority             (-e) 0
>
> file size               (blocks, -f) unlimited
>
> pending signals                 (-i) 401408
>
> max locked memory       (kbytes, -l) 1024
>
> max memory size         (kbytes, -m) unlimited
>
> open files                      (-n) 4096
>
> pipe size            (512 bytes, -p) 8
>
> POSIX message queues     (bytes, -q) 819200
>
> real-time priority              (-r) 0
>
> stack size              (kbytes, -s) 10240
>
> cpu time               (seconds, -t) unlimited
>
> max user processes              (-u) 401408
>
> virtual memory          (kbytes, -v) unlimited
>
> file locks                      (-x) unlimited
>
>
> ERROR:*
>
> *2012-04-01* *20:08:35*,*323* [] *priority=ERROR* *app_name=*
> *thread=pool-10-thread-1* *location=CommitTracker* *line=93* *auto*
> *commit* *error...:org.apache.solr.common.SolrException:* *Error*
> *opening* *new* *searcher*
>        *at* 
> *org.apache.solr.core.SolrCore.openNewSearcher*(*SolrCore.java:1138*)
>        *at* *org.apache.solr.core.SolrCore.getSearcher*(*SolrCore.java:1251*)
>        *at* 
> *org.apache.solr.update.DirectUpdateHandler2.commit*(*DirectUpdateHandler2.java:409*)
>        *at* 
> *org.apache.solr.update.CommitTracker.run*(*CommitTracker.java:197*)
>        *at* 
> *java.util.concurrent.Executors$RunnableAdapter.call*(*Executors.java:441*)
>        *at* 
> *java.util.concurrent.FutureTask$Sync.innerRun*(*FutureTask.java:303*)
>        *at* *java.util.concurrent.FutureTask.run*(*FutureTask.java:138*)
>        *at* 
> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301*(*ScheduledThreadPoolExecutor.java:98*)
>        *at* 
> *java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run*(*ScheduledThreadPoolExecutor.java:206*)
>        *at* 
> *java.util.concurrent.ThreadPoolExecutor$Worker.runTask*(*ThreadPoolExecutor.java:886*)
>        *at* 
> *java.util.concurrent.ThreadPoolExecutor$Worker.run*(*ThreadPoolExecutor.java:908*)
>        *at* *java.lang.Thread.run*(*Thread.java:662*)*Caused* *by:*
> *java.io.FileNotFoundException:*
> */opt/mci/data/srwp01mci001/inventory/index/_4q1y_0.tip* (*Too many
> open files*)
>        *at* *java.io.RandomAccessFile.open*(*Native* *Method*)
>        *at* *java.io.RandomAccessFile.*<*init*>(*RandomAccessFile.java:212*)
>        *at* 
> *org.apache.lucene.store.FSDirectory$FSIndexOutput.*<*init*>(*FSDirectory.java:449*)
>        *at* 
> *org.apache.lucene.store.FSDirectory.createOutput*(*FSDirectory.java:288*)
>        *at* 
> *org.apache.lucene.codecs.BlockTreeTermsWriter.*<*init*>(*BlockTreeTermsWriter.java:161*)
>        *at* 
> *org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsConsumer*(*Lucene40PostingsFormat.java:66*)
>        *at* 
> *org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.addField*(*PerFieldPostingsFormat.java:118*)
>        *at* 
> *org.apache.lucene.index.FreqProxTermsWriterPerField.flush*(*FreqProxTermsWriterPerField.java:322*)
>        *at* 
> *org.apache.lucene.index.FreqProxTermsWriter.flus

Open deleted index file failing jboss shutdown with Too many open files Error

2012-04-01 Thread Gopal Patwa
I am using a Solr 4.0 nightly build with NRT and I often get this
error during auto commit: "Too many open files". I have searched this forum
and what I found is that it is related to the OS ulimit setting; please see my
ulimit settings below. I am not sure what ulimit setting I should have for open
files - ulimit -n unlimited?

Even if I set it to a higher number, it will just delay the issue until it
reaches the new open file limit. What I have seen is that Solr keeps deleted
index files open in the java process, which causes an issue for our application
server (JBoss), preventing it from shutting down gracefully because of those open files.
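
One way to see those deleted-but-still-open index files is lsof - a sketch,
assuming the JBoss/Solr JVM can be found with pgrep:

    JVM_PID=$(pgrep -f jboss | head -n 1)
    lsof -p "$JVM_PID" | grep '(deleted)' | grep -c '/index/'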

I have seen that this issue was recently resolved in Lucene - is that true?

https://issues.apache.org/jira/browse/LUCENE-3855


I have 3 cores with index sizes: core1 - 70GB, core2 - 50GB and core3
- 15GB, with a single shard.

We update the index every 5 seconds, soft commit every 1 second and hard
commit every 15 minutes

Environment: JBoss 4.2, JDK 1.6 64-bit, CentOS, JVM heap size = 24GB


ulimit:

core file size  (blocks, -c) 0

data seg size   (kbytes, -d) unlimited

scheduling priority (-e) 0

file size   (blocks, -f) unlimited

pending signals (-i) 401408

max locked memory   (kbytes, -l) 1024

max memory size (kbytes, -m) unlimited

open files  (-n) 4096

pipe size(512 bytes, -p) 8

POSIX message queues (bytes, -q) 819200

real-time priority  (-r) 0

stack size  (kbytes, -s) 10240

cpu time   (seconds, -t) unlimited

max user processes  (-u) 401408

virtual memory  (kbytes, -v) unlimited

file locks  (-x) unlimited


ERROR:*

*2012-04-01* *20:08:35*,*323* [] *priority=ERROR* *app_name=*
*thread=pool-10-thread-1* *location=CommitTracker* *line=93* *auto*
*commit* *error...:org.apache.solr.common.SolrException:* *Error*
*opening* *new* *searcher*
*at* 
*org.apache.solr.core.SolrCore.openNewSearcher*(*SolrCore.java:1138*)
*at* *org.apache.solr.core.SolrCore.getSearcher*(*SolrCore.java:1251*)
*at* 
*org.apache.solr.update.DirectUpdateHandler2.commit*(*DirectUpdateHandler2.java:409*)
*at* 
*org.apache.solr.update.CommitTracker.run*(*CommitTracker.java:197*)
*at* 
*java.util.concurrent.Executors$RunnableAdapter.call*(*Executors.java:441*)
*at* 
*java.util.concurrent.FutureTask$Sync.innerRun*(*FutureTask.java:303*)
*at* *java.util.concurrent.FutureTask.run*(*FutureTask.java:138*)
*at* 
*java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301*(*ScheduledThreadPoolExecutor.java:98*)
*at* 
*java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run*(*ScheduledThreadPoolExecutor.java:206*)
*at* 
*java.util.concurrent.ThreadPoolExecutor$Worker.runTask*(*ThreadPoolExecutor.java:886*)
*at* 
*java.util.concurrent.ThreadPoolExecutor$Worker.run*(*ThreadPoolExecutor.java:908*)
*at* *java.lang.Thread.run*(*Thread.java:662*)*Caused* *by:*
*java.io.FileNotFoundException:*
*/opt/mci/data/srwp01mci001/inventory/index/_4q1y_0.tip* (*Too many
open files*)
*at* *java.io.RandomAccessFile.open*(*Native* *Method*)
*at* *java.io.RandomAccessFile.*<*init*>(*RandomAccessFile.java:212*)
*at* 
*org.apache.lucene.store.FSDirectory$FSIndexOutput.*<*init*>(*FSDirectory.java:449*)
*at* 
*org.apache.lucene.store.FSDirectory.createOutput*(*FSDirectory.java:288*)
*at* 
*org.apache.lucene.codecs.BlockTreeTermsWriter.*<*init*>(*BlockTreeTermsWriter.java:161*)
*at* 
*org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsConsumer*(*Lucene40PostingsFormat.java:66*)
*at* 
*org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.addField*(*PerFieldPostingsFormat.java:118*)
*at* 
*org.apache.lucene.index.FreqProxTermsWriterPerField.flush*(*FreqProxTermsWriterPerField.java:322*)
*at* 
*org.apache.lucene.index.FreqProxTermsWriter.flush*(*FreqProxTermsWriter.java:92*)
*at* *org.apache.lucene.index.TermsHash.flush*(*TermsHash.java:117*)
*at* *org.apache.lucene.index.DocInverter.flush*(*DocInverter.java:53*)
*at* 
*org.apache.lucene.index.DocFieldProcessor.flush*(*DocFieldProcessor.java:81*)
*at* 
*org.apache.lucene.index.DocumentsWriterPerThread.flush*(*DocumentsWriterPerThread.java:475*)
*at* 
*org.apache.lucene.index.DocumentsWriter.doFlush*(*DocumentsWriter.java:422*)
*at* 
*org.apache.lucene.index.DocumentsWriter.flushAllThreads*(*DocumentsWriter.java:553*)
*at* 
*org.apache.lucene.index.IndexWriter.getReader*(*IndexWriter.java:354*)
*at* 
*org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter*(*StandardDirectoryReader.java:258*)
*at* 
*org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged*(*StandardDirectoryR

Re: Too many open files - lots of sockets

2012-03-14 Thread Erick Erickson
Colin:

FYI, you might consider just setting up the autocommit (or commitWithin if
you're using SolrJ) for some reasonable interval (I often use 10 minutes or so).

Even though you've figured it is a Jetty issue, each
commit causes searcher re-opens, perhaps replication in a master/slave
setup, increased merges etc. It works, but it's also resource intensive...

FWIW
Erick

On Wed, Mar 14, 2012 at 6:40 AM, Michael Kuhlmann  wrote:
> Ah, good to know! Thank you!
>
> I already had Jetty under suspicion, but we had this failure quite often in
> October and November, when the bug was not yet reported.
>
> -Kuli
>
> Am 14.03.2012 12:08, schrieb Colin Howe:
>
>> After some more digging around I discovered that there was a bug reported
>> in jetty 6:  https://jira.codehaus.org/browse/JETTY-1458
>>
>> This prompted me to upgrade to Jetty 7 and things look a bit more stable
>> now :)


Re: Too many open files - lots of sockets

2012-03-14 Thread Michael Kuhlmann

Ah, good to know! Thank you!

I already had Jetty under suspicion, but we had this failure quite often 
in October and November, when the bug was not yet reported.


-Kuli

Am 14.03.2012 12:08, schrieb Colin Howe:

After some more digging around I discovered that there was a bug reported
in jetty 6:  https://jira.codehaus.org/browse/JETTY-1458

This prompted me to upgrade to Jetty 7 and things look a bit more stable
now :)


Re: Too many open files - lots of sockets

2012-03-14 Thread Colin Howe
After some more digging around I discovered that there was a bug reported
in jetty 6:  https://jira.codehaus.org/browse/JETTY-1458

This prompted me to upgrade to Jetty 7 and things look a bit more stable
now :)



On Wed, Mar 14, 2012 at 10:26 AM, Michael Kuhlmann  wrote:

> I had the same problem, without auto-commit.
>
> I never really found out what exactly the reason was, but I think it was
> because commits were triggered before a previous commit had the chance to
> finish.
>
> We now commit after every minute or 1000 (quite large) documents, whatever
>> comes first. And we never optimize. We haven't had these exceptions for
> months now.
>
> Good luck!
> -Kuli
>
> Am 14.03.2012 11:22, schrieb Colin Howe:
>
>> Currently using 3.4.0. We have autocommit enabled but we manually do
>> commits every 100 documents anyway... I can turn it off if you think that
>> might help.
>>
>>
>> Cheers,
>> Colin
>>
>>
>> On Wed, Mar 14, 2012 at 10:24 AM, Markus Jelsma
>> wrote:
>>
>>  Are you running trunk and have auto-commit enabled? Then disable
>>> auto-commit. Even if you increase ulimits it will continue to swallow all
>>> available file descriptors.
>>>
>>>
>>> On Wed, 14 Mar 2012 10:13:55 +, Colin Howe
>>> wrote:
>>>
>>>  Hello,
>>>>
>>>> We keep hitting the too many open files exception. Looking at lsof we
>>>> have
>>>> a lot (several thousand) of entries like this:
>>>>
>>>> java  19339 root 1619u sock0,7
>>>>  0t0
>>>>  682291383 can't identify protocol
>>>>
>>>>
>>>> However, netstat -a doesn't show any of these.
>>>>
>>>> Can anyone suggest a way to diagnose what these socket entries are?
>>>> Happy
>>>> to post any more information as needed.
>>>>
>>>>
>>>> Cheers,
>>>> Colin
>>>>
>>>>
>>> --
>>> Markus Jelsma - CTO - Openindex
>>> http://www.linkedin.com/in/markus17
>>> 050-8536600 / 06-50258350
>>>
>>>
>>
>>
>>
>


-- 
Colin Howe
@colinhowe

VP of Engineering
Conversocial Ltd
conversocial.com


Re: Too many open files - lots of sockets

2012-03-14 Thread Michael Kuhlmann

I had the same problem, without auto-commit.

I never really found out what exactly the reason was, but I think it was 
because commits were triggered before a previous commit had the chance 
to finish.


We now commit after every minute or 1000 (quite large) documents, 
whatever comes first. And we never optimize. We haven't had these 
exceptions for months now.


Good luck!
-Kuli

Am 14.03.2012 11:22, schrieb Colin Howe:

Currently using 3.4.0. We have autocommit enabled but we manually do
commits every 100 documents anyway... I can turn it off if you think that
might help.


Cheers,
Colin


On Wed, Mar 14, 2012 at 10:24 AM, Markus Jelsma
wrote:


Are you running trunk and have auto-commit enabled? Then disable
auto-commit. Even if you increase ulimits it will continue to swallow all
available file descriptors.


On Wed, 14 Mar 2012 10:13:55 +, Colin Howe
wrote:


Hello,

We keep hitting the too many open files exception. Looking at lsof we have
a lot (several thousand) of entries like this:

java  19339 root 1619u sock0,70t0
  682291383 can't identify protocol


However, netstat -a doesn't show any of these.

Can anyone suggest a way to diagnose what these socket entries are? Happy
to post any more information as needed.


Cheers,
Colin



--
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536600 / 06-50258350









Re: Too many open files - lots of sockets

2012-03-14 Thread Colin Howe
Currently using 3.4.0. We have autocommit enabled but we manually do
commits every 100 documents anyway... I can turn it off if you think that
might help.


Cheers,
Colin


On Wed, Mar 14, 2012 at 10:24 AM, Markus Jelsma
wrote:

> Are you running trunk and have auto-commit enabled? Then disable
> auto-commit. Even if you increase ulimits it will continue to swallow all
> available file descriptors.
>
>
> On Wed, 14 Mar 2012 10:13:55 +, Colin Howe 
> wrote:
>
>> Hello,
>>
>> We keep hitting the too many open files exception. Looking at lsof we have
>> a lot (several thousand) of entries like this:
>>
>> java  19339 root 1619u sock0,70t0
>>  682291383 can't identify protocol
>>
>>
>> However, netstat -a doesn't show any of these.
>>
>> Can anyone suggest a way to diagnose what these socket entries are? Happy
>> to post any more information as needed.
>>
>>
>> Cheers,
>> Colin
>>
>
> --
> Markus Jelsma - CTO - Openindex
> http://www.linkedin.com/in/markus17
> 050-8536600 / 06-50258350
>



-- 
Colin Howe
@colinhowe

VP of Engineering
Conversocial Ltd
conversocial.com


Re: Too many open files - lots of sockets

2012-03-14 Thread Markus Jelsma
Are you running trunk and have auto-commit enabled? Then disable 
auto-commit. Even if you increase ulimits it will continue to swallow 
all available file descriptors.


On Wed, 14 Mar 2012 10:13:55 +, Colin Howe  
wrote:

Hello,

We keep hitting the too many open files exception. Looking at lsof we 
have

a lot (several thousand) of entries like this:

java  19339 root 1619u sock0,7
0t0

 682291383 can't identify protocol


However, netstat -a doesn't show any of these.

Can anyone suggest a way to diagnose what these socket entries are? 
Happy

to post any more information as needed.


Cheers,
Colin


--
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536600 / 06-50258350


Too many open files - lots of sockets

2012-03-14 Thread Colin Howe
Hello,

We keep hitting the too many open files exception. Looking at lsof we have
a lot (several thousand) of entries like this:

java  19339 root 1619u sock0,7 0t0
 682291383 can't identify protocol


However, netstat -a doesn't show any of these.

Can anyone suggest a way to diagnose what these socket entries are? Happy
to post any more information as needed.
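
One way to at least count the orphaned sockets and attribute them to a process
(a sketch; it counts them but does not explain them):

    # unidentified sockets per owning PID; the FD column can then be matched in /proc/<pid>/fd
    lsof | grep "can't identify protocol" | awk '{print $2}' | sort | uniq -c | sort -rn | head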


Cheers,
Colin


-- 
Colin Howe
@colinhowe

VP of Engineering
Conversocial Ltd
conversocial.com


Re: [SolrCloud] Too many open files - internal server error

2012-02-29 Thread Carlos Alberto Schneider
I had this problem some time ago;
it happened on our homologation (staging) machine.

There were 3 Solr instances running: 1 master and 2 slaves.
My solution was: I stopped the slaves, deleted both data folders, ran an
optimize and then started them again.

I tried to raise the OS open file limit first, but i think it was not a
good idea... so i tried this ...


On Wed, Feb 29, 2012 at 2:07 PM, Markus Jelsma
wrote:

> I get the correct output for ulimit -n as tomcat6 user. However, i did
> find a




-- 
Carlos Alberto Schneider
Informant -(47) 38010919 - 9904-5517


Re: [SolrCloud] Too many open files - internal server error

2012-02-29 Thread Markus Jelsma
Thanks. They are set properly. But i misspelled the tomcat6 username in 
limits.conf :(

On Wednesday 29 February 2012 18:08:55 Yonik Seeley wrote:
> On Wed, Feb 29, 2012 at 10:32 AM, Markus Jelsma
> 
>  wrote:
> > The Linux machines have proper settings for ulimit and friends, 32k open
> > files allowed
> 
> Maybe you can expand on this point.
> 
> cat /proc/sys/fs/file-max
> cat /proc/sys/fs/nr_open
> 
> Those take precedence over ulimit.  Not sure if there are others...
> 
> -Yonik
> lucenerevolution.com - Lucene/Solr Open Source Search Conference.
> Boston May 7-10

-- 
Markus Jelsma - CTO - Openindex



Re: [SolrCloud] Too many open files - internal server error

2012-02-29 Thread Markus Jelsma
On Wednesday 29 February 2012 17:52:55 Sami Siren wrote:
> On Wed, Feb 29, 2012 at 5:53 PM, Markus Jelsma
> 
>  wrote:
> > Sami,
> > 
> > As superuser:
> > $ lsof | wc -l
> > 
> > But, just now, i also checked the system handler and it told me:
> > (error executing: ulimit -n)
> 
> That's odd, you should see something like this there:
> 
> "openFileDescriptorCount":131,
> "maxFileDescriptorCount":4096,
> 
> Which jvm do you have?

Standard issue SUN Java 6 on Debian. We run that JVM on all machines. But i 
see the same (error executing: ulimit -n) locally with Jetty and Solr trunk 
and Solr 3.5 and on a production server with Solr 3.2 with Tomcat6.

> 
> > This is rather strange, it seems. lsof | wc -l is not higher than 6k
> > right now and ulimit -n is 32k. Is lsof not to be trusted in this case
> > or... something else?
> 
> I am not sure what is going on, are you sure the open file descriptor
> (32k) limit is active for the user running solr?

I get the correct output for ulimit -n as the tomcat6 user. However, i did find a 
mistake in /etc/security/limits.conf where i misspelled the tomcat6 user 
(shame). On recent systems ulimit and sysctl alone are not enough, so spelling 
tomcat6 correctly should fix the open files issue. 

Now we only have the issue of (error executing: ulimit -n).

> 
> --
>  Sami Siren

-- 
Markus Jelsma - CTO - Openindex


Re: [SolrCloud] Too many open files - internal server error

2012-02-29 Thread Yonik Seeley
On Wed, Feb 29, 2012 at 10:32 AM, Markus Jelsma
 wrote:
> The Linux machines have proper settings for ulimit and friends, 32k open files
> allowed

Maybe you can expand on this point.

cat /proc/sys/fs/file-max
cat /proc/sys/fs/nr_open

Those take precedence over ulimit.  Not sure if there are others...
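
Checking and raising them might look like this (a sketch; the value is only
illustrative):

    cat /proc/sys/fs/file-nr        # allocated, free, and maximum file handles system-wide
    sysctl -w fs.file-max=2097152   # raise the ceiling; add fs.file-max to /etc/sysctl.conf to persist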

-Yonik
lucenerevolution.com - Lucene/Solr Open Source Search Conference.
Boston May 7-10


Re: [SolrCloud] Too many open files - internal server error

2012-02-29 Thread Sami Siren
On Wed, Feb 29, 2012 at 5:53 PM, Markus Jelsma
 wrote:
> Sami,
>
> As superuser:
> $ lsof | wc -l
>
> But, just now, i also checked the system handler and it told me:
> (error executing: ulimit -n)

That's odd, you should see something like this there:

"openFileDescriptorCount":131,
"maxFileDescriptorCount":4096,

Which jvm do you have?

> This is rather strange, it seems. lsof | wc -l is not higher than 6k right now
> and ulimit -n is 32k. Is lsof not to be trusted in this case or... something
> else?

I am not sure what is going on, are you sure the open file descriptor
(32k) limit is active for the user running solr?

--
 Sami Siren


Re: [SolrCloud] Too many open files - internal server error

2012-02-29 Thread Markus Jelsma
Sami,

As superuser:
$ lsof | wc -l

But, just now, i also checked the system handler and it told me:
(error executing: ulimit -n)

This is rather strange, it seems. lsof | wc -l is not higher than 6k right now 
and ulimit -n is 32k. Is lsof not to be trusted in this case or... something 
else? 

Thanks

On Wednesday 29 February 2012 16:44:58 Sami Siren wrote:
> Hi Markus,
> 
> > The Linux machines have proper settings for ulimit and friends, 32k open
> > files allowed so i suspect there's another limit which i am unaware of.
> > I also listed the number of open files while the errors were coming in
> > but it did not exceed 11k at any given time.
> 
> How did you check the number of filedescriptors used? Did you get this
> number from the system info handler
> (http://hostname:8983/solr/admin/system?indent=on&wt=json) or somehow
> differently?
> 
> --
>  Sami Siren

-- 
Markus Jelsma - CTO - Openindex


Re: [SolrCloud] Too many open files - internal server error

2012-02-29 Thread Sami Siren
Hi Markus,

> The Linux machines have proper settings for ulimit and friends, 32k open files
> allowed so i suspect there's another limit which i am unaware of. I also
> listed the number of open files while the errors were coming in but it did not
> exceed 11k at any given time.

How did you check the number of filedescriptors used? Did you get this
number from the system info handler
(http://hostname:8983/solr/admin/system?indent=on&wt=json) or somehow
differently?

--
 Sami Siren


[SolrCloud] Too many open files - internal server error

2012-02-29 Thread Markus Jelsma
Hi,

We're doing some tests with the latest trunk revision on a cluster of five 
high-end machines. There is one collection, five shards and one replica per 
shard on some other node.

We're filling the index from a MapReduce job, 18 processes run concurrently. 
This is plenty when indexing to a single high-end node but with SolrCloud 
things go down pretty soon.

First we get a Too Many Open Files error on all nodes almost at the same time. 
When shutting down the indexer the nodes won't respond anymore except for an 
Internal Server Error.

First the too many open files stack trace:

2012-02-29 15:22:51,067 ERROR [solr.core.SolrCore] - [http-80-6] - : 
java.io.FileNotFoundException: /opt/solr/openindex_b/data/index/_h5_0.tim (Too 
many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:449)
at 
org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:288)
at 
org.apache.lucene.codecs.BlockTreeTermsWriter.<init>(BlockTreeTermsWriter.java:149)
at 
org.apache.lucene.codecs.lucene40.Lucene40PostingsFormat.fieldsConsumer(Lucene40PostingsFormat.java:66)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.addField(PerFieldPostingsFormat.java:118)
at 
org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:322)
at 
org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:92)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:117)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
at 
org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
at 
org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:475)
at 
org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
at 
org.apache.lucene.index.DocumentsWriter.postUpdate(DocumentsWriter.java:320)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:389)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1533)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1505)
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:168)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:56)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:53)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:354)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:451)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:258)
at 
org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:118)
at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:135)
at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:79)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:59)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1539)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:406)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:255)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
at 
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:662)



A similar exception sometimes begins with:

%2012-02-29 15:25:36,137 ERROR [solr.update.CommitTracker] - [pool-5-thread-1] 
- : au

Little hint for: java.net.SocketException: Too many open files

2012-02-13 Thread Gerke, Axel
Hi together,

We're running several instances of Solr (3.5) on Apache Tomcat (6.0) on
Ubuntu 10.xx. After adding another instance (maybe the 14th or 15th for
the developers' sandboxes), Tomcat raised the exception
"java.net.SocketException: Too many open files".

After reading several sites I googled, the reason is the following:
every instance opens the index with a number of files, and the user that
owns the process has a default limit of 1024 open files.
So I tried to raise the limit via ulimit -n, but Ubuntu told me that
ulimit is deprecated.

After trying some hints, the following page was helpful for me:
http://www.chrissearle.org/blog/technical/increasing_max_number_open_files_glassfish_user_debian

hth,

Axel Gerke





Re: java.net.SocketException: Too many open files

2012-01-24 Thread Sethi, Parampreet
Hi Jonty,

You can try changing the maximum number of files opened by a process using
command:

ulimit -n XXX

In case, the number of opened files is not increasing with time and just a
constant number which is larger than system default limit, this should fix
it.

-param

On 1/24/12 11:40 AM, "Michael Kuhlmann"  wrote:

>Hi Jonty,
>
>no, not really. When we first had such problems, we really thought that
>the number of open files is the problem, so we implemented an algorithm
>that performed an optimize from time to time to force a segment merge.
>Due to some misconfiguration, this ran too often. With the result that
>an optimize was issued before the previous optimization was finished,
>which is a really bad idea.
>
>We removed the optimization calls, and since then we didn't have any
>more problems.
>
>However, I never found out the initial reason for the exception. Maybe
>there was some bug in Solr's 3.1 version - we're using 3.5 right now -,
>but I couldn't find a hint in the changelog.
>
>At least we didn't have this exception for more than two months now, so
>I'm hoping that the cause for this has disappeared somehow.
>
>Sorry that I can't help you more.
>
>Greetings,
>Kuli
>
>On 24.01.2012 07:48, Jonty Rhods wrote:
>> Hi Kuli,
>>
>> Did you get the solution of this problem? I am still facing this
>>problem.
>> Please help me to overcome this problem.
>>
>> regards
>>
>>
>> On Wed, Oct 26, 2011 at 1:16 PM, Michael Kuhlmann
>>wrote:
>>
>>> Hi;
>>>
>>> we have a similar problem here. We already raised the file ulimit on
>>>the
>>> server to 4096, but this only deferred the problem. We get a
>>> TooManyOpenFilesException every few months.
>>>
>>> The problem has nothing to do with real files. When we had the last
>>> TooManyOpenFilesException, we investigated with netstat -a and saw that
>>> there were about 3900 open sockets in Jetty.
>>>
>>> Curiously, we only have one SolrServer instance per Solr client, and we
>>> only have three clients (our running web servers).
>>>
>>> We have set defaultMaxConnectionsPerHost to 20 and maxTotalConnections
>>> to 100. There should be room enough.
>>>
>>> Sorry that I can't help you, we still have not solved the problem on
>>> our own.
>>>
>>> Greetings,
>>> Kuli
>>>
>>> Am 25.10.2011 22:03, schrieb Jonty Rhods:
>>>> Hi,
>>>>
>>>> I am using solrj and for connection to server I am using instance of
>>>>the
>>>> solr server:
>>>>
>>>> SolrServer server =  new CommonsHttpSolrServer("
>>>> http://localhost:8080/solr/core0");
>>>>
>>>> I noticed that after few minutes it start throwing exception
>>>> java.net.SocketException: Too many open files.
>>>> It seems that it related to instance of the HttpClient. How to
>>>>resolved
>>> the
>>>> instances to a certain no. Like connection pool in dbcp etc..
>>>>
>>>> I am not experienced on java so please help to resolved this problem.
>>>>
>>>>   solr version: 3.4
>>>>
>>>> regards
>>>> Jonty
>>>>
>>>
>>>
>>
>



Re: java.net.SocketException: Too many open files

2012-01-24 Thread Michael Kuhlmann

Hi Jonty,

no, not really. When we first had such problems, we really thought that 
the number of open files is the problem, so we implemented an algorithm 
that performed an optimize from time to time to force a segment merge. 
Due to some misconfiguration, this ran too often. With the result that 
an optimize was issued before the previous optimization was finished, 
which is a really bad idea.


We removed the optimization calls, and since then we didn't have any 
more problems.


However, I never found out the initial reason for the exception. Maybe 
there was some bug in Solr's 3.1 version - we're using 3.5 right now -, 
but I couldn't find a hint in the changelog.


At least we didn't have this exception for more than two months now, so 
I'm hoping that the cause for this has disappeared somehow.


Sorry that I can't help you more.

Greetings,
Kuli

On 24.01.2012 07:48, Jonty Rhods wrote:

Hi Kuli,

Did you get the solution of this problem? I am still facing this problem.
Please help me to overcome this problem.

regards


On Wed, Oct 26, 2011 at 1:16 PM, Michael Kuhlmann  wrote:


Hi;

we have a similar problem here. We already raised the file ulimit on the
server to 4096, but this only deferred the problem. We get a
TooManyOpenFilesException every few months.

The problem has nothing to do with real files. When we had the last
TooManyOpenFilesException, we investigated with netstat -a and saw that
there were about 3900 open sockets in Jetty.

Curiously, we only have one SolrServer instance per Solr client, and we
only have three clients (our running web servers).

We have set defaultMaxConnectionsPerHost to 20 and maxTotalConnections
to 100. There should be room enough.

Sorry that I can't help you, we still have not solved the problem on
our own.

Greetings,
Kuli

Am 25.10.2011 22:03, schrieb Jonty Rhods:

Hi,

I am using solrj and for connection to server I am using instance of the
solr server:

SolrServer server =  new CommonsHttpSolrServer("
http://localhost:8080/solr/core0");

I noticed that after few minutes it start throwing exception
java.net.SocketException: Too many open files.
It seems that it related to instance of the HttpClient. How to resolved

the

instances to a certain no. Like connection pool in dbcp etc..

I am not experienced on java so please help to resolved this problem.

  solr version: 3.4

regards
Jonty










Re: java.net.SocketException: Too many open files

2012-01-23 Thread Jonty Rhods
Hi Kuli,

Did you get the solution of this problem? I am still facing this problem.
Please help me to overcome this problem.

regards


On Wed, Oct 26, 2011 at 1:16 PM, Michael Kuhlmann  wrote:

> Hi;
>
> we have a similar problem here. We already raised the file ulimit on the
> server to 4096, but this only deferred the problem. We get a
> TooManyOpenFilesException every few months.
>
> The problem has nothing to do with real files. When we had the last
> TooManyOpenFilesException, we investigated with netstat -a and saw that
> there were about 3900 open sockets in Jetty.
>
> Curiously, we only have one SolrServer instance per Solr client, and we
> only have three clients (our running web servers).
>
> We have set defaultMaxConnectionsPerHost to 20 and maxTotalConnections
> to 100. There should be room enough.
>
> Sorry that I can't help you, we still have not solved the problem on
> our own.
>
> Greetings,
> Kuli
>
> Am 25.10.2011 22:03, schrieb Jonty Rhods:
> > Hi,
> >
> > I am using solrj and for connection to server I am using instance of the
> > solr server:
> >
> > SolrServer server =  new CommonsHttpSolrServer("
> > http://localhost:8080/solr/core0");
> >
> > I noticed that after few minutes it start throwing exception
> > java.net.SocketException: Too many open files.
> > It seems that it related to instance of the HttpClient. How to resolved
> the
> > instances to a certain no. Like connection pool in dbcp etc..
> >
> > I am not experienced on java so please help to resolved this problem.
> >
> >  solr version: 3.4
> >
> > regards
> > Jonty
> >
>
>


Re: java.net.SocketException: Too many open files

2011-10-26 Thread Michael Kuhlmann
Hi;

we have a similar problem here. We already raised the file ulimit on the
server to 4096, but this only deferred the problem. We get a
TooManyOpenFilesException every few months.

The problem has nothing to do with real files. When we had the last
TooManyOpenFilesException, we investigated with netstat -a and saw that
there were about 3900 open sockets in Jetty.

Curiously, we only have one SolrServer instance per Solr client, and we
only have three clients (our running web servers).

We have set defaultMaxConnectionsPerHost to 20 and maxTotalConnections
to 100. There should be room enough.

Sorry that I can't help you, we still have not solved the problem on
our own.

Greetings,
Kuli

Am 25.10.2011 22:03, schrieb Jonty Rhods:
> Hi,
> 
> I am using solrj and for connection to server I am using instance of the
> solr server:
> 
> SolrServer server =  new CommonsHttpSolrServer("
> http://localhost:8080/solr/core0");
> 
> I noticed that after few minutes it start throwing exception
> java.net.SocketException: Too many open files.
> It seems that it related to instance of the HttpClient. How to resolved the
> instances to a certain no. Like connection pool in dbcp etc..
> 
> I am not experienced on java so please help to resolved this problem.
> 
>  solr version: 3.4
> 
> regards
> Jonty
> 



Re: java.net.SocketException: Too many open files

2011-10-25 Thread Jonty Rhods
Hi Yonik,

thanks for reply.

Currently I have more than 50 classes, and every class has its own
SolrServer server =  new CommonsHttpSolrServer("http://localhost:8080/solr/core0");
The majority of classes connect to core0, but there are also many other cores
being connected to from different classes.

My scenario is:
expecting 40 to 50 hits on the server every day, and the server is
deployed on Tomcat 6.20 with a 12GB heap size for Catalina. My OS is Red Hat
Linux (production), and I am using Ubuntu as the development server.
Logically I can make a common class for connecting to the Solr server, but now
the question is:

1. If I use a common class then I have to set the max connections on the
HttpClient; what would be the ideal default setting for my current problem?
2. I am expecting a minimum of 5000 concurrent hits to the Solr server at peak
time.
3. If I use SolrServer server =  new CommonsHttpSolrServer("http://localhost:8080/solr/core0");
from a common class across all classes, will it help to resolve the current
problem with the current load? I don't want my users to experience slow
responses from the Solr server.
4. Other users overcome this issue by increasing or removing the TCP/IP limits
at the OS level. Is that the right approach?

As I have newly shifted to the Java language, any piece of code will be much
appreciated and much easier for me to understand.

thanks.

regards
Jonty


On Wed, Oct 26, 2011 at 1:37 AM, Yonik Seeley wrote:

> On Tue, Oct 25, 2011 at 4:03 PM, Jonty Rhods 
> wrote:
> > Hi,
> >
> > I am using solrj and for connection to server I am using instance of the
> > solr server:
> >
> > SolrServer server =  new CommonsHttpSolrServer("
> > http://localhost:8080/solr/core0");
>
> Are you reusing the server object for all of your requests?
> By default, Solr and SolrJ use persistent connections, meaning that
> sockets are reused and new ones are not opened for every request.
>
> -Yonik
> http://www.lucidimagination.com
>
>
> > I noticed that after few minutes it start throwing exception
> > java.net.SocketException: Too many open files.
> > It seems that it related to instance of the HttpClient. How to resolved
> the
> > instances to a certain no. Like connection pool in dbcp etc..
> >
> > I am not experienced on java so please help to resolved this problem.
> >
> >  solr version: 3.4
> >
> > regards
> > Jonty
> >
>
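
Not from the thread itself, but as a rough sketch of the "common class" idea against the SolrJ 3.x API; the connection numbers below are placeholders to tune, not recommendations:

import java.net.MalformedURLException;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

// One CommonsHttpSolrServer per core, created once and shared by all classes,
// so every request reuses the pooled HttpClient connections instead of
// opening (and eventually leaking) new sockets.
public final class SolrClientHolder {
    private static final CommonsHttpSolrServer CORE0 = create("http://localhost:8080/solr/core0");

    private static CommonsHttpSolrServer create(String url) {
        try {
            CommonsHttpSolrServer server = new CommonsHttpSolrServer(url);
            server.setConnectionTimeout(1000);
            server.setSoTimeout(1000);
            server.setDefaultMaxConnectionsPerHost(100);  // placeholder, tune for your load
            server.setMaxTotalConnections(100);           // placeholder, tune for your load
            return server;
        } catch (MalformedURLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static SolrServer core0() {
        return CORE0;
    }

    private SolrClientHolder() {}
}

// Usage from any class:
//   QueryResponse rsp = SolrClientHolder.core0().query(new SolrQuery("*:*"));

The point is simply that one server object per core, shared by every class, lets the underlying HttpClient pool and reuse sockets rather than opening a fresh one per request.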


Re: java.net.SocketException: Too many open files

2011-10-25 Thread Bui Van Quy

Hi,

I had the same problem "Too many open files", but it was logged by the Tomcat 
server. Please check your index directory; if there are too many index 
files, please execute the Solr optimize command. This exception is raised by 
the OS of the server; you can google it for more research.



On 10/26/2011 3:07 AM, Yonik Seeley wrote:

On Tue, Oct 25, 2011 at 4:03 PM, Jonty Rhods  wrote:

Hi,

I am using solrj and for connection to server I am using instance of the
solr server:

SolrServer server =  new CommonsHttpSolrServer("
http://localhost:8080/solr/core0");

Are you reusing the server object for all of your requests?
By default, Solr and SolrJ use persistent connections, meaning that
sockets are reused and new ones are not opened for every request.

-Yonik
http://www.lucidimagination.com



I noticed that after few minutes it start throwing exception
java.net.SocketException: Too many open files.
It seems that it related to instance of the HttpClient. How to resolved the
instances to a certain no. Like connection pool in dbcp etc..

I am not experienced on java so please help to resolved this problem.

  solr version: 3.4

regards
Jonty







Re: java.net.SocketException: Too many open files

2011-10-25 Thread Péter Király
One note on this. I had trouble resetting the root user's limit in
Ubuntu. Somewhere I read that Ubuntu doesn't even give you the
correct limit number for root. The solution to this problem is to run Solr
under another user.

Péter

2011/10/25 Markus Jelsma :
> This is on Linux? This should help:
>
> echo fs.file-max = 16384 >> /etc/sysctl.conf
>
> On some distro's like Debian it seems you also have to add these settings to
> security.conf, otherwise it may not persist between reboots or even shell
> sessions:
>
> echo "systems hard nofile 16384
> systems soft nofile 16384" >> /etc/security/limits.conf
>
>
>> Hi,
>>
>> I am using solrj and for connection to server I am using instance of the
>> solr server:
>>
>> SolrServer server =  new CommonsHttpSolrServer("
>> http://localhost:8080/solr/core0");
>>
>> I noticed that after few minutes it start throwing exception
>> java.net.SocketException: Too many open files.
>> It seems that it related to instance of the HttpClient. How to resolved the
>> instances to a certain no. Like connection pool in dbcp etc..
>>
>> I am not experienced on java so please help to resolved this problem.
>>
>>  solr version: 3.4
>>
>> regards
>> Jonty
>



-- 
Péter Király
eXtensible Catalog
http://eXtensibleCatalog.org
http://drupal.org/project/xc


Re: java.net.SocketException: Too many open files

2011-10-25 Thread Markus Jelsma
This is on Linux? This should help:

echo fs.file-max = 16384 >> /etc/sysctl.conf

On some distro's like Debian it seems you also have to add these settings to 
security.conf, otherwise it may not persist between reboots or even shell 
sessions:

echo "systems hard nofile 16384
systems soft nofile 16384" >> /etc/security/limits.conf


> Hi,
> 
> I am using solrj and for connection to server I am using instance of the
> solr server:
> 
> SolrServer server =  new CommonsHttpSolrServer("
> http://localhost:8080/solr/core0");
> 
> I noticed that after few minutes it start throwing exception
> java.net.SocketException: Too many open files.
> It seems that it related to instance of the HttpClient. How to resolved the
> instances to a certain no. Like connection pool in dbcp etc..
> 
> I am not experienced on java so please help to resolved this problem.
> 
>  solr version: 3.4
> 
> regards
> Jonty


Re: java.net.SocketException: Too many open files

2011-10-25 Thread Yonik Seeley
On Tue, Oct 25, 2011 at 4:03 PM, Jonty Rhods  wrote:
> Hi,
>
> I am using solrj and for connection to server I am using instance of the
> solr server:
>
> SolrServer server =  new CommonsHttpSolrServer("
> http://localhost:8080/solr/core0");

Are you reusing the server object for all of your requests?
By default, Solr and SolrJ use persistent connections, meaning that
sockets are reused and new ones are not opened for every request.

-Yonik
http://www.lucidimagination.com


> I noticed that after few minutes it start throwing exception
> java.net.SocketException: Too many open files.
> It seems that it related to instance of the HttpClient. How to resolved the
> instances to a certain no. Like connection pool in dbcp etc..
>
> I am not experienced on java so please help to resolved this problem.
>
>  solr version: 3.4
>
> regards
> Jonty
>


java.net.SocketException: Too many open files

2011-10-25 Thread Jonty Rhods
Hi,

I am using SolrJ, and for the connection to the server I am using an instance of
the solr server:

SolrServer server =  new CommonsHttpSolrServer("
http://localhost:8080/solr/core0");

I noticed that after a few minutes it starts throwing the exception
java.net.SocketException: Too many open files.
It seems that it is related to instances of the HttpClient. How can I limit the
instances to a certain number, like a connection pool in DBCP etc.?

I am not experienced in Java, so please help me to resolve this problem.

 solr version: 3.4

regards
Jonty


Re: why too many open files?

2011-06-20 Thread Koji Sekiguchi

(11/06/20 16:16), Jason, Kim wrote:

Hi, Mark

I think FileNotFoundException will be worked around by raise the ulimit.
I just want to know why segments are created more than mergeFactor.
During the googling, I found contents concerning mergeFactor:
http://web.archiveorange.com/archive/v/bH0vUQzfYcdtZoocG2C9
Yonik wrote:
"mergeFactor 10 means a maximum of 10 segments at each "level".
if maxBufferedDocs=10 with a log doc merge policy (equivalent to
Lucene in the old days), then you could have up to ~ 10*log10(nDocs)
segments in the index (i.e. up to 60 segments for a 1M doc index)."

But I don't understand this.
someone explain to me in more detail?


Take a look at:

Visualizing Lucene's segment merges
http://s.apache.org/merging

koji
--
http://www.rondhuit.com/en/


Re: why too many open files?

2011-06-20 Thread Markus Jelsma
12 shards on the same machine?

> Hi, All
> 
> I have 12 shards and ramBufferSizeMB=512, mergeFactor=5.
> But solr raise java.io.FileNotFoundException (Too many open files).
> mergeFactor is just 5. How can this happen?
> Below is segments of some shard. That is too many segments over mergeFactor.
> What's wrong and How should I set the mergeFactor?
> 
> ==
> [root@solr solr]# ls indexData/multicore-us/usn02/data/index/
> _0.fdt   _gs.fdt  _h5.tii  _hl.nrm  _i1.nrm  _kn.nrm  _l1.nrm  _lq.tii
> _0.fdx   _gs.fdx  _h5.tis  _hl.prx  _i1.prx  _kn.prx  _l1.prx  _lq.tis
> _3i.fdt  _gs.fnm  _h7.fnm  _hl.tii  _i1.tii  _kn.tii  _l1.tii
> lucene-2de7b31b5eabdff0b6ec7fd32eecf8c7-write.lock
> _3i.fdx  _gs.frq  _h7.frq  _hl.tis  _i1.tis  _kn.tis  _l1.tis  _lu.fnm
> _3s.fnm  _gs.nrm  _h7.nrm  _hn.fnm  _j7.fdt  _kp.fnm  _l2.fnm  _lu.frq
> _3s.frq  _gs.prx  _h7.prx  _hn.frq  _j7.fdx  _kp.frq  _l2.frq  _lu.nrm
> _3s.nrm  _gs.tii  _h7.tii  _hn.nrm  _kb.fnm  _kp.nrm  _l2.nrm  _lu.prx
> _3s.prx  _gs.tis  _h7.tis  _hn.prx  _kb.frq  _kp.prx  _l2.prx  _lu.tii
> _3s.tii  _gu.fnm  _h9.fnm  _hn.tii  _kb.nrm  _kp.tii  _l2.tii  _lu.tis
> _3s.tis  _gu.frq  _h9.frq  _hn.tis  _kb.prx  _kp.tis  _l2.tis  _ly.fnm
> _48.fdt  _gu.nrm  _h9.nrm  _hp.fnm  _kb.tii  _kq.fnm  _l6.fnm  _ly.frq
> _48.fdx  _gu.prx  _h9.prx  _hp.frq  _kb.tis  _kq.frq  _l6.frq  _ly.nrm
> _4d.fnm  _gu.tii  _h9.tii  _hp.nrm  _kc.fnm  _kq.nrm  _l6.nrm  _ly.prx
> _4d.frq  _gu.tis  _h9.tis  _hp.prx  _kc.frq  _kq.prx  _l6.prx  _ly.tii
> _4d.nrm  _gw.fnm  _hb.fnm  _hp.tii  _kc.nrm  _kq.tii  _l6.tii  _ly.tis
> _4d.prx  _gw.frq  _hb.frq  _hp.tis  _kc.prx  _kq.tis  _l6.tis  _m3.fnm
> _4d.tii  _gw.nrm  _hb.nrm  _hr.fnm  _kc.tii  _kr.fnm  _la.fnm  _m3.frq
> _4d.tis  _gw.prx  _hb.prx  _hr.frq  _kc.tis  _kr.frq  _la.frq  _m3.nrm
> _5b.fdt  _gw.tii  _hb.tii  _hr.nrm  _kf.fdt  _kr.nrm  _la.nrm  _m3.prx
> _5b.fdx  _gw.tis  _hb.tis  _hr.prx  _kf.fdx  _kr.prx  _la.prx  _m3.tii
> _5b.fnm  _gy.fnm  _he.fdt  _hr.tii  _kf.fnm  _kr.tii  _la.tii  _m3.tis
> _5b.frq  _gy.frq  _he.fdx  _hr.tis  _kf.frq  _kr.tis  _la.tis  _m8.fnm
> _5b.nrm  _gy.nrm  _he.fnm  _ht.fnm  _kf.nrm  _kt.fnm  _le.fnm  _m8.frq
> _5b.prx  _gy.prx  _he.frq  _ht.frq  _kf.prx  _kt.frq  _le.frq  _m8.nrm
> _5b.tii  _gy.tii  _he.nrm  _ht.nrm  _kf.tii  _kt.nrm  _le.nrm  _m8.prx
> _5b.tis  _gy.tis  _he.prx  _ht.prx  _kf.tis  _kt.prx  _le.prx  _m8.tii
> _5m.fnm  _h0.fnm  _he.tii  _ht.tii  _kg.fnm  _kt.tii  _le.tii  _m8.tis
> _5m.frq  _h0.frq  _he.tis  _ht.tis  _kg.frq  _kt.tis  _le.tis  _md.fnm
> _5m.nrm  _h0.nrm  _hh.fnm  _hv.fnm  _kg.nrm  _kw.fnm  _li.fnm  _md.frq
> _5m.prx  _h0.prx  _hh.frq  _hv.frq  _kg.prx  _kw.frq  _li.frq  _md.nrm
> _5m.tii  _h0.tii  _hh.nrm  _hv.nrm  _kg.tii  _kw.nrm  _li.nrm  _md.prx
> _5m.tis  _h0.tis  _hh.prx  _hv.prx  _kg.tis  _kw.prx  _li.prx  _md.tii
> _5n.fnm  _h2.fnm  _hh.tii  _hv.tii  _kj.fdt  _kw.tii  _li.tii  _md.tis
> _5n.frq  _h2.frq  _hh.tis  _hv.tis  _kj.fdx  _kw.tis  _li.tis  _mi.fnm
> _5n.nrm  _h2.nrm  _hk.fnm  _hz.fdt  _kj.fnm  _ky.fnm  _lm.fnm  _mi.frq
> _5n.prx  _h2.prx  _hk.frq  _hz.fdx  _kj.frq  _ky.frq  _lm.frq  _mi.nrm
> _5n.tii  _h2.tii  _hk.nrm  _hz.fnm  _kj.nrm  _ky.nrm  _lm.nrm  _mi.prx
> _5n.tis  _h2.tis  _hk.prx  _hz.frq  _kj.prx  _ky.prx  _lm.prx  _mi.tii
> _5x.fnm  _h5.fdt  _hk.tii  _hz.nrm  _kj.tii  _ky.tii  _lm.tii  _mi.tis
> _5x.frq  _h5.fdx  _hk.tis  _hz.prx  _kj.tis  _ky.tis  _lm.tis  segments_1
> _5x.nrm  _h5.fnm  _hl.fdt  _hz.tii  _kn.fdt  _l1.fdt  _lq.fnm  segments.gen
> _5x.prx  _h5.frq  _hl.fdx  _hz.tis  _kn.fdx  _l1.fdx  _lq.frq
> _5x.tii  _h5.nrm  _hl.fnm  _i1.fnm  _kn.fnm  _l1.fnm  _lq.nrm
> _5x.tis  _h5.prx  _hl.frq  _i1.frq  _kn.frq  _l1.frq  _lq.prx
> ==
> 
> Thanks in advance.
> 
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/why-too-many-open-files-tp3084407p30844
> 07.html Sent from the Solr - User mailing list archive at Nabble.com.


Re: why too many open files?

2011-06-20 Thread Jason, Kim
Hi, Mark

I think the FileNotFoundException can be worked around by raising the ulimit.
I just want to know why more segments are created than the mergeFactor.
While googling, I found some content concerning mergeFactor:
http://web.archiveorange.com/archive/v/bH0vUQzfYcdtZoocG2C9
Yonik wrote:
"mergeFactor 10 means a maximum of 10 segments at each "level".
if maxBufferedDocs=10 with a log doc merge policy (equivalent to
Lucene in the old days), then you could have up to ~ 10*log10(nDocs)
segments in the index (i.e. up to 60 segments for a 1M doc index)."

But I don't understand this.
Can someone explain it to me in more detail?

Thanks


--
View this message in context: 
http://lucene.472066.n3.nabble.com/why-too-many-open-files-tp3084407p3085172.html
Sent from the Solr - User mailing list archive at Nabble.com.
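
To make the quoted rule of thumb concrete, here is a rough back-of-the-envelope sketch; the document count, flush size and files-per-segment figure are illustrative assumptions, not numbers from this thread:

// Rough estimate of how many segments (and index files) a log-merge policy
// can leave on disk at once, following the rule of thumb quoted above.
public class SegmentEstimate {
    public static void main(String[] args) {
        int mergeFactor = 5;        // from the original post
        int shards = 12;            // from the original post
        long nDocs = 1000000L;      // assumed index size per shard
        long docsPerFlush = 1000L;  // assumed docs buffered before each flush
        int filesPerSegment = 9;    // assumed: a non-compound segment is roughly 8-12 files

        // Up to mergeFactor segments per merge "level", with roughly
        // log_mergeFactor(nDocs / docsPerFlush) levels.
        double levels = Math.ceil(Math.log((double) nDocs / docsPerFlush) / Math.log(mergeFactor));
        long maxSegments = (long) (mergeFactor * levels);

        System.out.println("segments per shard     ~ " + maxSegments);
        System.out.println("index files per shard  ~ " + maxSegments * filesPerSegment);
        System.out.println("index files on the box (" + shards + " shards) ~ "
                + maxSegments * filesPerSegment * shards);
    }
}

With these assumed numbers the sketch lands at roughly 25 segments and a couple hundred files per shard, or a few thousand files across 12 shards on one machine, which is one reason raising the OS open-files limit matters even when the mergeFactor looks small.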


Re: why too many open files?

2011-06-19 Thread Mark Schoy
Hi,

have you checked the max open files limit of your OS?

see: http://lj4newbies.blogspot.com/2007/04/too-many-open-files.html



2011/6/20 Jason, Kim 

> Hi, All
>
> I have 12 shards and ramBufferSizeMB=512, mergeFactor=5.
> But solr raise java.io.FileNotFoundException (Too many open files).
> mergeFactor is just 5. How can this happen?
> Below is segments of some shard. That is too many segments over mergeFactor.
> What's wrong and How should I set the mergeFactor?
>
> ==
> [root@solr solr]# ls indexData/multicore-us/usn02/data/index/
> _0.fdt   _gs.fdt  _h5.tii  _hl.nrm  _i1.nrm  _kn.nrm  _l1.nrm  _lq.tii
> _0.fdx   _gs.fdx  _h5.tis  _hl.prx  _i1.prx  _kn.prx  _l1.prx  _lq.tis
> _3i.fdt  _gs.fnm  _h7.fnm  _hl.tii  _i1.tii  _kn.tii  _l1.tii
> lucene-2de7b31b5eabdff0b6ec7fd32eecf8c7-write.lock
> _3i.fdx  _gs.frq  _h7.frq  _hl.tis  _i1.tis  _kn.tis  _l1.tis  _lu.fnm
> _3s.fnm  _gs.nrm  _h7.nrm  _hn.fnm  _j7.fdt  _kp.fnm  _l2.fnm  _lu.frq
> _3s.frq  _gs.prx  _h7.prx  _hn.frq  _j7.fdx  _kp.frq  _l2.frq  _lu.nrm
> _3s.nrm  _gs.tii  _h7.tii  _hn.nrm  _kb.fnm  _kp.nrm  _l2.nrm  _lu.prx
> _3s.prx  _gs.tis  _h7.tis  _hn.prx  _kb.frq  _kp.prx  _l2.prx  _lu.tii
> _3s.tii  _gu.fnm  _h9.fnm  _hn.tii  _kb.nrm  _kp.tii  _l2.tii  _lu.tis
> _3s.tis  _gu.frq  _h9.frq  _hn.tis  _kb.prx  _kp.tis  _l2.tis  _ly.fnm
> _48.fdt  _gu.nrm  _h9.nrm  _hp.fnm  _kb.tii  _kq.fnm  _l6.fnm  _ly.frq
> _48.fdx  _gu.prx  _h9.prx  _hp.frq  _kb.tis  _kq.frq  _l6.frq  _ly.nrm
> _4d.fnm  _gu.tii  _h9.tii  _hp.nrm  _kc.fnm  _kq.nrm  _l6.nrm  _ly.prx
> _4d.frq  _gu.tis  _h9.tis  _hp.prx  _kc.frq  _kq.prx  _l6.prx  _ly.tii
> _4d.nrm  _gw.fnm  _hb.fnm  _hp.tii  _kc.nrm  _kq.tii  _l6.tii  _ly.tis
> _4d.prx  _gw.frq  _hb.frq  _hp.tis  _kc.prx  _kq.tis  _l6.tis  _m3.fnm
> _4d.tii  _gw.nrm  _hb.nrm  _hr.fnm  _kc.tii  _kr.fnm  _la.fnm  _m3.frq
> _4d.tis  _gw.prx  _hb.prx  _hr.frq  _kc.tis  _kr.frq  _la.frq  _m3.nrm
> _5b.fdt  _gw.tii  _hb.tii  _hr.nrm  _kf.fdt  _kr.nrm  _la.nrm  _m3.prx
> _5b.fdx  _gw.tis  _hb.tis  _hr.prx  _kf.fdx  _kr.prx  _la.prx  _m3.tii
> _5b.fnm  _gy.fnm  _he.fdt  _hr.tii  _kf.fnm  _kr.tii  _la.tii  _m3.tis
> _5b.frq  _gy.frq  _he.fdx  _hr.tis  _kf.frq  _kr.tis  _la.tis  _m8.fnm
> _5b.nrm  _gy.nrm  _he.fnm  _ht.fnm  _kf.nrm  _kt.fnm  _le.fnm  _m8.frq
> _5b.prx  _gy.prx  _he.frq  _ht.frq  _kf.prx  _kt.frq  _le.frq  _m8.nrm
> _5b.tii  _gy.tii  _he.nrm  _ht.nrm  _kf.tii  _kt.nrm  _le.nrm  _m8.prx
> _5b.tis  _gy.tis  _he.prx  _ht.prx  _kf.tis  _kt.prx  _le.prx  _m8.tii
> _5m.fnm  _h0.fnm  _he.tii  _ht.tii  _kg.fnm  _kt.tii  _le.tii  _m8.tis
> _5m.frq  _h0.frq  _he.tis  _ht.tis  _kg.frq  _kt.tis  _le.tis  _md.fnm
> _5m.nrm  _h0.nrm  _hh.fnm  _hv.fnm  _kg.nrm  _kw.fnm  _li.fnm  _md.frq
> _5m.prx  _h0.prx  _hh.frq  _hv.frq  _kg.prx  _kw.frq  _li.frq  _md.nrm
> _5m.tii  _h0.tii  _hh.nrm  _hv.nrm  _kg.tii  _kw.nrm  _li.nrm  _md.prx
> _5m.tis  _h0.tis  _hh.prx  _hv.prx  _kg.tis  _kw.prx  _li.prx  _md.tii
> _5n.fnm  _h2.fnm  _hh.tii  _hv.tii  _kj.fdt  _kw.tii  _li.tii  _md.tis
> _5n.frq  _h2.frq  _hh.tis  _hv.tis  _kj.fdx  _kw.tis  _li.tis  _mi.fnm
> _5n.nrm  _h2.nrm  _hk.fnm  _hz.fdt  _kj.fnm  _ky.fnm  _lm.fnm  _mi.frq
> _5n.prx  _h2.prx  _hk.frq  _hz.fdx  _kj.frq  _ky.frq  _lm.frq  _mi.nrm
> _5n.tii  _h2.tii  _hk.nrm  _hz.fnm  _kj.nrm  _ky.nrm  _lm.nrm  _mi.prx
> _5n.tis  _h2.tis  _hk.prx  _hz.frq  _kj.prx  _ky.prx  _lm.prx  _mi.tii
> _5x.fnm  _h5.fdt  _hk.tii  _hz.nrm  _kj.tii  _ky.tii  _lm.tii  _mi.tis
> _5x.frq  _h5.fdx  _hk.tis  _hz.prx  _kj.tis  _ky.tis  _lm.tis  segments_1
> _5x.nrm  _h5.fnm  _hl.fdt  _hz.tii  _kn.fdt  _l1.fdt  _lq.fnm  segments.gen
> _5x.prx  _h5.frq  _hl.fdx  _hz.tis  _kn.fdx  _l1.fdx  _lq.frq
> _5x.tii  _h5.nrm  _hl.fnm  _i1.fnm  _kn.fnm  _l1.fnm  _lq.nrm
> _5x.tis  _h5.prx  _hl.frq  _i1.frq  _kn.frq  _l1.frq  _lq.prx
> ======
>
> Thanks in advance.
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/why-too-many-open-files-tp3084407p3084407.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


why too many open files?

2011-06-19 Thread Jason, Kim
Hi, All

I have 12 shards and ramBufferSizeMB=512, mergeFactor=5.
But Solr raises java.io.FileNotFoundException (Too many open files).
mergeFactor is just 5. How can this happen?
Below are the segments of one shard. That is far more segments than the mergeFactor.
What's wrong, and how should I set the mergeFactor?

==
[root@solr solr]# ls indexData/multicore-us/usn02/data/index/
_0.fdt   _gs.fdt  _h5.tii  _hl.nrm  _i1.nrm  _kn.nrm  _l1.nrm  _lq.tii
_0.fdx   _gs.fdx  _h5.tis  _hl.prx  _i1.prx  _kn.prx  _l1.prx  _lq.tis
_3i.fdt  _gs.fnm  _h7.fnm  _hl.tii  _i1.tii  _kn.tii  _l1.tii 
lucene-2de7b31b5eabdff0b6ec7fd32eecf8c7-write.lock
_3i.fdx  _gs.frq  _h7.frq  _hl.tis  _i1.tis  _kn.tis  _l1.tis  _lu.fnm
_3s.fnm  _gs.nrm  _h7.nrm  _hn.fnm  _j7.fdt  _kp.fnm  _l2.fnm  _lu.frq
_3s.frq  _gs.prx  _h7.prx  _hn.frq  _j7.fdx  _kp.frq  _l2.frq  _lu.nrm
_3s.nrm  _gs.tii  _h7.tii  _hn.nrm  _kb.fnm  _kp.nrm  _l2.nrm  _lu.prx
_3s.prx  _gs.tis  _h7.tis  _hn.prx  _kb.frq  _kp.prx  _l2.prx  _lu.tii
_3s.tii  _gu.fnm  _h9.fnm  _hn.tii  _kb.nrm  _kp.tii  _l2.tii  _lu.tis
_3s.tis  _gu.frq  _h9.frq  _hn.tis  _kb.prx  _kp.tis  _l2.tis  _ly.fnm
_48.fdt  _gu.nrm  _h9.nrm  _hp.fnm  _kb.tii  _kq.fnm  _l6.fnm  _ly.frq
_48.fdx  _gu.prx  _h9.prx  _hp.frq  _kb.tis  _kq.frq  _l6.frq  _ly.nrm
_4d.fnm  _gu.tii  _h9.tii  _hp.nrm  _kc.fnm  _kq.nrm  _l6.nrm  _ly.prx
_4d.frq  _gu.tis  _h9.tis  _hp.prx  _kc.frq  _kq.prx  _l6.prx  _ly.tii
_4d.nrm  _gw.fnm  _hb.fnm  _hp.tii  _kc.nrm  _kq.tii  _l6.tii  _ly.tis
_4d.prx  _gw.frq  _hb.frq  _hp.tis  _kc.prx  _kq.tis  _l6.tis  _m3.fnm
_4d.tii  _gw.nrm  _hb.nrm  _hr.fnm  _kc.tii  _kr.fnm  _la.fnm  _m3.frq
_4d.tis  _gw.prx  _hb.prx  _hr.frq  _kc.tis  _kr.frq  _la.frq  _m3.nrm
_5b.fdt  _gw.tii  _hb.tii  _hr.nrm  _kf.fdt  _kr.nrm  _la.nrm  _m3.prx
_5b.fdx  _gw.tis  _hb.tis  _hr.prx  _kf.fdx  _kr.prx  _la.prx  _m3.tii
_5b.fnm  _gy.fnm  _he.fdt  _hr.tii  _kf.fnm  _kr.tii  _la.tii  _m3.tis
_5b.frq  _gy.frq  _he.fdx  _hr.tis  _kf.frq  _kr.tis  _la.tis  _m8.fnm
_5b.nrm  _gy.nrm  _he.fnm  _ht.fnm  _kf.nrm  _kt.fnm  _le.fnm  _m8.frq
_5b.prx  _gy.prx  _he.frq  _ht.frq  _kf.prx  _kt.frq  _le.frq  _m8.nrm
_5b.tii  _gy.tii  _he.nrm  _ht.nrm  _kf.tii  _kt.nrm  _le.nrm  _m8.prx
_5b.tis  _gy.tis  _he.prx  _ht.prx  _kf.tis  _kt.prx  _le.prx  _m8.tii
_5m.fnm  _h0.fnm  _he.tii  _ht.tii  _kg.fnm  _kt.tii  _le.tii  _m8.tis
_5m.frq  _h0.frq  _he.tis  _ht.tis  _kg.frq  _kt.tis  _le.tis  _md.fnm
_5m.nrm  _h0.nrm  _hh.fnm  _hv.fnm  _kg.nrm  _kw.fnm  _li.fnm  _md.frq
_5m.prx  _h0.prx  _hh.frq  _hv.frq  _kg.prx  _kw.frq  _li.frq  _md.nrm
_5m.tii  _h0.tii  _hh.nrm  _hv.nrm  _kg.tii  _kw.nrm  _li.nrm  _md.prx
_5m.tis  _h0.tis  _hh.prx  _hv.prx  _kg.tis  _kw.prx  _li.prx  _md.tii
_5n.fnm  _h2.fnm  _hh.tii  _hv.tii  _kj.fdt  _kw.tii  _li.tii  _md.tis
_5n.frq  _h2.frq  _hh.tis  _hv.tis  _kj.fdx  _kw.tis  _li.tis  _mi.fnm
_5n.nrm  _h2.nrm  _hk.fnm  _hz.fdt  _kj.fnm  _ky.fnm  _lm.fnm  _mi.frq
_5n.prx  _h2.prx  _hk.frq  _hz.fdx  _kj.frq  _ky.frq  _lm.frq  _mi.nrm
_5n.tii  _h2.tii  _hk.nrm  _hz.fnm  _kj.nrm  _ky.nrm  _lm.nrm  _mi.prx
_5n.tis  _h2.tis  _hk.prx  _hz.frq  _kj.prx  _ky.prx  _lm.prx  _mi.tii
_5x.fnm  _h5.fdt  _hk.tii  _hz.nrm  _kj.tii  _ky.tii  _lm.tii  _mi.tis
_5x.frq  _h5.fdx  _hk.tis  _hz.prx  _kj.tis  _ky.tis  _lm.tis  segments_1
_5x.nrm  _h5.fnm  _hl.fdt  _hz.tii  _kn.fdt  _l1.fdt  _lq.fnm  segments.gen
_5x.prx  _h5.frq  _hl.fdx  _hz.tis  _kn.fdx  _l1.fdx  _lq.frq
_5x.tii  _h5.nrm  _hl.fnm  _i1.fnm  _kn.fnm  _l1.fnm  _lq.nrm
_5x.tis  _h5.prx  _hl.frq  _i1.frq  _kn.frq  _l1.frq  _lq.prx
==

Thanks in advance.

--
View this message in context: 
http://lucene.472066.n3.nabble.com/why-too-many-open-files-tp3084407p3084407.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Too many open files exception related to solrj getServer too often?

2011-05-02 Thread Chris Hostetter

Off the top of my head, i don't know the answers to some of your 
questions, but as to the core cause of the exception...

: 3. server.query(solrQuery) throws SolrServerException.  How can concurrent
: solr queries triggers Too many open file exception?

...bear in mind that (as i understand it) the limit on open files is 
actually a limit on open file *descriptors* which includes network 
sockets.

a google search for "java.net.SocketException: Too many open files" will 
give you loads of results -- it's not specific to solr.

-Hoss
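
A throwaway sketch of that point, nothing from the thread; on a box with a low descriptor limit (e.g. the common default of 1024) it fails with "Too many open files" even though no regular files are involved:

import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

// Opens client/server socket pairs against a local listener and never closes
// them; each pair consumes two file descriptors, so the process eventually
// hits its limit with a "Too many open files" SocketException.
public class SocketFdDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket listener = new ServerSocket(0);  // any free port
        List<Socket> leaked = new ArrayList<Socket>();
        try {
            while (true) {
                leaked.add(new Socket("localhost", listener.getLocalPort()));
                leaked.add(listener.accept());
                if (leaked.size() % 200 == 0) {
                    System.out.println("open sockets so far: " + leaked.size());
                }
            }
        } catch (Exception e) {
            System.out.println("failed after " + leaked.size() + " sockets: " + e);
        }
    }
}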


Re: Too many open files exception related to solrj getServer too often?

2011-04-26 Thread cyang2010
Just bumping the topic and looking for answers.

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Too-many-open-files-exception-related-to-solrj-getServer-too-often-tp2808718p2867976.html
Sent from the Solr - User mailing list archive at Nabble.com.


Too many open files exception related to solrj getServer too often?

2011-04-11 Thread cyang2010
Hi,

I get this SolrJ error in the development environment.

org.apache.solr.client.solrj.SolrServerException: java.net.SocketException:
Too many open files

At the time there was no reindexing or any write to the index.   There were
only different queries generated using SolrJ hitting the Solr server:

CommonsHttpSolrServer server = new CommonsHttpSolrServer(url);
server.setSoTimeout(1000); // socket read timeout
server.setConnectionTimeout(1000);
server.setDefaultMaxConnectionsPerHost(100);
server.setMaxTotalConnections(100);
...
QueryResponse rsp = server.query(solrQuery);

I did NOT share a reference to the SolrJ CommonsHttpSolrServer among requests.
So every HTTP request obtains its own SolrJ server instance and runs its query
on it.

The question is:

1. Should the SolrJ client share one instance of CommonsHttpSolrServer?  Why?
Is every CommonsHttpSolrServer matched to one Solr/Lucene reader?  From the
source code, it just appears to be tied to one Apache HttpClient.

2. Is the TooManyOpenFiles exception related to my possibly wrong usage of
CommonsHttpSolrServer?

3. server.query(solrQuery) throws SolrServerException.  How can concurrent
Solr queries trigger a "Too many open files" exception?


Look forward to your input.  Thanks,



cy

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Too-many-open-files-exception-related-to-solrj-getServer-too-often-tp2808718p2808718.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Too Many Open Files

2010-06-28 Thread Anderson vasconcelos
Another question:
why doesn't SolrJ close the StringWriter and OutputStreamWriter?

thanks

2010/6/28 Anderson vasconcelos 

> Thanks for responses.
> I instantiate one instance of  per request (per delete query, in my case).
> I have a lot of concurrency process. Reusing the same instance (to send,
> delete and remove data) in solr, i will have a trouble?
> My concern is if i do this, solr will commit documents with data from other
> transaction.
>
> Thanks
>
>
>
>
> 2010/6/28 Michel Bottan 
>
> Hi Anderson,
>>
>> If you are using SolrJ, it's recommended to reuse the same instance per
>> solr
>> server.
>>
>> http://wiki.apache.org/solr/Solrj#CommonsHttpSolrServer
>>
>> But there are other scenarios which may cause this situation:
>>
>> 1. Other application running in the same Solr JVM which doesn't close
>> properly sockets or control file handlers.
>> 2. Open files limits configuration is low . Check your limits, read it
>> from
>> JVM process info:
>> cat /proc/1234/limits (where 1234 is your process ID)
>>
>> Cheers,
>> Michel Bottan
>>
>>
>> On Mon, Jun 28, 2010 at 1:18 PM, Erick Erickson > >wrote:
>>
>> > This probably means you're opening new readers without closing
>> > old ones. But that's just a guess. I'm guessing that this really
>> > has nothing to do with the delete itself, but the delete is what's
>> > finally pushing you over the limit.
>> >
>> > I know this has been discussed before, try searching the mail
>> > archive for TooManyOpenFiles and/or File Handles
>> >
>> > You could get much better information by providing more details, see:
>> >
>> >
>> >
>> > http://wiki.apache.org/solr/UsingMailingLists?highlight=(most)|(users)|(list)
>> >
>> > Best
>> > Erick
>> >
>> > On Mon, Jun 28, 2010 at 11:56 AM, Anderson vasconcelos <
>> > anderson.v...@gmail.com> wrote:
>> >
>> > > Hi all
>> > > When i send a delete query to SOLR, using the SOLRJ i received this
>> > > exception:
>> > >
>> > > org.apache.solr.client.solrj.SolrServerException:
>> > java.net.SocketException:
>> > > Too many open files
>> > > 11:53:06,964 INFO  [HttpMethodDirector] I/O exception
>> > > (java.net.SocketException) caught when processing request: Too many
>> open
>> > > files
>> > >
>> > > Anyone could Help me? How i can solve this?
>> > >
>> > > Thanks
>> > >
>> >
>>
>
>


Re: Too Many Open Files

2010-06-28 Thread Anderson vasconcelos
Thanks for the responses.
I instantiate one instance per request (per delete query, in my case).
I have a lot of concurrent processes. If I reuse the same instance (to send,
delete and remove data) in Solr, will I have trouble?
My concern is that if I do this, Solr will commit documents with data from
another transaction.

Thanks




2010/6/28 Michel Bottan 

> Hi Anderson,
>
> If you are using SolrJ, it's recommended to reuse the same instance per
> solr
> server.
>
> http://wiki.apache.org/solr/Solrj#CommonsHttpSolrServer
>
> But there are other scenarios which may cause this situation:
>
> 1. Other application running in the same Solr JVM which doesn't close
> properly sockets or control file handlers.
> 2. Open files limits configuration is low . Check your limits, read it from
> JVM process info:
> cat /proc/1234/limits (where 1234 is your process ID)
>
> Cheers,
> Michel Bottan
>
>
> On Mon, Jun 28, 2010 at 1:18 PM, Erick Erickson  >wrote:
>
> > This probably means you're opening new readers without closing
> > old ones. But that's just a guess. I'm guessing that this really
> > has nothing to do with the delete itself, but the delete is what's
> > finally pushing you over the limit.
> >
> > I know this has been discussed before, try searching the mail
> > archive for TooManyOpenFiles and/or File Handles
> >
> > You could get much better information by providing more details, see:
> >
> >
> >
> > http://wiki.apache.org/solr/UsingMailingLists?highlight=(most)|(users)|(list)
> >
> > Best
> > Erick
> >
> > On Mon, Jun 28, 2010 at 11:56 AM, Anderson vasconcelos <
> > anderson.v...@gmail.com> wrote:
> >
> > > Hi all
> > > When i send a delete query to SOLR, using the SOLRJ i received this
> > > exception:
> > >
> > > org.apache.solr.client.solrj.SolrServerException:
> > java.net.SocketException:
> > > Too many open files
> > > 11:53:06,964 INFO  [HttpMethodDirector] I/O exception
> > > (java.net.SocketException) caught when processing request: Too many
> open
> > > files
> > >
> > > Anyone could Help me? How i can solve this?
> > >
> > > Thanks
> > >
> >
>
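
For what it's worth, here is a minimal sketch of the shared-instance pattern being discussed (SolrJ 3.x API assumed). Solr has no per-connection transactions: a commit makes every pending document visible regardless of which client sent it, so sharing one server object does not change the commit semantics, it only stops the socket churn. Keeping the commit explicit and issuing it once at the end keeps that control in one place:

import java.net.MalformedURLException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

// One shared CommonsHttpSolrServer used by every worker thread; deletes and
// adds reuse the pooled HTTP connections instead of opening new sockets.
public class SharedSolrUpdater {
    private static final SolrServer SERVER = create("http://localhost:8080/solr/core0");

    private static SolrServer create(String url) {
        try {
            return new CommonsHttpSolrServer(url);
        } catch (MalformedURLException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 100; i++) {
            final String id = Integer.toString(i);
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        SERVER.deleteByQuery("id:" + id);   // per-request delete
                        SolrInputDocument doc = new SolrInputDocument();
                        doc.addField("id", id);
                        SERVER.add(doc);                    // re-add the replacement doc
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        SERVER.commit();  // index-wide, issued once when the whole batch is done
    }
}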


Re: Too Many Open Files

2010-06-28 Thread Michel Bottan
Hi Anderson,

If you are using SolrJ, it's recommended to reuse the same instance per solr
server.

http://wiki.apache.org/solr/Solrj#CommonsHttpSolrServer

But there are other scenarios which may cause this situation:

1. Another application running in the same JVM as Solr which doesn't properly
close sockets or file handles.
2. The open files limit configuration is low. Check your limits; read them from
the JVM process info:
cat /proc/1234/limits (where 1234 is your process ID)

Cheers,
Michel Bottan


On Mon, Jun 28, 2010 at 1:18 PM, Erick Erickson wrote:

> This probably means you're opening new readers without closing
> old ones. But that's just a guess. I'm guessing that this really
> has nothing to do with the delete itself, but the delete is what's
> finally pushing you over the limit.
>
> I know this has been discussed before, try searching the mail
> archive for TooManyOpenFiles and/or File Handles
>
> You could get much better information by providing more details, see:
>
>
> http://wiki.apache.org/solr/UsingMailingLists?highlight=(most)|(users)|(list)
>
> Best
> Erick
>
> On Mon, Jun 28, 2010 at 11:56 AM, Anderson vasconcelos <
> anderson.v...@gmail.com> wrote:
>
> > Hi all
> > When i send a delete query to SOLR, using the SOLRJ i received this
> > exception:
> >
> > org.apache.solr.client.solrj.SolrServerException:
> java.net.SocketException:
> > Too many open files
> > 11:53:06,964 INFO  [HttpMethodDirector] I/O exception
> > (java.net.SocketException) caught when processing request: Too many open
> > files
> >
> > Anyone could Help me? How i can solve this?
> >
> > Thanks
> >
>


Re: Too Many Open Files

2010-06-28 Thread Erick Erickson
This probably means you're opening new readers without closing
old ones. But that's just a guess. I'm guessing that this really
has nothing to do with the delete itself, but the delete is what's
finally pushing you over the limit.

I know this has been discussed before, try searching the mail
archive for TooManyOpenFiles and/or File Handles

You could get much better information by providing more details, see:

http://wiki.apache.org/solr/UsingMailingLists?highlight=(most)|(users)|(list)

Best
Erick

On Mon, Jun 28, 2010 at 11:56 AM, Anderson vasconcelos <
anderson.v...@gmail.com> wrote:

> Hi all
> When i send a delete query to SOLR, using the SOLRJ i received this
> exception:
>
> org.apache.solr.client.solrj.SolrServerException: java.net.SocketException:
> Too many open files
> 11:53:06,964 INFO  [HttpMethodDirector] I/O exception
> (java.net.SocketException) caught when processing request: Too many open
> files
>
> Anyone could Help me? How i can solve this?
>
> Thanks
>


Too Many Open Files

2010-06-28 Thread Anderson vasconcelos
Hi all
When I send a delete query to Solr using SolrJ, I receive this
exception:

org.apache.solr.client.solrj.SolrServerException: java.net.SocketException:
Too many open files
11:53:06,964 INFO  [HttpMethodDirector] I/O exception
(java.net.SocketException) caught when processing request: Too many open
files

Could anyone help me? How can I solve this?

Thanks


Re: Healthcheck. Too many open files

2010-04-12 Thread Blargy


Tim Underwood wrote:
> 
> Have you tried hitting /admin/ping (which handles checking for the
> existence of your health file) instead of
> /admin/file?file=healthcheck.txt?
> 

Ok this is what I was looking for. I was wondering if the way I was doing it
was the preferred way or not.

I didn't even realize that the response being sent back from the admin/ping
request was an error until I checked it out using curl... everything looked
fine using Firefox.

Thanks

-- 
View this message in context: 
http://n3.nabble.com/Healthcheck-Too-many-open-files-tp710631p715070.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Healthcheck. Too many open files

2010-04-12 Thread Tim Underwood
I'm using HAProxy with 5 second healthcheck intervals and haven't seen
any problems on Solr 1.4.

My HAProxy config looks like this:

listen solr :5083
  option httpchk GET /solr/parts/admin/ping HTTP/1.1\r\nHost:\ www
  server solr01 192.168.0.101:9983 check inter 5000
  server solr02 192.168.0.102:9983 check inter 5000

Have you tried hitting /admin/ping (which handles checking for the
existence of your health file) instead of
/admin/file?file=healthcheck.txt?

-Tim

On Sat, Apr 10, 2010 at 9:26 PM, Blargy  wrote:
>
> Lance,
>
> We have thousands of searches per minute, so a minute of downtime is out
> of the question. If for whatever reason one of our solr slaves goes down I
> want to remove it ASAP from the loadbalancers rotation, hence the 2 second
> check.
>
> Maybe I am doing something wrong, but my HAProxy healthcheck is as
> follows:
> ...
> option  httpchk GET /solr/items/admin/file?file=healthcheck.txt
> ...
> so basically I am requesting that file to determine if that particular slave
> is up or not. Is this the preferred way of doing this? I kind of like the
> "Enable/Disable" feature of this healthcheck file.
>
> You mentioned:
>
> "It should not run out of file descriptors from doing this. The code
> does a 'new File(healthcheck file name).exists()' and throws away the
> descriptor. This should not be a resource leak for file desciptors."
>
> yet if i run the following on the command line:
> # lsof -p 
> Where xxx is the pid of the solr, I get the following output:
>
> ...
> java    4408 root  220r   REG               8,17  56085252  817639
> /var/solr/home/items/data/index/_4y.tvx
> java    4408 root  221r   REG               8,17  10499759  817645
> /var/solr/home/items/data/index/_4y.tvd
> java    4408 root  222r   REG               8,17 296791079  817647
> /var/solr/home/items/data/index/_4y.tvf
> java    4408 root  223r   REG               8,17   7010660  817648
> /var/solr/home/items/data/index/_4y.nrm
> java    4408 root  224r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  225r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  226r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  227r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  228r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  229r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  230r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  231r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> ... and it keeps going 
>
> and I've see it as high as 3000. I've had to update my ulimit to 1 to
> overcome this problem however I feel this is really just a bandaid to a
> deeper problem.
>
> Am I doing something wrong (Solr or HAProxy) or is this a possible resource
> leak?
>
> Thanks for any input!
> --
> View this message in context: 
> http://n3.nabble.com/Healthcheck-Too-many-open-files-tp710631p711141.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: Healthcheck. Too many open files

2010-04-12 Thread Mark Miller

On 04/11/2010 10:12 PM, Blargy wrote:

Mark,

Cool. I didn't think that was the expected behavior. Will you guys at Lucid
be rolling this patch into your 1.4 distribution?
   


I don't know the release plans, but I'm sure this patch would be 
included in the next release.



As per your 1.5 comment, do you think 1.5 trunk is stable enough for
production or should I just be keeping an eye on it.


I think 1.5 is fairly stable, but you may want to give it a bit of time 
- some recent changes that just went in could use a little time to bake 
- the large example being flexible indexing in Lucene. Solr Cloud is 
also almost ready to commit, but I'm not sure when I will get to that 
last bit of cleanup it needs.



I know its never really
known, but do you happen to know an approximate release date of 1.5 (summer,
winter, 2011)?
   


Well, like I said, flexible indexing will need some time to bake - and 
there is still a lot of activity coming around the Lucene/Solr merge - 
so while these things are impossible to guess - I wouldn't at all be 
surprised if it was towards the end of the year/2011.



Thanks for the patch!
   



--
- Mark

http://www.lucidimagination.com





Re: Healthcheck. Too many open files

2010-04-11 Thread Blargy

Mark, 

Cool. I didn't think that was the expected behavior. Will you guys at Lucid
be rolling this patch into your 1.4 distribution? 

As per your 1.5 comment, do you think 1.5 trunk is stable enough for
production, or should I just be keeping an eye on it? I know it's never really
known, but do you happen to know an approximate release date for 1.5 (summer,
winter, 2011)?

Thanks for the patch!
-- 
View this message in context: 
http://n3.nabble.com/Healthcheck-Too-many-open-files-tp710631p712587.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Healthcheck. Too many open files

2010-04-11 Thread Mark Miller

On 04/11/2010 12:26 AM, Blargy wrote:

...
option  httpchk GET /solr/items/admin/file?file=healthcheck.txt
...
so basically I am requesting that file to determine if that particular slave
is up or not. Is this the preferred way of doing this? I kind of like the
"Enable/Disable" feature of this healthcheck file.
   


I likely fixed this bug a short while ago: 
https://issues.apache.org/jira/browse/SOLR-1748


You might be interested in the new Solr Cloud stuff coming to trunk - 
its got some nice load balancing features.


--
- Mark

http://www.lucidimagination.com
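
For anyone tracking down the same symptom, this is the general shape of that kind of leak and its fix, as an illustrative sketch only (this is not the actual SOLR-1748 patch or Solr source):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative only: how serving a file per request can leak descriptors,
// and the usual fix of closing the stream in a finally block.
public class FileServingSketch {

    // Leaky version: the stream is never closed, so every healthcheck request
    // leaves one open descriptor behind (the pattern lsof showed above).
    static void serveLeaky(File f, OutputStream out) throws IOException {
        InputStream in = new FileInputStream(f);
        copy(in, out);
    }

    // Fixed version: the descriptor is released even if copying fails.
    static void serveSafely(File f, OutputStream out) throws IOException {
        InputStream in = new FileInputStream(f);
        try {
            copy(in, out);
        } finally {
            in.close();
        }
    }

    private static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }
}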





Re: Healthcheck. Too many open files

2010-04-11 Thread Blargy

Taking the HAProxy out of the picture I still see the same results if I hit
my solr instance:
http://localhost:8983/solr/items/admin/file?file=healthcheck.txt from my
browser

..
java4729 root   48u   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
java4729 root   49r   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
java4729 root   51r   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
... (1 row for every request I make)

The number of threads remains static regardless of how many times I request
the file: http://localhost:8983/solr/items/admin/threads


18
19
15


Same goes for number of sockets: 

# netstat -an | wc -l
120


-- 
View this message in context: 
http://n3.nabble.com/Healthcheck-Too-many-open-files-tp710631p712439.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Healthcheck. Too many open files

2010-04-11 Thread Lance Norskog
What is the client for these requests? Do they all go in on the same
socket or do they use separate sockets?

If they are a SolrJ program, and you say 'new CommonsHttpSolrServer'
for each request, each request goes in on a new socket. This creates a
new thread for each request, and the old threads don't die until the
client socket times out. I haven't looked at the file-fetching request
handler,

Check the number of threads in the server. Also look at the sockets
with the Unix/Windows program 'netstat -an'.

It sounds like the load balancer does not have a 'retry request on
another server in the pool' option. This is the core system
architecture problem, if you want this level of uptime.

On 4/10/10, Blargy  wrote:
>
> Lance,
>
> We have thousands of searches per minute, so a minute of downtime is out
> of the question. If for whatever reason one of our solr slaves goes down I
> want to remove it ASAP from the loadbalancers rotation, hence the 2 second
> check.
>
> Maybe I am doing something wrong, but my HAProxy healthcheck is as
> follows:
> ...
> option  httpchk GET /solr/items/admin/file?file=healthcheck.txt
> ...
> so basically I am requesting that file to determine if that particular slave
> is up or not. Is this the preferred way of doing this? I kind of like the
> "Enable/Disable" feature of this healthcheck file.
>
> You mentioned:
>
> "It should not run out of file descriptors from doing this. The code
> does a 'new File(healthcheck file name).exists()' and throws away the
> descriptor. This should not be a resource leak for file desciptors."
>
> yet if i run the following on the command line:
> # lsof -p 
> Where xxx is the pid of the solr, I get the following output:
>
> ...
> java    4408 root  220r   REG   8,17  56085252  817639
> /var/solr/home/items/data/index/_4y.tvx
> java    4408 root  221r   REG   8,17  10499759  817645
> /var/solr/home/items/data/index/_4y.tvd
> java    4408 root  222r   REG   8,17 296791079  817647
> /var/solr/home/items/data/index/_4y.tvf
> java    4408 root  223r   REG   8,17   7010660  817648
> /var/solr/home/items/data/index/_4y.nrm
> java    4408 root  224r   REG   8,17 0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  225r   REG   8,17 0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  226r   REG   8,17 0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  227r   REG   8,17 0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  228r   REG   8,17 0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  229r   REG   8,17 0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  230r   REG   8,17 0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  231r   REG   8,17 0  817622
> /var/solr/home/items/conf/healthcheck.txt
> ... and it keeps going 
>
> and I've seen it as high as 3000. I've had to update my ulimit to 1 to
> overcome this problem however I feel this is really just a bandaid to a
> deeper problem.
>
> Am I doing something wrong (Solr or HAProxy) or is this a possible resource
> leak?
>
> Thanks for any input!
> --
> View this message in context:
> http://n3.nabble.com/Healthcheck-Too-many-open-files-tp710631p711141.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


-- 
Lance Norskog
goks...@gmail.com


Re: Healthcheck. Too many open files

2010-04-10 Thread Blargy

Lance,

We have thousands of searches per minute so a minute of downtime is out
of the question. If for whatever reason one of our solr slaves goes down I
want to remove it ASAP from the loadbalancers rotation, hence the 2 second
check.

Maybe I am doing something wrong but my HAProxy healthcheck is as
follows: 
...
option  httpchk GET /solr/items/admin/file?file=healthcheck.txt
...
so basically I am requesting that file to determine if that particular slave
is up or not. Is this the preferred way of doing this? I kind of like the
"Enable/Disable" feature of this healthcheck file.

You mentioned:

"It should not run out of file descriptors from doing this. The code
does a 'new File(healthcheck file name).exists()' and throws away the
descriptor. This should not be a resource leak for file descriptors."

yet if i run the following on the command line:
# lsof -p  
Where xxx is the pid of the solr, I get the following output: 

...
java    4408 root  220r   REG   8,17  56085252  817639
/var/solr/home/items/data/index/_4y.tvx
java    4408 root  221r   REG   8,17  10499759  817645
/var/solr/home/items/data/index/_4y.tvd
java    4408 root  222r   REG   8,17 296791079  817647
/var/solr/home/items/data/index/_4y.tvf
java    4408 root  223r   REG   8,17   7010660  817648
/var/solr/home/items/data/index/_4y.nrm
java    4408 root  224r   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
java    4408 root  225r   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
java    4408 root  226r   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
java    4408 root  227r   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
java    4408 root  228r   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
java    4408 root  229r   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
java    4408 root  230r   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
java    4408 root  231r   REG   8,17 0  817622
/var/solr/home/items/conf/healthcheck.txt
... and it keeps going 

and I've seen it as high as 3000. I've had to update my ulimit to 1 to
overcome this problem however I feel this is really just a bandaid to a
deeper problem. 
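
(As an aside: if you want to watch descriptor usage from inside the JVM
rather than via lsof, here is a rough sketch. It assumes Linux, where
/proc/self/fd lists the open descriptors of the current process, so it only
sees Solr's descriptors when it runs inside the same JVM, e.g. from a
custom handler.)

import java.io.File;

public class FdMonitor {

    /** Returns the number of file descriptors currently open in this JVM. */
    public static int openDescriptors() {
        // Each entry under /proc/self/fd is one open descriptor of this process.
        String[] fds = new File("/proc/self/fd").list();
        return fds == null ? -1 : fds.length;
    }

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            System.out.println("open descriptors: " + openDescriptors());
            Thread.sleep(5000);
        }
    }
}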

Am I doing something wrong (Solr or HAProxy) or is this a possible resource
leak?

Thanks for any input! 
-- 
View this message in context: 
http://n3.nabble.com/Healthcheck-Too-many-open-files-tp710631p711141.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Healthcheck. Too many open files

2010-04-10 Thread Lance Norskog
Grrr... a pox on gmail ajax mode. It told me these did not go out.

A resource leak can be held open by a memory leak. Ruben Laguna just posted
this on Lucene's java-dev and I've paraphrased it:

Take a memory snapshot with JConsole -> dumpHeap [1] and then analyze
it with Eclipse MAT [2]. Find the biggest objects and look at their
path to GC roots to see if Solr is actually retaining them.

[1] http://java.sun.com/developer/technicalArticles/J2SE/monitoring/
[2] http://www.eclipse.org/mat/
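
(If JConsole isn't handy, the same heap dump can usually be triggered
programmatically; a sketch that assumes a Sun/Oracle JDK exposing the
HotSpotDiagnostic MBean:)

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void dump(String outputFile) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                mbs, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // 'true' dumps only live (reachable) objects, which is what you want
        // when looking for retained references.
        diag.dumpHeap(outputFile, true);
    }
}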

On 4/10/10, Lance Norskog  wrote:
> Two different points:
> Checking once a minute should be sufficient. Also, when I did this
> instead of pulling a file or doing the 'ping' feature, I did a search
> of a non-existent wildcard field "bogus_s:test". The point being to
> make sure that the Lucene part could actually talk to its index.
>
> It should not run out of file descriptors from doing this. The code
> does a 'new File(healthcheck file name).exists()' and throws away the
> descriptor. This should not be a resource leak for file descriptors.
>
> On Sat, Apr 10, 2010 at 12:36 PM, Blargy  wrote:
>>
>> I have my loadbalancer (HAProxy) configured to check Solr for a
>> healthcheck
>> file every 2 seconds.
>>
>>  
>>    solr
>>    solr/conf/healthcheck.txt
>>  
>>
>> However it keeps marking my slaves as down and I am seeing this error:
>>
>> Apr 10, 2010 12:29:20 PM org.apache.solr.core.SolrCore execute
>> INFO: [items] webapp=/solr path=/admin/file params={file=healthcheck.txt}
>> status=0 QTime=0
>> Apr 10, 2010 12:29:20 PM org.apache.solr.common.SolrException log
>> SEVERE: java.io.FileNotFoundException:
>> /var/solr/home/items/conf/healthcheck.txt (Too many open files)
>>        at java.io.FileInputStream.open(Native Method)
>>        at java.io.FileInputStream.(FileInputStream.java:137)
>>        at java.io.FileReader.(FileReader.java:72)
>>        at
>> org.apache.solr.common.util.ContentStreamBase$FileStream.getReader(ContentStreamBase.java:118)
>>        at
>> org.apache.solr.request.RawResponseWriter.write(RawResponseWriter.java:83)
>>
>> Obviously solr is keeping too many files open, but how can I solve this
>> problem so I can use this file as my healthcheck?
>>
>> Thanks
>>
>>
>>
>> --
>> View this message in context:
>> http://n3.nabble.com/Healthcheck-Too-many-open-files-tp710631p710631.html
>> Sent from the Solr - User mailing list archive at Nabble.com.
>>
>
>
>
> --
> Lance Norskog
> goks...@gmail.com
>


-- 
Lance Norskog
goks...@gmail.com


Re: Healthcheck. Too many open files

2010-04-10 Thread Lance Norskog
Two different points:
Checking once a minute should be sufficient. Also, when I did this
instead of pulling a file or doing the 'ping' feature, I did a search
of a non-existent wildcard field "bogus_s:test". The point being to
make sure that the Lucene part could actually talk to its index.

It should not run out of file descriptors from doing this. The code
does a 'new File(healthcheck file name).exists()' and throws away the
descriptor. This should not be a resource leak for file descriptors.

On Sat, Apr 10, 2010 at 12:36 PM, Blargy  wrote:
>
> I have my loadbalancer (HAProxy) configured to check Solr for a healthcheck
> file every 2 seconds.
>
>  
>    solr
>    solr/conf/healthcheck.txt
>  
>
> However it keeps marking my slaves as down and I am seeing this error:
>
> Apr 10, 2010 12:29:20 PM org.apache.solr.core.SolrCore execute
> INFO: [items] webapp=/solr path=/admin/file params={file=healthcheck.txt}
> status=0 QTime=0
> Apr 10, 2010 12:29:20 PM org.apache.solr.common.SolrException log
> SEVERE: java.io.FileNotFoundException:
> /var/solr/home/items/conf/healthcheck.txt (Too many open files)
>        at java.io.FileInputStream.open(Native Method)
>        at java.io.FileInputStream.(FileInputStream.java:137)
>        at java.io.FileReader.(FileReader.java:72)
>        at
> org.apache.solr.common.util.ContentStreamBase$FileStream.getReader(ContentStreamBase.java:118)
>        at
> org.apache.solr.request.RawResponseWriter.write(RawResponseWriter.java:83)
>
> Obviously solr is keeping too many files open, but how can I solve this
> problem so I can use this file as my healthcheck?
>
> Thanks
>
>
>
> --
> View this message in context: 
> http://n3.nabble.com/Healthcheck-Too-many-open-files-tp710631p710631.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>



-- 
Lance Norskog
goks...@gmail.com


Healthcheck. Too many open files

2010-04-10 Thread Blargy

I have my loadbalancer (HAProxy) configured to check Solr for a healthcheck
file every 2 seconds.

 
solr
solr/conf/healthcheck.txt
 

However it keeps marking my slaves as down and I am seeing this error:

Apr 10, 2010 12:29:20 PM org.apache.solr.core.SolrCore execute
INFO: [items] webapp=/solr path=/admin/file params={file=healthcheck.txt}
status=0 QTime=0 
Apr 10, 2010 12:29:20 PM org.apache.solr.common.SolrException log
SEVERE: java.io.FileNotFoundException:
/var/solr/home/items/conf/healthcheck.txt (Too many open files)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.(FileInputStream.java:137)
at java.io.FileReader.(FileReader.java:72)
at
org.apache.solr.common.util.ContentStreamBase$FileStream.getReader(ContentStreamBase.java:118)
at
org.apache.solr.request.RawResponseWriter.write(RawResponseWriter.java:83)

Obviously solr is keeping too many files open, but how can I solve this
problem so I can use this file as my healthcheck?

Thanks



-- 
View this message in context: 
http://n3.nabble.com/Healthcheck-Too-many-open-files-tp710631p710631.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Too many open files

2009-10-24 Thread Fuad Efendi

> If you had gone over 2GB of actual buffer *usage*, it would have
> broke...  Guaranteed.
> We've now added a check in Lucene 2.9.1 that will throw an exception
> if you try to go over 2048MB.
> And as the javadoc says, to be on the safe side, you probably
> shouldn't go too near 2048 - perhaps 2000MB is a good practical limit.
> 


I browsed http://issues.apache.org/jira/browse/LUCENE-1995 and
http://search.lucidimagination.com/search/document/f29fc52348ab9b63/arrayindexoutofboundsexception_during_indexing
- it is not a proof of concept. It is a workaround. The problem still exists, and
the scenario is unclear.


-Fuad
http://www.linkedin.com/in/liferay






RE: Too many open files

2009-10-24 Thread Fuad Efendi

> > when you store raw (non
> > tokenized, non indexed) "text" value with a document (which almost
everyone
> > does). Try to store 1,000,000 documents with 1000 bytes non-tokenized
field:
> > you will need 1Gb just for this array.
> 
> Nope.  You shouldn't even need 1GB of buffer space for that.
> The size specified is for all things that the indexing process needs
> to temporarily keep in memory... stored fields are normally
> immediately written to disk.
> 
> -Yonik
> http://www.lucidimagination.com


-Ok, thanks for the clarification! What about term vectors, and what about a
non-trivial schema having 10 tokenized fields? The buffer will need 10 arrays
(up to 2048M each) for that.
My understanding is probably very naive...


-Fuad
http://www.linkedin.com/in/liferay






RE: Too many open files

2009-10-24 Thread Fuad Efendi

Hi Yonik,


I am still using pre-2.9 Lucene (taken from SOLR trunk two months ago).

2048 is the limit for documents, not for the array of pointers to documents. And
especially for the new "uninverted" SOLR features, plus non-tokenized stored
fields, we need 1Gb just to store 1M values of a simple field (size of field: 1000
bytes).

Maybe it would break... frankly, I started with 8Gb, then for some reason I
set it to 2Gb (a month ago), I don't remember why... I had hardware problems
and I didn't want to lose the RAM buffer frequently...


But again: why would it break? Because "int" has 2048M different values?!

This is extremely strange. My understanding is that the "buffer" stores
processed data such as "term -> document_id" values, _per_field_array(s!!!);
so 2048M is the _absolute_maximum_ only in case your SOLR schema consists
of a _single_tokenized_field_only_. What about 10 fields? What about plain
text stored with the document, term vectors, "uninverted" values??? What is
the reason for putting such a check in Lucene? Array overflow?


-Fuad
http://www.linkedin.com/in/liferay



> -Original Message-
> From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik
Seeley
> Sent: October-24-09 12:27 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Too many open files
> 
> On Sat, Oct 24, 2009 at 12:18 PM, Fuad Efendi  wrote:
> >
> > Mark, I don't understand this; of course it is use case specific, I
haven't
> > seen any terrible behaviour with 8Gb
> 
> If you had gone over 2GB of actual buffer *usage*, it would have
> broke...  Guaranteed.
> We've now added a check in Lucene 2.9.1 that will throw an exception
> if you try to go over 2048MB.
> And as the javadoc says, to be on the safe side, you probably
> shouldn't go too near 2048 - perhaps 2000MB is a good practical limit.
> 
> -Yonik
> http://www.lucidimagination.com




Re: Too many open files

2009-10-24 Thread Yonik Seeley
On Sat, Oct 24, 2009 at 12:25 PM, Fuad Efendi  wrote:
> This JavaDoc is incorrect especially for SOLR,

It looks correct to me... if you think it can be clarified, please
propose how you would change it.

> when you store raw (non
> tokenized, non indexed) "text" value with a document (which almost everyone
> does). Try to store 1,000,000 documents with 1000 bytes non-tokenized field:
> you will need 1Gb just for this array.

Nope.  You shouldn't even need 1GB of buffer space for that.
The size specified is for all things that the indexing process needs
to temporarily keep in memory... stored fields are normally
immediately written to disk.

-Yonik
http://www.lucidimagination.com


Re: Too many open files

2009-10-24 Thread Yonik Seeley
On Sat, Oct 24, 2009 at 12:18 PM, Fuad Efendi  wrote:
>
> Mark, I don't understand this; of course it is use case specific, I haven't
> seen any terrible behaviour with 8Gb

If you had gone over 2GB of actual buffer *usage*, it would have
broke...  Guaranteed.
We've now added a check in Lucene 2.9.1 that will throw an exception
if you try to go over 2048MB.
And as the javadoc says, to be on the safe side, you probably
shouldn't go too near 2048 - perhaps 2000MB is a good practical limit.

-Yonik
http://www.lucidimagination.com


RE: Too many open files

2009-10-24 Thread Fuad Efendi
This JavaDoc is incorrect, especially for SOLR, when you store a raw
(non-tokenized, non-indexed) "text" value with a document (which almost everyone
does). Try to store 1,000,000 documents with a 1000-byte non-tokenized field:
you will need 1Gb just for this array.


> -Original Message-
> From: Fuad Efendi [mailto:f...@efendi.ca]
> Sent: October-24-09 12:10 PM
> To: solr-user@lucene.apache.org
> Subject: RE: Too many open files
> 
> Thanks for pointing to it, but it is so obvious:
> 
> 1. "Buffer" is used as a RAM storage for index updates
> 2. "int" has 2 x Gb different values (2^^32)
> 3. We can have _up_to_ 2Gb of _Documents_ (stored as key->value pairs,
> inverted index)
> 
> In case of 5 fields which I have, I need 5 arrays (up to 2Gb of size for
> each) to store inverted pointers, so that there is no any theoretical
limit:
> 
> > Also, from the javadoc in IndexWriter:
> >
> >*  NOTE: because IndexWriter uses
> >* ints when managing its internal storage,
> >* the absolute maximum value for this setting is somewhat
> >* less than 2048 MB.  The precise limit depends on
> >* various factors, such as how large your documents are,
> >* how many fields have norms, etc., so it's best to set
> >* this value comfortably under 2048.
> 
> 
> 
> Note also, I use norms etc...
> 
> 





RE: Too many open files

2009-10-24 Thread Fuad Efendi

Mark, I don't understand this; of course it is use case specific, and I haven't
seen any terrible behaviour with 8Gb... 32Mb is extremely small for
Nutch-SOLR-like applications, but it is acceptable for Liferay-SOLR...

Please note also that I have some documents with the same IDs updated many
thousands of times a day, and I believe (I hope) IndexWriter flushes an
"optimized" segment instead of thousands of "deletes" and a single "insert" in
many small (32Mb) files (especially with SOLR)...


> Hmm - came out worse than it looked. Here is a better attempt:
> 
> MergeFactor: 10
> 
> BUF   DOCS/S
> 32   37.40
> 80   39.91
> 120 40.74
> 512 38.25
> 
> Mark Miller wrote:
> > Here is an example using the Lucene benchmark package. Indexing 64,000
> > wikipedia docs (sorry for the formatting):
> >
> >  [java] > Report sum by Prefix (MAddDocs) and Round (4
> > about 32 out of 256058)
> >  [java] Operation round mrg  flush   runCnt
> > recsPerRunrec/s  elapsedSecavgUsedMemavgTotalMem
> >  [java] MAddDocs_8000 0  10  32.00MB8
> > 800037.401,711.22   124,612,472182,689,792
> >  [java] MAddDocs_8000 -   1  10  80.00MB -  -   8 -  -  - 8000 -
> > -   39.91 -  1,603.76 - 266,716,128 -  469,925,888
> >  [java] MAddDocs_8000 2  10 120.00MB8
> > 800040.741,571.02   348,059,488548,233,216
> >  [java] MAddDocs_8000 -   3  10 512.00MB -  -   8 -  -  - 8000 -
> > -   38.25 -  1,673.05 - 746,087,808 -  926,089,216
> >
> > After about 32-40, you don't gain much, and it starts decreasing once
> > you start getting to high. 8GB is a terrible recommendation.
> >




RE: Too many open files

2009-10-24 Thread Fuad Efendi
Thanks for pointing to it, but it is so obvious:

1. "Buffer" is used as a RAM storage for index updates
2. "int" has 2 x Gb different values (2^^32)
3. We can have _up_to_ 2Gb of _Documents_ (stored as key->value pairs,
inverted index)

In the case of the 5 fields which I have, I need 5 arrays (up to 2Gb in size
each) to store inverted pointers, so there is no single theoretical limit:

> Also, from the javadoc in IndexWriter:
> 
>*  NOTE: because IndexWriter uses
>* ints when managing its internal storage,
>* the absolute maximum value for this setting is somewhat
>* less than 2048 MB.  The precise limit depends on
>* various factors, such as how large your documents are,
>* how many fields have norms, etc., so it's best to set
>* this value comfortably under 2048.



Note also, I use norms etc...
 




RE: Too many open files

2009-10-24 Thread Fuad Efendi

I had an extremely specific use case; about a 5000 documents-per-second (small
documents) update rate, where some documents can be repeatedly sent to SOLR with
a different timestamp field (and the same unique document ID). Nothing breaks,
just a great performance gain which was impossible with the 32Mb buffer (it
caused constant index merging, 5 times more CPU than index updates). Nothing
breaks... with indexMerge=10 I don't have ANY merge during 24 hours;
segments are large (a few of 4Gb-8Gb, and one large "union"); I merge
explicitly only, at night, when I issue "commit".
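
(For reference, that explicit nightly commit/optimize can be as simple as the
following SolrJ sketch; the URL is illustrative:)

import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class NightlyCommit {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer solr =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
        // commit() makes buffered updates visible and flushes the RAM buffer;
        // optimize() forces the big merge, so it is best run off-peak.
        solr.commit();
        solr.optimize();
    }
}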


Of course, it depends on the use case; for applications such as a "Content
Management System" we don't need a high ramBufferSizeMB (a few updates a day
sent to SOLR)...



> -Original Message-
> From: Mark Miller [mailto:markrmil...@gmail.com]
> Sent: October-23-09 5:28 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Too many open files
> 
> 8 GB is much larger than is well supported. Its diminishing returns over
> 40-100 and mostly a waste of RAM. Too high and things can break. It
> should be well below 2 GB at most, but I'd still recommend 40-100.
> 
> Fuad Efendi wrote:
> > Reason of having big RAM buffer is lowering frequency of IndexWriter
flushes
> > and (subsequently) lowering frequency of index merge events, and
> > (subsequently) merging of a few larger files takes less time...
especially
> > if RAM Buffer is intelligent enough (and big enough) to deal with 100
> > concurrent updates of existing document without 100-times flushing to
disk
> > of 100 document versions.
> >
> > I posted here thread related; I had 1:5 timing for Update:Merge (5
minutes
> > merge, and 1 minute update) with default SOLR settings (32Mb buffer). I
> > increased buffer to 8Gb on Master, and it triggered significant indexing
> > performance boost...
> >
> > -Fuad
> > http://www.linkedin.com/in/liferay
> >
> >
> >
> >> -Original Message-
> >> From: Mark Miller [mailto:markrmil...@gmail.com]
> >> Sent: October-23-09 3:03 PM
> >> To: solr-user@lucene.apache.org
> >> Subject: Re: Too many open files
> >>
> >> I wouldn't use a RAM buffer of a gig - 32-100 is generally a good
number.
> >>
> >> Fuad Efendi wrote:
> >>
> >>> I was partially wrong; this is what Mike McCandless
(Lucene-in-Action,
> >>>
> > 2nd
> >
> >>> edition) explained at Manning forum:
> >>>
> >>> mergeFactor of 1000 means you will have up to 1000 segments at each
> >>>
> > level.
> >
> >>> A level 0 segment means it was flushed directly by IndexWriter.
> >>> After you have 1000 such segments, they are merged into a single level
1
> >>> segment.
> >>> Once you have 1000 level 1 segments, they are merged into a single
level
> >>>
> > 2
> >
> >>> segment, etc.
> >>> So, depending on how many docs you add to your index, you'll could
have
> >>> 1000s of segments w/ mergeFactor=1000.
> >>>
> >>> http://www.manning-sandbox.com/thread.jspa?threadID=33784&tstart=0
> >>>
> >>>
> >>> So, in case of mergeFactor=100 you may have (theoretically) 1000
> >>>
> > segments,
> >
> >>> 10-20 files each (depending on schema)...
> >>>
> >>>
> >>> mergeFactor=10 is default setting... ramBufferSizeMB=1024 means that
you
> >>> need at least double Java heap, but you have -Xmx1024m...
> >>>
> >>>
> >>> -Fuad
> >>>
> >>>
> >>>
> >>>
> >>>> I am getting too many open files error.
> >>>>
> >>>> Usually I test on a server that has 4GB RAM and assigned 1GB for
> >>>> tomcat(set JAVA_OPTS=-Xms256m -Xmx1024m), ulimit -n is 256 for this
> >>>> server and has following setting for SolrConfig.xml
> >>>>
> >>>>
> >>>>
> >>>> true
> >>>>
> >>>> 1024
> >>>>
> >>>> 100
> >>>>
> >>>> 2147483647
> >>>>
> >>>> 1
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >> --
> >> - Mark
> >>
> >> http://www.lucidimagination.com
> >>
> >>
> >>
> >
> >
> >
> >
> 
> 
> --
> - Mark
> 
> http://www.lucidimagination.com
> 
> 





Re: Too many open files

2009-10-23 Thread Mark Miller
Hmm - came out worse than it looked. Here is a better attempt:

MergeFactor: 10

BUF   DOCS/S
32   37.40
80   39.91
120 40.74
512 38.25

Mark Miller wrote:
> Here is an example using the Lucene benchmark package. Indexing 64,000
> wikipedia docs (sorry for the formatting):
>
>  [java] > Report sum by Prefix (MAddDocs) and Round (4
> about 32 out of 256058)
>  [java] Operation round mrg  flush   runCnt  
> recsPerRunrec/s  elapsedSecavgUsedMemavgTotalMem
>  [java] MAddDocs_8000 0  10  32.00MB8
> 800037.401,711.22   124,612,472182,689,792
>  [java] MAddDocs_8000 -   1  10  80.00MB -  -   8 -  -  - 8000 - 
> -   39.91 -  1,603.76 - 266,716,128 -  469,925,888
>  [java] MAddDocs_8000 2  10 120.00MB8
> 800040.741,571.02   348,059,488548,233,216
>  [java] MAddDocs_8000 -   3  10 512.00MB -  -   8 -  -  - 8000 - 
> -   38.25 -  1,673.05 - 746,087,808 -  926,089,216
>
> After about 32-40, you don't gain much, and it starts decreasing once
> you start getting to high. 8GB is a terrible recommendation.
>
> Also, from the javadoc in IndexWriter:
>
>*  NOTE: because IndexWriter uses
>* ints when managing its internal storage,
>* the absolute maximum value for this setting is somewhat
>* less than 2048 MB.  The precise limit depends on
>* various factors, such as how large your documents are,
>* how many fields have norms, etc., so it's best to set
>* this value comfortably under 2048.
>
> Mark Miller wrote:
>   
>> 8 GB is much larger than is well supported. Its diminishing returns over
>> 40-100 and mostly a waste of RAM. Too high and things can break. It
>> should be well below 2 GB at most, but I'd still recommend 40-100.
>>
>> Fuad Efendi wrote:
>>   
>> 
>>> Reason of having big RAM buffer is lowering frequency of IndexWriter flushes
>>> and (subsequently) lowering frequency of index merge events, and
>>> (subsequently) merging of a few larger files takes less time... especially
>>> if RAM Buffer is intelligent enough (and big enough) to deal with 100
>>> concurrent updates of existing document without 100-times flushing to disk
>>> of 100 document versions.
>>>
>>> I posted here thread related; I had 1:5 timing for Update:Merge (5 minutes
>>> merge, and 1 minute update) with default SOLR settings (32Mb buffer). I
>>> increased buffer to 8Gb on Master, and it triggered significant indexing
>>> performance boost... 
>>>
>>> -Fuad
>>> http://www.linkedin.com/in/liferay
>>>
>>>
>>>   
>>> 
>>>   
>>>> -Original Message-
>>>> From: Mark Miller [mailto:markrmil...@gmail.com]
>>>> Sent: October-23-09 3:03 PM
>>>> To: solr-user@lucene.apache.org
>>>> Subject: Re: Too many open files
>>>>
>>>> I wouldn't use a RAM buffer of a gig - 32-100 is generally a good number.
>>>>
>>>> Fuad Efendi wrote:
>>>> 
>>>>   
>>>> 
>>>>> I was partially wrong; this is what Mike McCandless  (Lucene-in-Action,
>>>>>   
>>>>> 
>>>>>   
>>> 2nd
>>>   
>>> 
>>>   
>>>>> edition) explained at Manning forum:
>>>>>
>>>>> mergeFactor of 1000 means you will have up to 1000 segments at each
>>>>>   
>>>>> 
>>>>>   
>>> level.
>>>   
>>> 
>>>   
>>>>> A level 0 segment means it was flushed directly by IndexWriter.
>>>>> After you have 1000 such segments, they are merged into a single level 1
>>>>> segment.
>>>>> Once you have 1000 level 1 segments, they are merged into a single level
>>>>>   
>>>>> 
>>>>>   
>>> 2
>>>   
>>> 
>>>   
>>>>> segment, etc.
>>>>> So, depending on how many docs you add to your index, you'll could have
>>>>> 1000s of segments w/ mergeFactor=1000.
>>>>>
>>>>> http://www.manning-sandbox.com/thread.jspa?threadID=33784&tstart=0
>>>>>
>>>>>
>>>>> So, in case of mergeFactor=100 you may have (

Re: Too many open files

2009-10-23 Thread Mark Miller
Here is an example using the Lucene benchmark package. Indexing 64,000
wikipedia docs (sorry for the formatting):

 [java] > Report sum by Prefix (MAddDocs) and Round (4 about 32 out of 256058)
 [java] Operation      round  mrg     flush  runCnt  recsPerRun  rec/s  elapsedSec   avgUsedMem  avgTotalMem
 [java] MAddDocs_8000      0   10   32.00MB       8        8000  37.40    1,711.22  124,612,472  182,689,792
 [java] MAddDocs_8000      1   10   80.00MB       8        8000  39.91    1,603.76  266,716,128  469,925,888
 [java] MAddDocs_8000      2   10  120.00MB       8        8000  40.74    1,571.02  348,059,488  548,233,216
 [java] MAddDocs_8000      3   10  512.00MB       8        8000  38.25    1,673.05  746,087,808  926,089,216

After about 32-40, you don't gain much, and it starts decreasing once
you start getting too high. 8GB is a terrible recommendation.

Also, from the javadoc in IndexWriter:

   *  NOTE: because IndexWriter uses
   * ints when managing its internal storage,
   * the absolute maximum value for this setting is somewhat
   * less than 2048 MB.  The precise limit depends on
   * various factors, such as how large your documents are,
   * how many fields have norms, etc., so it's best to set
   * this value comfortably under 2048.
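
(At the Lucene level, the setting under discussion is just
IndexWriter.setRAMBufferSizeMB; here is a minimal sketch against a Lucene
2.4-era API, with 64MB picked only as an example from the 32-100 range
recommended above:)

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;

public class WriterSetup {
    public static IndexWriter open(String indexDir) throws Exception {
        IndexWriter writer = new IndexWriter(
                FSDirectory.getDirectory(indexDir),
                new StandardAnalyzer(),
                IndexWriter.MaxFieldLength.UNLIMITED);
        // Stay comfortably under the 2048MB ceiling mentioned in the javadoc.
        writer.setRAMBufferSizeMB(64);
        writer.setMergeFactor(10);
        return writer;
    }
}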

Mark Miller wrote:
> 8 GB is much larger than is well supported. Its diminishing returns over
> 40-100 and mostly a waste of RAM. Too high and things can break. It
> should be well below 2 GB at most, but I'd still recommend 40-100.
>
> Fuad Efendi wrote:
>   
>> Reason of having big RAM buffer is lowering frequency of IndexWriter flushes
>> and (subsequently) lowering frequency of index merge events, and
>> (subsequently) merging of a few larger files takes less time... especially
>> if RAM Buffer is intelligent enough (and big enough) to deal with 100
>> concurrent updates of existing document without 100-times flushing to disk
>> of 100 document versions.
>>
>> I posted here thread related; I had 1:5 timing for Update:Merge (5 minutes
>> merge, and 1 minute update) with default SOLR settings (32Mb buffer). I
>> increased buffer to 8Gb on Master, and it triggered significant indexing
>> performance boost... 
>>
>> -Fuad
>> http://www.linkedin.com/in/liferay
>>
>>
>>   
>> 
>>> -Original Message-
>>> From: Mark Miller [mailto:markrmil...@gmail.com]
>>> Sent: October-23-09 3:03 PM
>>> To: solr-user@lucene.apache.org
>>> Subject: Re: Too many open files
>>>
>>> I wouldn't use a RAM buffer of a gig - 32-100 is generally a good number.
>>>
>>> Fuad Efendi wrote:
>>> 
>>>   
>>>> I was partially wrong; this is what Mike McCandless  (Lucene-in-Action,
>>>>   
>>>> 
>> 2nd
>>   
>> 
>>>> edition) explained at Manning forum:
>>>>
>>>> mergeFactor of 1000 means you will have up to 1000 segments at each
>>>>   
>>>> 
>> level.
>>   
>> 
>>>> A level 0 segment means it was flushed directly by IndexWriter.
>>>> After you have 1000 such segments, they are merged into a single level 1
>>>> segment.
>>>> Once you have 1000 level 1 segments, they are merged into a single level
>>>>   
>>>> 
>> 2
>>   
>> 
>>>> segment, etc.
>>>> So, depending on how many docs you add to your index, you'll could have
>>>> 1000s of segments w/ mergeFactor=1000.
>>>>
>>>> http://www.manning-sandbox.com/thread.jspa?threadID=33784&tstart=0
>>>>
>>>>
>>>> So, in case of mergeFactor=100 you may have (theoretically) 1000
>>>>   
>>>> 
>> segments,
>>   
>> 
>>>> 10-20 files each (depending on schema)...
>>>>
>>>>
>>>> mergeFactor=10 is default setting... ramBufferSizeMB=1024 means that you
>>>> need at least double Java heap, but you have -Xmx1024m...
>>>>
>>>>
>>>> -Fuad
>>>>
>>>>
>>>>
>>>>   
>>>> 
>>>>> I am getting too many open files error.
>>>>>
>>>>> Usually I test on a server that has 4GB RAM and assigned 1GB for
>>>>> tomcat(set JAVA_OPTS=-Xms256m -Xmx1024m), ulimit -n is 256 for this
>>>>> server and has following setting for SolrConfig.xml
>>>>>
>>>>>
>>>>>
>>>>> true
>>>>>
>>>>> 1024
>>>>>
>>>>> 100
>>>>>
>>>>> 2147483647
>>>>>
>>>>> 1
>>>>>
>>>>>
>>>>> 
>>>>>   
>>>>   
>>>> 
>>> --
>>> - Mark
>>>
>>> http://www.lucidimagination.com
>>>
>>>
>>> 
>>>   
>>
>>   
>> 
>
>
>   


-- 
- Mark

http://www.lucidimagination.com





Re: Too many open files

2009-10-23 Thread Mark Miller
8 GB is much larger than is well supported. It's diminishing returns over
40-100 and mostly a waste of RAM. Too high and things can break. It
should be well below 2 GB at most, but I'd still recommend 40-100.

Fuad Efendi wrote:
> Reason of having big RAM buffer is lowering frequency of IndexWriter flushes
> and (subsequently) lowering frequency of index merge events, and
> (subsequently) merging of a few larger files takes less time... especially
> if RAM Buffer is intelligent enough (and big enough) to deal with 100
> concurrent updates of existing document without 100-times flushing to disk
> of 100 document versions.
>
> I posted here thread related; I had 1:5 timing for Update:Merge (5 minutes
> merge, and 1 minute update) with default SOLR settings (32Mb buffer). I
> increased buffer to 8Gb on Master, and it triggered significant indexing
> performance boost... 
>
> -Fuad
> http://www.linkedin.com/in/liferay
>
>
>   
>> -Original Message-
>> From: Mark Miller [mailto:markrmil...@gmail.com]
>> Sent: October-23-09 3:03 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Too many open files
>>
>> I wouldn't use a RAM buffer of a gig - 32-100 is generally a good number.
>>
>> Fuad Efendi wrote:
>> 
>>> I was partially wrong; this is what Mike McCandless  (Lucene-in-Action,
>>>   
> 2nd
>   
>>> edition) explained at Manning forum:
>>>
>>> mergeFactor of 1000 means you will have up to 1000 segments at each
>>>   
> level.
>   
>>> A level 0 segment means it was flushed directly by IndexWriter.
>>> After you have 1000 such segments, they are merged into a single level 1
>>> segment.
>>> Once you have 1000 level 1 segments, they are merged into a single level
>>>   
> 2
>   
>>> segment, etc.
>>> So, depending on how many docs you add to your index, you'll could have
>>> 1000s of segments w/ mergeFactor=1000.
>>>
>>> http://www.manning-sandbox.com/thread.jspa?threadID=33784&tstart=0
>>>
>>>
>>> So, in case of mergeFactor=100 you may have (theoretically) 1000
>>>   
> segments,
>   
>>> 10-20 files each (depending on schema)...
>>>
>>>
>>> mergeFactor=10 is default setting... ramBufferSizeMB=1024 means that you
>>> need at least double Java heap, but you have -Xmx1024m...
>>>
>>>
>>> -Fuad
>>>
>>>
>>>
>>>   
>>>> I am getting too many open files error.
>>>>
>>>> Usually I test on a server that has 4GB RAM and assigned 1GB for
>>>> tomcat(set JAVA_OPTS=-Xms256m -Xmx1024m), ulimit -n is 256 for this
>>>> server and has following setting for SolrConfig.xml
>>>>
>>>>
>>>>
>>>> true
>>>>
>>>> 1024
>>>>
>>>> 100
>>>>
>>>> 2147483647
>>>>
>>>> 1
>>>>
>>>>
>>>> 
>>>
>>>   
>> --
>> - Mark
>>
>> http://www.lucidimagination.com
>>
>>
>> 
>
>
>
>   


-- 
- Mark

http://www.lucidimagination.com





RE: Too many open files

2009-10-23 Thread Fuad Efendi
The reason for having a big RAM buffer is lowering the frequency of IndexWriter
flushes and (subsequently) lowering the frequency of index merge events, and
(subsequently) merging a few larger files takes less time... especially
if the RAM buffer is intelligent enough (and big enough) to deal with 100
concurrent updates of an existing document without flushing 100 document
versions to disk 100 times.

I posted a related thread here; I had 1:5 timing for Update:Merge (5 minutes
merge, and 1 minute update) with the default SOLR settings (32Mb buffer). I
increased the buffer to 8Gb on the Master, and it triggered a significant
indexing performance boost... 

-Fuad
http://www.linkedin.com/in/liferay


> -Original Message-
> From: Mark Miller [mailto:markrmil...@gmail.com]
> Sent: October-23-09 3:03 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Too many open files
> 
> I wouldn't use a RAM buffer of a gig - 32-100 is generally a good number.
> 
> Fuad Efendi wrote:
> > I was partially wrong; this is what Mike McCandless  (Lucene-in-Action,
2nd
> > edition) explained at Manning forum:
> >
> > mergeFactor of 1000 means you will have up to 1000 segments at each
level.
> > A level 0 segment means it was flushed directly by IndexWriter.
> > After you have 1000 such segments, they are merged into a single level 1
> > segment.
> > Once you have 1000 level 1 segments, they are merged into a single level
2
> > segment, etc.
> > So, depending on how many docs you add to your index, you'll could have
> > 1000s of segments w/ mergeFactor=1000.
> >
> > http://www.manning-sandbox.com/thread.jspa?threadID=33784&tstart=0
> >
> >
> > So, in case of mergeFactor=100 you may have (theoretically) 1000
segments,
> > 10-20 files each (depending on schema)...
> >
> >
> > mergeFactor=10 is default setting... ramBufferSizeMB=1024 means that you
> > need at least double Java heap, but you have -Xmx1024m...
> >
> >
> > -Fuad
> >
> >
> >
> >> I am getting too many open files error.
> >>
> >> Usually I test on a server that has 4GB RAM and assigned 1GB for
> >> tomcat(set JAVA_OPTS=-Xms256m -Xmx1024m), ulimit -n is 256 for this
> >> server and has following setting for SolrConfig.xml
> >>
> >>
> >>
> >> true
> >>
> >> 1024
> >>
> >> 100
> >>
> >> 2147483647
> >>
> >> 1
> >>
> >>
> >
> >
> >
> 
> 
> --
> - Mark
> 
> http://www.lucidimagination.com
> 
> 





Re: Too many open files

2009-10-23 Thread Mark Miller
I wouldn't use a RAM buffer of a gig - 32-100 is generally a good number.

Fuad Efendi wrote:
> I was partially wrong; this is what Mike McCandless  (Lucene-in-Action, 2nd
> edition) explained at Manning forum:
>
> mergeFactor of 1000 means you will have up to 1000 segments at each level.
> A level 0 segment means it was flushed directly by IndexWriter.
> After you have 1000 such segments, they are merged into a single level 1
> segment.
> Once you have 1000 level 1 segments, they are merged into a single level 2
> segment, etc.
> So, depending on how many docs you add to your index, you'll could have
> 1000s of segments w/ mergeFactor=1000.
>
> http://www.manning-sandbox.com/thread.jspa?threadID=33784&tstart=0
>
>
> So, in case of mergeFactor=100 you may have (theoretically) 1000 segments,
> 10-20 files each (depending on schema)...
>
>
> mergeFactor=10 is default setting... ramBufferSizeMB=1024 means that you
> need at least double Java heap, but you have -Xmx1024m...
>
>
> -Fuad
>
>
>   
>> I am getting too many open files error.
>>
>> Usually I test on a server that has 4GB RAM and assigned 1GB for
>> tomcat(set JAVA_OPTS=-Xms256m -Xmx1024m), ulimit -n is 256 for this
>> server and has following setting for SolrConfig.xml
>>
>>
>>
>> true
>>
>> 1024
>>
>> 100
>>
>> 2147483647
>>
>> 1
>>
>> 
>
>
>   


-- 
- Mark

http://www.lucidimagination.com





RE: Too many open files

2009-10-23 Thread Fuad Efendi
I was partially wrong; this is what Mike McCandless  (Lucene-in-Action, 2nd
edition) explained at Manning forum:

mergeFactor of 1000 means you will have up to 1000 segments at each level.
A level 0 segment means it was flushed directly by IndexWriter.
After you have 1000 such segments, they are merged into a single level 1
segment.
Once you have 1000 level 1 segments, they are merged into a single level 2
segment, etc.
So, depending on how many docs you add to your index, you could have
1000s of segments w/ mergeFactor=1000.

http://www.manning-sandbox.com/thread.jspa?threadID=33784&tstart=0


So, in case of mergeFactor=100 you may have (theoretically) 1000 segments,
10-20 files each (depending on schema)...
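
(A back-of-the-envelope sketch of that arithmetic; the number of merge levels
and the files-per-segment count below are assumptions that depend on the
schema, the compound-file setting, etc.:)

public class FdEstimate {
    public static void main(String[] args) {
        int mergeFactor = 100;     // value discussed above
        int levels = 3;            // assumed depth of merge levels
        int filesPerSegment = 15;  // assumed; roughly 10-20 per segment

        // Up to ~mergeFactor segments can accumulate at each level before
        // they are merged up into the next one.
        int worstCaseSegments = mergeFactor * levels;
        System.out.println("segments: " + worstCaseSegments
                + ", open files while merging: "
                + worstCaseSegments * filesPerSegment);
    }
}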


mergeFactor=10 is the default setting... ramBufferSizeMB=1024 means that you
need at least double the Java heap, but you have -Xmx1024m...


-Fuad


> 
> I am getting too many open files error.
> 
> Usually I test on a server that has 4GB RAM and assigned 1GB for
> tomcat(set JAVA_OPTS=-Xms256m -Xmx1024m), ulimit -n is 256 for this
> server and has following setting for SolrConfig.xml
> 
> 
> 
> true
> 
> 1024
> 
> 100
> 
> 2147483647
> 
> 1
> 




RE: Too many open files

2009-10-23 Thread Fuad Efendi
> 1024

Ok, it will lower the frequency of buffer flushes to disk (a buffer flush happens
when it reaches capacity, due to a commit, etc.); it will improve performance. It
is an internal buffer used by Lucene. It is not the total memory of Tomcat... 


> 100

It will deal with 100 segments, and each segment will consist of a number of
files (roughly equal to the number of fields) - you may have 20 fields, 2000 files...



For many such applications, set ulimit to 65536. You never know how many
files you will need (including log files of Tomcat, class files, config
files, image/css/html files, etc...)

Even with 10 Lucene segments (mergeFactor), 10 files each, (100 files)
Lucene may need much more during commit/optimize...


-Fuad


> -Original Message-
> From: Ranganathan, Sharmila [mailto:sranganat...@library.rochester.edu]
> Sent: October-23-09 1:08 PM
> To: solr-user@lucene.apache.org
> Subject: Too many open files
> 
> Hi,
> 
> I am getting too many open files error.
> 
> Usually I test on a server that has 4GB RAM and assigned 1GB for
> tomcat(set JAVA_OPTS=-Xms256m -Xmx1024m), ulimit -n is 256 for this
> server and has following setting for SolrConfig.xml
> 
> 
> 
> true
> 
> 1024
> 
> 100
> 
> 2147483647
> 
> 1
> 
> 
> 
> In my case 200,000 documents is of 1024MB size and in this testing, I am
> indexing total of million documents. We have high setting because we are
> expected to index about 10+ million records in production. It works fine
> in this server.
> 
> 
> 
> When I deploy same solr configuration on a server with 32GB RAM, I get
> "too many open files" error. The ulimit -n is 1024 for this server. Any
> idea? Is this because 2nd server has 32GB RAM? Is 1024 open files limit
> too low? Also I don't find any documentation for .
> I checked Solr 'Solr 1.4 Enterprise Search Server' book, wiki, etc. I am
> using Solr 1.3.
> 
> 
> 
> Is it good idea to use ramBufferSizeMB? Vs maxBufferedDocs?  What does
> ramBufferSizeMB mean? My understanding is that when documents added to
> index which are initially stored in memory reaches size
> 1024MB(ramBufferSizeMB), it flushes data to disk. Or is it when total
> memory used(by tomcat, etc) reaches 1024, it flushed data to disk?
> 
> 
> 
> Thanks,
> 
> Sharmila
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 





RE: Too many open files

2009-10-23 Thread Fuad Efendi
Make it 10:
10

-Fuad


> -Original Message-
> From: Ranganathan, Sharmila [mailto:sranganat...@library.rochester.edu]
> Sent: October-23-09 1:08 PM
> To: solr-user@lucene.apache.org
> Subject: Too many open files
> 
> Hi,
> 
> I am getting too many open files error.
> 
> Usually I test on a server that has 4GB RAM and assigned 1GB for
> tomcat(set JAVA_OPTS=-Xms256m -Xmx1024m), ulimit -n is 256 for this
> server and has following setting for SolrConfig.xml
> 
> 
> 
> true
> 
> 1024
> 
> 100
> 
> 2147483647
> 
> 1
> 
> 
> 
> In my case 200,000 documents is of 1024MB size and in this testing, I am
> indexing total of million documents. We have high setting because we are
> expected to index about 10+ million records in production. It works fine
> in this server.
> 
> 
> 
> When I deploy same solr configuration on a server with 32GB RAM, I get
> "too many open files" error. The ulimit -n is 1024 for this server. Any
> idea? Is this because 2nd server has 32GB RAM? Is 1024 open files limit
> too low? Also I don't find any documentation for .
> I checked Solr 'Solr 1.4 Enterprise Search Server' book, wiki, etc. I am
> using Solr 1.3.
> 
> 
> 
> Is it good idea to use ramBufferSizeMB? Vs maxBufferedDocs?  What does
> ramBufferSizeMB mean? My understanding is that when documents added to
> index which are initially stored in memory reaches size
> 1024MB(ramBufferSizeMB), it flushes data to disk. Or is it when total
> memory used(by tomcat, etc) reaches 1024, it flushed data to disk?
> 
> 
> 
> Thanks,
> 
> Sharmila
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 





Too many open files

2009-10-23 Thread Ranganathan, Sharmila
Hi,

I am getting too many open files error.

Usually I test on a server that has 4GB RAM and assigned 1GB for
tomcat(set JAVA_OPTS=-Xms256m -Xmx1024m), ulimit -n is 256 for this
server and has following setting for SolrConfig.xml

 

true

1024

100

2147483647

1

 

In my case 200,000 documents are about 1024MB in size, and in this testing I am
indexing a total of a million documents. We have a high setting because we are
expected to index about 10+ million records in production. It works fine
on this server. 

 

When I deploy the same solr configuration on a server with 32GB RAM, I get the
"too many open files" error. The ulimit -n is 1024 for this server. Any
idea? Is this because the 2nd server has 32GB RAM? Is a 1024 open files limit
too low? Also I don't find any documentation for .
I checked the 'Solr 1.4 Enterprise Search Server' book, wiki, etc. I am
using Solr 1.3.

 

Is it a good idea to use ramBufferSizeMB vs. maxBufferedDocs? What does
ramBufferSizeMB mean? My understanding is that when the documents added to the
index, which are initially stored in memory, reach a size of
1024MB (ramBufferSizeMB), the data is flushed to disk. Or is it when the total
memory used (by tomcat, etc.) reaches 1024 that data is flushed to disk?

 

Thanks,

Sharmila

 

 

 

 

 

 

 



Re: spellcheck /too many open files

2009-06-09 Thread revas
Thanks

On Tue, Jun 9, 2009 at 5:14 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> On Tue, Jun 9, 2009 at 4:32 PM, revas  wrote:
>
> > Thanks ShalinWhen we use the external  file  dictionary (if there is
> > one),then it should work fine ,right for spell check,also is there any
> > format for this file
> >
>
> The external file should have one token per line. See
> http://wiki.apache.org/solr/FileBasedSpellChecker
>
> The default analyzer is WhitespaceAnalyzer. So all tokens in the file will
> be split on whitespace and the resulting tokens will be used for giving
> suggestions. If you want to change the analyzer, specify fieldType in the
> spell checker configuration and the component will use the analyzer
> configured for that field type.
>
> --
> Regards,
> Shalin Shekhar Mangar.
>


Re: spellcheck /too many open files

2009-06-09 Thread Shalin Shekhar Mangar
On Tue, Jun 9, 2009 at 4:32 PM, revas  wrote:

> Thanks ShalinWhen we use the external  file  dictionary (if there is
> one),then it should work fine ,right for spell check,also is there any
> format for this file
>

The external file should have one token per line. See
http://wiki.apache.org/solr/FileBasedSpellChecker
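
For example, such a dictionary file (the name spellings.txt below is only
illustrative) would simply contain one term per line:

spellings.txt:
  solr
  lucene
  tomcat
  healthcheck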

The default analyzer is WhitespaceAnalyzer. So all tokens in the file will
be split on whitespace and the resulting tokens will be used for giving
suggestions. If you want to change the analyzer, specify fieldType in the
spell checker configuration and the component will use the analyzer
configured for that field type.

-- 
Regards,
Shalin Shekhar Mangar.


Re: spellcheck /too many open files

2009-06-09 Thread revas
Thanks Shalin. When we use the external file dictionary (if there is
one), then it should work fine for spell check, right? Also, is there any
format for this file?

Regards
Sujatha

On Tue, Jun 9, 2009 at 3:03 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> On Tue, Jun 9, 2009 at 2:56 PM, revas  wrote:
>
> > But the spell check componenet uses the n-gram analyzer and henc should
> > work
> > for any language ,is this correct ,also we can refer an extern dictionary
> > for suggestions ,could this be in any language?
> >
>
> Yes it does use n-grams but there's an analysis step before the n-grams are
> created. For example, if you are creating your spell check index from a
> Solr
> field, SpellCheckComponent uses that field's index time analyzer. So you
> should create your language-specific fields in such a way that the analysis
> works correctly for that language.
>
>
> > The open files is not because of spell check as we have not yet
> implemented
> > this yet, every time we restart solr we need to up the ulimit ,otherwise
> it
> > does not work,so is there any workaround to permanently close this open
> > files ,does optmizing the index close it?
> >
>
> Optimization merges the segments of the index into one big segment. So it
> will reduce the number of files. However, during the merge it may create
> many more files. The old files after the merge are cleanup by Lucene in a
> while (unless you have changed the defaults in the IndexDeletionPolicy
> section in solrconfig.xml).
>
> --
> Regards,
> Shalin Shekhar Mangar.
>


Re: spellcheck /too many open files

2009-06-09 Thread Shalin Shekhar Mangar
On Tue, Jun 9, 2009 at 2:56 PM, revas  wrote:

> But the spell check componenet uses the n-gram analyzer and henc should
> work
> for any language ,is this correct ,also we can refer an extern dictionary
> for suggestions ,could this be in any language?
>

Yes it does use n-grams but there's an analysis step before the n-grams are
created. For example, if you are creating your spell check index from a Solr
field, SpellCheckComponent uses that field's index time analyzer. So you
should create your language-specific fields in such a way that the analysis
works correctly for that language.


> The open files is not because of spell check as we have not yet implemented
> this yet, every time we restart solr we need to up the ulimit ,otherwise it
> does not work,so is there any workaround to permanently close this open
> files ,does optmizing the index close it?
>

Optimization merges the segments of the index into one big segment. So it
will reduce the number of files. However, during the merge it may create
many more files. The old files after the merge are cleanup by Lucene in a
while (unless you have changed the defaults in the IndexDeletionPolicy
section in solrconfig.xml).

-- 
Regards,
Shalin Shekhar Mangar.


Re: spellcheck /too many open files

2009-06-09 Thread revas
But the spell check component uses the n-gram analyzer and hence should work
for any language, is this correct? Also, we can refer to an external dictionary
for suggestions; could this be in any language?

The open files are not because of spell check, as we have not implemented
this yet. Every time we restart solr we need to up the ulimit, otherwise it
does not work, so is there any workaround to permanently close these open
files? Does optimizing the index close them?

Regards
Sujatha

On Tue, Jun 9, 2009 at 12:53 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:

> On Tue, Jun 9, 2009 at 11:15 AM, revas  wrote:
>
> >
> > 1)Does the spell check component support all languages?
> >
>
> SpellCheckComponent relies on Lucene/Solr analyzers and tokenizers. So if
> you can find an analyzer/tokenizer for your language, spell checker can
> work.
>
>
> > 2) I have a scnenario where i have abt 20 webapps in  a single
> container.We
> > get too many open files at index time /while restarting tomcat.
>
>
> Is that because of SpellCheckComponent?
>
>
> > The mergefactor is at default.
> >
> > If i reduce the merge factor to 2 and optimize the index ,will the open
> > files be closed automatically or would i have to reindex to close the
> open
> > files or  how do i close the already opened files.This is on linux with
> > solr
> > 1.3 and tomcat 5.5
> >
>
> Lucene/Solr does not keep any file opened longer than it is necessary. But
> decreasing merge factor should help. You can also increase the open file
> limit on your system.
>
> --
> Regards,
> Shalin Shekhar Mangar.
>


Re: spellcheck /too many open files

2009-06-09 Thread Shalin Shekhar Mangar
On Tue, Jun 9, 2009 at 11:15 AM, revas  wrote:

>
> 1)Does the spell check component support all languages?
>

SpellCheckComponent relies on Lucene/Solr analyzers and tokenizers. So if
you can find an analyzer/tokenizer for your language, spell checker can
work.


> 2) I have a scnenario where i have abt 20 webapps in  a single container.We
> get too many open files at index time /while restarting tomcat.


Is that because of SpellCheckComponent?


> The mergefactor is at default.
>
> If i reduce the merge factor to 2 and optimize the index ,will the open
> files be closed automatically or would i have to reindex to close the open
> files or  how do i close the already opened files.This is on linux with
> solr
> 1.3 and tomcat 5.5
>

Lucene/Solr does not keep any file opened longer than it is necessary. But
decreasing merge factor should help. You can also increase the open file
limit on your system.

-- 
Regards,
Shalin Shekhar Mangar.


spellcheck /too many open files

2009-06-08 Thread revas
Hi ,

1)Does the spell check component support all languages?


2) I have a scenario where I have about 20 webapps in a single container. We
get too many open files at index time / while restarting tomcat.

The mergefactor is at the default.

If I reduce the merge factor to 2 and optimize the index, will the open
files be closed automatically, or would I have to reindex to close the open
files, or how do I close the already opened files? This is on linux with solr
1.3 and tomcat 5.5.

Regards
Revas


Re: Too many open files and background merge exceptions

2009-04-06 Thread Walter Ferrara
you may try to put true in that useCompoundFile entry; this way indexing
should use far fewer file descriptors, but it will slow down indexing, see
http://issues.apache.org/jira/browse/LUCENE-888.
Try to see if the reason for the lack of descriptors is related only to solr. How
are you doing the indexing - by using solrj, by posting xmls? Are the files being
opened/parsed on the same machine as solr?
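
(For reference, the useCompoundFile switch at the raw Lucene level is a
one-liner; a sketch assuming a Lucene 2.x IndexWriter - in Solr it is normally
set via solrconfig.xml rather than in code:)

import org.apache.lucene.index.IndexWriter;

public class CompoundFileSetting {
    public static void enableCompoundFormat(IndexWriter writer) {
        // With the compound format each segment is packed into a single .cfs
        // file, so far fewer descriptors are open at once, at some indexing cost.
        writer.setUseCompoundFile(true);
    }
}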

On Mon, Apr 6, 2009 at 2:58 PM, Jarek Zgoda  wrote:

> I'm indexing a set of 50 small documents. I'm adding documents in
> batches of 1000. At the beginning I had a setup that optimized the index
> each 1 documents, but quickly I had to optimize after adding each batch
> of documents. Unfortunately, I'm still getting the "Too many open files" IO
> error on optimize. I went from mergeFactor of 25 down to 10, but I'm still
> unable to optimize the index.
>
> I have configuration:
>false
>256
>2
>2147483647
>1
>
> The machine (2 core AMD64, 4GB RAM) is running Debian Linux, Java is
> 1.6.0_11 64-Bit, Solr is nightly build (2009-04-02). And no, I can not
> change the limit of file descriptors (currently: 1024). What more can I do?
>
> --
> We read Knuth so you don't have to. - Tim Peters
>
> Jarek Zgoda, R&D, Redefine
> jarek.zg...@redefine.pl
>
>


Re: Too many open files and background merge exceptions

2009-04-06 Thread Jacob Singh
try ulimit -n5 or something

On Mon, Apr 6, 2009 at 6:28 PM, Jarek Zgoda  wrote:
> I'm indexing a set of 50 small documents. I'm adding documents in
> batches of 1000. At the beginning I had a setup that optimized the index
> each 1 documents, but quickly I had to optimize after adding each batch
> of documents. Unfortunately, I'm still getting the "Too many open files" IO
> error on optimize. I went from mergeFactor of 25 down to 10, but I'm still
> unable to optimize the index.
>
> I have configuration:
>    false
>    256
>    2
>    2147483647
>    1
>
> The machine (2 core AMD64, 4GB RAM) is running Debian Linux, Java is
> 1.6.0_11 64-Bit, Solr is nightly build (2009-04-02). And no, I can not
> change the limit of file descriptors (currently: 1024). What more can I do?
>
> --
> We read Knuth so you don't have to. - Tim Peters
>
> Jarek Zgoda, R&D, Redefine
> jarek.zg...@redefine.pl
>
>



-- 

+1 510 277-0891 (o)
+91  33 7458 (m)

web: http://pajamadesign.com

Skype: pajamadesign
Yahoo: jacobsingh
AIM: jacobsingh
gTalk: jacobsi...@gmail.com


Too many open files and background merge exceptions

2009-04-06 Thread Jarek Zgoda
I'm indexing a set of 50 small documents. I'm adding documents in  
batches of 1000. At the beginning I had a setup that optimized the  
index each 1 documents, but quickly I had to optimize after adding  
each batch of documents. Unfortunately, I'm still getting the "Too  
many open files" IO error on optimize. I went from mergeFactor of 25  
down to 10, but I'm still unable to optimize the index.


I have configuration:
false
256
2
2147483647
1

The machine (2 core AMD64, 4GB RAM) is running Debian Linux, Java is  
1.6.0_11 64-Bit, Solr is nightly build (2009-04-02). And no, I can not  
change the limit of file descriptors (currently: 1024). What more can  
I do?


--
We read Knuth so you don't have to. - Tim Peters

Jarek Zgoda, R&D, Redefine
jarek.zg...@redefine.pl



Re: too many open files

2008-07-14 Thread Brian Carmalt
On Monday, 14.07.2008, 09:50 -0400, Yonik Seeley wrote:
> Solr uses reference counting on IndexReaders to close them ASAP (since
> relying on gc can lead to running out of file descriptors).
> 

How do you force them to close ASAP? I use File and FileOutputStream
objects; I close the output streams and then call delete on the files. I
still have problems with too many open files. After a while I get
exceptions saying that I cannot open any new files. After that the threads stop
working, and a day later the files are still open and marked for
deletion. I have to kill the server to get it running again, or call
System.gc() periodically.

How do I force the VM to release the files?
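
(For reference, the pattern being described, sketched with placeholder names:
the stream is closed in a finally block before the file is deleted, so the
descriptor should be released immediately rather than whenever the garbage
collector runs.)

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class CloseBeforeDelete {
    // Write data to a file and then delete it, making sure the descriptor
    // is released by close() instead of by finalization/GC.
    static void writeAndDelete(File file, byte[] data) throws IOException {
        FileOutputStream out = new FileOutputStream(file);
        try {
            out.write(data);
        } finally {
            out.close();  // releases the file descriptor right away
        }
        if (!file.delete()) {
            throw new IOException("could not delete " + file);
        }
    }
}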

This happens under RedHat with a 2.4 kernel and under Debian Etch with a
2.6 kernel.

Thanks,

Brian
> -Yonik
> 
> On Mon, Jul 14, 2008 at 9:15 AM, Brian Carmalt <[EMAIL PROTECTED]> wrote:
> > Hello,
> >
> > I have a similar problem, not with Solr, but in Java. From what I have
> > found, it is a usage and os problem: comes from using to many files, and
> > the time it takes the os to reclaim the fds. I found the recomendation
> > that System.gc() should be called periodically. It works for me. May not
> > be the most elegant, but it works.
> >
> > Brian.
> >
> > Am Montag, den 14.07.2008, 11:14 +0200 schrieb Alexey Shakov:
> >> now we have set the limt to ~1 files
> >> but this is not the solution - the amount of open files increases
> >> permanantly.
> >> Earlier or later, this limit will be exhausted.
> >>
> >>
> >> Fuad Efendi schrieb:
> >> > Have you tried [ulimit -n 65536]? I don't think it relates to files
> >> > marked for deletion...
> >> > ==
> >> > http://www.linkedin.com/in/liferay
> >> >
> >> >
> >> >> Earlier or later, the system crashes with message "Too many open files"
> >> >
> >> >
> >>
> >>
> >>
> >
> >



Re: too many open files

2008-07-14 Thread Alexey Shakov

Yonik Seeley wrote:

On Mon, Jul 14, 2008 at 10:17 AM, Alexey Shakov <[EMAIL PROTECTED]> wrote:
  

Yonik Seeley schrieb:


On Mon, Jul 14, 2008 at 5:14 AM, Alexey Shakov <[EMAIL PROTECTED]>
  

now we have set the limt to ~1 files
but this is not the solution - the amount of open files increases
permanantly.
Earlier or later, this limit will be exhausted.



How can you tell? Are you seeing descriptor use continually growing?

-Yonik
  

'deleted' index files, listed with lsof-command today, was listed (also as
deleted)
several days ago...

But the amount of this 'deleted' files increases. So, I make a conclusion,
that this is the question of time, when the limit of 1 will be reached.



You are probably just seeing growth in the number of segments in the
index... which means that any IndexReader will be holding open a
larger number of files at any one time (and become deleted when the
IndexWriter removes old segments).

This growth in the number of segments isn't unbounded though (because
of segment merges).  Your 10,000 descriptors should be sufficient.

-Yonik


  


Ok, thank you for the explanation. I will observe the number of open files
over time - hopefully it remains stable.


Best regards,
Alexey



Fortunately, the check-ins of new documents are seldom. The server (Tomcat)
was restarted (due to different software updates) relatively oft in the last
weeks...  So, we had no yet the possibility to reach this limit. But the
default open file limit (1024) was already reached several times (before we
increase it)…

Thanks for your help !
Alexey




--
Alexey Shakov (Senior Software Developer/Architect)
menta AG - The Information E-volution

Bodenseestraße 4, D-81241 München
Phone: +49-89/87130-142
Fax: +49-89/87130-146
Mobile: +49-177/3200615
E-Mail: <[EMAIL PROTECTED]>
Registered: Amtsgericht München, HRB 125132
Executive Board: Reinhold Müller
Supervisory Board: Thomas N. Pieper (Chairman)




Re: too many open files

2008-07-14 Thread Yonik Seeley
On Mon, Jul 14, 2008 at 10:17 AM, Alexey Shakov <[EMAIL PROTECTED]> wrote:
> Yonik Seeley schrieb:
>>
>> On Mon, Jul 14, 2008 at 5:14 AM, Alexey Shakov <[EMAIL PROTECTED]>
>>> now we have set the limt to ~1 files
>>> but this is not the solution - the amount of open files increases
>>> permanantly.
>>> Earlier or later, this limit will be exhausted.
>>>
>>
>> How can you tell? Are you seeing descriptor use continually growing?
>>
>> -Yonik
>
> 'deleted' index files, listed with lsof-command today, was listed (also as
> deleted)
> several days ago...
>
> But the amount of this 'deleted' files increases. So, I make a conclusion,
> that this is the question of time, when the limit of 1 will be reached.

You are probably just seeing growth in the number of segments in the
index... which means that any IndexReader will be holding open a
larger number of files at any one time (and those files show up as
deleted once the IndexWriter removes the old segments).

This growth in the number of segments isn't unbounded though (because
of segment merges).  Your 10,000 descriptors should be sufficient.

-Yonik


> Fortunately, the check-ins of new documents are seldom. The server (Tomcat)
> was restarted (due to different software updates) relatively oft in the last
> weeks...  So, we had no yet the possibility to reach this limit. But the
> default open file limit (1024) was already reached several times (before we
> increase it)…
>
> Thanks for your help !
> Alexey


Re: too many open files

2008-07-14 Thread Alexey Shakov

Yonik Seeley wrote:

On Mon, Jul 14, 2008 at 5:14 AM, Alexey Shakov <[EMAIL PROTECTED]> wrote:
  

now we have set the limt to ~1 files
but this is not the solution - the amount of open files increases
permanantly.
Earlier or later, this limit will be exhausted.



How can you tell? Are you seeing descriptor use continually growing?

-Yonik
  



The 'deleted' index files listed by the lsof command today were already listed
(also as deleted) several days ago...

But the number of these 'deleted' files keeps increasing, so I conclude that it
is only a question of time until the limit of 10,000 is reached.

Fortunately, check-ins of new documents are rare. The server (Tomcat) was
restarted (due to various software updates) relatively often in the last
weeks... so we have not yet had the chance to reach this limit. But the
default open file limit (1024) was already reached several times (before we
increased it).

Thanks for your help !
Alexey





Re: too many open files

2008-07-14 Thread Yonik Seeley
On Mon, Jul 14, 2008 at 9:52 AM, Fuad Efendi <[EMAIL PROTECTED]> wrote:
> Even Oracle requires 65536; MySQL+MyISAM depends on number of tables,
> indexes, and Client Threads.
>
> From my experience with Lucene, 8192 is not enough; leave space for OS too.
>
> Multithreaded application (in most cases) multiplies number of files to a
> number of threads (each thread needs own handler), in case with SOLR-Tomcat:
> 256 threads... Number of files depends on mergeFactor=10 (default for SOLR).
> Now, if 10 is "merge factor" and we have *.cfs, *.fdt, etc (6 file types per
> segment):
> 256*10*6 = 15360 (theoretically)

In Solr, the number of threads does not come into play.  It would only
matter in Lucene if you were doing something like opening an
IndexReader per thread or something.

The number of files per segment is normally more like 12, but only 9
of them are held open for the entire life of the reader.

Also remember that the IndexWriter internally uses another IndexReader
to do deletions, and Solr can have 2 (or more) open... one serving
queries and one opening+warming.
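
(As a rough back-of-envelope along those lines; the segment count below is an
assumption, not something measured on this index:)

public class DescriptorEstimate {
    public static void main(String[] args) {
        // Rough per-core estimate using the figures above, without the
        // per-thread multiplication.
        int segments        = 30;  // assumed: a few tens of segments with mergeFactor=10
        int filesPerSegment = 9;   // files held open for the life of a reader
        int openReaders     = 3;   // one serving queries, one warming, one inside the IndexWriter

        System.out.println(segments * filesPerSegment * openReaders);  // 810
    }
}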

-Yonik


Re: too many open files

2008-07-14 Thread Fuad Efendi
Even Oracle requires 65536; MySQL+MyISAM depends on the number of tables,
indexes, and client threads.

From my experience with Lucene, 8192 is not enough; leave space for the OS too.

A multithreaded application (in most cases) multiplies the number of open files
by the number of threads (each thread needs its own handle); in the case of
Solr on Tomcat: 256 threads... The number of files also depends on mergeFactor=10
(the default for Solr). Now, if 10 is the merge factor and we have *.cfs,
*.fdt, etc. (6 file types per segment):

256 * 10 * 6 = 15360 (theoretically)

==
http://www.linkedin.com/in/liferay


Quoting Brian Carmalt <[EMAIL PROTECTED]>:


Hello,

I have a similar problem, not with Solr, but in Java. From what I have
found, it is a usage and os problem: comes from using to many files, and
the time it takes the os to reclaim the fds. I found the recomendation
that System.gc() should be called periodically. It works for me. May not
be the most elegant, but it works.

Brian.

Am Montag, den 14.07.2008, 11:14 +0200 schrieb Alexey Shakov:

now we have set the limt to ~1 files
but this is not the solution - the amount of open files increases
permanantly.
Earlier or later, this limit will be exhausted.


Fuad Efendi schrieb:
> Have you tried [ulimit -n 65536]? I don't think it relates to files
> marked for deletion...
> ==
> http://www.linkedin.com/in/liferay
>
>
>> Earlier or later, the system crashes with message "Too many open files"
>
>












Re: too many open files

2008-07-14 Thread Yonik Seeley
Solr uses reference counting on IndexReaders to close them ASAP (since
relying on gc can lead to running out of file descriptors).
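
(Very roughly, the idea looks like the following sketch; this is illustrative
only, not Solr's actual class:)

import java.io.IOException;
import org.apache.lucene.index.IndexReader;

// A reader wrapper that counts its users and closes the underlying reader,
// and therefore its file descriptors, as soon as the last user releases it.
public class RefCountedReader {
    private final IndexReader reader;
    private int refCount = 1;  // the creator holds the first reference

    public RefCountedReader(IndexReader reader) {
        this.reader = reader;
    }

    public synchronized IndexReader get() {
        refCount++;
        return reader;
    }

    public synchronized void release() throws IOException {
        refCount--;
        if (refCount == 0) {
            reader.close();  // descriptors are freed here, not at GC time
        }
    }
}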

-Yonik

On Mon, Jul 14, 2008 at 9:15 AM, Brian Carmalt <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I have a similar problem, not with Solr, but in Java. From what I have
> found, it is a usage and os problem: comes from using to many files, and
> the time it takes the os to reclaim the fds. I found the recomendation
> that System.gc() should be called periodically. It works for me. May not
> be the most elegant, but it works.
>
> Brian.
>
> Am Montag, den 14.07.2008, 11:14 +0200 schrieb Alexey Shakov:
>> now we have set the limt to ~1 files
>> but this is not the solution - the amount of open files increases
>> permanantly.
>> Earlier or later, this limit will be exhausted.
>>
>>
>> Fuad Efendi schrieb:
>> > Have you tried [ulimit -n 65536]? I don't think it relates to files
>> > marked for deletion...
>> > ======
>> > http://www.linkedin.com/in/liferay
>> >
>> >
>> >> Earlier or later, the system crashes with message "Too many open files"
>> >
>> >
>>
>>
>>
>
>


Re: too many open files

2008-07-14 Thread Brian Carmalt
Hello, 

I have a similar problem, not with Solr, but in Java. From what I have
found, it is a usage and OS problem: it comes from using too many files, and
the time it takes the OS to reclaim the fds. I found the recommendation
that System.gc() should be called periodically. It works for me. It may not
be the most elegant solution, but it works.

Brian.  

On Monday, 14.07.2008, at 11:14 +0200, Alexey Shakov wrote:
> now we have set the limt to ~1 files
> but this is not the solution - the amount of open files increases 
> permanantly.
> Earlier or later, this limit will be exhausted.
> 
> 
> Fuad Efendi schrieb:
> > Have you tried [ulimit -n 65536]? I don't think it relates to files 
> > marked for deletion...
> > ==
> > http://www.linkedin.com/in/liferay
> >
> >
> >> Earlier or later, the system crashes with message "Too many open files"
> >
> >
> 
> 
> 


