Hi Mukund,
Since I have been facing this issue for a long time, I did some trial and
error. In my case I am connecting to a local Tomcat server using SolrJ.
By default SolrJ allows only 2 connections per host and 20 in total. As I
have a heavy load and many dependencies on Solr, that seemed very low. To
increase the default per-host and total connections I did:
MultiThreadedHttpConnectionManager connectionManager =
        new MultiThreadedHttpConnectionManager();
connectionManager.getParams().setMaxTotalConnections(500);
connectionManager.getParams().setDefaultMaxConnectionsPerHost(500);
connectionManager.closeIdleConnections(0L);

// no automatic retries
DefaultHttpMethodRetryHandler retryHandler =
        new DefaultHttpMethodRetryHandler(0, false);

HttpClient httpClient = new HttpClient(connectionManager);
httpClient.getParams().setParameter("http.method.retry-handler",
        retryHandler);

server = new CommonsHttpSolrServer(getSolrURL(), httpClient);
I have 5 cores set up, and I run the above code for each core in a static
block, sharing the same instance across all classes. But it does not seem
to have any effect: the server still dies randomly within 1 to 2 days. I
am using Tomcat instead of Jetty, and I have already increased maxThreads
in Tomcat to 500. Is there any limitation in Tomcat under this much
stress? On the other hand, when I check with telnet, not many connections
appear to be open. I suspect HttpClient.
Any help?
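One thing worth checking, as a sketch against Commons HttpClient 3.x (the
client that CommonsHttpSolrServer wraps; requires the commons-httpclient
jar): closeIdleConnections(0L) runs only once, at the moment you call it.
An IdleConnectionTimeoutThread keeps evicting idle connections in the
background, so they do not linger half-closed. The class and method names
below are the library's; the factory wrapper and timeout values are my own
assumptions:

```java
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.util.IdleConnectionTimeoutThread;

public class SolrClientFactory {
    public static HttpClient build() {
        MultiThreadedHttpConnectionManager cm =
                new MultiThreadedHttpConnectionManager();
        cm.getParams().setMaxTotalConnections(500);
        cm.getParams().setDefaultMaxConnectionsPerHost(500);

        // closeIdleConnections(0L) is a one-shot call; this background
        // thread repeatedly reaps idle connections instead.
        IdleConnectionTimeoutThread reaper = new IdleConnectionTimeoutThread();
        reaper.addConnectionManager(cm);
        reaper.setConnectionTimeout(5000); // close connections idle > 5 s
        reaper.setTimeoutInterval(1000);   // check once per second
        reaper.start();

        return new HttpClient(cm);
    }
}
```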
regards
On Tuesday 24 January 2012 07:27 AM, Mukunda Madhava wrote:
Hi Ranveer,
I don't have a solution to this problem. I haven't got any response
from the forums either.
I implemented a custom design for distributed searching, as it gave me
better control over the open connections.
On Sun, Jan 22, 2012 at 10:05 PM, Ranveer Kumar
<ranveer.s...@gmail.com> wrote:
Hi Mukunda,
Did you get a solution? Actually, I am also getting the same problem.
Please help me to overcome it.
regards
Ranveer
On Thu, Jun 2, 2011 at 12:37 AM, Mukunda Madhava
<mukunda...@gmail.com> wrote:
Hi Otis,
Sending to solr-user mailing list.
We see these CLOSE_WAIT connections even when I make a simple HTTP
request via curl, i.e. even with a plain query against a primary shard
that fans out to a secondary shard, e.g.
curl "http://primaryshardhost:8180/solr/core0/select?q=*%3A*&shards=secondaryshardhost1:8090/solr/appgroup1_11053000_11053100"
While fetching data, the connection is in the ESTABLISHED state:
-sh-3.2$ netstat | grep ESTABLISHED | grep 8090
tcp   0   0   primaryshardhost:36805   secondaryshardhost1:8090   ESTABLISHED
After the request has returned, it is in the CLOSE_WAIT state:
-sh-3.2$ netstat | grep CLOSE_WAIT | grep 8090
tcp   1   0   primaryshardhost:36805   secondaryshardhost1:8090   CLOSE_WAIT
Why does Solr keep the connections to the shards in CLOSE_WAIT? Is this
a feature of Solr? If we modify an OS property (I don't know which) to
clean up the CLOSE_WAITs, will it cause issues with subsequent searches?
Can someone help me, please?
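For reference, CLOSE_WAIT is not Solr-specific: a TCP socket sits in
CLOSE_WAIT from the moment the peer closes its side until the local
application itself calls close(). A minimal stdlib sketch (the class name
is hypothetical) showing the mechanics:

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseWaitDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket("localhost", server.getLocalPort());
            Socket accepted = server.accept();

            accepted.close(); // peer sends FIN: client side enters CLOSE_WAIT

            InputStream in = client.getInputStream();
            int eof = in.read(); // -1: the FIN arrived, stream is at EOF

            // The socket is still open locally; the OS keeps it in
            // CLOSE_WAIT until we call close() ourselves.
            boolean stillOpenLocally = !client.isClosed();

            client.close(); // only now does the kernel leave CLOSE_WAIT

            System.out.println(eof + " " + stillOpenLocally);
        }
    }
}
```

So a pile-up of CLOSE_WAITs usually means the HTTP client holds response
connections without releasing them, rather than an OS setting to tune.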
thanks,
Mukunda
On Mon, May 30, 2011 at 5:59 PM, Otis Gospodnetic
<otis_gospodne...@yahoo.com> wrote:
> Hi,
>
> A few things:
> 1) why not send this to the Solr list?
> 2) you talk about searching, but the code sample is about optimizing
> the index.
>
> 3) I don't have the SolrJ API in front of me, but isn't there a
> CommonsHttpSolrServer constructor that takes a URL instead of an
> HttpClient instance? Try that one.
>
> Otis
> -----
> Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> Lucene ecosystem search :: http://search-lucene.com/
>
>
>
> ----- Original Message ----
> > From: Mukunda Madhava <mukunda...@gmail.com>
> > To: gene...@lucene.apache.org
> > Sent: Mon, May 30, 2011 1:54:07 PM
> > Subject: CLOSE_WAIT after connecting to multiple shards from a
> > primary shard
> >
> > Hi,
> > We have a "primary" Solr shard and multiple "secondary" shards. We
> > query data from the secondary shards by specifying the "shards"
> > param in the query params.
> >
> > But we found that after receiving the data, there are a large
> > number of CLOSE_WAITs on the secondary shards from the primary
> > shard.
> >
> > For example:
> >
> > tcp   1   0   primaryshardhost:56109   secondaryshardhost1:8090   CLOSE_WAIT
> > tcp   1   0   primaryshardhost:51049   secondaryshardhost1:8090   CLOSE_WAIT
> > tcp   1   0   primaryshardhost:49537   secondaryshardhost1:8089   CLOSE_WAIT
> > tcp   1   0   primaryshardhost:44109   secondaryshardhost2:8090   CLOSE_WAIT
> > tcp   1   0   primaryshardhost:32041   secondaryshardhost2:8090   CLOSE_WAIT
> > tcp   1   0   primaryshardhost:48533   secondaryshardhost2:8089   CLOSE_WAIT
> >
> >
> > We open the Solr connections as below:
> >
> > SimpleHttpConnectionManager cm =
> >         new SimpleHttpConnectionManager(true);
> > cm.closeIdleConnections(0L);
> > HttpClient httpClient = new HttpClient(cm);
> > solrServer = new CommonsHttpSolrServer(url, httpClient);
> > solrServer.optimize();
> >
> > But we still see these issues. Any ideas?
> > --
> > Thanks,
> > Mukunda
> >
>
--
Thanks,
Mukunda
--
Thanks,
Mukunda