Doppleganger threads after ingestion completed

2010-06-17 Thread karl.wright
Folks, I ran 20,000,000 records into Solr via the extractingUpdateRequestHandler under jetty. The previous problems with resources have apparently been resolved by using HTTP/1.1 with keep-alive, rather than creating and destroying 20,000,000 sockets. ;-) However, after the client terminates,
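The resource fix described above (one persistent HTTP/1.1 keep-alive connection instead of a new socket per request) can be sketched as follows. This is a minimal illustration, not Solr's actual client code: the local test server and the `/update/extract` path are stand-ins.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 enables keep-alive by default
    def do_POST(self):
        self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One keep-alive connection reused for every update request, instead of
# opening (and tearing down) a socket per document.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
for i in range(100):
    conn.request("POST", "/update/extract", body=b"doc %d" % i)
    resp = conn.getresponse()
    assert resp.read() == b"ok"
conn.close()
server.shutdown()
```

All 100 requests travel over a single TCP connection, so the client side never accumulates thousands of half-closed sockets.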

Re: Doppleganger threads after ingestion completed

2010-06-19 Thread Lance Norskog
"Chewing up cpu" or "blocked". The stack trace says it's blocked. The sockets are abandoned by the program, yes, but TCP/IP itself has a complex sequence for shutting down sockets that takes a few minutes. If these sockets stay around for hours, then there's a real problem. (In fact, there is a bu

RE: Doppleganger threads after ingestion completed

2010-06-20 Thread karl.wright
Sent: Saturday, June 19, 2010 8:51 PM To: dev@lucene.apache.org Subject: Re: Doppleganger threads after ingestion completed "Chewing up cpu" or "blocked". The stack trace says it's blocked. The sockets are abandoned by the program, yes, but TCP/IP itself has a complex se

Re: Doppleganger threads after ingestion completed

2010-06-20 Thread Lance Norskog
d" but then it must loop. > > Karl > > From: ext Lance Norskog [goks...@gmail.com] > Sent: Saturday, June 19, 2010 8:51 PM > To: dev@lucene.apache.org > Subject: Re: Doppleganger threads after ingestion completed > > "Chewing up cpu" or "blocked&

RE: Doppleganger threads after ingestion completed

2010-06-21 Thread karl.wright
not tried tomcat yet. Karl From: ext Lance Norskog [goks...@gmail.com] Sent: Sunday, June 20, 2010 10:47 PM To: dev@lucene.apache.org Subject: Re: Doppleganger threads after ingestion completed Does 'netstat -an' show incoming sockets for the

Re: Doppleganger threads after ingestion completed

2010-06-22 Thread Lance Norskog
10:47 PM > To: dev@lucene.apache.org > Subject: Re: Doppleganger threads after ingestion completed > > Does 'netstat -an' show incoming sockets for these threads? > > What Solr release is this? > > Is this one long upload of 20m documents without committing? Are you >
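The last question above (one long upload of 20M documents without committing, versus periodic commits) is essentially a batching decision. A hypothetical sketch of committing every N documents; `index_batch` and `commit` are stand-in callbacks for whatever client calls the real ingestion code uses.

```python
def ingest(docs, batch_size=10_000, index_batch=None, commit=None):
    """Send docs in fixed-size batches, committing after each batch so the
    server never holds one enormous uncommitted update."""
    batch, commits = [], 0
    for doc in docs:
        batch.append(doc)
        if len(batch) >= batch_size:
            index_batch(batch)
            commit()
            commits += 1
            batch = []
    if batch:                      # flush the final partial batch
        index_batch(batch)
        commit()
        commits += 1
    return commits

# Usage with stub callbacks: 25,000 docs in batches of 10,000 -> 3 commits.
sent = []
n = ingest(range(25_000),
           index_batch=lambda b: sent.append(len(b)),
           commit=lambda: None)
print(n, sent)  # 3 [10000, 10000, 5000]
```

Committing in intervals like this bounds the server-side resources any single update holds, which is one reason the commit pattern matters when diagnosing lingering sockets or threads.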