I'd bet it's the optimize that's taking the time, and not the delete. You don't really need to optimize these days, and you certainly don't need to do it on every delete.
And you can give solr a list of ids to delete, which would be more efficient. I don't believe you can tell which ones have failed, if any do, when you delete with a list, but you are not using "unsuccessful" now anyway. (There's a rough sketch of what I mean below the quoted thread.)

Tom

On Thu, Dec 9, 2010 at 7:55 AM, Ravi Kiran <ravi.bhas...@gmail.com> wrote:
> Thank you Tom for responding. On average the docs are around 25-35 KB.
> The code is as follows; kindly let me know if you see anything weird, a
> second pair of eyes always helps :-)
>
>     public List<String> deleteDocs(List<String> ids) throws SolrCustomException {
>         CommonsHttpSolrServer server = (CommonsHttpSolrServer) getServerInstance();
>         List<String> unsuccessful = new ArrayList<String>();
>         try {
>             if (ids != null && !ids.isEmpty()) {
>                 for (String id : ids) {
>                     server.deleteById(id);
>                 }
>                 server.commit();
>                 server.optimize();
>             }
>         } catch (IOException ioex) {
>             throw new SolrCustomException("IOException while deleting : ", ioex);
>         } catch (SolrServerException solrex) {
>             throw new SolrCustomException("Could not delete : ", solrex);
>         }
>         return unsuccessful;
>     }
>
>     private SolrServer getServerInstance() throws SolrCustomException {
>         if (server != null) {
>             return server;
>         } else {
>             String url = getServerURL();
>             log.debug("Server URL: " + url);
>             try {
>                 server = new CommonsHttpSolrServer(url);
>                 server.setSoTimeout(1000000);  // socket read timeout
>                 server.setConnectionTimeout(1000000);
>                 server.setDefaultMaxConnectionsPerHost(1000);
>                 server.setMaxTotalConnections(1000);
>                 server.setFollowRedirects(false);  // defaults to false
>                 // allowCompression defaults to false. Server side must support gzip or deflate for this to have any effect.
>                 server.setAllowCompression(true);
>                 server.setMaxRetries(1);  // defaults to 0. > 1 not recommended.
>             } catch (MalformedURLException mex) {
>                 throw new SolrCustomException("Cannot resolve Solr Server at '" + url + "'\n", mex);
>             }
>             return server;
>         }
>     }
>
> Thanks,
>
> Ravi Kiran Bhaskar
>
> On Wed, Dec 8, 2010 at 6:16 PM, Tom Hill <solr-l...@worldware.com> wrote:
>
>> That's a pretty low number of documents for autocommit. It means that
>> by the time you get to 850,000 documents, you will have created 8500
>> segments, and that's not counting merges.
>>
>> How big are your documents? I just created an 850,000 doc index (and a
>> 3.5 m doc index) with tiny documents (id and title), and they deleted
>> quickly (17 milliseconds).
>>
>> Maybe you could post your delete code? Are you doing anything else (like
>> commit/optimize)?
>>
>> Tom
>>
>> On Wed, Dec 8, 2010 at 12:55 PM, Ravi Kiran <ravi.bhas...@gmail.com>
>> wrote:
>> > Hello,
>> >
>> > I am using solr 1.4.1. When I delete by query or id from solrj, it is
>> > very, very slow, almost like a hang. The core from which I am deleting
>> > has close to 850K documents in the index. In solrconfig.xml, autocommit
>> > is set as follows. Any idea how to speed up the deletion process?
>> > Please let me know if any more info is required.
>> >
>> > <updateHandler class="solr.DirectUpdateHandler2">
>> >   <!-- Perform a <commit/> automatically under certain conditions:
>> >        maxDocs - number of updates since last commit is greater than this
>> >        maxTime - oldest uncommitted update (in ms) is this long ago
>> >   -->
>> >   <autoCommit>
>> >     <maxDocs>100</maxDocs>
>> >     <maxTime>120000</maxTime>
>> >   </autoCommit>
>> > </updateHandler>
>> >
>> > Thanks,
>> >
>> > Ravi Kiran Bhaskar
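P.S. Here's a rough, untested sketch of the list-based delete without the optimize. It's meant to slot into the same class as the code quoted above, so the "server" field, getServerInstance() helper and SolrCustomException are all assumed from your code, and I'm assuming SolrJ 1.4's deleteById(List) overload. Treat it as a starting point, not something I've run:

    // Sketch only: "server", getServerInstance() and SolrCustomException
    // are assumed to exist exactly as in the quoted code above.
    public void deleteDocs(List<String> ids) throws SolrCustomException {
        SolrServer server = getServerInstance();
        try {
            if (ids != null && !ids.isEmpty()) {
                // One request for the whole batch instead of one HTTP call per id.
                server.deleteById(ids);
                // Make the deletes visible with a single commit (or just let
                // autoCommit pick them up); no optimize() on every delete.
                server.commit();
            }
        } catch (IOException ioex) {
            throw new SolrCustomException("IOException while deleting : ", ioex);
        } catch (SolrServerException solrex) {
            throw new SolrCustomException("Could not delete : ", solrex);
        }
    }

I dropped the "unsuccessful" list since you never populate it anyway. If you really do want an optimized index, run optimize() from a periodic job (say, nightly) rather than on every delete.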