Hi All,
          I am running an application in which I need to index about
300,000 records from a table that has 6 columns. I am committing to
the Solr server after every 10,000 rows, and I observed that by the
time about 150,000 rows have been processed, the process eats up about
1 GB of memory; since my server has only 1 GB, it throws an
OutOfMemoryError. However, if I commit after every 1,000 rows, it is
able to process about 200,000 rows before running out of memory. This
is just a dev server, and the production data will be much bigger. It
would be great if someone could suggest a way to improve this
scenario.
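
For reference, here is roughly the shape of my indexing loop (a
simplified sketch using SolrJ and JDBC; the Solr URL, JDBC URL, table,
and field names are placeholders for my actual setup, and the SolrJ
class names may differ by version):

    import java.sql.*;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class Indexer {
        public static void main(String[] args) throws Exception {
            // Placeholder URLs and credentials for my actual environment.
            SolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/mydb", "user", "pass");
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery(
                "SELECT id, col1, col2, col3, col4, col5 FROM mytable");

            int count = 0;
            while (rs.next()) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", rs.getString("id"));
                doc.addField("col1", rs.getString("col1"));
                // ... the remaining columns are added the same way ...
                server.add(doc);

                // Commit every 10,000 rows (the batch size mentioned above).
                if (++count % 10000 == 0) {
                    server.commit();
                }
            }
            server.commit(); // final commit for the remaining rows

            rs.close();
            stmt.close();
            conn.close();
        }
    }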
 
 
Regards
Sundar Sankarnarayanan
