Hello - what memory is not getting released, and by what process? Crawls 'slowing down' usually happens simply because more and more records are being fetched each cycle.
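If the slowdown tracks CrawlDb growth rather than memory, that is easy to confirm from the crawl directory. A minimal sketch (assuming local mode and the crawl directory data6 from the command quoted below; adjust the path to your layout):

  # print CrawlDb statistics after each cycle; steadily growing db_fetched /
  # db_unfetched counts explain longer generate/fetch/update jobs
  bin/nutch readdb data6/crawldb -stats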
I have never seen Nutch actually leak memory in the JVM heap, and since the process' memory is largely dictated by the max heap size (default 1g), the process' memory (RSS) usage can never exceed 1.2-1.5g. Additionally, each job in a crawl cycle is independent: the JVM exits and a new one is started. One way to verify this is sketched after the quoted message below.

M.

-----Original message-----
> From: Megha Bhandari <mbhanda...@sapient.com>
> Sent: Thursday 7th July 2016 10:57
> To: user@nutch.apache.org
> Subject: Nutch 1.11 | memory leak?
>
> Hi
>
> After running multiple incremental crawls we are seeing a slowdown in our
> Nutch box. Memory is not getting released.
> We are using the following crawl command:
>
> ./crawl -i -D solr.server.url=http://solrserver:8080/solr/solr_core_shard1_replica2 seeds data6 1
>
> Has anyone faced this issue in 1.11?
>
> Regards
> Megha
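A minimal sketch of that verification (assuming local mode and the stock bin/nutch launcher, which reads the NUTCH_HEAPSIZE environment variable, in MB, default 1000, to set -Xmx):

  # cap the heap explicitly; RSS should stay close to this value plus JVM overhead
  export NUTCH_HEAPSIZE=1000

  # watch the Nutch job while a crawl cycle runs; a new PID appears for each job
  # because every job runs in a fresh JVM that exits when the job finishes
  watch -n 5 'ps -o pid,rss,etime,args -C java | grep -i nutch'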