Thanks Timothy,

I gave these a try, and -XX:+CMSPermGenSweepingEnabled seemed to cause the
error to happen more quickly. With this option on, it didn't seem to do
the intermittent garbage collection that delayed the issue when it was off.
I was already using a max of 512MB, and I can reproduce the problem with it
set this high or even higher. Because of how we have this implemented,
increasing it just delays the problem :/

I'd really appreciate anything else you could suggest.


On Wed, Feb 26, 2014 at 3:19 PM, Tim Potter <tim.pot...@lucidworks.com> wrote:

> Hi Josh,
>
> Try adding: -XX:+CMSPermGenSweepingEnabled as I think for some VM
> versions, permgen collection was disabled by default.
>
> Also, I use: -XX:MaxPermSize=512m -XX:PermSize=256m with Solr, so 64M may
> be too small.
>
>
> Timothy Potter
> Sr. Software Engineer, LucidWorks
> www.lucidworks.com
>
> ________________________________________
> From: Josh <jwda...@gmail.com>
> Sent: Wednesday, February 26, 2014 12:27 PM
> To: solr-user@lucene.apache.org
> Subject: Solr Permgen Exceptions when creating/removing cores
>
> We are using the Bitnami version of Solr 4.6.0-1 on a 64-bit Windows
> installation with 64-bit Java 1.7u51, and we are seeing consistent issues
> with PermGen exceptions. We have the permgen configured to be 512MB.
> Bitnami ships with a 32-bit version of Java for Windows, and we are
> replacing it with a 64-bit version.
>
> Passed in Java Options:
>
> -XX:MaxPermSize=64M
> -Xms3072M
> -Xmx6144M
> -XX:+UseParNewGC
> -XX:+UseConcMarkSweepGC
> -XX:CMSInitiatingOccupancyFraction=75
> -XX:+CMSClassUnloadingEnabled
> -XX:NewRatio=3
> -XX:MaxTenuringThreshold=8
>
> This is our use case:
>
> We have what we call a database core which remains fairly static and
> contains the imported contents of a table from SQL server. We then have
> user cores which contain the record ids of results from a text search
> outside of Solr. We then query for the data we want from the database core
> and limit the results to the content of the user core. This allows us to
> combine facet data from Solr with the search results from another engine.
> We are creating the user cores on demand and removing them when the user
> logs out.
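
[Editorial sketch: the cross-core pattern described above could look roughly like the following, using Solr's join query parser to limit results from the database core to ids present in a user core. The base URL, core names ("database", "user_abc123"), and field names ("id", "category") are illustrative assumptions, not taken from the original message.]

```python
# Hypothetical sketch: build a Solr select URL that facets on the database
# core while limiting results to ids present in a per-user core via a
# cross-core join. Core and field names here are assumptions.
from urllib.parse import urlencode

def build_limited_query(base_url, db_core, user_core, join_field="id"):
    """Return a select URL against db_core, restricted by a join
    against user_core on join_field, with faceting enabled."""
    params = {
        "q": "*:*",
        # Filter query joins from the user core back into the database core.
        "fq": "{!join from=%s to=%s fromIndex=%s}*:*"
              % (join_field, join_field, user_core),
        "facet": "true",
        "facet.field": "category",  # assumed facet field
        "wt": "json",
    }
    return "%s/%s/select?%s" % (base_url, db_core, urlencode(params))

url = build_limited_query("http://localhost:8983/solr", "database", "user_abc123")
```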
>
> Our issue is that the constant creation and removal of user cores, combined
> with the constant importing, seems to push us over our PermGen limit. The
> user cores are removed at the end of every session, and as a test I made an
> application that would loop: create the user core, import a set of data into
> it, query the database core using it as a limiter, and then remove the user
> core. My expectation was that in this scenario all the permgen associated
> with that user core would be freed upon its unload, allowing permgen to
> reclaim that memory during a garbage collection. This was not the case:
> permgen usage climbed steadily until the application exhausted the memory.
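
[Editorial sketch: the test loop described above could be driven through Solr's CoreAdmin HTTP API, roughly as below. The Solr URL, core names, and instanceDir layout are assumptions; the import and query steps are elided, and a real run needs a live Solr instance.]

```python
# Minimal sketch of the create/query/unload reproduction loop, using the
# CoreAdmin HTTP API. Names and URL are assumptions for illustration.
from urllib.parse import urlencode
from urllib.request import urlopen

SOLR_URL = "http://localhost:8983/solr"  # assumed Solr location

def core_admin_url(action, **params):
    """Build a CoreAdmin request URL for the given action (CREATE, UNLOAD, ...)."""
    query = urlencode(dict(action=action, wt="json", **params))
    return "%s/admin/cores?%s" % (SOLR_URL, query)

def repro_loop(iterations=100):
    """Create a user core, use it, then unload it -- repeatedly.
    Each cycle should in theory release the core's permgen/class metadata."""
    for i in range(iterations):
        name = "user_core_%d" % i
        urlopen(core_admin_url("CREATE", name=name, instanceDir=name))
        # ... import ids and run the join-limited query against the
        #     database core here ...
        urlopen(core_admin_url("UNLOAD", core=name, deleteIndex="true"))
```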
>
> I also investigated whether there was a connection between the two cores
> left behind because I was joining them together in a query, but even
> unloading the database core after unloading all the user cores didn't
> prevent the limit from being hit or cause any memory to be reclaimed from
> Solr.
>
> Is this a known issue with creating and unloading a large number of cores?
> Could it be related to the core's configuration? Is there something other
> than unloading that needs to happen to free the references?
>
> Thanks
>
> Note: I've tried using tools such as Plumbr to determine whether it's a
> leak within Solr, and my investigation turned up nothing.
>
