Support for a large number of cores and faster loading/unloading of cores
-------------------------------------------------------------------------
Key: SOLR-1293
URL: https://issues.apache.org/jira/browse/SOLR-1293
Project: Solr
Issue Type: New Feature
Reporter: Noble Paul
Fix For: 1.5
Solr is currently not well suited to a large number of homogeneous cores
that require fast, frequent loading and unloading. Often a core needs to be
loaded just to serve a single search query or to index a single document.
The requirements of such a system are:
* Very efficient loading of cores. Solr cannot afford to read, parse, and
create Schema and SolrConfig objects for each core every time the core has to
be loaded (SOLR-919, SOLR-920)
* START/STOP for cores. Currently it is only possible to unload a core (SOLR-880)
* Automatic loading of cores. If a core is present but not loaded and a
request arrives for it, load it automatically before serving the request
* Because there is a large number of cores, they cannot all be kept loaded
at once. There has to be an upper limit beyond which a few cores (probably
the least recently used ones) must be unloaded
* Automatic allotment of dataDir for cores. If the number of cores is too
high, all the cores' dataDirs cannot live in the same directory; there is an
upper limit on the number of entries a Unix directory can hold without
affecting performance
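The load-on-demand and LRU-unloading requirements above can be sketched with a `LinkedHashMap` in access order. This is a minimal illustration, not Solr code: `CoreLruCache` and `SolrCoreHandle` are hypothetical names standing in for whatever registry and core wrapper an implementation would use.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CoreLruCache {
    // Hypothetical stand-in for a loaded core; a real SolrCore would hold
    // the parsed schema, config, and index searchers.
    static class SolrCoreHandle {
        final String name;
        boolean closed = false;
        SolrCoreHandle(String name) { this.name = name; }
        void close() { closed = true; } // release searchers, writers, etc.
    }

    private final Map<String, SolrCoreHandle> loaded;

    CoreLruCache(final int maxLoadedCores) {
        // accessOrder=true: iteration order is least-recently-used first.
        this.loaded = new LinkedHashMap<String, SolrCoreHandle>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, SolrCoreHandle> eldest) {
                if (size() > maxLoadedCores) {
                    eldest.getValue().close(); // unload the LRU core
                    return true;               // and drop it from the registry
                }
                return false;
            }
        };
    }

    // Load-on-demand: open the core lazily on first request.
    SolrCoreHandle getCore(String name) {
        SolrCoreHandle core = loaded.get(name); // get() refreshes LRU order
        if (core == null) {
            core = new SolrCoreHandle(name);
            loaded.put(name, core);             // put() may evict the eldest
        }
        return core;
    }

    int loadedCount() { return loaded.size(); }

    public static void main(String[] args) {
        CoreLruCache cache = new CoreLruCache(2);
        cache.getCore("coreA");
        SolrCoreHandle b = cache.getCore("coreB");
        cache.getCore("coreA"); // touch A; B becomes least recently used
        cache.getCore("coreC"); // third core exceeds the limit of 2
        System.out.println(b.closed ? "evicted coreB" : "coreB still loaded");
        System.out.println("loaded=" + cache.loadedCount());
    }
}
```

A real implementation would also need thread safety (requests may race to load the same core) and would avoid re-parsing schema/config on reload, per SOLR-919/SOLR-920.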