[ https://issues.apache.org/jira/browse/SOLR-1308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12735466#action_12735466 ]

Hoss Man commented on SOLR-1308:
--------------------------------

bq. One interesting question is the structure of the cache and how to size caches.

I feel like I'm missing something here ... wouldn't the simplest approach still 
be the best?

If I currently have a single filterCache of size=1024 and 1 million docs, then 
that uses up some quantity of memory =~ func(1024, 1mil), depending on the 
sparseness of each cached query.

If I start having per-segment caches, and there are 22 segments each with a 
filterCache of size=1024, then the amount of memory used by all the caches will 
be ~22*func(1024, 1mil/22) ... which should wind up being roughly the same as 
before.

Smaller segments will wind up using less RAM for their caches, even if the 
"size" of the cache is the same for each segment.

> Cache docsets and docs at the SegmentReader level
> -------------------------------------------------
>
>                 Key: SOLR-1308
>                 URL: https://issues.apache.org/jira/browse/SOLR-1308
>             Project: Solr
>          Issue Type: Improvement
>    Affects Versions: 1.4
>            Reporter: Jason Rutherglen
>            Priority: Minor
>             Fix For: 1.5
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> Solr caches docsets and documents at the top-level Multi*Reader.
> After a commit, the caches are flushed. Reloading the
> caches in near realtime (i.e. commits every 1s - 2min)
> unnecessarily consumes IO resources, especially for largish
> indexes.
> We can cache docsets and documents at the SegmentReader level.
> The cache settings in SolrConfig can be applied to the
> individual SR caches.
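
As an illustration of what caching at the SegmentReader level could look like, 
here is a minimal, hypothetical sketch (not the actual Solr or Lucene 
implementation): filter results are keyed by a per-segment cache key, so after 
a commit only new or merged segments need their entries recomputed, while 
unchanged segments keep theirs.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical per-segment filter cache. The outer key stands in for a
// segment reader's identity (e.g. Lucene's reader core cache key); the
// inner map caches filter results computed against that single segment.
class PerSegmentFilterCache {
    private final Map<Object, Map<String, long[]>> cache = new ConcurrentHashMap<>();

    // Return the cached per-segment doc id set for this filter, computing
    // and caching it on a miss. long[] is a stand-in for a real DocSet.
    long[] getOrCompute(Object segmentKey, String filterQuery,
                        Function<String, long[]> computeForSegment) {
        return cache
            .computeIfAbsent(segmentKey, k -> new ConcurrentHashMap<>())
            .computeIfAbsent(filterQuery, computeForSegment);
    }

    // Drop a segment's entries once its reader is closed (e.g. merged away),
    // so the cache does not pin memory for segments that no longer exist.
    void onSegmentClose(Object segmentKey) {
        cache.remove(segmentKey);
    }
}
{code}

The per-segment results would still need to be combined (offsetting each 
segment's doc ids by its docBase) into a top-level DocSet at query time; the 
sketch only shows why cached entries can survive a commit.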

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
