Separating requests over two ports is a nice solution when you have multiple
user types. I like that, although I don't think I need it for this case.

I'm just going to go the 'normal' caching route and see where that takes me,
instead of assuming upfront that it can't be done :-)

Thanks!



hossman wrote:
> 
> 
> : Although I haven't tried yet, I can't imagine that this request returns
> : in sub-second time, which is what I want (having an index of about 1M
> : docs with 6000 fields/doc and about 10 complex facet queries per
> : request).
> 
> i wouldn't necessarily assume that :)
> 
> If you have a request handler which does a query with a facet.field, and 
> then does a followup query for the top N constraints in that facet.field, 
> the time needed to execute that handler on a cold index should primarily 
> depend on the faceting aspect and how many unique terms there are in that 
> field.  try it and see.
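
(Editorial note: as an illustration of the kind of request hossman describes,
with purely hypothetical host and field names, the faceting query might look
like this, followed by one query per top constraint:)

```
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=category&facet.limit=10
http://localhost:8983/solr/select?q=*:*&fq=category:books&rows=10
```

The cost of the first request on a cold index is dominated by the number of
unique terms in the faceted field, which is his point above.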
> 
> : The navigation pages are pretty important for, eh well, navigation ;-)
> : and although I can rely on frequent access of these pages most of the
> : time, it is not guaranteed (so neither is the caching)
> 
> if i were in your shoes: i wouldn't worry about it.  i would setup 
> "cold cache warming" of the important queries using a firstSearcher event 
> listener, i would setup autowarming on the caches, i would setup explicit 
> warming of queries using sort fields i care about in a newSearcher event 
> listener, and i would make sure to tune my caches so that they were big 
> enough to contain a much larger number of entries than are used by my 
> custom request handler for the queries i care about (especially if my index 
> only changed a few times a day, the caches become a huge win in that case, 
> so throw everything you've got at them)
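
(Editorial note: a sketch of what that warming setup might look like in
solrconfig.xml — the queries, field names, and cache sizes below are
placeholders to be tuned for the actual index:)

```xml
<query>
  <!-- caches sized generously, with autowarming from the old searcher -->
  <filterCache      class="solr.LRUCache" size="16384" initialSize="4096" autowarmCount="4096"/>
  <queryResultCache class="solr.LRUCache" size="16384" initialSize="4096" autowarmCount="1024"/>

  <!-- cold-cache warming on startup: the important navigation queries -->
  <listener event="firstSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst>
        <str name="q">*:*</str><str name="rows">0</str>
        <str name="facet">true</str><str name="facet.field">category</str>
      </lst>
    </arr>
  </listener>

  <!-- re-warm sorts and key queries whenever a new searcher is opened -->
  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst><str name="q">*:*</str><str name="sort">price asc</str></lst>
    </arr>
  </listener>
</query>
```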
> 
> and for the record: i've been in your shoes.
> 
> From a purely theoretical standpoint: if enough other requests are coming 
> in fast enough to expunge the objects used by your "important" navigation 
> pages from the caches ... then those pages aren't that important (at least 
> not to your end users as an aggregate)
> 
> on the other hand: if you've got discrete pools of users (like say: 
> customers who do searches, vs your boss who thinks navigation pages are 
> really important) then another approach is to have two ports serving 
> queries -- one that you send your navigation type queries to (with the 
> caches tuned appropriately) and one that you send other traffic to (with 
> caches tuned appropriately) ... i do that for one major index, it makes a 
> lot of sense when you have very distinct usage profiles and you want to 
> get the most bang for your buck cache wise.
> 
> 
> : > #1 wouldn't really accomplish what you want without #2 as well.
> 
> : regarding #1. 
> : Wouldn't making a user-cache for the sole purpose of storing these
> : queries be enough? I could then reference this user-cache by name, and
> : extract the
> 
> only if you also write a custom request handler ... that was my point 
> before it was clear that you were already doing that no matter what (you 
> had custom request handler listed in #2)
> 
> you could definitely make sure to explicitly put all of your DocLists in 
> your own usercache, that will certainly work.  but frankly, based on 
> what you've described about your use case, and how often your data 
> changes, it would probably be easier to set up a layer of caching in front 
> of Solr (since you are concerned with ensuring *all* of the data 
> for these important pages gets cached) ... something like an HTTP reverse 
> proxy cache (aka: accelerator proxy) would help you ensure that these 
> whole pages were getting cached.
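
(Editorial note: a minimal sketch of such a caching layer, here using nginx's
proxy cache — the paths, zone name, port, and TTL are placeholders, and any
caching reverse proxy such as Squid or Varnish works similarly:)

```nginx
proxy_cache_path /var/cache/nginx/solr levels=1:2 keys_zone=solr_cache:10m
                 max_size=256m inactive=12h;

server {
    listen 80;
    location /solr/select {
        proxy_pass http://127.0.0.1:8983;
        proxy_cache solr_cache;
        proxy_cache_key "$request_uri";
        # index only changes a few times a day, so cached responses
        # can stay valid for hours
        proxy_cache_valid 200 12h;
    }
}
```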
> 
> i've never tried it, but in theory: you could even setup a newSearcher 
> event listener to trigger a little script to ping your proxy with a 
> request that forced it to revalidate the query when your index changes.
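
(Editorial note: one way to wire that up is Solr's RunExecutableListener,
which runs an external command when the event fires — the script name and
directory below are placeholders; the script itself could be as simple as a
curl request with a `Cache-Control: no-cache` header to force the proxy to
revalidate:)

```xml
<listener event="newSearcher" class="solr.RunExecutableListener">
  <str name="exe">refresh-proxy.sh</str>
  <str name="dir">/var/solr/bin</str>
  <bool name="wait">false</bool>
</listener>
```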
> 
> 
> 
> -Hoss
> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/how-to-make-sure-a-particular-query-is-ALWAYS-cached-tf4566711.html#a13110514
Sent from the Solr - User mailing list archive at Nabble.com.
