Hi,
I am adding a few features to my LTR model which re-use the same value for
different features.
For example, I have features that compare different similarities for each
document with the input text: "token1 token2 token3 token4"
My features are:
- number of common terms
- number of common 3-character sequences, using
> https://lucene.apache.org/solr/guide/7_4/filter-descriptions.html#FilterDescriptions-LengthFilter
> ?
>
> PS: this filter requires a max length too.
>
> Edward
On Thu, Feb 21, 2019 04:52, Furkan KAMACI wrote:
Hi Joakim,
I suggest you read these resources:
http://lucene.472066.n3.nabble.com/Varnish-td4072057.html
http://lucene.472066.n3.nabble.com/SolrJ-HTTP-caching-td490063.html
https://wiki.apache.org/solr/SolrAndHTTPCaches
which give information about HTTP caching, including Varnish Cache,
Last
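For reference, the server-side switch for the HTTP caching discussed above lives in the requestDispatcher section of solrconfig.xml; a minimal sketch (header values are illustrative, not a recommendation):

```xml
<!-- solrconfig.xml: with never304="false", Solr emits Last-Modified and
     ETag headers and answers conditional requests with 304 Not Modified,
     which is what lets an HTTP cache such as Varnish revalidate cheaply. -->
<requestDispatcher>
  <httpCaching never304="false" lastModFrom="openTime" etagSeed="Solr">
    <cacheControl>max-age=30, public</cacheControl>
  </httpCaching>
</requestDispatcher>
```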
Hello dear user list!
I work at a company in retail where we use Solr to perform search-as-you-type.
As soon as you type more than one character in the search field, Solr starts
serving hits.
Of course this generates a lot of "unnecessary" queries (in the sense that
they are never shown to the
In Solr there's ExternalFileFieldReloader, which is responsible for caching
the contents of external files whenever a new searcher is being warmed up.
It happens that I've defined a dynamic field to be used as an
ExternalField, as in
If you have a look inside the code
> was just exploring field collapsing instead of grouping, but it doesn't
> fulfill our requirement of having all values in a facet for a group instead
> of only the grouped value. So we
Hi Yasufumi,
Thanks for the reply. Yes, you are correct. I also checked the code and it
seems the same.
We are facing performance issues due to grouping, so we wanted to be sure
that we are not leaving out any possibility of caching the same in the Query
Result Cache.
was just exploring field
But to cache grouping results, the query result cache would need a {query and
conditions} -> {grouped value, conditions, etc...} -> {DocList} structure,
I think.
Thanks,
Yasufumi
On Fri, May 18, 2018 at 23:41, rubi.hali <rubih...@gmail.com> wrote:
Hi All
Can somebody please explain whether we can cache Solr grouping results in the
query result cache? I don't see any inserts into the query result cache once
we enable grouping.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>
> Problem:
>
> When we start SolrCloud, the cached index pushes memory usage to 98% or
> more. And if we continue to index documents (batch commits of 10,000
> documents), one or more servers will refuse to serve. We cannot log in via
> ssh; even the monitor is unresponsive.
>
> So, how can I limit Solr's index-caching memory behavior?
>
> Thanks for any help!
>
caches to worry
> about (as long as you have autowarmCount=0 on all caches, as is the
> case with the Solr example configs).
>
> To test sorted query performance (I assume you're sorting the index to
> accelerate certain sorted queries), if you can't make the queries
> unique, then add
> {!cache=false} to the query
> example: q={!cache=false}*:*
> You could also add a random term on a non-existent field to change the
> query and prevent unwanted caching...
> example: q=*:* does_not_exist_s:149475394
>
> -Yonik
On Fri, Mar 31, 2017 at 9:44 AM, Nilesh Kamani <nilesh.kam...@gmail.com> wrote:
> I am planning to do load testing for some of my code changes and I need to
> disable all kinds of caching.
Perhaps you should be aiming to either:
1) seek a config + query load that maximizes time spent in your code
I think there are default caching settings. You may need to explicitly
disable them.
Regards,
Alex
On 31 Mar 2017 9:44 AM, "Nilesh Kamani" <nilesh.kam...@gmail.com> wrote:
> Hello All,
>
> I am planning to do load testing for some of my code changes and I n
On 31/03/2017 14:44, Nilesh Kamani wrote:
Hello All,
I am planning to do load testing for some of my code changes and I need to
disable all kinds of caching.
I removed all caching related elements from solr config (in zookeeper).
This is the document I referred to:
https://cwiki.apache.org/confluence/display/solr/Query+Settings
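For context, the caches in question are declared in the query section of solrconfig.xml; a sketch of what "removing all caching related elements" amounts to (the cache classes and sizes shown are the stock example values, and note that the OS page cache and Lucene-internal caching remain in play regardless):

```xml
<!-- solrconfig.xml: commenting out the explicit caches disables them,
     so load tests hit the full query path on every request. -->
<query>
  <!--
  <filterCache      class="solr.FastLRUCache" size="512" autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache"     size="512" autowarmCount="0"/>
  <documentCache    class="solr.LRUCache"     size="512" autowarmCount="0"/>
  -->
</query>
```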
},
"mlt": { "time": 0.0 },
"highlight": { "time": 0.0 },
"stats": { "time": 0.0 },
"expand": { "time": 0.0 },
"terms": { "time": 0.0 },
"debug": { "time": 0.0 }
}
Or perhaps the facet component uses the query c
on the
first and second pass and is not cached, so the query/collapse needs to be
run twice for facets.
The fix for this would be to start caching the DocSets needed for faceting.
Joel Bernstein
http://joelsolr.blogspot.com/
On Fri, Feb 10, 2017 at 1:29 PM, Ronald K. Braun <ronbr...@gmail.com>
submission of the query runs at ~600ms. I interpret this to
mean that caching is somehow defeated when facet processing is set. Facets
are empty as expected:
"facet_counts": {
"facet_queries": { },
"facet_fields": { },
"facet_ranges
On Tue, Dec 20, 2016 at 7:04 AM, Alessandro Benedetti
<benedetti.ale...@gmail.com> wrote:
Hi Erick,
just thinking on this:
1) q=myclause AND filter(field:value)
is identical to
2) q=myclause&fq=field:value
Correct me if I am wrong:
those two queries, from a filter-caching point of view, are identical, but
from a scoring point of view:
1) will score only the documents resu
Hi there,
my index is created from XML files that are downloaded on the fly.
This also includes downloading a mapping file that is used to resolve IDs in
the main file (root entity) and map them onto names.
The basic functionality works - the supplier_name is set for each document.
However, the
> we are implementing a questionnaire tool for companies. I would like to
> import the data using a DIH.
>
> To increase performance i would like to use some caching. But my solution
> is not working. The score of my
>
> questionnaire is empty. But there is a value in the data
Hello,
we are implementing a questionnaire tool for companies. I would like to
import the data using a DIH.
To increase performance i would like to use some caching. But my
solution is not working. The score of my
questionnaire is empty. But there is a value in the database. I've
checked
No. Please re-read and use the admin plugins/stats page to examine for yourself.
1) fq=filter(fromfield:[* TO NOW/DAY+1DAY]&& tofield:[NOW/DAY-7DAY TO *])
&& fq=type:abc
&& is totally unnecessary when using fq clauses; there is already an
implicit AND.
I'm not even sure what the above does, I
Thanks for the explanation, Erick.
Just so I understand this clearly:
1) fq=filter(fromfield:[* TO NOW/DAY+1DAY]&& tofield:[NOW/DAY-7DAY TO *])
&& fq=type:abc
2) fq= fromfield:[* TO NOW/DAY+1DAY]&& fq=tofield:[NOW/DAY-7DAY TO *]) &&
fq=type:abc
Using 1) would benefit from having 2 separate
You're confusing a query clause with fq when thinking about filter(), I think.
Essentially they don't need to be used together, i.e.
q=myclause AND filter(field:value)
is identical to
q=myclause&fq=field:value
both in docs returned and filterCache usage.
q=myclause&fq=filter(field:value)
actually uses
Thanks Ahmet... but I am still not clear: is adding the filter() option
better, or is it the same as the filterCache?
My question is below.
"As mentioned above adding filter() will add the filter query to the cache.
This would mean that results are fetched from cache instead of running n
number of
Hi,
As I understand it, it is useful in case you use an OR operator between two
restricting clauses.
Recall that multiple fq parameters mean an implicit AND.
Ahmet
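Ahmet's point can be sketched as follows (field names are invented): two restricting clauses combined with OR cannot be split into separate fq parameters, but wrapping each in filter() still caches them individually:

```text
q=shirt&fq=filter(color:red) OR filter(size:XL)
```

Each filter(...) clause gets its own filterCache entry, so a later query that uses either clause alone, or both in any combination, can reuse them.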
On Monday, May 9, 2016 4:02 AM, Jay Potharaju wrote:
As mentioned above, adding filter() will add the filter query to the cache.
This would mean that results are fetched from the cache instead of running n
filter queries in parallel.
Is it necessary to use the filter() option? I was under the impression that
all filter queries will get added
We have high query load and considering that I think the suggestions made
above will help with performance.
Thanks
Jay
On Fri, May 6, 2016 at 7:26 AM, Shawn Heisey wrote:
On 5/6/2016 7:19 AM, Shawn Heisey wrote:
> With three separate
> fq parameters, you'll get three cache entries in filterCache from the
> one query.
One more tidbit of information related to this:
When you have multiple filters and they aren't cached, I am reasonably
certain that they run in
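Shawn's "three cache entries" point can be sketched as follows (field names are invented):

```text
q=*:*&fq=type:abc&fq=state:active&fq=date:[NOW/DAY-7DAY TO *]
```

This one query populates three independent filterCache entries, one per fq, so a later query reusing any subset of these clauses as fq parameters gets cache hits for them.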
Thanks Shawn,Erick & Ahmet , this was very helpful.
> On May 6, 2016, at 6:19 AM, Shawn Heisey wrote:
On 5/5/2016 2:44 PM, Jay Potharaju wrote:
> Are you suggesting rewriting it like this ?
> fq=filter(fromfield:[* TO NOW/DAY+1DAY]&& tofield:[NOW/DAY-7DAY TO *] )
> fq=filter(type:abc)
>
> Is this a better use of the cache as opposed to fq=fromfield:[* TO
> NOW/DAY+1DAY]&& tofield:[NOW/DAY-7DAY TO
On Thu, May 5, 2016 at 12:50 PM, Ahmet Arslan <iori...@yahoo.com.invalid>
wrote:
Hi,
Cache enemy is not * but NOW. Since you round it to DAY, cache will work
within-day.
I would use separate filter queries, especially fq=type:abc for the structured
query, so it will be cached independently.
Also consider disabling caching (using cost) in expensive queries:
http://yonik.com/advanced-filter-caching-in-solr/
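The "using cost" technique Ahmet refers to is a local param on the fq; a sketch using the field name from this thread (the cost value is illustrative):

```text
fq={!cache=false cost=150}fromfield:[* TO NOW/DAY+1DAY]
```

cache=false keeps the clause out of the filterCache; for query types that support post-filtering (e.g. {!frange}), a cost of 100 or more additionally turns the clause into a post-filter, checked only against documents that match everything else.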
Hi,
I have a filter query that gets documents based on date ranges from last n
days to anytime in future.
The objective is to get documents between a date range, but the start date
and end date values are stored in different fields and that is why I wrote
the filter query as below
> <https://teaspoon-consulting.com/articles/solr-cache-tuning.html>!
> This is the only good document that I was able to find after searching for
> 1 day!
>
> I was using Solr for 2 years without knowing in detail what it was
> caching... (because I did not need to understand it before).
> I had to take a look since I needed to restart (regularly) my Tomcat
> in order to improve performance...
>
> But I now
We have a very simple query which returns only 5 Solr documents. Under load
it takes 100 ms to 2000 ms.

-----Original Message-----
From: Maulin Rathod
Sent: 03 March 2016 12:24
To: solr-user@lucene.apache.org
Subject: RE: Solr Configuration (Caching & RAM) for performance Tuning

we do a soft commit when we insert/update a document.
//Insert D
To: solr-user@lucene.apache.org
Subject: Re: Solr Configuration (Caching & RAM) for performance Tuning
1) Experiment with the autowarming settings in solrconfig.xml. Since in your
case you're indexing so frequently, consider setting the count to a low number
so that not a lot of time is spent war
Hi,
We are using Solr 5.2 (on Windows 2012 Server / JDK 1.8) for document content
indexing/querying. We found that querying slows down intermittently under
load.
In our analysis we found two issues.
1) Solr is not effectively using caching.
Whenever a new document is indexed, it opens a new
I read the client was happy, so I am only curious to know more :)
Apart from readability, shouldn't it be more efficient to put the filters
directly in the main query if you don't cache?
(Checking the code: when not caching, it adds a Lucene boolean query
with, specifically, a 0 score; maybe
> parameter to suit your needs.
> a) Use the LeastFrequentlyUsed (LFU) eviction policy.
> b) Set the size to whatever number of fqs you find suitable.
> You can do this like so:
> <filterCache ... autowarmCount="10"/>
> You should play around with these parameters to find the best combination
> for your implementation.
> For more details take a look here:
> https://wiki.apache.org/solr/SolrCaching
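The cache element being discussed (its opening tag is missing in the archived text) would look something like this sketch; the class and size values are illustrative placeholders:

```xml
<!-- solrconfig.xml, <query> section: an LFU-evicting filterCache that
     re-populates its 10 most-used entries on each new searcher. -->
<filterCache class="solr.LFUCache"
             size="512"
             initialSize="512"
             autowarmCount="10"/>
```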
Hi,
after looking at the presentation of cloudsearch from lucene revolution
2014
https://www.youtube.com/watch?v=RI1x0d-yO8A&list=PLU6n9Voqu_1FM8nmVwiWWDRtsEjlPqhgP&index=49
min 17:08
I recognized I'd love to be able to remove the burden of disabling filter
query caching from developers
the problem
When I saw the presentation of CloudSearch where they explained that they
were enabling/disabling caching based on fq statistics, I thought this kind
of problem was general enough that I could find a plugin already built.
2016-01-05 19:17 GMT+01:00 Erick Erickson <erickerick...@gmail.com>:
>
fq={!cache=false}n_rea:xxx&fq={!cache=false}provincia:,fq={!cache=false}type:
You have a comma in front of the last fq clause, typo?
Well, the whole point of caching filter queries is so that the
_second_ time you use it,
very little work has to be done. That comes at a cost of course for
first-time execution.
Basically any fq clause that you can guarantee won't be re-used s
> hardware (CPU
> bound, and CPU was scarce).
> After that the customer was happy, the time was up, and I didn't go further,
> but cost was definitely something I'd try.
> When I saw the presentation of CloudSearch where they explained that they
> were enabling/disabling caching based on fq statistics I thought
Content Group
-Original Message-
From: Todd Long [mailto:lon...@gmail.com]
Sent: Wednesday, December 16, 2015 10:21 AM
To: solr-user@lucene.apache.org
Subject: RE: DIH Caching w/ BerkleyBackedCache
James,
I apologize for the late response.
Dyer, James-2 wrote
> With the DIH requ
other Berkeley DB/system configuration I
could consider that would affect this?
It's possible that this caching simply might not be suitable for our data
set where one document might contain a field with tens of thousands of
values... maybe this is the bottleneck with using this database as every add
co
I think there might have been an API
change or something that prevented the uncommitted caching code from working
with newer versions, but I honestly forget. This is probably a viable solution
if you don't want to write any code, but it might take some trial and error
getting it to work.
James
l back on, if possible. Again, I'm not sure
why, but it appears that the Berkeley cache is overwriting itself (i.e.
cleaning up unused data) when building the database... I've read plenty of
other threads where it appears folks are having success using that caching
solution.
Mikhail Khludnev wrote
> threa
Mikhail Khludnev wrote
> "External merge" join helps to avoid boilerplate caching in such simple
> cases.
Thank you for the reply. I can certainly look into this though I would have
to apply the patch for our version (i.e. 4.8.1). I really just simplified
our data configur
being ~1GB in size (this appears to be hard coded). Is
there some additional configuration I'm missing to prevent the process from
"cleaning" up database files before the index has finished? I think this
"cleanup" continues to kickoff the caching which never complete
Hello Todd,
"External merge" join helps to avoid boilerplate caching in such simple
cases.
it should be something
On Fri, Nov 13, 2015 at 10:54 PM, Todd Long <lon...@gmail.com> wrote:
> We currently index using DIH along with the SortedMapBackedCache cac
Erick Erickson wrote
> Have you considered using SolrJ instead of DIH? I've seen
> situations where that can make a difference for things like
> caching small tables at the start of a run, see:
>
> searchhub.org/2012/02/14/indexing-with-solrj/
Nice write-up. I think we'r
Have you considered using SolrJ instead of DIH? I've seen
situations where that can make a difference for things like
caching small tables at the start of a run, see:
searchhub.org/2012/02/14/indexing-with-solrj/
Best,
Erick
On Sat, Oct 24, 2015 at 6:17 PM, Todd Long <lon...@gmail.com>
to implement partial updates with DIH.
James Dyer
Ingram Content Group
-Original Message-
From: Todd Long [mailto:lon...@gmail.com]
Sent: Tuesday, October 20, 2015 8:02 PM
To: solr-user@lucene.apache.org
Subject: DIH Caching with Delta Import
It appears that DIH entity caching (e.g. SortedMapBackedCache) does not work
with deltas... is this simply a bug in the DIH cache support, or somehow by
design?
Any ideas on a workaround for this? Ideally, I could just omit the
"cacheImpl" attribute but that leaves the query (using t
Hello.
- Would you be kind enough to share your experience using SolrCloud with
HTTP Caching to return 304 status as described in the wiki
https://cwiki.apache.org/confluence/display/solr/RequestDispatcher+in+SolrConfig#RequestDispatcherinSolrConfig-httpCachingElement
?
- Looking at the SolrJ
I ran into another issue that I am having trouble running to ground. My
implementation on Solr 4.x worked as expected, but trying to migrate this
to Solr 5.x it looks like some of the faceting is delegated to
DocValuesFacets which ultimately caches things at a field level in the
FieldCache.DEFAULT
On Tue, Aug 18, 2015 at 10:58 PM, Jamie Johnson jej2...@gmail.com wrote:
Hmm...so I think I have things setup correctly, I have a custom
QParserPlugin building a custom query that wraps the query built from the
base parser and stores the user who is executing the query. I've added the
This was my original thought. We already have the thread local so should
be straight fwd to just wrap the Field name and use that as the key. Again
thanks, I really appreciate the feedback
On Aug 19, 2015 8:12 AM, Yonik Seeley ysee...@gmail.com wrote:
On Tue, Aug 18, 2015 at 10:58 PM, Jamie
Subject: Re: Solr Caching (documentCache) not working
On 8/17/2015 7:04 AM, Maulin Rathod wrote:
We have observed that querying intermittently becomes slower when the
documentCache becomes empty. The documentCache is flushed whenever a
new document is added to the collection
Hmm... so I think I have things set up correctly: I have a custom
QParserPlugin building a custom query that wraps the query built from the
base parser and stores the user who is executing the query. I've added the
username to the hashCode and equals checks, so I think everything is set up
properly.
On Tue, Aug 18, 2015 at 9:51 PM, Jamie Johnson jej2...@gmail.com wrote:
Thanks, I'll try to delve into this. We are currently using the parent
query parser, within we could use {!secure} I think. Ultimately I would
want the solr qparser to actually do the work of parsing and I'd just wrap
On Tue, Aug 18, 2015 at 7:11 PM, Jamie Johnson jej2...@gmail.com wrote:
Yes, my use case is security. Basically I am executing queries with
certain auths and when they are executed multiple times with differing
auths I'm getting cached results.
If it's just simple stuff like top N docs
when you say a security filter, are you asking if I can express my security
constraint as a query? If that is the case then the answer is no. At this
point I have a requirement to secure Terms (a nightmare I know). Our
fallback is to aggregate the authorizations to a document level and secure
On Tue, Aug 18, 2015 at 8:19 PM, Jamie Johnson jej2...@gmail.com wrote:
when you say a security filter, are you asking if I can express my security
constraint as a query? If that is the case then the answer is no. At this
point I have a requirement to secure Terms (a nightmare I know).
Heh -
Thanks, I'll try to delve into this. We are currently using the parent
query parser, within which we could use {!secure}, I think. Ultimately I would
want the Solr qparser to actually do the work of parsing and I'd just wrap
that. Are there any examples that I could look at for this? It's not
clear
to disable them all. Security?
-Yonik
On Tue, Aug 18, 2015 at 6:52 PM, Jamie Johnson jej2...@gmail.com wrote:
I see that if Solr is in realtime mode, caching is disabled within the
SolrIndexSearcher that is created in SolrCore, but is there any way to
disable caching without being in realtime mode
On Tue, Aug 18, 2015 at 8:38 PM, Jamie Johnson jej2...@gmail.com wrote:
I really like this idea in concept. My query would literally be just a
wrapper at that point, what would be the appropriate place to do this?
It depends on how much you are trying to make everything transparent
(that there
I see that if Solr is in realtime mode, caching is disabled within the
SolrIndexSearcher that is created in SolrCore, but is there any way to
disable caching without being in realtime mode? Currently I'm implementing
a NoOp cache that implements SolrCache but returns null for everything
I really like this idea in concept. My query would literally be just a
wrapper at that point, what would be the appropriate place to do this?
What would I need to do to the query to make it behave with the cache.
Again thanks for the idea, I think this could be a simple way to use the
caches.
On 8/18/2015 2:30 AM, Daniel Collins wrote:
I think this is expected. As Shawn mentioned, your hard commits have
openSearcher=false, so they flush changes to disk, but don't force a
re-open of the active searcher.
By contrast, softCommit sets openSearcher=true; the point of softCommit is
to
On Mon, Aug 17, 2015 at 4:36 PM, Daniel Collins danwcoll...@gmail.com wrote:
we had to turn off
ALL the Solr caches (warming is useless at that kind of frequency
Warming and caching are related, but different. Caching still
normally makes sense without warming, and Solr is generally written
On 8/17/2015 7:04 AM, Maulin Rathod wrote:
We have observed that querying intermittently becomes slower when
the documentCache becomes empty. The documentCache is flushed whenever a
new document is added to the collection.
Is there any way by which we can ensure that newly added documents are
Hi,
We are using SolrCloud 5.2.
We have observed that querying intermittently becomes slower when the
documentCache becomes empty. The documentCache is flushed whenever a new
document is added to the collection.
Is there any way by which we can ensure that newly added documents are