Hello,
Can we create a collection across data centers (with the shard replicas in
different data centers)
for HA?
Thanks
Revas
Thanks, Erick. It's just that when we enable both indexed=true and
docValues=true, it increases the indexing time by at least 2x for a full re-index.
On Wed, May 20, 2020 at 2:30 PM Erick Erickson
wrote:
> Revas:
>
> Facet queries are just queries that are constrained by the total result
>
Erick, can you also explain how to optimize facet queries and range facets,
as they don't use docValues and contribute to higher response times?
On Tue, May 19, 2020 at 5:55 PM Erick Erickson
wrote:
> They are _absolutely_ able to be used together. Background:
>
> “In the bad old days”, there was no
re spaced far enough apart that the warming completes before a new
> searcher starts warming.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
>
> On Mon, May 4, 2020 at 10:27 AM Revas wrote:
>
> > Hi Erick, Thanks for the explanation and advice. With face
t get adequate response time even under fairly light query loads
> as a general rule.
>
> Best,
> Erick
>
> > On Apr 16, 2020, at 12:08 PM, Revas wrote:
> >
> > Hi Erick, You are correct, we have only about 1.8M documents so far and
> > turning on the inde
t I think that’s the root issue here.
>
> Best,
> Erick
>
> > On Apr 14, 2020, at 11:51 PM, Revas wrote:
> >
> > We have faceting fields that have been defined as indexed=false,
> > stored=false and docValues=true
> >
> > However we use a lot of subfa
We have faceting fields that have been defined as indexed=false,
stored=false and docValues=true.
However, we use a lot of subfacets via JSON facets, and facet ranges
using facet.queries. We see that after every soft commit our performance
worsens, and it is ideal between commits.
Why is that?
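For reference, a minimal sketch of the kind of JSON subfacet request being described; the field and facet names here are made-up placeholders, not from the original thread:

```json
{
  "query": "*:*",
  "facet": {
    "categories": {
      "type": "terms",
      "field": "category",
      "facet": {
        "avg_price": "avg(price)"
      }
    }
  }
}
```

Each soft commit opens a new searcher, so caches backing requests like this start cold, which is consistent with the slowdown right after commits.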
Hi,
I am seeing searchers referenced in my logs as "main" and "realtime". Do they
correspond to hard vs. soft commit? I do not see a correlation with that based
on our commit settings.
Opening [Searcher@538abc62[xx_shard1_replica2] main]
Opening [Searcher@2e151991[ xx _shard1_replica1] realtime]
r-powered
> machine. The hard commit will trigger segment merging, which is CPU and
> I/O intensive. If
> you’re using a machine that can’t afford the cycles to be taken up by
> merging, that could account
> for what you see, but new searchers are being opened every 2 seconds
>
> autowarming. That should smooth out
> the delay your users experience when commits happen.
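A minimal sketch of what such warming configuration can look like in solrconfig.xml; the cache sizes, class names, and the warming query are placeholders (cache class names vary by Solr version), not a recommendation:

```xml
<query>
  <!-- Copy entries from the old searcher's caches into the new one -->
  <filterCache class="solr.FastLRUCache" size="512" autowarmCount="64"/>
  <!-- Run representative queries against the new searcher before it serves traffic -->
  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst><str name="q">*:*</str><str name="facet">true</str></lst>
    </arr>
  </listener>
</query>
```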
>
> Best,
> Erick
>
> > On Mar 30, 2020, at 4:06 PM, Revas wrote:
> >
> > Thanks, Eric.
> >
> > 1) We are using dynamic string field for faceting where indexing
secs.
On Mon, Mar 30, 2020 at 4:06 PM Revas wrote:
> Thanks, Eric.
>
> 1) We are using a dynamic string field for faceting, with indexed=false
> and stored=false. By default docValues are enabled for primitive fields
> (Solr 6.6), so they are not explicitly defined in the schema. Do you
double check <1> first.
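For reference, an explicit declaration of the kind of docValues faceting field being discussed might look like this in schema.xml; the name pattern is an assumption for illustration:

```xml
<!-- Sketch: a dynamic string field used only for faceting,
     backed by docValues rather than the inverted index -->
<dynamicField name="*_facet" type="string"
              indexed="false" stored="false" docValues="true"/>
```

Declaring it explicitly makes the intent visible instead of relying on version-dependent defaults.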
>
> Best,
> Erick
>
> > On Mar 30, 2020, at 12:20 PM, sujatha arun wrote:
> >
> > A facet-heavy query which uses docValues fields for faceting returns
> > about 5k results and executes in between 10 ms and 5 secs, and the 5-sec
> > times seem to coincide with right after a hard commit.
> >
> > Does that have any relation? Why the fluctuation in execution time?
> >
> > Thanks,
> > Revas
>
>
Thanks for the response. What happens in this scenario?
Does the commit happen in this case, or does the search server hang, or does
it just throw an error without committing?
Regards
Sujatha
On Mon, May 3, 2010 at 11:41 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: When I run 2-3 commits
Hello,
I would like to know if, by just copying the solr.war file to my existing
Solr 1.3 installation, the Lucene version is also upgraded to the current 2.9?
I believe a reindex is not necessary, is that correct?
Is there anything else apart from this that I need to do to upgrade to the
latest?
Thanks, Erik.
On Fri, Jan 8, 2010 at 4:34 PM, Erik Hatcher erik.hatc...@gmail.com wrote:
On Jan 8, 2010, at 4:14 AM, revas wrote:
I would like to know if by just copying the solr.war file to my existing
solr 1.3 installation ,lucene version is also upgraded to current 2.9 ?
Yes
</str>
<str name="querystring">simple:peRsonal</str>
<str name="parsedquery">MultiPhraseQuery(simple:(person pe) rsonal)</str>
<str name="parsedquery_toString">simple:(person pe) rsonal</str>
What is this MultiPhraseQuery? Why is this a phrase query instead of a simple
query?
Regards
Revas
and the index
analyzer, and I am assuming that this will be used by both the
query and index analyzers. Is this correct?
Regards
Revas
the above
For German language analysis, am I to use the standard analyzer with the
German filter factory and German stemmers?
Are there more language-specific tokenizers in Lucene, and if so, what are
the steps to integrate them into Solr?
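A sketch of what such a German field type can look like in schema.xml; the type name is made up, and the exact filter factories available depend on your Solr/Lucene version:

```xml
<!-- Sketch: standard tokenizer plus German-specific filters -->
<fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.GermanNormalizationFilterFactory"/>
    <filter class="solr.GermanLightStemFilterFactory"/>
  </analyzer>
</fieldType>
```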
Regards
Revas
Hi Michael,
What is GNU gettext, and how can it be used in a multilanguage scenario?
Regards
Revas
On Wed, Jun 10, 2009 at 8:10 PM, Michael Ludwig m...@as-guides.com wrote:
Manepalli, Kalyan schrieb:
Hi,
I am trying to customize the response that I receive from Solr. In the
index I have
:15 AM, revas revas...@gmail.com wrote:
1)Does the spell check component support all languages?
SpellCheckComponent relies on Lucene/Solr analyzers and tokenizers. So if
you can find an analyzer/tokenizer for your language, spell checker can
work.
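To make that concrete, a minimal sketch of a spellcheck component in solrconfig.xml; the component name, field, and spellchecker implementation are placeholders, and the point is that the analysis chain of the chosen field determines the language handling:

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <!-- The analyzer of this field's type drives tokenization for suggestions -->
    <str name="field">content</str>
    <str name="classname">solr.IndexBasedSpellChecker</str>
  </lst>
</searchComponent>
```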
2) I have a scenario where I have
PM, revas revas...@gmail.com wrote:
But the spell check component uses the n-gram analyzer and hence should work
for any language, is this correct? Also, we can refer to an external
dictionary for suggestions; could this be in any language?
Yes it does use n-grams but there's an analysis step
On Sat, Jun 6, 2009 at 11:40 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Sat, May 30, 2009 at 9:48 AM, revas revas...@gmail.com wrote:
Hi ,
When I give a query like the following, why does it become a phrase query
as shown below?
The field type is the default text.
, will the open
files be closed automatically, or would I have to reindex to close the open
files, or how do I close the already opened files? This is on Linux with Solr
1.3 and Tomcat 5.5.
Regards
Revas
translate PHP function), but
again not on Windows OS.
Any pointers
Regards
Revas
Regards
Revas
What is the drawback in using the compound file format for indexing when we
have several webapps in a single container?
Regards
Sujatha
the above languages. Will search be the
same across all the above cases?
thanks
revas
Hi ,
With respect to language support in Solr, we have analyzers for some
languages and stemmers for certain languages. Do we say that Solr supports a
particular language only if we have both an analyzer and a stemmer for that
language, or also when we have an analyzer but not a stemmer?
Regards
Hi,
I typically issue a facet drill-down query thus:
q=somequery AND facetfield:facetval.
Are there any issues with the above approach, as opposed to
fq=facetfield:value, in terms of memory consumption and the use of the cache?
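Side by side, the two forms look like this (parameter values are placeholders from the question, not real field names):

```
# Constraint folded into the main query: it affects relevance scoring and is
# cached only as part of this exact query in the queryResultCache
q=somequery AND facetfield:facetval

# Constraint as a filter query: no effect on scoring; the filter's document
# set is cached independently in the filterCache and reused across queries
q=somequery&fq=facetfield:facetval
```

The fq form is generally preferred for drill-down because the cached filter can be shared by every query that applies the same facet value.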
Regards
Sujatha
If I don't explicitly set any default query in solrconfig.xml for
caching and make use of the default config file, does Solr do the caching
automatically based on the query?
Thanks
PM, revas revas...@gmail.com wrote:
Hi Erik,
I have now commented out the query-time stopword analyzer. I restarted the
server. But now when I search for a stop word, I am getting results.
We had earlier indexed the content with the stopword analyzer. I don't think
we need to reindex.
AM, revas revas...@gmail.com wrote:
Hi,
I have a query like this
content:the AND iuser_id:5
which means: return all docs of user id 5 which have the word "the" in
content. Since 'the' is a stop word, this query executes as just user_id:5
in spite of the AND clause, whereas
Hi,
I have a query like this:
content:the AND iuser_id:5
which means: return all docs of user id 5 which have the word "the" in
content. Since 'the' is a stop word, this query executes as just user_id:5
in spite of the AND clause, whereas the expected result here is, since there
is no result for
Hi,
If I were to add a second server for sharding once the first server reaches
its limit, and then I need to update a document, how can I figure out on
which server the document is located?
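With manual sharding (before SolrCloud's automatic routing), a common approach is to derive the target shard from a hash of the document's unique key, so the same id always maps to the same server. A minimal sketch, where the id values and shard count are made up for illustration:

```python
import hashlib

def shard_for(doc_id: str, num_shards: int) -> int:
    """Deterministically map a document id to a shard index."""
    # Hash the unique key so the mapping is stable across processes
    digest = hashlib.md5(doc_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Updates for a given id always go to the shard that originally indexed it
assert shard_for("doc-42", 2) == shard_for("doc-42", 2)
```

Note the caveat of modulo hashing: changing num_shards remaps most documents, which is one reason to plan the shard count up front in this scheme.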
Regards
Sujatha
The Luke request handler returns all the tokens from the index, is this
correct?
On 3/5/09, revas revas...@gmail.com wrote:
We will be using SQLite for the db. This can be used for a CD version where
we need to provide search.
On 3/5/09, Grant Ingersoll gsing...@apache.org wrote:
On Mar 5
Hi,
I just want to confirm my understanding of the Luke request handler.
It gives us the raw Lucene index tokens on a field-by-field basis.
What should the query be to return all tokens for a field?
Is there any way to return all the tokens across all fields?
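For reference, requests of this kind look roughly like the following; the handler path assumes Luke is registered at its usual /admin/luke location, and the field name is a placeholder:

```
# All fields, with the top indexed terms for each (numTerms caps how many)
/admin/luke?numTerms=100

# Top terms for a single field
/admin/luke?fl=myfield&numTerms=1000
```

Note that numTerms returns the highest-frequency terms up to the given limit, not a guaranteed exhaustive list of every token.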
Regards
Revas
Zend Lucene is not able to
open the Solr index files, the error being "unsupported format".
The final option is to reindex using Zend Lucene and read the index tokens,
but then facets are not supported by Zend Lucene.
Has anybody done something similar? Please share your thoughts or pointers.
Regards
Revas
Hi,
If I need to change the Lucene version of Solr, how can we do this?
Regards
Revas
25, 2009 at 11:42 AM, revas revas...@gmail.com wrote:
Thanks, will try that. I also have the war file for each Solr instance in
the home directory of the instance; would that be the problem?
If I were to have a common war file for n instances, would there be any
issue?
regards
We will be using SQLite for the db. This can be used for a CD version where
we need to provide search.
On 3/5/09, Grant Ingersoll gsing...@apache.org wrote:
On Mar 5, 2009, at 3:10 AM, revas wrote:
Hi,
I have a requirement where I need to search offline. We are thinking of
doing
Hi
I am sure this question has been repeated many times over and there have
been several generic answers, but I am looking for specific answers.
I have a single server whose configuration I give below; this being the only
server we have at present, the requirement is that every time we create a new
Thanks, will try that. I also have the war file for each Solr instance in the
home directory of the instance; would that be the problem?
If I were to have a common war file for n instances, would there be any
issue?
regards
revas
On 2/25/09, Michael Della Bitta mdellabi...@gmail.com wrote:
It's