Software Engineer, Director
Sease Ltd. - www.sease.io
--
View this message in context:
http://lucene.472066.n3.nabble.com/Help-with-facet-limit-tp4331971p4332162.html
Sent from the Solr - User mailing list archive at Nabble.com.
The only two canned orderings are "index", which means lexically
ordered, and the default, frequency; with the default, the top 500 most
frequent facet values will be returned.
You can always specify facet.query=XXX and I think they are returned
in the order you define the facets. If you have a small number of
facets you
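To guarantee that specific values come back regardless of facet.limit, one sketch of the facet.query approach described above (field and value names here are hypothetical):

```text
select?q=*:*&facet=true
  &facet.field=category
  &facet.limit=500
  &facet.query=category:"Value I Always Need"
  &facet.query=category:"Another Required Value"
```

The facet.query counts are returned in their own facet_queries section of the response, so they come back independently of the facet.limit on the field facet.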
Hi Team,
I am using a facet on a particular field along with facet.limit=500. The
problem I am facing is:
1. There are more than 500 facet values and Solr gives me only 500 results.
I want particular facet values to be returned, i.e. can I tell Solr to
return 500 facet values along with the ones I require?
eg facets
I tried that, but it returned no results.
I understand now that the issue is that since the field has been tokenized
- searching for "*san\ *" will try to search for individual tokens which
contain the string sequence "san ", and so of course it won't find any.
I think I've found another
This can be done with escaping space
select?q=field:*san\ *
Probably sow=false in newer versions might also help
On Mon, Apr 17, 2017 at 4:42 PM, OTH wrote:
> If I submit the query:
> "select?q=field:*san*"
> Then it works as expected; returning all values in the field
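The two workarounds mentioned in this thread, side by side (the field name follows the thread's example; sow=false is only available in newer Solr releases):

```text
select?q=field:*san\ *            (escape the space)
select?q=field:*san\ *&sow=false  (also stop pre-splitting the query on whitespace)
```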
Ok. What analyzer / fieldtype should I use to be able to search across
tokens?
Basically, I'm just trying to replicate the functionality of the
AnalyzingInfixLookupFactory Suggester, but I need to do it using a regular
index, because I need to utilize multiple fields using edismax bq, which
seems
Use the analyser available in the solr admin console to find out exactly
how your query is analysed. That should give you a lot more information.
On Mon 17 Apr, 2017, 21:58 OTH, wrote:
> Ok, I get it now, it's because the field has been indexed as tokens. So
> maybe I
Ok, I get it now, it's because the field has been indexed as tokens. So
maybe I should use a field type that is not tokenized at index time? I'll
try something like that. Thanks
On Mon, Apr 17, 2017 at 9:16 PM, OTH wrote:
> The field type is "text_general".
The field type is "text_general".
On Mon, Apr 17, 2017 at 7:15 PM, Binoy Dalal wrote:
> I think it returns everything because your query matches *san or " *".
> What is your field type definition?
>
> On Mon 17 Apr, 2017, 19:12 OTH, wrote:
I think it returns everything because your query matches *san or " *".
What is your field type definition?
On Mon 17 Apr, 2017, 19:12 OTH, wrote:
> If I submit the query:
> "select?q=field:*san*"
> Then it works as expected; returning all values in the field which contain
If I submit the query:
"select?q=field:*san*"
Then it works as expected; returning all values in the field which contain
the string "san".
However if I submit:
"select?q=field:*san *"
It then seems to return all the values of the field, regardless of what the
value is (!)
I only wish in this
I see, thanks. So I'm just using a string field to store the JSON.
On Sat, Apr 15, 2017 at 11:15 PM, Walter Underwood
wrote:
> Sorry, that was formatted. The quotes are actually escaped, like this:
Sorry, that was formatted. The quotes are actually escaped, like this:
{"term":"microsoft office","weight":14,"payload":"{\"count\": 1534255,
\"id\": \"microsoft office\"}"}
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
JSON does not have a binary data type, so true BLOBs are not possible in JSON.
Sorry, I wasn’t clear.
The payload I use is JSON in a string. It looks like this:
suggest: {
  skill_names_infix: {
    m: {
      numFound: 10,
      suggestions: [
        {
          term: "microsoft office",
          weight: 14,
          payload: "{"count": 1534255,
Hi - just wondering, what would be the difference between using a blob /
binary field to store the JSON rather than simply using a string field?
Thanks
On Sat, Apr 15, 2017 at 2:50 AM, Walter Underwood
wrote:
> We recently needed multiple values in the payload, so I put a
Great! That's what I was about to resort to do, but thanks for the
confirmation!
On Sat, Apr 15, 2017 at 2:50 AM, Walter Underwood
wrote:
> We recently needed multiple values in the payload, so I put a JSON blob in
> there. It comes back as a string, so you have to
We recently needed multiple values in the payload, so I put a JSON blob in
there. It comes back as a string, so you have to decode that JSON separately.
Otherwise, it was a pretty clean solution.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
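Decoding that payload client-side is one json.loads call; a minimal sketch using the escaped string from the message above:

```python
import json

# The suggester returns the payload as a plain string; the inner JSON
# (with escaped quotes, as in Walter's example) must be decoded client-side.
payload = "{\"count\": 1534255, \"id\": \"microsoft office\"}"
data = json.loads(payload)
print(data["count"])  # 1534255
print(data["id"])     # microsoft office
```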
Thanks, that works! But is it possible to have multiple payloadFields?
On Sat, Apr 15, 2017 at 1:23 AM, Marek Tichy wrote:
> Utilize the payload field.
Utilize the payload field.
> I don't need to search multiple fields; I need to search just one field but
> get the corresponding values from another field as well.
> I.e. if a user is searching for cities, I wouldn't need the countries to
> also be searched. However, when the list of cities is
I don't need to search multiple fields; I need to search just one field but
get the corresponding values from another field as well.
I.e. if a user is searching for cities, I wouldn't need the countries to
also be searched. However, when the list of cities is displayed, I need
their corresponding
You can create a copy field and copy to it from all the fields you want to
retrieve the suggestions from and then use that field with the suggester.
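A minimal schema sketch of that copyField setup (field and type names are hypothetical):

```xml
<!-- one catch-all field the suggester reads from -->
<field name="suggest_text" type="text_general" indexed="true" stored="true" multiValued="true"/>
<copyField source="city" dest="suggest_text"/>
<copyField source="country" dest="suggest_text"/>
```

The suggester component in solrconfig.xml would then point its field parameter at suggest_text.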
On Thu 13 Apr, 2017, 23:21 OTH, wrote:
> Hello,
Hello,
I've followed the steps here to set up auto-suggest:
https://lucidworks.com/2015/03/04/solr-suggester/
So basically I configured the auto-suggester in solrconfig.xml, where I
told it which field in my index needs to be used for auto-suggestion.
The problem is:
When the user searches in
Dear Sir/Madam, I am Li Wei, from China, and I'm writing to you for your help.
Here is the problem I encountered:
There is a timing task set at night in our project which uses Lucene to build
index for the data from Oracle database. It was working fine at the beginning,
however
my search results.
>>> Precisely, I am required to maintain 3 buckets wherein documents with
>>> updated date falling in range of last 30 days should have maximum weight,
>>> followed by update date in 60 and 90 and the rest.
>>> However in cases where update date is unavailable I need to sort it using
>>> created date.
>>> I am not sure how do I achieve this.
>>> Any insights here would be a great help.
>>> Thanks in advance.
>>> Regards,
>>> Atita
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
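One hedged sketch for the bucketing question above (field names assumed; this is one possible approach, not a confirmed solution): index a single "effective date" field that holds the updated date when present and the created date otherwise, then apply boost queries over date ranges against it:

```text
bq=effective_date:[NOW-30DAYS TO NOW]^3.0
bq=effective_date:[NOW-60DAYS TO NOW-30DAYS]^2.0
bq=effective_date:[NOW-90DAYS TO NOW-60DAYS]^1.5
```

Populating the effective-date field could be done at index time, e.g. with a copyField or an update request processor.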
message, the mailing list is likely to delete them. I haven't been able
to figure out what makes some attachments get through while others
don't. It's best to use a paste website or a file sharing site, and
provide one or more URLs.
If we can find a problem with *Solr* running on Tomcat, I will attempt
to help you solve it. I can't guarantee that I will be successful,
because I don't
has
finished in 8,162 ms
--
View this message in context:
http://lucene.472066.n3.nabble.com/Running-Solr-6-3-on-Tomcat-Help-Please-tp4320874p4321007.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 2/16/2017 11:31 PM, Prashant Saraswat wrote:
> *On Solr 6.3 onwards, the following logs are displayed:*
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
>
Hi Guys,
I understand that this configuration is unsupported. However, this was
working until 6.2. I have looked at the changes document for both solr and
lucene for 6.3.0 but I can't figure out what has changed. Can someone point
me in the right direction?
Here are the details.
The following
From: <karl.kil...@gmail.com>
Reply: solr-user@lucene.apache.org
Date: February 6, 2017 at 5:57:41 AM
To: solr-user@lucene.apache.org
Subject: Help with design choice: join or
Hello!
I have Items and I have Shops. This is an e-commerce system with items from
thousands of shops, although the inventory is often similar between shops.
Some users can shop from any shop and some only from their default one.
One item can exist in about 1 shops.
- When a user logs
Solr _does_ have a query parser that doesn't suffer from this problem --
SimpleQParser, chosen with the string "simple".
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-SimpleQueryParser
In this case, see the "WHITESPACE" operator feature which can be toggled.
Configure to
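A sketch of that toggle (operator list per the Solr reference docs; enabling everything except WHITESPACE stops spaces from acting as an operator, so the whole string reaches the query analyzer):

```text
q={!simple q.operators="AND,OR,NOT,PREFIX,PHRASE,PRECEDENCE,ESCAPE,FUZZY,NEAR"}United States
```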
Steve and Shawn, thanks for your replies/explanations!
I eagerly await the completion of the Solr JIRA ticket referenced above in
a future release. Many thanks for addressing this challenge that has had
me banging my head against my desk off and on for the last couple years!
Cliff
On Thu, Feb
Hi Cliff,
The Solr query parsers (standard/“Lucene” and e/dismax anyway) have a problem
that prevents SynonymGraphFilter from working: the text fed to your query
analyzer is first split on whitespace. So e.g. a query containing “United
States” will never match multi-word synonym “United
On 2/2/2017 7:36 AM, Cliff Dickinson wrote:
> The SynonymGraphFilter API documentation contains the following statement
> at the end:
>
> "To get fully correct positional queries when your synonym replacements are
> multiple tokens, you should instead apply synonyms using this TokenFilter
> at
oTermAutomatonQuery."
How do I use TokenStreamToTermAutomatonQuery, or can this not be configured
in Solr, but only by writing code against Lucene? Would this even address
my issue?
I've found synonyms to be very frustrating in Solr and am hoping this new
filter will be a big improvement. Thanks in advance for the help!
On 2/2/2017 6:16 AM, deepak.gha...@mediawide.com wrote:
Hello Sir,
I am writing a query for getting a response from specific index content first.
eg.
http://192.168.200.14:8983/solr/mypgmee/select?q=*blood*=id:(*/939/* OR
**)=id=json=true
In the above query I am getting a response. Suppose I get 4 results for course
"939" out of 10. It works fine by
Is there anyone to help me with my issue?
Your help is much appreciated
I figured out the problem but need a solution
In my below data-config file tikaConfig.xml is not recognized by zookeeper (
processor="TikaEntityProcessor" tikaConfig="
to do is not supported in ZooKeeper mode");
}
https://github.com/apache/lucene-solr/blob/branch_6_3/solr/core/src/java/org/apache/solr/cloud/ZkSolrResourceLoader.java
Can someone help me with a workaround
ERROR :
2017-02-01 16:39:55.932 ERROR (Thread-20) [c:dsearch s:shard2 r:
ment(DocBuilder.java:414)
>
> ... 6 more
>
> Caused by: org.apache.solr.common.cloud.ZooKeeperException:
> ZkSolrResourceLoader does not support getConfigDir() - likely, what you are
> trying to do is not supported in ZooKeeper mode
>
> at
> org.apache.solr.cloud.ZkSolrResourceLoader.getConfigDir(ZkSolrResourceLoa
)
at
org.apache.solr.handler.dataimport.TikaEntityProcessor.firstInit(TikaEntityProcessor.java:91)
... 12 more
I have attached the code for your reference
Could you please help me with the solution
Regards,
~Sri
On 1/18/2017 10:18 AM, Abhijit Pawar wrote:
> One thing that popped in my mind is I saw your code wherein the
> password for mysql is not included in quotes or double quotes.
> password=REDACTED
> Whereas mine were included.
>
> password="*<>*"
>
> Do you think that could be a possible issue
On 1/16/2017 1:04 PM, Abhijit Pawar wrote:
> Hello,
>
> Need your help on one small problem I am facing in SOLR.
>
> I have added authentication for our mongodb database in data-source-config
> file in SOLR.
> rating,updatedAt,comparable,hide_price FROM
> products':ja
Hello,
Need your help on one small problem I am facing in SOLR.
I have added authentication for our mongodb database in data-source-config
file in SOLR.
This is the configuration I have done :
>*.*<>*]
Any idea what could be the issue? Appreciate your help!!!
Thank You
anuary 10, 2017 10:51 AM
To: solr-user
Subject: Re: Help needed in breaking large index file into smaller ones
Hi Erick,
It's due to some past issues observed with Joins on Solr 4, which got OOM on
joining to large indexes after optimization/compaction, if those are stored as
smaller files those ge
Why do you have a requirement that the indexes be < 4G? If it's
arbitrarily imposed why bother?
Or is it a non-negotiable requirement imposed by the platform you're on?
Because just splitting the files into a smaller set won't help you if
you then start to index into it, the merge proc
both locations separately.
> Perhaps shard splitting in SolrCloud does something like that.
>
Hi,
Apologies for my response, did not read the question properly.
I was speaking about splitting files for import
-Original Message-
From: billnb...@gmail.com [mailto:billnb...@gmail.com]
Sent: 09 January 2017 05:45 PM
To: solr-user@lucene.apache.org
Subject: Re: Help needed
N file which will refer index files in a
> commit. So, when we split a large index file into smaller ones, the
> corresponding segment_NN file also needs to be updated with new index files
> OR a new segment_NN file should be created, probably.
>
> Can someone who is familiar with lucene
OR a new segment_NN file should be created, probably.
Can someone who is familiar with lucene index files please help us in this
regard?
Thanks
NRC
On Mon, Jan 9, 2017 at 7:38 PM, Manan Sheth <manan.sh...@impetus.co.in>
wrote:
Does this really work for Lucene index files?
Thanks,
Manan Sheth
From: Moenieb Davids <moenieb.dav...@gpaa.gov.za>
Sent: Monday, January 9, 2017 7:36 PM
To: solr-user@lucene.apache.org
Subject: RE: Help needed in breaking large index file into smalle
From: Narsimha Reddy CHALLA <chnredd...@gmail.com>
Sent: Monday, January 9, 2017 3:51 PM
To: solr-user@lucene.apache.org
Subject: Help needed in breaking large solr index file into smaller ones
Hi All,
My solr server has a few large index files (say ~10G). I am looking
for some help on breaking them into smaller ones (each < 4G) to satisfy
my application requirements. Basically, I am not looking for any
optimization of index here (ex: optimize, expungeDeletes
Hi All,
My solr server has a few large index files (say ~10G). I am looking
for some help on breaking them into smaller ones (each < 4G) to satisfy
my application requirements. Are there any such tools available?
Appreciate your help.
Thanks
NRC
Hello Shahi, would you clarify your requirement or issue from a Solr
perspective? From the above it's not clear what you are asking.
You use Solr for indexing some data which later you can search upon.
Keeping this in mind, can you elaborate what kind of data you are indexing
and what are you
Hello Team,
I am looking for your valuable suggestions/solutions for the below scenario:
> Scenario :
When any user gives a request by giving the name of the filename.zip, then he
wants the "filename.zip" zip file.
> Description:
*The data is a collection.zip where it consists of
Yeah, I'm curious why this thread is used to talk about that topic.
I'll start a new thread on my questions.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-cannot-provide-index-service-after-a-large-GC-pause-but-core-state-in-ZK-is-still-active-tp4308942p4310302.html
Sent
Afaik the only xml that nutch should be touching is its own config files. This
error shows up in solr admin
Sent from my iPhone
> On Dec 16, 2016, at 1:55 AM, Reth RM wrote:
>
> Are you indexing xml files through nutch? This exception purely looks like
> processing of
Are you indexing xml files through nutch? This exception looks like it
comes from processing an incorrectly formatted XML file.
On Mon, Dec 12, 2016 at 11:53 AM, KRIS MUSSHORN
wrote:
> ive scoured my nutch and solr config files and I cant find any cause.
> suggestions?
> Monday,
Hi Friends,
I'm new to solr, been working on it for the past 2-3 months trying to
really get my feet wet with it so that I can transition the current search
engine at my current job to solr. (Eww sphinx haha) anyway I need some
help. I was running around the net getting my suggester working
sorry my mistake.. sent to wrong list.
- Original Message -
From: "Shawn Heisey" <apa...@elyograg.org>
To: solr-user@lucene.apache.org
Sent: Monday, December 12, 2016 2:36:26 PM
Subject: Re: regex-urlfilter help
On 12/12/2016 12:19 PM, KRIS MUSSHORN wrote:
&
I've scoured my nutch and solr config files and I can't find any cause.
Suggestions?
Monday, December 12, 2016 2:37:13 PM ERROR null RequestHandlerBase
org.apache.solr.common.SolrException: Unexpected character '&' (code 38) in
epilog; expected '<'
I'm using nutch 1.12 and Solr 5.4.1.
Crawling a website and indexing into nutch.
AFAIK the regex-urlfilter.txt file will cause content to not be crawled..
what if I have
https:///inside/default.cfm as my seed url...
I want the links on this page to be crawled and indexed but I
I think this will work. I'll try it tomorrow and let you know.
Thanks for the help, Erik and Shawn
Kris
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: Thursday, December 8, 2016 2:43 PM
To: solr-user@lucene.apache.org
Subject: Re: prefix query help
It’s hard
q=metadata.date:(2016-06* OR 2014-04*) as
you’ve got it, but you said that sort of thing wasn’t working (debug output
would help suss that issue out).
If you did index those strings cleaner as YYYY-MM to accommodate the types of
query you’ve shown then you could do q=metadata.date:(2016-06 OR 2014-04), or
q
It's wonky but it's what I have to deal with until the content is
cleaned up.
I can't use a date type... that would make my life too easy.
TIA again
Kris
- Original Message -
From: "Erik Hatcher" <erik.hatc...@gmail.com>
To: solr-user@lucene.apache.org
Sent: Thursday, Decembe
On 12/8/2016 10:02 AM, KRIS MUSSHORN wrote:
>
> Here is how I have the field defined... see attachment.
You're using a tokenized field type.
For the kinds of queries you asked about here, you want to use StrField,
not TextField -- this type cannot have an analysis chain and indexes to
one token
Kris -
To chain multiple prefix queries together:
q=({!prefix f=field1 v='prefix1'} {!prefix f=field2 v='prefix2'})
The leading paren is needed to ensure it’s being parsed with the lucene qparser
(be sure not to have defType set, or a variant would be needed) and that allows
multiple {!…}
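Applied concretely, a hypothetical two-field example (field names and prefixes are placeholders, not from the thread):

```text
select?q=({!prefix f=title v='abc'} {!prefix f=author v='xyz'})
```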
Here is how I have the field defined... see attachment.
- Original Message -
From: "Erick Erickson" <erickerick...@gmail.com>
To: "solr-user" <solr-user@lucene.apache.org>
Sent: Thursday, December 8, 2016 10:44:08 AM
Subject: Re: prefix query hel
You'd probably be better off indexing it as a "string" type given your
expectations. Depending on the analysis chain (do take a look at
admin/analysis for the field in question) the tokenization can be tricky
to get right.
Best,
Erick
On Thu, Dec 8, 2016 at 7:18 AM, KRIS MUSSHORN
Im indexing data from Nutch into SOLR 5.4.1.
I've got a date metatag that I have to store as text type because the data
stinks.
It's stored in SOLR as field metatag.date.
At the source the dates are formatted (when they are entered correctly) as
YYYY-MM-DD
q=metatag.date:2016-01* does
base
> -Do I need to write lot of stub java code to integrate SOLR?
>
> please advise.
>
> Thanks,
> Venkat
>
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/SOLR-index-help-SQL-Anywhere-16-MS-SQL-2014-tp4308542.html
> Sent from the Solr - User mailing list archive at Nabble.com.
Hi Shalin,
when the buffer is enabled, tlogs are not removed anymore, even if they
were replicated [1]:
"When buffering updates, the updates log will store all the updates
indefinitely. "
Once you disable the buffer, all the old tlogs should be cleaned (the
next time the tlog cleaning
Even if buffer is enabled, the old tlogs should be remove once the
updates in those tlogs have been replicated to the target. So the real
question is why they haven't been removed automatically?
On Thu, Dec 1, 2016 at 9:13 PM, Renaud Delbru wrote:
> Hi Thomas,
>
> Looks
Hi Thomas,
Looks like the buffer is enabled on the update log, and even if the
updates were replicated, they are not removed.
What is the output of the command `cdcr?action=STATUS` on both clusters?
If you see in the response `enabled`, then the
buffer is enabled.
To disable it, you
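For reference, the corresponding CDCR API calls (host and collection name are placeholders):

```text
http://host:8983/solr/<collection>/cdcr?action=STATUS
http://host:8983/solr/<collection>/cdcr?action=DISABLEBUFFER
```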
Regards,
Prasanna
-Original Message-
From: Michael Kuhlmann [mailto:k...@solr.info]
Sent: Thursday, November 24, 2016 4:29 PM
To: solr-user@lucene.apache.org
Subject: Re: Again : Query formulation help
Hi Prasanna,
there's no such filter out-of-the-box. It's similar to the mm parameter
_What_ issue? You haven't told us what the results are, what if anything
the Solr logs show when you try this, in short anything that could help
us diagnose the problem.
Solr has "atomic updates" that work to update partial documents, but
that requires that all fields be stored. Are
:(
Thanks Michael.
Regards,
Prasanna.
-Original Message-
From: Michael Kuhlmann [mailto:k...@solr.info]
Sent: Thursday, November 24, 2016 4:29 PM
To: solr-user@lucene.apache.org
Subject: Re: Again : Query formulation help
Hi Prasanna,
there's no such filter out-of-the-box. It's
Hi Prasanna,
there's no such filter out-of-the-box. It's similar to the mm parameter
in (e)dismax parser, but this only works for full text searches on the
same fields.
So you have to build the query on your own using all possible permutations:
fq=(code1: AND code2:) OR (code1: AND
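The permutation-building Michael describes can be sketched as follows (field names from the thread, values hypothetical); for 4 fields with a minimum match of 2, that is the 6 pairwise combinations:

```python
from itertools import combinations

# Hypothetical wanted values for the four code fields; the real values
# would come from the user's request.
wanted = {"code1": 1001, "code2": 1002, "code3": 1003, "code4": 1004}

# Build one (fieldA:value AND fieldB:value) clause per pair of fields,
# then OR the clauses together: any 2-of-4 match satisfies the filter.
clauses = [
    "(" + " AND ".join(f"{f}:{wanted[f]}" for f in pair) + ")"
    for pair in combinations(sorted(wanted), 2)
]
fq = " OR ".join(clauses)  # 6 clauses for 4 fields choose 2
print(fq)
```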
Hi,
Need to formulate a distinctive field values query on 4 fields with minimum
match on 2 fields
I have 4 fields in my core
Code 1 : Values between 1001 to
Code 2 : Values between 1001 to
Code 3 : Values between 1001 to
Code 4 : Values between 1001 to
I want to
Hi Team,
Facing an issue updating multiple documents in Solr at a time in my batch job.
Could you please help me by giving an example or documentation for the same.
Thanks
Sankar Reddy M.B
{
"add": {
"doc": {
"quoteNumber": "133940",
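For batch updates, Solr's /update handler also accepts a plain JSON array of documents in a single POST (field names here are illustrative, borrowing quoteNumber from the fragment above):

```json
[
  {"id": "1", "quoteNumber": "133940"},
  {"id": "2", "quoteNumber": "133941"}
]
```

POST the array with Content-Type: application/json to /solr/<core>/update?commit=true.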
-Original Message-
From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
Sent: Wednesday, November 23, 2016 10:03 AM
To: solr-user <solr-user@lucene.apache.org>
Subject: Re: negation search help
Well, then 'no' becomes a signal token. So, the question is how