pache.org
> Subject: Re: Searching for an efficient and scalable way to filter query
> results using non-indexed and dynamic range values
>
> Hi,
>
> first of all, thank you for your answers.
>
> @ Rick: the reason is that the set of pages that are stored into the disk
> rep
And the coupon has no expiration date on it (LOL). Thank you again, Emir!
Best Regards,
Wendy
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hi Wendy,
You are welcome! I’ll put your lunch coupon in my wallet, just in case I get
hungry around NJ ;)
Regards,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> On 1 Feb 2018, at 16:26, Wendy2 wrote:
Excellent!!! Thank you so much for all your help, Emir!
Both worked now and I got 997 results back, the expected number :-)
/rcsb/search?q=method:"x-ray*" "Solution NMR"&mm=1
/rcsb/search?q=+method:"x-ray*" +"Solution NMR"&mm=1
I will keep this in my mind regarding query with multiple p
Hi Wendy,
The query now looks as expected, but you are not getting the results you expect.
The reason is edismax's mm parameter. You are setting it to 7 while you have only
two parts to match, so it always behaves like AND, and you don't have such
documents. You can set it to 1 and it will behave like OR.
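Emir's explanation can be sketched in Python as follows. This models the minimum-should-match idea as described above (mm at or above the clause count behaving like AND), not Solr's actual implementation:

```python
def matches(num_clauses, num_matched, mm):
    """Does a document satisfy minimum-should-match?  Per the explanation
    above, an mm at or above the clause count requires every clause
    (AND-like); mm=1 means any single matching clause suffices (OR-like)."""
    required = min(mm, num_clauses)
    return num_matched >= required

# mm=7 with two optional clauses: both must match
assert not matches(2, 1, 7)
assert matches(2, 2, 7)
# mm=1: one matching clause is enough
assert matches(2, 1, 1)
```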
Good morning, Emir,
Here is the debug output for case 1f-a (q=method:"x-ray*" "Solution NMR") and
1f-b (q=+method:"x-ray*" +"Solution NMR"); both returned zero counts. It
looks like the query strings are the same. Thanks for following up on my
post and your help! -- Wendy
Thanks, I think we'll go for extractOnly, because using a recent version of
Tika causes too many dependency issues.
On Thu, Feb 1, 2018 at 12:25 PM, Emir Arnautović <
emir.arnauto...@sematext.com> wrote:
> Hi Joris,
> I doubt that you can do that. That would require extracting req
Am 31.01.18 um 16:30 schrieb David Frese:
Am 29.01.18 um 18:05 schrieb Erick Erickson:
Try searching with the word "and" in lowercase. Somehow you have to allow
the parser to distinguish the two.
Oh yeah, the biggest unsolved problem in the ~80 years history of
programming languages... NOT ;-)
Hi Joris,
I doubt that you can do that. That would require the extracting request handler to
support incremental updating, and I don't think it does. In order to update an
existing doc, you would have to extract the content and send it as an incremental
update request.
You can still use extracting handler to ex
Hi
I'd like to update a single field of an existing document with the content
of a file.
My current setup looks like this:
final File file = new File("path to file");
ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
req.addContentStream(new ContentStreamBase.FileStream(file));
different - since idf will
be computed on all the documents that you have in the collection.
Cheers,
Diego
From: solr-user@lucene.apache.org At: 01/31/18 20:12:16To:
solr-user@lucene.apache.org
Subject: Re: Searching for an efficient and scalable way to filter query
results using non-indexed a
> On 1 Feb 2018, at 11:10, Alessandro Benedetti wrote:
>
> Reading from the wiki [1]:
>
> " An atomic update operation is performed using this approach only when the
> fields to be updated meet these three conditions:
>
>
Reading from the wiki [1]:
" An atomic update operation is performed using this approach only when the
fields to be updated meet these three conditions:
are non-indexed (indexed="false"), non-stored (stored="false"), single
valued (multiValued="false")
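The three quoted conditions can be expressed as a small check; the function name is made up, and the attribute defaults mirror Solr's schema defaults. (The Reference Guide additionally requires such fields to have docValues for this in-place path.)

```python
def in_place_eligible(field):
    """Check the three quoted conditions for a field definition given as a
    dict of schema attributes; defaults mirror Solr's (indexed and stored
    default to true, multiValued to false)."""
    return (not field.get("indexed", True)
            and not field.get("stored", True)
            and not field.get("multiValued", False))

assert in_place_eligible({"indexed": False, "stored": False})
assert not in_place_eligible({"indexed": False, "stored": False,
                              "multiValued": True})
assert not in_place_eligible({})  # schema defaults fail the conditions
```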
Hi Wendy,
I was thinking of the query q=method:"x-ray*" "Solution NMR"
This should be equivalent to one with OR between them. If you want to put AND
between those two, the query would be q=+method:"x-ray*" +"Solution NMR"
Emir
Hi Emir,
Listed below are the debugQuery outputs from query without "OR" operator. I
really appreciate your help! --Wendy
===DebugQuery outputs for case 1f-a, 1f-b without "OR" operator===
1f-a (/search?q=+method:"x-ray*" +method:"Solution NMR") result counts = 0:
tion, and/or to run other
experiments.
@ Alessandro: your approach of using a static and a dynamic index and then
to merge the results by means of query joins was what I had in mind at a
first glance. It could still do the job, but you already highlighted a
performance limitation on the static
Hi Wendy,
With "OR" surrounded by spaces, OR is interpreted as another search term. Can
you try without OR - just a space between the two parts? If you need AND, use +
before each part.
HTH,
Emir
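To make Emir's rule concrete, here is a small sketch (illustrative only; the clause values are taken from the thread) of how the two query forms are assembled:

```python
def combine(clauses, require_all=False):
    """Space-separated clauses are optional (OR-like, subject to mm);
    a '+' prefix marks each clause as required (AND-like)."""
    prefix = "+" if require_all else ""
    return " ".join(prefix + c for c in clauses)

clauses = ['method:"x-ray*"', '"Solution NMR"']
assert combine(clauses) == 'method:"x-ray*" "Solution NMR"'
assert combine(clauses, require_all=True) == '+method:"x-ray*" +"Solution NMR"'
```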
On Jan 31, 2018 6:24 PM, "Wendy2" wrote:
Hi Emir,
Thank you so much for following up with your ticket.
Listed belo
Hi Emir,
Thank you so much for following up with your ticket.
Listed below are the parts of debugQuery outputs via /search request
handler. The reason I used * in the query term is that there are a couple of
methods starting with "x-ray". When I used space surrounding the "OR"
boolean search opera
Am 29.01.18 um 18:05 schrieb Erick Erickson:
Try searching with the word "and" in lowercase. Somehow you have to allow
the parser to distinguish the two.
Oh yeah, the biggest unsolved problem in the ~80 years history of
programming languages... NOT ;-)
You _might_ be able to try "AND~2" (with quotes) to see if you can get that
through the parser.
future). I am
>setting
>authentication for solr. As Solr provided basic authentication is not
>working in Solr 6.4.2, I am setting up digest authentication in tomcat
>for
>Solr. I am able to login into Solr admin application using credentials.
>
>Now from my Java application, wh
generic experiment, I measure the
>time
>units as the number of crawling cycles completed so far, i.e., with an
>integer value. Finally, I evaluate the experiment by analyzing the
>documents fetched over the crawling cycles. In this work I am using
>Lucene
>7.2.1, but this should no
Hi Luigi,
What about using an updatable DocValue [1] for the field x ? you could
initially set it to -1,
and then update it for the docs in the step j. Range queries should still work
and the update should be fast.
Cheers
[1] http://shaierera.blogspot.com/2014/04/updatable-docvalues-under
I am not sure I fully understood your use case, but let me suggest a few
different possible solutions:
1) Query Time join approach : you keep 2 collections, one static with all
the pages, one that just stores lightweight documents containing the crawling
interaction :
1) Id, content -> Pages
2)pageId
t for
Solr. I am able to login into Solr admin application using credentials.
Now from my Java application, when I try to run a query, which will delete
documents in a core, it's throwing following error.
org.apache.http.client.NonRepeatableRequestException: Cannot retry request
with a non-
OR). Please paste results as text and not as a picture, and do not update the
original post, since some of us are using mail and are not getting the updates.
Thanks,
Emir
Hi Emir,
Thank you for reading my post and for your reply. I updated my post with
debug info and a better view of the definition of /search request handler.
Any suggestion on what I should try?
Thanks,
Wendy
Hi Emir,
Thank you so much for your response. I updated my post with an image which
displays the configuration of the /search request handler. Any suggestions?
Thanks,
Wendy
the
discovered links accordingly. In a generic experiment, I measure the time
units as the number of crawling cycles completed so far, i.e., with an
integer value. Finally, I evaluate the experiment by analyzing the
documents fetched over the crawling cycles. In this work I am using Lucene
7.2.1
Hi Wendy,
It is most likely that you need to list the fields that can appear in the query
using uf. The best way to see what is going on is to use debugQuery, where you
can see more details on how your query is parsed.
HTH,
Emir
On 30/01/2018 07:57, Mohammed.Adnan2 wrote:
Hello Team,
I am a beginner learning Apache Solr. I am trying to check the compatibility of
solr with SharePoint Online, but I am not getting anything concrete related to
this in the website documentation. Can you please help me in providing some
in
Hello Team,
I am a beginner learning Apache Solr. I am trying to check the compatibility of
solr with SharePoint Online, but I am not getting anything concrete related to
this in the website documentation. Can you please help me in providing some
information on this? How I can index my SharePoi
Hi Solr users, I am having an issue with boolean search and the Solr parser
edismax. The search "OR" doesn't work. The image below shows the different
results tested on different Solr versions. There are two types of search
request handlers, /select vs /search. The /select request handler uses the Lucene
default
Try searching with the word "and" in lowercase. Somehow you have to allow
the parser to distinguish the two.
You _might_ be able to try "AND~2" (with quotes) to see if you can get
that through the parser. Kind of a hack, but
There's also a parameter (depending on the parser) about lowercasing
oper
Hello everybody,
how can I formulate a fuzzy query that works for an arbitrary string, or
rather: is there a formal syntax definition somewhere?
I already found out by hand that
field:"val"~2
is read by the parser, but the fuzziness seems to get lost. So I write
field:val~2
Now if val contain s
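For context: field:val~2 is Lucene's fuzzy syntax (terms within edit distance 2), while field:"val"~2 is phrase slop on a one-term phrase, which is why the fuzziness "gets lost". A sketch of the criterion fuzzy matching applies (plain Levenshtein; Lucene uses automata internally, but for distances up to 2 the accepted term set is the same idea):

```python
def edit_distance(a, b):
    """Plain Levenshtein distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def fuzzy_matches(term, candidate, max_edits=2):
    return edit_distance(term, candidate) <= max_edits

assert fuzzy_matches("val", "vale")        # one insertion
assert fuzzy_matches("val", "value")       # two insertions
assert not fuzzy_matches("val", "values")  # three edits: too far
```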
On 1/27/2018 6:53 AM, SOLR4189 wrote:
1. You are right, due to memory and garbage collection issues I set each
shard on a different VM. So in my VM I have 50 GB RAM (10 GB for JVM and 40 GB
for index) and it works well for my use case. Maybe I don't understand
Solr terms, but if you say t
1. You are right, due to memory and garbage collection issues I set each
shard on a different VM. So in my VM I have 50 GB RAM (10 GB for JVM and 40 GB
for index) and it works well for my use case. Maybe I don't understand
Solr terms, but if you say to set one VM for 20 shards what does it mea
1. You could just have 2 VMs, one has all 20 shards of your collection, the
other one has the replicas for those shards. In this scenario, if one VM is
not available, you still have application availability as at least one
replica is available for each shard. This assumes that your VM can fit all
t
I use SOLR-6.5.1. I would like to use SolrCloud replicas. And I have some
questions:
1) What is the best architecture for this if my collection contains 20
shards and each shard is on a different VM? 40 VMs, where 20 are for leaders and
20 for replicas? Or maybe stay with 20 VMs where leader and replica
Hi,
let me see if I got your problem :
your "user specific" features are Query dependent features from Solr side.
The value of this feature depends on a query component (the user Id) and a
document component (the product Id).
You can definitely use them.
You can model this feature as a binary feature.
I have never been a big fan of "getting N results from Solr and then filtering
them client side".
I get your point about the document modelling, so I will assume you properly
tested it and having the small documents at Solr side is really not
sustainable.
I also appreciate the fact you want to fin
-user@lucene.apache.org
Subject: RE: Using lucene to post-process Solr query results
And you want to show to the users only the Lucene documents that matched the
original query sent to Solr? (what if a lucene document matches only part of
the query?)
From: solr-user@lucene.apache.org At: 01/23/18 13:55:46To:
Hi,
I am going through the learning to rank examples in Solr 7. In the examples, the
features are part of the searched document. Can I use Solr's learning to
rank system if my features are user specific? E.g., if searching for
products, I want to rank some products higher if they have been used by
cur
@lucene.apache.org
Subject: RE: Using lucene to post-process Solr query results
Hi Diego,
Basically, each Solr document has a text field which contains a large amount of
text separated by some delimiters. I split this text into parts and then assign
each part to a separate Lucene Document object.
The field
document
for each different value for that field in the same Solr document.
Regards,
Rahul
-Original Message-
From: Diego Ceccarelli (BLOOMBERG/ LONDON) [mailto:dceccarel...@bloomberg.net]
Sent: Tuesday, January 23, 2018 7:17 PM
To: solr-user@lucene.apache.org
Subject: Re: Using lucene to
ok at streaming expressions, looks interesting.
Regards,
Rahul Chhiber
-Original Message-
From: Atita Arora [mailto:atitaar...@gmail.com]
Sent: Tuesday, January 23, 2018 3:29 PM
To: solr-user@lucene.apache.org
Subject: Re: Using lucene to post-process Solr query results
Hi Rahul,
Looks
Rahul, can you provide more details on how you decide that the smaller lucene
objects are part of the same solr document?
From: solr-user@lucene.apache.org At: 01/23/18 09:59:17To:
solr-user@lucene.apache.org
Subject: Re: Using lucene to post-process Solr query results
Hi Rahul,
Looks like
Hi Rahul,
Looks like Streaming Expressions can probably help you.
Have you tried anything else for this?
Atita
On Jan 23, 2018 3:24 PM, "Rahul Chhiber"
wrote:
Hi All,
For our business requirement, once our Solr client (Java) gets the results
of a search query from the Solr ser
Hi All,
For our business requirement, once our Solr client (Java) gets the results of a
search query from the Solr server, we need to further search across and also
within the content of the returned documents. To accomplish this, I am
attempting to create on the client-side an in-memory lucene
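Rahul's splitting step could look something like this sketch; the field names, delimiter, and sub-document shape are made up for illustration:

```python
def split_into_subdocs(solr_doc, text_field="content", delimiter="|"):
    """Split one Solr document's delimited text field into lightweight
    sub-documents that can then be indexed client-side and searched
    individually."""
    parts = solr_doc[text_field].split(delimiter)
    return [{"parent_id": solr_doc["id"], "part_no": i, "text": p.strip()}
            for i, p in enumerate(parts) if p.strip()]

doc = {"id": "42", "content": "alpha section | beta section | gamma"}
subs = split_into_subdocs(doc)
assert len(subs) == 3
assert subs[0] == {"parent_id": "42", "part_no": 0, "text": "alpha section"}
```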
I started to use the timeAllowed parameter in SOLR-6.5.1 and got too many
exceptions (each second):
null:java.lang.NullPointerException
at
org.apache.lucene.search.TimeLimitingCollector.needScores(TimeLimitingCollector.java:166)
which caused performance problems.
For reproducing the exception need group=tr
a JIRA you might want to review the objections there to
see if they would apply.
In those instances where you _are_ using SolrCloud, the bin/solr script can
be used to move things back and forth,
either on a directory or individual file basis.
Try: bin/solr zk -help
and you'll see this outp
Hi Shawn,
thanks for confirming.
I am not using Solr Cloud (I forgot to mention that), or at least not in
all instances where that particular piece of code would be used.
I'll think about opening a Jira issue, or just doing it iteratively through
the API.
Regards,
André
2018-01-05 15:0
On 1/5/2018 6:51 AM, André Widhani wrote:
I know I can retrieve the entire schema using Schema API and I can also use
it to manipulate the schema by adding fields etc.
I don't see any way to post an entire schema file back to the Schema API
though ... this is what most REST APIs offer
Hi,
I know I can retrieve the entire schema using Schema API and I can also use
it to manipulate the schema by adding fields etc.
I don't see any way to post an entire schema file back to the Schema API
though ... this is what most REST APIs offer: You retrieve an object,
modify it and send
>
>
>
> > On 4 Jan 2018, at 10:59, Zheng Lin Edwin Yeo
> wrote:
> >
> > Hi,
> >
> > I'm using Solr 7.2.0, and I'm trying to replace \n with <br> by using
> > RegexReplaceProcessorFactory.
Hi Edwin,
You need to encode as <br>
HTH,
Emir
> On 4 Jan 2018, at 10:59, Zheng Lin Edwin Yeo wrote:
>
> Hi,
>
> I'm using
Hi,
I'm using Solr 7.2.0, and I'm trying to replace \n with <br> by using
RegexReplaceProcessorFactory.
However, I could not get the below configuration in solrconfig.xml to be
loaded.
content
\n
I understand that <br> is a special character. Can we do some escape sequence
to
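The XML tags of the configuration above appear to have been stripped by the mail archive. Below is a hedged reconstruction of what a RegexReplaceProcessorFactory chain in solrconfig.xml typically looks like, incorporating Emir's escaping advice; the chain name and the trailing RunUpdate processor are assumptions, not taken from the original post:

```xml
<updateRequestProcessorChain name="regex-replace">
  <processor class="solr.RegexReplaceProcessorFactory">
    <str name="fieldName">content</str>
    <str name="pattern">\n</str>
    <!-- a literal <br> must be XML-escaped inside solrconfig.xml -->
    <str name="replacement">&lt;br&gt;</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```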
On 1/2/2018 12:55 PM, Alessandro Hoss wrote:
> Actually I haven't tried the bin/solr script because I do everything
> remotely on Solr. Thanks for the tip, it worked the way I want
> (copying the conf to a new folder), but I need to do it through an API
> and choosing what configset to copy from.
Thanks Shawn,
> How are you doing the core create?
>
You're right, I was using CoreAdmin API.
If you use "bin/solr create"
Actually I haven't tried the bin/solr script because I do everything
remotely on Solr.
Thanks for the tip, it worked the way I want (copying the
llection, if you *do not specify a configSet*, the
>_default will be used.
>-
>
> *If you use standalone mode, the instanceDir will be created
> automatically, using the _default configSet as its basis.*
>
> But if I try to create a *core* in standalone m
ill be used.
-
*If you use standalone mode, the instanceDir will be created
automatically, using the _default configSet as its basis.*
But if I try to create a *core* in standalone mode without specifying a
configset, it searches for config files and throw this:
Error CREATEing SolrCo
Hi Tomerg,
1. Did you consider using the collapse component?
https://lucene.apache.org/solr/guide/6_6/collapse-and-expand-results.html
it is compatible with rq.
2. If you implement group reranking as a separate component you will
end up with a lot of code duplicated from QueryComponent, you
Hey,
I'm using Solr 6.5.1 in SolrCloud mode.
I use grouping for my results.
I want to use rank query (rq) in order to rerank the top groups (with LTR).
It's ok for me to rerank the groups only by reranking one of the documents
in the group.
I saw in issue SOLR-8776 that rank queri
One mechanism that comes to mind is if the swapping slows down an update.
Here's the process
- Leader sends doc to follower
- follower times out
- leader says "that replica must be sick, I'll tell it to recover"
The smoking gun here is if you see any messages about
"leader-initiated recovery". gr
On 12/15/2017 10:53 AM, Bill Oconnor wrote:
> The recovering server has a much larger swap usage than the other servers in
> the cluster. We think this is related to the mmap files used for indexes.
> The server eventually recovers but it triggers alerts for devops which are
> annoying.
>
> I
Hello,
We recently upgraded to SolrCloud 6.6. We are running on Ubuntu LTS 14.x servers
- VMware on Nutanix boxes. We have 4 nodes with 32GB each and 16GB for the
jvm with 12GB minimum. Usually it is only using 4-7GB.
We do nightly indexing of partial fields for all our docs ~200K. This
You can do this with DIH, but that has some problems. I'd strongly
recommend you think about using Tika in an independent client code.
Here's a program that gets you started:
https://lucidworks.com/2012/02/14/indexing-with-solrj/
Best,
Erick
On Wed, Dec 13, 2017 at 5:36 AM, Sean Gilh
Hello,
I have been successfully able to index archive files (zip, tar, and the
like) using solr cell, but the archive is returned as a single document
when I do queries. Is there a way to configure it so that files are
extracted recursively, and indexed separately?
I know that if I set the
On 12/2/2017 12:55 PM, David Lee wrote:
{
"responseHeader":{
"status":0,
"QTime":798}}
Though the status indicates there was no error, when I try to query on
the the data using *:*, I get this:
curl 'http://localhost:8983/solr/my_collection
"member_id": "aaabbbccc",
"member_name": "Sam Jackson"
},{
"member_id": "bbbcccddd",
"member_name": "Buddy Jones"
}
]
}
On 12/2/2017 1:55 PM, David Lee wrote:
Hi all,
I've b
Hi all,
I've been trying for some time now to find a suitable way to deal with
json documents that have nested data. By suitable, I mean being able to
index them and retrieve them so that they are in the same structure as
when indexed.
I'm using version 7.1 under linux Mint 18.3 w
On 11/26/2017 7:45 AM, lamelylounges wrote:
I have a datastax (DSE) server with cassandra. I am running the node in
"search" mode, which means solr is enabled and working. I want to use CQL
to write a query against my core/table.
I'm fairly certain that CQL is not something created by the Sol
Hi all,
I have a datastax (DSE) server with cassandra. I am running the node in
"search" mode, which means solr is enabled and working. I want to use CQL
to write a query against my core/table.
If this were a traditional SQL, here is how I would write it:
SELECT COUNT(DISTINCT my_field) FROM my_
Due to its large size,
Ray did not put the entire data online. What I can acquire is a batch
of commits’ SHA data and some other info. So, I need to pick out
the old commits which are correlated to these SHAs.
On 17/9/2017 1:47 PM, Shawn wrote:
> The commit data you're using is nearly useles
data you're using is nearly useless, because the repository
where it originated has been gone for nearly two years. If you can find
out how it was generated, you can build a new version from the current
repository -- either on github or from Apache's official servers.
Thanks,
Shawn
Thanks for your patience and help.
Recently, I acquired a batch of commits’ SHA data of Lucene, of which the
time span is from 2010 to 2015. In order to get the original info, I tried to use
these SHA data to track commits. First, I cloned the Lucene repository to my local
host, using the cmd
It depends how you want to use the payloads.
If you want to use the payloads to calculate additional features, you can
implement a payload feature:
This feature could calculate the sum of numerical payload for the query
terms in each document ( so it will be a query dependent feature and will
lev
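The payload-sum feature described above can be sketched as follows; the names and data shapes are illustrative, not the Solr LTR API:

```python
def payload_sum_feature(query_terms, doc_term_payloads):
    """Query-dependent LTR feature: sum the numeric payloads stored on the
    query's terms within one document.  `doc_term_payloads` maps each term
    to the list of payload values found at its positions in the doc."""
    return sum(p for term in query_terms
                 for p in doc_term_payloads.get(term, []))

doc = {"solr": [0.5, 1.5], "search": [2.0]}
assert payload_sum_feature(["solr", "search"], doc) == 4.0
assert payload_sum_feature(["missing"], doc) == 0
```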
Hi all,
I know the suggester component will work on distributed indexes. It works
fine when I'm only using the suggester component in the components chain,
but I'd like to apply the suggester to the end of the default components
chain (query, facet, mlt, etc..). When I do, I get an
: In the first few weeks of 2016, the Lucene/Solr project migrated from
: svn to git. Prior to this, there was a github mirror of the subversion
: repository, but when the official repository was converted, that github
: mirror was completely deleted, and replaced with an exact mirror of the
: off
I cloned Lucene repository to my
> local host, using the cmd git clone
> https://github.com/apache/lucene-solr.git. Then, I used git show [commit SHA]
> to get commits’ history record, but failed with the CMD info like this:
>
>>> git show be5672c0c242d658b7ce36f291b74c34
Hey all,
I’m running v6.3.0. I’ve been trying to configure a Jython ScriptTransformer in
my data-config.xml (pulls from JdbcDataSource). But when I run the full import,
it tries to interpret the script as JavaScript, even though I added the
language=Jython attribute to the
Hi,
I have 4 shards and 4 replicas, and I do composite document routing for my unique
field 'Id' as mentioned below,
e.g.: projectId:158380 modelId:3606, where the tenant bits are used as a
projectId/Numbits!modelId/Numbits! prefix with the Id.
NumBits are distributed as mentioned below:
3 bits would spread the tenant o
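The routing key described above follows Solr's compositeId syntax, where each `!`-separated level may carry an optional `/bits` suffix controlling how many bits of the hash that level contributes. A sketch of assembling such an id (the concrete values are illustrative):

```python
def composite_id(project_id, project_bits, model_id, model_bits, doc_id):
    """Build a two-level composite routing key of the form
    'projectId/bits!modelId/bits!docId' as described above."""
    return f"{project_id}/{project_bits}!{model_id}/{model_bits}!{doc_id}"

assert composite_id(158380, 3, 3606, 2, "doc-1") == "158380/3!3606/2!doc-1"
```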
CVE-2016-6809: Java code execution for serialized objects embedded in
MATLAB files parsed by Apache Solr using Tika
Severity: Important
Vendor:
The Apache Software Foundation
Versions Affected:
Solr 5.0.0 to 5.5.4
Solr 6.0.0 to 6.6.1
Solr 7.0.0 to 7.0.1
Description:
Apache Solr uses Apache
Hi,
I have a question on Solr version 7. Is it possible to use the ltr and payload
plugins together to enhance search results? I am a newbie on this
topic and I would like to know how I can use them, if it is possible.
Thanks.
Thanks for your reply.
can the recip function be used to boost a numeric field here:
recip(ord(rating),100,1,1)
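For reference, Solr's recip function is just a/(m*x+b) applied to whatever numeric input you give it (a timestamp difference, ord() of a field, etc.), so the question above answers itself once the formula is written out. A sketch, using the date-decay parameters quoted in this thread:

```python
def recip(x, m, a, b):
    """Solr's recip(x,m,a,b) = a / (m*x + b).  With
    recip(ms(NOW,mod_date),3.16e-11,1,1) the boost starts at 1 for a
    brand-new document and decays toward 0 as the document ages."""
    return a / (m * x + b)

one_year_ms = 3.16e10  # roughly the number of milliseconds in a year
assert recip(0, 3.16e-11, 1, 1) == 1.0                        # new doc
assert abs(recip(one_year_ms, 3.16e-11, 1, 1) - 0.5) < 1e-3   # ~1 year old
```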
You can pass additional bq params in the query.
~Aravind
On Oct 23, 2017 4:10 PM, "ruby" wrote:
> If I want to boost multiple fields using Edismax query parser, is following
> the correct way of doing it:
>
>
>
> edismax
> field1:(apple)^500
> field1:(orange
If I want to boost multiple fields using Edismax query parser, is following
the correct way of doing it:
edismax
field1:(apple)^500
field1:(orange)^400
field1:(pear)^300
field2:(4)^500
field2:(2)^100
recip(ms(NOW,mod_date),3.16e-11,1,1)
recip(ms(NOW,creation_date),3.16e-11,1,1)
And
score='max'}keyword_address:${
> fullAddressStreet}}]
> no field name specified in query and no default specified via 'df' param
>
> I have tried fifferent variants:
>{!parent which='type:entity' score='max'}keyword_address:${
> fullAddressStreet}
>
Street}
{!parent which='type:entity' score='max'
v='keyword_address:${fullAddressStreet}'}
{!parent which='type:entity' score='max' df='keyword_address'
v='keyword_address:${fullAddressStreet}'}
When using in LTR feature, all query defi
Let me know if I should open a JIRA issue for this. Thanks.
On Tue, Oct 17, 2017 at 10:40 AM, Arnold Bronley
wrote:
> I tried spellcheck.q=polt and q=tag:polt. I get collations, but they are
> only for polt and not tag:polt. Because of that, the hits that I get back
> are for frequency of plot a
I tried spellcheck.q=polt and q=tag:polt. I get collations, but they are
only for polt and not tag:polt. Because of that, the hits that I get back
are for frequency of plot and not frequency of tag:plot
{
"responseHeader": {
"status": 0,
"QTime": 20,
"params": {
"spellcheck.col
https://issues.apache.org/jira/browse/SOLR-10829: IndexSchema should
enforce that uniqueKey field must not be points based
The description tells the real reason.
Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://w
In addition to what Amrit correctly stated, if you need to search on your id,
especially with range queries, I recommend using a copy field and leaving the id
field almost as default.
Cheers
-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - ww
By looking into the code,
if (uniqueKeyField.getType().isPointField()) {
  String msg = UNIQUE_KEY + " field (" + uniqueKeyFieldName +
      ") can not be configured to use a Points based FieldType: " +
      uniqueKeyField.getType().getTypeName();
  log.error(msg);
  throw new SolrException(ErrorCode.SERVER_ERROR, msg);
}
I'm trying to set up uniqueKey (which is an integer) like that:
<uniqueKey>id</uniqueKey>
But when I upload the configuration into Solr I see the following error:
uniqueKey field (id) can not be configured to use a Points based FieldType:
pint
If I set type="string" everything seems to be ok.
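A hedged sketch of the schema shape that avoids the error: keep the uniqueKey field as a string, and, if range queries over the id value are needed, copy it into a separate numeric field. Field names other than `id` are made up for illustration:

```xml
<field name="id" type="string" indexed="true" stored="true"/>
<!-- optional: numeric copy for range queries on the id value -->
<field name="id_int" type="pint" indexed="true" stored="false"/>
<copyField source="id" dest="id_int"/>
<uniqueKey>id</uniqueKey>
```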
But you used :
"spellcheck.q": "tag:polt",
Instead of :
"spellcheck.q": "polt",
Regards
Alessandro Benedetti
"suggestion": [
{
"word": "plot",
"freq": 5934
},
{
"word": "port",
"freq": 495
},
{
"word": "post",
with spellcheck.q I don't get anything back at all.
{
"responseHeader": {
"status": 0,
"QTime": 10,
"params": {
"spellcheck.collateExtendedResults": "true",
"spellcheck.q": "tag:polt",
"indent": "true",
"spellcheck": "true",
"spellcheck.accuracy": "0.72"
Interesting, what happens when you pass it as spellcheck.q=polt?
What is the behavior you get?
Alessandro Benedetti