with
> aliases. This is something that will get fixed, but for the current release
> there isn't a workaround for this issue.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
>
> On Tue, May 19, 2020 at 8:25 AM Bjarke Buur Mortensen <
> morten...@eluence.
Hi list,
I seem to be unable to get REINDEXCOLLECTION to work on a collection alias
(running Solr 8.2.0). The documentation seems to state that that should be
possible:
https://lucene.apache.org/solr/guide/8_2/collection-management.html#reindexcollection
"name
Source collection name, may be an alias.
.
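For reference, the kind of request being discussed would look something like the sketch below (host, alias and target names are made up; as Joel notes above, on the current release this reportedly fails when `name` is an alias):

```shell
# Sketch only -- host and collection/alias names are hypothetical.
# REINDEXCOLLECTION re-indexes the source into a (new) target collection.
curl "http://localhost:8983/solr/admin/collections?action=REINDEXCOLLECTION&name=my_alias&target=my_collection_v2"
```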
>
> Best,
> Erick
>
> > On Apr 27, 2020, at 8:23 AM, Bjarke Buur Mortensen <
> morten...@eluence.com> wrote:
> >
> > Thanks for the reply,
> > I'm on solr 8.2 so cursorMark is there.
> >
> > Doing this from one collection to another collection
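Since cursorMark came up: deep paging with a cursor would look roughly like this (host, collection and field names are hypothetical). You start with `cursorMark=*`, sort on the uniqueKey field, and feed each response's `nextCursorMark` back into the next request:

```shell
# Sketch: first page of a cursorMark walk over a collection.
# A sort on the uniqueKey field (here assumed to be "id") is required.
curl "http://localhost:8983/solr/my_collection/select?q=*:*&sort=id+asc&rows=100&cursorMark=*"
# Subsequent requests replace cursorMark=* with the returned nextCursorMark.
```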
Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 27 Apr 2020, at 13:11, Bjarke Buur Mortensen
> wrote:
> >
> > Hi list,
> >
> > Let's say I add a copyField to my solr schema
Hi list,
Let's say I add a copyField to my solr schema, or change the analysis chain
of a field or some other change.
It seems to me to be an alluring choice to use a very simple
dataimporthandler to reindex all documents, by using a SolrEntityProcessor
that points to itself. I have just done this
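A data-config.xml for that self-referencing SolrEntityProcessor approach might look like this sketch (URL, entity name and query are hypothetical, not taken from the thread):

```xml
<!-- Sketch: DIH config where SolrEntityProcessor reads from the
     collection itself and re-indexes every document. -->
<dataConfig>
  <document>
    <entity name="self"
            processor="SolrEntityProcessor"
            url="http://localhost:8983/solr/my_collection"
            query="*:*"/>
  </document>
</dataConfig>
```

Note that only stored fields survive such a round trip; anything indexed-only is lost in the copy.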
reload and go. It’d take you 5 minutes and you’d have your answer.
>
> Best,
> Erick
>
>
> > On Aug 28, 2019, at 1:57 AM, Bjarke Buur Mortensen <
> morten...@eluence.com> wrote:
> >
> > Yes, but isn't that what I am already doing in this case (look a
query time analysis chains, there are many
> examples in the stock Solr schemas.
>
> Best,
> Erick
>
> > On Aug 27, 2019, at 8:48 AM, Bjarke Buur Mortensen <
> morten...@eluence.com> wrote:
> >
We have a Solr field of type "string".
It turns out that we need to do synonym expansion on query time in order to
account for some changes over time in the values stored in that field.
So we have tried introducing a custom fieldType that applies the synonym
filter at query time only (see bottom of
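A field type with synonyms applied only at query time would be wired roughly like this (the type name and synonyms file are hypothetical; KeywordTokenizer is used here to mimic the original "string" behavior):

```xml
<!-- Sketch: index analyzer leaves values untouched; the query analyzer
     additionally expands synonyms. -->
<fieldType name="string_syn" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" expand="true"/>
  </analyzer>
</fieldType>
```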
Hi list,
we have a Solr Cloud setup with a collection with 4 shards.
We backup this collection once a day.
Each night, we try to restore the latest backup on a test server.
So we restore all shards to the same machine. Upon restore, the solr logs
prints the following:
solr.log.3:25163:java.nio.fi
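The nightly backup/restore pair described above would be driven by Collections API calls along these lines (hosts, backup name, and location are hypothetical; the location must be a path or repository visible to the nodes):

```shell
# Sketch: back up on the production cluster ...
curl "http://localhost:8983/solr/admin/collections?action=BACKUP&name=nightly&collection=my_collection&location=/backups"
# ... then restore on the test server.
curl "http://testserver:8983/solr/admin/collections?action=RESTORE&name=nightly&collection=my_collection_restored&location=/backups"
```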
defaults have changed? So I’d try changing the
> definition in the schema for that field. These changes should be pointed
> out in the upgrade notes in Lucene or Solr CHANGES.txt.
>
> Best,
> Erick
>
> > On May 10, 2019, at 1:17 AM, Bjarke Buur Mortensen <
> morten...@eluen
Hi list,
I'm trying to open a 7.x core in Solr 8.
I'm getting the error:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Error opening new searcher
Digging further in the logs, I see the error:
"
...
Caused by: java.lang.IllegalArgumentException: cannot change field
"de
t work,
> even outside the standalone-> cloud issue. I'll raise a JIRA.
> Meanwhile I think you'll have to re-index I'm afraid.
>
> Thanks for raising the issue.
>
> Erick
>
> On Wed, Aug 8, 2018 at 6:34 AM, Bjarke Buur Mortensen
> wrote:
> > Eri
> Increase fault tolerance with leader and replicas being spread around the
> cluster.
> >
> > You would be bypassing general High availability / distributed computing
> processes by trying to not reindex.
> >
> > Rahul
> > On Aug 7, 2018, 7:06 AM -0400, Bjark
Right, that seems like a way to go, will give it a try.
Thanks!
/Bjarke
2018-08-07 14:08 GMT+02:00 Markus Jelsma :
> Hello Bjarke,
>
> You can use shard splitting:
> https://lucene.apache.org/solr/guide/6_6/collections-
> api.html#CollectionsAPI-splitshard
>
> Regards,
> Markus
>
>
>
> -Orig
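The SPLITSHARD call Markus links to would look something like this sketch (host, collection and shard names are hypothetical):

```shell
# Sketch: split shard1 of a collection into two sub-shards.
# The parent shard is left inactive once the split completes.
curl "http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=my_collection&shard=shard1"
```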
Thank you, that is of course a way to go, but I would actually like to be
able to shard ...
Could I use your approach and add shards dynamically?
2018-08-07 13:28 GMT+02:00 Markus Jelsma :
> Hello Bjarke,
>
> If you are not going to shard you can just create a 1 shard/1 replica
> collection, shu
Hi List,
is there a cookbook recipe for moving an existing solr core to a solr cloud
collection.
We currently have a single machine with a large core (~150gb), and we would
like to move to solr cloud.
I haven't been able to find anything that reuses an existing index, so any
pointers much appreciated.
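The "1 shard / 1 replica" route Markus suggests above can be sketched like this (all paths and names are hypothetical, not from the thread):

```shell
# Sketch: create an empty 1-shard collection, then reuse the old index.
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=my_collection&numShards=1&replicationFactor=1"
# With Solr stopped, copy the standalone core's index into the replica's
# data directory, then start Solr again.
cp -r /old/solr/mycore/data/index /var/solr/data/my_collection_shard1_replica_n1/data/
```

The schema/config of the new collection must of course match what the old core was built with, or the index will not open cleanly.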
Hi list,
I'm having difficulties getting the solr highlighter to highlight only the
terms that actually caused the match. Let me explain:
Given a query "john OR (peter AND mary)"
and two documents:
"john is awesome and so is peter"
"peter is awesome and so is mary",
solr will highlight "peter"
Just to clarify:
I can only cause this to happen when using the complexphrase query parser.
Lucene/dismax/edismax parsers are not affected.
2018-02-07 13:09 GMT+01:00 Bjarke Buur Mortensen :
> Hello list,
>
> Whenever I make a query for ** (two consecutive wildcards) it causes my
>
this query,
causing Solr to crash.
I should add that we use the complexphrase query parser as the default
parser on a Solr 7.1.
Can anyone repro this or explain what causes the problem?
Thanks in advance,
Bjarke Buur Mortensen
Senior Software Engineer
Eluence A/S
27;]"
> , "//doc[./str[@name='id']='1']"
> );
>
> assertQ(req("q", "{!complexphrase} iso-latin1:\"craezy traen\"")
> , "//result[@numFound='1']"
> , "//doc[./str[@n
Thanks a lot for your effort, Tim.
Looking at it from the Solr side, I see some use of local classes. The
snippet below in particular caught my eye (in
solr/core/src/java/org/apache/solr/search/ComplexPhraseQParserPlugin.java).
The instance of ComplexPhraseQueryParser is not the clean one from Lucene
gt; the JIRA. Let me take a look.
>
> Best,
>
> Tim
>
> This was one of my initial reasons for my SpanQueryParser LUCENE-5205[1]
> and [2], which handles analysis of multiterms even in phrases.
>
> [1] https://github.com/tballison/lucene-addons/tree/master/
y head would explode
;-)
>
> Thanks,
> Emir
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 5 Oct 2017, at 10:44, Bjarke Buur Mortensen
> wrote:
> >
>
>
>
> > On 4 Oct 2017, at 22:08, Bjarke Buur Mortensen
> wrote:
> >
> > Hi list,
> >
> > I'm trying to search for the term funktionsnedsättning*
&
snedsättning*"
and 0 documents. Notice how ä has not been changed to a.
How can this be? Is complexphrase somehow skipping the analysis chain for
multiterms, even though components and in particular
MappingCharFilterFactory are Multi-term aware?
Are there any configuration gotchas that I'm not aware of?
Thanks for the help,
Bjarke Buur Mortensen
Senior Software Engineer, Eluence A/S
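For context, a char filter like the one being discussed is normally wired into the analyzer like this sketch (the field type name is hypothetical; the mapping file is the stock accent-folding one shipped with Solr):

```xml
<!-- Sketch: MappingCharFilterFactory (a multi-term aware component)
     folding accented characters, e.g. ä -> a, before tokenization. -->
<fieldType name="text_folded" class="solr.TextField">
  <analyzer>
    <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With multi-term aware components, the same folding is supposed to apply to wildcard terms like funktionsnedsättning* as well, which is what the question above is probing.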
OK, that complicates things a bit.
I would still try to go for a solution where you store the rich text in
Solr, but make sure you tokenize it correctly.
If the format is relatively simple, you could use either a regexp pattern
tokenizer
https://cwiki.apache.org/confluence/display/solr/Tokenizers
OK, so the next thing to do would be to index and store the rich text ...
is it HTML? Because then you can use HTMLStripCharFilterFactory in your
analyzer, and still get the correct highlight back with hl.fragsize=0.
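An analyzer along the lines suggested would look like this sketch (field type name is hypothetical):

```xml
<!-- Sketch: strip HTML before tokenizing; offsets still map back to the
     stored markup, so highlighting with hl.fragsize=0 returns the field
     with the tags intact. -->
<fieldType name="text_html" class="solr.TextField">
  <analyzer>
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```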
I would think that you will have a hard time using the term positions, if
what yo
Well, you can get Solr to highlight the entire field if that's what you are
after by setting:
hl.fragsize=0
From
https://cwiki.apache.org/confluence/display/solr/Highlighting#Highlighting-Usage
:
Specifies the approximate size, in characters, of fragments to consider for
highlighting. *0* indicates that the whole field value should be used, with no fragmenting.
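A full query using that setting would look roughly like this (host, collection and field names are hypothetical):

```shell
# Sketch: highlight the entire "content" field rather than a fragment.
curl "http://localhost:8983/solr/my_collection/select?q=content:test&hl=true&hl.fl=content&hl.fragsize=0"
```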
Hi list,
Given the text:
"Kontraktsproget vil være dansk og arbejdssproget kan være dansk, svensk,
norsk og engelsk"
(in English: "The contract language will be Danish and the working language
may be Danish, Swedish, Norwegian and English")
and the query:
{!complexphrase df=content_da}("sve* no*")
the unified highlighter (hl.method=unified) does not return any highlights.
For reference, the original highlighter returns
30 matches