Hi all,
We have a catalogue of many products, including smartphones, and we use the
*edismax* query parser. If someone types in iPhone 11, we get the
correct results, but iPhone 11 Pro is ranked above iPhone 11. What options
can be used to improve this?
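One common approach (a sketch only; the field and type names below are hypothetical, adjust to your schema) is to copy the title into a lightly-analyzed exact-match field and boost it at query time, so the document whose title matches the whole query exactly outranks longer titles that merely contain it:

```
# Hypothetical schema addition: an exact-match copy of the title, using a
# KeywordTokenizer + LowerCaseFilter field type (often named "lowercase"
# in the sample configsets):
#   <field name="title_exact" type="lowercase" indexed="true" stored="false"/>
#   <copyField source="title" dest="title_exact"/>
#
# Query-time edismax parameters:
defType=edismax
qf=title
bq=title_exact:"iphone 11"^10
```

With this, "iPhone 11 Pro" still matches on `title`, but only "iPhone 11" gets the exact-match boost from `bq`.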
Regards,
Jayadevan
In addition to the insightful pointers by Zisis and Erick, I would like to
mention an approach, in the link below, that I generally use to pinpoint
exactly which threads are causing the CPU spike. Knowing this, you can
understand which aspect of Solr (search threads, GC, update threads, etc.) is
taking mo
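A standard way to do this (a sketch; the PID and thread id below are hypothetical placeholders) is to find the hottest thread with `top -H`, convert its decimal id to hex, and look it up in a `jstack` thread dump, which reports ids as `nid=0x...`:

```shell
# Thread id as reported by `top -H -p <solr-pid>` (hypothetical value):
TID=12345
# jstack prints thread ids in hex, so convert:
HEX=$(printf '%x' "$TID")
echo "nid=0x$HEX"    # -> nid=0x3039
# With a real Solr PID you would then run:
#   jstack <solr-pid> | grep -A 20 "nid=0x$HEX"
```

The grep then shows the Java stack of exactly the thread burning CPU, which tells you whether it is a searcher, updater, or GC thread.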
On 10/16/2020 2:36 PM, David Hastings wrote:
sorry, I was thinking just using the *:* method for clearing the index
would leave them still
In theory, if you delete all documents at the Solr level, Lucene will
delete all the segment files on the next commit, because they are empty.
I have not
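For reference, the "*:* method" is a delete-by-query followed by a commit. A sketch of the update body, assuming the JSON update endpoint (POSTed to /solr/<collection>/update?commit=true with Content-Type application/json):

```
{"delete": {"query": "*:*"}}
```

After the commit, Lucene can drop segments that contain only deleted documents.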
In addition, what happens at query time when documents have
been indexed under a varying field type? Well, it doesn’t work well.
The full set of steps for uninterrupted searching is:
1. Add the new text field.
2. Reindex to populate that.
3. Switch querying to use the new text field.
4. Change the
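For step 1, the new field can be added without hand-editing the schema file via the Schema API (a sketch; the collection, field name, and type here are hypothetical, adjust to your setup):

```
# POST to /solr/<collection>/schema
{
  "add-field": {
    "name": "description_txt",
    "type": "text_general",
    "indexed": true,
    "stored": true
  }
}
```

Step 2 is then a full reindex of your source data into that field (for example via copyField or by sending the value explicitly), and step 3 is switching `qf`/query templates over to the new name.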
sorry, I was thinking just using the *:* method for clearing the index
would leave them still
On Fri, Oct 16, 2020 at 4:28 PM Erick Erickson
wrote:
> Not sure what you’re asking here. re-indexing, as I was
> using the term, means completely removing the index and
> starting over. Or indexing to
Not sure what you’re asking here. re-indexing, as I was
using the term, means completely removing the index and
starting over. Or indexing to a new collection. At any
rate, starting from a state where there are _no_ segments.
I’m guessing you’re still thinking that re-indexing without
doing the ab
You should not be using the core API to do anything with cores in SolrCloud.
True, under the covers the collections API uses the core API to do its tricks,
but it has to be used in a very precise manner.
As for legacyMode, don’t use it, please. It’s not supported anymore, and has
been completely re
Hey Vinodh,
I’d have to check the backup/restore process. But I believe that the
state.json file does get exported. If that is the case then the nodesets
should be persisted.
Thanks,
Sean
On October 16, 2020 at 1:22:47 PM, Kommu, Vinodh K. (vko...@dtcc.com) wrote:
Hi,
Would it be possible to
Gotcha, thanks for the explanation. Another small question, if you
don't mind: when deleting docs, they aren't actually removed, just tagged as
deleted, and the old field/field type is still in the index until
merged/optimized as well. Wouldn't that cause almost the same conflicts
until then?
On Fri,
Doesn’t re-indexing a document just delete/replace….
It’s complicated. For the individual document, yes. The problem
comes because the field is inconsistent _between_ documents, and
segment merging blows things up.
Consider. I have segment1 with documents indexed with the old
schema (String in th
"If you want to
keep the same field name, you need to delete all of the
documents in the index, change the schema, and reindex."
actually, doesn't re-indexing a document just delete/replace anyway, assuming
the same id?
On Fri, Oct 16, 2020 at 3:07 PM Alexandre Rafalovitch
wrote:
> Just as a side
Just as a side note,
> indexed="true"
If you are storing a 32K message, you probably are not searching it as a
whole string. So, don't index it. You may also want to mark the field
as 'large' (and lazy):
https://lucene.apache.org/solr/guide/8_2/field-type-definitions-and-properties.html#field-defaul
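A definition along these lines could work (a sketch; the field name is hypothetical, and note that `large="true"` requires the field to be stored and not multiValued):

```
<!-- Hypothetical field: stored only, never indexed, loaded lazily -->
<field name="message_body" type="string"
       indexed="false" stored="true" docValues="false" large="true"/>
```

Since nothing is indexed or put in docValues, the 32K indexed-term limit no longer applies; the value is only kept as a stored field and fetched lazily when requested.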
Can someone help on the above question?
On Thu, Oct 15, 2020 at 1:09 PM yaswanth kumar
wrote:
> Can someone explain what the implications are when we change
> legacyMode=true on Solr 8.2?
>
> We have migrated from Solr 5.5 to Solr 8.2. Everything worked great, but
> when we are trying to add a core
Hi,
Would it be possible to restore a collection with replica placement onto
specific nodes/VMs in the cluster? I guess the default restore feature may not
work in such a custom way, so could we modify those details in the
collection_state.json file in the backup directory to place replica
No. The data is already indexed as a StringField.
You need to make a new field and reindex. If you want to
keep the same field name, you need to delete all of the
documents in the index, change the schema, and reindex.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org
I am using solr 8.2
Can I change the schema fieldtype from string to solr.TextField
without reindexing?
The reason is that string has only a 32K char limit, whereas I am looking to
store more than 32K now.
The contents of this field don't require any analysis or tokenization, but I
need this fie
Hello everyone,
We are having problems with our backup script since we upgraded to Solr
8.6.2 on Kubernetes. To be more precise, the message is
*Path /data/backup/2020-10-16/collection must be relative to SOLR_HOME,
SOLR_DATA_HOME, coreRootDirectory. Set system property 'solr.allowPaths' to
add othe
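If the backup location outside SOLR_HOME is intentional, the fix the error message points at is to whitelist that path via the `solr.allowPaths` system property, for example in solr.in.sh (the path below is taken from the error message; adjust for your deployment):

```shell
# Allow Solr to read/write backups outside SOLR_HOME:
SOLR_OPTS="$SOLR_OPTS -Dsolr.allowPaths=/data/backup"
# Or, less restrictively, allow all paths:
#   SOLR_OPTS="$SOLR_OPTS -Dsolr.allowPaths=*"
```

On Kubernetes this typically means adding the property to the container's SOLR_OPTS environment variable rather than editing solr.in.sh directly.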
I have nested documents which I am syncing to Solr:
{
  "id": "NCT04372953",
  "title": "Positive End-Expiratory Pressure (PEEP) Levels During Resuscitation
            of Preterm Infants at Birth (The POLAR Trial)",
  "phase": "N/A",
  "status": "Not yet recruiting",
  "studytype": "Interventional",
What can cause a very high (1 GB/s, which is the max our disks can provide) disk
read rate that goes on for hours, on a Solr instance that is not being indexed
or queried?
In recent days our SolrCloud cluster has stopped responding to queries. Today we
tried stopping indexing and querying it, to find out what i
Close, but not quite there yet. The rules say to use
systemctl start (or stop or status) solr.service
That .service part ought to be there. I suspect that if we omit it,
we may be scolded on-screen and lose some grade points.
On your error report below: best to ensure that Sol