can you show the update request?
At 2021-01-07 20:25:13, "Flowerday, Matthew J"
wrote:
Hi There
I have recently upgraded a solr database from 7.7.1 to 8.7.0 and not wiped the
database and re-indexed (as this would take too long to run on site).
On my local windows machine I have a single solr server 7.7.1 installation
Cross-posted / addressed (both me), here.
https://stackoverflow.com/questions/65620642/solr-query-with-space-only-q-20-stalls/65638561#65638561
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Did you commit?
> On Jan 9, 2021, at 5:44 AM, Flowerday, Matthew J
> wrote:
>
>
> Hi There
>
> As a test I stopped Solr and ran the IndexUpgrader tool on the database to
> see if this might fix the issue. It completed OK but unfortunately the issue
> still occurs - a new version of the record on solr is created rather than
> updating the original record.
Hi There
As a test I stopped Solr and ran the IndexUpgrader tool on the database to
see if this might fix the issue. It completed OK but unfortunately the issue
still occurs - a new version of the record on solr is created rather than
updating the original record.
It looks to me
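The reply above asks to see the update request. For reference, what an in-place overwrite normally looks like is an update addressed to the existing uniqueKey (or an atomic update) followed by a commit. A sketch of an atomic-update body; the field names here are made up:

```python
import json

# Atomic update: "set" modifies fields on the document whose uniqueKey
# ("id" here) matches, instead of creating a second document.
doc = {"id": "case-123", "status_s": {"set": "closed"}}
body = json.dumps([doc])
# This body would be POSTed to /solr/<collection>/update?commitWithin=10000
# (collection name and commitWithin value are illustrative).
```

If the update request instead omits or changes the uniqueKey field, Solr will happily index it as a new document, which would match the symptom described.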
I have a frontend that uses Ajax to query Solr.
It's working well, but if I enter a single space (nothing else) in the
input/search box (the URL in the browser will show
... index.html#q=%20
In that circumstance I get a 400 error (as there are no parameters in the
request), which is fine
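On the client side, the blank query can be caught before it is ever sent; a minimal sketch (Python, names are illustrative):

```python
from urllib.parse import quote

def build_query_url(base_url, user_input):
    """Return a Solr select URL, or None when the input is blank.

    Skipping the request avoids the 400 that a bare q=%20 produces.
    """
    q = user_input.strip()
    if not q:
        return None
    return f"{base_url}/select?q={quote(q)}"
```

For example, `build_query_url("http://h/solr/c", " ")` returns `None`, so the Ajax call can simply be skipped.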
Hello all,
I have been looking at our SolrCloud indexing performance statistics and trying
to make sense of the numbers. We are using a custom Flume sink and sending
updates to Solr (8.4) using SolrJ.
I know this stuff depends on a lot of things, but can you tell me if these
statistics are horribly bad (which means something is going obviously wrong)?
Hi There
I have recently upgraded a solr database from 7.7.1 to 8.7.0 and not wiped
the database and re-indexed (as this would take too long to run on site).
On my local windows machine I have a single solr server 7.7.1 installation
I upgraded in the following manner
be
the reason for the increase in response time.
Regards,
Abhishek
On Thu, Jan 7, 2021 at 12:43 PM kshitij tyagi
wrote:
> Hi,
>
> I am not querying on tlog replicas, solr version is 8.6 and 2 tlogs and 4
> pull replica setup.
>
> why should pull replicas be affected during background segment merges?
Hi,
I am not querying on tlog replicas, solr version is 8.6 and 2 tlogs and 4
pull replica setup.
why should pull replicas be affected during background segment merges?
Regards,
kshitij
On Wed, Jan 6, 2021 at 9:48 PM Ritvik Sharma wrote:
> Hi
> It may be caused by rebalancing, with querying not available on the tlog at that moment.
Thanks Hoss,
Yes, I was making the change to solr.xml in the wrong directory earlier.
Also as you said:
: You need to update EVERY solrconfig.xml that the JVM is loading for this
to
: actually work.
that has not been true for a while, see SOLR-13336 / SOLR-10921 ...
I validated this and it's
Thanks Shawn,
This entry ${solr.max.booleanClauses:2048} in solr.xml was introduced only in Solr 8.x and was not
present in 7.6.
We have this in solrconfig.xml in 8.4.1 version.
${solr.max.booleanClauses:2048}
i was updating the solr.xml in the installation directory
: You need to update EVERY solrconfig.xml that the JVM is loading for this to
: actually work.
that has not been true for a while, see SOLR-13336 / SOLR-10921 ...
: > 2. updated solr.xml :
: > ${solr.max.booleanClauses:2048}
:
: I don't think it's currently possible to set the
Hi
It may be caused by rebalancing, with querying not available on the tlog at
that moment.
You can check the tlog and pull logs when you are facing this issue.
May I know which version of solr you are using, and what is the ratio of
tlog to pull nodes?
On Wed, 6 Jan 2021 at 2:46 PM, kshitij
https://github.com/apache/lucene-solr/blob/branch_8_6/lucene/core/src/java/org/apache/lucene/search/MinShouldMatchSumScorer.java#L107
as the error changes as we change the mm for the second feature:
1 feature with mm=1 and one with mm=3 -> Index 4 out of bounds for length 4
1 feature with mm=1 and one with mm=5 -> In
Thanks for the reply Eric!
I have tried multiple versions of solr cloud: 8.3, 8.6.0, 8.6.2. Every
version has some issues with either indexing or query searching. For example,
with 8.3, indexing throws the below error,
request: http://X:8983/solr/searchcollection_shard2_replica_t103/
I think you are going in the wrong direction in your upgrade path…. While it
may *seem* simpler to go from master/slave 6.6.6 to SolrCloud 6.6.6, you are
much better off just going from master/slave 6.6.6 to SolrCloud on 8.7 (or
whatever is the latest).
SolrCloud has evolved since Solr 6
Hi,
I am having a tlog + pull replica solr cloud setup.
1. I am observing that whenever a background segment merge is triggered
automatically, I see high response times on all of my solr nodes.
As far as I know merges must be happening on the tlog, hence the increased
response time; I am not able
Hi Guys,
Any update?
On Tue, 5 Jan 2021 at 18:06, Ritvik Sharma wrote:
> Hi Guys
>
> Happy New Year.
>
> We are trying to move to solr cloud 6.6.6 as we are using same version
> master-slave arch.
>
> solr cloud: 6.6.6
> zk: 3.4.10
>
> We are facing few
work.
maxBooleanClauses is an odd duck. At the Lucene level, where this
matters, it is a global (JVM-wide) variable. So whenever Solr sets this
value, it applies to ALL of the Lucene indexes that are being accessed
by that JVM.
When you have multiple Solr cores, the last core
I experienced the same thing in solr-8.7 , it worked for me using system
property.
Set system property in solr.in.sh file
On Tue, Jan 5, 2021 at 8:58 PM dinesh naik
wrote:
> Hi all,
> I want to update the maxBooleanClauses to 2048 (from default value 1024).
> Below are the steps t
Hello Florin Babes,
Thanks for this detailed report! I agree that the
ArrayIndexOutOfBoundsException you are experiencing during SolrFeature
computation sounds like a bug; would you like to open a SOLR JIRA issue for it?
Here's some investigative ideas I would have, in no particular order:
Reproducibility
Hi all,
I want to update the maxBooleanClauses to 2048 (from default value 1024).
Below are the steps tried:
1. updated solrconfig.xml :
${solr.max.booleanClauses:2048}
2. updated solr.xml :
${solr.max.booleanClauses:2048}
3. Restarted the solr nodes.
4. Tried query with more than 2000
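For reference, the archive has stripped the XML tags from the two snippets above; they probably looked like this (element names per the 8.x defaults; treat this as a sketch rather than verbatim config):

```xml
<!-- solrconfig.xml (per collection) -->
<maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>

<!-- solr.xml (node-wide ceiling, introduced in 8.x) -->
<int name="maxBooleanClauses">${solr.max.booleanClauses:2048}</int>
```

Setting the system property instead (e.g. `SOLR_OPTS="$SOLR_OPTS -Dsolr.max.booleanClauses=2048"` in solr.in.sh) feeds both defaults at once, which matches the report elsewhere in this thread that the system-property route worked.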
Hi Guys
Happy New Year.
We are trying to move to solr cloud 6.6.6 as we are using the same version in
our master-slave arch.
solr cloud: 6.6.6
zk: 3.4.10
We are facing few errors
1. Every time we upload a model-store using a curl -XPUT command, it shows up
at that time, but after reloading the collection
Hello,
We are trying to update Solr from 8.3.1 to 8.6.3. On Solr 8.3.1 we are
using LTR in production using a MultipleAdditiveTrees model. On Solr 8.6.3
we receive an error when we try to compute some SolrFeatures. We didn't
find any pattern of the queries that fail.
Example:
We have the following
I don't think I can share it because it may
contain sensitive information. Is there something specific from this file
that may be relevant for our discussion?
Tulsi wrote
> Do try the solr admin analysis screen
> once as well to see the behaviour for this field.
> https://lucene.apache.org/so
can distinguish between them by reading the
shard=true parameter.
Regards,
Markus
[1] https://lucene.apache.org/solr/guide/6_6/configuring-logging.html
On Tue, 29 Dec 2020 at 16:49, ufuk yılmaz wrote:
> Hello All,
>
> Is there a way to see currently executing queries in a SolrCloud? Or a
Hello All,
Is there a way to see currently executing queries in a SolrCloud? Or a general
strategy for detecting a query using an absurd amount of resources?
We are using Solr not only for simple querying, but for running complex streaming
expressions, facets over large data, etc. Sometimes, randomly, CPU
Can you post the managed schema and solrconfig content here?
Do try the solr admin analysis screen
once as well to see the behaviour for this field.
https://lucene.apache.org/solr/guide/7_6/index.html
On Sun, 27 Dec, 2020, 6:54 pm nettadalet, wrote:
> Thank you, that was help
Hi,
thanks for the comment, but I tried both "sow=false" and "sow=true"
and I still get the same result. For query (TITLE_ItemCode_t:KI_7) I still
see:
Solr 4.6: "parsedquery": "PhraseQuery(TITLE_ItemCode_t:\"ki 7\")"
Solr 7.5: "parse
Doesn't sow default to false?
But this seems to behave as true, right?
For Solr 7.5 I get
"parsedquery":"+(+(text1:ki7 (+text1:ki
+text1:7)))"
At 2020-12-28 01:13:29, "Tulsi Das" wrote:
>Hi ,
>Yes this look like related to sow (split on whitespace) param
Hi,
Yes, this looks like it is related to the sow (split on whitespace) param
default behaviour change in Solr 7.
The sow parameter (short for "Split on Whitespace") now defaults to
false, which allows support for multi-word synonyms out of the box.
This parameter is used with the eDismax an
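To compare the two behaviours, the parsed query can be inspected with sow toggled explicitly; a sketch of building the request parameters (the field value is copied from the thread, everything else is illustrative):

```python
from urllib.parse import urlencode

# sow=true restores the pre-7.x whitespace splitting; debugQuery exposes
# the parsed query so the two runs can be compared side by side.
params = urlencode({
    "q": "TITLE_ItemCode_t:KI_7",
    "sow": "true",
    "debugQuery": "true",
})
```

Appending these params to the collection's /select handler and diffing the `parsedquery` output against a sow=false run shows exactly where the analysis diverges.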
I added "defType=lucene" to both searches to make sure I use the same query
parser, but it didn't change the results.
I'm not sure how to check the implementation of the query parser, or how to
change the query parser that I use. I think I'm using the standard query
parser.
I use Solr Admin to run the queries. If I look at the URL, I see
Solr 4.6:
select?q=TITLE_ItemCode_t:KI_7=TITLE_ItemCode_t
Solr 7.5:
select
which query parser are you using? I think to answer your question, you need to
check the implementation of the query parser
At 2020-12-27 21:23:59, "nettadalet" wrote:
>Thank you, that was helpful!
>
>For Solr 4.6 I get
>"parsedquery": "PhraseQuery(TITLE_ItemCode_t:\"ki 7\")"
Thank you, that was helpful!
For Solr 4.6 I get
"parsedquery": "PhraseQuery(TITLE_ItemCode_t:\"ki 7\")"
For Solr 7.5 I get
"parsedquery":"+(+(TITLE_ItemCode_t:ki7 (+TITLE_ItemCode_t:ki
+TITLE_ItemCode_t:7)))"
So this is the cause of the difference.
Hi,
Try adding debug=true or debug=query in the url and see the formed query at
the end.
You will get to know why the results are different.
On Thu, 24 Dec, 2020, 8:05 pm nettadalet, wrote:
> Hello,
>
> I have the same field type defined in Solr 4.6 and Solr 7.5. When I
> sear
Hello,
I have the same field type defined in Solr 4.6 and Solr 7.5. When I
search with both versions, I get different results, and I don't know why.
I have the following field type definition in Solr 4.6
> we are using Solr6.2 , in schema that we use we have an integer field. For
> a given query we want to know how many documents have duplicate value for
> the field , for an example how many documents have same doc_id=10.
>
> So to find this information we fire a query to solr-cloud
Hello All ,
We are using Solr 6.2. In the schema that we use, we have an integer field. For
a given query we want to know how many documents have a duplicate value for
the field; for example, how many documents have the same doc_id=10.
So to find this information we fire a query to solr-cloud
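One common approach is a terms facet with mincount=2, which buckets the values that occur in more than one document; a sketch of the request parameters (only the `doc_id` field name is taken from the question, the rest is illustrative):

```python
import json

# JSON Facet API request: bucket doc_id values seen in 2+ documents.
facet = {
    "dupes": {
        "type": "terms",
        "field": "doc_id",
        "mincount": 2,   # only values that occur more than once
        "limit": 100,
    }
}
params = {"q": "*:*", "rows": 0, "json.facet": json.dumps(facet)}
```

Each returned bucket's count is the number of documents sharing that value, so a bucket for 10 with count 7 means seven documents have doc_id=10.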
On 12/18/2020 12:03 AM, basel altameme wrote:
While trying to Import & Index data from MySQL DB custom view i am facing the
error below:
Data Config problem: The value of attribute "query" associated with an element type
"entity" must not contain the '<' character.
Please note that in my SQL
Have you tried escaping that character?
> On Dec 18, 2020, at 2:03 AM, basel altameme
> wrote:
>
> Dear,
> While trying to Import & Index data from MySQL DB custom view i am facing the
> error below:
> Data Config problem: The value of attribute "query" associated with an
> element type
Dear,
While trying to import & index data from a MySQL DB custom view I am facing the
error below:
Data Config problem: The value of attribute "query" associated with an element
type "entity" must not contain the '<' character.
Please note that in my SQL statements i am using '<>' as an operator
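Because the query sits in an XML attribute of data-config.xml, the '<' must be written as an entity; Python's stdlib shows the exact mapping (the view name is a placeholder):

```python
from xml.sax.saxutils import escape

# escape() rewrites the XML-reserved characters &, <, and >.
sql = "SELECT * FROM my_view WHERE a <> b"
escaped = escape(sql)
print(escaped)  # SELECT * FROM my_view WHERE a &lt;&gt; b
```

So in data-config.xml the attribute would read `query="SELECT * FROM my_view WHERE a &lt;&gt; b"`; where the database supports it, writing `!=` instead of `<>` sidesteps the problem entirely.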
I am seeing a weird issue where all the solr nodes for a single collection are
shown as down, even though I restart the solr and zookeeper services.
A little background:
It's just one collection with 4 replicas, and the collection size is about
~140GB. I did enable a TTL that runs every 5 mins
Erick Erickson wrote:
>
> Well, there’s no information here to help.
>
> The first thing I’d check is what the Solr
> logs are saying. Especially if you’ve
> changed any of your configuration files.
>
> If that doesn’t show anything, I'd take a thread
> dump and look at
Does anyone have experience using HugePages and -XX:+UseLargePages? How much
performance benefit can we get from utilizing them?
The disk is NOT SSD and the single node has 256 GB of RAM. The heap is 31.99 GB.
Thanks,
Jae
Well, there’s no information here to help.
The first thing I’d check is what the Solr
logs are saying. Especially if you’ve
changed any of your configuration files.
If that doesn’t show anything, I'd take a thread
dump and look at that, perhaps there’s some
deadlock.
But that said, a reload
Hi All,
For further investigation, I have raised a JIRA ticket.
https://issues.apache.org/jira/browse/SOLR-15045
In case anyone has any information to share, feel free to mention it here.
Regards,
Raj
Hi,
I have an issue with the collection reload API. The reload seems to be
hanging. It's been in the running state for many days.
Can you please suggest any documentation that explains the under-the-hood
steps of the reload task?
FYI. I am using solr 8.1
Thanks,
Moulay
I've run into the same issue with a Rails application that uses the Rsolr gem
to make calls to Solr. I will have to check whether the issue is in Rsolr or in
my application; changing the %2B (+ sign) to a %20 (space char) in the
request URL fixes the issue.
I also just wanted to say hello to wunder
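The '+' vs space behaviour is easy to pin down with the stdlib, which may help locate whether Rsolr or the application does the encoding:

```python
from urllib.parse import quote, quote_plus, unquote_plus

# In a query string, '+' decodes to a space, so a literal '+' must be
# sent as %2B; a space may be encoded as either '+' or %20.
assert quote_plus("a b") == "a+b"        # space encoded as '+'
assert quote("a b") == "a%20b"           # space encoded as %20
assert unquote_plus("1+2") == "1 2"      # bare '+' is lost as a space
assert unquote_plus("1%2B2") == "1+2"    # %2B survives as a literal '+'
```

If the client emits a bare '+' where it means a literal plus, the server sees a space, which is exactly the mix-up described above.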
for the last 15 days, can someone
please help in resolving it.
Let me know in case any information/logs are missing.
Regards,
Raj
Hi All,
While we investigate this issue further, can anyone please share other ways we
can issue a commit, or point me to
existing documentation that has a relevant example?
Regards,
Raj
Hi,
I would really appreciate it if someone could help me with this.
Thank you,
Moulay
On Thu, Dec 10, 2020, 8:28 AM Moulay Hicham wrote:
> Hi,
>
> We have a solr cluster of 30 nodes with a Replication Factor =3.
> Each index size is about 80GB.
> Solr version is 8.1
> The cl
1. There is no Solr support team. This is a mailing list of volunteers using
the software.
2. I do not recommend running Solr in a Docker container for production.
3. Please review the Solr Jira for security issues. If you believe that there
are security vulnerabilities that need to be fixed
From: Narayanan, Lakshmi
Sent: Friday, November 13, 2020 11:21 AM
To: solr-user@lucene.apache.org
Subject: FW: Vulnerabilities in SOLR 8.6.2
This is my 5th attempt in the last 60 days
Is there anyone looking at these mails?
Does anyone care?? :(
Lakshmi Narayanan
Marsh & McLennan Companies
121 River Stre
Solr Setup: (Running in solrCloud mode)
It has 6 shards, and each shard has only one replica (which is also the
leader); the replica type is NRT.
Each shard is hosted on a separate physical host.
Zookeeper => We are using external zookeeper ensemble (3 separate node
cluster)
Shard and H
Hi,
We have a solr cluster of 30 nodes with a Replication Factor =3.
Each index size is about 80GB.
Solr version is 8.1
The cluster has high TPS both in read and write.
We have recently made a schema change and uploaded it using ZKCLI
script. Then we issue a collection reload async request
Hi, sounds like https://issues.apache.org/jira/browse/SOLR-13963 which was
fixed in Solr 8.3.1
On Thu, 10 Dec 2020 at 06:20, Ritvik Sharma wrote:
> Hi Houston,
> Thanks for reply
>
> We dont have this kind of field. It's a field value and it is coming
> randomly, not all
Hi Houston,
Thanks for the reply.
We don't have this kind of field. It's a field value and it is coming
randomly, not all the time.
We are indexing using CloudSolrClient + Spring Data. It is coming on any
value,
I am trying to do indexing of ~30 million records, and it is happening in
Solr cloud mode
Do you have a field named "314257s_seourls" in your schema?
Is there a dynamic field you are trying to match with that name?
- Houston
On Thu, Dec 10, 2020 at 2:53 PM ritvik wrote:
> Hi ,
> Please suggest, why it is happening.
Hi ,
Please suggest, why it is happening.
This code is there but it does not show up in the running solr command.
On Wed, 9 Dec 2020 at 23:28, rkrish84 wrote:
> Commented out the solr_ssl_client_key_store related code section in solr.sh
> file to resolve the issue and enable ssl.
Commented out the solr_ssl_client_key_store related code section in solr.sh
file to resolve the issue and enable ssl.
Hi All,
I tried debugging but was unable to find any solution. Do let me know in case
the details/logs shared by me are not sufficient/clear.
Regards,
Raj
Hi Furkan
I have added the mail. Please check.
On Wed, 9 Dec 2020 at 12:52, Furkan KAMACI wrote:
> Hi Ritwik,
>
> Could you send your e-mail to solr user list?
>
> Kind Regards,
> Furkan KAMACI
>
> On 9 Dec 2020 Wed at 10:18 Ritvik Sharma wrote:
shards that have zero documents anyway.
>
> It’d be a little convoluted, but you could use the collections COLSTATUS
> Api to
> find the names of all your replicas. Then query _one_ replica of each
> shard with something like
> solr/collection1_shard1_replica_n1/select?q=*:*&distrib=false
>
> tha
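Spelled out, the per-shard count query from the quote might be built like this (host and replica name are placeholders; distrib=false keeps the query on that single core instead of fanning out):

```python
from urllib.parse import urlencode

# Count documents on one replica only; numFound in the response is the
# per-shard document count.
params = urlencode({"q": "*:*", "rows": 0, "distrib": "false"})
url = "http://host:8983/solr/collection1_shard1_replica_n1/select?" + params
```

Repeating this for one replica of each shard (names taken from COLSTATUS) reveals any shard whose numFound is zero.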
e hard-commit runs.
*Little correction:*
In my last post, I had mentioned that softCommit is working fine and there
no delay or error message.
Here is what is happening:
1. Hard commit with openSearcher=true
curl
"http://solr_host:solr_port/solr/my_collection/update?openSearcher=true&commit=true&wt=json"
All
Hi,
I was able to add the config set to the STATUS response by implementing a
custom extended CoreAdminHandler.
However, it would be nice if this could be added in Solr itself. I've created
a JIRA for this: https://issues.apache.org/jira/browse/SOLR-15034
Kind regards,
Andreas
Hey All,
We have updated our system from solr 5.4 to solr 8.5.2 and we are suddenly
seeing a lot of the below errors in our logs.
HttpChannelState org.eclipse.jetty.io.EofException: Reset
cancel_stream_error
Is this related to some system level or solr level config?
How do I find the cause
Hi,
is there a way to get the name of the config set for an existing Solr
core from a stand-alone Solr server (not SolrCloud)?
I need the name of the config set to create another core with the same
config. The actual use case here is to have a script that creates cores
of the same config
> https://issues.apache.org/jira/browse/SOLR-13609, was this fixed ever?
>
> Regards
>
> On Mon, Dec 7, 2020 at 6:32 PM Pushkar Mishra wrote:
>
>> Hi All,
>>
>> Is there a way to trigger a notification when a document is deleted in
>> solr? Or may be when auto purge gets c
raj.yadav wrote:
>
> Hi Folks,
>
> Do let me know if any more information required to debug this.
>
>
> Regards,
> Raj
Maybe a postCommit listener?
https://lucene.apache.org/solr/guide/8_4/updatehandlers-in-solrconfig.html
Regards,
Alex.
On Mon, 7 Dec 2020 at 08:03, Pushkar Mishra wrote:
>
> Hi All,
>
> Is there a way to trigger a notification when a document is deleted in
> solr? Or may be
No, it’s marked “unresolved”….
> On Dec 7, 2020, at 9:22 AM, Pushkar Mishra wrote:
>
> Hi All
> https://issues.apache.org/jira/browse/SOLR-13609, was this fixed ever ?
>
> Regards
>
> On Mon, Dec 7, 2020 at 6:32 PM Pushkar Mishra wrote:
>
>> Hi All,
Hi Folks,
Do let me know if any more information is required to debug this.
Regards,
Raj
Hi All
https://issues.apache.org/jira/browse/SOLR-13609, was this fixed ever ?
Regards
On Mon, Dec 7, 2020 at 6:32 PM Pushkar Mishra wrote:
> Hi All,
>
> Is there a way to trigger a notification when a document is deleted in
> solr? Or may be when auto purge gets complete of delet
Hi All,
Is there a way to trigger a notification when a document is deleted in
solr? Or may be when auto purge gets complete of deleted documents in solr?
Thanks
--
Pushkar Kumar Mishra
"Reactions are always instinctive whereas responses are always well thought
of... So start responding r
We are trying to migrate from solr 7.7 to solr 8.6 on Kubernetes. We are
using zookeeper-3.4.13. While adding a replica to the cluster, it returns
a 500 status code, while in the background it is sometimes added successfully
and sometimes ends up in an inactive state. We are using http2 without SSL
matthew sporleder wrote
> Is zookeeper on the solr hosts or on its own? Have you tried
> opensearcher=false (soft commit?)
1. we are using zookeeper in ensemble mode. It's hosted on 3 separate nodes.
2. Soft commit (opensearcher=false) is working fine. All the shards are
getting commit r
Is zookeeper on the solr hosts or on its own? Have you tried
opensearcher=false (soft commit?)
On Sun, Dec 6, 2020 at 6:19 PM raj.yadav wrote:
>
> Hi Everyone,
>
>
> matthew sporleder wrote
> > Are you stuck in iowait during that commit?
>
> During commit operation, t
Shard and corresponding node details:
shard1_0=>solr_199
shard1_1=>solr_200
shard2_0=> solr_254
shard2_1=> solr_132
shard3_0=>solr_133
shard3_1=>solr_198
We are using the following command to issue commit:
curl
"http://solr_node:8389/solr/my_collection/update?openSearcher=tru
y, my theory is that trying to do too many commits in parallel
> (too many or not enough shards) is causing iowait = high latency to
> work through.
Can you please elaborate more on this?
On Sun, Dec 6, 2020 at 9:05 AM raj.yadav wrote:
>
> matthew sporleder wrote
> > Are you stuck in iowait during that commit?
>
> I am not sure how do I determine that, could you help me here.
matthew sporleder wrote
> Are you stuck in iowait during that commit?
I am not sure how to determine that; could you help me here?
First thing I’d do is run one of the examples to ensure you have Zookeeper set
up etc. You can create a collection that uses the default configset.
Once that’s done, start with ‘SOLR_HOME/solr/bin/solr zk upconfig’. There’s
extensive help if you just type “bin/solr zk -help”. You give
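The steps Erick describes might look like this on the command line (a sketch only; the configset name, paths, and ZooKeeper address are placeholders for your setup):

```shell
# Upload the legacy core's conf directory as a configset in ZooKeeper,
# then create a collection that references it.
bin/solr zk upconfig -n myconf -d /path/to/old_core/conf -z zkhost:2181
bin/solr create_collection -c mycollection -n myconf -shards 1
```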
Hello All,
Can someone from the Solr/Lucene community please provide me the steps for
migrating an existing legacy Solr core (data and conf: managed-schema and
solrconfig.xml files) to a SolrCloud configuration with collections and
shards, and where to copy the existing files to reuse the data
e="8192"
> initialSize="3000"
> autowarmCount="0"/>
>
> size="8192"
> initialSize="3072"
>autowarmCount="0"/>
Exactly _how_ are you indexing? In particular, how often are commits happening?
If you’re committing too often, Solr can block until some of the background
merges are complete. This can happen particularly when you are doing hard
commits in rapid succession, either through, say, committing from
Hi!
I run a three nodes Solr 8.5.1 cluster and experienced a bug when
updating the index: (adding document)
{
"responseHeader":{
"rf":3,
"status":500,
"QTime":22938},
"error":{
"msg":"Task queue processi
rd gets empty , need to delete
> the shard as well.
>
> So lets say , is there a way to know, when solr completes the purging of
> deleted documents, then based on that flag we can configure shard deletion
>
> Thanks
> Pushkar
>
> On Tue, Dec 1, 2020 at 9:02 PM Erick Eric
ze=8m \
-XX:MaxGCPauseMillis=150 \
-XX:InitiatingHeapOccupancyPercent=60 \
-XX:+UseLargePages \
-XX:+AggressiveOpts \
"
Solr Collection details: (running in solrCloud mode)
It has 6 shards, and each shard has only one replica (which is also a
leader) and replica type is NRT
Each shard Index size: 11 GB
av
. And in this process, if any shard gets empty, we need to delete
the shard as well.
So let's say: is there a way to know when solr completes the purging of
deleted documents, so that based on that flag we can configure shard deletion?
Thanks
Pushkar
On Tue, Dec 1, 2020 at 9:02 PM Erick Erickson
wrote
Solr handles UTF-8, so it should be able to. The problem you’ll have is
getting the UTF-8 characters to get through all the various transport
encodings, i.e. if you try to search from a browser, you need to encode
it so the browser passes it through. If you search through SolrJ, it needs
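What "encode it so it passes through" amounts to on the wire can be checked with the stdlib (illustrative):

```python
from urllib.parse import quote, unquote

# UTF-8 text is percent-encoded byte by byte in the URL, and the server
# decodes it back to the same characters.
assert quote("café") == "caf%C3%A9"
assert unquote("caf%C3%A9") == "café"
```

If any hop in between re-decodes with the wrong charset (e.g. Latin-1), the bytes survive but the characters come out mangled, which is the failure mode being described.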
. You cannot delete any
shard when using compositeId as your routing method.
If you don’t know which router you’re using, then you’re using compositeId.
NOTE: for the rest, “documents” means non-deleted documents. Solr will
take care of purging the deleted documents automatically.
I think you’re
hard.
>>> And you won’t have any shards that have zero documents anyway.
>>>
>>> It’d be a little convoluted, but you could use the collections COLSTATUS
>>> Api to
>>> find the names of all your replicas. Then query _one_ replica of each
>
Hi community,
During integration tests with a new data source I have noticed a weird scenario
where the replacement character can't be searched, though it seems to be stored.
I mean, honestly, I don't want that irrelevant data stored in my index, but
I wondered if solr can index the replacement character (U+FFFD
Can anyone help me with my question?
Regards,
Vishal
From: vishal patel
Sent: Friday, November 27, 2020 12:18 PM
To: solr-user@lucene.apache.org
Subject: uploading model in Solr 6.6
Hi
what is the meaning of the weight of a feature at the time of uploading a model for
Re
>>
>> It’d be a little convoluted, but you could use the collections COLSTATUS
>> Api to
>> find the names of all your replicas. Then query _one_ replica of each
>> shard with something like
>> solr/collection1_shard1_replica_n1/select?q=*:*&distrib=false
>>