Hi Lulu,
I'm afraid you're going to have to recognise that Solr 5.2.1 is very
out-of-date and the changes between this version and the current 8.x
releases are significant. A direct jump is I think the only sensible
option.
Although you could take the current configuration and attempt
Hi SOLR team,
Please may I ask for advice regarding upgrading the SOLR version (our project
is currently running on solr-5.2.1) to the latest version?
What are the steps, breaking changes and potential issues? Could this be done
as an incremental version upgrade or a direct jump to the newest
Thanks for the additional details Matthew. I created this JIRA to track
this problem: https://issues.apache.org/jira/browse/SOLR-15145. Please add
any additional information to that ticket if needed.
Are you able to upgrade your SolrJ client JAR to 8.8.0? If not, I
understand but that would
What version of SolrJ is embedded in your uleaf.ear file? There have been
changes in how we deal with URLs stored in ZK in 8.8 --> SOLR-12182
On Fri, Feb 5, 2021 at 2:34 AM Flowerday, Matthew J <
matthew.flower...@gb.unisys.com> wrote:
> Hi There
>
>
>
> I have been
No, it is not ASF slack. A separate slack org, just for Solr.
On Sat, 6 Feb, 2021, 6:35 am Anshum Gupta, wrote:
> Hey Ishan,
>
> Thanks for doing this. Is this the ASF Slack space or something else?
>
>
> On Tue, Feb 2, 2021 at 2:04 AM Ishan Chattopadhyaya <
> ichattop
to issue a request that
facets on the relevant fields. That will delay the opening of each new
searcher, but will ensure that user requests don't block.
SOLR-15008 _was_ actually pretty similar, with the added wrinkle of
involving distributed (multi-shard) requests (and iirc "dvhash" wouldn't
have wo
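The warming approach described above can be wired into solrconfig.xml as a newSearcher listener; a minimal sketch (the query and field are illustrative, taken loosely from this thread, not a definitive config):

```xml
<!-- solrconfig.xml: run the expensive facet once against each new searcher
     before it starts serving user requests -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="rows">0</str>
      <str name="facet">true</str>
      <str name="facet.field">resultId</str> <!-- the field you facet on -->
    </lst>
  </arr>
</listener>
```

As noted above, this delays the opening of each new searcher but keeps user requests from blocking.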
Hey Ishan,
Thanks for doing this. Is this the ASF Slack space or something else?
On Tue, Feb 2, 2021 at 2:04 AM Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:
> Hi all,
> I've created an invite link for the Slack workspace:
> https://s.apache.org/solr-slack.
>
> Does this happen on a warm searcher (are subsequent requests with no
intervening updates _ever_ fast?)?
Subsequent response times are very fast if the searcher remains open. As a control
test, I faceted on the same field that I used in the q param.
1. Start solr
2. Execute q=resultId:
core has 185million
> docs and 63GB index size.
>
> curl
> '
> http://localhost:8983/solr/TestCollection_shard1_replica_t3/query?q=resultId:x=0
> '
> {
> "responseHeader":{
> "zkConnected":true,
> "status":0,
>
Worked for me and a few others, thanks for doing that!
On Tue, Feb 2, 2021 at 5:04 AM Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:
> Hi all,
> I've created an invite link for the Slack workspace:
> https://s.apache.org/solr-slack.
> Please test it out. I
Ok. I'll try that. Meanwhile query on resultId is subsecond response. But the
immediate next query for faceting takes 40+secs. The core has 185million
docs and 63GB index size.
curl
'http://localhost:8983/solr/TestCollection_shard1_replica_t3/query?q=resultId:x=0'
{
"responseH
`resultId` sounds like it might be a relatively high-cardinality field
(lots of unique values)? What's your number of shards, and replicas per
shard? SOLR-15008 (note: not a bug) describes a situation that may be
fundamentally similar to yours (though to be sure it's impossible to say
for sure
docs are constantly being indexed with 95% new and 5% overwritten
(overwrite=true; no atomic update). Caches are not considered useful due to
commit frequency.
Solr is v8.7.0 on openjdk11.
Is there any way to improve json facet QTime?
## query only
curl
'http://localhost:8983/solr
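For reference, a JSON Facet request of the shape being discussed might look like this (collection name and field come from the thread; the `method` hint is a hedged suggestion worth experimenting with for high-cardinality fields, not a guaranteed fix):

```bash
# JSON Facet API request; "dvhash" can help when only a small subset of a
# high-cardinality field's values match the query
curl 'http://localhost:8983/solr/TestCollection_shard1_replica_t3/query' -d '
{
  "query": "resultId:x",
  "limit": 0,
  "facet": {
    "ids": { "type": "terms", "field": "resultId", "method": "dvhash" }
  }
}'
```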
Hi There
I have been checking out the latest (8.8.0) SolrCloud database (using
Zookeeper 3.6.2) against our application which talks to Solr via the Solr
API (I am not too sure of the details as I am not a java developer
unfortunately!). The software has Solr 8.7.0/ZooKeeper 3.6.2 libraries
Manisha,
The most general recommendation around commits is to not explicitly commit
after every update. There are settings that will let Solr automatically
commit after some threshold is met, and by delegating commits to that
mechanism you can generally ingest faster.
See this blog post
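The auto-commit settings mentioned above live in solrconfig.xml; a typical sketch (the intervals are illustrative, tune them for your ingest rate and visibility requirements):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: flush to disk regularly, without reopening searchers -->
  <autoCommit>
    <maxTime>60000</maxTime>          <!-- every 60 seconds -->
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: controls how quickly new docs become visible to search -->
  <autoSoftCommit>
    <maxTime>15000</maxTime>          <!-- every 15 seconds -->
  </autoSoftCommit>
</updateHandler>
```

With this in place the client can stop issuing an explicit commit after every update.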
Hi All
Looking for some help on document indexing frequency. I am using apache solr
7.7 and SolrNet library to commit documents to Solr. Summary for this function
is:
// Summary:
// Commits posted documents, blocking until index changes are flushed to
disk and
// blocking until a new
Hi All
I see the following errors logged; I think these errors are related to the Text
Suggester. I saw these errors reported in the client environment, but I don't
know what they mean. Can someone tell me what might be causing these
errors?
Regards
Hi Pawel,
This definitely sounds like garbage collection biting you.
Backups themselves aren't usually memory intensive, but if indexing is
going on at the same time you should expect elevated memory usage.
Essentially this is because for each core being backed up, Solr needs
to hold pieces
> again.
> > On Feb 1, 2021, 2:15 PM -0600, Alexandre Rafalovitch ,
> > wrote:
> > > And if you need something more recent while this is being fixed, you
> > > can look right at the source in GitHub, though a navigation, etc is
> > > missing:
> > &
Hello,
we are using Solr 7.7.2 and sometimes we perform a full reindex
of a core. Therefore we stop the replication on the master
(solr//replication?command=disablereplication),
we back up and delete the index, and finally we rebuild the index and enable
the replication again.
However
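The replication toggling described above can be done over HTTP; a sketch (hostname and core name are placeholders, not from this thread):

```bash
# pause replication on the master before wiping and rebuilding the index
curl 'http://master:8983/solr/mycore/replication?command=disablereplication'

# ... back up, delete and rebuild the index here ...

# re-enable replication once the rebuild is complete
curl 'http://master:8983/solr/mycore/replication?command=enablereplication'
```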
Hi all,
I've created an invite link for the Slack workspace:
https://s.apache.org/solr-slack.
Please test it out. I'll send a broader notification once this is tested
out to be working well.
Thanks and regards,
Ishan
On Thu, Jan 28, 2021 at 12:26 AM Justin Sweeney
wrote:
> Thanks, I joi
://github.com/apache/lucene-solr/blob/master/solr/solr-ref-guide/src/analyzers.adoc
Open Source :-)
Regards,
Alex.
On Mon, 1 Feb 2021 at 15:04, Mike Drob wrote:
Hi Dorion,
We are currently working with our infra team to get these restored. In the
meantime, the 8.4 guide is still available at
https
Hi There
Just as an update to this thread: I have resolved the issue. The new
schema.xml had these entries
Once I commented out the lines containing _root_ and _nest_path_ (as we
don't have nested documents) and re-started solr then no further duplication
on update
; missing:
> https://github.com/apache/lucene-solr/blob/master/solr/solr-ref-guide/src/analyzers.adoc
>
> Open Source :-)
>
> Regards,
> Alex.
>
> On Mon, 1 Feb 2021 at 15:04, Mike Drob wrote:
> >
> > Hi Dorion,
> >
> > We are currently working with
And if you need something more recent while this is being fixed, you
can look right at the source in GitHub, though a navigation, etc is
missing:
https://github.com/apache/lucene-solr/blob/master/solr/solr-ref-guide/src/analyzers.adoc
Open Source :-)
Regards,
Alex.
On Mon, 1 Feb 2021 at 15
Hi Dorion,
We are currently working with our infra team to get these restored. In the
meantime, the 8.4 guide is still available at
https://lucene.apache.org/solr/guide/8_4/ and we are hopeful that the 8.8
guide will be back up soon. Thank you for your patience.
Mike
On Mon, Feb 1, 2021 at 1:58 PM
Hi,
I haven't been able to access the Apache Solr Reference Guide for a few days.
Example:
URL
* https://lucene.apache.org/solr/guide/8_8/
* https://lucene.apache.org/solr/guide/8_7/
Result:
Not Found
The requested URL was not found on this server.
Do you know what's going on?
Thanks
Caroline Dorion
Solr is the popular, blazing fast, open source NoSQL search platform from
the Apache Lucene project. Its major features include powerful full-text
search, hit highlighting, faceted search and analytics, rich document
parsing, geospatial search
Hi David,
Thanks for filing this issue. The classic non-weightMatcher mode works well
for us right now. Yes, we are using the POSTINGS mode for most of the
fields although explicitly mentioning it gives an error since not all
fields are indexed with offsets. So I guess the highlighter is picking
https://issues.apache.org/jira/browse/SOLR-10321 -- near the end my opinion
is we should just omit the field if there is no highlight, which would
address your need to do this work-around. Glob or no glob. PR welcome!
It's satisfying seeing that the Unified Highlighter is so much faster than
: there are not many OOM stack details printed in the solr log file, it's
: just saying No enough memory, and it's killed by oom.sh(solr's script).
not many isn't the same as none ... can you tell us *ANYTHING* about what
the logs look like? ... as i said: it's not just the details
On another note, since response time is in question, I have been using a
customhighlighter to just override the method encodeSnippets() in the
UnifiedSolrHighlighter class since solr 6 since Solr sends back blank array
(ZERO_LEN_STR_ARRAY) in the response payload for fields that do not match.
Here
Hi David,
Thanks so much for your reply.
hl.weightMatches was indeed the culprit. After setting it to false, I am
now getting the same sub-second response as Solr 6. I am using Solr 8.6.1.
Here are the tests I carried out:
hl.requireFieldMatch=true=true (2458 ms)
hl.requireFieldMatch
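A request of the shape being tested might look like this (field list and query are placeholders; `hl.weightMatches=false` is the setting that restored Solr 6-like timings in this thread):

```bash
curl 'http://localhost:8983/solr/mycore/select' \
  --data-urlencode 'q=some query' \
  --data-urlencode 'hl=true' \
  --data-urlencode 'hl.method=unified' \
  --data-urlencode 'hl.fl=field1,field2' \
  --data-urlencode 'hl.requireFieldMatch=true' \
  --data-urlencode 'hl.weightMatches=false'
```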
Thanks Hoss and Shawn for helping.
there are not many OOM stack details printed in the solr log file; it's
just saying not enough memory, and it's killed by oom.sh (solr's script).
My question is not whether it's OOM or not; the issue is why JVM memory
usage keeps growing but never going
: Is the matter to use the config file ? I am using custom config instead
: of _default, my config is from solr 8.6.2 with custom solrconfig.xml
Well, it depends on what's *IN* the custom config ... maybe you are using
some built in functionality that has a bug but didn't get triggered by my
On 1/27/2021 9:00 PM, Luke wrote:
it's killed by an OOME exception. The problem is that I just created empty
collections and the Solr JVM keeps growing and never goes down. There is no
data at all. At the beginning I set Xmx=6G, then 10G, now 15G; Solr 8.7
always uses all of them
Hello Kerwin,
Firstly, hopefully you've seen the upgrade notes:
https://lucene.apache.org/solr/guide/8_7/solr-upgrade-notes.html
8.6 fixes a performance regression found in 8.5; perhaps you are using 8.5?
Missing from the upgrade notes but found in the CHANGES.txt for 8.0
is hl.weightMatches
Thanks Chris,
Does it matter which config I use? I am using a custom config instead of
_default; my config is from solr 8.6.2 with a custom solrconfig.xml
Derrick
> On Jan 28, 2021, at 2:48 PM, Chris Hostetter wrote:
>
>
> FWIW, I just tried using
FWIW, I just tried using 8.7.0 to run:
bin/solr -m 200m -e cloud -noprompt
And then setup the following bash one liner to poll the heap metrics...
while : ; do date; echo "node 8989" && (curl -sS
http://localhost:8983/solr/admin/metrics | grep memory.heap); echo "
: Hi, I am using solr 8.7.0, centos 7, java 8.
:
: I just created a few collections and no data, memory keeps growing but
: never go down, until I got OOM and solr is killed
Are you using a custom config set, or just the _default configs?
if you start up this single node with something like
:
> Mike,
>
> No, it's not docker. it is just one solr node(service) which connects to
> external zookeeper, the below is a JVM setting and memory usage.
>
> There are 25 collections which have about 2000 documents in total. I am
> wondering why solr uses so much memo
Mike,
No, it's not docker. it is just one solr node(service) which connects to
external zookeeper, the below is a JVM setting and memory usage.
There are 25 collections which have about 2000 documents in total. I am
wondering why solr uses so much memory.
-XX:+AlwaysPreTouch-XX
wrote:
> Shawn,
>
> it's killed by an OOME exception. The problem is that I just created empty
> collections and the Solr JVM keeps growing and never goes down. There is no
> data at all. At the beginning I set Xmx=6G, then 10G, now 15G; Solr 8.7
> always uses all of them and it will b
Shawn,
it's killed by an OOME exception. The problem is that I just created empty
collections and the Solr JVM keeps growing and never goes down. There is no
data at all. At the beginning I set Xmx=6G, then 10G, now 15G; Solr 8.7
always uses all of them and it will be killed by oom.sh once jvm usage
On 1/27/2021 5:08 PM, Luke Oak wrote:
I just created a few collections and no data, memory keeps growing but never go
down, until I got OOM and solr is killed
Any reason?
Was Solr killed by the operating system's oom killer or did the death
start with a Java OutOfMemoryError exception
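One way to tell the two cases apart (paths and port are typical Linux/Solr defaults; adjust for your install):

```bash
# the OS oom-killer leaves a trace in the kernel log
dmesg | grep -i 'killed process'

# a Java OutOfMemoryError instead triggers Solr's oom script, which
# writes a log next to the other Solr logs
ls /path/to/solr/logs/solr_oom_killer-8983-*.log
```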
Hi, I am using solr 8.7.0, centos 7, java 8.
I just created a few collections and no data, memory keeps growing but never go
down, until I got OOM and solr is killed
Any reason?
Thanks
Thanks, I joined the Relevance Slack:
https://opensourceconnections.com/slack, I definitely think a dedicated
Solr workspace would also be good allowing for channels to get involved
with development as well as user based questions.
It does seem like slack has made it increasingly difficult
Hi,
While upgrading to Solr 8 from 6 the Unified highlighter begins to have
performance issues going from approximately 100ms to more than 4 seconds
with 76 fields in the hl.q and hl.fl parameters. So I played with
different options and found that the hl.q parameter needs to have any one
field
e directly
Is it possible to use a compass generated index containing
_b.cfs
segments.gen
segments_d
with solr ?
There is https://solr-dev.slack.com
It's not really used, but it's there and we can open it up for people to
join and start using.
On Tue, Jan 26, 2021 at 5:38 AM Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:
> Thanks ufuk. I'll take a look.
>
> On Tue, 26 Jan, 202
We finally got this fixed by temporarily disabling any updates to the SOLR
index.
Thanks ufuk. I'll take a look.
On Tue, 26 Jan, 2021, 4:05 pm ufuk yılmaz,
wrote:
> It’s asking for a searchscale.com email address?
>
> Sent from Mail for Windows 10
>
> From: Ishan Chattopadhyaya
> Sent: 26 January 2021 13:33
> To: solr-user
> Subject:
s a Slack backed by official IRC support. Please see
> https://lucene.472066.n3.nabble.com/Solr-Users-Slack-td4466856.html for
> details on how to join it.
>
> On Tue, 19 Jan, 2021, 2:54 pm Charlie Hull, <
> ch...@opensourceconnections.com> wrote:
>
>> Relevance Sl
It’s asking for a searchscale.com email address?
From: Ishan Chattopadhyaya
Sent: 26 January 2021 13:33
To: solr-user
Subject: Re: Solr Slack Workspace
There is a Slack backed by official IRC support. Please see
https://lucene.472066.n3.nabble.com/Solr-Users-Slack
There is a Slack backed by official IRC support. Please see
https://lucene.472066.n3.nabble.com/Solr-Users-Slack-td4466856.html for
details on how to join it.
On Tue, 19 Jan, 2021, 2:54 pm Charlie Hull,
wrote:
> Relevance Slack is open to anyone working on search & relevance - #solr is
Hi Walter,
From a strict feasibility point of view, 2 totally separate Solr instances can
perfectly run on the same server: these would be 2 distinct JVM processes; I
don't foresee any issue.
Nothing specific to look for except different ports, directories...
Do you mean using the sam
Hi all,
is it possible to run multiple (different) Solr versions on a (Debian) server?
For development and production purposes I'd like to run
- a development version (Solr 8.7.0) and
- a productive version (Solr 7.4.0).
Which settings are available/necessary?
Thanks
Walter Claassen
Since I changed the heap size to 10G, I found that solr always uses around
6G-6.5G. Just wondering where I can set a limit on memory usage; for example,
I just want to give solr 6G.
On Sun, Jan 24, 2021 at 1:51 PM Luke wrote:
> looks like the solr-8983-console.log was overridden after I restarted S
looks like the solr-8983-console.log was overwritten after I restarted Solr
with 10G memory; I cannot find it anymore.
as for how I install and start solr, I did as below
1. download binary file(8.7.0)
2. change configuration in solr.in.sh(setup external zk)
3. start it by ./bin/solr start
On 1/23/2021 6:41 PM, Luke wrote:
I don't see any log in solr.log, but there is OutOfMemory error in
solr-8983-console.log file.
Do you have the entire text of that exception? Can you share it? That
is the real information that I am after here.
I only asked how Solr was installed
Shawn,
What version of Solr? 8.7.0
How is it installed and started? I download binary file and change
configuration in solr.in.sh, then start it by ./bin/solr start &
What OS? centos 7
Java version? openJDK 8
I don't see any log in solr.log, but there is OutOfMemory error in
solr-
On 1/23/2021 6:29 AM, Luke Oak wrote:
I use default settings to start solr , I set heap to 6G, I created 10
collections with 1node and 1 replica, however, there is not much data at all,
just 100 documents.
My server is 32 G memory and 4 core cpu, ssd drive 300g
It was ok when i created 5
Hi there,
I use default settings to start solr , I set heap to 6G, I created 10
collections with 1node and 1 replica, however, there is not much data at all,
just 100 documents.
My server is 32 G memory and 4 core cpu, ssd drive 300g
It was ok when i created 5 collections. It got oom killed
der for it to still
match the original term.
So if I've understood your question, I would define that as "gain =>
gain,revenue".
If that doesn't solve it, feel free to share your config and someone might
be able to make a suggestion
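The explicit-mapping form suggested above, as it would appear in a synonyms.txt used by a SynonymGraphFilter (one-line sketch, not a full config):

```text
# synonyms.txt: explicit mapping, so a document term "gain" is indexed as
# both "gain" and "revenue", while "revenue" itself is left untouched
gain => gain,revenue
```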
On Fri, 22 Jan 2021 at 14:11, Iram Tariq
wrote:
Hi All,
Using the SOLR default synonyms search I am able to search synonyms, but in
some cases it gives ambiguous results.
For example, one of the synonyms of "Revenue" is "Gain".
Input keyword for search: Revenue and Company
Irrelevant output: Our company doesn't want to
Hello everyone,
I have a nasty problem with the scheduled Solr collections backup. From
time to time when a scheduled backup is triggered (backup operation takes
around 10 minutes) Solr freezes for 20-30 seconds. The freeze happens on
one Solr instance at time but this affects all queries latency
Hi,
I am upgrading from Solr 6.5.1 to solr 8.6.1 and have noticed a change in
the Edismax parser behavior which is affecting our search results. If user
operators are present in the search query, the Solr 6 behavior was to take
mm parameters from the user query string which was 0% by default
Relevance Slack is open to anyone working on search & relevance - #solr is only
one of the channels, there's lots more! Hope to see you there.
Cheers
Charlie
https://opensourceconnections.com/slack
On 16/01/2021 02:18, matthew sporleder wrote:
IRC has kind of died off,
h
I further checked that the BM25Similarity class up to solr 7.7 has a null check
for norms in the explainTFNorm method, but this is removed in Solr 8
onwards. Does omitNorms work in Solr8? Can someone send me what the debug
output looks like with omitNorms="true"?
Here is my config:
On M
Hi eveybody,
I am migrating from solr 6.5.1 to solr 8.6.1 and am having a couple of
issues for which I need your help. There is a significant change in ranking
between Solr 6 and 8 search results which I need to fix before using Solr8
in our live environment. I noticed a couple of changes upfront
IRC has kind of died off,
https://lucene.apache.org/solr/community.html has a slack mentioned,
I'm on https://opensourceconnections.com/slack after taking their solr
training class and assume it's mostly open to solr community.
On Fri, Jan 15, 2021 at 8:10 PM Justin Sweeney
wrote:
>
>
Hi all,
I did some googling and didn't find anything, but is there a Slack
workspace for Solr? I think this could be useful to expand interaction
within the community of Solr users and connect people solving similar
problems.
I'd be happy to get this setup if it does not exist already.
Justin
Hi Jim
Thanks for looking into it for me.
I did some more testing and if I created a base solr 7.7.1 database using
the 'out of the box' schema.xml and solrconfig and add this item manually
using the Solr Admin tool documents/XML
ABCD-N1
A test
And then update it using
I have one collection, 3 shards, 2 replicas. I defined a route field (title),
and ID is the unique key.
I indexed two documents with the same ID and different titles. I configured the
dedupe chain and I can see the signature is generated, but the old document was
removed by solr. Please help, thanks
for you, some of us do it
>> ourselves, and it wasn't clear (to me anyway) if OP was asking about that.
>>
>> Dima
>>
>
> Hi,
>
> Thanks for all the suggestions. I am hosting my Solr search service in
> GCP. I have a follow-up question regarding Solr Nodes.
I agree, documents may be gigantic or very small, with heavy text analysis
or simple strings ...
so it's not possible to give an evaluation here.
But you could make use of the nightly benchmark to give you an idea of
Lucene indexing speed (the engine inside Apache Solr) :
http://home.apache.org
I think if you have _root_ in schema.xml you should look elsewhere. My memory
is that merely adding this one line to schema.xml took care of our problem.
From: Flowerday, Matthew J
Sent: Tuesday, January 12, 2021 3:23 AM
To: solr-user@lucene.apache.org
Subject: RE: Query over migrating a solr
Hi Jeremy,
You might find our recent blog on Debugging Solr Performance Issues
useful
https://opensourceconnections.com/blog/2021/01/05/a-solr-performance-debugging-toolkit/
- also check out Savan Das' blog which is linked within.
Best
Charlie
On 12/01/2021 14:53, Michael Gibney wrote
Ahh ok. If those are your only fieldType definitions, and most of your
config is copied from the default, then SOLR-13336 is unlikely to be the
culprit. Looking at more general options, off the top of my head:
1. make sure you haven't allocated all physical memory to heap (leave a
decent amount
Thanks Michael,
SOLR-13336 seems intriguing. I'm not a solr expert, but I believe these
are the relevant sections from our schema definition
Hi,
We are planning to implement Solr Cloud 8.7.0, running in Kubernetes cluster,
with external Zookeeper 3.4.5 cdh 5.16.
Solr 8.7.0 seems to be matched with Zookeeper 3.6.2. Is there any issue using
Zookeeper 3.4.5 cdh 5.16?
Thanks in advance.
Regards,
Subhajit
Classification: Internal
Hi,
I am using the SOLR 7.5 master slave architecture. I have two slaves connected
to the master; when load increases, one of my slave servers spikes to 100% CPU
and gets terminated. In the logs and monitoring I could find
"java.io.IOException" coming.
Plea
From: Dyer,
ger problems than search.
>
> That is the point, you have amazon doing that for you, some of us do it
> ourselves, and it wasn't clear (to me anyway) if OP was asking about that.
>
> Dima
>
Hi,
Thanks for all the suggestions. I am hosting my Solr search service in GCP.
I have a follow-up questi
Hi Jeremy,
Can you share your analysis chain configs? (SOLR-13336 can manifest in a
similar way, and would affect 7.3.1 with a susceptible config, given the
right (wrong?) input ...)
Michael
On Mon, Jan 11, 2021 at 5:27 PM Jeremy Smith wrote:
> Hello all,
> We have been stru
a feature was added for nested documents, this field
somehow became mandatory in order for updates to work properly, at least in
some cases.
From: Flowerday, Matthew J
Sent: Saturday, January 9, 2021 4:44 AM
To: solr-user@lucene.apache.org
Subject: RE: Query over migrating a solr database from 7.7.
Hello all,
We have been struggling with an issue where solr will intermittently use
all available CPU and become unresponsive. It will remain in this state until
we restart. Solr will remain stable for some time, usually a few hours to a
few days, before this happens again. We've tried
On 1/11/2021 12:30 PM, Walter Underwood wrote:
Use a load balancer. We’re in AWS, so we use an AWS ALB.
If you don’t have a failure-tolerant load balancer implementation, the site has
bigger problems than search.
That is the point, you have amazon doing that for you, some of us do it
nly gets lots of load?
>> Instead, size the cluster with the number of hosts you need, then add one.
>> Send traffic
>> to all of them. If any of them goes down, you have the capacity to handle
>> the traffic.
>> This is called “N+1 provisioning”.
>
> Where do
of hosts you need, then add one. Send
traffic
to all of them. If any of them goes down, you have the capacity to handle the
traffic.
This is called “N+1 provisioning”.
Where do you send your solr queries? If you have an http server at an ip
address that answers them, that's a single point of failure
of them. If any of them goes down, you have the capacity to handle the
traffic.
This is called “N+1 provisioning”.
This was our rule at Netflix a dozen years ago, running Solr 1.3. I do it the
same way
today with large sharded clusters, one extra per shard.
wunder
Walter Underwood
wun
On 1/11/2021 4:02 AM, Kaushal Shriyan wrote:
Thanks, David, for the quick response. Is there any use case for HAProxy,
Nginx, or another application to load balance both the Solr
primary and secondary nodes?
I had a setup with haproxy and two copies of a Solr index.
Four
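A minimal haproxy sketch for that kind of setup (node addresses, ports and the health-check path are illustrative; adjust to your deployment):

```text
# haproxy.cfg: round-robin two Solr nodes behind one address, with an
# HTTP health check so a dead node is taken out of rotation
frontend solr_front
    bind *:8983
    default_backend solr_back

backend solr_back
    balance roundrobin
    option httpchk GET /solr/admin/info/health
    server solr1 10.0.0.1:8983 check
    server solr2 10.0.0.2:8983 check
```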
From: Kaushal Shriyan
Sent: Monday, 11 January 2021 12:02
To: solr-user
On Mon, Jan 11, 2021 at 4:11 PM DAVID MARTIN NIETO
wrote:
> I believe Solr doesn't have this configuration; you need a load balancer
> configured in that mode.
>
> Kind regards.
>
>
Thanks, David for the quick response. Is there any use-case to use HAProxy
or Ngin
I believe Solr doesn't have this configuration; you need a load balancer
configured in that mode.
Kind regards.
From: Kaushal Shriyan
Sent: Monday, 11 January 2021 11:32
To: solr-user@lucene.apache.org
Subject: Apache Solr in High Availability
Hi,
We are running Apache Solr 8.7.0 search service on CentOS Linux release
7.9.2009 (Core).
Is there a way to set up the Solr search service in High Availability Mode
in the Primary and Secondary node? For example, if the primary node is down
secondary node will take care of the service.
Best
Hi There
Thanks for replying to my query. Yes I had seen the notes saying that on
upgrading to a new major release the advice is to wipe and re-index. But I did
see this on Major Changes in Solr 8 | Apache Solr Reference Guide 8.7
<https://lucene.apache.org/solr/guide/8_7/major-chan
> I carried out this analysis of the solr log from the updates I carried out
> at the time:
>
>
>
> Looking at the update requests sent to Solr. The first update of an
> existing record generated
>
>
>
> 2021-01-07 06:04:18.958 INFO (qtp1458091526-17) [ x:uleaf]
> o.a.