>>
>> It’d be a little convoluted, but you could use the Collections COLSTATUS
>> API to
>> find the names of all your replicas. Then query _one_ replica of each
>> shard with something like
>> solr/collection1_shard1_replica_n1/select?q=*:*&distrib=false
>>
Hi All,
Any suggestions on the observation below? Can I use a char filter to retain
the old behavior of the Standard Tokenizer?
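For reference, the old behavior I would like to emulate is roughly the
following (a Python illustration of my own; it assumes the emojis we index are
supplementary-plane characters, which a PatternReplaceCharFilterFactory with an
equivalent pattern could strip before tokenization):

```python
import re

# Supplementary-plane code points (U+10000 and above) cover most emoji.
# Assumption: BMP emoji (e.g. U+263A) are not a concern for our data.
SUPPLEMENTARY = re.compile(r"[\U00010000-\U0010FFFF]")

def strip_emoji(text: str) -> str:
    """Remove supplementary-plane characters, emulating the Solr 5 behavior."""
    return SUPPLEMENTARY.sub("", text)
```

For example, strip_emoji("solr rocks \U0001F600") returns "solr rocks ".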
Thanks,
Deepu
On Sat, Nov 28, 2020 at 4:59 PM Deepu wrote:
> Hi All,
>
> We are in the process of migrating from Solr 5 to Solr 8; during testing we
> observed that Standa
could use the Collections COLSTATUS
> API to
> find the names of all your replicas. Then query _one_ replica of each
> shard with something like
> solr/collection1_shard1_replica_n1/select?q=*:*&distrib=false
>
> that’ll return the number of live docs (i.e. non-deleted docs) and if it’s
something like
solr/collection1_shard1_replica_n1/select?q=*:*&distrib=false
that’ll return the number of live docs (i.e. non-deleted docs) and if it’s zero
you can delete the shard.
But the implicit router requires you take complete control of where documents
go, i.e. which shard they land on.
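As a sketch of the per-replica check described above (the helper names here
are made up, not a Solr API; replica names would come from the COLSTATUS
response):

```python
# Build the non-distributed per-replica query and read its live doc count.
def replica_query_url(solr_base: str, replica: str) -> str:
    """Non-distributed query returning only the live doc count (rows=0)."""
    return f"{solr_base}/{replica}/select?q=*:*&distrib=false&rows=0"

def live_docs(select_response: dict) -> int:
    """numFound of a distrib=false query is that replica's live (non-deleted) doc count."""
    return select_response["response"]["numFound"]
```

If live_docs comes back 0 for a replica of a shard, that shard is a candidate
for deletion.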
On Sat, Nov 28, 2020 at 3:42 AM Parshant Kumar
wrote:
> Hi community,
>
> I want to implement collapse queries instead of group queries. In the Solr
> documentation it is stated that we should prefer collapse & expand queries
> instead of group queries. Please explain how
Hi All,
pushing the query to the top.
Does anyone have any idea about it?
On Fri, Nov 27, 2020 at 11:49 AM Ajay Sharma wrote:
> Hi Community,
>
> This is the first time, I am implementing a solr *highlighting *feature.
> I have read the concept via solr documentation
&
Hi Solr team,
I am using Solr Cloud (version 8.5.x). I need to find a
configuration where I can delete a shard when the number of documents in the
shard reaches zero. Can someone help me achieve that?
It is urgent, so a quick response would be highly appreciated.
Thanks
Hi All,
We are in the process of migrating from Solr 5 to Solr 8. During testing we
observed that the Standard tokenizer in Solr 5 treated emojis as
special characters and removed them; in Solr 8 it apparently treats them
as regular characters, so they are not removed during indexing.
We need to retain
Hi community,
I want to implement collapse queries instead of group queries. In the Solr
documentation it is stated that we should prefer collapse & expand queries
instead of group queries. Please explain how the collapse & expand queries
are better than grouped queries. How can I implement i
Hi
What is the meaning of the weight of a feature when uploading a model for
re-ranking?
How can we calculate the weight? Does the ranking depend on the weight?
Please give me more details about weight.
https://lucene.apache.org/solr/guide/8_1/learning-to-rank.html#uploading-a-model
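My current understanding (possibly wrong, and assuming the LinearModel from
that guide) is that the re-ranking score is just a weighted sum of feature
values, e.g.:

```python
# Sketch of how a LinearModel combines features at re-ranking time.
# Feature names and weights below are made-up examples.
def linear_model_score(weights: dict, features: dict) -> float:
    """Score = sum over features of (model weight * extracted feature value)."""
    return sum(w * features.get(name, 0.0) for name, w in weights.items())
```

So a larger weight makes its feature count more toward the final ranking.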
Regards,
Vishal
Sent
Hi Community,
This is the first time I am implementing the Solr *highlighting* feature.
I have read about the concept in the Solr documentation.
Link- https://lucene.apache.org/solr/guide/8_2/highlighting.html
To enable highlighting I just have to add &hl=true&hl.fl=* in our Solr
query and
Hi
I have read the concept of "Learning To Rank". I see the Example:
/path/myFeatures.json
{
  "name" : "documentRecency",
  "class" : "org.apache.solr.ltr.feature.SolrFeature",
  "params" : {
    "q" : "{!func}recip( ms(NOW,last_modified), 3.16e-11, 1, 1)"
  }
},
{
  "name" : "
Hi Shawn,
Thanks for taking the time to reply.
Thanks,
Deepu
On Thu, Nov 26, 2020 at 10:53 PM Shawn Heisey wrote:
> On 11/25/2020 10:42 AM, Deepu wrote:
> > We are in the process of migrating from Solr 5 to Solr 8, during testing
> > identified that "Not null" queries on
On 11/25/2020 10:42 AM, Deepu wrote:
We are in the process of migrating from Solr 5 to Solr 8. During testing we
identified that "Not null" queries on plong & pint field types are not
returning any results; they work fine with Solr 5.4.
Could you please let me know if you ha
Hello Solr users,
I've recently added a Solr logs integration
<https://sematext.com/docs/integration/solr-logs/> to our logging SaaS
<https://sematext.com/docs/logs/> and I wanted to ask what would be useful
that I may have missed.
First, there are some regexes to parse Solr
Dear Team,
We are in the process of migrating from Solr 5 to Solr 8. During testing we
identified that "Not null" queries on plong & pint field types are not
returning any results; they work fine with Solr 5.4.
Could you please let me know if you have suggestions on this
Hi Experts,
We are using Solr 8.4 (non-cloud). When ingesting data with multiple processes
into one core on a Solr node, we are hitting some throttling: the max ingestion
rate achieved is about 47K docs per second with 17 posting processes; each doc
is about 250 bytes; the CPU utilization rate
Thanks Mike
That explains it, just removing the noggit-0.6 jar should fix it. This
error depended on classloading order and didn't show up on mac but was a
problem on linux.
On Fri, Nov 20, 2020 at 2:54 PM Mike Drob wrote:
> Noggit code was forked into Solr, see SOLR-13427
&
Noggit code was forked into Solr, see SOLR-13427
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.6.3/solr/solrj/src/java/org/noggit/ObjectBuilder.java
It looks like that particular method was added in 8.4 via
https://issues.apache.org/jira/browse/SOLR-13824
Is it possible
Hi,
I got this error using streaming with SolrJ 8.6.3. Does it use noggit-0.8?
It was not mentioned in the dependencies:
https://github.com/apache/lucene-solr/blob/branch_8_6/solr/solrj/ivy.xml
Caused by: java.lang.NoSuchMethodError: 'java.lang.Object
org.noggit.ObjectBuilder.getValStrict()
You should format the date according to the ISO standard:
https://lucene.apache.org/solr/guide/6_6/working-with-dates.html
Eg. 2018-07-12T00:00:00Z
You can either transform the date that you have in Solr, or in your client
pushing the doc to Solr.
All major programming languages have date
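For example, in Python (a sketch; the %b month abbreviation assumes an English
locale) a date like "12-Jul-18" converts like this:

```python
from datetime import datetime

def to_solr_date(value: str) -> str:
    """Convert a 'DD-Mon-YY' date such as '12-Jul-18' to Solr's ISO-8601 format.

    Assumes an English locale for the abbreviated month name (%b).
    """
    return datetime.strptime(value, "%d-%b-%y").strftime("%Y-%m-%dT%H:%M:%SZ")
```

to_solr_date("12-Jul-18") returns "2018-07-12T00:00:00Z", which Solr date
fields accept.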
Hello Experts,
I am having issues with indexing Date field in SOLR 8.6.0. I am indexing
from MongoDB. In MongoDB the Format is as follows
* "R_CREATION_DATE" : "12-Jul-18", "R_MODIFY_DATE" : "30-Apr-19", *
In my Managed Schema I have the following e
field X
>
>
>
> > We are able to see a decrease in index size but the response time has
> > increased.
>
> I can't say for sure, but I would imagine that when querying multiple
> fields using edismax, Solr can manage to do some of that work in
> parallel. But
Hello,
I'm trying to restore Solr and I'm getting a timeout error, e.g. "Timeout
occurred when waiting response from server at http://solrserver:8983/solr".
It then says "could not restore core". There are just under 40 million records
to restore, so I understand this will t
has
increased.
I can't say for sure, but I would imagine that when querying multiple
fields using edismax, Solr can manage to do some of that work in
parallel. But with only one field, any parallel processing is lost. If
I have the right idea, that could explain what you are s
Hi All,
Earlier we were searching in 6 fields, i.e. qf is applied on 6 fields like
below
A
B
C
D
E
F
We had assumed that if we reduced the number of fields being used to search,
then the index size and response time would both decrease.
We merged all these 6 fields into one field X and now
Hi,
I am in the process of migrating from Solr 6.5.1 to Solr 8.6.3. The current
index size after optimisation is 2.4 TB. We use a 7 TB disk for indexing as
the optimisation needs extra space.
Now with the newer Solr, the un-optimised index itself was created at
5+ TB, which after optimisation
Hello everyone,
I am using the fuzzy search capability of SOLR 8.7 and I dug into a specific
case in which the search misbehaves.
I am using this analyzer (JSON here) on the field that I am using for search
"analyzer" : {
>
> On Tue, Nov 3, 2020, 6:01 PM Parshant Kumar
> wrote:
>
> > Hi team,
> >
> > Our Solr architecture is *master -> repeater -> 3 slave servers.*
> >
> > We are doing incremental indexing on the master server (every 20 min).
> > Replicat
All, please help on this.
On Tue, Nov 3, 2020, 6:01 PM Parshant Kumar
wrote:
> Hi team,
>
> Our Solr architecture is *master -> repeater -> 3 slave servers.*
>
> We are doing incremental indexing on the master server (every 20 min).
> Replication of the index is done
As far as I can tell only your first and 5th emails went through. Either
way, Cassandra responded on 20200929 - ~15 hrs after your first message:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/202009.mbox/%3Cbe447e96-60ed-4a40-88dd-9e0c28be6c71%40Spark%3E
Kevin Risden
On Fri, Nov 13
From: Narayanan, Lakshmi
Sent: Thursday, October 22, 2020 1:06 PM
To: solr-user@lucene.apache.org
Subject: FW: Vulnerabilities in SOLR 8.6.2
This is my 4th attempt to contact
Please advise, if there is a build that fixes these vulnerabilities
Lakshmi Narayanan
Marsh & McLennan Companies
12
Hello,
We need a recommendation for Solr replication throttling. What are your
recommendations for the maxWriteMBPerSec value? Our indexes contain 18 locales,
and the size of all indexes is 188 GB and growing.
Also, will replication throttling work with Solr 4.10.3?
Thanks,
Pino Alu | HCL
I will try to explain myself in as much detail as possible, isolating it as
much as possible from the context.
In short, I'm trying to create a `DIH` in order to ingest some documents as
nested. I mean, I need to ingest a `one-to-many` relation and store it as
nested documents.
My `parents` data is
Hi,
In solr-exporter-config.xml, for solr_metrics_core_searcher_cache I suggest
also adding ramBytesUsed:
$object.value | to_entries | .[] | select(.key == "lookups" or .key == "hits"
or .key == "size" or .key == "evictions" or .key == "inserts" or .key ==
"ramBytesUsed") as $target |
Solr isn’t meant to be public facing. Not sure how anyone would send these
commands since it can’t be reached from the outside world
> On Nov 12, 2020, at 7:12 AM, Sheikh, Wasim A.
> wrote:
>
> Hi Team,
>
> Currently we are facing the below vulnerability for Apache Solr t
I want to unload and reload all cores of a collection in SolrCloud mode
(Solr 8.x.x).
--
-Gajanan
Hi Team,
Currently we are facing the vulnerability below for the Apache Solr tool. Can
you please check the details below and help us fix this issue?
/etc/init.d/solr-master version
Server version: Apache Tomcat/7.0.62
Server built: May 7 2015 17:14:55 UTC
Server number: 7.0.62.0
OS Name
Is there a way to configure one Solr node to not take select
requests in a SolrCloud?
--
Thanks & Regards,
Yaswanth Kumar Konathala.
yaswanth...@gmail.com
FYI an updated Docker image was just published a few hours ago:
https://hub.docker.com/_/solr
~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley
On Wed, Nov 4, 2020 at 9:06 AM Atri Sharma wrote:
> 3/11/2020, Apache Solr™ 8.7 available
>
> The L
Hey Scott,
We have also recently migrated to Solr 8.5.2 and are facing a similar issue.
Were you able to resolve this?
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Thank you Kevin, I can now connect (I tested with DbVisualizer) using the -c
option.
Vincent
Le lun. 9 nov. 2020 à 16:30, Kevin Risden a écrit :
> >
> > start (without option : bin/solr start)
>
>
> Solr SQL/JDBC requires Solr Cloud (running w/ Zookeeper) since streaming
>
>
> start (without option : bin/solr start)
Solr SQL/JDBC requires Solr Cloud (running w/ Zookeeper) since streaming
expressions (which backs the Solr SQL) requires it.
You should be able to start Solr this way to get Solr in cloud mode.
bin/solr start -c
If you use the above to star
Thanks, Shawn and Erick.
We are step by step trying out the changes suggested in your post.
Will get back once we have some numbers.
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Dear All,
After the release of Solr 8.7.0 I want to test the new version on my
notebook. It has the following specifications: Windows 10 64-bit, 16 GB
RAM, Amazon Corretto 11 64-bit, 50 GB free disk space. I downloaded
solr-8.7.0.zip and unzipped it into a local folder. In order to start
Solr in
Hi all :)
I'm trying to connect to Solr with JDBC, but I always get
"java.util.concurrent.TimeoutException: Could not connect to ZooKeeper
localhost:9983/ within 15000 ms" (or another port, depending on which JDBC URL
I test).
Here is what I did:
-
I installed Solr 7.7.2 (I follo
, “name” is really kind of a no-op, the thing
> displayed in the drop-down is taken from Zookeeper’s node_name. Please
> don’t try to name that.
>
> I very strongly recommend that you stop trying to do this. Whatever you
> are doing that requires a specific name, I’d change _tha
I _strongly_ recommend you use the collections API CREATE command
rather than try what you’re describing.
You’re trying to mix manually creating core.properties
files, which was the process for stand-alone Solr, with SolrCloud
and hoping that it somehow gets propagated to Zookeeper. This has
requires a specific name, I’d change _that_ process to use the names
assigned by Solr. If it’s just for aesthetics, there’s really no good way to
change what’s in the drop-down.
Best,
Erick
> On Nov 5, 2020, at 5:25 AM, Modassar Ather wrote:
>
> Hi Shawn,
>
> I understand that we
core discovery but
> > when this file is present under a subdirectory of SOLR_HOME I see it not
> > getting loaded and not available in Solr dashboard.
>
> You should not be trying to manipulate core.properties files yourself.
> This is especially discouraged when Solr is runni
On 11/3/2020 11:49 PM, Narayanan, Bhagyasree wrote:
Steps we followed for creating Solr App service:
1. Created a blank Sitecore 9.3 solution from the Azure marketplace and
created a Web app for Solr.
2. Unzipped the Solr 8.1.1 package and copied all the contents to
wwwroot folder of the
not
getting loaded and not available in Solr dashboard.
You should not be trying to manipulate core.properties files yourself.
This is especially discouraged when Solr is running in cloud mode.
When you're in cloud mode, the collection information in zookeeper will
always be consulted d
Hi Erick,
I have put solr configs in Zookeeper. I have created a collection using the
following API.
admin/collections?action=CREATE&name=mycore&numShards=2&replicationFactor=1&collection.configName=mycore&
property.name=mycore
The collection got created and I can see *mycore
I am seeing the same error as in this thread:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/202004.mbox/
[1]
with Solr 8.5.0
2020-11-04 16:58:00.998 WARN (qtp335107734-3042730) [c:dovecot
s:shard1 r:core_node44 x:dovecot_shard1_replica_n43]
o.a.s.u.SolrCmdDistributor Unable to
3/11/2020, Apache Solr™ 8.7 available
The Lucene PMC is pleased to announce the release of Apache Solr 8.7
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted search
f I am facing any issues.
>
If that means you still _have_ a core.properties file and it’s empty, that won’t
work.
When Solr starts, it goes through “core discovery”. Starting at SOLR_HOME it
recursively descends the directories and whenever it finds a “core.properties”
file says “aha! There’s a
hit rates. If you were using bare NOW in fq clauses,
perhaps you were getting very low hit rates as a result and expanded
the cache size, see:
https://dzone.com/articles/solr-date-math-now-and-filter
At any rate, I _strongly_ recommend that you drop your filterCache
to the default size of 512, and
Hi Team
We have created a new Sitecore environment with the Azure marketplace
solution "Azure Experience Cloud" (PaaS): Sitecore version 9.3 XM scaled
topology with Solr search. Since a Solr app doesn't come by default with the
marketplace solution, we created a
On 11/3/2020 11:46 PM, raj.yadav wrote:
We have two parallel systems: one is Solr 8.5.2 and the other is Solr 5.4.
In Solr 5.4, commit time with openSearcher=true is 10 to 12 minutes, while in
Solr 8 it's around 25 minutes.
Commits on a properly configured and sized system should take
Ginzburg wrote:
> >
> > I second Erick's recommendation, but just for the record legacyCloud was
> > removed in (upcoming) Solr 9 and is still available in Solr 8.x. Most
> > likely this explains Modassar why you found it in the documentation.
> >
> > Ilan
Hi everyone,
We have two parallel systems: one is Solr 8.5.2 and the other is Solr 5.4.
In Solr 5.4, commit time with openSearcher=true is 10 to 12 minutes, while in
Solr 8 it's around 25 minutes.
This is our current caching policy for solr_8.
In Solr 5, we are using FastLRU
I second Erick's recommendation, but just for the record legacyCloud was
> removed in (upcoming) Solr 9 and is still available in Solr 8.x. Most
> likely this explains Modassar why you found it in the documentation.
>
> Ilan
>
>
> On Tue, Nov 3, 2020 at 5:11 PM Erick Eric
://observer.wunderwood.org/ (my blog)
> On Nov 3, 2020, at 10:04 AM, uyilmaz wrote:
>
>
> I have been trying to find a way to do this in Solr for a while. Perform a
> query, and for a text_general field in the result set, find each term's # of
> occurrences.
>
> - I
I have been trying to find a way to do this in Solr for a while: perform a
query, and for a text_general field in the result set, find each term's # of
occurrences.
- I tried the Terms Component, it doesn't have the ability to restrict the
result set with a query.
- Tried facet
Sorry for the bad format of the first mail, once again:
Hello there, while playing around with the
https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml
I found a bug when trying to use string arrays like 'facet.
Hello there, while playing around with the
https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml
I found a bug when trying to use string arrays like 'facet.field': __
I second Erick's recommendation, but just for the record legacyCloud was
removed in (upcoming) Solr 9 and is still available in Solr 8.x. Most
likely this explains Modassar why you found it in the documentation.
Ilan
On Tue, Nov 3, 2020 at 5:11 PM Erick Erickson
wrote:
> You absolut
; there.
> I have all the solr install scripts based on older Solr versions and wanted
> to re-use the same as the core.properties way is still available.
>
> So does this mean that we do not need core.properties anymore?
> How can we ensure that the core name is configurable and no
Thanks Erick for your response.
I will certainly use the APIs and not rely on the core.properties. I was
going through the documentation on core.properties and found it to be still
there.
I have all the solr install scripts based on older Solr versions and wanted
to re-use the same as the
Hi team,
We are having solr architecture as *master->repeater-> 3 slave servers.*
We are doing incremental indexing on the master server(every 20 min) .
Replication of index is done from master to repeater server(every 10 mins)
and from repeater to 3 slave servers (every 3 hours).
*We are
There is not nearly enough information here to begin
to help you.
At minimum we need:
1> your field definition
2> the text you index
3> the query you send
You might want to review:
https://wiki.apache.org/solr/UsingMailingLists
Best,
Erick
> On Nov 3, 2020, at 1:08 AM, Vire
You’re relying on legacyMode, which is no longer supported. In
older versions of Solr, if a core.properties file was found on disk Solr
attempted to create the replica (and collection) on the fly. This is no
longer true.
Why are you doing it this manually instead of using the collections API
Hi,
I am migrating from Solr 6.5.1 to Solr 8.6.3. As part of the
upgrade, my first task is to install and configure Solr with the
core and collection. Solr is installed in SolrCloud mode.
In Solr 6.5.1 I was using the following key values in core.properties file.
The
Hi Sir/Madam,
I am facing an issue with a few keyword searches (like "gazing", "one") in Solr.
Can you please help with why these words are not listed in Solr results?
Indexing is done properly.
--
Thanks and Regards
Veeresh Sasalawad
Solr version: 8.2; ZooKeeper: 3.4
I am progressively adding collections, one by one, with 3 replicas
each, and all of a sudden the load averages on the Solr nodes
were bumped, and memory usage went to 65% on the Java process; with
that, some replicas went to "
y thing is that 6.0.0.
> handled these requests somehow, but newer version did not.
> Anyway, we will observe this and try to improve our code as well.
>
> Best regards,
> Jaan
>
> -Original Message-
> From: Erick Erickson
> Sent: 28 October 2020 17:18
>
observe this and try to improve our code as well.
Best regards,
Jaan
-Original Message-
From: Erick Erickson
Sent: 28 October 2020 17:18
To: solr-user@lucene.apache.org
Subject: Re: SOLR uses too much CPU and GC is also weird on Windows server
DocValues=true are usually only used for
Well, it would require maintaining tests for all of the versions Beam wants
to support. For all this time Beam has had SolrJ 5.5.4 as a compile dependency,
so it's not likely a needed feature.
On Fri, Oct 30, 2020 at 1:30 PM matthew sporleder
wrote:
> Is there a reason you can't use a
Is there a reason you can't use a bunch of solr versions and let beam users
choose at runtime?
> On Oct 30, 2020, at 4:58 AM, Piotr Szuberski
> wrote:
>
> Thank you very much for your answer!
>
> Beam has a compile time dependency on Solr so the user doesn't ha
Thank you very much for your answer!
Beam has a compile time dependency on Solr so the user doesn't have to
provide his own. The problem would happen when a user wants to use both
Solr X version and Beam SolrIO in the same project.
As I understood it'd be the best choice to use the 8.x
Hello All,
I need to renew the expiring cert for Solr in a Windows Solr-ZK ensemble with 3
Solr VMs and 3 ZK VMs, and as this is a critical application I am doing one
Solr VM at a time so that my index stays available.
So on the non-leader VM, I placed the new PFX cert at
"F:\solr-6.6.3\s
I've created a JIRA ticket now:
https://issues.apache.org/jira/browse/SOLR-14969
I'd be really glad, if a Solr developer could help or comment on the issue.
Thank you,
Andreas
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Cloudera's default configuration for the HDFSDirectoryFactory
<https://github.com/cloudera/lucene-solr/blob/cdh6.3.3/cloudera/solrconfig.xml#L118>
is very similar to yours in solrconfig.xml. The solr.hdfs.home property is
provided as a java property during Solr startup and we haven
Hi,
after reading some Solr source code, I might have found the cause:
There was indeed a change in Solr 8.6 that leads to the NullPointerException
for the CoreAdmin STATUS request in CoreAdminOperation#getCoreStatus. The
instancePath is not retrieved from the ResourceLoader anymore, but from
Hello,
We are using the suggest component below in our Solr implementation:
analyzinginfixsuggester
analyzinginfixlookupfactory
relatively short fields. For example you want to sort on a title field. And
probably not something you’re working with.
There’s not much we can say from this distance I’m afraid. I think I’d focus on
the memory requirements, maybe take a heap dump and see what’s using memory.
Did you restart Solr
med not using CPU so
> much...
>
> I am a bit running out of ideas and hoping that this will continue to work,
> but I dont like the CPU usage even over night, when nobody uses it. We will
> try to figure out the issue here and I hope I can ask more questions when in
> do
Chegg is running a 4.10.2 master/slave cluster for textbook search and several
other collections.
1. None of the features past 4.x are needed.
2. We depend on the extended edismax (SOLR-629).
3. Ain’t broke.
We are moving our Solr Cloud clusters to 8.x, even though there are no
features we need
a bit running out of ideas and hoping that this will continue to work, but
I dont like the CPU usage even over night, when nobody uses it. We will try to
figure out the issue here and I hope I can ask more questions when in doubt or
out of ideas. Also I must admit, solr is really new for me
On Tue, Oct 27, 2020 at 04:25:54PM -0500, Mike Drob wrote:
> Based on the questions that we've seen over the past month on this list,
> there are still users with Solr on 6, 7, and 8. I suspect there are still
> Solr 5 users out there too, although they don't appear to b
Piotr,
Based on the questions that we've seen over the past month on this list,
there are still users with Solr on 6, 7, and 8. I suspect there are still
Solr 5 users out there too, although they don't appear to be asking for
help - likely they are in set it and forget it mode.
Solr 7
g that causes uneven request distribution during fan-out. Can you check
> > the number of requests using the /admin/metrics API? Look for the /select
> > handler's distrib and local request times for each core in the node.
> > Compare those across different nodes.
> >
>
. That might not be enough, so
you’ll need to watch that graph after the increase.
I’ve been using 8G heaps with Solr since version 1.2. We run this config
with Java 8 on over 100 machines. We do not do any faceting, which
can take more memory.
SOLR_HEAP=8g
# Use G1 GC -- wunder 2017-01-23
Hi,
We are working on dependency updates at Apache Beam and I would like to
consult which versions should be supported so we don't break any existing
users.
Previously the supported Solr version was 5.5.4.
Versions 8.x.y and 7.x.y naturally come to mind as they are the only not
deprecated
Hi Jaan,
You can also check in the admin console, under caches, the sizes of the field*
caches. That will tell you if some field needs docValues=true.
Regards,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
Hi Erick,
Thanks for this information, I will look into it.
Main changes were regarding parsing the result JSON from Solr, not the
queries or updates.
Jaan
P.S. configuration change about requestParser was not it.
-Original Message-
From: Erick Erickson
Sent: 27 October 2020
, group, sort, or use function queries on and Solr is doing all the extra
work of
uninverting the field that it didn’t have to before.
To answer that, you need to go through your schema and insure that
docValues=true is
set for any field you facet, group, sort, or use function queries on. If you
I found one little difference between the old solrconfig and the new one.
It is in the requestDispatcher section.
It does not have this, but we had this in the old configuration. Maybe it helps;
I will see.
Jaan
-Original Message-
From: Jaan Arjasepp
Sent: 27 October 2020 14:05
To: solr-user
Hi Emir,
I checked the solrconfig.xml file and we don't even use fieldValueCache. Also,
are you saying I should check the schema and all the fields in the old Solr
and the new one to see if they match or contain similar settings? What does
this uninverted value mean? How do I check this?
As for
Hi,
we're running tests on a stand-alone Solr instance, which create Solr
cores from multiple applications using CoreAdmin (via SolrJ).
Lately, we upgraded from 8.4.1 to 8.6.3, and sometimes we now see a
LockObtainFailedException for a lock held by the same JVM, after which
Solr is b