Hello,
What does the statement below mean? How do we choose which cluster acts as
the source or the target at a given time?
Both Cluster 1 and Cluster 2 can act as Source and Target at any given
point of time but a cluster cannot be both Source and Target at the same
time.
Also following the directions mentioned
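For context, the quoted sentence matches the Solr CDCR (Cross Data Center Replication) bidirectional documentation. Which cluster acts as Source for a given direction is set by configuration: the cluster whose `/cdcr` request handler points at the other cluster's ZooKeeper is the Source for that direction. A minimal sketch, where the host, port, and collection names are illustrative:

```xml
<!-- solrconfig.xml on the SOURCE cluster: replicate to the target's ZK -->
<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <lst name="replica">
    <str name="zkHost">target-zk-host:2181</str>
    <str name="source">collection1</str>
    <str name="target">collection1</str>
  </lst>
</requestHandler>

<!-- solrconfig.xml on the TARGET cluster: no replica list, just the handler -->
<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <lst name="buffer">
    <str name="defaultState">disabled</str>
  </lst>
</requestHandler>
```

In a bidirectional setup each cluster carries a `replica` section pointing at the other, but CDCR only forwards updates that originated locally, which is why a cluster is never Source and Target for the same update at the same time.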
Thanks Erick,
I had read through https://issues.apache.org/jira/browse/SOLR-13510 earlier
today, but it seemed specific to Solr 8, as Colvin Cowie wasn't able to
reproduce it on 7.7.0 or 7.7.1. I am going to see if the 'forwardCredentials'
workaround resolves this for 6.6.6, fingers crossed.
Brian
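The workaround discussed in SOLR-13510 is the `forwardCredentials` flag on the BasicAuth plugin, which makes Solr forward the original user's credentials on inter-node requests instead of relying on PKI authentication. A sketch of the relevant security.json fragment follows; the credential hash is elided, and whether 6.6.6 honors this flag at all is precisely what is being tested here:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnknown": true,
    "forwardCredentials": true,
    "credentials": {
      "solr": "<hash> <salt>"
    }
  }
}
```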
On
Looks like: https://issues.apache.org/jira/browse/SOLR-13510
> On Jun 11, 2019, at 3:08 PM, Brian Lininger wrote:
>
> Hello Solr Experts,
> I've hit an issue with Solr and BasicAuth that is stumping me at the
> moment. We've configured a basic security.json to require BasicAuth
> credentials
Hello Solr Experts,
I've hit an issue with Solr and BasicAuth that is stumping me at the
moment. We've configured a basic security.json to require BasicAuth
credentials for read/update to all collections in Solr, but we allow
un-authenticated requests to Solr admin endpoint (don't ask why). It
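A security.json along the lines described (BasicAuth required for read/update, unauthenticated admin access left open via `blockUnknown: false`) might look like the following sketch. The user name, roles, and elided hash are illustrative, not the poster's actual file:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnknown": false,
    "credentials": {
      "solr": "<hash> <salt>"
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      { "name": "read",   "role": "reader" },
      { "name": "update", "role": "writer" }
    ],
    "user-role": {
      "solr": ["reader", "writer"]
    }
  }
}
```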
That was spot on. Thanks a lot for your help!
On Tue, Jun 11, 2019 at 2:14 AM Jörn Franke wrote:
> It is probably a Zookeeper limit. You have to set jute.maxbuffer in the
> Java System properties of all (!) zookeeper Servers and clients to the same
> value (in your case it should be a little
Hi,
I would like to let you know about server-side exceptions for specific field
types after upgrading to 7.7.2, such as ClassCastException:
org.apache.solr.common.util.ByteArrayUtf8CharSequence.
For references: https://issues.apache.org/jira/browse/SOLR-13285 and
There is no way to match case-insensitively without a TextField with no
tokenization. It's a long-standing limitation that analyzers cannot be
applied to str fields.
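The standard workaround is exactly what is described above: a TextField whose analyzer does no real tokenization. A schema sketch (the type name `string_ci` is illustrative) keeps the whole value as one token and lowercases it, giving case-insensitive exact matching:

```xml
<fieldType name="string_ci" class="solr.TextField" sortMissingLast="true">
  <analyzer>
    <!-- KeywordTokenizer emits the entire field value as a single token -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <!-- LowerCaseFilter normalizes case on both index and query sides -->
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```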
Thanks for pointing out the re-index page; I've seen it. However, it is
sometimes hard to re-index in a reasonable amount of time.
Hi, while going through the Solr logs, I found a data import error for certain
documents. Here are the details of the error.
Exception while processing: file document :
null:org.apache.solr.handler.dataimport.DataImportHandlerException: Unable
to read content Processing Document # 7866
at
Hi:
When I use LTR, there are 72 features, 500 models and 200 million documents.
In use, LTR execution turned out to be extremely slow, with each query taking
more than 5s, in the scorer method of the LTRScoringQuery.ModelWeight class:
for (final Feature.FeatureWeight featureWeight : extractedFeatureWeights) {
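Not from the original thread, but one common way to bound LTR cost is to rerank only the top N documents from the first-pass query instead of scoring everything with the model. The `rq` rerank parameter sketch below uses an illustrative model name and window size:

```
q=<user query>
&rq={!ltr model=myModel reRankDocs=100 efi.user_query=$q}
&fl=id,score
```

With 72 features extracted per reranked document, keeping `reRankDocs` small is usually the first lever to pull before optimizing the feature weights loop itself.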
On Mon, Jun 10, 2019 at 10:53 PM John Davis
wrote:
> You have made many assumptions which might not always be realistic a)
> TextField is always tokenized
Well, you could of course change the configuration or the code to do something
else, but this would be a very odd and misleading thing to do, and we
Hello all
I hit another problem in moving from Solr 6 to 8.
We secure our ZooKeeper entirely (there's a restrictive ACL for every znode)
To pass the ZooKeeper credentials to Solr we implemented
ZkCredentialsProvider and ZkACLProvider to load the credentials from a file
on disk, which has the
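For illustration only: the class name, file format, and helper below are hypothetical, and a real implementation would return the result wrapped in Solr's ZkCredentials with the "digest" scheme from a ZkCredentialsProvider. The file-parsing part of such a provider can be as simple as:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Hypothetical sketch: parse a credentials file of the form
//   username=...
//   password=...
// into the "user:password" auth string that a custom
// ZkCredentialsProvider would register under ZooKeeper's digest scheme.
public class ZkCredentialsFile {

    static String digestAuth(String fileContents) throws IOException {
        Properties props = new Properties();
        // Properties.load(Reader) parses key=value lines
        props.load(new StringReader(fileContents));
        return props.getProperty("username") + ":" + props.getProperty("password");
    }

    public static void main(String[] args) throws IOException {
        // In a real provider this string would be read from the file on disk.
        System.out.println(digestAuth("username=solr\npassword=secret")); // prints "solr:secret"
    }
}
```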
Could it be a stop word? What is the exact type definition of those fields?
Could this word have been omitted, or loaded with the wrong encoding, when
the documents were indexed?
> Am 03.06.2019 um 10:06 schrieb Martin Frank Hansen (MHQ) :
>
> Hi,
>
> I am having some difficulties making highlighting work. For
Hi David,
Thanks for your response and sorry my late reply.
Still the same result when using hl.method=unified.
Best regards
Martin
-----Original Message-----
From: David Smiley
Sent: 10. juni 2019 16:48
To: solr-user
Subject: Re: highlighting not working as expected
It is probably a Zookeeper limit. You have to set jute.maxbuffer in the Java
System properties of all (!) zookeeper Servers and clients to the same value
(in your case it should be a little bit larger than your largest file).
If possible you can try to avoid storing the NLP / ML models in Solr
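Concretely, that system property is usually set as follows; the 4 MB value is illustrative and must both exceed your largest znode and match on every server and client:

```shell
# On every ZooKeeper server (e.g. in zookeeper-env.sh):
SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=4194304"

# On every Solr node acting as a ZooKeeper client (e.g. in solr.in.sh):
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=4194304"
```

If the values differ between any server and client, transfers larger than the smaller limit will still fail, which is why the original advice stresses setting it everywhere.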