Getting the following error message while trying to create a core:
# sudo su - solr -c "/opt/solr/bin/solr create_core -c 9lives"
WARNING: Using _default configset with data driven schema functionality.
NOT RECOMMENDED for production use.
To turn off: bin/solr config -c 9liv
SolrWriter
Error creating document :
On Thu, 17 Sep 2020 at 15:53, Jörn Franke wrote:
> Log file will tell you the issue.
>
> > On 17.09.2020 at 10:54, Anuj Bhargava wrote:
> >
> > We just installed Solr 8.6.2
> > It is fetching the data but not adding
>
We just installed Solr 8.6.2
It is fetching the data but not adding
Indexing completed. Added/Updated: 0 documents. Deleted 0 documents.
(Duration: 06s)
Requests: 1, Fetched: 100 (17/s), Skipped: 0, Processed: 0
The data-config.xml
Did you re-work your schema at all? There are new primitive types,
new Lucene versions, DocValues, etc.
On Wed, Sep 16, 2020 at 12:40 PM Keene Chen wrote:
>
> Hi,
>
> Thanks for pointing that out. I've linked the images below:
>
> solr5_response_times.png
> <https://gcdt
Keene Chen wrote:
> We have been doing some performance tests on Solr 5.5.2
> and Solr 8.5.2 as part of an upgrading process, and we have
> noticed some reduced performance for certain types of
> requests, particularly those that request a large number of
> rows, e.g. 1.
Hi,
Thanks for pointing that out. I've linked the images below:
solr5_response_times.png
<https://gcdt-solr-tests-public.s3-eu-west-1.amazonaws.com/solr5_response_times.png>
solr8_response_times.png
<https://gcdt-solr-tests-public.s3-eu-west-1.amazonaws.com/solr8_response_
My setup is two Solr nodes running on separate Azure Ubuntu 18.04 LTS VMs using
an external ZooKeeper ensemble.
I installed Solr 6.6.6 using the install file and then followed the steps for
enabling SSL. I am able to start Solr, add collections and the like using the
bin/solr script.
Example:
/opt
I am not aware of a test. However, keep
in mind that HDFS support will be deprecated.
Additionally, you can configure erasure coding in HDFS on a per folder /
file basis, so you could in the worst case just make the folder for Solr with
the standard HDFS mode.
Erasure coding has several
Anyone use Solr with Erasure Coding on HDFS? Is that supported?
Thank you
-Joe
Hi all,
We're running our Solr Think Like a Relevance Engineer training 6-9 Oct -
you can find out more & book tickets at
https://opensourceconnections.com/training/solr-think-like-a-relevance-engineer-tlre/
The course is delivered over 4 half-days from 9am EST / 2pm BST / 3pm
CET and is led by Eric Pugh who co-wrote the first book on Solr and is
a Solr Committer. It's suitable for all mem
Hi Solr community,
We have been investigating an issue in our solr (7.5.0) setup where the
shutdown of our solr node takes quite some time (3-4 minutes) during which we
are effectively leaderless.
After investigating and digging deeper we were able to track it down to segment
merges which
Hello,
Your images won't appear on the mailing list. You'll need to post them
elsewhere and link to them.
On Tue, 15 Sep 2020 at 09:44, Keene Chen wrote:
> Hi Solr users community,
>
>
> We have been doing some performance tests on Solr 5.5.2 and Solr 8.5.2 as
> part of an up
Hi Solr users community,
We have been doing some performance tests on Solr 5.5.2 and Solr 8.5.2 as
part of an upgrading process, and we have noticed reduced performance
for certain types of requests, particularly those that request a large
number of rows, e.g. 1. Would anyone have
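A remedy often suggested for requests with a very large rows value is to page with cursorMark instead of fetching everything in one call. Below is a minimal client-side sketch; the `fetch` callable is a hypothetical stand-in for your HTTP call to Solr's /select handler, not something from this thread.

```python
# Client-side sketch of cursorMark deep paging, an alternative to huge
# `rows` values. `fetch` is assumed to take a params dict and return
# the decoded JSON response from /select.

def cursor_pages(fetch, base_params, page_size=500):
    """Yield result pages using Solr's cursorMark deep paging.

    `base_params` must include a sort ending on the uniqueKey field
    (a cursorMark requirement), e.g. "score desc, id asc".
    """
    cursor = "*"  # initial cursor value per the Solr reference guide
    while True:
        params = dict(base_params, rows=page_size, cursorMark=cursor)
        response = fetch(params)
        yield response["response"]["docs"]
        next_cursor = response["nextCursorMark"]
        if next_cursor == cursor:  # unchanged cursor means no more results
            break
        cursor = next_cursor
```

Each page is a normal-sized request, so the server never has to materialize a million-row result in one go.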
Hi All,
I built a server with Lucene/Solr (branch_8_0,
https://github.com/apache/lucene-solr/tree/branch_8_0) and used a client to
test the server. However, when the client sent requests, I found the server
outputs lots of seemingly
Thanks for the reply,
I didn't see anything in the Solr logs BUT I'm going to recheck it next
week and update you.
Will check this as well:
"It could be that after the upgrade some filesystem permissions do not
work anymore"
Thanks
Best Regards,
*Jean Silva*
https://github.com/jeancsil
s service or other administrative functions
> should reach Solr. Be wary of making your service so flexible to support
> arbitrary parameters you pass to Solr as-is that you don't know about in
> advance (i.e. use an allow-list).
>
> ~ David Smiley
> Apache Lucene/Solr Search Develope
Hello,
Can someone please help me understand the difference between the Result
Clustering and "Semantic Knowledge Graphs" components of Solr, and
in which scenarios we should use "Result Clustering" vs. "Semantic Knowledge
Graphs"?
Thanks.
Can you check the logfiles of Solr?
It could be that after the upgrade some filesystem permissions do not work
anymore
> On 08.09.2020 at 09:27, "jeanc...@gmail.com" wrote:
>
> Hey guys, good morning.
>
> As I didn't get any reply for this one, is it ok then
Had no idea about this! Thanks a lot Eric!!
> On 4 Sep 2020, at 12:14, Eric Pugh wrote:
>
> Konstantinos, have you seen https://solr.cool/? It’s an aggregation site for
> all the extensions to Solr. You can add your project there, and that should
> get some more awareness!
Konstantinos, have you seen https://solr.cool/? It’s an aggregation site for
all the extensions to Solr. You can add your project there, and that should
get some more awareness!
> On Sep 2, 2020, at 2:21 AM, Konstantinos Koukouvis
> wrote:
>
> Hi everybody, sorry in advance
The general assumption in deploying a search platform is that you are going
to front it with a service you write that has the search features you care
about, and only those. Only this service or other administrative functions
should reach Solr. Be wary of making your service so flexible
Hi Tyrone,
We use an external load balancer across the nodes.
If you use the java client you can query the zookeepers
https://lucene.apache.org/solr/guide/7_1/solrcloud-query-routing-and-read-tolerance.html
SolrCloud Query Routing And Read Tolerance | Apache Solr Reference Guide
7.1<ht
I have set up the example Solr Cloud that comes with the built-in ZooKeeper
that runs on localhost:9993.
I created my Solr Cloud instance with 2 nodes.
Node 1 url is http://localhost:8983/solr/#/~cloud
Node 2 url is http://localhost:7574/solr/#/~cloud
Currently all Solr queries go through Node 1
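When there is no external load balancer in front of the nodes, a thin client-side rotation can spread queries across both. A minimal sketch reusing the two node URLs from this thread (illustrative only; a ZooKeeper-aware client such as Java's CloudSolrClient is the more robust option):

```python
# Round-robin over the two Solr node base URLs so successive queries
# alternate between Node 1 and Node 2 instead of all hitting Node 1.
from itertools import cycle

class RoundRobinNodes:
    def __init__(self, nodes):
        self._nodes = cycle(nodes)

    def next_base_url(self):
        # Each call returns the next node in rotation.
        return next(self._nodes)

rr = RoundRobinNodes([
    "http://localhost:8983/solr",
    "http://localhost:7574/solr",
])
```

A real client would also need to skip a node that fails health checks; this sketch only shows the rotation itself.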
00" is the default.
Where is the sense of explicitly setting a default parameter to its default
value?
Regards
Bernd
On 01.09.20 at 18:00, Walter Underwood wrote:
> This is misleading and not particularly good advice.
>
> Solr 8 does NOT contain G1. G1GC is a feature of the JVM. We’
100 and 1000 submissions. Also the crawler is set to run at
a lower priority than Solr, thus giving preference to Solr.
In the end we ought to run experiments to find and verify working
values.
Thanks,
Joe D.
On 02/09/2020 03:40, yaswanth kumar wrote:
I got some understanding now
Hi everybody, sorry in advance if I’m using the mailing list wrong, this is the
first time I’m attempting such a thing.
To all you gophers out there we at Mecenat, have been working at a new solr
client wrapper with focus on single solr instance usage, that supports the
search API, schema API
are the numbers are the numbers. More importantly, I
> have run large imports ~0.5M docs and I have watched as that progresses. My
> crawler paces material into Solr. Memory usage (Linux "top") shows cyclic
> small rises and falls, peaking at about 2GB as the crawler introdu
This is misleading and not particularly good advice.
Solr 8 does NOT contain G1. G1GC is a feature of the JVM. We’ve been using
it with Java 8 and Solr 6.6.2 for a few years.
A test with eighty documents doesn’t test anything. Try a million documents to
get Solr memory usage warmed up.
GC_TUNE
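Since G1 is supplied by the JVM rather than by Solr, it is switched on through the GC_TUNE variable in solr.in.sh. A minimal illustrative fragment follows; the specific flags and values are assumptions to be validated against your own heap size and GC logs, not a recommendation from this thread:

```shell
# solr.in.sh: pass G1 flags to the JVM that runs Solr
GC_TUNE="-XX:+UseG1GC \
  -XX:+ParallelRefProcEnabled \
  -XX:MaxGCPauseMillis=250"
```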
eap zone). This
> causes heap usage to continuously grow and reduce.
>
> Regards
>
> Dominique
>
>
>
>
> On Tue, 1 Sep 2020 at 13:50, yaswanth kumar wrote:
>
>> Can someone make me understand on how the value % on the column Heap is
>> cal
Can someone help me understand how the % value in the Heap column is
calculated?
I created a new Solr cloud with 3 Solr nodes and one ZooKeeper. It is not yet
live, neither in terms of indexing nor searching, but I do see some spikes in the
HEAP column against nodes when I refresh the page
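Whatever the admin UI's exact data source, a heap percentage of this kind boils down to used/max of the JVM heap; the sketch below shows that arithmetic with made-up numbers (the assumption that the UI reads something like the Metrics API's memory.heap.used and memory.heap.max may vary by Solr version):

```python
# Derive a Heap-column-style percentage from JVM heap numbers.
def heap_percent(used_bytes, max_bytes):
    """Percentage of the max heap currently in use, one decimal."""
    return round(100.0 * used_bytes / max_bytes, 1)

# e.g. 512 MB used of a 2 GB max heap
pct = heap_percent(536870912, 2147483648)
```

Spikes on an idle cluster are normal: background threads allocate garbage, so used-heap rises until a GC cycle drops it again, exactly the grow-and-reduce pattern described in the reply.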
01 September 2020, Apache Solr™ 8.6.2 available
The Lucene PMC is pleased to announce the release of Apache Solr 8.6.2
Solr is the popular, blazing fast, open source NoSQL search platform from
the Apache Lucene project. Its major features include powerful full-text
search, hit highlighting
Hi,
I had come across a mail (an Oct 2019 one) which suggested the best way is to
handle it before it reaches Solr. I was curious whether:
1. Jetty query filter can be used (came across something like
that, need to check)
2. Any new features in Solr itself (like in a request handler
Good afternoon.
In Solr 8.6 Guide, in chapter "Securing Solr", there is section "Enable IP
Access Control". Here I can theoretically uncomment and edit content of
solr.in.sh commands: SOLR_IP_WHITELIST, SOLR_IP_BLACKLIST.
When I did this, launching local Solr server takes to
Hi,
Can you provide more information: Solr version, how are you indexing (DIH,
threading, ...), more details in Solr logs?
Did you analyse JVM GC logs?
Regards
Dominique
On Fri, 28 Aug 2020 at 22:53, amit3281 wrote:
> Hi,
>
>
>
> I am using Solr on EXT4 partitio
Hi,
I am using Solr on an EXT4 partition and have TLOG replicas in my collection. I
am using 2 Solr nodes to utilize 2 disks (for getting IOPS) for the same
collection. My collection has 150 shards. Each shard size is ~9GB with
48 million docs per shard.
My shards frequently go into recovery with error
Thank you Jan,
your solution is also super easy; I was not aware of that. Thanks for
letting me know, it solves another use case for us.
Yes, we use the REST API, but since we use Solr as a Docker image I feel
uneasy committing the initial password into the image.
We came up with the following
not using it..)
I'm currently using Solr 8.1.1 in production and I use the Schema API to
create the necessary fields before starting to index my new data. (Reason,
the managed-schema would be big for me to take care of and I decided to
automate this process by using the REST API).
I started trying
Cool, it’s even easier than my old Java tool:
https://github.com/cominvent/solr-tools
<https://github.com/cominvent/solr-tools>
Also, I can recommend using the authentication REST API to add users instead of
hardcoding. The API takes care of the encoding for you!
Jan
> 27. aug. 20
Thanks for reporting back. I think perhaps we don’t claim to fully support
FreeBSD.
Feel free to submit a PullRequest though if you believe you have a working
FreeBSD setup.
Jan
> On 26 Aug 2020 at 14:23, Patrik Peng wrote:
>
> Followup regarding the bin/solr issue for anyone run
Hi,
Which version of Solr are you talking about?
You should have a log4j.xml or log4j2.xml file in Solr resources that you can
customize to your needs.
At least that's the way we use to write JSON logs. Maybe there are other
options that I don't know.
Gaël,
De : fidiv...@gmail.com
Envoyé
Is this a Solr-side message? Looks like dovecot doing proactive
trimming of some crazy long header.
You can lookup the record by UID in the Admin UI (UID=153535 instead
of *:*) to check what is being indexed. Check that dovecot does not do
any prefixing of field names (any record from first
It works now! You were right, the files were in a different place.
One last question:
I got this error:
400/49727doveadm(fran...@francisaugusto.com): Warning:
fts-solr(fran...@francisaugusto.com): Mailbox All Mail UID=153535 header
size is huge, truncating
Ok, you may want to step back and do a basic Solr example (download the
matching version tgz file, decompress; "bin/solr -e techproducts" is a
good one; you may need to shut down the other Solr or give it a different port
with the -p flag). Just so you know what you are looking at before dovecot starts
to intro
:152)
at org.eclipse.jetty.server.ha
Best,
Francis
On 2020-08-27 20:58, Francis Augusto Medeiros-Logeay wrote:
Hi Alex and Erick,
Thanks for helping out.
True, restarting solr recreated the directory, but I still get 500
internal errors when reindexing from Dovecot.
Just to be clear: I delete the Data directory inside the
solr/data/dovecot directory.
All the directories are owned by solr:solr, so
Uhm right. I may have forgotten to mention that you do need to reload
the core or maybe restart Solr server as well. If you literally just
deleted the index, Solr is probably freaking out about suddenly gone
files. It needs to redo the path of "is this the first time or do I
reopen the in
under the
"/var/solr/data/dovecot/data", and it didn't get recreated.
makes me _strongly_ suspect that your Solr data directories aren’t where you
think they are.
Hmmm, you start with: sudo -u solr /opt/solr/bin/solr create -c dovecot
which makes me wonder if this is a per
...@francisaugusto.com): Error: Mailbox All Mail:
Transaction commit failed: FTS transaction commit failed: backend deinit
And so on. On Solr I see that I get two errors for each mailbox:
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher
2020 at 12:58, Francis Augusto Medeiros-Logeay
wrote:
Hi,
I have - for a long time now - hoped to use an fts engine with dovecot.
My dovecot version is 2.3.7.2 under Ubuntu 20.04.
I installed solr 7.7.3 and then 8.6.0 to see if this was a
version-related error. I copied the schema from 7.7.0 as many people
said this was fine.
I get
Hello,
We want to receive Solr logs in DataDog. Configured and all good, but
the logs are ugly, not parsed and not really useful.
Anyone knows a way to send the logs from Solr in JSON format?
Thank you.
I was a little annoyed by the default "SolrRocks" password, so I wrote a
little utility to generate Solr passwords for the Basic Authentication
plugin and made it available online.
The password encoder is written in simple plain JavaScript; there is no
need to install or downloa
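The utility above is JavaScript; for illustration, here is the same credential format in Python as I understand Solr's Sha256AuthenticationProvider stores it in security.json: base64(sha256(sha256(salt + password))), a space, then base64(salt). Verify against your Solr version before relying on it.

```python
# Sketch of Solr BasicAuth credential hashing for security.json.
import base64
import hashlib
import os

def solr_password_hash(password, salt=None):
    """Return 'base64(hash) base64(salt)' in Solr's stored format."""
    salt = salt if salt is not None else os.urandom(32)
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    digest = hashlib.sha256(digest).digest()  # Solr applies SHA-256 twice
    return "%s %s" % (base64.b64encode(digest).decode(),
                      base64.b64encode(salt).decode())

def check_password(password, stored):
    """Recompute with the stored salt and compare."""
    _hash_b64, salt_b64 = stored.split(" ")
    salt = base64.b64decode(salt_b64)
    return solr_password_hash(password, salt) == stored
```

Generating the value yourself avoids pasting plaintext passwords into third-party web tools.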
Hi,
You can also connect to ZK element and use zkCli.sh tools
http://www.mtitek.com/tutorials/zookeeper/zkCli.php
Regards
Dominique
On Thu, 27 Aug 2020 at 17:28, Webster Homer <
webster.ho...@milliporesigma.com> wrote:
> I am using solr 7.7.2 solr cloud
>
> We version
Hello,
I can't find anything in the docs to understand how Solr sorts suggest results
when the weight is the same (0 in my case).
Here is my suggester config:
---
mySuggester
AnalyzingInfixLookupFactory
DocumentDictionaryFactory
autocomplete
payload
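As the question implies, the order among equal-weight suggestions does not appear to be contractually defined in the docs. One workaround is a deterministic client-side re-sort; sorting alphabetically within equal weights is an assumption about what is sensible for a UI, not documented Solr behavior:

```python
# Deterministic ordering for suggester hits whose weights tie at 0:
# highest weight first, then term alphabetically.
def sort_suggestions(suggestions):
    return sorted(suggestions, key=lambda s: (-s["weight"], s["term"]))

hits = [{"term": "banana", "weight": 0},
        {"term": "apple", "weight": 0},
        {"term": "apricot", "weight": 5}]
ordered = sort_suggestions(hits)
```

Alternatively, assigning distinct non-zero weights in the source documents lets the suggester itself impose the order.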
Never mind I figured out my problem.
-Original Message-
From: Webster Homer
Sent: Thursday, August 27, 2020 10:29 AM
To: solr-user@lucene.apache.org
Subject: Odd Solr zkcli script behavior
I am using solr 7.7.2 solr cloud
We version our collection and config set names with dates. I
I am using solr 7.7.2 solr cloud
We version our collection and config set names with dates. I have two
collections sial-catalog-product-20200711 and sial-catalog-product-20200808. A
developer uploaded a configuration file to the 20200711 version that was not
checked into our source control
Hi,
There were a few discussions about similar issues recently. A JIRA issue
was created:
https://issues.apache.org/jira/browse/SOLR-14768
Regards
Dominique
On Thu, 27 Aug 2020 at 15:00, Divino I. Ribeiro Jr. <
divinoirj.ib...@gmail.com> wrote:
> Hello everyone!
> When I
Hi,
Which Solr version?
Restart which node? Solr? ZK? Only one node?
Collections are missing in the Solr console (lost in Zookeeper) but cores are
still present?
Why put Zk data and datalog in a "temp" directory
(dataDir=/applis/24374-iplsp-00/IPLS/apache-zookeeper-3.5.
Any logfiles after restart?
Which Solr version?
I would activate autopurge in Zookeeper
> On 27.08.2020 at 10:49, "antonio.di...@bnpparibasfortis.com" wrote:
>
> Good morning,
>
>
> I would like to get some help if possible.
>
>
>
Good morning,
I would like to get some help if possible.
We have a 3 node Solr cluster (ensemble) with apache-zookeeper 3.5.5.
It works fine until we need to restart one of the nodes. Then all the content
of the collection gets deleted.
This is a production environment, and every time
Hi,
I'm trying to add distributed tracing to Solr following this document:
https://lucene.apache.org/solr/guide/8_6/solr-tracing.html
I downloaded the latest version, 8.6.1, and edited the
./server/solr/solr.xml file with the tracerConfig as listed in the
document. I have a Jaeger backend
end on index size of form collection?
>
> Regards,
> Vishal Patel
>
> From: Erick Erickson
> Sent: Wednesday, August 26, 2020 5:36 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Slow commit in Solr 6.1.0
>
> It depends on how the
, 2020 5:36 PM
To: solr-user@lucene.apache.org
Subject: Re: Slow commit in Solr 6.1.0
It depends on how the commit is called. You have openSearcher=true, which means
the call
won’t return until all your autowarming is done. This _looks_ like it might be
a commit
called from a client, which you should
Hello ,
I opened a bug issue not knowing it's not the correct place to ask the
question at hand,
so I was directed to send an e-mail to the mailing list; hopefully I'm
correct this time.
Here's a link to the issue opened:
https://issues.apache.org/jira/browse/SOLR-14780?page
Followup regarding the bin/solr issue for anyone running Solr on FreeBSD.
The script uses "ps auxww | grep ..." in various places, like:
SOLR_PROC=`ps auxww |grep -w $SOLR_PID|grep start\.jar |grep jetty\.port`
For reasons unknown to me, FreeBSD's "ps auxww" truncates the C
020, at 5:29 AM, vishal patel
> wrote:
>
> I am using solr 6.1.0. We have 2 shards and each has one replica.
> When I checked shard1 log, I found that commit process was going to slow for
> some collection.
>
> Slow commit:
> 2020-08-25 09:08:10.328 INFO (commitSched
I am using Solr 6.1.0. We have 2 shards and each has one replica.
When I checked the shard1 log, I found that the commit process was slow for
some collections.
Slow commit:
2020-08-25 09:08:10.328 INFO (commitScheduler-124-thread-1) [c:forms s:shard1
r:core_node1 x:forms
Hi Joe,
Yes I had made these changes for getting HDFS to work with Solr. Below are
config changes which I carried out:
Changes in solr.in.cmd
set SOLR_OPTS=%SOLR_OPTS% -Dsolr.directoryFactory=HdfsDirectoryFactory
set SOLR_OPTS
Hello,
I can't find anything in the
docs<https://lucene.apache.org/solr/guide/7_2/suggester.html> to understand how
Solr sorts suggest results when the weight is the same (0 in my case).
Here is my suggester config:
---
mySuggester
AnalyzingInfixLookupF
Thanks for your input regarding SOLR-14711, that makes sense.
I wasn't able to reproduce the bin/solr script issue on a Debian
machine, so I guess there's something wrong with my setup.
Patrik
On 24.08.20 17:26, Jan Høydahl wrote:
> I think you’re experiencing this:
>
ardinality. But if it does not, then all bets are off.
Regards,
Alex.
P.s. I added this question to SOLR-12298, as I don't think I know
enough about this part of Solr to judge.
On Mon, 24 Aug 2020 at 02:28, Munendra S N wrote:
>
> >
> > Interestingly, I was forced to add children as an arra
o create core
[newcollsolr2_shard1_replica_n1] Caused by: Illegal char <:> at index
4:
hdfs://hn1-pjhado.tvbhpqtgh3judk1e5ihrx2k21d.tx.internal.cloudapp.net:8020/user/solr-data/newcollsolr2/core_node3/data\
<http://hn1-pjhado.tvbhpqtgh3judk1e5ihrx2k21d.tx.internal.cloudapp.net:8020/user/solr-data/newcol
I think you’re experiencing this:
https://issues.apache.org/jira/browse/SOLR-14711
No idea why the bin/solr script won’t work with SSL...
Jan
> On 24 Aug 2020 at 15:52, Patrik Peng wrote:
>
> Greetings
>
> I'm in the process of setting up a SolrCloud cluster with 3 Zookeep
Greetings
I'm in the process of setting up a SolrCloud cluster with 3 Zookeeper
and 3 Solr nodes on FreeBSD and wish to enable SSL between the Solr nodes.
Before enabling SSL, everything worked as expected and I followed the
instructions described in the Solr 8.6 docs
<https://lucene.apache.
Hi Community members,
I tried the following approaches but none of them worked for my use case.
1. For achieving exact match in Solr we have to keep sow=false (Solr will
use field-centric matching mode) and group multiple similar fields into
one copy field. It does solve the problem
this is consistent with the data disappearing from Zookeeper due
to misconfiguration and/or some external process removing it when
you reboot.
So here’s what I’d do next:
Go ahead and reboot. You do _not_ need to start Solr to run bin/solr
scripts, and among them are
bin/solr zk ls -r / -z
exactly what you uncommented. I doubt you uncommented them
>> one by one and tried everything, so you leave us guessing. Uncommenting
>> SOLR_HOME for instance would be shooting yourself in the foot since Solr
>> wouldn’t know where to start. Uncommenting some of the authorization pa
On 8/24/2020 12:46 AM, Wang, Ke wrote:
We are using Apache SOLR version 8.4.4.0. The project is planning to
upgrade the Linux server from Oracle Enterprise Linux (Red Hat
Enterprise Linux) 6 to OEL 7. As I was searching on the Confluence
page and was not able to find the information, can I
Yes, there should be no issues upgrading to RHEL7.
I assume you mean Solr 8.4.0. You can also use the latest Solr version.
Why not RHEL8?
> On 24.08.2020 at 09:02, Wang, Ke wrote:
>
Hi there,
We are using Apache SOLR version 8.4.4.0. The project is planning to upgrade
the Linux server from Oracle Enterprise Linux (Red Hat Enterprise Linux) 6 to
OEL 7. As I was searching on the Confluence page and was not able to find the
information, can I please confirm if:
* Apache
d return
> type and having multi-path handling. That's not what Solr does for
> string class (tested). Is that a known issue?
>
> https://github.com/arafalov/SolrJTest/blob/master/src/com/solrstart/solrj/Main.java#L88-L89
Not sure about this. Maybe we might need to check in Dev list or Sl
Hi Erick,
Here is the latest error that I captured, which seems to show the cores
actually being deleted (I noticed that the core folders under the path
../solr/server/solr were deleted one by one when the server came back from
reboot)
2020-08-24 04:41:27.424 ERROR
(coreContainerWorkExecutor-2
mented. I doubt you uncommented them
> one by one and tried everything, so you leave us guessing. Uncommenting
> SOLR_HOME for instance would be shooting yourself in the foot since Solr
> wouldn’t know where to start. Uncommenting some of the authorization parameters
> without providing th
Well, first show exactly what you uncommented. I doubt you uncommented them one
by one and tried everything, so you leave us guessing. Uncommenting SOLR_HOME
for instance would be shooting yourself in the foot since Solr wouldn’t know
where to start. Uncommenting some of the authorization
means the
query code has to be a lot more careful about checking field return
type and having multi-path handling. That's not what Solr does for
string class (tested). Is that a known issue?
https://github.com/arafalov/SolrJTest/blob/master/src/com/solrstart/solrj/Main.java#L88-L89
If I switch
On 22/08/2020 22:08, maciejpreg...@tutanota.com.INVALID wrote:
Good morning.
When I uncomment any of the commands in solr.in.sh, Solr doesn't run. What do I
have to do to fix the problem?
Best regards,
Maciej Pregiel
Good morning.
When I uncomment any of the commands in solr.in.sh, Solr doesn't run. What do I
have to do to fix the problem?
Best regards,
Maciej Pregiel
Hi Alex,
Fixing the documentation for nested docs is currently in progress. More
context is available in this JIRA -
https://issues.apache.org/jira/browse/SOLR-14383.
https://github.com/arafalov/SolrJTest/blob/master/src/com/solrstart/solrj/Main.java
The child doc transformer needs
Hello,
I am trying to get up to date with both SolrJ and Nested Document
implementation and not sure where I am failing with a basic test
(https://github.com/arafalov/SolrJTest/blob/master/src/com/solrstart/solrj/Main.java).
I am using Solr 8.6.1 with a core created with bin/solr create -c
solrj
is the default for embedded ZK.
All that said, nothing in Solr just deletes all this. The fact that you only
saw this on reboot is highly suspicious, some external-to-Solr process,
anything from a startup script to restoring a disk image to…. is removing that
data I suspect.
Best,
Erick
solr and my external zoo are pointing
to the same and have same configs if it matters.
Sent from my iPhone
> On Aug 22, 2020, at 9:07 AM, Erick Erickson wrote:
>
> Sounds like you didn’t change Zookeeper data dir. Zookeeper defaults to
> putting its data in /tmp/zookeeper, see t
with only a single replica.
3> shut down all your Solr instances
4> copy the data directories you saved in <2>. You _MUST_ copy to corresponding
shards. The important bit is that a data directory from collection1_shard1 goes
back to collection1_shard1. If you copy it back to collectio
Can someone help me with the below issue?
I have configured Solr 8.2 with one Zookeeper 3.4 and 3 Solr nodes.
All the configs were pushed initially, and I also indexed all the data into
multiple collections with 3 replicas on each collection.
Now as part of server maintenance these solr nodes were
autogeneratePhraseQueries to kick in? (i.e., would any of the
whitespace-separated tokens in your phrases be further split by your
Solr-internal analysis chain -- WordDelimiter, (Solr-internal)
Synonym, etc.?). Would you be able to share the analysis chain on the
relevant fields, and perhaps
I created https://issues.apache.org/jira/browse/SOLR-14768
@Joe Maybe you should add your findings there.
From: Markus Kalkbrenner
Reply-To: "solr-user@lucene.apache.org"
Date: Thursday, 20 August 2020 at 16:24
To: "solr-user@lucene.apache.org"
Subject: R