look. This is almost certainly a mismatch between what you think is happening
and what you’ve actually told Solr to do ;).
Best,
Erick
> On Jul 14, 2020, at 7:05 AM, Villalba Sans, Raúl
> wrote:
>
> Hello,
>
> We have an app that uses SOLR as search engine. We have detected i
Thank you so much for the response. Below are the configs I have in solr.in.sh
and I followed https://lucene.apache.org/solr/guide/8_5/enabling-ssl.html
documentation
# Enables HTTPS. It is implicitly true if you set SOLR_SSL_KEY_STORE. Use this
config
# to enable https module with custom
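For reference, the SSL section of solr.in.sh from the 8.5 enabling-ssl guide generally looks like the sketch below. The file names and passwords here are placeholders, not values taken from this thread:

```shell
# Hedged sketch of the SSL-related solr.in.sh settings (placeholder values).
SOLR_SSL_ENABLED=true
SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.p12
SOLR_SSL_KEY_STORE_PASSWORD=secret
SOLR_SSL_TRUST_STORE=etc/solr-ssl.truststore.p12
SOLR_SSL_TRUST_STORE_PASSWORD=secret
# Client-certificate (mutual TLS) settings -- note the caveat elsewhere in
# this thread: a client can only be identified by a single certificate.
SOLR_SSL_NEED_CLIENT_AUTH=false
SOLR_SSL_WANT_CLIENT_AUTH=false
```

If multiple certificates ended up in one keystore or truststore, that is exactly the kind of detail worth calling out when replying.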
Shawn,
thanks for the extra info.
The OOM errors were indeed because of heap space. In my case most of the GC
runs were not full GCs. Only when the heap was really near the top was a full GC
performed.
I'll try out your suggestion of increasing the G1 heap region size. I've
been using 4m, and from what
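For anyone trying the same thing: the G1 region size is usually set through the GC_TUNE variable in solr.in.sh. A hedged sketch follows; the 8m value is an assumption to experiment with, not a recommendation from this thread:

```shell
# Hypothetical GC_TUNE entry for solr.in.sh: raise the G1 region size
# from the 4m in use to 8m. Tune against your own GC logs.
GC_TUNE="-XX:+UseG1GC \
  -XX:G1HeapRegionSize=8m \
  -XX:+ParallelRefProcEnabled"
```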
the reason being that a
client can only be identified by a single certificate.
Can you share more details about specifically what your solr.in.sh configs
look like related to keystore/truststore and which files? Specifically
highlight which files have multiple certificates in them.
It looks like
I looked at the patch mentioned in the JIRA
https://issues.apache.org/jira/browse/SOLR-14105 reporting the below issue. I
looked at the Solr 8.5.1 code base, and I see the patch is applied. But I am
still seeing the same exception with a different stack trace. The initial
exception stacktrace
Thanks for your reply.
When I searched for my error "org.apache.http.NoHttpResponseException: failed to
respond" in Google, I found this Solr JIRA issue:
https://issues.apache.org/jira/browse/SOLR-7483. I saw a comment from @Erick
Erickson<mailto:erickerick...@gmail.com>.
is thi
Am 13.07.20 um 09:55 schrieb Mithun Seal:
> Hi Team,
>
> Could you please help me with below compatibility question.
>
> 1. We are trying to install zookeeper externally along with SOLR 7.5.0.
> As noted, SOLR 7.5.0 comes with Zookeeper 3.4.11.
Where did you get that info
Hi Team,
Could you please help me with below compatibility question.
1. We are trying to install Zookeeper externally along with SOLR 7.5.0. As
noted, SOLR 7.5.0 comes with Zookeeper 3.4.11. Can I install Zookeeper
3.4.10 with SOLR 7.5.0? Will Zookeeper 3.4.10 be compatible with SOLR 7.5.0?
2
, wondering if this
is fixed for http1 solr client as we pass -Dsolr.http1=true .
Thanks,
Rajeswari
https://issues.apache.org/jira/browse/SOLR-14105
On 7/6/20, 10:02 PM, "Natarajan, Rajeswari"
wrote:
Hi,
We are using Solr 8.5.1 in cloud mode with Java 8. We are ena
After some research I came across the articles below:
1. edismax-and-multiterm-synonyms-oddities/
<https://opensourceconnections.com/blog/2018/02/20/edismax-and-multiterm-synonyms-oddities/>
.
2. apache mail archive
<http://mail-archives.apache.org/mod_mbox/lucene-solr-user/20
Whenever I stop Solr with 'bin/solr stop' or 'sudo service solr stop'
Solr's cores are corrupted and the only way to remedy the situation is to
do a full uninstall and reinstall of all traces of Solr.
Please help, I can't live like this. The error is at the bottom, as it is
long
On 6/25/2020 2:08 PM, Odysci wrote:
I have a solrcloud setup with 12GB heap and I've been trying to optimize it
to avoid OOM errors. My index has about 30million docs and about 80GB
total, 2 shards, 2 replicas.
Have you seen the full OutOfMemoryError exception text? OOME can be
caused by
On 7/10/2020 5:14 AM, mithunseal wrote:
I am new to this SOLR-ZOOKEEPER. I am not able to understand the
compatibility thing. For example, I am using SOLR 7.5.0 which uses ZK
3.4.11. So SOLR 7.5.0 will not work with ZK 3.4.10?
Can someone please confirm this?
According to what the ZooKeeper
seconds) or longer.
Look at a 24 hour graph of heap usage. It should look like a sawtooth,
increasing, then dropping after every full GC. The bottom of the sawtooth
is the memory that Solr actually needs. Take the highest number from
the bottom of the sawtooth and add some extra, maybe 2 GB
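That rule of thumb can be sketched as a small script. The sample readings below are hypothetical heap-after-full-GC values; in practice you would pull them from your GC logs or monitoring:

```shell
# Hypothetical heap-after-full-GC readings in GB -- the sawtooth bottoms.
bottoms="5.2 5.8 6.1 5.5 6.3"
# Highest bottom observed over the window:
peak=$(printf '%s\n' $bottoms | sort -n | tail -1)
# Add roughly 2 GB of headroom, as suggested above (truncated to whole GB):
heap=$(awk -v p="$peak" 'BEGIN { printf "%d", p + 2 }')
echo "-Xms${heap}g -Xmx${heap}g"
```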
nd insert requests are
>> there. I can see that the node is up, but why is it not responding? Due to
>> lack of memory or some other cause?
Why can we not tell from the log the reason it is not responding?
Is there any monitoring for Solr from which we can find the root
I am new to this SOLR-ZOOKEEPER. I am not able to understand the
compatibility thing. For example, I am using SOLR 7.5.0 which uses ZK
3.4.11. So SOLR 7.5.0 will not work with ZK 3.4.10?
Can someone please confirm this?
Thanks,
Mithun Seal
--
Sent from: https://lucene.472066.n3.nabble.com
> Here is my memory snapshot which I have taken from GC.
Yes, I can see that a lot of memory is in use, but the question is why.
I assume caches (are they too large?), perhaps uninverted indexes.
Docvalues would help with latter ones. Do you use them?
> I have tried Solr upgrade from 6.
JVM_memory.PNG <https://drive.google.com/file/d/1LYEdcY9Om_0u8ltIHikU7hsuuKYQPh_m/view>
you indicated that you also run some other software on the same server. Is it
possible that the other processes hog CPU, disk or network and starve Solr?
>> I will check
Thanks for sharing, nice article.
Charlie Hull wrote:
Thought you might enjoy Eric's blog, it's taken him a while! Some good
hints here for those of you interested in contributing more to Solr.
https://opensourceconnections.com/blog/2020/07/10/i-became-a-solr-committer-in-4662-days-heres-how
Thanks, interesting reading
On Fri, Jul 10, 2020 at 11:18 AM Charlie Hull wrote:
> Hi all,
>
> Thought you might enjoy Eric's blog, it's taken him a while! Some good
> hints here for those of you interested in contributing more to Solr.
>
>
> https://opensourceconnections.
Hi all,
Thought you might enjoy Eric's blog, it's taken him a while! Some good
hints here for those of you interested in contributing more to Solr.
https://opensourceconnections.com/blog/2020/07/10/i-became-a-solr-committer-in-4662-days-heres-how-you-can-do-it-faster/
Cheers
Charlie
, you indicated that you also run some other software on
the same server. Is it possible that the other processes hog CPU, disk
or network and starve Solr?
I must add that Solr 6.1.0 is over four years old. You could be hitting
a bug that has been fixed for years, but even if you encounter an issue
I’ve been running Solr for a dozen years and I’ve never needed a heap larger
than 8 GB.
>> What is your data size? The same as ours, 1 TB? Do you search or index
>> frequently? NRT model?
My question is why replica is going into recovery? When replica went down, I
checked GC log
Those are extremely large JVMs. Unless you have proven that you MUST
have 55 GB of heap, use a smaller heap.
I’ve been running Solr for a dozen years and I’ve never needed a heap
larger than 8 GB.
Also, there is usually no need to use one JVM per replica.
Your configuration is using 110 GB (two
On 7/8/2020 3:36 PM, gnandre wrote:
I am using Solr docker image 8.5.2-slim from https://hub.docker.com/_/solr.
I use it as a base image and then add some more stuff to it with my custom
Dockerfile. When I build the final docker image, it is built successfully.
After that, when I try to use
Thanks for reply.
what you mean by "Shard1 Allocated memory”
>> It means JVM memory of one solr node or instance.
How many Solr JVMs are you running?
>> On one server there are 2 solr JVMs: one is the shard and the other is the replica.
What is the heap size for your JVMs?
>> 55GB of
elds into a
single field.
Is there a way we can solve this? Any help would be highly appreciated.
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hi,
I am using Solr docker image 8.5.2-slim from https://hub.docker.com/_/solr.
I use it as a base image and then add some more stuff to it with my custom
Dockerfile. When I build the final docker image, it is built successfully.
After that, when I try to use it in docker-compose.yml (with build
I don’t understand what you mean by "Shard1 Allocated memory”. I don’t know of
any way to dedicate system RAM to an application object like a replica.
How many Solr JVMs are you running?
What is the heap size for your JVMs?
Setting soft commit max time to 100 ms does not magically make
8, 2020 4:23 PM
To: solr-user@lucene.apache.org
Subject: Re: Replica goes into recovery mode in Solr 6.1.0
Hi,
How do you show this? Command for this resume?
*Our collections details are below:
Collection Shard1 Shard1 Replica Shard2 Shard2 Replica
Number of Documents Size(GB
Hi,
How do I enable replication of the model and feature store ?
Thanks
Krishan
Em seg, 6 de jul de 2020 10:41, vishal patel
escreveu:
> I am using Solr version 6.1.0, Java 8 version and G1GC on production. We
> have 2 shards and each shard has 1 replica. We have 3 collections.
> We do not use any caches; they are disabled in solrconfig.xml. Search and
v=$qx mm=2}^10 OR
{!edismax qf=description_l1 manual_tags_l1 v=$qx mm=2} OR {!edismax
qf=description_l2 v=$qx mm=2} )=amul cheese cake
But we observed that the above are still being converted to field centric
queries with mm per field resulting in no match if the words span across
multiple fields.
Thanks for your reply.
One server has 320GB RAM in total. It runs 2 solr nodes: one is shard1 and the
second is the shard2 replica. Each solr node has 55GB of memory allocated. shard1
has 585GB of data and the shard2 replica has 492GB, meaning almost 1TB of data on
this server. The server also runs other applications
Hi,
I want to search Solr for server names in a set of Microsoft Word documents,
PDF, and image files like jpg,gif.
Server names are given by the regular expression(regex)
INFP[a-zA-z0-9]{3,9}
TRKP[a-zA-z0-9]{3,9}
PLCP[a-zA-z0-9]{3,9}
SQRP[a-zA-z0-9]{3,9}
Problem
===
I want to get
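A quick way to sanity-check those patterns outside Solr is plain grep. The sample lines below are made up; note that the original `a-zA-z` range also matches a few punctuation characters, so `a-zA-Z` is what is almost certainly intended:

```shell
# Match the four server-name prefixes followed by 3-9 alphanumerics.
pattern='(INFP|TRKP|PLCP|SQRP)[a-zA-Z0-9]{3,9}'
printf 'host INFPabc123 up\nnothing here\nbackup TRKPweb01 ok\n' \
  | grep -oE "$pattern"
# -> INFPabc123
# -> TRKPweb01
```

Inside Solr, the same expressions would only work against a field whose analysis keeps the names as single tokens.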
Hi Swetha,
The given URL is encoded, so you can decode it before analyzing it. The plus
character represents whitespace in an encoded URL, and the minus sign
represents a negated clause in Solr.
Kind Regards,
Furkan KAMACI
On Tue, Jul 7, 2020 at 9:16 PM swetha vemula
wrote:
> Hi,
>
> I ha
Hi,
I have a URL and I want to break it down and run it in the admin console,
but I am not sure what ++ and - represent in the query.
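To see this concretely, an encoded query string can be decoded in the shell. The sample query here is made up, and the `\xHH` escapes need bash's builtin printf:

```shell
# Hypothetical encoded query: %2B is an escaped '+', a bare '+' is an
# encoded space, and '-' negates a clause in Solr.
enc='q=%2Btitle%3A%22hello+world%22+-archived'
# Turn '+' into spaces, then %HH into \xHH escapes, then expand them.
dec=$(printf '%b' "$(printf '%s' "$enc" | sed 's/+/ /g; s/%/\\x/g')")
echo "$dec"   # -> q=+title:"hello world" -archived
```

After decoding, the leading `+` (a required clause) and `-` (a prohibited clause) are ordinary Lucene query syntax.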
huge JVMs?
The servers will be doing a LOT of disk IO, so look at the read and
write iops. I expect that the solr processes are blocked on disk reads
almost all the time.
"-Dsolr.autoSoftCommit.maxTime=100” is way too short (100 ms).
That is probably causing your outages.
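A hedged solr.in.sh sketch of a saner interval; the 60 seconds used here is an arbitrary example, not advice from this thread, so pick what your NRT requirements actually need:

```shell
# Raise autoSoftCommit from 100 ms to 60 s; the value is illustrative only.
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=60000"
```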
wunder
Walter Und
Is anyone looking at my issue? Please guide me.
Regards,
Vishal Patel
From: vishal patel
Sent: Monday, July 6, 2020 7:11 PM
To: solr-user@lucene.apache.org
Subject: Replica goes into recovery mode in Solr 6.1.0
I am using Solr version 6.1.0, Java 8 version
Hi Eric, Toke,
Can you please look at the details shared in my trail email & respond with your
suggestions/feedback?
Thanks & Regards,
Vinodh
From: Kommu, Vinodh K.
Sent: Monday, July 6, 2020 4:58 PM
To: solr-user@lucene.apache.org
Subject: RE: Time-out errors while indexing (So
Hi,
We are using Solr 8.5.1 in cloud mode with Java 8. We are enabling TLS with
http1 (as we get a warning java 8 + solr 8.5 SSL can’t be enabled) and we get
below exception
2020-07-07 03:58:53.078 ERROR (main) [ ] o.a.s.c.SolrCore
null:org.apache.solr.common.SolrException: Error
I am using Solr version 6.1.0, Java 8 and G1GC on production. We have 2
shards and each shard has 1 replica. We have 3 collections.
We do not use any caches; they are disabled in solrconfig.xml. Search and update
requests come in frequently on our live platform.
*Our commit
whereas remaining 20 collections in the
cluster holds 864M docs only which gives the total docs in the cluster is 6.3B
docs
On hardware side, cluster sits on 6 solr VMs, each VMs has 170G total memory
(with 2 solr instances running per VM), 16 vCPUs and each solr JVM runs with
31G heap. R
the complete picture. Things get complicated when
> you mix it with stored, so that you have "stored=true docValues=true".
> There's an article about that at
>
>
> https://sease.io/2020/03/docvalues-vs-stored-fields-apache-solr-features-and-performance-smackdown.html
>
Here's my PR, which includes some edits to the ref guide docs where I tried
to clarify these settings a little too.
https://github.com/apache/lucene-solr/pull/1651
~ David
On Sat, Jul 4, 2020 at 8:44 AM Nándor Mátravölgyi
wrote:
> I guess that's fair. Let's have hl.fragsizeIsMinimum=t
iPhone
> On 4 Jul 2020, at 14:37, Erick Erickson wrote:
>
> You need more shards. And, I’m pretty certain, more hardware.
>
> You say you have 13 billion documents and 6 shards. Solr/Lucene has an
> absolute upper limit of 2B (2^31) docs per shard. I don’t quite know how
>
Short answer: no
Neither Solr nor ElasticSearch have such capabilities out of the box.
Solr does have a plugin infrastructure that enables you to provide
better tokenization based on language rules, and some are better
than others.
I saw for example integration of openNLP here:
https
You need more shards. And, I’m pretty certain, more hardware.
You say you have 13 billion documents and 6 shards. Solr/Lucene has an absolute
upper limit of 2B (2^31) docs per shard. I don’t quite know how you’re running
at all unless that 13B is a round number. If you keep adding documents
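The arithmetic behind that ceiling, taking the 13B figure at face value:

```shell
# Lucene's hard per-shard ceiling cited above: 2^31 docs.
max_per_shard=$(( 1 << 31 ))          # 2147483648
docs=13000000000
# Ceiling division: the minimum shard count just to stay under the limit.
min_shards=$(( (docs + max_per_shard - 1) / max_per_shard ))
echo "$min_shards"   # -> 7
```

So 6 shards cannot legally hold 13B docs at all, and even 7 leaves each shard right at the limit with no room to keep adding documents.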
I guess that's fair. Let's have hl.fragsizeIsMinimum=true as default.
On 7/4/20, David Smiley wrote:
> I doubt that WORD mode is impacted much by hl.fragsizeIsMinimum in terms of
> quality of the highlight since there are vastly more breaks to pick from.
> I think that setting is more useful in
Hi Eric,
There are a total of 6 VMs in the Solr cluster and 2 nodes run on each VM.
The total number of shards is 6, with 3 replicas. I can see the index size is more
than 220GB on each node for the collection where we are facing the performance
issue.
The more documents we add to the collection
following question? We are using Solr search
> in our Organisation and now checking whether Solr provides search
> capabilities like Google Enterprise search(Google Knowledge Graph Search).
>
> 1. Does Solr Search provide Voice Search like Google?
> 2. Does Solr Search provide NLP
Dear Team,
Hope you all are doing well.
Can you please help with the following question? We are using Solr search
in our Organisation and now checking whether Solr provides search
capabilities like Google Enterprise search(Google Knowledge Graph Search).
1. Does Solr Search provide Voice Search
I doubt that WORD mode is impacted much by hl.fragsizeIsMinimum in terms of
quality of the highlight since there are vastly more breaks to pick from.
I think that setting is more useful in SENTENCE mode if you can stand the
perf hit. If you agree, then why not just let this one default to "true"?
oving some of
>>> your replicas to them.
>>>
>>> Best,
>>> Erick
>>>
>>>> On Jul 3, 2020, at 7:14 AM, Toke Eskildsen wrote:
>>>>
>>>>> On Thu, 2020-07-02 at 11:16 +, Kommu, Vinodh K. wrote:
>>>>> We ar
>>> On Jul 3, 2020, at 7:14 AM, Toke Eskildsen wrote:
>>>
>>>> On Thu, 2020-07-02 at 11:16 +, Kommu, Vinodh K. wrote:
>>>> We are performing QA performance testing on couple of collections
>>>> which holds 2 billion and 3.5 billion docs
on couple of collections
>>> which holds 2 billion and 3.5 billion docs respectively.
>>
>> How many shards?
>>
>>> 1. Our performance team noticed that read operations are pretty
>>> more than write operations like 100:1 ratio, is this expected during
>&
Since the issue seems to be affecting the highlighter differently
based on which mode it is using, having different defaults for the
modes could be explored.
WORD may have the new defaults as it has little effect on performance
and it creates nicer highlights.
SENTENCE should have the defaults
I think we should flip the default of hl.fragsizeIsMinimum to be 'true',
thus have the behavior close to what preceded 8.5.
(a) it was very recently (<= 8.4) the previous behavior and so may require
less tuning for users in 8.6 henceforth
(b) it's significantly faster for long text -- seems to be
ryone will go looking through the javadoc to see if this
> is
> > implied.
>
> This is in the ref guide. Section DocValues. Here's the quote:
>
> DocValues are only available for specific field types. The types chosen
> determine the underlying Lucene
> docValue type t
> implied.
This is in the ref guide. Section DocValues. Here's the quote:
DocValues are only available for specific field types. The types chosen
determine the underlying Lucene
docValue type that will be used. The available Solr field types are:
• StrField, and UUIDField:
◦ If the field is single-valued
If you feel strongly that Solr needs to keep up the Maven bits
up to date, you can volunteer to help maintain it, Solr is
open source after all.
> On Jul 3, 2020, at 12:08 AM, Ali Akhtar wrote:
>
> I had to add an additional repository to get the failing dependency to
> resolve:
>> more than write operations like 100:1 ratio, is this expected during
>> indexing or solr nodes are doing any other operations like syncing?
>
> Are you saying that there are 100 times more read operations when you
> are indexing? That does not sound too unrealistic as the disk cache
e maintained for
> values in the insertion-order.
>
> Is this correct?
Sorta, but it is not the complete picture. Things gets complicated when
you mix it with stored, so that you have "stored=true docValues=true".
There's an article about that at
https://sease.io/2020/03/docvalu
e than write operations like 100:1 ratio, is this expected during
> indexing or solr nodes are doing any other operations like syncing?
Are you saying that there are 100 times more read operations when you
are indexing? That does not sound too unrealistic as the disk cache
might be fille
ent request so if could please help me on this by
>> today it will be highly appreciated.
>>
>> Thanks & Regards,
>> Gautam Kanaujia
>>
>>> On Thu, Jul 2, 2020 at 7:49 PM Gautam K wrote:
>>>
>>> Dear Team,
>>>
>>> Ho
naujia
>
> On Thu, Jul 2, 2020 at 7:49 PM Gautam K wrote:
>>
>> Dear Team,
>>
>> Hope you all are doing well.
>>
>> Can you please help with the following question? We are using Solr search in
>> our Organisation and now checking whether Solr
Hi
Thanks Erick and Walter for your response.
Solr Version Used : 6.5.0
I have tried to elaborate on the issue:
Case 1 : Search String : Industrial Electric Oven
Results=945
Case 2 : Search String : Dell laptop bags
Results=992
In both cases above, mm plays its role. (match
order of values within a multivalued field should match the
> insertion
> >> order. -- we certainly rely on that in our product.
> >>
> >> Order is guaranteed to be maintained for values in a multi-valued field.
> >>>
> >>
> >>
> https://lucene.
> > On Thu, Jul 2, 2020 at 8:08 PM Colvin Cowie
> > wrote:
> >
> >> The order of values within a multivalued field should match the
> insertion
> >> order. -- we certainly rely on that in our product.
> >>
> >> Order is guaranteed to be main
Anyone has any thoughts or suggestions on this issue?
Thanks & Regards,
Vinodh
From: Kommu, Vinodh K.
Sent: Thursday, July 2, 2020 4:46 PM
To: solr-user@lucene.apache.org
Subject: Time-out errors while indexing (Solr 7.7.1)
Hi,
We are performing QA performance testing on couple of collect
s, they’re there as a
> > convenience, so there may still
> > be issues in future.
> >
> > > On Jul 2, 2020, at 1:27 AM, Ali Akhtar wrote:
> > >
> > > If I try adding solr-core to an existing project, e.g (SBT):
> > >
> >
at 9:06 AM Erick Erickson
wrote:
> How are you sending this to Solr? I just tried 8.5, submitting that doc
> through the admin UI and it works fine.
> I defined “asset_id” with as the same type as your reference_url field.
>
> And does the log on the Solr node that tries to index thi
s guaranteed to be maintained for values in a multi-valued field.
>>>
>>
>> https://lucene.472066.n3.nabble.com/order-question-on-solr-multi-value-field-tp4027695p4028057.html
>>
>> On Thu, 2 Jul 2020 at 18:52, Vincenzo D'Amore wrote:
>>
>>> Hi all,
order. -- we certainly rely on that in our product.
>
> Order is guaranteed to be maintained for values in a multi-valued field.
> >
>
> https://lucene.472066.n3.nabble.com/order-question-on-solr-multi-value-field-tp4027695p4028057.html
>
> On Thu, 2 Jul 2020 at 18:52, Vincenz
The order of values within a multivalued field should match the insertion
order. -- we certainly rely on that in our product.
Order is guaranteed to be maintained for values in a multi-valued field.
>
https://lucene.472066.n3.nabble.com/order-question-on-solr-multi-value-field-tp4027695p4028
Hi all,
simple question: Solr float/double multivalue fields preserve the order of
inserted values?
Best regards,
Vincenzo
--
Vincenzo D'Amore
> 2-1 4-30%
>
> When I searched 'bags' as a search string, solr returned 15000 results.
> Query Used :
> http://localhost:8984/solr/core_name/select?fl=title=on=bags=handler1=10=json
>
> And when searched 'books' as a search string, solr returns say 3348 results.
> Query
I think it's better to think of Solr as a piece of infrastructure or
component for you to build these things, rather than a product that has a
lot of capabilities for some specific use case.
So you can find 'lego pieces' to build some of these things, but with Solr
you need to build these things
e, so there may still
> be issues in future.
>
> > On Jul 2, 2020, at 1:27 AM, Ali Akhtar wrote:
> >
> > If I try adding solr-core to an existing project, e.g (SBT):
> >
> > libraryDependencies += "org.apache.solr" % "solr-core" % "8.5.2"
Please let us know what version of Solr you use, otherwise it’s very hard to know
whether you’re running into https://issues.apache.org/jira/browse/SOLR-8812
or similar.
But two things to try:
1> specify q.op
2> specify mm=0%
Best,
Erick
> On Jul 2, 2020, at 1:22 AM, Tushar Aro
There have been some issues with Maven, see:
https://issues.apache.org/jira/browse/LUCENE-9170
However, we do not officially support Maven builds, they’re there as a
convenience, so there may still
be issues in future.
> On Jul 2, 2020, at 1:27 AM, Ali Akhtar wrote:
>
> If I try ad
that read operations are pretty more than
write operations like 100:1 ratio, is this expected during indexing or solr
nodes are doing any other operations like syncing?
2. Zookeeper has a latency of around (min/avg/max: 0/0/2205); can this latency
create instability issues in the ZK or Solr clusters
If I try adding solr-core to an existing project, e.g (SBT):
libraryDependencies += "org.apache.solr" % "solr-core" % "8.5.2"
It fails due a 404 on the dependencies:
Extracting structure failed
stack trace is suppressed; run last update for the full output
stack
Hi,
I have a scenario with following entry in the request handler(handler1) of
solrconfig.xml.(defType=edismax is used)
description category title^4 demand^0.3
2-1 4-30%
When I searched 'bags' as a search string, solr returned 15000 results.
Query Used :
http://localhost:8984/solr/core_name
"docs": [{
    "myID": 123456,
    "valueIwant": "Hello World"
  },
  {
    "myID": 123456,
    "valueIwant": "Hello Planet"
  },
  {
    "myID": 123456,
    "valueIwant": "Hello World123456"
  }]
}
}
]
}
}
Is there a way to do this? I was looking at functions but couldnt find anything
I needed.
Copy of this question can be found
https://stackoverflow.com/questions/62679939/solr-grouping-and-unique-values
Thanks
Nate
Hi,
Maybe https://github.com/sematext/solr-diagnostics can be of use?
Otis
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
On Mon, Jun 29, 2020 at 3:46 PM Erick Erickson
wrote:
> Really look at you
, _lots_ of memory will be collected.
Another rather unlikely scenario, but again worth checking.
Best,
Erick
> On Jun 29, 2020, at 3:27 PM, Ryan W wrote:
>
> On Mon, Jun 29, 2020 at 3:13 PM Erick Erickson
> wrote:
>
>> ps aux | grep solr
>>
>
> [solr@fasp
On Mon, Jun 29, 2020 at 3:13 PM Erick Erickson
wrote:
> ps aux | grep solr
>
[solr@faspbsy0002 database-backups]$ ps aux | grep solr
solr 72072 1.6 33.4 22847816 10966476 ? Sl 13:35 1:36 java
-server -Xms16g -Xmx16g -XX:+UseG1GC -XX:+ParallelRefProcEnabled
-XX:G1HeapRegionS
PM David Hastings
> wrote:
>
>> little nit picky note here, use 31gb, never 32.
>
>
> Good to know.
>
> Just now I got this output from bin/solr status:
>
> "solr_home":"/opt/solr/server/solr",
> "version":"7.7.2 d4c30fc28
On Mon, Jun 29, 2020 at 1:49 PM David Hastings
wrote:
> little nit picky note here, use 31gb, never 32.
Good to know.
Just now I got this output from bin/solr status:
"solr_home":"/opt/solr/server/solr",
"version":"7.7.2 d4c30fc2856154f2c1fefc589eb
ps aux | grep solr
should show you all the parameters Solr is running with, as would the
admin screen. You should see something like:
-XX:OnOutOfMemoryError=your_solr_directory/bin/oom_solr.sh
And there should be some logs laying around if that was the case
similar to:
$SOLR_LOGS_DIR
little nit picky note here, use 31gb, never 32.
On Mon, Jun 29, 2020 at 1:45 PM Ryan W wrote:
> It figures it would happen again a couple hours after I suggested the issue
> might be resolved. Just now, Solr stopped running. I cleared the cache in
> my app a couple times around
It figures it would happen again a couple hours after I suggested the issue
might be resolved. Just now, Solr stopped running. I cleared the cache in
my app a couple times around the time that it happened, so perhaps that was
somehow too taxing for the server. However, I've never allocated so
performance is unpredictable when OOMs happen,
which is the point of the killer script: at least Solr stops rather than do
something inexplicable.
Best,
Erick
> On Jun 29, 2020, at 11:52 AM, David Hastings
> wrote:
>
> sometimes just throwing money/ram/ssd at the problem is j
sometimes just throwing money/ram/ssd at the problem is just the best
answer.
On Mon, Jun 29, 2020 at 11:38 AM Ryan W wrote:
> Thanks everyone. Just to give an update on this issue, I bumped the RAM
> available to Solr up to 16GB a couple weeks ago, and haven’t had any
> prob
Thanks everyone. Just to give an update on this issue, I bumped the RAM
available to Solr up to 16GB a couple weeks ago, and haven’t had any
problem since.
On Tue, Jun 16, 2020 at 1:00 PM David Hastings
wrote:
> me personally, around 290gb. as much as we could shove into them
>
> On
On 28/06/2020 14:42, Erick Erickson wrote:
> We need to draw a sharp distinction between standalone “going away”
> in terms of our internal code and going away in terms of the user
> experience.
It'll be hard to make it completely transparent in terms of user
experience. For instance, there is
Wandering off topic, but still apropos Solr.
On Sun, Jun 28, 2020 at 12:14:56PM +0200, Ilan Ginzburg wrote:
> I disagree Ishan. We shouldn't get rid of standalone mode.
> I see three layers in Solr:
>
>1. Lucene (the actual search libraries)
>2. The server infra (&
of the user
> experience.
>
> Usually when we’re talking about standalone going away, it’s the
> former. The assumption is that we’ll use an embedded ZK that
> fires up automatically so Solr behaves very similarly to how it
> behaves in the current standalone mode just without
How are you sending this to Solr? I just tried 8.5, submitting that doc through
the admin UI and it works fine.
I defined “asset_id” with as the same type as your reference_url field.
And does the log on the Solr node that tries to index this give any more info?
Best,
Erick
> On Jun 27, 2