Hi,
I'm having difficulty configuring JsonLayout for appenders. I have the
following in my log4j2.xml:
%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p (%t) [%X{collection} %X{shard} %X{replica} %X{core}] %c{1.} %m%n
Any help on this?
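For reference, a minimal JsonLayout appender in log4j2.xml might look like the sketch below (file name and log level are illustrative; properties="true" is what carries the MDC keys such as collection/shard/replica/core into the JSON output):

    <Configuration>
      <Appenders>
        <RollingFile name="JsonFile"
                     fileName="${sys:solr.log.dir}/solr.json"
                     filePattern="${sys:solr.log.dir}/solr.json.%i">
          <JsonLayout compact="true" eventEol="true" properties="true"/>
          <Policies>
            <SizeBasedTriggeringPolicy size="32 MB"/>
          </Policies>
        </RollingFile>
      </Appenders>
      <Loggers>
        <Root level="info">
          <AppenderRef ref="JsonFile"/>
        </Root>
      </Loggers>
    </Configuration>

Note that JsonLayout needs the Jackson jars on the classpath (Solr ships them).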
On Wed, Jul 22, 2020 at 4:25 PM Man with No Name
wrote:
> The image is pulled from Docker Hub. After scanning the image from Docker
> Hub, without any modification, this is the list of CVEs we're getting.
>
>
> Image ID CVE Package
Hello folks,
We see similar behavior from time to time. The main difference seems to be
that you see it while using NRT replication and we see it while using TLOG
replication.
* Solr 7.5.0.
* 1 collection with 12 shards, each with 2 TLOG and 2 PULL replicas.
* 12 machines, each machine hosting o
I forgot to mention, the fields being used in the function query are indexed
fields. They are mostly text fields, which cannot have DocValues.
-----Original Message-----
From: Webster Homer
Sent: Thursday, July 23, 2020 2:07 PM
To: solr-user@lucene.apache.org
Subject: RE: How to measure search perf
Hi Erick,
This is an example of a pseudo field:
wdim_pno_:if(gt(query({!edismax qf=searchmv_pno v=$q}),0),1,0)
I get your point that it would only be applied to the results returned and not
to all the results. The intent is to be able to identify which of the fields
matched the search. Our busi
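For reference, a pseudo field like that rides along in the fl parameter; a sketch of a request fragment reusing the names above (id and score are illustrative):

    fl=id,score,wdim_pno_:if(gt(query({!edismax qf=searchmv_pno v=$q}),0),1,0)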
This isn’t usually a cause for concern. Clearing the caches doesn’t necessarily
clear the OS caches for instance. I think you’re already aware that Lucene uses
MMapDirectory, meaning the index pages are mapped to OS memory space. Whether
those pages are actually _in_ the OS physical memory or no
This is a long shot, but look in the overseer queue to see if stuff is stuck.
We ran into that with 6.x.
We restarted the instance that was the overseer and the newly-elected overseer
cleared the queue.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
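For anyone who wants to check the same thing: the overseer's work queue is the /overseer/queue znode, so it can be inspected with the solr CLI (the ZooKeeper host:port is a placeholder):

    bin/solr zk ls /overseer/queue -z zkhost1:2181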
On 7/23/2020 8:56 AM, Porritt, Ian wrote:
Note: the solrconfig has <schemaFactory class="ClassicIndexSchemaFactory"/> defined.
org.apache.solr.common.SolrException: *This IndexSchema is not mutable*.
at
org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUp
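For context, that exception is expected with ClassicIndexSchemaFactory: the AddSchemaFieldsUpdateProcessorFactory in the update chain can only add fields to a mutable, managed schema. If that processor is actually wanted, solrconfig.xml would need something along these lines instead (a sketch):

    <schemaFactory class="ManagedIndexSchemaFactory">
      <bool name="mutable">true</bool>
      <str name="managedSchemaResourceName">managed-schema</str>
    </schemaFactory>

Otherwise, the processor has to come out of the update chain.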
Yes, you should have seen a new tlog after:
- a doc was indexed
- 15 minutes had passed
- another doc was indexed
Well, yes, a leader can be in recovery. It looks like this:
- You’re indexing and docs are written to the tlog.
- Solr shuts down ungracefully, so the segments haven’t been closed. No
> Note that for my previous e-mail you’d have to wait 15 minutes after you
> started indexing to see a new tlog and also wait until at least 1,000 new
> documents after _that_ before the large tlog went away. I don't think that’s
> your issue though.
Indeed I did wait 15 minutes but not sure 1000
Hi,
I'm having the exact same issue. Were you able to resolve this?
Kind regards,
Tijmen
I'm trying to determine the overhead of adding some pseudo fields to one of our
standard searches. The pseudo fields are simply function queries to report if
certain fields matched the query or not. I had thought that I could run the
search without the change and then re-run the searches with th
Hi All,
I made a change to the schema to add new fields in a
collection; this was uploaded to ZooKeeper via the
command below:
For the schema:
solr zk cp file:E:\SolrCloud\server\solr\configsets\COLLECTION\conf\schema.xml zk:/configs/COLLECTION/schema.xml -z SERVERNAME1.uleaf.site
For the
Hmmm, now we’re getting somewhere. Here’s the code block in
DistributedUpdateProcessor
// The commit is applied only when the update log is ACTIVE (or the
// command is a replay of buffered updates); otherwise it is ignored.
if (ulog == null || ulog.getState() == UpdateLog.State.ACTIVE
    || (cmd.getFlags() & UpdateCommand.REPLAY) != 0) {
  super.processCommit(cmd);
} else {
  // e.g. the update log is buffering during recovery
  if (log.isInfoEnabled()) {
    log.info("Ignoring commit
Hello, Solr Community
I am currently using Solr 8.3.0 with SolrCloud mode.
When I took the following steps, I ended up with a super-large
index (approx ), and the process stopped.
1. indexed hundreds of thousands of documents
1.5 one of the SolrCloud servers had around 650GB.
2. updated (indexed) m
Thanks for all the details.
Every time I go back to this article and every time I learn something new (or
should I say, I remember something that I had forgotten!).
The scenario you are describing could match our experience except the last step
"you stop indexing entirely and the tlog never gets ro
Hmmm, this doesn't account for the tlog growth, but a 15-minute hard
commit interval is excessive and accounts for your down time on restart if Solr
is forcefully shut down. I’d shorten it to a minute or less. You also
shouldn’t have any replay if you shut down your Solr gracefully.
Here’s lots of backgro
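A sketch of the solrconfig.xml change being suggested above (60 seconds instead of 15 minutes; openSearcher=false keeps hard commits from affecting search visibility):

    <autoCommit>
      <maxTime>60000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>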
There’s a space between “l” and “oad” in your second doc. Or perhaps it has
markup etc. If you do what I mentioned and use the /terms endpoint to examine
what’s actually in your index, I’m pretty sure you’ll see “l” and “oad” so not
finding it is perfectly understandable.
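A sketch of that kind of /terms request (collection and field names are placeholders):

    http://localhost:8983/solr/mycollection/terms?terms.fl=myfield&terms.prefix=l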
What this is is that how