Yeah, but guide 8.8 is still buggy.
As I reported a month ago, "ICU Normalizer 2 Filter" states:
- NFC: ... Normalization Form C, canonical decomposition
- NFD: ... Normalization Form D, canonical decomposition, followed by canonical
composition
- NFKC: ... Normalization Form KC, compatibility
Hello list,
could it be that the Apache Solr Reference Guide is wrong in all versions?
Example:
https://lucene.apache.org/solr/guide/8_7/filter-descriptions.html#icu-normalizer-2-filter
NFC: (name="nfc" mode="compose") Normalization Form C, canonical decomposition
NFD: (name="nfc"
If you are using multiword synonyms, acronyms, ...
You should escape the spaces within the multiword entries.
As synonyms.txt:
SRN, Stroke\ Research\ Network
IGBP, isolated\ gastric\ bypass
...
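Wiring such a file into a schema analyzer could look roughly like this (a sketch only, not taken from the original thread; field type name is illustrative):

```xml
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- expand=true maps SRN to the full phrase "Stroke Research Network" -->
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
</fieldType>
```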
Regards
Bernd
On 15.01.21 at 10:48, Shaun Campbell wrote:
I have a medical journals search application
AFAIK, that could be a limit in Jetty and be raised in jetty.xml.
You might check the Jetty docs and look for something like BufferSize.
At least for Solr 6.6.x
Regards
Bernd
On 14.01.21 at 13:19, Abhay Kumar wrote:
Thank you Nicolas. Yes, we are making Post request to Solr using SolrNet
different.
You can either use ExactStatsCache that fetches counts for terms before
scoring, so that all replica's use the same counts. Or change the replica
types to TLOG. With TLOG segments are fetched from the leader and thus
identical.
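For the ExactStatsCache option, this should just be a one-line addition to solrconfig.xml (a sketch):

```xml
<!-- fetch global term statistics before scoring so all replicas score identically -->
<statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
```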
Regards,
Markus
On Wed, 13 Jan 2021 at 14:45 Bernd wrote
Hello list,
a question for better understanding scoring of a shard in a cloud.
I see different scores from different replicas of the same shard.
Is this normal and if yes, why?
My understanding until now was that replicas are always the same within a shard
and the same query to each replica
--
*********
Bernd Fehling          Bielefeld University Library
Dipl.-Inform. (FH)     LibTec -
We are using Munin for years now for Solr monitoring.
Currently Munin 2.0.40 and SolrCloud 6.6.
Regards
Bernd
On 20.11.20 at 21:02, Matheo Software wrote:
Hello,
I would like to use Munin to check my Solr 8.7 but it doesn't work. I tried to
configure the Munin plugins without success.
Is
Good to know that Version 6.6.6 is not affected, so I am safe ;-)
Regards
Bernd
On 12.10.20 at 20:38, Tomas Fernandez Lobbe wrote:
> Severity: High
>
> Vendor: The Apache Software Foundation
>
> Versions Affected:
> 6.6.0 to 6.6.5
> 7.0.0 to 7.7.3
> 8.0.0 to 8.6.2
>
> Description:
> Solr
Hi,
because you are using solr.in.cmd I guess you are using Windows OS.
I don't know much about Solr and Windows but you can check your
Windows, Jetty and Solr time by looking at your solr-8983-console.log
file after starting Solr.
First the timestamp of the file itself, then the timestamp of the
It is kept in zookeeper within /configs/[collection_name], at least with my
SolrCloud 6.6.6.
bin/solr zk ls /configs/[your_collection_name]
Regards
Bernd
On 08.09.20 at 21:40, yaswanth kumar wrote:
> Can someone help me on how to persists the data that's updated in
> dataimport.properties
You should _not_ set "-XX:G1HeapRegionSize=n" , because:
"... The goal is to have around 2048 regions based on the minimum Java heap
size"
The value of G1HeapRegionSize is automatically calculated upon start up of the
JVM.
The parameter "-XX:MaxGCPauseMillis=200" is the default.
Where is
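Put another way, in solr.in.sh it should be enough to set the pause goal (if anything at all) and leave the region size to the JVM. A sketch, assuming the standard GC_TUNE variable:

```shell
# solr.in.sh -- do NOT set -XX:G1HeapRegionSize; the JVM derives it from
# the minimum heap size so that there are around 2048 regions
GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=200"
```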
At first glance I see many Deprecations but also many TBD at Package location.
:-(
To understand this right, you kicked out the code and now waiting
for the community to take over and reinvent the wheel?
Or are there any recent plans of the PMC members to at least start
Package locations on
On 15.07.20 at 16:07, Ishan Chattopadhyaya wrote:
> Dear Solr Users,
>
> In this release (Solr 8.6), we have deprecated the following:
>
> 1. Data Import Handler
>
> 2. HDFS support
>
> 3. Cross Data Center Replication (CDCR)
>
Seriously? :-(
So next steps will be kicking out
On 13.07.20 at 09:55, Mithun Seal wrote:
> Hi Team,
>
> Could you please help me with below compatibility question.
>
> 1. We are trying to install zookeeper externally along with SOLR 7.5.0.
> As noted, SOLR 7.5.0 comes with Zookeeper 1.3.11.
Where did you get that info from?
AFAIK, Solr
I'm following this thread now for a while and I can understand
the wish to change some naming/wording/speech in one or the other
programs but I always get back to the one question:
"Is it the weapon which kills people or the hand controlled by
the mind which fires the weapon?"
The thread started
What about “handling” unique IDs across
> two collections do you think might go wrong?
>
> Best,
> Erick
>
>> On May 13, 2020, at 4:26 AM, Bernd Fehling
>> wrote:
>>
>> Dear list,
>>
>> in my SolrCloud 6.6 I have a huge collection and now
Dear list,
in my SolrCloud 6.6 I have a huge collection and now I will get
much more data from a different source to be indexed.
So I'm thinking about a new collection and combine both, the existing
one and the new one with an alias.
But how to handle the unique key accross collections within a
Dear list and mailer admins,
it looks like the mailer of this list needs some care.
Can someone please set this "ART GALLERY" on a black list?
Thank you,
Bernd
On 13.05.20 at 08:47, ART GALLERY wrote:
> check out the videos on this website TROO.TUBE don't be such a
> sheep/zombie/loser/NPC.
+1
And a fully indexed search for the Ref Guide.
I have to use Google to search for info in the Ref Guide of a search engine. :-(
On 29.04.20 at 02:11, matthew sporleder wrote:
> I highly recommend a version selector in the header! I am *always*
> landing on 6.x docs from google.
>
> On Tue,
dquery":"-name_s:a IndexOrDocValuesQuery(age_i:[10 TO 10])",
> "parsedquery_toString":"-name_s:a age_i:[10 TO 10]",
> "QParser":"LuceneQParser",
>
>
>
> 2. id:("1" "2") AND (-name_s:a) # I thi
is as follows, I don't understand whether there is a
> grammatical error ?
> 1. -name_s:a OR age_i:10
>
> 2. id:("1" "2") AND (-name_s:a)
>
>
> At 2020-04-08 16:33:20, "Bernd Fehling"
> wrote:
>> Looks correct to me.
>>
Looks correct to me.
You have to obey the level of the operators and the parenthesis.
Turn debugQuery on to see the results of parsing of your query.
Regards
Bernd
On 08.04.20 at 09:34, slly wrote:
>
>
> If the following query is executed, the result is different:
>
>
> id:("1" "2") AND
ng phase before sending it to Solr.
>
> Regards,
> Munendra S N
>
>
>
> On Fri, Dec 6, 2019 at 2:45 PM Bernd Fehling
> wrote:
>
>> Dear list,
>>
>> for one field I want to change fieldType from string to something
>> equal to string, but only lo
Dear list,
for one field I want to change fieldType from string to something
equal to string, but only lowercase.
currently:
new:
Is this the right replacement for "string"?
Are the attributes for solr.TextField ok?
Regards
Bernd
No, I don't use any highlighting.
On 03.12.19 at 12:28, Paras Lehana wrote:
> Hi Bernd,
>
> Have you gone through Highlighting
> <https://lucene.apache.org/solr/guide/8_3/highlighting.html>?
>
> On Mon, 2 Dec 2019 at 17:00, eli chen wrote:
>
>> yes
>>
In short,
you are trying to use an indexer as a full-text search engine, right?
Regards
Bernd
On 02.12.19 at 12:24, eli chen wrote:
> hi im kind of new to solr so please be patient
>
> i'll try to explain what do i need and what im trying to do.
>
> we a have a lot of books content and we
dump and MemoryAnalyzer.
Regards
Bernd
On 30.09.19 at 09:44, Andrea Gazzarini wrote:
mmm, ok for the core but are you sure things in this case are working per-segment? I would expect a FilterFactory instance per index,
initialized at schema loading time.
On 30/09/2019 09:04, Bernd Fehling
And I think this is per core per index segment.
2 cores per instance, each core with 3 index segments, sums up to 6 times
the 2 SynonymMaps. Results in 12 times SynonymMaps.
Regards
Bernd
On 30.09.19 at 08:41, Andrea Gazzarini wrote:
Hi,
looking at the stateful nature of
You might use the Lucene internal CheckIndex included in lucene core.
It should tell you everything you need. At least a good starting
point for writing your own tool.
Copy lucene-core-x.y.z-SNAPSHOT.jar and lucene-misc-x.y.z-SNAPSHOT.jar
to a local directory.
java -cp
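The truncated command would look something like the following (an illustrative sketch; the actual jar names, versions, and index path depend on your installation):

```shell
# illustrative invocation of Lucene's index checker from the two copied jars
java -cp lucene-core-x.y.z-SNAPSHOT.jar:lucene-misc-x.y.z-SNAPSHOT.jar \
  org.apache.lucene.index.CheckIndex /path/to/index/data/index -verbose
```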
ged-schema to zookeeper?
Thanks.
On Fri, Aug 2, 2019 at 11:53 AM Bernd Fehling <
bernd.fehl...@uni-bielefeld.de> wrote:
to 1) yes, because -Djute.maxbuffer is passed to JAVA as a start parameter.
to 2) I don't know because I never use the internal zookeeper
to 3) the configs are located at
figuration? Or do I have to make changes in
the directory that contains managed-schema and config.xml files with which
I initialized and created collections? And will Solr then pick them up
from there when it restarts?
Regards,
Salmaan
On Thu, Aug 1, 2019 at 5:40 PM Bernd Fehling
wro
ion ever?
Can we make changes when Solr is running in production?
It depends on your system. In my cloud with 5 shards and 3 replicas I can
take one by one offline, stop, modify and start again without problems.
Thanks.
Regards,
Salmaan
On Tue, Jul 30, 2019 at 4:53 PM Bernd Fehl
You have to increase the -Djute.maxbuffer for large configs.
In Solr bin/solr/solr.in.sh use e.g.
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10000000"
This will increase maxbuffer for ZooKeeper on the Solr side to 10MB.
In Zookeeper zookeeper/conf/zookeeper-env.sh
SERVER_JVMFLAGS="$SERVER_JVMFLAGS
I think it is not fair blaming Solr for not also having a load balancer.
It is up to you and your needs to set up the required infrastructure
including load balancing. There are many products available on the market.
If your current system can't handle all requests then install more replicas.
Regards
How about "Pivot (Decision Tree) Faceting"?
http://lucene.apache.org/solr/guide/6_6/faceting.html#Faceting-Pivot_DecisionTree_Faceting
Regards
Bernd
On 24.05.19 at 14:16, Gian Marco Tagliani wrote:
Hi all,
I'm facing a problem with Nested Documents.
To illustrate my problem I'll use the
Have a look at "invariants" for your requestHandler in solrconfig.xml.
It might be an option for you.
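A minimal sketch of what that could look like in solrconfig.xml (handler name and the pinned parameter are just an example):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="invariants">
    <!-- clients can no longer switch faceting on for this handler -->
    <str name="facet">false</str>
  </lst>
</requestHandler>
```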
Regards
Bernd
On 22.05.19 at 22:23, RaviTeja wrote:
Hello Solr Expert,
How are you?
Am trying to ignore faceting for some of the fields. Can you please help me
out to ignore faceting using
Your "sort" parameter has "sort=id+desc,id+desc".
1. It doesn't make sense to sort on "id" in descending order twice.
2. Be aware that the id field has the highest cardinality.
3. To speed up sorting, have a separate field with docValues=true for sorting.
E.g.
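A sorting field with docValues could be declared like this (a sketch; field names are illustrative):

```xml
<!-- dedicated sort field: no index, no stored value, only docValues -->
<field name="id_sort" type="string" indexed="false" stored="false" docValues="true"/>
<copyField source="id" dest="id_sort"/>
```

Then sort with sort=id_sort desc instead of sorting on the id field itself.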
Regards
Bernd
Am
I would say yes.
https://lucene.apache.org/solr/guide/7_3/monitoring-solr-with-prometheus-and-grafana.html
On 30.04.19 at 13:30, shruti suri wrote:
Prometheus with grafana can be used?
Thanks
Shruti Suri
-
Regards
Shruti
--
Sent from:
We use munin with solr plugin but you can also use zabbix with solr plugin.
But there are much more.
Even Oracle has a Monitoring (Java Mission Control with Java Flight Recorder).
Regards,
Bernd
On 30.04.19 at 13:09, shruti suri wrote:
Hi Emir,
Is there any open source tool for
Hi list,
while going to change my JAVA from Oracle to openJDK the big question is
which distribution to take?
Currently we use Oracle JDK Java SE 8 because of LTS.
Next would be JDK Java SE 11 again because of LTS but now we have to
change to openJDK.
Any recommendations about openJDK 11
I have SolrCloud with a collection "test1" with 5 shards 2 replicas across 5
servers.
This cloud is started at port 8983 on each server.
Now I have a second collection "test2" with 5 shards 1 replica across the same
5 servers. But this second collection is started in separate JAVA instances at
Isn't there something about largePageTables which must be enabled
in JAVA and also supported by OS for such huge heaps?
Just a guess.
On 19.03.19 at 15:01, Jörn Franke wrote:
It could be an issue with jdk 8 that may not be suitable for such large heaps.
Have more nodes with smaller heaps (eg
.
And the response from o.a.s.handler.RequestHandlerBase.handleRequest() is
setting rsp.setException(e) which could be used to select logging only
requests which produced an ERROR.
Are there any opinions about this?
Regards
Bernd
On 18.02.19 at 14:43, Bernd Fehling wrote:
Hi list,
logging
Hi list,
logging in solr sounds easy but the problem is logging only errors
and the request which produced the error.
I want to log all 4xx and 5xx http and also solr ERROR.
My request_logs from jetty show nothing useful because of POST requests.
Only that a request got HTTP 4xx or 5xx from
a single node (10s at least) there haven't been reliable stats showing
that it's a performance issue. If you have threshold numbers where
you've seen it make a material difference it'd be great to share them.
And I won't be getting back to this until the weekend, other urgent
stuff has come up...
Best,
Erick
Hi,
assuming you have a fieldType for "text_general" defined in your schema, change
from:
to:
Regards,
Bernd
On 18.01.19 at 11:51, Kranthi Kumar K wrote:
Hi team,
Thank you Erick Erickson ,Bernd Fehling , Jan Hoydahl for your suggested
solutions. I've tried the sugge
Hi Erik,
yes, I would be happy to test any patches.
Good news, I got rebalance working.
After running the rebalance about 50 times with debugger and watching
the behavior of my problem shard and its core_nodes within my test cloud
I came to the point of failure. I solved it and now it works.
HARDUNIQUE command to make sure that <1> above
doesn't happen.
Meanwhile, if there's an alternative approach that's simpler I'd be all
for it.
Best,
Erick
On Wed, Jan 9, 2019 at 1:32 AM Bernd Fehling
wrote:
Yes, your findings are also very strange.
I wonder if we can discover the "in
Have you lost dataDir from all zookeepers?
If not, first take a backup of remaining dataDir and then start that zookeeper.
Take ZooInspector to connect to dataDir at localhost and get your
state.json including all other configs and setting.
On 09.01.19 at 12:25, Yogendra Kumar Soni wrote:
ion I'm making is and we can
test for that too.,
On Tue, Jan 8, 2019 at 1:42 AM Bernd Fehling
wrote:
Hi Erick,
after some more hours of debugging the rough result is: whoever invented
this leader election did not check whether an action returns the expected
result. There are only checks for excep
environment that causes
it to succeed. I've got to get this to fail as a unit test before I
have confidence in any fixes, and also confidence that things
like this will be caught going forward.
Erick
On Fri, Dec 21, 2018 at 3:59 AM Bernd Fehling
wrote:
As far as I could see with debugger
In SolrCloud there are Data Centers.
Your Cluster 1 is DataCenter 1 and your Cluster 2 is Data Center 2.
You can then use CDCR (Cross Data Center Replication).
http://lucene.apache.org/solr/guide/7_0/cross-data-center-replication-cdcr.html
Nevertheless I would give your Cluster 2 another 2
Hi,
I don't know the limits about Solr 4.2.1 but the RefGuide of Solr 6.6
says about Field Types for Class StrField:
"String (UTF-8 encoded string or Unicode). Strings are intended for
small fields and are not tokenized or analyzed in any way.
They have a hard limit of slightly less than 32K."
nceLeaders to randomly
create TLOG and (maybe) PULL replicas so we'd keep covering the
various cases.
Best,
Erick
On Thu, Dec 20, 2018 at 8:06 AM Bernd Fehling
wrote:
Hi Vadim,
I just tried it with 6.6.5.
In my test cloud with 5 shards, 5 nodes, 3 cores per node it missed
one shard to bec
Hi Vadim,
I just tried it with 6.6.5.
In my test cloud with 5 shards, 5 nodes, 3 cores per node it missed
one shard to become leader. But noticed that one shard already was
leader. No errors or exceptions in logs.
May be I should enable debug logging and try again to see all logging
messages from
This question sounds simple but nevertheless it's spinning in my head.
While using Solr 6.6.5 in Cloud mode which has Apache ZooKeeper 3.4.10
in the list of "Major Components" is it possible to use
Apache ZooKeeper 3.4.13 as stand-alone ensemble together with SolrCloud 6.6.5
or do I have to
tted.
On Wed, Nov 28, 2018, 12:54 Bernd Fehling <
bernd.fehl...@uni-bielefeld.de
wrote:
Hi Vadim,
thanks for confirming.
So it seems to be a general problem with Solr 6.x, 7.x and might
be still there in the most recent versions.
But where to start to debug this problem, is it something not
co
with the max setting, but it will have
to do more work to get there. The setting for initial heap size is about the
most useless thing in Java.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
On Dec 4, 2018, at 6:06 AM, Bernd Fehling
wrote:
Hi Danilo
files which build a Synonyms FST for SpellCheckComponent
used for autocomplete suggestions (e.g. Thesaurus).
Thanks
Danilo
On 04/12/18 15:06, Bernd Fehling wrote:
Hi Danilo,
Full GC points out that you need more heap which also implies that you need
more RAM.
Raise your heap to 24GB
Metaspace used 37438K, capacity 38438K, committed 38676K, reserved
1083392K
class space used 4257K, capacity 4521K, committed 4628K, reserved 1048576K
}
Thank you for your help
Danilo
On 03/12/18 10:36, Bernd Fehling wrote:
Hi Danilo,
you have to give more infos about your system and t
Hi Danilo,
you have to give more info about your system and the config.
- 30GB RAM (physical RAM?) how much heap do you have for JAVA?
- how large (in GByte) are your 40 million raw data being indexed?
- how large is your index (in GByte) with 40 million docs indexed?
- which version of Solr
:-)
Regards, Bernd
On 27.11.18 at 15:47, Vadim Ivanov wrote:
Hi, Bernd
I have tried REBALANCELEADERS with Solr 6.3 and 7.5
I had very similar results and notion that it's not reliable :(
--
Br, Vadim
-Original Message-
From: Bernd Fehling [mailto:bernd.fehl...@uni-bielefeld.de]
Sent:
Hi list,
unfortunately REBALANCELEADERS is not reliable and the leader
election has unpredictable results with SolrCloud 6.6.5 and
Zookeeper 3.4.10.
Seen with 5 shards / 3 replicas.
- CLUSTERSTATUS reports all replicas (core_nodes) as state=active.
- setting with ADDREPLICAPROP the property
nts.
https://www.ub.uni-bielefeld.de/~befehl/base/solr/index.html
Regards
Bernd
On 03.09.2018 at 11:33, Toke Eskildsen wrote:
On Tue, 2018-08-28 at 09:37 +0200, Bernd Fehling wrote:
Yes, I tested many cases.
Erick is absolutely right about the challenge of finding "best" setups.
What
difference between the two deployments? How
many servers did you use in total? I am curious what's the bottleneck for
the one instance and 3 cores configuration.
Thanks,
Wei
On Mon, Aug 27, 2018 at 1:45 AM Bernd Fehling <
bernd.fehl...@uni-bielefeld.de> wrote:
My tests with many combin
:45, Bernd Fehling wrote:
My tests with many combinations (instance, node, core) on a 3 server cluster
with SolrCloud pointed out that highest performance is with multiple solr
instances and shards and replicas placed by rules so that you get advantage
from preferLocalShards=true.
The disadvantage
My tests with many combinations (instance, node, core) on a 3 server cluster
with SolrCloud pointed out that highest performance is with multiple solr
instances and shards and replicas placed by rules so that you get advantage
from preferLocalShards=true.
The disadvantage is the handling of the
Something strange happened,
in my Solr 6.6.5 cloud (1 collection, 5 shards, 3 replica) the
leader is stuck on offline node for shard3.
I already tried setting property preferredLeader to true on the
active core_node5 and called REBALANCELEADERS but nothing happened.
In the response of
I don't know if DIH can solve your problem but I would go for
a simple self programmed ETL in JAVA and use SolrJ for loading.
Best regards,
Bernd
On 18.05.2018 at 21:47, S.Ashwath wrote:
Hello,
I have 2 directories: 1 with txt files and the other with corresponding
JSON (metadata) files
ks similar, a recursive file traverser with for-loop over files.
But don't forget a final client.add(list) after the while-loop ;-)
Not exactly a sophisticated queue ;).
On Tue, May 15, 2018 at 8:15 AM, Bernd Fehling
<bernd.fehl...@uni-bielefeld.de> wrote:
Hi Erik,
yes indee
.
That's required for NRT. Using CloudSolrClient has no bearing
on that functionality.
Best,
Erick
On Tue, May 15, 2018 at 6:53 AM, Bernd Fehling
<bernd.fehl...@uni-bielefeld.de> wrote:
Thanks, solved, performance is good now.
Regards,
Bernd
On 15.05.2018 at 08:12, Bernd Fehling wrote:
OK,
Thanks, solved, performance is good now.
Regards,
Bernd
On 15.05.2018 at 08:12, Bernd Fehling wrote:
OK, I have the CloudSolrClient with SolrJ now running but it seems
a bit slower compared to ConcurrentUpdateSolrClient.
This was not expected.
The logs show that CloudSolrClient send the docs
f the docs destined
for a particular shard and sends those to the leader.
Do the default not work for you?
Best,
Erick
On Wed, May 9, 2018 at 2:54 AM, Bernd Fehling
<bernd.fehl...@uni-bielefeld.de> wrote:
Hi list,
while going from single core master/slave to cloud multi core/node
with leade
Hi list,
while going from single core master/slave to cloud multi core/node
with leader/replica I want to change my SolrJ loading, because
ConcurrentUpdateSolrClient isn't cloud aware and has performance
impacts.
I want to use CloudSolrClient with LBHttpSolrClient and updates
should only go to
Shawn Heisey wrote:
> On 5/7/2018 8:22 AM, Bernd Fehling wrote:
>> thanks for asking, I figured it out this morning.
>> If setting -Xloggc= the option -XX:+PrintGCTimeStamps will be set
>> as default and can't be disabled. It's inside JAVA.
>>
>> Currently using Solr
)
Regards,
Bernd
On 07.05.2018 at 14:50, Dominique Bejean wrote:
> Hi,
>
> Which version of Solr are you using ?
>
> Regards
>
> Dominique
>
>
> On Fri, 4 May 2018 at 09:13, Bernd Fehling <bernd.fehl...@uni-bielefeld.de>
> wrote:
>
>&g
Hi list,
this sounds simple but I can't disable PrintGCTimeStamps in solr_gc logging.
I tried with GC_LOG_OPTS in start scripts and --verbose reporting during
start to make sure it is not in Solr start scripts.
But if Solr is up and running there are always TimeStamps in solr_gc.log and
the file
Thanks Alessandro for the info.
I am currently in the phase to find the right setup with shards,
nodes, replicas and so on.
I have decided to begin with 5 hosts and want to setup 1 collection with 5
shards.
And start with 2 replicas per shard.
But the next design question is, should each
| |        | |        | |
host1      host2      host3
Regards
Bernd
On 19.04.2018 at 14:43, Shawn Heisey wrote:
> On 4/19/2018 6:28 AM, Bernd Fehling wrote:
>> How would you set up a SolrCloud and why?
>>
How would you set up a SolrCloud and why?
shard1       shard2       shard3
| |r1| |     | |r1| |     | |r1| |
| |r2| |
//sematext.com/
>
>
>
>> On 18 Apr 2018, at 16:30, Shawn Heisey <apa...@elyograg.org> wrote:
>>
>> On 4/18/2018 8:03 AM, Bernd Fehling wrote:
>>> I just tried to change the log level with Solr Admin UI but it
>>> does not change any logging on my
I just tried to change the log level with Solr Admin UI but it
does not change any logging on my running SolrCloud.
It just shows the changes in the Admin UI and the commands in the
request log, but no changes in the level of logging.
Do I have to RELOAD the collection after changing log level?
You have to check your log4j.properties, usually located
server/resources/log4j.properties
There is a line about infostream logging, change it from OFF to ON.
# set to INFO to enable infostream log messages
log4j.logger.org.apache.solr.update.LoggingInfoStream=OFF
Regards
Bernd
On 17.04.2018
It would help if you can trace it down to a version change.
Do you have a test system? Start with 6.3.0 as the next version above 6.2.1
to see which version change is causing the trouble.
You can then try 6.4.0 and 6.5.0 next. And after that go into subversions.
Regards, Bernd
On 16.04.2018 at
As a first guess I would say that you have much higher GC activity
which causes much higher CPU usage.
Why do you have much higher GC activity?
Any GC settings changed?
Have you tried increasing heap size?
Regards
Bernd
On 16.04.2018 at 06:22, mganeshs wrote:
> Solr experts,
>
> We found
allocations
and not to avoid it under all circumstances.
Regards
Bernd
On 27.03.2018 at 23:07, Shawn Heisey wrote:
> On 3/27/2018 12:13 AM, Bernd Fehling wrote:
>> may I give you the advice to _NOT_ set XX:G1HeapRegionSize.
>> That is computed during JAVA start by the engine ac
Hi Walter,
may I give you the advice to _NOT_ set XX:G1HeapRegionSize.
That is computed during JAVA start by the engine according to heap and
available memory.
A wrongly set size can force even a huge machine with 31GB heap and 157GB RAM
into OOM.
Guess how I figured that out, took me about one
Dear list,
I would like to poll for the loadbalancer you are using for SolrCloud.
Are you using a loadbalancer for SolrCloud?
If yes, which one (SolrJ, HAProxy, Varnish, Nginx,...) and why?
If not, why not?
Regards, Bernd
ode).
>
> Have DIH support for paging queries?
>
> Thanks!
>
> -----Original Message-----
> From: Bernd Fehling [mailto:bernd.fehl...@uni-bielefeld.de]
> Sent: Thursday, 15 February 2018 10:13
> To: solr-user@lucene.apache.org
> Subject: Re: Reading data from Oracle
>
>
And where is the bottleneck?
Is it reading from Oracle or injecting to Solr?
Regards
Bernd
On 15.02.2018 at 08:34, LOPEZ-CORTES Mariano-ext wrote:
> Hello
>
> We have to delete our Solr collection and feed it periodically from an Oracle
> database (up to 40M rows).
>
> We've done the
Many years ago, in a different universe, when Federated Search was a buzzword we
used Unity from FAST FDS (which is now MS ESP). It worked pretty well across
many systems like FAST FDS, Google, Gigablast, ...
Very flexible with different mixers, parsers, query transformers.
Was written in Python
sers doing bulk loading with archived backups and preserving the
order?
Can't believe that I'm the only one on earth having this need.
Regards
Bernd
On 11.01.2018 at 08:53, Shawn Heisey wrote:
> On 1/11/2018 12:05 AM, Bernd Fehling wrote:
>> This will never pass a Jepsen test and I c
upid.
Regards
Bernd
On 11.01.2018 at 02:27, Shawn Heisey wrote:
> On 1/10/2018 8:33 AM, Bernd Fehling wrote:
>> after some strange search results I was trying to locate the problem
>> and it turned out that it starts with bulk loading with SolrJ
>> and ConcurrentUpdateSolrCli
Hi list,
after some strange search results I was trying to locate the problem
and it turned out that it starts with bulk loading with SolrJ
and ConcurrentUpdateSolrClient.Builder with several threads.
I assume that ConcurrentUpdateSolrClient.Builder is _NOT_ thread safe
according to the docs send
What is the precedence when docValues with stored=true is used?
e.g.
My guess, because useDocValuesAsStored=true is the default, is that stored=true is
ignored and the values are pulled from docValues.
Only if useDocValuesAsStored=false is explicitly set does stored=true come
into play.
Or
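If that guess is right, the two cases would look like this as a schema sketch (field names are illustrative, and this is my reading of the defaults, not verified here):

```xml
<!-- default useDocValuesAsStored=true: returned values come from docValues -->
<field name="title_a" type="string" stored="true" docValues="true"/>
<!-- explicit useDocValuesAsStored=false: returned values come from stored fields -->
<field name="title_b" type="string" stored="true" docValues="true"
       useDocValuesAsStored="false"/>
```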
ords which should
>> be 8?
>>
>> I'm talking about 100 Million records, the 4 above are just an example.
>> This is not a general use case, more for statistical purposes.
>>
>> Regards
>> Bernd
>
>
--
Hi list,
actually a simple question, but somehow I can't figure out how to get
the total number of terms in a field in the index, example:
record_1: fruit: apple, banana, cherry
record_2: fruit: apple, pineapple, cherry
record_3: fruit: kiwi, pineapple
record_4: fruit:
- a search for fruit:*
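One way to get at this is faceting on the field with facet.limit=-1 and facet.mincount=1, and then counting client-side. A sketch, assuming the flat [term, count, term, count, ...] shape of a facet_fields response (the example data mirrors the records above):

```python
# Count distinct terms and total term occurrences from a Solr
# facet_fields list, which is flat: [term, count, term, count, ...]
def term_stats(facet_list):
    terms = facet_list[0::2]   # every even slot is a term
    counts = facet_list[1::2]  # every odd slot is that term's doc count
    return len(terms), sum(counts)

# facet over the fruit field of the four example records
facet = ["apple", 2, "cherry", 2, "pineapple", 2, "banana", 1, "kiwi", 1]
distinct, total = term_stats(facet)
print(distinct, total)  # 5 distinct terms, 8 occurrences in total
```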
Hi Walter,
you can check if the JVM OOM hook is acknowledged by the JVM
and set up in the JVM. The options are "-XX:+PrintFlagsFinal -version"
You can modify your bin/solr script and tweak the function "launch_solr"
at the end of the script. Replace "-jar start.jar" with "-XX:+PrintFlagsFinal
Thanks,
but I tried to access the mentioned issues of
https://lucene.apache.org/solr/6_6_2/changes/Changes.html
https://issues.apache.org/jira/browse/SOLR-11477
https://issues.apache.org/jira/browse/SOLR-11482
I get something like "permissionViolation=true", even after login!!!
Is SOLR going to