"sort" is a regular request parameter. In your non-working query, you
specified it as a local-param inside geofilt which isn't where it belongs.
If you want to sort from two points then you need to make up your mind on
how to combine the distances into some greater aggregate function (e.g.
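For reference, a sketch of what that looks like: the sort is a regular request parameter outside the geofilt local params, and the two distances are combined with a single aggregate function such as min(). The collection name `places` and field `location` here are hypothetical.

```shell
# Filter by one point, sort by the smaller of the distances to two points.
# sort is a top-level request parameter, NOT a local param inside geofilt.
curl 'http://localhost:8983/solr/places/select' \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'fq={!geofilt sfield=location pt=45.15,-93.85 d=50}' \
  --data-urlencode 'sort=min(geodist(location,45.15,-93.85),geodist(location,40.71,-74.00)) asc'
```

max() or sum() would work the same way; the point is that the two distances must be reduced to one sortable value.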
Hello
Can you please try increasing NewSize and MaxNewSize (-XX:NewSize / -XX:MaxNewSize) to 1GB+?
Deepak
On Mon, 30 Sep 2019, 13:35 Yasufumi Mizoguchi,
wrote:
> Hi, Deepak.
> Thank you for replying to me.
>
> JVM settings from solr.in.sh file are as follows. (Sorry, I could not show
> all due to our policy)
>
>
You should not leave it in the qf field. You’re getting confused by the
difference between query _parsing_ and the analysis chain. The parsing turns
your top-level query of “ice cream” (assuming without quotes) into something
like
f1:ice f1:cream f2:ice f2:cream
This is happening way before
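The expansion above can be sketched mechanically: for each field in qf and each whitespace-separated token, the parser emits a field:token clause. This is a deliberate simplification that ignores per-field analysis, boosts, and the mm parameter.

```shell
# Simulate edismax clause generation for q="ice cream" with qf="f1 f2"
# (simplified: real parsing also applies per-field analysis and boosts)
clauses=""
for f in f1 f2; do
  for t in ice cream; do
    clauses="$clauses $f:$t"
  done
done
echo "${clauses# }"   # → f1:ice f1:cream f2:ice f2:cream
```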
Hi All
I've checked out lucene-solr project, branch "branch_8x"
When I run "ant precommit" at project root, I get these validation errors on
"analytics.adoc" file. Has anyone seen these before, and if you knew of a fix?
My env
- windows 10 pro
- jdk 1.8_221
- ant 1.10.6
https://stackoverflow.com/questions/48348312/solr-7-how-to-do-full-text-search-w-geo-spatial-search
On Mon, Sep 30, 2019 at 10:31 AM Anushka Gupta <
anushka_gu...@external.mckinsey.com> wrote:
> Hi,
>
> I want to be able to filter on different cities and also sort the results
> based on
Thanks Erick, that seems to work!
Should I leave it in qf also? For example the query "blue dog" may be
represented as separate tokens in the keyword index.
On Mon, Sep 30, 2019 at 9:32 PM Erick Erickson
wrote:
> Have you tried taking your keyword field out of the “qf” param and adding
> it
Well, first of all, your first query with the two fq clauses has sort specified
with a space rather than an ampersand (&). Twice. Even if that worked, Solr
would only use one of them, I think.
It’s really unclear what you’re after. It makes no sense to me to specify two
sorts in a single query, which
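For reference, a sketch of the correct shape: request parameters are separated with `&`, and multiple sort criteria go into a single comma-separated sort parameter. The collection name `places` and the fq field are hypothetical.

```shell
# Two sort criteria belong in ONE sort parameter, comma-separated;
# parameters themselves are joined with '&', never with spaces.
base='http://localhost:8983/solr/places/select'
sort='geodist() asc,score desc'
url="${base}?q=*:*&fq=city:Boston&sort=$(printf %s "$sort" | sed 's/ /%20/g')"
echo "$url"
```

This prints a single well-formed URL: `http://localhost:8983/solr/places/select?q=*:*&fq=city:Boston&sort=geodist()%20asc,score%20desc`.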
On 9/30/2019 9:06 AM, yuri.glad...@swisscom.com wrote:
Is it possible to turn off the weighted search for Solr?
I mean the results have to be presented in a pure alphabetical order, not by
the default weighted order. So if a certain letter appears in a word 2 times,
this word shouldn't be
Can you give a more detailed example, please? Including the schema bits.
There are a bunch of assumptions in here that are hard to really make
sense of. Solr works with tokens, but you are talking about letter
repetitions. Also, if you want to sort by the string, why not just use
the sort parameter?
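For reference, a sketch of sorting alphabetically instead of by relevance. Sorting needs a single-token field, so this assumes a string (non-tokenized, docValues) copy of the field; the names `mycoll` and `title_str` are hypothetical.

```shell
# Return results in alphabetical order of title_str rather than by score.
# Sorting on a tokenized text field won't work; use a string copyField.
curl 'http://localhost:8983/solr/mycoll/select' \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'sort=title_str asc'
```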
Hi,
I want to be able to filter on different cities and also sort the results
based on geoproximity. But sorting doesn’t work:
Hello
Is it possible to turn off the weighted search for Solr?
I mean the results have to be presented in a pure alphabetical order, not by
the default weighted order. So if a certain letter appears in a word 2 times,
this word shouldn' t be ranked higher.
I spent the whole day trying to find
31G is still a very large heap. We use 8G for all of our different clusters.
Do you have JVM monitoring? Look at the heap used after a major GC. Use that
number, plus some extra, for the heap size.
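If you have no monitoring stack, one way to observe heap use after collections is the JDK's jstat tool. A sketch; `<pid>` stands for Solr's process id and is a placeholder.

```shell
# Print old-generation utilization (OU, in KB) every 5 seconds; read the
# value right after the FGC (full GC count) column increments, and size
# the heap from that number plus headroom.
jstat -gcold <pid> 5s
```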
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On
Jochen, right! Sorry for not getting your point earlier. {!bool filter=}
means a Lucene filter, not a Solr one. I suppose a {!bool cache=true} flag could
easily be added, but so far there is no concise syntax for it. Don't
hesitate to raise a JIRA for it.
On Mon, Sep 30, 2019 at 3:18 PM Jochen Barth
Hi All - I just ran the REPLACENODE command on a cluster with 5 nodes in
it. I ran the command async, and it failed with:
{
  "responseHeader":{
    "status":0,
    "QTime":11},
  "Operation replacenode caused
On 9/29/2019 11:44 PM, Yasufumi Mizoguchi wrote:
I am trying some tests to confirm if single Solr instance can perform over
1000 queries per second(!).
In general, I would never expect a single instance to handle a large
number of queries per second unless the index is REALLY small -- dozens
The most basic question is how you are load-testing it? Assuming you have some
kind of client firing queries at Solr, keep adding threads so Solr is handling
more and more queries in parallel. If you start to see the response time at the
client get longer _and_ the QTime in Solr’s response
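A crude sketch of that kind of client, assuming Solr at localhost:8983 and a collection named 'test' (both hypothetical). A real test should also vary the queries so results aren't served entirely from caches.

```shell
# N parallel workers, each firing sequential queries and printing latency.
# Increase WORKERS until client-side latency and Solr's QTime start to climb;
# that knee is roughly the sustainable queries-per-second for the instance.
WORKERS=16
for i in $(seq 1 "$WORKERS"); do
  ( for j in $(seq 1 100); do
      curl -s -o /dev/null -w '%{time_total}\n' \
        'http://localhost:8983/solr/test/select?q=*:*'
    done ) &
done
wait
```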
Solr/Lucene _better_ not have a copy of the synonym map for every segment, if
so it’s a JIRA for sure. I’ve seen indexes with 100s of segments. With a large
synonym file it’d be terrible.
I would be really, really, really surprised if this is the case. The Lucene
people are very careful with
Have you tried taking your keyword field out of the “qf” param and adding it
explicitly? As keyword:"ice cream"
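A sketch of one possible shape for that query; the collection name `mycoll` and the exact clause structure are assumptions, not the only way to combine them.

```shell
# Query the keyword field as an explicit phrase clause instead of via qf,
# so the KeywordTokenizer field matches the whole multi-word term intact.
curl 'http://localhost:8983/solr/mycoll/select' \
  --data-urlencode 'q=keyword:"ice cream" OR (ice cream)' \
  --data-urlencode 'defType=edismax' \
  --data-urlencode 'qf=title description'
```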
Best,
Erick
> On Sep 30, 2019, at 5:27 AM, Ashwin Ramesh wrote:
>
> Hi everybody,
>
> I am using the edismax parser and have noticed a very specific behaviour
> with how sow=true
Hello everyone.
This message is to let you know that Paul Isaacs has now left Bristol Is
Open.
If you need any help going forward please reach out to
nigel.car...@bristolisopen.com and he will endeavor to link you to the best
person to help you.
Please don't respond to this email as it will
That sounds really strange to me.
Segments are created gradually depending on changes applied to the
index, while the Schema should have a completely different lifecycle,
independent from that.
If that is true, that would mean each time a new segment is created Solr
would instantiate a new
Hi everybody,
I am using the edismax parser and have noticed a very specific behaviour
with how sow=true (default) handles multiword keywords.
We have a field called 'keywords', which uses the general
KeywordTokenizerFactory. There are also other text fields like title and
description, etc.
FYI, I succeeded at using SPLITSHARD operation after renaming my core.
My core was named the same as the collection name and the split shard operation
was looking for a metric with key "solr.core.collectionName.shard1.null" while
the metrics available for the core was with key
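For reference, a sketch of the async SPLITSHARD call and the status poll; the collection name `myColl` and request id `split-001` are hypothetical.

```shell
# Submit the shard split asynchronously, then poll its status by request id
curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=myColl&shard=shard1&async=split-001'
curl 'http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=split-001'
```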
Hi, Ere.
Thank you for the valuable feedback.
I will try Xmx31G and Xms31G instead of current ones.
Thanks and Regards,
Yasufumi.
On Mon, Sep 30, 2019 at 17:19, Ere Maijala wrote:
> Just a side note: -Xmx32G is really bad for performance as it forces
> Java to use non-compressed pointers. You'll actually get
Just a side note: -Xmx32G is really bad for performance as it forces
Java to use non-compressed pointers. You'll actually get better results
with -Xmx31G. For more information, see e.g.
https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/
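A quick way to see the cutoff for yourself on a given JDK (a sketch; output format varies by JVM version):

```shell
# Compare the JVM's decision at 31g vs 32g heap: compressed ordinary object
# pointers (oops) stay enabled below ~32 GB and are silently disabled above.
java -Xmx31g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
java -Xmx32g -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```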
Regards,
Ere
Yasufumi
Hi, Deepak.
Thank you for replying to me.
JVM settings from solr.in.sh file are as follows. (Sorry, I could not show
all due to our policy)
-verbose:gc
-XX:+PrintHeapAtGC
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCTimeStamps
-XX:+PrintTenuringDistribution
Yes, I think so.
While integrating a thesaurus as synonyms.txt I saw massive memory usage.
A heap dump and analysis with MemoryAnalyzer showed that the
SynonymMap took a huge amount of memory three times over, together with each
opened index segment.
Just try it and check that by yourself with heap
mmm, ok for the core but are you sure things in this case are working
per-segment? I would expect a FilterFactory instance per index,
initialized at schema loading time.
On 30/09/2019 09:04, Bernd Fehling wrote:
And I think this is per core per index segment.
2 cores per instance, each core
Hello
Can you please share the JVM heap settings in detail?
Deepak
On Mon, 30 Sep 2019, 11:15 Yasufumi Mizoguchi,
wrote:
> Hi,
>
> I am trying some tests to confirm if single Solr instance can perform over
> 1000 queries per second(!).
>
> But now, although CPU usage is 40% or so and iowait
And I think this is per core per index segment.
2 cores per instance, each core with 3 index segments, sums up to 6 times
the 2 SynonymMaps. Results in 12 times SynonymMaps.
Regards
Bernd
On 30.09.19 at 08:41, Andrea Gazzarini wrote:
Hi,
looking at the stateful nature of
Hi,
looking at the stateful nature of SynonymGraphFilter/FilterFactory classes,
the answer should be 2 times (one time per type instance).
The SynonymMap, which internally holds the synonyms table, is a private
member of the filter factory and it is loaded each time the factory needs
to create a