Hi all,
How about using hl.tag.pre and hl.tag.post? If these markers appear in the
returned snippet, there was an actual field match; otherwise it is the
default summary.
Will that work, or are there cases where it will not?
Thanks in advance.
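A minimal sketch of the idea above, assuming the usual shape of Solr's JSON "highlighting" section and that hl.tag.pre is set to a marker such as <em> (the field names and snippets below are illustrative, not from a real response):

```python
# Sketch: decide whether a highlighting snippet is a real match or just
# the default summary, by looking for the hl.tag.pre marker in it.
# The dict below only mirrors the shape of Solr's JSON "highlighting"
# section; doc IDs, field names, and snippets are made-up examples.

HL_TAG_PRE = "<em>"  # must match the hl.tag.pre parameter sent to Solr


def is_real_highlight(snippets):
    """Return True if any snippet contains the pre tag (i.e. a field match)."""
    return any(HL_TAG_PRE in s for s in snippets)


# Example shaped like a Solr "highlighting" response section:
highlighting = {
    "doc1": {"title": ["A <em>query</em> term matched here"]},
    "doc2": {"title": ["Just the leading text of the field"]},  # default summary
}

for doc_id, fields in highlighting.items():
    for field, snippets in fields.items():
        status = "match" if is_real_highlight(snippets) else "default summary"
        print(doc_id, field, status)
```

One caveat with this approach: a field value that already contains the literal marker text would be misclassified, so choosing an unlikely pre/post tag helps.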
On Tue, Jul 23, 2019 at 5:48 PM govind nitk wrote:
> Hi all,
>
> How to get more
We're trying to index a polygon into Solr and then filter/calculate geodist on
the polygon (ideally we actually want a circle, but it looks like that's not
officially supported by WKT/GeoJSON; instead you have to switch to
format="legacy", which seems like something that might be removed).
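For what it's worth, a sketch of the query side under some assumptions: the field is a JTS-backed spatial field (the field name "geo" and all coordinates below are made up), and BUFFER(...) requires JTS on Solr's classpath:

```python
# Sketch of filter-query parameters for an indexed polygon field.
# Field name "geo" and all coordinates are illustrative assumptions;
# BUFFER(...) needs JTS to be available to Solr.

# Filter: documents whose indexed shape intersects a given polygon.
params_polygon = {
    "q": "*:*",
    "fq": 'geo:"Intersects(POLYGON((-10 30, -40 40, -10 -20, 40 20, -10 30)))"',
}

# Circle-like filter: buffer a point by a distance instead of switching
# the field to format="legacy".
params_circle = {
    "q": "*:*",
    "fq": 'geo:"Intersects(BUFFER(POINT(-93.5 45.2), 0.5))"',
}

print(params_circle["fq"])
```

These dicts would be passed as query parameters to /select; the point of the sketch is the fq syntax, not the transport.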
Do you have exceptions in the log?
How much total memory do you have?
> On 23.07.2019 at 18:17, Rodrigo Oliveira wrote:
>
> Hi,
>
> I have been using Solr for the last 3 months. However, yesterday my cluster
> went down.
>
> My environment is:
>
> I have 5 nodes from Solr/zookeeper (16 cpu x
Hi,
I have been using Solr for the last 3 months. However, yesterday my cluster
went down.
My environment is:
I have 5 nodes running Solr/ZooKeeper (16 CPUs x 24 GB each node - Xms: 12 GB -
Xmx: 16 GB). My workload will grow. By the way, I know I'll need to do some
tuning in the future.
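On the heap question above: with 24 GB per node and Xmx at 16 GB, relatively little RAM is left for the OS page cache, which Lucene relies on heavily. A sketch of the relevant solr.in.sh settings (the values are illustrative, not a recommendation for this cluster):

```shell
# solr.in.sh (sketch; values are illustrative assumptions)
# SOLR_HEAP sets both Xms and Xmx to the same value, avoiding
# heap-resize pauses; leave ample RAM free for the OS page cache.
SOLR_HEAP="12g"
# GC logging makes GC pressure and OOM conditions visible in the logs:
GC_LOG_OPTS="-Xlog:gc*:file=/var/solr/logs/gc.log"
```

Checking solr.log (and the GC log) for OutOfMemoryError or long GC pauses around the time the cluster went down would narrow things considerably.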
Like I said,
Ciao,
Vincenzo
--
mobile: 3498513251
skype: free.dev
Ah, never mind, I managed to resolve the issue.
It seems that replication only works if the index changes. I noticed that
both master and slave had the same index version, since I had only changed the
schema.
When I modified a random field of a random document, the index versions of
master and slave
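For reference, a sketch of how one might inspect the index version (and force a fetch) through Solr's replication handler; the host, port, and core name below are made-up placeholders:

```python
# Sketch: build replication-handler URLs for checking index versions and
# forcing a fetch on the slave. Host/port/core names are illustrative.
from urllib.parse import urlencode


def replication_url(base, command):
    """URL for Solr's replication handler with the given command."""
    return f"{base}/replication?{urlencode({'command': command, 'wt': 'json'})}"


slave = "http://slave-host:8983/solr/mycore"

# 'indexversion' reports the current version/generation;
# 'fetchindex' asks the slave to pull from the master explicitly.
print(replication_url(slave, "indexversion"))
print(replication_url(slave, "fetchindex"))
```

Comparing the indexversion output on master and slave makes it easy to see whether a commit actually produced a new version for replication to pick up.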
Are you sure that you're _using_ schema.xml and not managed-schema? The default
has changed. If no explicit <schemaFactory> entry is made in solrconfig.xml,
you'll be using managed-schema, not schema.xml.
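For reference, a sketch of how the classic file-based schema.xml is pinned in solrconfig.xml:

```xml
<!-- solrconfig.xml: opt back into the classic, file-based schema.xml -->
<schemaFactory class="ClassicIndexSchemaFactory"/>
```

Without this entry, recent Solr versions default to the managed schema, and edits to schema.xml on disk are silently ignored.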
Best,
Erick
> On Jul 23, 2019, at 5:51 AM, Sidharth Negi wrote:
>
> Hi,
>
> The
Hi all,
How can I get more details about highlighting?
I am using
hl.method=unified&=title,url,paragraph=true=true
So, if the query words are not matched, I get the defaultSummary, which is
great. Can we get more info saying whether it found actual matches or fell
back to the default summary? How can I get such information?
Wrapping the expression in a fetch function as follows works:
fetch(names, select(nodes(enron_emails,
nodes(enron_emails,
walk="kayne.coul...@enron.com->from",
trackTraversal="true",
gather="to"), walk="node->from",
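The fragment above is truncated; a complete shape of that pattern might look like the sketch below. The email address, alias, and join fields are assumptions (the original address is elided), and this has not been run against a live collection:

```
fetch(names,
      select(nodes(enron_emails,
                   nodes(enron_emails,
                         walk="user@example.com->from",
                         trackTraversal="true",
                         gather="to"),
                   walk="node->from",
                   gather="to",
                   trackTraversal="true"),
             node as from_address),
      fl="name",
      on="from_address=email")
```

The idea is that nodes(...) emits its results in a "node" field, select(...) renames it, and fetch(...) then joins that field against the names collection to enrich each tuple.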
Hi,
The "replicateNow" button in the admin UI doesn't seem to work, since the
schema.xml (which I modified on the slave) is not being updated to reflect
that of the master. I have used this button before and it has always cloned
the index right away. Any ideas on what could be the possible reason for