Nope, that is how it works. Highlighting comes back in its own section of the response; it is not done in place within the documents.
> On 31 Jul 2018, at 21:57, Renuka Srishti wrote:
>
> Hi All,
>
> I was using highlighting in Solr. Solr gives highlighting results within
> the response, but they are not included within the documents.
> Am I missing something? Can I configure it so that it can show highlighted
> keywords matched within the documents?
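Highlighting always arrives in a separate "highlighting" section of the
response, keyed by uniqueKey, and the client is expected to merge it into the
documents itself. A minimal SolrJ sketch of that merge (the URL, collection
and field names are placeholders, not from this thread):

import java.util.List;
import java.util.Map;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class HighlightMerge {
    public static void main(String[] args) throws Exception {
        HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build();
        SolrQuery query = new SolrQuery("title:solr");
        query.setHighlight(true).addHighlightField("title");
        QueryResponse rsp = client.query(query);
        // Snippets live in their own section, keyed by the uniqueKey field.
        Map<String, Map<String, List<String>>> hl = rsp.getHighlighting();
        for (SolrDocument doc : rsp.getResults()) {
            Map<String, List<String>> snippets =
                    hl.get((String) doc.getFieldValue("id"));
            if (snippets != null) {
                // Replace each stored value with its highlighted snippets.
                snippets.forEach(doc::setField);
            }
        }
        client.close();
    }
}

Whether you overwrite the stored field or attach the snippets under a new key
is a client-side choice; Solr itself never rewrites the documents.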
Hi,
Does anyone have any information on this?
Regards,
Edwin
On Mon, 30 Jul 2018 at 11:15, Zheng Lin Edwin Yeo
wrote:
> Hi,
>
> I am using the Solr LTR in Solr 7.4.0, and I am trying to train an example
> learning model using LIBLINEAR.
>
> When I tried to run the code from train_and_upload_demo_mod
This feels like more work than necessary, especially the bit:
"which will require modification in Solr code".
If your needs are to co-locate various groups of documents
on specific nodes, composite id (the default) routing has
the ability to cluster docs together, see:
https://lucene.apache.org/so
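For the archives, a minimal sketch of what compositeId routing looks like
from the indexing side (URL, collection name and ids are made up for
illustration):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class RouteKeySketch {
    public static void main(String[] args) throws Exception {
        HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build();
        SolrInputDocument doc = new SolrInputDocument();
        // With the default compositeId router, the part before "!" is hashed
        // to pick the shard, so all "tenantA!..." ids land on the same shard.
        doc.addField("id", "tenantA!doc42");
        client.add(doc);
        client.commit();
        client.close();
    }
}

No Solr code modifications are needed; the co-location comes entirely from
the "key!" prefix on the document ids.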
Thanks Erick
This is for the future. I am exploring using a custom sharding scheme (which
will require modifications to the Solr code) together with the benefits of
SolrCloud.
Thanks
Nawab
On Tue, Jul 31, 2018 at 4:51 PM, Erick Erickson
wrote:
> Sure, just use the Collections API ADDREPLICA command to add as many
Sure, just use the Collections API ADDREPLICA command to add as many
replicas for specific shards as you want. There's no way to specify
that at creation time though.
Some of the new autoscaling can do this automatically I believe.
I have to ask, though: what is it about your collection that makes this true?
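For reference, a sketch of the SolrJ equivalent of that ADDREPLICA call
(collection and shard names are placeholders); over HTTP the same call is
/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1:

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.CollectionAdminResponse;

public class AddReplicaSketch {
    public static void main(String[] args) throws Exception {
        // Point at any node of the cluster.
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr").build()) {
            // Add one extra replica to the hot shard only; repeat as needed.
            CollectionAdminResponse rsp = CollectionAdminRequest
                    .addReplicaToShard("mycollection", "shard1")
                    .process(client);
            System.out.println("success: " + rsp.isSuccess());
        }
    }
}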
Hi,
I am looking at Solr 7.x and couldn't find an answer in the documentation.
Is it possible to specify a different replicationFactor for different shards
in the same collection? E.g. if a certain shard is receiving more queries than
the rest of the collection, I would like to add more replicas for it to h
Right, two JVMs on the same physical host with different ports are
"different Solrs" by default. If you had two replicas per shard and
both were on the same Solr instance (same port), that would be
unexpected.
The problem is that this would have been a bug clear back in the Solr 4.x
days, so the fact that
To whom it may concern,
On 7/31/18 2:56 PM, tedsolr wrote:
> I'm having some trouble with non-printable, but valid, UTF-8 chars
> when exporting to Amazon Redshift. The export fails but I can't yet
> find this data in my Solr collection. How can I search, say from the
> admin console, for a particular character? I'm looking for U+001E and U+001F
Georg,
On 7/31/18 12:33 PM, Georg Fette wrote:
> Yes, it is only one of the processors that is at maximum capacity.
Ok.
> How do I do something like a thread dump of a single thread?
Here's how to get a thread dump of the whole JVM:
https://wiki
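For a Solr run from the command line, jstack <solr-pid> (or kill -3
<solr-pid>, which writes the dump to Solr's stdout/log) dumps every thread;
there is no single-thread dump. To find which one thread is busy, per-thread
CPU time is the more useful view. A sketch via the JVM management API (it
inspects the JVM it runs in; for a remote Solr the same MBeans are reachable
over JMX):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class HotThreads {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // List every live thread with its accumulated CPU time; the thread
        // pinning one core will stand out. (false, false) skips lock info.
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            long cpuNanos = mx.getThreadCpuTime(info.getThreadId());
            System.out.printf("%-50s cpu=%8dms state=%s%n",
                    info.getThreadName(), cpuNanos / 1_000_000,
                    info.getThreadState());
        }
    }
}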
This is an example of what the data looks like:
"SOURCEFILEID":"77907",
"APPROP_GROUP_CODE_T":"F\uG\uR",
"APPROP_GROUP_CODE_T_aggr":"F\uG\uR",
"APPROP_GROUP_CODE_T_search":"F\uG\uR",
"OBJECT_DESC_T":"OTHER PROFESSIONAL/TECHNICAL SERVICES",
Hi All,
I was using highlighting in Solr. Solr gives highlighting results within
the response, but they are not included within the documents.
Am I missing something? Can I configure it so that it can show highlighted
keywords matched within the documents?
Thanks
Renuka Srishti
In my case, when trying on Solr 7.4 (in response to Shawn Heisey's 6/19/18
comment "If this is a provable and reproducible bug, and it's still a problem
in the current stable branch"), I had only installed Solr 7.4 on one host, and
so I was testing with two nodes on the same host (different port numbers).
I'm having some trouble with non-printable, but valid, UTF-8 chars when
exporting to Amazon Redshift. The export fails but I can't yet find this
data in my Solr collection. How can I search, say from the admin console,
for a particular character? I'm looking for U+001E and U+001F.
thanks!
Solr 5.5.4
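One way to hunt for these, sketched in SolrJ below: query with the literal
control character inside a wildcard term. The field name is borrowed from the
sample data elsewhere in this digest and assumed to be an untokenized string
field (wildcards only behave predictably on those); the URL is a placeholder.
From the admin console the same query works by percent-encoding the character
in the request URL, e.g. q=FIELD:*%1E* for U+001E.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FindControlChars {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            // "\u001E" is the literal U+001E char; repeat with \u001F.
            SolrQuery q = new SolrQuery("APPROP_GROUP_CODE_T_search:*\u001E*");
            QueryResponse rsp = client.query(q);
            System.out.println("docs with U+001E: "
                    + rsp.getResults().getNumFound());
        }
    }
}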
On 7/27/2018 8:26 PM, Erick Erickson wrote:
> Yes with some fiddling as far as "placement rules", start here:
> https://lucene.apache.org/solr/guide/6_6/rule-based-replica-placement.html
>
> The idea (IIUC) is that you provide a "snitch" that identifies what
> "rack" the Solr instance is on and can
Hello list,
I recently observed a very strange behaviour of fuzzy searches with
Solr Cloud 5.5.0.
I have two identical documents in 2 different collections. Something
like {name: "Tomas"}. I find the document in the first collection with a
search like name:Thomass~2. But I don't find it in
Hi Christoph,
Yes, it is only one of the processors that is at maximum capacity.
How do I do something like a thread dump of a single thread? We run
Solr from the command line out-of-the-box and not in a code development
environment. Are there parameters that can be configured so that the
ser
Ok, your OOM errors are most likely due to
trying to stuff too many replicas into too little memory.
You have 100 collections, 8 shards per collection and
1 replica per shard. So if my math is right, you have
800 replicas total, 400 replicas per Solr instance.
6G of memory is very little for that
Thanks for responding! That's some good info. Here are the answers to the
questions you had...
Solr has 6GB of heap
We have 1 replica per shard at 8 shards per collection
We currently have approximately 100 collections
Zookeeper is an external ensemble, each node on its own server
-Original message-
On 7/31/2018 2:39 AM, Georg Fette wrote:
We run the server version 7.3.1 on a machine with 32GB RAM in a mode
having -Xmx10g.
When requesting a query with
q={!boost
b=sv_int_catalog_count_document}string_catalog_aliases:(*2*)&fq=string_field_type:catalog_entry&rows=2147483647
the server takes
Georg,
On 7/31/18 4:39 AM, Georg Fette wrote:
> We run the server version 7.3.1 on a machine with 32GB RAM in a
> mode having -Xmx10g.
>
> When requesting a query with
>
> q={!boost
> b=sv_int_catalog_count_document}string_catalog_aliases:(*2*)&fq=string_field_type:catalog_entry&rows=2147483647
Yes, but 581 is the final number you got in the response, which is the
result of the main query intersected with the filter query, so I wouldn't
take this number into account. The main query and the filter query are
executed separately, so I guess (but I'm guessing because I don't know these
internals)
Hi,
we sometimes see the following NPE when we delete a collection right
after modifying the schema:
08:47:46.407 [zkCallback-5-thread-4] INFO
org.apache.solr.rest.ManagedResource 209 processStoredData - Loaded
initArgs {ignoreCase=true} for /schema/analysis/stopwords/text_ar
08:47:46
Hi Andrea,
I agree that receiving too much data in one request is bad. But I was
surprised that the query works with a lower but still very large rows
parameter and that there is a threshold at which it crashes the server.
Furthermore, it seems that the reason for the crash is not the size of
We run the server version 7.3.1 on a machine with 32GB RAM in a mode
having -Xmx10g.
When requesting a query with
q={!boost
b=sv_int_catalog_count_document}string_catalog_aliases:(*2*)&fq=string_field_type:catalog_entry&rows=2147483647
the server takes all available memory up to 10GB and is th
Hi there,
From the Solr 7.x docs I know auto-scaling is triggered by the number of
replicas; I just want to know if I can achieve auto-scaling based on system
load dynamically.
Appreciate your reply.
Thanks,
Xiaoming
Hi Georg,
I would say, without knowing your context, that this is not what Solr is
supposed to do. You're asking to load everything in a single
request/response and this poses a problem.
Since I guess that, even if we assume it works, you would then iterate
those results one by one or in blocks,
Hello Georg,
As you have seen, a high rows parameter is a bad idea. Use cursor mark [1]
instead.
Regards,
Markus
[1] https://lucene.apache.org/solr/guide/7_4/pagination-of-results.html
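A minimal SolrJ sketch of cursor-mark paging for the query from this thread
(URL, page size and uniqueKey field name are assumptions):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrQuery.SortClause;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CursorMarkParams;

public class CursorPaging {
    public static void main(String[] args) throws Exception {
        HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build();
        SolrQuery query = new SolrQuery("string_field_type:catalog_entry");
        query.setRows(500);                  // sane page size, not 2147483647
        query.setSort(SortClause.asc("id")); // cursors require a uniqueKey sort
        String cursor = CursorMarkParams.CURSOR_MARK_START;
        while (true) {
            query.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
            QueryResponse rsp = client.query(query);
            // ... process rsp.getResults() here ...
            String next = rsp.getNextCursorMark();
            if (cursor.equals(next)) break;  // no more results
            cursor = next;
        }
        client.close();
    }
}

Each page costs about the same to fetch, so memory use stays flat no matter
how deep the result set goes.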
-Original message-
> From: Georg Fette
> Sent: Tuesday 31st July 2018 10:44
> To: solr-user@lucene
Hello,
We run the server version 7.3.1 on a machine with 32GB RAM in a mode
having -Xmx10g.
When requesting a query with
q={!boost
b=sv_int_catalog_count_document}string_catalog_aliases:(*2*)&fq=string_field_type:catalog_entry&rows=2147483647
the server takes all available memory up to 10GB and i