Re: SolrCloud 8.2.0 - adding a field

2020-04-01 Thread Joe Obernberger
Never mind - I see that I need to specify an existing collection, not a schema. There is no collection called UNCLASS - only a schema. -Joe On 4/1/2020 4:52 PM, Joe Obernberger wrote: Hi All - I'm trying this: curl -X POST -H 'Content-type:application/json' --data-binary …
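For reference, the Schema API expects a collection (or core) name in the path, not a configset/schema name. A minimal hedged sketch of the corrected call — the hostname, port, and collection name below are placeholders, not values from the thread:

```shell
# Validate the add-field payload locally, then POST it to a *collection* endpoint.
PAYLOAD='{"add-field":{"name":"Para450","type":"text_general","stored":false,"indexed":true,"docValues":false,"multiValued":false}}'
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"
# Against a running cluster, something like (collection name is an assumption):
# curl -X POST -H 'Content-type:application/json' --data-binary "$PAYLOAD" \
#   http://localhost:8983/solr/mycollection/schema
```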

SolrCloud 8.2.0 - adding a field

2020-04-01 Thread Joe Obernberger
Hi All - I'm trying this: curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field":{"name":"Para450","type":"text_general","stored":"false","indexed":"true","docValues":"false","multiValued":"false"}}' http://ursula.querymasters.com:9100/api/cores/UNCLASS/schema This …

Re: Required operator (+) is being ignored when using default conjunction operator AND

2020-04-01 Thread Chris Hostetter
: Using solr 8.3.0 it seems like required operator isn't functioning properly : when default conjunction operator is AND. You're mixing the "prefix operators" with the "infix operators", which is always a recipe for disaster. The use of q.op=AND vs q.op=OR in these examples only complicates …
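A minimal sketch of the distinction being described: keep a query either purely in prefix-operator form or purely in infix form, rather than mixing the two. The field name and endpoint below are assumptions for illustration:

```shell
# Two ways to require both terms, without mixing operator styles.
PREFIX_Q='+text:A +text:B'   # "prefix operators": +/- attached to each clause
INFIX_Q='text:A AND text:B'  # "infix operators": AND/OR between clauses
# Send either one (URL-encoded) to a running collection, e.g.:
# curl "http://localhost:8983/solr/mycollection/select" --data-urlencode "q=$PREFIX_Q"
echo "$PREFIX_Q"
echo "$INFIX_Q"
```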

Required operator (+) is being ignored when using default conjunction operator AND

2020-04-01 Thread Eran Buchnick
Using solr 8.3.0 it seems like required operator isn't functioning properly when default conjunction operator is AND. Steps to reproduce: 20 docs all have text field 17 have the value A 13 have the value B 10 have both A and B (the intersection) ===the data=== [ { "id": "0", …

Re: Solrcloud 7.6 OOM due to unable to create native threads

2020-04-01 Thread Walter Underwood
We have defined a “search feed” as a file of JSONL objects, one per line. The feed files can be stored in S3, reloaded, sent to two clusters, etc. Each destination can keep its own log of failures and retries. We’ve been doing this for full batch feeds and incrementals for a few years. We’ve been …
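A hedged sketch of the "search feed" idea described here: one JSON document per line, validated locally, then shippable to each destination cluster independently. File name, field names, and the update endpoint are assumptions, not details from the thread:

```shell
# Build a tiny JSONL feed file: one Solr document per line.
cat > feed.jsonl <<'EOF'
{"id":"1","title":"first doc"}
{"id":"2","title":"second doc"}
EOF
# Every line must parse as standalone JSON; check before shipping the file.
while IFS= read -r line; do
  echo "$line" | python3 -m json.tool > /dev/null || exit 1
done < feed.jsonl
echo "feed ok: $(wc -l < feed.jsonl | tr -d ' ') lines"
# Each cluster can then ingest the same file on its own schedule, e.g.:
# curl -X POST -H 'Content-type:application/json' --data-binary @feed.jsonl \
#   http://localhost:8983/solr/mycollection/update/json/docs
rm feed.jsonl
```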

Re: Request Tracking in Solr

2020-04-01 Thread Jason Gerlowski
Hi Prakhar, Newer versions of Solr offer an "Audit Logging" plugin for use cases similar to yours: https://lucene.apache.org/solr/guide/8_1/audit-logging.html I don't think that's available as far back as 5.2.1, though. Just thought I'd mention it in case upgrading is an option. Best, Jason
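For illustration, enabling the built-in audit logger on a new-enough Solr looks roughly like the following addition to security.json — a hedged config sketch based on the linked Audit Logging docs, not a complete drop-in file:

```json
{
  "auditlogging": {
    "class": "solr.SolrLogAuditLoggerPlugin"
  }
}
```

With basic auth enabled, each audit event includes the authenticated principal, which is what makes per-user request counting possible.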

Re: Unable to delete zookeeper queue

2020-04-01 Thread Jörn Franke
Maybe you need to increase jute.maxbuffer on both the ZK server and the ZK client to execute this. You would do better to ask the ZK mailing list. > On 01.04.2020 at 14:53, Kommu, Vinodh K. wrote: > > Hi, > > Does anyone know a working solution to delete zookeeper queue data? Please > help!! > > > Regards, > …
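A hedged sketch of the suggestion: raise jute.maxbuffer on both the ZK server JVM and the client JVM, then delete the oversized queue node. The buffer size, server address, and znode path below are assumptions:

```shell
# Client side: zkCli.sh picks up JVMFLAGS, so set the property there.
export JVMFLAGS="-Djute.maxbuffer=10485760"   # 10 MB; size is an assumption
echo "$JVMFLAGS"
# Server side: set the same property in each ZK server's JVM options and restart,
# e.g. SERVER_JVMFLAGS="-Djute.maxbuffer=10485760" (both sides must agree).
# Then remove the stuck Overseer queue children (ZK 3.5+ syntax; older zkCli used rmr):
# zkCli.sh -server zk1:2181 deleteall /overseer/queue
```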

RE: Unable to delete zookeeper queue

2020-04-01 Thread Kommu, Vinodh K.
Hi, Does anyone know a working solution to delete zookeeper queue data? Please help!! Regards, Vinodh From: Kommu, Vinodh K. Sent: Tuesday, March 31, 2020 12:55 PM To: solr-user@lucene.apache.org Subject: Unable to delete zookeeper queue All, For some reason one of our zookeeper queue was …

solr 8.5.0 and autoscaling cluster policy not applying on creation of collection

2020-04-01 Thread Andrew Doherty
Hello, I am trying to resolve an issue with setting the autoscaling cluster policy and not having any luck. I have tried "set-cluster-policy": [ {"replica": "#EQUAL", "shard": "#EACH", "sysprop.zone": "#EACH"} ] and it sets correctly, but doesn't seem to stick to that policy when creating a …
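For context, a set-cluster-policy request is normally POSTed to the cluster autoscaling endpoint. A hedged sketch of the full request, where the host and port are placeholders and the policy body is the one quoted above:

```shell
# Validate the policy body locally, then POST it to the autoscaling API.
BODY='{"set-cluster-policy":[{"replica":"#EQUAL","shard":"#EACH","sysprop.zone":"#EACH"}]}'
echo "$BODY" | python3 -m json.tool > /dev/null && echo "policy ok"
# Against a running Solr 8.x cluster, something like:
# curl -X POST -H 'Content-type:application/json' --data-binary "$BODY" \
#   http://localhost:8983/api/cluster/autoscaling
```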

Request Tracking in Solr

2020-04-01 Thread Prakhar Kumar
Hello Folks, I'm looking for a way to track requests in Solr from a particular user/client. Suppose, I've created a user, say *Client1*, using the basic authentication/authorization plugin. Now I want to get a count of the number of requests/queries made by *Client1* on the Solr server. Looking …

Re: Solrcloud 7.6 OOM due to unable to create native threads

2020-04-01 Thread S G
One approach could be to buffer the messages in Kafka before pushing to Solr. And then use "Kafka mirror" to replicate the messages to the other DC. Now both DCs' Kafka pipelines are in sync by the mirror and you can run storm/spark/flink etc jobs to consume local Kafka and publish to local Solr …
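A hedged sketch of the pipeline described above: mirror a local Kafka topic to the remote DC, and let each DC's indexer consume only its local cluster. The topic name and config file names are assumptions:

```shell
# Topic that carries Solr update documents in each DC.
TOPIC="solr-updates"
echo "$TOPIC"
# In the source DC, run Kafka's bundled MirrorMaker to copy the topic across
# (consumer config points at the local cluster, producer config at the remote one):
# kafka-mirror-maker.sh --consumer.config source-dc.properties \
#   --producer.config target-dc.properties --whitelist "$TOPIC"
# Each DC then runs its own consumer job (storm/spark/flink/etc.) that reads
# "$TOPIC" from its local Kafka and posts the documents to its local Solr,
# so the two DCs index independently while staying in sync via the mirror.
```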