Re: How to use stopwords, synonyms along with fuzzy match in a SOLR

2019-05-08 Thread Erick Erickson
Well, I’d start by adding debug=true, that’ll show you the parsed query as well 
as why certain documents scored the way they did. But do note that q=junk~ will 
search against the default text field (the “df” parameter in the request 
handler definition in solrconfig.xml). Is that what you’re expecting?

Or, I suppose, it’s searching against the fields defined if you’re using 
(e)dismax as your query parser. But the debug output (parsed query part) will 
show what the actual search is.
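For example, a request along these lines (collection name and response format are hypothetical; only q and debugQuery matter here) will include a parsedquery section showing exactly which field and terms the fuzzy query expanded to:

```
/solr/collection1/select?q=junk~&debugQuery=true&wt=json
```

The debug block of the response contains the parsed query plus a per-document explain section for the scoring.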

You should also look at the admin/analysis page. For instance, the way you have 
the field defined at index time, it’ll break on whitespace. So a token like 
“junk.” won’t be stopped out, because your stopword list contains “junk” 
without the period.

Plus, your EdgeNGramFilterFactory is pretty strange. A min gram size of 1 means 
you’re searching for single characters.
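To see why a min gram size of 1 is a problem, here is a toy sketch of what an edge n-gram filter emits for a single token (a simplified model of EdgeNGramFilterFactory, not the actual Lucene implementation):

```python
def edge_ngrams(token, min_gram=1, max_gram=10):
    # Emit every prefix of the token whose length is between min_gram
    # and max_gram -- roughly what EdgeNGramFilterFactory does per token.
    return [token[:n] for n in range(min_gram, min(len(token), max_gram) + 1)]

print(edge_ngrams("junk"))  # ['j', 'ju', 'jun', 'junk']
```

With minGramSize="1", every document containing any word starting with "j" is indexed under the single-character term "j", which matches far too broadly.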

So what I’d do is back off the definition and build it up bit by bit to see 
if/when you have this problem. But if stopwords are working correctly at index 
time, “junk” will not be _in_ the index, and therefore it’ll be impossible to 
find, fuzzy search or not. So you’re making some assumptions that aren’t true, 
and the analysis process combined with looking at the parsed query should show 
you quite a lot.
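As an aside on why q=junk~ can match 'jack' at all: a bare trailing ~ gives Lucene's fuzzy query its default maximum edit distance of 2, and 'jack' is exactly two single-character edits away from 'junk'. A quick sketch (plain Levenshtein distance, which approximates what the fuzzy query measures):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance (insert/delete/substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# junk -> jack: substitute u->a and n->c, so distance 2, within the default
print(levenshtein("junk", "jack"))  # 2
```

So even with "junk" stopped out of the query, the fuzzy term itself is never run through the stopword filter, and anything within two edits is fair game.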

Best,
Erick

> On May 8, 2019, at 4:43 PM, bbarani  wrote:
> 
> Hi,
> Is there a way to use stopwords and fuzzy match in a SOLR query?
> 
> The query below matches 'jack' too. I added 'junk' to the stopwords (on the
> query side) to avoid returning results, but it looks like the stopwords are
> not honored when using fuzzy search.
> 
> solr/collection1/select?app-qf=title_autoComplete=false=*=true=-1=marketingSequence%20asc=productId=true=on=categoryFilter=defaultMarketingSequence%20asc=junk~
> 
> <analyzer type="index">
>   <tokenizer class="solr.WhitespaceTokenizerFactory"/>
>   <filter class="solr.StopFilterFactory" words="stopwords.txt"
>   ignoreCase="true"/>
>   <filter class="solr.SynonymFilterFactory"
>   synonyms="synonyms.txt"/>
>   <filter class="solr.WordDelimiterFilterFactory"
>   catenateNumbers="0" generateNumberParts="0" generateWordParts="0"
>   preserveOriginal="1" catenateAll="0" catenateWords="1"/>
>   <filter class="solr.EdgeNGramFilterFactory"
>   minGramSize="1"/>
> </analyzer>
> <analyzer type="query">
>   <tokenizer class="solr.WhitespaceTokenizerFactory"/>
>   <filter class="solr.StopFilterFactory" words="stopwords.txt"
>   ignoreCase="true"/>
>   <filter class="solr.SynonymFilterFactory"
>   synonyms="synonyms.txt"/>
>   <filter class="solr.WordDelimiterFilterFactory"
>   catenateNumbers="0" generateNumberParts="0" generateWordParts="0"
>   preserveOriginal="1" catenateAll="0" catenateWords="1"/>
> </analyzer>
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html



Re: Modify partial configsets using API

2019-05-08 Thread Tulsi Das
That's right Mike.

If the same config set is used for multiple collections, changing any file in
it applies to the other collections as well.

On Wed, May 8, 2019 at 11:49 PM Mike Drob  wrote:

>
>
> On 2019/05/08 16:52:52, Shawn Heisey  wrote:
> > On 5/8/2019 10:50 AM, Mike Drob wrote:
> > > Solr Experts,
> > >
> > > Is there an existing API to modify just part of my configset, for
> example
> > > synonyms or stopwords? I see that there is the schema API, but that is
> > > pretty specific in scope.
> > >
> > > Not sure if I should be looking at configset API to upload a zip with a
> > > single file, or if there are more granular options available.
> >
> > Here's a documentation link for managed resources:
> >
> > https://lucene.apache.org/solr/guide/6_6/managed-resources.html
> >
> > That's the 6.6 version of the documentation.  If you're running
> > something newer, which seems likely since 6.6 is quite old now, you
> > might want to look into a later documentation version.
> >
> > Thanks,
> > Shawn
> >
>
> Thanks Shawn, this looks like it will fit the bill nicely!
>
> One more question that I don't see covered in the documentation - if I
> have multiple collections sharing the same config set, does updating the
> managed stop words for one collection apply the change to all? Is this
> change persisted in zookeeper?
>
> Mike
>


collection exists but delete by query fails

2019-05-08 Thread Aroop Ganguly


Hi 

I am on Solr 7.5 and I am issuing a delete-by-query using CloudSolrClient.
The collection exists, but the deleteByQuery fails every single time.
I am wondering what is happening, and how to debug it.

org.apache.solr.client.solrj.SolrServerException: 
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:995)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at java.util.Collections$UnmodifiableList.get(Collections.java:1309)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:486)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1012)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
... 6 more


How to use stopwords, synonyms along with fuzzy match in a SOLR

2019-05-08 Thread bbarani
Hi,
Is there a way to use stopwords and fuzzy match in a SOLR query?

The query below matches 'jack' too. I added 'junk' to the stopwords (on the
query side) to avoid returning results, but it looks like the stopwords are
not honored when using fuzzy search.

solr/collection1/select?app-qf=title_autoComplete=false=*=true=-1=marketingSequence%20asc=productId=true=on=categoryFilter=defaultMarketingSequence%20asc=junk~

--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Softer version of grouping and/or filter query

2019-05-08 Thread Emir Arnautović
Hi Doug,
It seems to me that you’ve found a way to increase the score for documents that 
are within the selected price range, but “A price higher than $150 should not 
increase the score”. I’ll just remind you that scores in Solr are relative to 
the query, and you cannot do much with them other than sort on them, so it 
should not matter much whether you boost the ones you prefer or decrease the 
score of those that are not your first choice.

HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



> On 8 May 2019, at 23:56, Doug Reeder  wrote:
> 
> We have a query to return products related to a given product. To give some
> variety to the results, we group by vendor:
> group=true=true=merchantId
> 
> We need at least four results to display. Unfortunately, some categories
> don't have a lot of products, and grouping takes us (say) from five results
> to three.
> 
> Can I "soften" the grouping, so other products by the same vendor will
> appear in the results, but with much lower score?
> 
> 
> Similarly, we have a filter query that only returns products over $150:
> fq=price:[150+TO+*]
> 
> Can this be changed to a q or qf parameter where products less than $150
> have score less than any product priced $150 or more? (A price higher than
> $150 should not increase the score.)



Softer version of grouping and/or filter query

2019-05-08 Thread Doug Reeder
We have a query to return products related to a given product. To give some
variety to the results, we group by vendor:
group=true=true=merchantId

We need at least four results to display. Unfortunately, some categories
don't have a lot of products, and grouping takes us (say) from five results
to three.

Can I "soften" the grouping, so other products by the same vendor will
appear in the results, but with much lower score?


Similarly, we have a filter query that only returns products over $150:
fq=price:[150+TO+*]

Can this be changed to a q or qf parameter where products less than $150
have score less than any product priced $150 or more? (A price higher than
$150 should not increase the score.)
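One common approach (a sketch, assuming the edismax query parser and untested against your schema) is to move the price constraint from fq, which is a hard filter, into a boost query, which only influences ranking:

```
defType=edismax
q=<your relatedness query>
bq=price:[150 TO *]^10
```

Documents under $150 still match but tend to sort below the boosted ones; the boost factor (^10 here) is a made-up starting point you would tune. Note there is no equivalent "soft" switch for result grouping: group.limit controls how many documents each group returns, but documents collapsed out of a group do not reappear with a lower score.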


Re: Load suggest dictionary from non-Zookeeper file?

2019-05-08 Thread Mikhail Khludnev
Right.

On Wed, May 8, 2019 at 11:49 PM Shawn Heisey  wrote:

> On 5/8/2019 2:34 PM, Mikhail Khludnev wrote:
> > It reminds me
> https://lucene.apache.org/solr/guide/7_6/blob-store-api.html but
> > I don't think it's already integrated with suggester.
>
> I'm having one of those days where I can't seem to recall things easily.
>
> With the blob store, the blobs are in the Lucene index, right?
>
> Thanks,
> Shawn
>


-- 
Sincerely yours
Mikhail Khludnev


Re: Load suggest dictionary from non-Zookeeper file?

2019-05-08 Thread Shawn Heisey

On 5/8/2019 2:34 PM, Mikhail Khludnev wrote:

It reminds me  https://lucene.apache.org/solr/guide/7_6/blob-store-api.html but
I don't think it's already integrated with suggester.


I'm having one of those days where I can't seem to recall things easily.

With the blob store, the blobs are in the Lucene index, right?

Thanks,
Shawn


Re: Load suggest dictionary from non-Zookeeper file?

2019-05-08 Thread Mikhail Khludnev
It reminds me  https://lucene.apache.org/solr/guide/7_6/blob-store-api.html but
I don't think it's already integrated with suggester.

On Wed, May 8, 2019 at 11:26 PM Shawn Heisey  wrote:

> On 5/8/2019 1:59 PM, Walter Underwood wrote:
> > Our suggest dictionary is too big for Zookeeper. I’m trying to load it
from an absolute path, but Solr 6.6.1 insists on interpreting that as a
> Zookeeper path. Any way to disable that?
>
> I wouldn't be surprised to learn it's not possible to get it to go
> outside zookeeper for config files.  I do not know, though.
>
> For right now, your only option will probably be to increase the
> jute.maxbuffer system property on all relevant ZK servers and Solr
> servers.  Then you will be able to store data larger than 1MB in ZK.
> Somebody from the ZK project would probably frown on that solution, and
> if I'm honest, I don't like it much myself.
>
> There are use cases like this where a SolrCloud replica (core) needs to
> access some large data that would be better kept on the local disk
> instead of in ZK.  I think it's probably a good idea to open an issue
> for allowing access to config data on the filesystem for SolrCloud.  I'd
> like some of the other people here to sanity check that idea, though.
>
> Thanks,
> Shawn
>


-- 
Sincerely yours
Mikhail Khludnev


Re: Load suggest dictionary from non-Zookeeper file?

2019-05-08 Thread Walter Underwood
The file is 33 Megabytes, so I don’t think increasing jute.maxbuffer is a wise 
idea.

The current documentation is not at all clear about how the dictionary file 
name is interpreted. I could see an absolute path being local and a relative 
path being relative to the ZK config folder. I wouldn’t mind using a “file:” 
URL for local stuff.

None of that is going to get this prototype working today, so I’m back to a 
non-cloud cluster. That is a real pain in the ass to set up with 6.x and 7.x. I 
got it working before vacation and now I can’t remember the steps.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On May 8, 2019, at 1:26 PM, Shawn Heisey  wrote:
> 
> On 5/8/2019 1:59 PM, Walter Underwood wrote:
>> Our suggest dictionary is too big for Zookeeper. I’m trying to load it from 
>> an absolute path, but Solr 6.6.1 insists on interpreting that as a 
>> Zookeeper path. Any way to disable that?
> 
> I wouldn't be surprised to learn it's not possible to get it to go outside 
> zookeeper for config files.  I do not know, though.
> 
> For right now, your only option will probably be to increase the 
> jute.maxbuffer system property on all relevant ZK servers and Solr servers.  
> Then you will be able to store data larger than 1MB in ZK. Somebody from the 
> ZK project would probably frown on that solution, and if I'm honest, I don't 
> like it much myself.
> 
> There are use cases like this where a SolrCloud replica (core) needs to 
> access some large data that would be better kept on the local disk instead of 
> in ZK.  I think it's probably a good idea to open an issue for allowing 
> access to config data on the filesystem for SolrCloud.  I'd like some of the 
> other people here to sanity check that idea, though.
> 
> Thanks,
> Shawn



Re: Load suggest dictionary from non-Zookeeper file?

2019-05-08 Thread Shawn Heisey

On 5/8/2019 1:59 PM, Walter Underwood wrote:

Our suggest dictionary is too big for Zookeeper. I’m trying to load it from an 
absolute path, but Solr 6.6.1 insists on interpreting that as a Zookeeper 
path. Any way to disable that?


I wouldn't be surprised to learn it's not possible to get it to go 
outside zookeeper for config files.  I do not know, though.


For right now, your only option will probably be to increase the 
jute.maxbuffer system property on all relevant ZK servers and Solr 
servers.  Then you will be able to store data larger than 1MB in ZK. 
Somebody from the ZK project would probably frown on that solution, and 
if I'm honest, I don't like it much myself.


There are use cases like this where a SolrCloud replica (core) needs to 
access some large data that would be better kept on the local disk 
instead of in ZK.  I think it's probably a good idea to open an issue 
for allowing access to config data on the filesystem for SolrCloud.  I'd 
like some of the other people here to sanity check that idea, though.


Thanks,
Shawn
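For reference, Shawn's jute.maxbuffer workaround means setting the same system property on every ZooKeeper server and every Solr node, e.g. (the value here is hypothetical; pick something comfortably above your largest file):

```shell
# In zookeeper-env.sh (or wherever ZooKeeper's JVMFLAGS are set):
JVMFLAGS="$JVMFLAGS -Djute.maxbuffer=50000000"

# In solr.in.sh on each Solr node:
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=50000000"
```

If the values differ between nodes, reads and writes of the oversized znode can fail inconsistently, which is part of why the ZK project discourages raising it.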


Load suggest dictionary from non-Zookeeper file?

2019-05-08 Thread Walter Underwood
Our suggest dictionary is too big for Zookeeper. I’m trying to load it from an 
absolute path, but Solr 6.6.1 insists on interpreting that as a Zookeeper 
path. Any way to disable that?

java.lang.IllegalArgumentException: Invalid path string 
"/configs/questions-suggest//solr/suggest-data/questions-suggest/ngram_counts.tsv"

I could bring up a non-cloud cluster just for this suggester, but that seems 
like an ugly hack.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)



Re: Error when merging segments ("terms out of order")

2019-05-08 Thread Yannick Alméras
Hello,

I installed it with the openjdk-11-jdk package from the Ubuntu 18.04 repository 
with apt-get... 

For the moment, no more problems with openjdk-8... I don't know the exact reason 
for the problems with 32-bit openjdk-11 (?). 

I will move to a 64-bit system when possible (the server has been 32-bit for a 
long time, with many different things on it... that will be a big job). 

Best regards, 
Y. Alméras 



Le 8 mai 2019 21:24:19 GMT+02:00, Shawn Heisey  a écrit :
>On 5/8/2019 10:47 AM, Alméras Yannick wrote:
>> The problem of segments merging seems to be solved when I replace
>Java 11
>> 32bit with Java 8 32bit on my prod Ubuntu server... (On my dev
>archlinux
>> computer, no problem with Java 11 64bit...).
>
>It is strongly recommended to run a 64-bit version of Java if you can 
>... but it does seem very weird that 32-bit Java would cause that 
>particular issue.
>
>It's not that there's anything inherently wrong with 32-bit software
>... 
>but 32-bit Java can only access 2GB of heap memory, and more is often 
>needed.
>
>I'm curious how you obtained a 32 bit version of Java 11.  Looking at 
>the websites for Oracle Java and OpenJDK, I do not see any way to 
>download it.  Oracle stopped putting 32-bit versions on their public 
>download page with Java 9.
>
>Thanks,
>Shawn


Re: Error when merging segments ("terms out of order")

2019-05-08 Thread Shawn Heisey

On 5/8/2019 10:47 AM, Alméras Yannick wrote:

The problem of segments merging seems to be solved when I replace Java 11
32bit with Java 8 32bit on my prod Ubuntu server... (On my dev archlinux
computer, no problem with Java 11 64bit...).


It is strongly recommended to run a 64-bit version of Java if you can 
... but it does seem very weird that 32-bit Java would cause that 
particular issue.


It's not that there's anything inherently wrong with 32-bit software ... 
but 32-bit Java can only access 2GB of heap memory, and more is often 
needed.


I'm curious how you obtained a 32 bit version of Java 11.  Looking at 
the websites for Oracle Java and OpenJDK, I do not see any way to 
download it.  Oracle stopped putting 32-bit versions on their public 
download page with Java 9.


Thanks,
Shawn


Re: Modify partial configsets using API

2019-05-08 Thread Mike Drob



On 2019/05/08 16:52:52, Shawn Heisey  wrote: 
> On 5/8/2019 10:50 AM, Mike Drob wrote:
> > Solr Experts,
> > 
> > Is there an existing API to modify just part of my configset, for example
> > synonyms or stopwords? I see that there is the schema API, but that is
> > pretty specific in scope.
> > 
> > Not sure if I should be looking at configset API to upload a zip with a
> > single file, or if there are more granular options available.
> 
> Here's a documentation link for managed resources:
> 
> https://lucene.apache.org/solr/guide/6_6/managed-resources.html
> 
> That's the 6.6 version of the documentation.  If you're running 
> something newer, which seems likely since 6.6 is quite old now, you 
> might want to look into a later documentation version.
> 
> Thanks,
> Shawn
> 

Thanks Shawn, this looks like it will fit the bill nicely!

One more question that I don't see covered in the documentation - if I have 
multiple collections sharing the same config set, does updating the managed 
stop words for one collection apply the change to all? Is this change persisted 
in zookeeper?

Mike
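To make the managed-resources pointer concrete: managed stopwords are exposed as a REST endpoint under the collection's schema path. A small sketch that builds the request (the collection and resource names are assumptions; sending it is left to curl or an HTTP client):

```python
import json

collection = "collection1"   # assumed collection name
resource = "english"         # assumed managed stopwords resource name

# Managed stopwords live under the schema's analysis path.
endpoint = f"/solr/{collection}/schema/analysis/stopwords/{resource}"
payload = json.dumps(["junk", "foo"])  # words to add via an HTTP PUT

print("PUT", endpoint, payload)
```

e.g. `curl -X PUT -H 'Content-type: application/json' --data-binary '["junk"]' http://localhost:8983/solr/collection1/schema/analysis/stopwords/english`. Because the managed resource is stored in the configset (in ZooKeeper for SolrCloud), the change is indeed shared by every collection using that configset, as discussed in this thread.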


Re: Modify partial configsets using API

2019-05-08 Thread Shawn Heisey

On 5/8/2019 10:50 AM, Mike Drob wrote:

Solr Experts,

Is there an existing API to modify just part of my configset, for example
synonyms or stopwords? I see that there is the schema API, but that is
pretty specific in scope.

Not sure if I should be looking at configset API to upload a zip with a
single file, or if there are more granular options available.


Here's a documentation link for managed resources:

https://lucene.apache.org/solr/guide/6_6/managed-resources.html

That's the 6.6 version of the documentation.  If you're running 
something newer, which seems likely since 6.6 is quite old now, you 
might want to look into a later documentation version.


Thanks,
Shawn


Modify partial configsets using API

2019-05-08 Thread Mike Drob
Solr Experts,

Is there an existing API to modify just part of my configset, for example
synonyms or stopwords? I see that there is the schema API, but that is
pretty specific in scope.

Not sure if I should be looking at configset API to upload a zip with a
single file, or if there are more granular options available.

Thanks,
Mike


Re: Error when merging segments ("terms out of order")

2019-05-08 Thread Alméras Yannick
Hello !

The problem of segments merging seems to be solved when I replace Java 11 
32bit with Java 8 32bit on my prod Ubuntu server... (On my dev archlinux 
computer, no problem with Java 11 64bit...).

Hope this will be the real solution... "I cross my fingers" (from french "je 
croise les doigts" ;-) ).

Best regards,
Y. Alméras 


Le mardi 7 mai 2019, 15:25:13 CEST Shawn Heisey a écrit :
> On 5/7/2019 5:28 AM, Alméras Yannick wrote:
> > I don't understand a problem on my ubuntu 18.04 solr server (version
> > 7.6.0)...
> > 
> > When merge of segments is called, there is an error and then, the index is
> > not writable. The logs of a failed segments merging are at the end of
> > this message (here, I forced the merge but it's the same without
> > forcing).
> > 
> > How can I debug this ? I don't understand the cause of the following error
> > 
> > java.lang.IllegalArgumentException: terms out of order: priorTerm=[6e 61
> > 64 69 6e 65 2e 76 61 6c 61 64 65 5f 33 35 32 37],currentTerm=[6e 61 64 69
> > 6e 65 2e 76 61 6c 61 64 65 5f 33 35 32 37]"
> > 
> > Another thing : I can't reproduce the bug on archlinux with the same
> > configuration of solr server (version 7.6.0)... 
> 
> Are the two systems running different Java?  What vendor and version of
> Java is on each server?  Knowing the vendor is very important.  Some of
> this information is on the admin UI dashboard, and you can get
> definitive information by running "java -version" at the commandline.
> Since it is easy to have multiple Java versions on a system, and only
> one of those versions is likely the be accessible via the system PATH,
> you need to make sure you're running the right one.
> 
> Can you share the entire solr.log file that contains the problem, so we
> can see the entire sequence of errors?
> 
> You won't be able to attach the log to an email message.  The mailing
> list filters out most attachments.  You'll need to use a file sharing site.
> 
> Normally I would say there's nothing sensitive in the log, but people
> disagree with that all the time.  If you do redact anything, please do
> so sparingly, and do it in a way that we can tell redacted things apart
> from each other.
> 
> Thanks,
> Shawn





Re: Error when merging segments ("terms out of order")

2019-05-08 Thread Alméras Yannick
Hello !

I'm going deeper into the mystery...

There is something wrong in my configuration or with Solr, because segment 
merging never works as it should.

Example (step by step) :

* I clear my index.

* I add entries one by one. No problem when looking at items; all fields are 
OK (as long as no segment merging has happened).

* When segment merging happens, the fields of my entries go wrong! For 
example, here is the result of a query:

  - before merging (OK; all item_id values equal is_nid):

{
  "response":{"numFound":3,"start":0,"docs":[
  {
"item_id":"84",
"is_nid":84},
  {
"item_id":"85",
"is_nid":85},
  {
"item_id":"87",
"is_nid":87}]
  }}

 - after merging (wrong; look at the second item, where item_id <> is_nid and 
the correct value is is_nid): 

{
  "response":{"numFound":3,"start":0,"docs":[
  {
"item_id":"84",
"is_nid":84},
  {
"item_id":"87",
"is_nid":85},
  {
"item_id":"87",
"is_nid":87}]
  }}

Here are links to my configuration files if it helps...

https://www.dropbox.com/s/pn19va6ss6n2gql/solrconfig.xml?dl=0
https://www.dropbox.com/s/9sjo7q1zi1y1o1k/schema.xml?dl=0

Best Regards,
Y. Alméras

Le mardi 7 mai 2019, 15:25:13 CEST Shawn Heisey a écrit :
> On 5/7/2019 5:28 AM, Alméras Yannick wrote:
> > I don't understand a problem on my ubuntu 18.04 solr server (version
> > 7.6.0)...
> > 
> > When merge of segments is called, there is an error and then, the index is
> > not writable. The logs of a failed segments merging are at the end of
> > this message (here, I forced the merge but it's the same without
> > forcing).
> > 
> > How can I debug this ? I don't understand the cause of the following error
> > 
> > java.lang.IllegalArgumentException: terms out of order: priorTerm=[6e 61
> > 64 69 6e 65 2e 76 61 6c 61 64 65 5f 33 35 32 37],currentTerm=[6e 61 64 69
> > 6e 65 2e 76 61 6c 61 64 65 5f 33 35 32 37]"
> > 
> > Another thing : I can't reproduce the bug on archlinux with the same
> > configuration of solr server (version 7.6.0)... 
> 
> Are the two systems running different Java?  What vendor and version of
> Java is on each server?  Knowing the vendor is very important.  Some of
> this information is on the admin UI dashboard, and you can get
> definitive information by running "java -version" at the commandline.
> Since it is easy to have multiple Java versions on a system, and only
> one of those versions is likely the be accessible via the system PATH,
> you need to make sure you're running the right one.
> 
> Can you share the entire solr.log file that contains the problem, so we
> can see the entire sequence of errors?
> 
> You won't be able to attach the log to an email message.  The mailing
> list filters out most attachments.  You'll need to use a file sharing site.
> 
> Normally I would say there's nothing sensitive in the log, but people
> disagree with that all the time.  If you do redact anything, please do
> so sparingly, and do it in a way that we can tell redacted things apart
> from each other.
> 
> Thanks,
> Shawn





Re: Solr RuleBasedAuthorizationPlugin question

2019-05-08 Thread Jérémy
Hi Jason,

Thanks for the your help again.

Your suggestion for the core creation works well. I tried both workarounds
for the admin UI but without any success. No worries I'll watch the issue
and wait for its resolution.

Thank you!
Jeremy

On Tue, May 7, 2019 at 6:08 PM Jason Gerlowski 
wrote:

> The Admin UI lockdown is a known-issue in RBAP that's since been
> fixed. (https://issues.apache.org/jira/browse/SOLR-13344), but only in
> very recent versions of Solr.  I haven't tried this, but you should be
> able to work around it by putting a rule like: {path: /, role: *}
> right before your catch-all rule.  (I think "/" is the path that RBAP
> sees for Admin UI requests.  Though you may also want to try
> "/solr/").
>
> As for why core-creation is still allowed with that config, I'll try
> to take a quick look after work today, but may not have time to get to
> it.  It's a bit of a hack, and it'd be nice to understand the behavior
> now before making additional changes, but if you need to you can add
> an explicit rule to cover core creation:
>
> {
> "name": "core-admin-edit",
> "role": "admin"
> },
> {
>"name": "read",
>"role": "readonly"
>  },
>   {
> "path": "*",
> "role": "admin"
>   },
>   {
> "name": "*",
> "role": "admin"
>}
>
> Good luck,
>
> Jason
>
> On Tue, May 7, 2019 at 11:31 AM Jérémy  wrote:
> >
> > Hi Jason,
> >
> > Thanks a lot for the detailed explanation. It's still very unclear in my
> > head how things work, but now I know about the weird fallback mechanism
> of
> > RBAP. Despite your example I still didn't manage to get the behavior I
> > wanted.
> > Here's the closest I've been so far. Any logged in user can still create
> > cores but now the readonly user cannot delete or update documents.
> However
> > the admin UI webinterface is completely locked now.
> >
> > {
> >  "authentication": {
> >"blockUnknown": true,
> >"class": "solr.BasicAuthPlugin",
> >"credentials": {
> >  "adminuser": "adminpwd",
> >  "readuser": "readpwd"
> >}
> >  },
> >  "authorization": {
> >"class": "solr.RuleBasedAuthorizationPlugin",
> >"permissions": [
> >  {
> >"name": "read",
> >"role": "readonly"
> >  },
> >   {
> > "path": "*",
> > "role": "admin"
> >   },
> >   {
> > "name": "*",
> > "role": "admin"
> >}
> >],
> >"user-role": {
> >  "readuser": "readonly",
> >  "adminuser": ["admin", "readonly"]
> >}
> >  }
> > }
> >
> > I feel like I'm almost there and that the json is just missing a bit.
> >
> > Thanks for your help, I really appreciate it,
> > Jeremy
> >
> >
> >
> >
> > On Mon, May 6, 2019 at 11:00 PM Jason Gerlowski 
> > wrote:
> >
> > > Hey Jeremy,
> > >
> > > One important thing to remember about the RuleBasedAuthorizationPlugin
> > > is that if it doesn't find any rules matching a particular API call,
> > > it will allow the request.  I think that's what you're running into
> > > here.  Let's trace through how RBAP will process your rules:
> > >
> > > 1. Solr receives an API call.  For this example, let's say its a new
> > > doc sent to /solr/someCollection/update
> > > 2. Solr fetches security.json and parses the auth rules.  It'll look
> > > at each of these in turn.
> > > 3. First Rule: Solr checks "/solr/someCollection/update" against the
> > > "read" rule.  /update isn't a read API, so this rule doesn't apply to
> > > our request.
> > > 4. Second Rule: Solr checks "/solr/someCollection/update" agains the
> > > "security-edit" rule.  /update isn't a security-related API, so this
> > > rule doesn't apply to our request either.
> > > 5. Solr is out of rules to try.  Since no rules locked down /update to
> > > a particular user/role, Solr allows the request.
> > >
> > > This is pretty unintuitive and rarely is what people expect.  The way
> > > that RBAP works, you almost always will want to have the last rule in
> > > your security.json be a "catch-all" rule of some sort.  You can do
> > > this by appending a rule entry with the wildcard path "*".  In the
> > > latest Solr releases, you can also use the predefined "all" permission
> > > (but beware of SOLR-13355 in earlier versions).  e.g.
> > >
> > >  {
> > > "name": "read",
> > > "role": "readonly"
> > >   },
> > >   {
> > > "name": "security-edit",
> > > "role": "admin"
> > >   },
> > >   {
> > > "path": "*",
> > > "role": "admin"
> > >}
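Jason's fall-through behavior can be modeled with a toy first-match evaluator (a hypothetical, heavily simplified model — the real plugin also matches on collection, HTTP method, and request params):

```python
def is_allowed(rules, path, user_roles):
    # First matching rule decides; an unmatched request falls through as allowed.
    for rule in rules:
        if rule.get("path") in (path, "*") or rule.get("name") == "*":
            return rule["role"] in user_roles
    return True  # no rule matched: RBAP permits the request

rules_without_catchall = [{"path": "/select", "role": "readonly"}]
rules_with_catchall = rules_without_catchall + [{"path": "*", "role": "admin"}]

# /update matches no rule, so it falls through and is allowed for anyone:
print(is_allowed(rules_without_catchall, "/update", {"readonly"}))  # True
# With a catch-all rule, /update now requires the admin role:
print(is_allowed(rules_with_catchall, "/update", {"readonly"}))     # False
```

This is why the catch-all entry almost always belongs at the end of security.json: without it, every API that no earlier rule names is wide open.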
> > >
> > >
> > > Hope that helps.
> > >
> > > Jason
> > >
> > > On Fri, May 3, 2019 at 5:23 PM Jérémy  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I hope that this question wasn't answered already, but I couldn't
> find
> > > what
> > > > I was looking for in the archives.
> > > >
> > > > I'm having a hard time to use solr with the BasicAuth and
> > > > RoleBasedAuthorization plugins.
> > > > The auth part