// Adding the dev DL, as this may be a bug

Solr v7.7.0

I’m expecting a 401 on all the servers in all 3 clusters, given the security
configuration.
For example, accessing the core or collection APIs without authentication
should return a 401.

On one server, in one of the clusters, authorization is completely ignored.
The HTTP response is 200 and the API returns results.
The other server in that cluster works properly, returning a 401 when a
protected API is accessed without authentication.

Interesting notes –
- If I use the IP or FQDN to access the server, authorization works properly
and a 401 is returned. It’s only when I use the short hostname that the
authorization is bypassed.
- On the broken server, a 401 is returned correctly when the ‘autoscaling
suggestions’ API is accessed. This API uses a different resource path, which
may be a clue to why the others fail.
  https://solr:8443/api/cluster/autoscaling/suggestions
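
A quick way to demonstrate the discrepancy is to probe the same protected
endpoint through each name on the certificate. This is only a sketch – the
hostnames, port, and endpoint below are placeholders for my environment:

```shell
# Sketch only – hostnames and endpoint are placeholders.
# Probe the same protected API via short name, FQDN, and IP,
# printing just the HTTP status code for each.
for host in solr solr.example.com 10.0.0.5; do
  # -k: the clusters use self-signed certificates
  code=$(curl -k -s -o /dev/null -w '%{http_code}' \
    "https://${host}:8443/solr/admin/cores?action=STATUS") || code="000"
  echo "${host}: HTTP ${code}"
done
```

With authorization working, an unauthenticated request should print 401 for
every name; on the broken server, only the short hostname prints 200.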

Here is the security.json with sensitive data changed/removed –

{
"authentication":{
   "blockUnknown": false,
   "class":"solr.BasicAuthPlugin",
   "credentials":{
     "admin":"--REDACTED--",
     "reader":"--REDACTED--",
     "writer":"--REDACTED--"
   },
   "realm":"solr"
},
"authorization":{
   "class":"solr.RuleBasedAuthorizationPlugin",
   "permissions":[
     {"name":"security-edit", "role":"admin"},
     {"name":"security-read", "role":"admin"},
     {"name":"schema-edit", "role":"admin"},
     {"name":"config-edit", "role":"admin"},
     {"name":"core-admin-edit", "role":"admin"},
     {"name":"collection-admin-edit", "role":"admin"},
     {"name":"autoscaling-read", "role":"admin"},
     {"name":"autoscaling-write", "role":"admin"},
     {"name":"autoscaling-history-read", "role":"admin"},
     {"name":"read","role":"*"},
     {"name":"schema-read","role":"*"},
     {"name":"config-read","role":"*"},
     {"name":"collection-admin-read", "role":"*"},
     {"name":"core-admin-read","role":"*"},
     {"name":"update", "role":"write"},
     {"collection":null, "path":"/admin/info/system", "role":"admin"}
   ],
   "user-role":{
     "admin": "admin",
     "reader": "read",
     "writer": "write"
   }
}}
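
Incidentally, the log line from my earlier message (“request has come without
principal. failed permission … core-admin-read, role *”) is consistent with
the documented semantics: in the RuleBasedAuthorizationPlugin, a role of “*”
means any authenticated user, not anonymous access. A simplified, hypothetical
sketch of that check (not Solr’s actual code, just an illustration of the
behavior I expect):

```python
# Hypothetical, simplified model of the role check – not Solr's code.
# Mirrors a few entries from the security.json above.
PERMISSIONS = [
    {"name": "core-admin-read", "role": "*"},
    {"name": "update", "role": "write"},
]
USER_ROLES = {"admin": ["admin"], "reader": ["read"], "writer": ["write"]}

def is_authorized(principal, permission_name):
    """Allow or deny a request carrying `principal` (None = anonymous)."""
    for perm in PERMISSIONS:
        if perm["name"] == permission_name:
            if perm["role"] == "*":
                # '*' requires *some* authenticated principal –
                # this is why anonymous requests should get a 401.
                return principal is not None
            return (principal is not None
                    and perm["role"] in USER_ROLES.get(principal, []))
    # No matching permission: the request is not restricted.
    return True
```

Given that, every unauthenticated hit on core-admin-read should be denied –
which is what the working servers do, and what the broken server fails to
enforce when reached by its short hostname.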

Jeremy Branham
jb...@allstate.com

On 3/14/19, 10:06 PM, "Zheng Lin Edwin Yeo" <edwinye...@gmail.com> wrote:

    Hi,
    
    I can't quite follow your question. Are you seeing the 401 error on all
    the clusters, or just on one of them?
    
    Also, which Solr version are you using?
    
    Regards,
    Edwin
    
    On Fri, 15 Mar 2019 at 05:15, Branham, Jeremy (Experis) <jb...@allstate.com>
    wrote:
    
    > I’ve discovered the authorization works properly if I use the FQDN to
    > access the Solr node, but the short hostname completely circumvents it.
    > They are all internal server clusters, so I’m using self-signed
    > certificates [the same exact certificate] on each. The SAN portion of the
    > cert contains the IP, short, and FQDN of each server.
    >
    > I also diff’d the two servers’ Solr installation directories and confirmed
    > they are identical.
    > They are using the same exact versions of Java and zookeeper, with the
    > same chroot configuration. [different zk clusters]
    >
    >
    > Jeremy Branham
    > jb...@allstate.com
    >
    > On 3/14/19, 10:44 AM, "Branham, Jeremy (Experis)" <jb...@allstate.com>
    > wrote:
    >
    >     I’m using Basic Auth on 3 different clusters.
    >     On 2 of the clusters, authorization works fine. A 401 is returned when
    > I try to access the core/collection APIs.
    >
    >     On the 3rd cluster I can see the authorization failed, but the API
    > results are still returned.
    >
    >     Solr.log
    >     2019-03-14 09:25:47.680 INFO  (qtp1546693040-152) [   ]
    > o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal.
    > failed permission {
    >       "name":"core-admin-read",
    >       "role":"*"}
    >
    >
    >     I’m using different zookeeper clusters for each solr cluster, but
    > using the same security.json contents.
    >     I’ve tried refreshing the ZK node, and bringing the whole Solr cluster
    > down and back up.
    >
    >     Is there some sort of caching that could be happening?
    >
    >     I wrote an installation script that I’ve used to set up each cluster,
    > so I’m thinking I’ll wipe it out and re-run.
    >     But before I do this, I thought I’d ask the community for input. Maybe
    > a bug?
    >
    >
    >     Jeremy Branham
    >     jb...@allstate.com<mailto:jb...@allstate.com>
    >     Allstate Insurance Company | UCV Technology Services | Information
    > Services Group
    >
    >
    >
    >
    
