Senthil,

What you see is the expected behavior: if Ranger is not able to make an 
authorization decision, it falls back to YARN, and since yarn.acl.enable=true 
the YARN ACLs are used.

Also, a parent queue's ACLs are passed down to its children. You can set the 
parent "root" queue's ACL to " " (a single space) to restrict access at the 
parent, and then grant access explicitly on the child queue. Please try this 
for your use case.

<property>
 <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
 <value> </value>
</property>

<property>
 <name>yarn.scheduler.capacity.root.other.acl_submit_applications</name>
 <value>john</value>
</property>
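For the change to take effect, the scheduler configuration has to be reloaded. 
On a manual (non-Ambari) setup this is usually done with the ResourceManager 
admin CLI; Ambari refreshes or restarts the relevant services for you when you 
save the config:

```
# Reload capacity-scheduler.xml into the running ResourceManager
yarn rmadmin -refreshQueues
```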

Thanks,

Ramesh


From: Senthil <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Wednesday, February 3, 2016 at 6:51 AM
To: "[email protected]" <[email protected]>
Subject: Re: Ranger + YARN Not working with HDP 2.3

Hi Ramesh,

Thanks. I verified that yarn.acl.enable is set to true. I can see audit logs 
under the /ranger/audit/yarn directory. My use case is to restrict a YARN 
queue to a limited set of users using Ranger.

I defined a policy on queue "other" that includes user david with the 
submit-app privilege, and the audit log shows that ranger-acl is working:

{"repoType":4,"repo":"c1_yarn","reqUser":"david","evtTime":"2016-02-03 14:07:49.999","access":"submit-app","resource":"root.other","resType":"queue","action":"submit-app","result":1,"policy":6,"enforcer":"ranger-acl","agentHost":"ip-10-0-2-69.us-west-2.compute.internal","logType":"RangerAudit","id":"a51d217c-dda7-4ba6-a2cb-b9387592bf37","seq_num":45,"event_count":1,"event_dur_ms":0}

If I remove user david from the policy on queue "other", I am still able to 
submit jobs to that queue. This time the audit log shows:

{"repoType":4,"repo":"c1_yarn","reqUser":"david","evtTime":"2016-02-03 14:38:46.713","access":"submit-app","resource":"root.other","resType":"queue","action":"submit-app","result":1,"policy":-1,"enforcer":"yarn-acl","agentHost":"ip-10-0-st-2.compute.internal","logType":"RangerAudit","id":"ce0af8e0-9409-4902-9a48-58099c8dc672","seq_num":1415,"event_count":1,"event_dur_ms":0}

In the second scenario, yarn-acl is triggered.
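For what it's worth, the "enforcer" and "policy" fields in these audit records 
are what distinguish the two cases. A small sketch (assuming audit lines are 
JSON objects with the fields shown above) that tells a Ranger policy decision 
apart from a YARN-ACL fallback:

```python
import json

# Trimmed-down version of the first audit record above.
record = json.loads(
    '{"repoType":4,"repo":"c1_yarn","reqUser":"david",'
    '"access":"submit-app","resource":"root.other",'
    '"result":1,"policy":6,"enforcer":"ranger-acl"}'
)

def decided_by_ranger(audit):
    # policy == -1 together with enforcer "yarn-acl" means Ranger had no
    # matching policy and YARN's own ACLs made the call instead.
    return audit["enforcer"] == "ranger-acl" and audit["policy"] != -1

print(decided_by_ranger(record))  # True for the first log line above
```

Applied to the second record ("enforcer":"yarn-acl", "policy":-1) this 
returns False, i.e. Ranger did not make that decision.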

I tried to change the default YARN ACLs following 
http://hortonworks.com/hadoop-tutorial/configuring-yarn-capacity-scheduler-ambari/
 and replaced the default permissions as follows:

yarn.scheduler.capacity.root.other.acl_administer_queue=john

yarn.scheduler.capacity.root.other.acl_submit_applications=john

User david is still able to submit jobs to the "other" queue.

How can we restrict users' access to a queue using Ranger?

Thanks


Senthil



On Wed, Feb 3, 2016 at 1:25 AM, Ramesh Mani <[email protected]> wrote:
Senthil,

Is audit enabled for the YARN Ranger policies you created, and do audit 
entries show up for the operations you perform? By default, if Ranger cannot 
make an authorization decision, it falls back to the YARN ACLs, and those may 
grant the permission.
Please verify that audit entries are present and that the YARN ACLs are enabled.

Regards,
Ramesh


From: Senthil <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Tuesday, February 2, 2016 at 12:06 AM
To: "[email protected]" <[email protected]>
Subject: Ranger + YARN Not working with HDP 2.3

I tried using Ranger with YARN without any success. I am using HDP 2.3. After 
installing Ranger, I enabled it for HDFS and YARN. Using the Ambari YARN Queue 
Manager (Ambari View), I created two additional queues, miner and other. Using 
the Ranger policy UI, I gave user david permission to submit jobs only to the 
miner queue. However, user david can submit jobs to both the miner and the 
other queue. Below is the YARN scheduler configuration from the Ambari 
dashboard.

How do I configure Ranger so that david can submit jobs only to the miner 
queue and not to any other queue?

Thanks for your help

yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.queue-mappings-override.enable=false
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=yarn
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_administer_queue=yarn
yarn.scheduler.capacity.root.default.acl_submit_applications=yarn
yarn.scheduler.capacity.root.default.capacity=20
yarn.scheduler.capacity.root.default.maximum-capacity=100
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=1
yarn.scheduler.capacity.root.miner.acl_administer_queue=*
yarn.scheduler.capacity.root.miner.acl_submit_applications=*
yarn.scheduler.capacity.root.miner.capacity=40
yarn.scheduler.capacity.root.miner.maximum-capacity=53
yarn.scheduler.capacity.root.miner.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.miner.ordering-policy=fifo
yarn.scheduler.capacity.root.miner.state=RUNNING
yarn.scheduler.capacity.root.miner.user-limit-factor=1
yarn.scheduler.capacity.root.other.acl_administer_queue=*
yarn.scheduler.capacity.root.other.acl_submit_applications=*
yarn.scheduler.capacity.root.other.capacity=40
yarn.scheduler.capacity.root.other.maximum-capacity=50
yarn.scheduler.capacity.root.other.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.other.ordering-policy=fifo
yarn.scheduler.capacity.root.other.state=RUNNING
yarn.scheduler.capacity.root.other.user-limit-factor=1
yarn.scheduler.capacity.root.queues=default,miner,other



- Senthil



