[ https://issues.apache.org/jira/browse/HADOOP-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12530499 ]

Owen O'Malley commented on HADOOP-1873:
---------------------------------------

Actually, I think we should add into Configuration:

{code}
  void setWritable(String key, Writable value) throws IOException;
  Writable getWritable(String key) throws IOException;
{code}

which would store a generic Writable in a Configuration via stringification/quoting, 
and we could use that to put the tickets into the JobConf.
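To make the stringification idea concrete, here is a minimal sketch of how such a pair of methods might serialize a Writable through a Configuration's string-valued properties by Base64-encoding the Writable's byte form. The Writable interface, the Configuration class, and the IntBox example type below are simplified stand-ins, not the real Hadoop classes; the getWritable signature also deviates from the one proposed above by taking a caller-supplied instance, since recovering the concrete class from a bare string would otherwise require storing the class name as well.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Hadoop's Writable interface.
interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

// Simplified stand-in for Hadoop's Configuration: string keys to string values.
class Configuration {
    private final Map<String, String> props = new HashMap<>();

    // Serialize the Writable to bytes, then store it as a Base64 string,
    // which sidesteps quoting/escaping problems in the XML-backed config.
    void setWritable(String key, Writable value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        value.write(new DataOutputStream(bytes));
        props.put(key, Base64.getEncoder().encodeToString(bytes.toByteArray()));
    }

    // Decode the stored string and populate the caller-supplied instance.
    <T extends Writable> T getWritable(String key, T instance) throws IOException {
        String encoded = props.get(key);
        if (encoded == null) {
            return null;
        }
        byte[] bytes = Base64.getDecoder().decode(encoded);
        instance.readFields(new DataInputStream(new ByteArrayInputStream(bytes)));
        return instance;
    }
}

// Trivial example Writable holding a single int.
class IntBox implements Writable {
    int value;

    public void write(DataOutput out) throws IOException {
        out.writeInt(value);
    }

    public void readFields(DataInput in) throws IOException {
        value = in.readInt();
    }
}
```

A ticket-like payload would round-trip the same way: write it into the config on the client, read it back in the task.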

Another issue that needs to be addressed for map/reduce is the usage of the 
system directory, which is related to job conf security. Obviously the 
current usage doesn't work at all once we have permissions. I'm not sure what 
the right solution is. We either need to move the job.xml and the input splits 
via RPC, or find some other solution. The easiest solution would be to give the 
JobTracker super-user DFS permissions, but that doesn't map well onto having 
dynamic map/reduce clusters. Clearly the job.xml (and input splits) should be 
protected from general reading/writing.

> User permissions for Map/Reduce
> -------------------------------
>
>                 Key: HADOOP-1873
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1873
>             Project: Hadoop
>          Issue Type: Improvement
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>
> HADOOP-1298 and HADOOP-1701 add permissions and pluggable security for DFS 
> files and DFS accesses. The same user permissions should work for Map/Reduce 
> jobs as well. 
> User permissions should propagate from the client to map/reduce tasks, and 
> all file operations should be subject to user permissions. This should be 
> transparent to the user (i.e. no changes to user code should be required). 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
