[ https://issues.apache.org/jira/browse/HADOOP-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12530485 ]

Doug Cutting commented on HADOOP-1873:
--------------------------------------

> It implies we should be able to write and read objects as entries in the conf 
> file. [ ... ]

Why can't we just convert tickets to strings?  Configuration#set(String name, 
Object value) calls the toString() method, so all that we really need is a 
ticket constructor or factory method that takes a string, no?
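
Roughly what I have in mind, as a sketch only; the Ticket class, its fields, 
and the "mapred.job.ticket" property name are made up for illustration:

    import org.apache.hadoop.conf.Configuration;

    public class Ticket {
      private final String user;
      private final String secret;   // opaque credential material

      public Ticket(String user, String secret) {
        this.user = user;
        this.secret = secret;
      }

      // Serialize to a single string, the form stored in the Configuration.
      public String toString() {
        return user + ":" + secret;
      }

      // Factory method that reverses toString().
      public static Ticket valueOf(String s) {
        int i = s.indexOf(':');
        return new Ticket(s.substring(0, i), s.substring(i + 1));
      }

      public static void main(String[] args) {
        Configuration conf = new Configuration();
        Ticket t = new Ticket("someuser", "opaque-secret");
        conf.set("mapred.job.ticket", t.toString());                  // store
        Ticket back = Ticket.valueOf(conf.get("mapred.job.ticket"));  // recover
      }
    }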

> Keeping the ticket it self in the job conf implies that the file should be 
> world readable [...]

Not necessarily. The job conf file should be made readable in HDFS only by its 
creator and by the mapreduce system.  The local copy of job.xml on a jobtracker 
node should also not be visible to other jobs, but that might be harder to 
implement...
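
Something along these lines, I imagine, using the FsPermission API from 
HADOOP-1298; the path and the 0700 mode below are only illustrative, and 
assume the framework reads the file as the owner or as a privileged account:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class RestrictJobConf {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Submitted job.xml in the mapreduce system directory (illustrative path).
        Path jobXml = new Path("/system/mapred/job_200709250001/job.xml");
        // Owner-only access; other users' jobs cannot read the configuration.
        fs.setPermission(jobXml, new FsPermission((short) 0700));
      }
    }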

I'm still on the fence about whether we should store tickets in the job or 
handle them separately.  It would certainly be more convenient to keep them in 
the job configuration, since that already travels with the job, and, arguably, 
the other parts of a job configuration should also only be viewable by the 
job's owner and the mapreduce system.  The alternative I see would be to never 
write tickets to disk, but to only pass them in RPCs, from client to 
jobtracker, from jobtracker to tasktracker, and from tasktracker to task.  That 
would certainly be more secure, but it might also be overkill.

A hybrid approach that occurs to me would be to never write job configurations 
to disk, but rather to keep them in memory and pass them via RPC.  The job.jar 
would still be on disk, and potentially be viewable by other jobs, but job.xml 
would no longer exist.  Is that viable?
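
The submission call would then have to carry everything job.xml carries 
today.  To make the idea concrete, a purely hypothetical interface (not the 
actual JobSubmissionProtocol):

    import java.io.IOException;
    import org.apache.hadoop.io.Text;

    // Hypothetical RPC interface: the client ships the serialized job
    // configuration and the user's ticket in the submission call, and the
    // jobtracker keeps them in memory instead of writing job.xml to disk.
    public interface InMemoryJobSubmission {
      void submitJob(Text jobId, Text serializedConf, Text ticket) throws IOException;
    }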


> User permissions for Map/Reduce
> -------------------------------
>
>                 Key: HADOOP-1873
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1873
>             Project: Hadoop
>          Issue Type: Improvement
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>
> HADOOP-1298 and HADOOP-1701 add permissions and pluggable security for DFS 
> files and DFS accesses. The same user permissions should work for Map/Reduce 
> jobs as well. 
> User permissions should propagate from the client to map/reduce tasks, and all 
> file operations should be subject to those permissions. This should be 
> transparent to the user (i.e. no changes to user code should be required). 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
