[
https://issues.apache.org/jira/browse/HADOOP-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12549528
]
Raghu Angadi commented on HADOOP-1298:
--------------------------------------
> My current patch only optimizes memory usage but not CPU usage since CPU
> usage is not critical compared to memory in NameNode. I will run some test
> later on to see whether there is a need to do the improvement.
I don't see how this is related to memory usage; the memory optimizations can
still stay. I think the current patch would affect the NNBench open() benchmark
quite a bit. Whether that needs improvement is more of a subjective question,
but we can at least quantify how much more CPU is required.
> Why is it not correct? Can you explain more?
Because the permission check and the actual action are not done under the same
lock. For example, in namesystem.delete() we check for write permission and
then lock FSNamesystem to do the delete. By that time, the file's permissions
could have changed.
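To illustrate the race (the class and method names below are hypothetical, a
minimal sketch rather than the actual FSNamesystem code): if the permission
check and the delete run under two separate acquisitions of the namesystem
lock, another client can change the file's permissions in the window between
them, and the delete then proceeds on a stale check. Doing both steps while
holding the same lock closes that window.

// Hypothetical sketch of the check-then-act race; not the real FSNamesystem API.
import java.util.HashMap;
import java.util.Map;

public class MiniNamesystem {
  private final Map<String, String> fileOwner = new HashMap<>(); // path -> owner

  // Racy version: permission check and delete happen under two separate
  // lock acquisitions.
  public boolean racyDelete(String path, String user) {
    synchronized (this) {              // first acquisition: permission check
      if (!user.equals(fileOwner.get(path))) {
        return false;
      }
    }
    // Window: another thread may change the owner/permissions here,
    // yet the delete below still proceeds on the stale check.
    synchronized (this) {              // second acquisition: the actual action
      return fileOwner.remove(path) != null;
    }
  }

  // Safe version: check and act while holding the same lock, so the
  // permissions seen by the check are the ones in effect for the delete.
  public boolean safeDelete(String path, String user) {
    synchronized (this) {
      if (!user.equals(fileOwner.get(path))) {
        return false;
      }
      return fileOwner.remove(path) != null;
    }
  }
}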
> adding user info to file
> ------------------------
>
> Key: HADOOP-1298
> URL: https://issues.apache.org/jira/browse/HADOOP-1298
> Project: Hadoop
> Issue Type: New Feature
> Components: dfs, fs
> Reporter: Kurtis Heimerl
> Assignee: Christophe Taton
> Attachments: 1298_2007-09-22_1.patch, 1298_2007-10-04_1.patch,
> 1298_20071206b.patch, hadoop-user-munncha.patch17
>
>
> I'm working on adding a permissions model to Hadoop's DFS. The first step is
> this change, which associates user info with files. Following this I'll
> associate permissions info, then block methods based on that user info, and
> then add authorization of the user info.
> So, right now I've implemented adding user info to files. I'm looking for
> feedback before I clean this up and make it official.
> I wasn't sure which release to target; I'm working off trunk.