[ 
https://issues.apache.org/jira/browse/HDFS-14633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880158#comment-16880158
 ] 

Ayush Saxena commented on HDFS-14633:
-------------------------------------

Firstly, to clarify:
{quote}Another concern is the change will let setStoragePolicy throw 
QuotaByStorageTypeExceededException which it doesn't before. I don't think it's 
a big problem since the setStoragePolicy already throws IOException. Or we can 
wrap the QuotaByStorageTypeExceededException in an IOException, but I won't 
recommend that because it's ugly.
{quote}
This is not a code-level problem. Even if it were throwing something else, and 
that were the desired behaviour, we could have changed it to a different 
exception. But the fact is, it is a change in the behaviour of the API: earlier 
no exception was thrown in this scenario, but now one would be, and old clients 
won't be expecting it. So a problem of backward compatibility comes in.
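To make the compatibility concern concrete, here is a minimal self-contained sketch (hypothetical code, not the actual HDFS client API; the method signatures and quota numbers are made up): an old client that never guarded setStoragePolicy against quota errors would now hit an exception path it never expected.

```java
import java.io.IOException;

public class CompatSketch {
    // Hypothetical stand-in for QuotaByStorageTypeExceededException
    // (in HDFS it is an IOException subclass).
    static class QuotaByStorageTypeException extends IOException {
        QuotaByStorageTypeException(String msg) { super(msg); }
    }

    // True when the proposed quota check would reject the call.
    // quota < 0 is treated here as "no quota set".
    static boolean exceedsTypeQuota(long newConsume, long quota) {
        return quota >= 0 && newConsume > quota;
    }

    // Sketch of the proposed behaviour: setStoragePolicy now enforces the
    // per-storage-type quota, so it can throw where it never did before.
    static void setStoragePolicy(long newConsume, long quota)
            throws QuotaByStorageTypeException {
        if (exceedsTypeQuota(newConsume, quota)) {
            throw new QuotaByStorageTypeException(
                "quota=" + quota + ", attempted consume=" + newConsume);
        }
        System.out.println("policy set");
    }

    public static void main(String[] args) {
        // Old-client code path: no handling for quota failures, because
        // before the change this call never threw in this scenario.
        try {
            setStoragePolicy(15L, 10L); // exceeds the type quota
        } catch (QuotaByStorageTypeException e) {
            System.out.println("old clients won't expect: " + e.getMessage());
        }
    }
}
```

Since the new exception is still an IOException, clients catching IOException keep working; the incompatibility is only for callers that assumed this call could not fail on quota grounds.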

Secondly:
 For the admin calls, like setStoragePolicy and the rest, I don't think we need 
this, since setQuota also doesn't throw any exception when setting a quota less 
than what is already occupied. The admin should handle this and should be well 
aware of the repercussions.
 Such validations and restrictions are supposedly for the client calls. But 
this is just my opinion, let's wait for some more too. :)
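The setQuota semantics referred to above can be sketched like this (hypothetical helper names, not the actual FSDirectory code): setting a quota below the already-occupied space succeeds silently, and the violation only surfaces on the next consume-changing call.

```java
public class QuotaSketch {
    long used;   // space already consumed under this directory
    long quota;  // -1 means no quota set

    QuotaSketch(long used) {
        this.used = used;
        this.quota = -1L;
    }

    // Mirrors the behaviour described above: no check against current
    // usage, so a quota below 'used' is accepted without any exception.
    void setQuota(long newQuota) {
        this.quota = newQuota;
    }

    // Only a later consume-changing call (e.g. addBlock) fails the check.
    boolean canConsume(long extra) {
        return quota < 0 || used + extra <= quota;
    }
}
```

The admin who set the low quota is expected to understand that existing usage stays over quota and that new writes will be rejected.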

Maybe I need to check why the quota isn't checked when renaming a file into a 
different directory (if that is indeed so). If there is no reason for it, that 
seems like something to be done. Need to check once.

 

For SPS, maybe [~rakeshr] or [~umamaheswararao] can help, if available!

> The StorageType quota and consume in QuotaFeature is not handled when 
> addBlock, delete, rename, setStoragePolicy etc. 
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14633
>                 URL: https://issues.apache.org/jira/browse/HDFS-14633
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Jinglun
>            Assignee: Jinglun
>            Priority: Major
>
> The NameNode manages the global state of the cluster. We should always take 
> NameNode's records as the sole criterion because no matter what inconsistency 
> arises, the NameNode should finally make everything right based on its records. 
> Let's call it rule NSC(NameNode is the Sole Criterion). That means when we do 
> all quota related rpcs, we do the quota check according to NameNode's records 
> regardless of any inconsistent situation, such as the replicas not matching 
> the storage policy of the file, or the replica count not matching the 
> file's set replication.
>  The SPS work deals with the wrongly placed replicas. There is a thought 
> about putting off the consume update of the DirectoryQuota until all replicas 
> are re-placed by SPS. I can't agree with that, because if we do so we abandon 
> letting the NameNode's records be the sole criterion. The block 
> replication is a good example of the rule NSC. When we count the consume of a 
> file(CONTIGUOUS), we multiply the replication factor with the file's length, 
> no matter the blocks are under replicated or excessed. We should do the same 
> thing for the storage type quota.
>  Another concern is the change will let setStoragePolicy throw 
> QuotaByStorageTypeExceededException which it doesn't before. I don't think 
> it's a big problem since the setStoragePolicy already throws IOException. Or 
> we can wrap the QuotaByStorageTypeExceededException in an IOException, but I 
> won't recommend that because it's ugly.
>  To make storage type consume follow the rule NSC, we must change every rpc 
> related to consume changing, such as addBlock, delete, rename(especially 
> moving a file with storage policy inherited from its parent), 
> setStoragePolicy, setReplication etc.
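
The rule NSC in the quoted report reduces to a one-line computation (a sketch for a CONTIGUOUS file; the method name is made up): the charged consume depends only on the NameNode's records, never on the replicas actually on disk.

```java
public class ConsumeSketch {
    // Consume of a CONTIGUOUS file per the NameNode's records:
    // file length times the *set* replication factor. Under-replicated
    // or excess replicas on DataNodes do not change the charged amount.
    static long contiguousConsume(long fileLength, short replication) {
        return fileLength * replication;
    }
}
```

So a 128 MB file with replication 3 is charged 384 MB whether one, three, or five replicas currently exist; the proposal is to account the storage-type quota the same way.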



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
