[ https://issues.apache.org/jira/browse/HDFS-12207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brahma Reddy Battula updated HDFS-12207:
----------------------------------------
    Target Version/s: 3.4.0  (was: 3.3.0)

Bulk update: moved all 3.3.0 non-blocker issues to 3.4.0; please move an issue 
back if it is a blocker.

> A few DataXceiver#writeBlock cleanups related to optional storage IDs and 
> types
> -------------------------------------------------------------------------------
>
>                 Key: HDFS-12207
>                 URL: https://issues.apache.org/jira/browse/HDFS-12207
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 3.0.0-alpha4
>            Reporter: Andrew Wang
>            Assignee: Sean Mackrory
>            Priority: Major
>
> Here's the conversation that [~ehiggs] and I had on HDFS-12151 regarding some 
> improvements:
> bq. Should we use nst > 0 rather than targetStorageTypes.length > 0 (amended) 
> here for clarity?
> Yes.
> bq. Should the targetStorageTypes.length > 0 check really be nsi > 0? We 
> could elide it then, since it's already captured in the outside if.
> This does look redundant, since targetStorageIds.length will be either 0 or 
> equal to targetStorageTypes.length.
> bq. Finally, I don't understand why we need to add the targeted ID/type for 
> checkAccess. Each DN only needs to validate itself, yea? BTSM#checkAccess 
> indicates this in its javadoc, but it looks like we run through ourselves and 
> the targets each time:
> That seems like a good simplification. I think I had assumed that the BTI 
> and the requested types being checked should be the same (String - String, 
> uint64 - uint64), but I don't see a reason why they have to be. Chris 
> Douglas, what do you think?
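The length-check redundancy discussed above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual DataXceiver#writeBlock code: the class, method, and variable names are invented for the example. It assumes only the invariant stated in the conversation, namely that targetStorageIds.length is either 0 or equal to targetStorageTypes.length, so inside a branch already guarded by targetStorageIds.length > 0 an inner check on targetStorageTypes.length adds nothing.

```java
// Hypothetical sketch of the invariant discussed above; names are
// illustrative and do not come from the actual DataXceiver code.
public class StorageIdCheckSketch {

  // Invariant from the discussion: targetStorageIds.length is either 0
  // or equal to targetStorageTypes.length.
  static boolean invariantHolds(String[] ids, String[] types) {
    return ids.length == 0 || ids.length == types.length;
  }

  // Before the cleanup: an inner length check that is redundant
  // whenever the invariant holds.
  static boolean checkBefore(String[] ids, String[] types) {
    if (ids.length > 0) {
      if (types.length > 0) { // redundant given the invariant
        return true;
      }
    }
    return false;
  }

  // After the cleanup: the outer check alone is sufficient.
  static boolean checkAfter(String[] ids, String[] types) {
    return ids.length > 0;
  }

  public static void main(String[] args) {
    String[] none = {};
    String[] two = {"a", "b"};
    // Under the invariant, both versions agree on every legal input.
    System.out.println(checkBefore(none, none) == checkAfter(none, none));
    System.out.println(checkBefore(two, two) == checkAfter(two, two));
  }
}
```

On any input satisfying the invariant, the two versions return the same result, which is why the inner check can be elided.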



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
