[ https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416007#comment-16416007 ]
Rushabh S Shah commented on HDFS-13281:
---------------------------------------

This will be used in HDFS-12597, but it is not directly related. By definition, if I am reading through a {{/.reserved/raw}} path, the namenode will not return a {{FileEncryptionInfo}} object in {{LocatedBlocks}}, and the HDFS client will serve the raw bytes (i.e., the client will not decrypt them). The same rule should apply on the write path. On the write path the namenode does not return {{FileEncryptionInfo}} in {{HdfsFileStatus}}, but internally it still creates a {{FileEncryptionInfo}} and attaches it to the file. So if I use distcp to copy files via {{/.reserved/raw}} paths, and both paths are in an EZ, the namenode will still generate a new edek (different from the source's edek) while creating the destination file. As its last step, distcp copies all the raw xattrs from source to destination, and that overwrites the old (wrong) edek.

bq. I'm a bit nervous on supportability,

{{/.reserved/raw}} is a special path prefix; one should be very careful to use it and not use it irresponsibly.

> Namenode#createFile should be /.reserved/raw/ aware.
> ----------------------------------------------------
>
>                 Key: HDFS-13281
>                 URL: https://issues.apache.org/jira/browse/HDFS-13281
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: encryption
>    Affects Versions: 2.8.3
>            Reporter: Rushabh S Shah
>            Assignee: Rushabh S Shah
>            Priority: Critical
>         Attachments: HDFS-13281.001.patch
>
>
> If I want to write to /.reserved/raw/<dir>, and if that directory happens to
> be in an EZ, then the namenode *should not* create an edek and should just copy
> the raw bytes from the source.
> Namenode#startFileInt should be /.reserved/raw/ aware.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
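
[Editor's sketch] The rule the comment argues for — the namenode should skip edek generation when a file is created through a {{/.reserved/raw}} path, even if the logical path lies inside an encryption zone — can be sketched as a toy model. None of the names below correspond to actual HDFS internals; they only illustrate the proposed decision logic in {{Namenode#startFileInt}}.

```python
# Toy model of the behavior proposed in HDFS-13281.
# Illustrative only: not actual HDFS code or APIs.

RAW_PREFIX = "/.reserved/raw"

def strip_raw(path):
    """Map a /.reserved/raw path back to its logical namespace path."""
    return path[len(RAW_PREFIX):] if path.startswith(RAW_PREFIX) else path

def create_file(path, ez_roots, generate_edek):
    """Return the edek to attach to a newly created file, or None.

    ez_roots: logical paths that are encryption-zone roots.
    generate_edek: callable standing in for KMS edek generation.
    """
    logical = strip_raw(path)
    in_ez = any(logical.startswith(root) for root in ez_roots)
    is_raw = path.startswith(RAW_PREFIX)
    if in_ez and not is_raw:
        # Normal write inside an EZ: generate a fresh edek as today.
        return generate_edek()
    # Raw write: no edek; the caller (e.g. distcp) supplies the
    # raw.* xattrs, including the source's edek, afterwards.
    return None

# A raw-path create inside an EZ gets no edek; a normal create does.
print(create_file("/.reserved/raw/zone1/f", ["/zone1"], lambda: "edek-123"))
print(create_file("/zone1/f", ["/zone1"], lambda: "edek-123"))
```

With this behavior, the distcp raw-xattr copy at the end of the job sets the only edek the destination file ever has, instead of overwriting a throwaway one generated by the namenode.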