[ https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478257#comment-16478257 ]
Rushabh S Shah commented on HDFS-13281:
---------------------------------------

Thanks [~xiaochen] for reviewing the latest patch.

bq. So here should we write 'encryptedBytes' to a reserved raw, and verify when reading it from p2,

The whole point of this jira is that the namenode shouldn't create an EDEK when the client writes to a {{/.reserved/raw}} path. It's the client's responsibility to call {{setXAttr}}. Also note that the {{setXAttr}} in the test is out of the scope of this jira. So if we somehow write some encrypted bytes, then how would we decrypt them in the absence of an EDEK?

{code}
try {
  fs.getXAttr(reservedRawP2Path,
      HdfsServerConstants.CRYPTO_XATTR_FILE_ENCRYPTION_INFO);
  fail("getXAttr should have thrown an exception");
} catch (IOException ioe) {
  assertExceptionContains("At least one of the attributes provided was "
      + "not found.", ioe);
}
{code}

IMHO the above chunk of code is all that is required to test this jira, since it verifies that there is no {{CRYPTO_XATTR_FILE_ENCRYPTION_INFO}} xattr on that path.

> Namenode#createFile should be /.reserved/raw/ aware.
> ----------------------------------------------------
>
>                 Key: HDFS-13281
>                 URL: https://issues.apache.org/jira/browse/HDFS-13281
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: encryption
>    Affects Versions: 2.8.3
>            Reporter: Rushabh S Shah
>            Assignee: Rushabh S Shah
>          Priority: Critical
>         Attachments: HDFS-13281.001.patch, HDFS-13281.002.branch-2.patch, HDFS-13281.002.patch, HDFS-13281.003.patch
>
>
> If I want to write to /.reserved/raw/<dir> and if that directory happens to be in an EZ, then the namenode *should not* create an EDEK and should just copy the raw bytes from the source.
> Namenode#startFileInt should be /.reserved/raw/ aware.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org