[ https://issues.apache.org/jira/browse/HDFS-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17003325#comment-17003325 ]
Xiaoqiao He commented on HDFS-15079:
------------------------------------

Thanks [~ayushtkn] for your detailed comments. RetryCache may be one solution; however, we would have to pass the clientId and callId to the Router (this could require a protocol change, I am not very sure and will check it later), reset these two parameters, and then forward the RPC request to the NameNode. IIRC, we have discussed this solution for a long time in the context of data locality, but there is no conclusion up to now. On the other hand, even if we could do that, I am not sure whether there is a security issue: someone could use a fake clientId + callId and pollute other clients' subsequent RPC requests. (A rough sketch of this forwarding idea follows at the end of this message.)

> RBF: Client maybe get an unexpected result with network anomaly
> ----------------------------------------------------------------
>
>                 Key: HDFS-15079
>                 URL: https://issues.apache.org/jira/browse/HDFS-15079
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rbf
>    Affects Versions: 3.3.0
>            Reporter: Fei Hui
>            Priority: Critical
>         Attachments: UnexpectedOverWriteUT.patch
>
>
> I find there is a critical problem in RBF. HDFS-15078 can resolve it in some
> scenarios, but I have no idea about an overall resolution.
> The problem is as follows:
> 1. A client using RBF (Routers r0 and r1) creates an HDFS file via r0; the
> call gets an exception and fails over to r1, but r0 has already sent the
> create RPC to the NameNode (1st create).
> 2. The client creates the HDFS file via r1 (2nd create).
> 3. The client writes the HDFS file and finally closes it (3rd close).
> The NameNode may receive the RPCs in the following order:
> 2nd create
> 3rd close
> 1st create
> Since overwrite is true by default, the delayed 1st create would turn the
> file that has just been written into an empty file. This is a critical
> problem.
> We have encountered this problem: many Hive and Spark jobs run on our
> cluster, and it occurs from time to time.
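To make the failure sequence above concrete, here is a minimal, self-contained sketch (not the attached UnexpectedOverWriteUT.patch) that replays the three operations in the reordered sequence against a MiniDFSCluster. It leaves the Routers out entirely and simply issues the delayed create last; the path and data are made up for illustration:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class DelayedCreateOverwriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();
      Path file = new Path("/user/test/part-00000"); // made-up path

      // 2nd create: the retry through r1 reaches the NameNode first.
      FSDataOutputStream out = fs.create(file, true /* overwrite, the default */);
      out.writeBytes("job output");
      out.close(); // 3rd close: the file is complete and holds data
      System.out.println("after close: len=" + fs.getFileStatus(file).getLen());

      // 1st create: the original RPC, delayed on r0's path, arrives last.
      // With overwrite=true it re-creates the file and truncates it to zero.
      fs.create(file, true).close();
      System.out.println("after delayed create: len="
          + fs.getFileStatus(file).getLen()); // 0 -> the written data is gone
    } finally {
      cluster.shutdown();
    }
  }
}
{code}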
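And here is a rough sketch of the RetryCache idea from my comment above. None of the types below are existing Hadoop APIs; CallerIdentity, FakeRetryCache and forward() are stand-ins that only illustrate how forwarding the original (clientId, callId) would let the NameNode deduplicate the delayed create, and where the forged-identity concern comes in:

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class RetryCacheForwardingSketch {

  /** The identity the Router would need to carry from the original request. */
  static final class CallerIdentity {
    final byte[] clientId;
    final int callId;
    CallerIdentity(byte[] clientId, int callId) {
      this.clientId = clientId;
      this.callId = callId;
    }
    String key() {
      return Arrays.toString(clientId) + "#" + callId;
    }
  }

  /** Stand-in for the NameNode side: a retry cache keyed on (clientId, callId). */
  static final class FakeRetryCache {
    private final Map<String, String> cache = new HashMap<>();
    String createFile(CallerIdentity id, String src) {
      // A retried create carrying the same identity gets the cached result
      // back instead of being executed again (which would truncate the file).
      return cache.computeIfAbsent(id.key(), k -> "created:" + src);
    }
  }

  /** Stand-in for the Router: forward the caller's identity, not the Router's own. */
  static String forward(FakeRetryCache nn, CallerIdentity original, String src) {
    // Security question from the comment above: a client sending a forged
    // (clientId, callId) pair could collide with another client's cache entry.
    return nn.createFile(original, src);
  }

  public static void main(String[] args) {
    FakeRetryCache nn = new FakeRetryCache();
    CallerIdentity client = new CallerIdentity(new byte[] {1, 2}, 42);

    // The delayed 1st create (via r0) and the retry (via r1) share one
    // identity, so the NameNode executes the create only once.
    String viaR1 = forward(nn, client, "/user/test/part-00000");
    String viaR0 = forward(nn, client, "/user/test/part-00000"); // delayed duplicate
    System.out.println(viaR1.equals(viaR0)); // true -> deduplicated
  }
}
{code}

If the Router keeps forwarding its own identity instead, the delayed 1st create and the retry look like two independent requests to the NameNode's RetryCache, and the truncation shown in the first sketch still happens.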