[jira] [Work logged] (HDFS-16105) Edit log corruption due to mismatch between fileId and path

2021-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16105?focusedWorklogId=617423&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617423
 ]

ASF GitHub Bot logged work on HDFS-16105:
-

Author: ASF GitHub Bot
Created on: 01/Jul/21 05:54
Start Date: 01/Jul/21 05:54
Worklog Time Spent: 10m 
  Work Description: ferhui opened a new pull request #3161:
URL: https://github.com/apache/hadoop/pull/3161


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 617423)
Remaining Estimate: 0h
Time Spent: 10m

> Edit log corruption due to mismatch between fileId and path
> ---
>
> Key: HDFS-16105
> URL: https://issues.apache.org/jira/browse/HDFS-16105
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Hui Fei
>Assignee: Hui Fei
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While stress testing HDFS through FUSE, the Standby NameNode crashed.
> The log is as follows:
> {quote}
> 2021-06-25 17:13:02,953 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception 
> on operation AddBlockOp [path=/xxx/fiotest_write.354.46, 
> penultimateBlock=xxx, lastBlock=xxx, numOfBytes=0}, RpcClientId=, 
> RpcCallId=-2]
> java.io.FileNotFoundException: File /xxx/fiotest_write.354.46 does not exist.
> {quote}
> The following steps reproduce it (illegal writes):
> 1. Create file A (fileId X); its 1st block is being written.
> 2. Rename file A to file B (still fileId X).
> 3. Continue writing to file A through the same output stream, so a 2nd block
> needs to be allocated.
> 4. The standby NameNode loads the above edits and crashes.
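The four steps above can be sketched with the standard HDFS client API. This is not code from the thread; it is a minimal illustrative sketch assuming a reachable HDFS cluster on fs.defaultFS, and the paths /tmp/fileA and /tmp/fileB are hypothetical. The key point is that the open output stream survives the rename, so the client's next addBlock call pairs the original fileId with the now-stale path.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameWhileOpenSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path a = new Path("/tmp/fileA");   // hypothetical path
        Path b = new Path("/tmp/fileB");   // hypothetical path

        // Step 1: create file A and start writing its first block.
        FSDataOutputStream out = fs.create(a);
        out.write(new byte[1024]);
        out.hflush();

        // Step 2: rename A to B; the INode keeps the same fileId.
        fs.rename(a, b);

        // Step 3: keep writing through the old stream. Once the first
        // block fills, the client asks the NameNode for a second block,
        // and the resulting AddBlockOp can carry the stale path /tmp/fileA
        // together with the original fileId.
        byte[] chunk = new byte[1 << 20];
        for (int i = 0; i < 200; i++) {
            out.write(chunk);
        }
        out.close();

        // Step 4 happens on the standby NameNode: replaying these edits
        // it looks up /tmp/fileA, hits FileNotFoundException, and aborts.
    }
}
```

Running this requires a live HDFS cluster (or a MiniDFSCluster in a test), so it is a sketch of the scenario rather than a self-contained reproducer.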



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16105) Edit log corruption due to mismatch between fileId and path

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16105?focusedWorklogId=617606&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617606
 ]

ASF GitHub Bot logged work on HDFS-16105:
-

Author: ASF GitHub Bot
Created on: 01/Jul/21 13:26
Start Date: 01/Jul/21 13:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3161:
URL: https://github.com/apache/hadoop/pull/3161#issuecomment-872246146


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  7s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 10s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 356m 16s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3161/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 450m 35s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestINodeFile |
   |   | hadoop.hdfs.TestFileCreation |
   |   | hadoop.hdfs.server.namenode.TestDeleteRace |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractAppend |
   |   | hadoop.hdfs.TestRenameWhileOpen |
   |   | hadoop.hdfs.TestLease |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
   |   | hadoop.hdfs.TestReservedRawPaths |
   |   | hadoop.hdfs.server.namenode.TestHDFSConcat |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.TestFileContextXAttr |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.TestEncryptionZones |
   |   | hadoop.hdfs.TestEncryptionZonesWithKMS |
   |   | hadoop.hdfs.web.TestWebHDFSXAttr |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestFileAppend3 |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop

[jira] [Work logged] (HDFS-16105) Edit log corruption due to mismatch between fileId and path

2021-07-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16105?focusedWorklogId=617935&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617935
 ]

ASF GitHub Bot logged work on HDFS-16105:
-

Author: ASF GitHub Bot
Created on: 02/Jul/21 01:06
Start Date: 02/Jul/21 01:06
Worklog Time Spent: 10m 
  Work Description: ferhui closed pull request #3161:
URL: https://github.com/apache/hadoop/pull/3161


   




Issue Time Tracking
---

Worklog Id: (was: 617935)
Time Spent: 0.5h  (was: 20m)

> Edit log corruption due to mismatch between fileId and path
> ---
>
> Key: HDFS-16105
> URL: https://issues.apache.org/jira/browse/HDFS-16105
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Hui Fei
>Assignee: Hui Fei
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> While stress testing HDFS through FUSE, the Standby NameNode crashed.
> The log is as follows:
> {quote}
> 2021-06-25 17:13:02,953 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception 
> on operation AddBlockOp [path=/xxx/fiotest_write.354.46, 
> penultimateBlock=xxx, lastBlock=xxx, numOfBytes=0}, RpcClientId=, 
> RpcCallId=-2]
> java.io.FileNotFoundException: File /xxx/fiotest_write.354.46 does not exist.
> {quote}
> The following steps reproduce it (illegal writes):
> 1. Create file A (fileId X); its 1st block is being written.
> 2. Rename file A to file B (still fileId X).
> 3. Continue writing to file A through the same output stream, so a 2nd block
> needs to be allocated.
> 4. The standby NameNode loads the above edits and crashes.


