[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=617425&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617425 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 01/Jul/21 06:23 Start Date: 01/Jul/21 06:23 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#issuecomment-871958420 > The checkstyle warnings are old, unrelated. Merging the PR. Thanks @jojochuang for the merge. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 617425) Time Spent: 3h 20m (was: 3h 10m) > Add volume information to datanode log for tracing > -- > > Key: HDFS-16086 > URL: https://issues.apache.org/jira/browse/HDFS-16086 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: CreatingRbw.jpg, Received.jpg > > Time Spent: 3h 20m > Remaining Estimate: 0h > > To keep track of the block in volume, we can add the volume information to > the datanode log. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=617409&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617409 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 01/Jul/21 05:06 Start Date: 01/Jul/21 05:06 Worklog Time Spent: 10m Work Description: jojochuang commented on pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#issuecomment-871923263 The checkstyle warnings are old, unrelated. Merging the PR. Issue Time Tracking --- Worklog Id: (was: 617409) Time Spent: 3h (was: 2h 50m)
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=617410&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-617410 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 01/Jul/21 05:07 Start Date: 01/Jul/21 05:07 Worklog Time Spent: 10m Work Description: jojochuang merged pull request #3136: URL: https://github.com/apache/hadoop/pull/3136 Issue Time Tracking --- Worklog Id: (was: 617410) Time Spent: 3h 10m (was: 3h)
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616994&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616994 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 30/Jun/21 10:50 Start Date: 30/Jun/21 10:50 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#issuecomment-871296935 Hi @ayushtkn , could you please take a quick look at this. Thanks. Issue Time Tracking --- Worklog Id: (was: 616994) Time Spent: 2h 50m (was: 2h 40m)
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616947 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 30/Jun/21 07:16 Start Date: 30/Jun/21 07:16 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#issuecomment-871157070 Hi @jojochuang , could you please take a look again? Thanks. Issue Time Tracking --- Worklog Id: (was: 616947) Time Spent: 2h 40m (was: 2.5h)
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616489&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616489 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 14:06 Start Date: 29/Jun/21 14:06 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#issuecomment-870507562 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 31s | | trunk passed | | +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 25s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 15s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 58s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 1m 18s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/3/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 36 new + 467 unchanged - 36 fixed = 503 total (was 503) | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 54s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 406 unchanged - 0 fixed = 407 total (was 406) | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 51s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | -1 :x: | unit | 347m 56s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. | | | | 440m 13s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | Subsystem | Report/No
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616429&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616429 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:57 Start Date: 29/Jun/21 13:57 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#issuecomment-870513731 These failed UTs work fine locally. Issue Time Tracking --- Worklog Id: (was: 616429) Time Spent: 2h 20m (was: 2h 10m)
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616403&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616403 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:54 Start Date: 29/Jun/21 13:54 Worklog Time Spent: 10m Work Description: jojochuang commented on a change in pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#discussion_r660246410 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/Replica.java ## @@ -19,49 +19,56 @@ import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState; +import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; /** * This represents block replicas which are stored in DataNode. */ @InterfaceAudience.Private public interface Replica { /** Get the block ID */ - public long getBlockId(); + long getBlockId(); /** Get the generation stamp */ - public long getGenerationStamp(); + long getGenerationStamp(); /** * Get the replica state * @return the replica state */ - public ReplicaState getState(); + ReplicaState getState(); /** * Get the number of bytes received * @return the number of bytes that have been received */ - public long getNumBytes(); + long getNumBytes(); /** * Get the number of bytes that have written to disk * @return the number of bytes that have written to disk */ - public long getBytesOnDisk(); + long getBytesOnDisk(); /** * Get the number of bytes that are visible to readers * @return the number of bytes that are visible to readers */ - public long getVisibleLength(); + long getVisibleLength(); Review comment: please do not change these interface methods. These changes are not required and makes backport harder. 
## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java ## @@ -587,7 +587,7 @@ public void readBlock(final ExtendedBlock block, final String clientTraceFmt = clientName.length() > 0 && ClientTraceLog.isInfoEnabled() ? String.format(DN_CLIENTTRACE_FORMAT, localAddress, remoteAddress, -"%d", "HDFS_READ", clientName, "%d", +"", "%d", "HDFS_READ", clientName, "%d", Review comment: looks like redundant change? ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java ## @@ -1631,6 +1633,7 @@ public ReplicaHandler createRbw( if (ref == null) { ref = volumes.getNextVolume(storageType, storageId, b.getNumBytes()); } + LOG.info("Creating Rbw, block: {} on volume: {}", b, ref.getVolume()); Review comment: is this really necessary? IMO logging one message for every rbw is just too much. ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java ## @@ -929,7 +929,7 @@ public void writeBlock(final ExtendedBlock block, if (isDatanode || stage == BlockConstructionStage.PIPELINE_CLOSE_RECOVERY) { datanode.closeBlock(block, null, storageUuid, isOnTransientStorage); -LOG.info("Received {} src: {} dest: {} of size {}", +LOG.info("Received {} src: {} dest: {} volume: {} of size {}", Review comment: missing the parameter for volume. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
Issue Time Tracking --- Worklog Id: (was: 616403) Time Spent: 2h 10m (was: 2h)
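A side note on the first review comment above: in Java, interface methods are implicitly `public abstract`, so dropping the `public` keyword (as checkstyle suggests) changes nothing semantically; the objection is purely about diff noise making backports harder. A minimal sketch with hypothetical interfaces (not the actual `Replica`) demonstrating the equivalence:

```java
import java.lang.reflect.Modifier;

// Two equivalent declarations: interface methods are implicitly public
// and abstract, so the explicit modifier is redundant.
interface WithModifier {
  public long getBlockId();
}

interface WithoutModifier {
  long getBlockId();
}

public class InterfaceModifierDemo {
  // Helper so callers don't have to handle the checked reflection exception.
  static boolean isPublicMethod(Class<?> iface, String name) {
    try {
      return Modifier.isPublic(iface.getMethod(name).getModifiers());
    } catch (NoSuchMethodException e) {
      return false;
    }
  }

  public static void main(String[] args) throws Exception {
    int a = WithModifier.class.getMethod("getBlockId").getModifiers();
    int b = WithoutModifier.class.getMethod("getBlockId").getModifiers();
    // Both report identical modifiers: public | abstract.
    System.out.println(Modifier.isPublic(a)); // true
    System.out.println(Modifier.isPublic(b)); // true
    System.out.println(a == b);               // true
  }
}
```

Since the bytecode is identical either way, keeping the original modifiers (as the reviewer asked) costs nothing and keeps the patch minimal.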
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616395&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616395 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:53 Start Date: 29/Jun/21 13:53 Worklog Time Spent: 10m Work Description: tomscut commented on a change in pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#discussion_r660249250 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/Replica.java ## @@ -19,49 +19,56 @@ import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState; +import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; /** * This represents block replicas which are stored in DataNode. */ @InterfaceAudience.Private public interface Replica { /** Get the block ID */ - public long getBlockId(); + long getBlockId(); /** Get the generation stamp */ - public long getGenerationStamp(); + long getGenerationStamp(); /** * Get the replica state * @return the replica state */ - public ReplicaState getState(); + ReplicaState getState(); /** * Get the number of bytes received * @return the number of bytes that have been received */ - public long getNumBytes(); + long getNumBytes(); /** * Get the number of bytes that have written to disk * @return the number of bytes that have written to disk */ - public long getBytesOnDisk(); + long getBytesOnDisk(); /** * Get the number of bytes that are visible to readers * @return the number of bytes that are visible to readers */ - public long getVisibleLength(); + long getVisibleLength(); Review comment: Thanks @jojochuang for your review. This change is to fix checkstyle. I will restore it. 
## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java ## @@ -587,7 +587,7 @@ public void readBlock(final ExtendedBlock block, final String clientTraceFmt = clientName.length() > 0 && ClientTraceLog.isInfoEnabled() ? String.format(DN_CLIENTTRACE_FORMAT, localAddress, remoteAddress, -"%d", "HDFS_READ", clientName, "%d", +"", "%d", "HDFS_READ", clientName, "%d", Review comment: Because volume has been added to DN_CLIENTTRACE_FORMAT, some adaptations have been made. ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java ## @@ -1631,6 +1633,7 @@ public ReplicaHandler createRbw( if (ref == null) { ref = volumes.getNextVolume(storageType, storageId, b.getNumBytes()); } + LOG.info("Creating Rbw, block: {} on volume: {}", b, ref.getVolume()); Review comment: > is this really necessary? IMO logging one message for every rbw is just too much. I will change this to DEBUG level, do you think it is OK? ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java ## @@ -929,7 +929,7 @@ public void writeBlock(final ExtendedBlock block, if (isDatanode || stage == BlockConstructionStage.PIPELINE_CLOSE_RECOVERY) { datanode.closeBlock(block, null, storageUuid, isOnTransientStorage); -LOG.info("Received {} src: {} dest: {} of size {}", +LOG.info("Received {} src: {} dest: {} volume: {} of size {}", Review comment: Thanks for pointing this, I fixed it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
Issue Time Tracking --- Worklog Id: (was: 616395) Time Spent: 2h (was: 1h 50m)
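The bug jojochuang caught ("missing the parameter for volume") is easy to miss because SLF4J-style `{}` logging does not throw when placeholders and arguments disagree: later arguments shift into the wrong slots and leftover placeholders render literally. SLF4J itself is not assumed on the classpath here, so this sketch mimics its substitution with a small hypothetical helper; the block and address strings are made up:

```java
public class PlaceholderDemo {
  // Minimal stand-in for SLF4J-style {} substitution: each {} is replaced
  // by the next argument in order; leftover placeholders stay verbatim.
  static String format(String pattern, Object... args) {
    StringBuilder out = new StringBuilder();
    int from = 0, arg = 0;
    int at;
    while ((at = pattern.indexOf("{}", from)) >= 0 && arg < args.length) {
      out.append(pattern, from, at).append(args[arg++]);
      from = at + 2;
    }
    return out.append(pattern.substring(from)).toString();
  }

  public static void main(String[] args) {
    // Five placeholders but four arguments: the size value silently lands
    // in the volume slot, and "of size" is left with a literal {}.
    String broken = format("Received {} src: {} dest: {} volume: {} of size {}",
        "blk_1", "/10.0.0.1:50010", "/10.0.0.2:50010", 1024L);
    System.out.println(broken);
    // → Received blk_1 src: /10.0.0.1:50010 dest: /10.0.0.2:50010 volume: 1024 of size {}
  }
}
```

No exception is raised anywhere, which is why this class of mistake survives compilation and only shows up as a garbled log line in production.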
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=615975&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615975 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 03:59 Start Date: 29/Jun/21 03:59 Worklog Time Spent: 10m Work Description: tomscut commented on a change in pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#discussion_r660262918 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java ## @@ -929,7 +929,7 @@ public void writeBlock(final ExtendedBlock block, if (isDatanode || stage == BlockConstructionStage.PIPELINE_CLOSE_RECOVERY) { datanode.closeBlock(block, null, storageUuid, isOnTransientStorage); -LOG.info("Received {} src: {} dest: {} of size {}", +LOG.info("Received {} src: {} dest: {} volume: {} of size {}", Review comment: Thanks for pointing this, I fixed it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 615975) Time Spent: 1.5h (was: 1h 20m) > Add volume information to datanode log for tracing > -- > > Key: HDFS-16086 > URL: https://issues.apache.org/jira/browse/HDFS-16086 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: CreatingRbw.jpg, Received.jpg > > Time Spent: 1.5h > Remaining Estimate: 0h > > To keep track of the block in volume, we can add the volume information to > the datanode log. 
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=615970&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615970 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 03:21 Start Date: 29/Jun/21 03:21 Worklog Time Spent: 10m Work Description: tomscut commented on a change in pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#discussion_r660251541 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java ## @@ -1631,6 +1633,7 @@ public ReplicaHandler createRbw( if (ref == null) { ref = volumes.getNextVolume(storageType, storageId, b.getNumBytes()); } + LOG.info("Creating Rbw, block: {} on volume: {}", b, ref.getVolume()); Review comment: > is this really necessary? IMO logging one message for every rbw is just too much. I will change this to DEBUG level, do you think it is OK? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 615970) Time Spent: 1h 20m (was: 1h 10m) > Add volume information to datanode log for tracing > -- > > Key: HDFS-16086 > URL: https://issues.apache.org/jira/browse/HDFS-16086 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: CreatingRbw.jpg, Received.jpg > > Time Spent: 1h 20m > Remaining Estimate: 0h > > To keep track of the block in volume, we can add the volume information to > the datanode log. 
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=615969&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615969 ]

ASF GitHub Bot logged work on HDFS-16086:
Author: ASF GitHub Bot
Created on: 29/Jun/21 03:20
Start Date: 29/Jun/21 03:20
Worklog Time Spent: 10m

Work Description: tomscut commented on a change in pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136#discussion_r660251192

File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java

@@ -587,7 +587,7 @@ public void readBlock(final ExtendedBlock block,
 final String clientTraceFmt =
   clientName.length() > 0 && ClientTraceLog.isInfoEnabled() ?
     String.format(DN_CLIENTTRACE_FORMAT, localAddress, remoteAddress,
-        "%d", "HDFS_READ", clientName, "%d",
+        "", "%d", "HDFS_READ", clientName, "%d",

Review comment:
Because the volume field has been added to DN_CLIENTTRACE_FORMAT, some adaptations had to be made here.

Issue Time Tracking: Worklog Id: (was: 615969) Time Spent: 1h 10m (was: 1h)
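The two-stage use of String.format in this hunk is easy to miss: the trace format is partially filled up front, and a literal "%d" is passed as an argument so it survives as a placeholder for values (such as byte counts) that are only known after the transfer. A sketch with a simplified, hypothetical version of DN_CLIENTTRACE_FORMAT; the real Hadoop constant has more fields than shown here:

```java
public class ClientTraceFormatSketch {
    // Simplified, hypothetical stand-in for Hadoop's DN_CLIENTTRACE_FORMAT
    // after the volume slot was added.
    static final String DN_CLIENTTRACE_FORMAT =
        "src: %s, dest: %s, volume: %s, bytes: %s, op: %s";

    static String buildTrace(String src, String dest, long bytes) {
        // Stage 1: fill in what is known before the transfer starts.
        // Passing "%d" into a %s slot leaves a literal %d placeholder in
        // the result; the new volume slot gets "" here, since in the real
        // readBlock the volume is supplied later.
        String clientTraceFmt = String.format(DN_CLIENTTRACE_FORMAT,
            src, dest, "", "%d", "HDFS_READ");
        // Stage 2: once the read completes, the byte count is known and
        // fills the surviving %d placeholder.
        return String.format(clientTraceFmt, bytes);
    }

    public static void main(String[] args) {
        System.out.println(buildTrace("/10.0.0.1:50010", "/10.0.0.2:39000", 4096L));
    }
}
```

This explains why adding one field to the format constant forces an extra argument at every pre-format call site, which is the "adaptation" the comment refers to.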
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=615968&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615968 ]

ASF GitHub Bot logged work on HDFS-16086:
Author: ASF GitHub Bot
Created on: 29/Jun/21 03:13
Start Date: 29/Jun/21 03:13
Worklog Time Spent: 10m

Work Description: tomscut commented on a change in pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136#discussion_r660249250

File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/Replica.java

@@ -19,49 +19,56 @@
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

 /**
  * This represents block replicas which are stored in DataNode.
  */
 @InterfaceAudience.Private
 public interface Replica {
   /** Get the block ID */
-  public long getBlockId();
+  long getBlockId();

   /** Get the generation stamp */
-  public long getGenerationStamp();
+  long getGenerationStamp();

   /**
    * Get the replica state
    * @return the replica state
    */
-  public ReplicaState getState();
+  ReplicaState getState();

   /**
    * Get the number of bytes received
    * @return the number of bytes that have been received
    */
-  public long getNumBytes();
+  long getNumBytes();

   /**
    * Get the number of bytes that have written to disk
    * @return the number of bytes that have written to disk
    */
-  public long getBytesOnDisk();
+  long getBytesOnDisk();

   /**
    * Get the number of bytes that are visible to readers
    * @return the number of bytes that are visible to readers
    */
-  public long getVisibleLength();
+  long getVisibleLength();

Review comment:
Thanks @jojochuang for your review. This change is to fix checkstyle. I will restore it.

Issue Time Tracking: Worklog Id: (was: 615968) Time Spent: 1h (was: 50m)
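The checkstyle rule behind those edits is RedundantModifier: members of a Java interface are implicitly public, so writing `public` on them is legal but redundant. A small self-contained illustration; the `Replica` interface below is a hypothetical two-method stand-in, not the HDFS one:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class InterfaceModifierSketch {
    // Hypothetical stand-in for the HDFS Replica interface.
    interface Replica {
        long getBlockId();                 // no "public" needed
        public long getGenerationStamp();  // legal, but redundant
    }

    // Returns whether the named method of Replica is public (false if absent).
    static boolean isPublic(String name) {
        try {
            Method m = Replica.class.getMethod(name);
            return Modifier.isPublic(m.getModifiers());
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Both methods end up public, with or without the keyword.
        System.out.println(isPublic("getBlockId") && isPublic("getGenerationStamp"));
    }
}
```

This is also why the reviewer's objection is about churn rather than correctness: dropping the keyword changes nothing semantically, but it touches lines that backports then have to reconcile.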
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=615967&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615967 ]

ASF GitHub Bot logged work on HDFS-16086:
Author: ASF GitHub Bot
Created on: 29/Jun/21 03:07
Start Date: 29/Jun/21 03:07
Worklog Time Spent: 10m

Work Description: jojochuang commented on a change in pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136#discussion_r660246410

File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/Replica.java
(on the hunk removing the redundant public modifiers, quoted in full in tomscut's reply above)

Review comment:
Please do not change these interface methods. These changes are not required and make backports harder.

File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java

@@ -587,7 +587,7 @@ public void readBlock(final ExtendedBlock block,
-        "%d", "HDFS_READ", clientName, "%d",
+        "", "%d", "HDFS_READ", clientName, "%d",

Review comment:
Looks like a redundant change?

File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java

@@ -1631,6 +1633,7 @@ public ReplicaHandler createRbw(
+    LOG.info("Creating Rbw, block: {} on volume: {}", b, ref.getVolume());

Review comment:
Is this really necessary? IMO logging one message for every rbw is just too much.

File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java

@@ -929,7 +929,7 @@ public void writeBlock(final ExtendedBlock block,
-    LOG.info("Received {} src: {} dest: {} of size {}",
+    LOG.info("Received {} src: {} dest: {} volume: {} of size {}",

Review comment:
Missing the parameter for volume.

Issue Time Tracking: Worklog Id: (was: 615967) Time Spent: 50m (was: 40m)
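The last comment flags a classic SLF4J pitfall: a "{}" placeholder was added to the message but no matching argument, so the remaining arguments shift into the wrong slots and the final placeholder prints literally. A sketch using a minimal, hypothetical reimplementation of "{}" substitution (standing in for SLF4J, which is not assumed to be on the classpath here):

```java
public class PlaceholderCountSketch {
    // Minimal SLF4J-style "{}" substitution: placeholders are filled left
    // to right; any placeholder without an argument is left in the output.
    static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIdx = 0;
        int i = 0;
        while (i < pattern.length()) {
            if (i + 1 < pattern.length() && pattern.charAt(i) == '{'
                    && pattern.charAt(i + 1) == '}' && argIdx < args.length) {
                out.append(args[argIdx++]);
                i += 2;
            } else {
                out.append(pattern.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Five placeholders, four arguments: the size value lands in the
        // "volume" slot and the trailing "{}" is printed literally, which
        // is exactly the bug the review comment points out.
        System.out.println(format(
            "Received {} src: {} dest: {} volume: {} of size {}",
            "blk_1", "/10.0.0.1:50010", "/10.0.0.2:9866", 128));
    }
}
```

SLF4J does not throw on an argument-count mismatch, which is why this kind of bug survives compilation and only shows up in the rendered log line.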
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=614821&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614821 ]

ASF GitHub Bot logged work on HDFS-16086:
Author: ASF GitHub Bot
Created on: 25/Jun/21 03:13
Start Date: 25/Jun/21 03:13
Worklog Time Spent: 10m

Work Description: tomscut commented on pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136#issuecomment-868171074

These UTs work fine locally and are unrelated to this change. Hi @tasanuma @aajisaka @jojochuang @Hexiaoqiao, could you please review the code when you have time? Thanks.

Issue Time Tracking: Worklog Id: (was: 614821) Time Spent: 40m (was: 0.5h)
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=614664&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614664 ]

ASF GitHub Bot logged work on HDFS-16086:
Author: ASF GitHub Bot
Created on: 24/Jun/21 18:42
Start Date: 24/Jun/21 18:42
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136#issuecomment-867869816

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 48s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 31m 41s | | trunk passed |
| +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 1s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 23s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 24s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 13s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 53s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 1m 17s | | the patch passed |
| +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| -1 :x: | javac | 1m 19s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/2/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 36 new + 467 unchanged - 36 fixed = 503 total (was 503) |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 55s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 398 unchanged - 8 fixed = 398 total (was 406) |
| +1 :green_heart: | mvnsite | 1m 15s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 47s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 341m 37s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. |
| | | | 431m 48s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3136 |
| Optional Tes
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=614379&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614379 ]

ASF GitHub Bot logged work on HDFS-16086:
Author: ASF GitHub Bot
Created on: 24/Jun/21 08:43
Start Date: 24/Jun/21 08:43
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136#issuecomment-867455179

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 47s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 32m 2s | | trunk passed |
| +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 2s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 22s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 25s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 16s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 30s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 1m 16s | | the patch passed |
| +1 :green_heart: | compile | 1m 21s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| -1 :x: | javac | 1m 21s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/1/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 36 new + 467 unchanged - 36 fixed = 503 total (was 503) |
| +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 12s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 55s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 406 unchanged - 0 fixed = 408 total (was 406) |
| +1 :green_heart: | mvnsite | 1m 16s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 23s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 52s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 343m 48s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. |
| | | | 434m 20s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/1/artifact/out/Dockerfile
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=614300&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614300 ]

ASF GitHub Bot logged work on HDFS-16086:
Author: ASF GitHub Bot
Created on: 24/Jun/21 01:27
Start Date: 24/Jun/21 01:27
Worklog Time Spent: 10m

Work Description: tomscut opened a new pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136

JIRA: [HDFS-16086](https://issues.apache.org/jira/browse/HDFS-16086)
To keep track of the block in volume, we can add the volume information to the datanode log.

Issue Time Tracking: Worklog Id: (was: 614300) Remaining Estimate: 0h Time Spent: 10m