[GitHub] [hadoop] tomscut commented on a change in pull request #3136: HDFS-16086. Add volume information to datanode log for tracing
tomscut commented on a change in pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#discussion_r660262918

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java

```diff
@@ -929,7 +929,7 @@ public void writeBlock(final ExtendedBlock block,
     if (isDatanode || stage == BlockConstructionStage.PIPELINE_CLOSE_RECOVERY) {
       datanode.closeBlock(block, null, storageUuid, isOnTransientStorage);
-      LOG.info("Received {} src: {} dest: {} of size {}",
+      LOG.info("Received {} src: {} dest: {} volume: {} of size {}",
```

Review comment: Thanks for pointing this out; I fixed it.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
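The fix above adds a `volume: {}` placeholder to the message, which only works if a matching argument is supplied. As a minimal sketch of why the placeholder and argument counts must stay in step, here is a tiny stand-in for SLF4J-style `{}` substitution (hypothetical helper, not the SLF4J implementation):

```java
public class LogPlaceholderSketch {
  // Tiny stand-in for SLF4J's "{}" substitution, just to illustrate that
  // adding "volume: {}" to the message requires a matching argument.
  static String fmt(String msg, Object... args) {
    StringBuilder out = new StringBuilder();
    int i = 0, from = 0, at;
    while ((at = msg.indexOf("{}", from)) >= 0 && i < args.length) {
      out.append(msg, from, at).append(args[i++]);
      from = at + 2;
    }
    return out.append(msg.substring(from)).toString();
  }

  public static void main(String[] args) {
    // Five placeholders, five arguments: block, src, dest, volume, size.
    System.out.println(fmt("Received {} src: {} dest: {} volume: {} of size {}",
        "blk_1073741825", "/10.0.0.1:41234", "/10.0.0.2:50010", "/data1", 4096));
  }
}
```

With real SLF4J, a surplus `{}` is simply left unfilled in the output rather than throwing, which is why mismatches like the one caught in review are easy to miss.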
[GitHub] [hadoop] tomscut commented on a change in pull request #3136: HDFS-16086. Add volume information to datanode log for tracing
tomscut commented on a change in pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#discussion_r660251541

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java

```diff
@@ -1631,6 +1633,7 @@ public ReplicaHandler createRbw(
     if (ref == null) {
       ref = volumes.getNextVolume(storageType, storageId, b.getNumBytes());
     }
+    LOG.info("Creating Rbw, block: {} on volume: {}", b, ref.getVolume());
```

Review comment:
> is this really necessary? IMO logging one message for every rbw is just too much.

I will change this to DEBUG level; do you think that is OK?
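The concern is volume: one log line per replica-being-written is noisy at INFO. At DEBUG, disabled-level messages cost almost nothing as long as the message is built lazily. Hadoop itself uses SLF4J; the sketch below shows the same idea with the stdlib `java.util.logging` API (the logger name and values are made up for illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class RbwLogSketch {
  private static final Logger LOG = Logger.getLogger("FsDatasetImpl");

  // Builds the message text; only invoked if the level is enabled.
  static String message(Object block, Object volume) {
    return "Creating Rbw, block: " + block + " on volume: " + volume;
  }

  public static void main(String[] args) {
    // FINE is JUL's rough equivalent of DEBUG. With the default INFO level,
    // the Supplier is never evaluated, so the per-replica cost is a cheap
    // level check rather than string concatenation.
    LOG.log(Level.FINE, () -> message("blk_1073741825", "/data1/hdfs"));
    System.out.println("FINE enabled: " + LOG.isLoggable(Level.FINE));
  }
}
```

SLF4J's parameterized form (`LOG.debug("... {} ...", b, ref.getVolume())`) achieves the same laziness: the arguments are only stringified when DEBUG is on.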
[GitHub] [hadoop] tomscut commented on a change in pull request #3136: HDFS-16086. Add volume information to datanode log for tracing
tomscut commented on a change in pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#discussion_r660251192

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java

```diff
@@ -587,7 +587,7 @@ public void readBlock(final ExtendedBlock block,
     final String clientTraceFmt = clientName.length() > 0 && ClientTraceLog.isInfoEnabled() ?
         String.format(DN_CLIENTTRACE_FORMAT, localAddress, remoteAddress,
-            "%d", "HDFS_READ", clientName, "%d",
+            "", "%d", "HDFS_READ", clientName, "%d",
```

Review comment: Because a volume field has been added to DN_CLIENTTRACE_FORMAT, the call sites were adapted to match the new argument list.
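The odd-looking `""` and literal `"%d"` arguments come from a two-pass formatting trick: the format string is partially filled with what is known up front, and placeholders that must survive to a later `String.format` call are passed through as literal text. A minimal sketch, using a simplified stand-in for `DN_CLIENTTRACE_FORMAT` (the real Hadoop constant has more fields and a different order):

```java
public class ClientTraceFmtSketch {
  // Simplified stand-in for DN_CLIENTTRACE_FORMAT after a volume slot is
  // added; field names and order here are illustrative only.
  public static final String DN_CLIENTTRACE_FORMAT =
      "src: %s, dest: %s, volume: %s, bytes: %s, op: %s, cli: %s";

  public static void main(String[] args) {
    // Pass 1 (at readBlock time): fill the fields known up front. The volume
    // is not known yet, so "" goes into its slot, and "%d" is passed through
    // as literal text so the byte count can be substituted later.
    String clientTraceFmt = String.format(DN_CLIENTTRACE_FORMAT,
        "/127.0.0.1:50010", "/127.0.0.1:41234", "", "%d", "HDFS_READ", "client-1");
    // Pass 2 (when the transfer completes): fill the remaining "%d".
    System.out.println(String.format(clientTraceFmt, 4096L));
  }
}
```

This is why adding one field to the format constant forces every first-pass call site to grow one argument, which is the adaptation the diff makes.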
[GitHub] [hadoop] tomscut commented on a change in pull request #3136: HDFS-16086. Add volume information to datanode log for tracing
tomscut commented on a change in pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#discussion_r660249250

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/Replica.java

```diff
@@ -19,49 +19,56 @@
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;

 /**
  * This represents block replicas which are stored in DataNode.
  */
 @InterfaceAudience.Private
 public interface Replica {
   /** Get the block ID */
-  public long getBlockId();
+  long getBlockId();

   /** Get the generation stamp */
-  public long getGenerationStamp();
+  long getGenerationStamp();

   /**
    * Get the replica state
    * @return the replica state
    */
-  public ReplicaState getState();
+  ReplicaState getState();

   /**
    * Get the number of bytes received
    * @return the number of bytes that have been received
    */
-  public long getNumBytes();
+  long getNumBytes();

   /**
    * Get the number of bytes that have written to disk
    * @return the number of bytes that have written to disk
    */
-  public long getBytesOnDisk();
+  long getBytesOnDisk();

   /**
    * Get the number of bytes that are visible to readers
    * @return the number of bytes that are visible to readers
    */
-  public long getVisibleLength();
+  long getVisibleLength();
```

Review comment: Thanks @jojochuang for your review. This change was only to fix checkstyle; I will restore it.
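The checkstyle issue behind the reverted hunk is that interface members are implicitly `public abstract` in Java, so writing `public` is redundant (checkstyle's RedundantModifier check flags it). The two styles compile to identical bytecode, as this small demo shows with a hypothetical `ReplicaSketch` interface:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Every interface method is implicitly `public abstract`, so the explicit
// `public` on the old Replica declarations added nothing.
interface ReplicaSketch {
  long getBlockId();                 // idiomatic: no modifier
  public long getGenerationStamp();  // legal, but flagged by checkstyle
}

public class RedundantModifierDemo {
  public static void main(String[] args) {
    for (Method m : ReplicaSketch.class.getMethods()) {
      // Both methods report public, with or without the written modifier.
      System.out.println(m.getName() + " public=" + Modifier.isPublic(m.getModifiers()));
    }
  }
}
```

Dropping the modifier is purely cosmetic, which is why it could be separated from the functional change in this PR.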