[ https://issues.apache.org/jira/browse/HDFS-9839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155935#comment-15155935 ]

Hudson commented on HDFS-9839:
------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #9335 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9335/])
HDFS-9839. Reduce verbosity of processReport logging. (Contributed by Arpit Agarwal) (arp: rev d5abd293a890a8a1da48a166a291ae1c5644ad57)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Reduce verbosity of processReport logging
> -----------------------------------------
>
>                 Key: HDFS-9839
>                 URL: https://issues.apache.org/jira/browse/HDFS-9839
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.8.0
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>             Fix For: 2.8.0
>
>         Attachments: HDFS-9839.01.patch
>
>
> {{BlockManager#processReport}} logs one line per invalidated block at 
> INFO. HDFS-7503 moved this logging outside the NameSystem write lock, but we 
> still see the NameNode slowed down when the number of block invalidations is 
> very large, e.g. just after a large amount of data is deleted.
> {code}
>       for (Block b : invalidatedBlocks) {
>         blockLog.info("BLOCK* processReport: {} on node {} size {} does not "
>             + "belong to any file", b, node, b.getNumBytes());
>       }
> {code}
> We can change this statement to log at DEBUG and log only the total number 
> of block invalidations at INFO.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)