[ 
https://issues.apache.org/jira/browse/HDFS-16179?focusedWorklogId=639697&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-639697
 ]

ASF GitHub Bot logged work on HDFS-16179:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 19/Aug/21 05:01
            Start Date: 19/Aug/21 05:01
    Worklog Time Spent: 10m 
      Work Description: tomscut commented on pull request #3313:
URL: https://github.com/apache/hadoop/pull/3313#issuecomment-901611191


   > LGTM
   
   Thanks @ayushtkn for your review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 639697)
    Time Spent: 20m  (was: 10m)

> Update loglevel for BlockManager#chooseExcessRedundancyStriped to avoid too 
> many logs
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-16179
>                 URL: https://issues.apache.org/jira/browse/HDFS-16179
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.1.0
>            Reporter: tomscut
>            Assignee: tomscut
>            Priority: Minor
>              Labels: pull-request-available
>         Attachments: log-count.jpg, logs.jpg
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> private void chooseExcessRedundancyStriped(BlockCollection bc,
>     final Collection<DatanodeStorageInfo> nonExcess,
>     BlockInfo storedBlock,
>     DatanodeDescriptor delNodeHint) {
>   ...
>   // cardinality of found indicates the expected number of internal blocks
>   final int numOfTarget = found.cardinality();
>   final BlockStoragePolicy storagePolicy = storagePolicySuite.getPolicy(
>       bc.getStoragePolicyID());
>   final List<StorageType> excessTypes = storagePolicy.chooseExcess(
>       (short) numOfTarget, DatanodeStorageInfo.toStorageTypes(nonExcess));
>   if (excessTypes.isEmpty()) {
>     LOG.warn("excess types chosen for block {} among storages {} is empty",
>         storedBlock, nonExcess);
>     return;
>   }
>   ...
> }
> {code}
>  
> IMO, this code is only detecting excess StorageTypes, so lowering the log 
> level to DEBUG here has no adverse effect.
>  
> We have a cluster that uses the EC policy to store data. The current log 
> level here is WARN, and in about 50 minutes 286,093 of these logs were 
> printed, which can drown out other important logs.
>  
> !logs.jpg|width=1167,height=62!
>  
> !log-count.jpg|width=760,height=30!
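
The proposed change is simply to lower the `LOG.warn` call shown above to a debug-level log. A minimal, self-contained sketch of the idea (using java.util.logging as a stand-in for Hadoop's SLF4J logger; the class, method, and return values here are hypothetical illustrations, not Hadoop code):

{code:java}
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ExcessTypesLogDemo {
  private static final Logger LOG =
      Logger.getLogger(ExcessTypesLogDemo.class.getName());

  // Stand-in for the empty-excessTypes check inside
  // chooseExcessRedundancyStriped.
  static String classify(List<String> excessTypes) {
    if (excessTypes.isEmpty()) {
      // Before the patch this was logged at WARN; the proposal lowers
      // it to debug (FINE here), so routine EC housekeeping no longer
      // floods the log.
      LOG.log(Level.FINE, "excess types chosen for block is empty");
      return "debug";
    }
    return "proceed";
  }

  public static void main(String[] args) {
    System.out.println(classify(List.of()));       // debug
    System.out.println(classify(List.of("DISK"))); // proceed
  }
}
{code}

With the level lowered, operators who do need the detail can still enable it per-logger at runtime without restarting the NameNode.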



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
