[ 
https://issues.apache.org/jira/browse/HDFS-16074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-16074.
------------------------------------
    Fix Version/s: 3.3.2
                   3.2.3
                   3.4.0
       Resolution: Fixed

Thanks a lot for the review.

> Remove an expensive debug string concatenation
> ----------------------------------------------
>
>                 Key: HDFS-16074
>                 URL: https://issues.apache.org/jira/browse/HDFS-16074
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.2.3, 3.3.2
>
>         Attachments: Screen Shot 2021-06-16 at 2.32.29 PM.png, Screen Shot 2021-06-17 at 10.32.21 AM.png
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> While running a YCSB load workload, we found an expensive string concatenation 
> on the write path in DFSOutputStream.writeChunkPrepare(): nearly 25% of the 
> HDFS client's write CPU time is spent there. The concatenation is unnecessary 
> because the result only feeds a debug message, so let's remove it.
> {code}
>     if (currentPacket == null) {
>       currentPacket = createPacket(packetSize, chunksPerPacket, getStreamer()
>           .getBytesCurBlock(), getStreamer().getAndIncCurrentSeqno(), false);
>       DFSClient.LOG.debug("WriteChunk allocating new packet seqno={},"
>               + " src={}, packetSize={}, chunksPerPacket={}, bytesCurBlock={}",
>           currentPacket.getSeqno(), src, packetSize, chunksPerPacket,
>           getStreamer().getBytesCurBlock() + ", " + this); // <---- here
>     }
> {code}
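For context: with SLF4J-style parameterized logging, the arguments are only formatted once the logger confirms the level is enabled, but any `+` concatenation written inside an argument expression still runs eagerly on every call, which is what makes the line above costly. A minimal, self-contained sketch (using a hypothetical debug() stand-in rather than the real DFSClient.LOG, and a toString() call counter as a stand-in for the formatting cost) illustrates the difference; passing each value as its own placeholder is one way to avoid the eager work, the actual change is in the linked pull request.

```java
// Demonstrates why concatenation inside a log argument defeats parameterized
// logging: the "+" runs before the logger can check whether debug is enabled.
public class DebugConcatDemo {
    static int toStringCalls = 0;

    // Stand-in for an object whose toString() is expensive (like DFSOutputStream).
    static class Expensive {
        @Override public String toString() {
            toStringCalls++;
            return "expensive";
        }
    }

    // Hypothetical stand-in for LOG.debug(String, Object...): arguments are
    // only formatted when debug is enabled, mirroring SLF4J's contract.
    static void debug(boolean debugEnabled, String fmt, Object... args) {
        if (debugEnabled) {
            for (Object a : args) a.toString();
        }
    }

    public static void main(String[] args) {
        Expensive e = new Expensive();

        // Bad: the concatenation builds the string eagerly, calling
        // e.toString() even though debug logging is off.
        debug(false, "state={}", 42 + ", " + e);
        System.out.println("after concatenated arg: " + toStringCalls);   // 1

        // Good: separate placeholders; nothing is formatted with debug off.
        debug(false, "state={}, obj={}", 42, e);
        System.out.println("after placeholder args: " + toStringCalls);   // still 1
    }
}
```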



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
