[ https://issues.apache.org/jira/browse/HADOOP-19292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886255#comment-17886255 ]
Benoit Sigoure commented on HADOOP-19292:
-----------------------------------------

Hi Steve, done: https://github.com/apache/hadoop/pull/7090

> BlockDecompressorStream#rawReadInt wastes about 1% of overall CPU cycles
> creating new EOFException
> --------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-19292
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19292
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: compress, io
>    Affects Versions: 3.3.6
>            Reporter: Benoit Sigoure
>            Priority: Major
>         Attachments: HADOOP-19292-Don-t-create-new-EOFException-in-BlockD.patch
>
>
> On our HBase clusters, while looking at CPU profiles, I noticed that about 1%
> of overall CPU cycles are spent under BlockDecompressorStream#rawReadInt just
> throwing EOFException. This could be easily avoided.
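For context on the technique: the cost of `new EOFException()` is dominated by
Throwable's constructor calling fillInStackTrace(), which walks the entire call
stack on every throw. Below is a minimal sketch of one common way to avoid that
on a hot path, a pre-allocated shared exception whose fillInStackTrace() is
overridden to skip stack capture. This is an illustration only, not the actual
Hadoop code or necessarily what PR 7090 does; the class name, message, and
stream handling are hypothetical.

    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;

    // Illustrative sketch: avoid allocating an EOFException (and walking
    // the stack) on every truncated read.
    final class RawReadIntSketch {

      // Pre-allocated, shared exception. Overriding fillInStackTrace() to
      // return `this` skips stack capture both at construction time and on
      // every subsequent throw. Trade-off: the reported stack trace is
      // empty, so the message must identify the failure on its own.
      private static final EOFException EOF =
          new EOFException("Unexpected end of input stream") {
            @Override
            public synchronized Throwable fillInStackTrace() {
              return this;
            }
          };

      // Big-endian int read in the style of rawReadInt(); the signature
      // and layout here are hypothetical, not copied from Hadoop.
      static int rawReadInt(InputStream in) throws IOException {
        int b1 = in.read();
        int b2 = in.read();
        int b3 = in.read();
        int b4 = in.read();
        if ((b1 | b2 | b3 | b4) < 0) {
          throw EOF; // no allocation, no stack walk
        }
        return (b1 << 24) | (b2 << 16) | (b3 << 8) | b4;
      }

      private RawReadIntSketch() {
      }
    }

The lost stack trace is usually acceptable in a case like this, where the
exception is thrown from exactly one well-known spot and simply signals a
truncated block rather than a programming error.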