[ https://issues.apache.org/jira/browse/HDFS-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lisheng Sun updated HDFS-14820:
-------------------------------
Description:
This issue is similar to HDFS-14535.
{code:java}
public static BlockReader newBlockReader(String file,
    ExtendedBlock block,
    Token<BlockTokenIdentifier> blockToken,
    long startOffset, long len,
    boolean verifyChecksum,
    String clientName,
    Peer peer, DatanodeID datanodeID,
    PeerCache peerCache,
    CachingStrategy cachingStrategy,
    int networkDistance) throws IOException {
  // in and out will be closed when sock is closed (by the caller)
  final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
      peer.getOutputStream()));
  new Sender(out).readBlock(block, blockToken, clientName, startOffset, len,
      verifyChecksum, cachingStrategy);
}

public BufferedOutputStream(OutputStream out) {
  this(out, 8192);
}
{code}
The Sender#readBlock request (block, blockToken, clientName, startOffset, len, verifyChecksum, cachingStrategy) is only a small header and comes nowhere near filling an 8 KB buffer, so the BufferedOutputStream buffer size should be reduced.
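To illustrate the point, here is a minimal sketch (the class name, the `SMALL_BUFFER_SIZE` constant, and the 512-byte value are illustrative assumptions, not the actual patch; a real change would likely reuse HDFS's configured small-buffer size, as HDFS-14535 did elsewhere). It writes a stand-in for the short readBlock request through an explicitly sized BufferedOutputStream and measures how few bytes actually go out:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ReadBlockBufferSketch {
  // Hypothetical constant: the real patch would likely pick up HDFS's
  // configured "small buffer" size rather than hard-coding a value.
  static final int SMALL_BUFFER_SIZE = 512;

  // Writes a stand-in for the short readBlock request frame and returns the
  // number of bytes that actually reach the underlying stream.
  static int requestBytes() throws IOException {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(sink, SMALL_BUFFER_SIZE));
    out.writeUTF("clientName"); // stand-in for the OP_READ_BLOCK header fields
    out.writeLong(0L);          // startOffset
    out.writeLong(1024L);       // len
    out.flush();
    return sink.size();
  }

  public static void main(String[] args) throws IOException {
    System.out.println(requestBytes()); // prints 28, far below even 512 bytes
  }
}
```

Since the request is a few dozen bytes, the default 8192-byte buffer is almost entirely unused for this one-shot write; a small explicit buffer keeps the write coalesced without wasting heap per reader.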
> The default 8KB buffer of
> BlockReaderRemote#newBlockReader#BufferedOutputStream is too big
> -------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14820
>                 URL: https://issues.apache.org/jira/browse/HDFS-14820
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Lisheng Sun
>            Priority: Major
>
> This issue is similar to HDFS-14535.

--
This message was sent by Atlassian Jira
(v8.3.2#803003)