[ https://issues.apache.org/jira/browse/HDFS-6698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14068361#comment-14068361 ]
Liang Xie commented on HDFS-6698:
---------------------------------

In the normal situation, the HFiles that all HBase (p)reads go against should be immutable, so I assume the attached patch, per [~saint....@gmail.com]'s suggestion, is enough to relieve the "pread(s) were blocked by read request in HBase" issue. Let's see the QA result...

> try to optimize DFSInputStream.getFileLength()
> ----------------------------------------------
>
>                 Key: HDFS-6698
>                 URL: https://issues.apache.org/jira/browse/HDFS-6698
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>    Affects Versions: 3.0.0
>            Reporter: Liang Xie
>            Assignee: Liang Xie
>         Attachments: HDFS-6698.txt
>
>
> HBase prefers to invoke read() to serve scan requests and pread() to serve get requests, because pread() holds almost no locks.
> Imagine a read() is running. Because the method is declared as:
> {code}
> public synchronized int read
> {code}
> no other read() request can run concurrently; this is expected. But pread() also cannot run, because:
> {code}
> public int read(long position, byte[] buffer, int offset, int length)
>     throws IOException {
>   // sanity checks
>   dfsClient.checkOpen();
>   if (closed) {
>     throw new IOException("Stream closed");
>   }
>   failures = 0;
>   long filelen = getFileLength();
> {code}
> getFileLength() also needs the lock, so we need to figure out a lock-free implementation of getFileLength() before the HBase multi-stream feature is done.
> [~saint....@gmail.com]



--
This message was sent by Atlassian JIRA
(v6.2#6252)
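The blocking behavior described above can be sketched in a few lines. This is a hypothetical illustration, not the actual attached patch: the class name FakeInputStream and both getter methods are made up for the demo. The idea is that for a closed (immutable) file the length is fixed at open time, so publishing it through a volatile field lets a pread()-style caller read it without taking the stream lock that a synchronized read() may be holding.

```java
public class LockFreeLengthDemo {

    // Hypothetical stand-in for DFSInputStream, stripped down to the
    // one field that matters for this discussion.
    static class FakeInputStream {
        // For a complete (non-under-construction) file the length never
        // changes after open; volatile gives safe publication without a lock.
        private volatile long fileLength;

        FakeInputStream(long length) {
            this.fileLength = length;
        }

        // Old style: synchronized, so it blocks while another thread holds
        // the stream's monitor (e.g. a long-running synchronized read()).
        synchronized long getFileLengthLocked() {
            return fileLength;
        }

        // Sketch of the lock-free style: no monitor acquisition; safe here
        // because fileLength is volatile and never mutated after open.
        long getFileLengthLockFree() {
            return fileLength;
        }
    }

    public static void main(String[] args) throws Exception {
        final FakeInputStream in = new FakeInputStream(4096);

        // Simulate a slow sequential read() holding the stream's monitor.
        Thread reader = new Thread(() -> {
            synchronized (in) {
                try {
                    Thread.sleep(200);
                } catch (InterruptedException ignored) {
                }
            }
        });
        reader.start();
        Thread.sleep(50); // let the reader grab the monitor first

        // A pread()-style caller using the lock-free getter returns
        // immediately instead of waiting the remaining ~150 ms.
        long start = System.nanoTime();
        long len = in.getFileLengthLockFree();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        reader.join();
        System.out.println(len == 4096 && elapsedMs < 100);
    }
}
```

Calling getFileLengthLocked() from the second thread instead would block until the simulated read() releases the monitor, which is exactly the pread-blocked-by-read symptom reported in this issue.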