[ 
https://issues.apache.org/jira/browse/HDFS-11280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15796738#comment-15796738
 ] 

Brahma Reddy Battula commented on HDFS-11280:
---------------------------------------------

As the changes are only in hdfs-client, the tests were not run on the hadoop-hdfs
project; hence Jenkins did not catch this.

As I suggested earlier, we should at least run the parent project's test cases
(cd to the parent project), but that did not happen.

{noformat}
unit test pre-reqs:
cd /testptch/hadoop/hadoop-common-project/hadoop-common
mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-0 install -DskipTests -Pnative -Drequire.libwebhdfs -Drequire.snappy -Drequire.openssl -Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui > /testptch/hadoop/patchprocess/maven-unit-prereq-hadoop-common-project_hadoop-common-install.txt 2>&1
cd /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-client
mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-0 -Ptest-patch -Pparallel-tests -P!shelltest -Pnative -Drequire.libwebhdfs -Drequire.snappy -Drequire.openssl -Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui clean test -fae > /testptch/hadoop/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt 2>&1
Elapsed:   1m  1s

hadoop-hdfs-client in the patch passed.
{noformat}

> Allow WebHDFS to reuse HTTP connections to NN
> ---------------------------------------------
>
>                 Key: HDFS-11280
>                 URL: https://issues.apache.org/jira/browse/HDFS-11280
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 2.7.3, 2.6.5, 3.0.0-alpha1
>            Reporter: Zheng Shao
>            Assignee: Zheng Shao
>         Attachments: HDFS-11280.for.2.7.and.below.patch, 
> HDFS-11280.for.2.8.and.beyond.2.patch, HDFS-11280.for.2.8.and.beyond.3.patch, 
> HDFS-11280.for.2.8.and.beyond.4.patch, HDFS-11280.for.2.8.and.beyond.5.patch, 
> HDFS-11280.for.2.8.and.beyond.patch
>
>
> WebHDFSClient calls "conn.disconnect()", which disconnects from the NameNode.
> When we use webhdfs as the source in distcp, this uses up all ephemeral
> ports on the client side, since every closed connection continues to occupy its
> port in TIME_WAIT state for some time.
> According to http://tinyurl.com/java7-http-keepalive, we should call
> conn.getInputStream().close() instead to make sure the connection is kept
> alive.  This gets rid of the ephemeral port problem.
> Manual steps used to verify the bug fix:
> 1. Build the original hadoop jar.
> 2. Run distcp with webhdfs as the source; "netstat -n | grep TIME_WAIT |
> grep -c 50070" on the local machine shows a large number (hundreds).
> 3. Build the hadoop jar with this diff.
> 4. Run distcp with webhdfs as the source; "netstat -n | grep TIME_WAIT |
> grep -c 50070" on the local machine shows 0.
> 5. Explanation: distcp's client side does a lot of directory scanning,
> which creates and closes a lot of connections to the namenode HTTP port.
> Reference:
> 2.7 and below: 
> https://github.com/apache/hadoop/blob/branch-2.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L743
> 2.8 and above: 
> https://github.com/apache/hadoop/blob/branch-2.8/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L898
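For reference, a minimal sketch of the keep-alive pattern described above, as a hypothetical standalone example (class name, helper method and URL are made up, this is not the actual WebHdfsFileSystem code):

{noformat}
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical standalone example; names and URL are illustrative only.
public class WebHdfsKeepAliveSketch {

  // Drain and close the response stream so the JDK HTTP client can return
  // the underlying socket to its keep-alive cache for reuse.
  static void finishRequest(HttpURLConnection conn) throws IOException {
    InputStream in = conn.getInputStream();
    byte[] buf = new byte[4096];
    while (in.read(buf) >= 0) {
      // discard the body; we only need to release the connection
    }
    in.close();
    // Calling conn.disconnect() here instead would close the socket and
    // leave an ephemeral port in TIME_WAIT on the client.
  }

  public static void main(String[] args) throws IOException {
    URL url = new URL(
        "http://namenode.example.com:50070/webhdfs/v1/tmp?op=LISTSTATUS");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    finishRequest(conn);
  }
}
{noformat}

As the linked article notes, the JDK only returns a connection to its keep-alive pool when the response body has been fully read before the stream is closed.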


