[ https://issues.apache.org/jira/browse/HDFS-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503737#comment-14503737 ]

Hadoop QA commented on HDFS-8078:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12726636/HDFS-8078.4.patch
  against trunk revision f967fd2.

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test file.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client:

                  org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA

    The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-client:

                  org.apache.hadoop.hdfs.TestFileCreation

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10320//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10320//console

This message is automatically generated.

> HDFS client gets errors trying to connect to IPv6 DataNode
> ----------------------------------------------------------
>
>                 Key: HDFS-8078
>                 URL: https://issues.apache.org/jira/browse/HDFS-8078
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.6.0
>            Reporter: Nate Edel
>            Assignee: Nate Edel
>              Labels: ipv6
>         Attachments: HDFS-8078.4.patch
>
>
> 1st exception, on put:
> 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception
> java.lang.IllegalArgumentException: Does not contain a valid host:port 
> authority: 2401:db00:1010:70ba:face:0:8:0:50010
>       at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
>       at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>       at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
>       at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
> This appears to stem from code in DatanodeID that assumes it is safe to 
> concatenate (ipaddr + ":" + port) -- fine for IPv4, but ambiguous for IPv6, 
> where the address itself contains colons.  NetUtils.createSocketAddr() 
> assembles a Java URI object, which requires IPv6 literals to be bracketed, 
> e.g. proto://[2401:db00:1010:70ba:face:0:8:0]:50010 (see the first sketch 
> below the quoted description).
> The patch currently uses InetAddress.getByName() to validate IPv6 literals 
> (Guava's InetAddresses.forString has been flaky), though we could also use 
> our own parsing.  From logging, this looks like a low-enough-frequency call 
> that the extra object creation shouldn't be problematic, and for me the 
> slight risk of triggering an external DNS lookup on input that is not 
> actually an IPv4 or IPv6 literal is outweighed by getting the address 
> normalized and avoiding rewriting the parsing (see the second sketch below).
> Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress() would avoid 
> the lookup entirely, at the cost of depending on an internal JDK class.
> -------
> 2nd exception (on datanode)
> 15/04/13 13:18:07 ERROR datanode.DataNode: 
> dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown 
> operation  src: /2401:db00:20:7013:face:0:7:0:54152 dst: 
> /2401:db00:11:d010:face:0:2f:0:50010
> java.io.EOFException
>         at java.io.DataInputStream.readShort(DataInputStream.java:315)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
>         at java.lang.Thread.run(Thread.java:745)
> This also surfaces as the client error "-get: 2401 is not an IP string 
> literal."
> Here the existing parsing logic needs to split on the last colon rather than 
> the first; using lastIndexOf instead of split should also be a tiny bit 
> faster (see the third sketch below).  The techniques above could be used 
> instead.
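
A minimal sketch (plain Java, not code from HDFS-8078.4.patch) of why the naive 
authority fails and how bracketing fixes it; hostPortAuthority is a 
hypothetical helper:

{code:java}
import java.net.URI;

public class Ipv6Authority {
  // Hypothetical helper: bracket IPv6 literals before building a
  // host:port authority, as java.net.URI requires.
  static String hostPortAuthority(String ipAddr, int port) {
    String host = ipAddr.contains(":") ? "[" + ipAddr + "]" : ipAddr;
    return host + ":" + port;
  }

  public static void main(String[] args) throws Exception {
    // Naive concatenation: the authority has too many colons, so URI
    // cannot parse a server-based host:port and getHost() comes back
    // null -- which is what makes NetUtils.createSocketAddr() throw
    // "Does not contain a valid host:port authority".
    URI bad = new URI("hdfs://" + "2401:db00:1010:70ba:face:0:8:0" + ":" + 50010);
    System.out.println(bad.getHost());  // null

    // The bracketed form parses cleanly.
    URI good = new URI("hdfs://"
        + hostPortAuthority("2401:db00:1010:70ba:face:0:8:0", 50010));
    System.out.println(good.getHost() + " port " + good.getPort());
    // [2401:db00:1010:70ba:face:0:8:0] port 50010
  }
}
{code}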
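
A sketch of the validate-and-normalize approach described above; normalizeAddr 
is illustrative only:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class AddrNormalizer {
  // For an IPv4/IPv6 literal, InetAddress.getByName() parses and
  // normalizes the address without a DNS query; a non-literal string
  // falls through to the resolver -- the risk weighed above.
  static String normalizeAddr(String addr) {
    try {
      return InetAddress.getByName(addr).getHostAddress();
    } catch (UnknownHostException e) {
      throw new IllegalArgumentException("Not a valid IP address: " + addr, e);
    }
  }

  public static void main(String[] args) {
    // Case is normalized to the canonical lowercase colon-hex form.
    System.out.println(normalizeAddr("2401:DB00:1010:70BA:FACE:0:8:0"));
    // 2401:db00:1010:70ba:face:0:8:0
  }
}
{code}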
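
And a sketch of splitting on the last colon rather than the first; 
splitHostPort is hypothetical and may not match the patch:

{code:java}
public class HostPortSplitter {
  // Everything before the final ':' is the address, the remainder is
  // the port; lastIndexOf also avoids the array that split() allocates.
  static String[] splitHostPort(String addrWithPort) {
    int idx = addrWithPort.lastIndexOf(':');
    if (idx < 0) {
      throw new IllegalArgumentException("Missing port in: " + addrWithPort);
    }
    return new String[] { addrWithPort.substring(0, idx),
                          addrWithPort.substring(idx + 1) };
  }

  public static void main(String[] args) {
    String[] hp = splitHostPort("2401:db00:1010:70ba:face:0:8:0:50010");
    System.out.println(hp[0] + " / " + hp[1]);
    // 2401:db00:1010:70ba:face:0:8:0 / 50010  (not "2401" / "db00:...")
  }
}
{code}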


