[jira] [Commented] (HADOOP-18249) Fix getUri() in HttpRequest has been deprecated
[ https://issues.apache.org/jira/browse/HADOOP-18249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17541638#comment-17541638 ]

Hemanth Boyina commented on HADOOP-18249:
-----------------------------------------

committed to trunk, thanks for the contribution [~slfan1989]

> Fix getUri() in HttpRequest has been deprecated
> -----------------------------------------------
>
>                 Key: HADOOP-18249
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18249
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.4.0
>            Reporter: fanshilun
>            Assignee: fanshilun
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>         Attachments: getUri() deprecated -1.png
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that the methods used have been deprecated
> due to the upgrade of the Netty component. The main methods are as follows:
>
> io.netty.handler.codec.http#HttpRequest
> @Deprecated
> HttpMethod getMethod();
> Deprecated. Use method() instead.
>
> @Deprecated
> String getUri()
> Deprecated. Use uri() instead.
>
> io.netty.handler.codec.http#DefaultHttpResponse
> @Deprecated
> public HttpResponseStatus getStatus()
> { return this.status(); }
> Deprecated. Use status() instead.
>
> WebHdfsHandler.java:125:35:[deprecation] getUri() in HttpRequest has been deprecated
> HostRestrictingAuthorizationFilterHandler.java:200:27:[deprecation] getUri() in HttpRequest has been deprecated

--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
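The deprecation pattern the report quotes (the old getter kept only as a thin delegate to the new accessor) and the call-site migration can be sketched with a self-contained stand-in class. This is not the real Netty HttpRequest; the class below is hypothetical and exists only so the example compiles without Netty on the classpath:

```java
// Stand-in mimicking Netty's deprecation pattern (NOT io.netty classes).
class LegacyStyleRequest {
    private final String uri;

    LegacyStyleRequest(String uri) { this.uri = uri; }

    /** New-style accessor, analogous to Netty's HttpRequest#uri(). */
    String uri() { return uri; }

    /** Old accessor kept for compatibility; it just delegates to uri(). */
    @Deprecated
    String getUri() { return uri(); }
}

public class DeprecatedAccessorDemo {
    public static void main(String[] args) {
        LegacyStyleRequest req = new LegacyStyleRequest("/webhdfs/v1/tmp?op=LISTSTATUS");
        // Before (flagged by -Xlint:deprecation):  String u = req.getUri();
        // After (the HADOOP-18249 change pattern): String u = req.uri();
        String u = req.uri();
        System.out.println(u);
    }
}
```

Because the deprecated method delegates to the new one, the two calls return identical values, so the migration is behavior-preserving and only silences the compiler warning.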
[jira] [Resolved] (HADOOP-18249) Fix getUri() in HttpRequest has been deprecated
[ https://issues.apache.org/jira/browse/HADOOP-18249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina resolved HADOOP-18249.
-------------------------------------
    Resolution: Fixed

> Fix getUri() in HttpRequest has been deprecated
> -----------------------------------------------
>
>                 Key: HADOOP-18249
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18249
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.4.0
>            Reporter: fanshilun
>            Assignee: fanshilun
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>         Attachments: getUri() deprecated -1.png
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that the methods used have been deprecated
> due to the upgrade of the Netty component. The main methods are as follows:
>
> io.netty.handler.codec.http#HttpRequest
> @Deprecated
> HttpMethod getMethod();
> Deprecated. Use method() instead.
>
> @Deprecated
> String getUri()
> Deprecated. Use uri() instead.
>
> io.netty.handler.codec.http#DefaultHttpResponse
> @Deprecated
> public HttpResponseStatus getStatus()
> { return this.status(); }
> Deprecated. Use status() instead.
>
> WebHdfsHandler.java:125:35:[deprecation] getUri() in HttpRequest has been deprecated
> HostRestrictingAuthorizationFilterHandler.java:200:27:[deprecation] getUri() in HttpRequest has been deprecated
[jira] [Resolved] (HADOOP-18239) Update guava to 30.1.1-jre
[ https://issues.apache.org/jira/browse/HADOOP-18239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina resolved HADOOP-18239.
-------------------------------------
    Resolution: Duplicate

> Update guava to 30.1.1-jre
> --------------------------
>
>                 Key: HADOOP-18239
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18239
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Hemanth Boyina
>            Assignee: Hemanth Boyina
>            Priority: Major
>
> Update guava to 30.1.1-jre
[jira] [Created] (HADOOP-18239) Update guava to 30.1.1-jre
Hemanth Boyina created HADOOP-18239:
---------------------------------------

             Summary: Update guava to 30.1.1-jre
                 Key: HADOOP-18239
                 URL: https://issues.apache.org/jira/browse/HADOOP-18239
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Hemanth Boyina
            Assignee: Hemanth Boyina


Update guava to 30.1.1-jre
[jira] [Commented] (HADOOP-16889) NetUtils.createSocketAddr was not handling IPV6 scoped address
[ https://issues.apache.org/jira/browse/HADOOP-16889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17423109#comment-17423109 ]

Hemanth Boyina commented on HADOOP-16889:
-----------------------------------------

issue was fixed in HADOOP-17542

> NetUtils.createSocketAddr was not handling IPV6 scoped address
> --------------------------------------------------------------
>
>                 Key: HADOOP-16889
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16889
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Hemanth Boyina
>            Assignee: Hemanth Boyina
>            Priority: Major
>         Attachments: HADOOP-16889.001.patch, HADOOP-16889.002.patch

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16889) NetUtils.createSocketAddr was not handling IPV6 scoped address
[ https://issues.apache.org/jira/browse/HADOOP-16889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina updated HADOOP-16889:
------------------------------------
    Resolution: Duplicate
        Status: Resolved  (was: Patch Available)

> NetUtils.createSocketAddr was not handling IPV6 scoped address
> --------------------------------------------------------------
>
>                 Key: HADOOP-16889
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16889
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Hemanth Boyina
>            Assignee: Hemanth Boyina
>            Priority: Major
>         Attachments: HADOOP-16889.001.patch, HADOOP-16889.002.patch
[jira] [Commented] (HADOOP-17542) IPV6 support in Netutils#createSocketAddress
[ https://issues.apache.org/jira/browse/HADOOP-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17396608#comment-17396608 ]

Hemanth Boyina commented on HADOOP-17542:
-----------------------------------------

thanks for the contribution [~prasad-acit], thanks for the review [~brahmareddy]

> IPV6 support in Netutils#createSocketAddress
> --------------------------------------------
>
>                 Key: HADOOP-17542
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17542
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.1.1
>            Reporter: ANANDA G B
>            Assignee: Renukaprasad C
>            Priority: Minor
>              Labels: ipv6, pull-request-available
>             Fix For: HADOOP-17800
>
>         Attachments: HADOOP-17542-HADOOP-11890-001.patch, Test Scenarios Verified in IPV6 cluster.doc
>
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Currently NetUtils#createSocketAddr does not support a target that is an
> IPv6 address: it throws "Does not contain a valid host:port authority: ".
> This needs to be supported.
>
> public static InetSocketAddress createSocketAddr(String target,
>     int defaultPort,
>     String configName,
>     boolean useCacheIfPresent) {
>   String helpText = "";
>   if (configName != null) {
>     helpText = " (configuration property '" + configName + "')";
>   }
>   if (target == null) {
>     throw new IllegalArgumentException("Target address cannot be null." + helpText);
>   }
>   target = target.trim();
>   boolean hasScheme = target.contains("://");
>   URI uri = createURI(target, hasScheme, helpText, useCacheIfPresent);
>   String host = uri.getHost();
>   int port = uri.getPort();
>   if (port == -1) {
>     port = defaultPort;
>   }
>   String path = uri.getPath();
>   if ((host == null) || (port < 0) ||
>       (!hasScheme && path != null && !path.isEmpty())) {
>     throw new IllegalArgumentException(
>         "Does not contain a valid host:port authority: " + target + helpText);
>   }
>   return createSocketAddrForHost(host, port);
> }
[jira] [Resolved] (HADOOP-17542) IPV6 support in Netutils#createSocketAddress
[ https://issues.apache.org/jira/browse/HADOOP-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina resolved HADOOP-17542.
-------------------------------------
    Fix Version/s: HADOOP-17800
       Resolution: Fixed

> IPV6 support in Netutils#createSocketAddress
> --------------------------------------------
>
>                 Key: HADOOP-17542
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17542
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.1.1
>            Reporter: ANANDA G B
>            Assignee: Renukaprasad C
>            Priority: Minor
>              Labels: ipv6, pull-request-available
>             Fix For: HADOOP-17800
>
>         Attachments: HADOOP-17542-HADOOP-11890-001.patch, Test Scenarios Verified in IPV6 cluster.doc
>
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Currently NetUtils#createSocketAddr does not support a target that is an
> IPv6 address: it throws "Does not contain a valid host:port authority: ".
> This needs to be supported.
>
> public static InetSocketAddress createSocketAddr(String target,
>     int defaultPort,
>     String configName,
>     boolean useCacheIfPresent) {
>   String helpText = "";
>   if (configName != null) {
>     helpText = " (configuration property '" + configName + "')";
>   }
>   if (target == null) {
>     throw new IllegalArgumentException("Target address cannot be null." + helpText);
>   }
>   target = target.trim();
>   boolean hasScheme = target.contains("://");
>   URI uri = createURI(target, hasScheme, helpText, useCacheIfPresent);
>   String host = uri.getHost();
>   int port = uri.getPort();
>   if (port == -1) {
>     port = defaultPort;
>   }
>   String path = uri.getPath();
>   if ((host == null) || (port < 0) ||
>       (!hasScheme && path != null && !path.isEmpty())) {
>     throw new IllegalArgumentException(
>         "Does not contain a valid host:port authority: " + target + helpText);
>   }
>   return createSocketAddrForHost(host, port);
> }
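The quoted createSocketAddr builds a java.net.URI and then reads the host and port back out. The reason a bare IPv6 target fails is that URI syntax requires an IPv6 literal in the authority to be bracketed. The sketch below is NOT the Hadoop patch, only a minimal illustration of that requirement: it wraps an apparently bare IPv6 literal in brackets before parsing (the bare-literal heuristic is deliberately rough and does not handle a bare literal combined with a port, which is inherently ambiguous):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class SocketAddrSketch {
    /** Parse "host:port" where host may be an IPv6 literal; returns {host, port}. */
    public static String[] parseHostPort(String target, int defaultPort) {
        String t = target.trim();
        // Heuristic: more than one ':' and no '[' => a bare IPv6 literal
        // (no port); bracket it so java.net.URI can parse the authority.
        if (t.indexOf(':') != t.lastIndexOf(':') && !t.startsWith("[")) {
            t = "[" + t + "]";
        }
        try {
            // "dummy" is a placeholder scheme just to form a parseable URI.
            URI uri = new URI("dummy://" + t);
            String host = uri.getHost();
            int port = uri.getPort() == -1 ? defaultPort : uri.getPort();
            if (host == null || port < 0) {
                throw new IllegalArgumentException(
                    "Does not contain a valid host:port authority: " + target);
            }
            return new String[] { host, Integer.toString(port) };
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(
                "Does not contain a valid host:port authority: " + target, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", parseHostPort("127.0.0.1:8020", 0)));
        System.out.println(String.join(" ", parseHostPort("[::1]:9000", 0)));
        System.out.println(String.join(" ", parseHostPort("::1", 8020)));
    }
}
```

Note that java.net.URI keeps the brackets around an IPv6 host in getHost(), so a real implementation also has to decide whether to strip them before resolving the address.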
[jira] [Commented] (HADOOP-17542) IPV6 support in Netutils#createSocketAddress
[ https://issues.apache.org/jira/browse/HADOOP-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17391399#comment-17391399 ]

Hemanth Boyina commented on HADOOP-17542:
-----------------------------------------

Yes [~weichiu], the PR has to be raised against the HADOOP-17800 branch, [~prasad-acit] can you please do that

> IPV6 support in Netutils#createSocketAddress
> --------------------------------------------
>
>                 Key: HADOOP-17542
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17542
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.1.1
>            Reporter: ANANDA G B
>            Assignee: Renukaprasad C
>            Priority: Minor
>              Labels: ipv6, pull-request-available
>         Attachments: HADOOP-17542-HADOOP-11890-001.patch, Test Scenarios Verified in IPV6 cluster.doc
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently NetUtils#createSocketAddr does not support a target that is an
> IPv6 address: it throws "Does not contain a valid host:port authority: ".
> This needs to be supported.
>
> public static InetSocketAddress createSocketAddr(String target,
>     int defaultPort,
>     String configName,
>     boolean useCacheIfPresent) {
>   String helpText = "";
>   if (configName != null) {
>     helpText = " (configuration property '" + configName + "')";
>   }
>   if (target == null) {
>     throw new IllegalArgumentException("Target address cannot be null." + helpText);
>   }
>   target = target.trim();
>   boolean hasScheme = target.contains("://");
>   URI uri = createURI(target, hasScheme, helpText, useCacheIfPresent);
>   String host = uri.getHost();
>   int port = uri.getPort();
>   if (port == -1) {
>     port = defaultPort;
>   }
>   String path = uri.getPath();
>   if ((host == null) || (port < 0) ||
>       (!hasScheme && path != null && !path.isEmpty())) {
>     throw new IllegalArgumentException(
>         "Does not contain a valid host:port authority: " + target + helpText);
>   }
>   return createSocketAddrForHost(host, port);
> }
[jira] [Updated] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
[ https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina updated HADOOP-12670:
------------------------------------
    Attachment: HADOOP-12670-HADOOP-17800.002.patch

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -----------------------------------------------------------------
>
>                 Key: HADOOP-12670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12670
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: net
>    Affects Versions: HADOOP-11890
>            Reporter: Elliott Neil Clark
>            Assignee: Elliott Neil Clark
>            Priority: Major
>         Attachments: HADOOP-12670-HADOOP-11890.0.patch, HADOOP-12670-HADOOP-11890.2.patch, HADOOP-12670-HADOOP-11890.3.patch, HADOOP-12670-HADOOP-17800.001.patch, HADOOP-12670-HADOOP-17800.002.patch
>
> {code}
> TestSecurityUtil.testBuildTokenServiceSockAddr:165 expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
> TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but was:<[127.0.0.]1>
> TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}
[jira] [Commented] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
[ https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17390434#comment-17390434 ]

Hemanth Boyina commented on HADOOP-12670:
-----------------------------------------

reopened and uploaded patch against trunk as the current patch has conflicts, please see here https://issues.apache.org/jira/browse/HADOOP-11890?focusedCommentId=17379845=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17379845 for more details

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -----------------------------------------------------------------
>
>                 Key: HADOOP-12670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12670
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: net
>    Affects Versions: HADOOP-11890
>            Reporter: Elliott Neil Clark
>            Assignee: Elliott Neil Clark
>            Priority: Major
>         Attachments: HADOOP-12670-HADOOP-11890.0.patch, HADOOP-12670-HADOOP-11890.2.patch, HADOOP-12670-HADOOP-11890.3.patch, HADOOP-12670-HADOOP-17800.001.patch
>
> {code}
> TestSecurityUtil.testBuildTokenServiceSockAddr:165 expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
> TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but was:<[127.0.0.]1>
> TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}
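The quoted assertion failures all have the same shape: a test hard-codes the textual form "127.0.0.1" while an IPv6-only host resolves localhost to "0:0:0:0:0:0:0:1" (Java's uncompressed rendering of ::1). A small stdlib-only demonstration of why the string comparison fails even though both addresses are loopback (this is illustrative code, not the Hadoop test suite):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LoopbackFormsDemo {
    /** Resolve a host string to its canonical textual address form. */
    public static String canonical(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException(host, e);
        }
    }

    public static void main(String[] args) {
        // Java prints IPv6 loopback uncompressed: "0:0:0:0:0:0:0:1".
        System.out.println(canonical("::1"));
        // Textual forms differ across address families...
        System.out.println(canonical("127.0.0.1").equals(canonical("::1")));
        // ...even though both are loopback addresses.
        System.out.println(InetAddress.getLoopbackAddress().isLoopbackAddress());
    }
}
```

This is why such tests need to compare against the actual loopback address of the environment (or check isLoopbackAddress()) rather than a literal "127.0.0.1".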
[jira] [Updated] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
[ https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina updated HADOOP-12670:
------------------------------------
    Attachment: HADOOP-12670-HADOOP-17800.001.patch
        Status: Patch Available  (was: Reopened)

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -----------------------------------------------------------------
>
>                 Key: HADOOP-12670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12670
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: net
>    Affects Versions: HADOOP-11890
>            Reporter: Elliott Neil Clark
>            Assignee: Elliott Neil Clark
>            Priority: Major
>         Attachments: HADOOP-12670-HADOOP-11890.0.patch, HADOOP-12670-HADOOP-11890.2.patch, HADOOP-12670-HADOOP-11890.3.patch, HADOOP-12670-HADOOP-17800.001.patch
>
> {code}
> TestSecurityUtil.testBuildTokenServiceSockAddr:165 expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
> TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but was:<[127.0.0.]1>
> TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}
[jira] [Reopened] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
[ https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina reopened HADOOP-12670:
-------------------------------------

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -----------------------------------------------------------------
>
>                 Key: HADOOP-12670
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12670
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: net
>    Affects Versions: HADOOP-11890
>            Reporter: Elliott Neil Clark
>            Assignee: Elliott Neil Clark
>            Priority: Major
>         Attachments: HADOOP-12670-HADOOP-11890.0.patch, HADOOP-12670-HADOOP-11890.2.patch, HADOOP-12670-HADOOP-11890.3.patch, HADOOP-12670-HADOOP-17800.001.patch
>
> {code}
> TestSecurityUtil.testBuildTokenServiceSockAddr:165 expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
> TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but was:<[127.0.0.]1>
> TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}
[jira] [Updated] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina updated HADOOP-12491:
------------------------------------
    Attachment: HADOOP-12491-HADOOP-17800.004.patch

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-12491
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12491
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: HADOOP-11890
>            Reporter: Nemanja Matkovic
>            Assignee: Nemanja Matkovic
>            Priority: Major
>              Labels: ipv6
>             Fix For: HADOOP-11890
>
>         Attachments: HADOOP-12491-HADOOP-11890.1.patch, HADOOP-12491-HADOOP-11890.2.patch, HADOOP-12491-HADOOP-17800.001.patch, HADOOP-12491-HADOOP-17800.002.patch, HADOOP-12491-HADOOP-17800.003.patch, HADOOP-12491-HADOOP-17800.004.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122
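The "unsafe split" in this issue's title refers to code that splits a "host:port" string on ':' (or appends ":port" to a host) without considering that an IPv6 literal itself contains colons. A bracket-aware split can be sketched as follows; this is an illustrative helper, not the actual HADOOP-12491 patch:

```java
public class HostPortSplit {
    /**
     * Split "host:port" into {host, port}, treating "[...]" as an IPv6
     * literal. Naive target.split(":")[0] would mangle "[fe80::1]:8020".
     * Returns an empty port string when no port is present.
     */
    public static String[] splitHostPort(String target) {
        if (target.startsWith("[")) {
            int close = target.indexOf(']');
            if (close < 0) {
                throw new IllegalArgumentException("Unclosed bracket: " + target);
            }
            String host = target.substring(1, close);          // strip brackets
            String rest = target.substring(close + 1);         // "" or ":port"
            String port = rest.startsWith(":") ? rest.substring(1) : "";
            return new String[] { host, port };
        }
        int colon = target.indexOf(':');
        if (colon < 0) {
            return new String[] { target, "" };
        }
        return new String[] { target.substring(0, colon), target.substring(colon + 1) };
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", splitHostPort("[fe80::1]:8020")));
        System.out.println(String.join(" ", splitHostPort("node1:8020")));
    }
}
```

The symmetric append direction is the same idea in reverse: when building an authority string, an IPv6 host must be wrapped as "[" + host + "]" before ":" + port is appended.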
[jira] [Updated] (HADOOP-12432) Add support for include/exclude lists on IPv6 setup
[ https://issues.apache.org/jira/browse/HADOOP-12432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina updated HADOOP-12432:
------------------------------------
    Attachment: HADOOP-12432-HADOOP-17800.001.patch
        Status: Patch Available  (was: Reopened)

> Add support for include/exclude lists on IPv6 setup
> ---------------------------------------------------
>
>                 Key: HADOOP-12432
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12432
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0-alpha1
>         Environment: This affects only IPv6 cluster setup
>            Reporter: Nemanja Matkovic
>            Assignee: Nemanja Matkovic
>            Priority: Major
>              Labels: ipv6
>             Fix For: HADOOP-11890
>
>         Attachments: HADOOP-12432-HADOOP-11890.1.patch, HADOOP-12432-HADOOP-17800.001.patch, HADOOP-12432-trunk.patch, HADOOP-12432.1.patch, HADOOP-12432.2.patch, HADOOP-12432.3.patch, HDFS-8078.15_plus_HDFS-9026.patch, HDFS-9026-1.patch, HDFS-9026-2.patch, HDFS-9026-HADOOP-11890.002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> This is a tracking item for having e2e IPv6 support in HDFS.
> Nate did great ground work in HDFS-8078, but for the whole feature to work
> e2e this is one of the missing items.
> Basically, today the NN won't be able to parse IPv6 addresses if they are
> present in an include or exclude list.
> The patch has a dependency on HDFS-8078.14.patch (and has been tested on an
> IPv6-only cluster).
> This should be committed to the HADOOP-11890 branch.
[jira] [Reopened] (HADOOP-12432) Add support for include/exclude lists on IPv6 setup
[ https://issues.apache.org/jira/browse/HADOOP-12432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina reopened HADOOP-12432:
-------------------------------------

> Add support for include/exclude lists on IPv6 setup
> ---------------------------------------------------
>
>                 Key: HADOOP-12432
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12432
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0-alpha1
>         Environment: This affects only IPv6 cluster setup
>            Reporter: Nemanja Matkovic
>            Assignee: Nemanja Matkovic
>            Priority: Major
>              Labels: ipv6
>             Fix For: HADOOP-11890
>
>         Attachments: HADOOP-12432-HADOOP-11890.1.patch, HADOOP-12432-HADOOP-17800.001.patch, HADOOP-12432-trunk.patch, HADOOP-12432.1.patch, HADOOP-12432.2.patch, HADOOP-12432.3.patch, HDFS-8078.15_plus_HDFS-9026.patch, HDFS-9026-1.patch, HDFS-9026-2.patch, HDFS-9026-HADOOP-11890.002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> This is a tracking item for having e2e IPv6 support in HDFS.
> Nate did great ground work in HDFS-8078, but for the whole feature to work
> e2e this is one of the missing items.
> Basically, today the NN won't be able to parse IPv6 addresses if they are
> present in an include or exclude list.
> The patch has a dependency on HDFS-8078.14.patch (and has been tested on an
> IPv6-only cluster).
> This should be committed to the HADOOP-11890 branch.
[jira] [Commented] (HADOOP-12432) Add support for include/exclude lists on IPv6 setup
[ https://issues.apache.org/jira/browse/HADOOP-12432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17389278#comment-17389278 ]

Hemanth Boyina commented on HADOOP-12432:
-----------------------------------------

reopened and uploaded patch against trunk as the current patch has conflicts, please see here https://issues.apache.org/jira/browse/HADOOP-11890?focusedCommentId=17379845=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17379845 for more details

> Add support for include/exclude lists on IPv6 setup
> ---------------------------------------------------
>
>                 Key: HADOOP-12432
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12432
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0-alpha1
>         Environment: This affects only IPv6 cluster setup
>            Reporter: Nemanja Matkovic
>            Assignee: Nemanja Matkovic
>            Priority: Major
>              Labels: ipv6
>             Fix For: HADOOP-11890
>
>         Attachments: HADOOP-12432-HADOOP-11890.1.patch, HADOOP-12432-trunk.patch, HADOOP-12432.1.patch, HADOOP-12432.2.patch, HADOOP-12432.3.patch, HDFS-8078.15_plus_HDFS-9026.patch, HDFS-9026-1.patch, HDFS-9026-2.patch, HDFS-9026-HADOOP-11890.002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> This is a tracking item for having e2e IPv6 support in HDFS.
> Nate did great ground work in HDFS-8078, but for the whole feature to work
> e2e this is one of the missing items.
> Basically, today the NN won't be able to parse IPv6 addresses if they are
> present in an include or exclude list.
> The patch has a dependency on HDFS-8078.14.patch (and has been tested on an
> IPv6-only cluster).
> This should be committed to the HADOOP-11890 branch.
[jira] [Updated] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina updated HADOOP-12491:
------------------------------------
    Attachment: HADOOP-12491-HADOOP-17800.003.patch

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-12491
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12491
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: HADOOP-11890
>            Reporter: Nemanja Matkovic
>            Assignee: Nemanja Matkovic
>            Priority: Major
>              Labels: ipv6
>             Fix For: HADOOP-11890
>
>         Attachments: HADOOP-12491-HADOOP-11890.1.patch, HADOOP-12491-HADOOP-11890.2.patch, HADOOP-12491-HADOOP-17800.001.patch, HADOOP-12491-HADOOP-17800.002.patch, HADOOP-12491-HADOOP-17800.003.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122
[jira] [Commented] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17388501#comment-17388501 ]

Hemanth Boyina commented on HADOOP-12491:
-----------------------------------------

thanks for the review [~brahmareddy], uploaded patch fixing checkstyle

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-12491
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12491
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: HADOOP-11890
>            Reporter: Nemanja Matkovic
>            Assignee: Nemanja Matkovic
>            Priority: Major
>              Labels: ipv6
>             Fix For: HADOOP-11890
>
>         Attachments: HADOOP-12491-HADOOP-11890.1.patch, HADOOP-12491-HADOOP-11890.2.patch, HADOOP-12491-HADOOP-17800.001.patch, HADOOP-12491-HADOOP-17800.002.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122
[jira] [Updated] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina updated HADOOP-12491:
------------------------------------
    Attachment: HADOOP-12491-HADOOP-17800.002.patch

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-12491
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12491
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: HADOOP-11890
>            Reporter: Nemanja Matkovic
>            Assignee: Nemanja Matkovic
>            Priority: Major
>              Labels: ipv6
>             Fix For: HADOOP-11890
>
>         Attachments: HADOOP-12491-HADOOP-11890.1.patch, HADOOP-12491-HADOOP-11890.2.patch, HADOOP-12491-HADOOP-17800.001.patch, HADOOP-12491-HADOOP-17800.002.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122
[jira] [Updated] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina updated HADOOP-12491:
------------------------------------
    Attachment: HADOOP-12491-HADOOP-17800.001.patch
        Status: Patch Available  (was: Reopened)

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-12491
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12491
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: HADOOP-11890
>            Reporter: Nemanja Matkovic
>            Assignee: Nemanja Matkovic
>            Priority: Major
>              Labels: ipv6
>             Fix For: HADOOP-11890
>
>         Attachments: HADOOP-12491-HADOOP-11890.1.patch, HADOOP-12491-HADOOP-11890.2.patch, HADOOP-12491-HADOOP-17800.001.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122
[jira] [Updated] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hemanth Boyina updated HADOOP-12491:
------------------------------------
    Attachment:  (was: HADOOP-12491-HADOOP-17800.001.patch)

> Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-12491
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12491
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: HADOOP-11890
>            Reporter: Nemanja Matkovic
>            Assignee: Nemanja Matkovic
>            Priority: Major
>              Labels: ipv6
>             Fix For: HADOOP-11890
>
>         Attachments: HADOOP-12491-HADOOP-11890.1.patch, HADOOP-12491-HADOOP-11890.2.patch, HADOOP-12491-HADOOP-17800.001.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Hadoop-common portion of HADOOP-12122
[jira] [Updated] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-12491: Attachment: HADOOP-12491-HADOOP-17800.001.patch > Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 > literals > --- > > Key: HADOOP-12491 > URL: https://issues.apache.org/jira/browse/HADOOP-12491 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: HADOOP-11890 >Reporter: Nemanja Matkovic >Assignee: Nemanja Matkovic >Priority: Major > Labels: ipv6 > Fix For: HADOOP-11890 > > Attachments: HADOOP-12491-HADOOP-11890.1.patch, > HADOOP-12491-HADOOP-11890.2.patch, HADOOP-12491-HADOOP-17800.001.patch > > Original Estimate: 48h > Remaining Estimate: 48h > > Hadoop-common portion of HADOOP-12122 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387342#comment-17387342 ] Hemanth Boyina commented on HADOOP-12491: - reopened and uploaded patch against trunk as the current patch has conflicts , please see here https://issues.apache.org/jira/browse/HADOOP-11890?focusedCommentId=17379845=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17379845 for more details > Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 > literals > --- > > Key: HADOOP-12491 > URL: https://issues.apache.org/jira/browse/HADOOP-12491 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: HADOOP-11890 >Reporter: Nemanja Matkovic >Assignee: Nemanja Matkovic >Priority: Major > Labels: ipv6 > Fix For: HADOOP-11890 > > Attachments: HADOOP-12491-HADOOP-11890.1.patch, > HADOOP-12491-HADOOP-11890.2.patch > > Original Estimate: 48h > Remaining Estimate: 48h > > Hadoop-common portion of HADOOP-12122 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-12491) Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals
[ https://issues.apache.org/jira/browse/HADOOP-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina reopened HADOOP-12491: - > Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 > literals > --- > > Key: HADOOP-12491 > URL: https://issues.apache.org/jira/browse/HADOOP-12491 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: HADOOP-11890 >Reporter: Nemanja Matkovic >Assignee: Nemanja Matkovic >Priority: Major > Labels: ipv6 > Fix For: HADOOP-11890 > > Attachments: HADOOP-12491-HADOOP-11890.1.patch, > HADOOP-12491-HADOOP-11890.2.patch > > Original Estimate: 48h > Remaining Estimate: 48h > > Hadoop-common portion of HADOOP-12122 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12430) Fix HDFS client gets errors trying to connect to IPv6 DataNode
[ https://issues.apache.org/jira/browse/HADOOP-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387285#comment-17387285 ] Hemanth Boyina commented on HADOOP-12430: - thanks [~brahmareddy] for the review , ran the test cases locally, test cases are passing locally, failures are not related > Fix HDFS client gets errors trying to to connect to IPv6 DataNode > - > > Key: HADOOP-12430 > URL: https://issues.apache.org/jira/browse/HADOOP-12430 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.6.0 >Reporter: Nate Edel >Assignee: Nate Edel >Priority: Major > Labels: BB2015-05-TBR, ipv6 > Attachments: HDFS-8078-HADOOP-17800.001.patch, > HDFS-8078-HADOOP-17800.002.patch, HDFS-8078.10.patch, HDFS-8078.11.patch, > HDFS-8078.12.patch, HDFS-8078.13.patch, HDFS-8078.14.patch, > HDFS-8078.15.patch, HDFS-8078.9.patch, dummy.patch > > > 1st exception, on put: > 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception > java.lang.IllegalArgumentException: Does not contain a valid host:port > authority: 2401:db00:1010:70ba:face:0:8:0:50010 > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212) > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164) > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153) > at > org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588) > Appears to actually stem from code in DataNodeID which assumes it's safe to > append together (ipaddr + ":" + port) -- which is OK for IPv4 and not OK for > IPv6. 
NetUtils.createSocketAddr( ) assembles a Java URI object, which > requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010 > Currently using InetAddress.getByName() to validate IPv6 (guava > InetAddresses.forString has been flaky) but could also use our own parsing. > (From logging this, it seems like a low-enough frequency call that the extra > object creation shouldn't be problematic, and for me the slight risk of > passing in bad input that is not actually an IPv4 or IPv6 address and thus > calling an external DNS lookup is outweighed by getting the address > normalized and avoiding rewriting parsing.) > Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress() > --- > 2nd exception (on datanode) > 15/04/13 13:18:07 ERROR datanode.DataNode: > dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown > operation src: /2401:db00:20:7013:face:0:7:0:54152 dst: > /2401:db00:11:d010:face:0:2f:0:50010 > java.io.EOFException > at java.io.DataInputStream.readShort(DataInputStream.java:315) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226) > at java.lang.Thread.run(Thread.java:745) > Which also comes as client error "-get: 2401 is not an IP string literal." > This one has existing parsing logic which needs to shift to the last colon > rather than the first. Should also be a tiny bit faster by using lastIndexOf > rather than split. Could alternatively use the techniques above. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
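The two bugs described above — joining `ipaddr + ":" + port` without brackets, and splitting on the first colon instead of the last — can be sketched as a pair of bracket-aware helpers. This is an illustrative sketch with hypothetical names (`HostPortUtil`, `join`, `splitHostPort`), not Hadoop's actual API:

```java
public class HostPortUtil {
    // Join an address and port; bracket IPv6 literals so the result
    // round-trips through java.net.URI ("[::1]:50010", not "::1:50010").
    static String join(String ipAddr, int port) {
        if (ipAddr.indexOf(':') >= 0 && !ipAddr.startsWith("[")) {
            return "[" + ipAddr + "]:" + port;
        }
        return ipAddr + ":" + port;
    }

    // Split on the LAST colon (lastIndexOf, not split(":")), so an
    // IPv6 literal keeps all of its internal colons in the host part.
    static String[] splitHostPort(String hostPort) {
        int i = hostPort.lastIndexOf(':');
        String host = hostPort.substring(0, i);
        if (host.startsWith("[") && host.endsWith("]")) {
            host = host.substring(1, host.length() - 1);
        }
        return new String[] { host, hostPort.substring(i + 1) };
    }

    public static void main(String[] args) {
        // Prints "[2401:db00:1010:70ba:face:0:8:0]:50010"
        System.out.println(join("2401:db00:1010:70ba:face:0:8:0", 50010));
        // Prints "10.0.0.1:50010"
        System.out.println(join("10.0.0.1", 50010));
    }
}
```

Using `lastIndexOf` is also the slightly faster choice the comment mentions, since it avoids the regex machinery behind `String.split`.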
[jira] [Commented] (HADOOP-12430) Fix HDFS client gets errors trying to connect to IPv6 DataNode
[ https://issues.apache.org/jira/browse/HADOOP-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17385366#comment-17385366 ] Hemanth Boyina commented on HADOOP-12430: - reopened and uploaded patch against trunk as the current patch has conflicts , please see here https://issues.apache.org/jira/browse/HADOOP-11890?focusedCommentId=17379845=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17379845 for more details > Fix HDFS client gets errors trying to to connect to IPv6 DataNode > - > > Key: HADOOP-12430 > URL: https://issues.apache.org/jira/browse/HADOOP-12430 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.6.0 >Reporter: Nate Edel >Assignee: Nate Edel >Priority: Major > Labels: BB2015-05-TBR, ipv6 > Attachments: HDFS-8078-HADOOP-17800.001.patch, > HDFS-8078-HADOOP-17800.002.patch, HDFS-8078.10.patch, HDFS-8078.11.patch, > HDFS-8078.12.patch, HDFS-8078.13.patch, HDFS-8078.14.patch, > HDFS-8078.15.patch, HDFS-8078.9.patch, dummy.patch > > > 1st exception, on put: > 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception > java.lang.IllegalArgumentException: Does not contain a valid host:port > authority: 2401:db00:1010:70ba:face:0:8:0:50010 > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212) > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164) > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153) > at > org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588) > Appears to actually stem from code in DataNodeID which assumes it's safe to > append together (ipaddr + ":" + port) -- which is OK for IPv4 
and not OK for > IPv6. NetUtils.createSocketAddr( ) assembles a Java URI object, which > requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010 > Currently using InetAddress.getByName() to validate IPv6 (guava > InetAddresses.forString has been flaky) but could also use our own parsing. > (From logging this, it seems like a low-enough frequency call that the extra > object creation shouldn't be problematic, and for me the slight risk of > passing in bad input that is not actually an IPv4 or IPv6 address and thus > calling an external DNS lookup is outweighed by getting the address > normalized and avoiding rewriting parsing.) > Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress() > --- > 2nd exception (on datanode) > 15/04/13 13:18:07 ERROR datanode.DataNode: > dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown > operation src: /2401:db00:20:7013:face:0:7:0:54152 dst: > /2401:db00:11:d010:face:0:2f:0:50010 > java.io.EOFException > at java.io.DataInputStream.readShort(DataInputStream.java:315) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226) > at java.lang.Thread.run(Thread.java:745) > Which also comes as client error "-get: 2401 is not an IP string literal." > This one has existing parsing logic which needs to shift to the last colon > rather than the first. Should also be a tiny bit faster by using lastIndexOf > rather than split. Could alternatively use the techniques above. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12430) Fix HDFS client gets errors trying to connect to IPv6 DataNode
[ https://issues.apache.org/jira/browse/HADOOP-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-12430: Attachment: HDFS-8078-HADOOP-17800.002.patch > Fix HDFS client gets errors trying to to connect to IPv6 DataNode > - > > Key: HADOOP-12430 > URL: https://issues.apache.org/jira/browse/HADOOP-12430 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.6.0 >Reporter: Nate Edel >Assignee: Nate Edel >Priority: Major > Labels: BB2015-05-TBR, ipv6 > Attachments: HDFS-8078-HADOOP-17800.001.patch, > HDFS-8078-HADOOP-17800.002.patch, HDFS-8078.10.patch, HDFS-8078.11.patch, > HDFS-8078.12.patch, HDFS-8078.13.patch, HDFS-8078.14.patch, > HDFS-8078.15.patch, HDFS-8078.9.patch, dummy.patch > > > 1st exception, on put: > 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception > java.lang.IllegalArgumentException: Does not contain a valid host:port > authority: 2401:db00:1010:70ba:face:0:8:0:50010 > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212) > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164) > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153) > at > org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588) > Appears to actually stem from code in DataNodeID which assumes it's safe to > append together (ipaddr + ":" + port) -- which is OK for IPv4 and not OK for > IPv6. 
NetUtils.createSocketAddr( ) assembles a Java URI object, which > requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010 > Currently using InetAddress.getByName() to validate IPv6 (guava > InetAddresses.forString has been flaky) but could also use our own parsing. > (From logging this, it seems like a low-enough frequency call that the extra > object creation shouldn't be problematic, and for me the slight risk of > passing in bad input that is not actually an IPv4 or IPv6 address and thus > calling an external DNS lookup is outweighed by getting the address > normalized and avoiding rewriting parsing.) > Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress() > --- > 2nd exception (on datanode) > 15/04/13 13:18:07 ERROR datanode.DataNode: > dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown > operation src: /2401:db00:20:7013:face:0:7:0:54152 dst: > /2401:db00:11:d010:face:0:2f:0:50010 > java.io.EOFException > at java.io.DataInputStream.readShort(DataInputStream.java:315) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226) > at java.lang.Thread.run(Thread.java:745) > Which also comes as client error "-get: 2401 is not an IP string literal." > This one has existing parsing logic which needs to shift to the last colon > rather than the first. Should also be a tiny bit faster by using lastIndexOf > rather than split. Could alternatively use the techniques above. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11630) Allow hadoop.sh to bind to ipv6 conditionally
[ https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-11630: Attachment: HADOOP-11630-HADOOP-17800.003.patch > Allow hadoop.sh to bind to ipv6 conditionally > - > > Key: HADOOP-11630 > URL: https://issues.apache.org/jira/browse/HADOOP-11630 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Affects Versions: 2.6.0 >Reporter: Elliott Neil Clark >Assignee: Elliott Neil Clark >Priority: Major > Labels: ipv6 > Fix For: HADOOP-11890 > > Attachments: HADOOP-11630-HADOOP-17800.001.patch, > HADOOP-11630-HADOOP-17800.002.patch, HADOOP-11630-HADOOP-17800.003.patch, > HDFS-7834-branch-2-0.patch, HDFS-7834-trunk-0.patch > > > Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true > While this was needed a while ago. IPV6 on java works much better now and > there should be a way to allow it to bind dual stack if needed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-11630) Allow hadoop.sh to bind to ipv6 conditionally
[ https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17382091#comment-17382091 ] Hemanth Boyina edited comment on HADOOP-11630 at 7/16/21, 2:28 PM: --- reopened and uploaded patch against trunk as the current patch has conflicts , please see here https://issues.apache.org/jira/browse/HADOOP-11890?focusedCommentId=17379845=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17379845 for more details was (Author: hemanthboyina): reopened and uploaded patch against trunk as the current patch has conflicts , please see here for more details > Allow hadoop.sh to bind to ipv6 conditionally > - > > Key: HADOOP-11630 > URL: https://issues.apache.org/jira/browse/HADOOP-11630 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Affects Versions: 2.6.0 >Reporter: Elliott Neil Clark >Assignee: Elliott Neil Clark >Priority: Major > Labels: ipv6 > Fix For: HADOOP-11890 > > Attachments: HADOOP-11630-HADOOP-17800.001.patch, > HADOOP-11630-HADOOP-17800.002.patch, HDFS-7834-branch-2-0.patch, > HDFS-7834-trunk-0.patch > > > Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true > While this was needed a while ago. IPV6 on java works much better now and > there should be a way to allow it to bind dual stack if needed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11630) Allow hadoop.sh to bind to ipv6 conditionally
[ https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17382091#comment-17382091 ] Hemanth Boyina commented on HADOOP-11630: - reopened and uploaded patch against trunk as the current patch has conflicts , please see here for more details > Allow hadoop.sh to bind to ipv6 conditionally > - > > Key: HADOOP-11630 > URL: https://issues.apache.org/jira/browse/HADOOP-11630 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Affects Versions: 2.6.0 >Reporter: Elliott Neil Clark >Assignee: Elliott Neil Clark >Priority: Major > Labels: ipv6 > Fix For: HADOOP-11890 > > Attachments: HADOOP-11630-HADOOP-17800.001.patch, > HADOOP-11630-HADOOP-17800.002.patch, HDFS-7834-branch-2-0.patch, > HDFS-7834-trunk-0.patch > > > Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true > While this was needed a while ago. IPV6 on java works much better now and > there should be a way to allow it to bind dual stack if needed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12430) Fix HDFS client gets errors trying to connect to IPv6 DataNode
[ https://issues.apache.org/jira/browse/HADOOP-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-12430: Attachment: HDFS-8078-HADOOP-17800.001.patch Status: Patch Available (was: Reopened) > Fix HDFS client gets errors trying to to connect to IPv6 DataNode > - > > Key: HADOOP-12430 > URL: https://issues.apache.org/jira/browse/HADOOP-12430 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.6.0 >Reporter: Nate Edel >Assignee: Nate Edel >Priority: Major > Labels: BB2015-05-TBR, ipv6 > Attachments: HDFS-8078-HADOOP-17800.001.patch, HDFS-8078.10.patch, > HDFS-8078.11.patch, HDFS-8078.12.patch, HDFS-8078.13.patch, > HDFS-8078.14.patch, HDFS-8078.15.patch, HDFS-8078.9.patch, dummy.patch > > > 1st exception, on put: > 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception > java.lang.IllegalArgumentException: Does not contain a valid host:port > authority: 2401:db00:1010:70ba:face:0:8:0:50010 > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212) > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164) > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153) > at > org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588) > Appears to actually stem from code in DataNodeID which assumes it's safe to > append together (ipaddr + ":" + port) -- which is OK for IPv4 and not OK for > IPv6. 
NetUtils.createSocketAddr( ) assembles a Java URI object, which > requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010 > Currently using InetAddress.getByName() to validate IPv6 (guava > InetAddresses.forString has been flaky) but could also use our own parsing. > (From logging this, it seems like a low-enough frequency call that the extra > object creation shouldn't be problematic, and for me the slight risk of > passing in bad input that is not actually an IPv4 or IPv6 address and thus > calling an external DNS lookup is outweighed by getting the address > normalized and avoiding rewriting parsing.) > Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress() > --- > 2nd exception (on datanode) > 15/04/13 13:18:07 ERROR datanode.DataNode: > dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown > operation src: /2401:db00:20:7013:face:0:7:0:54152 dst: > /2401:db00:11:d010:face:0:2f:0:50010 > java.io.EOFException > at java.io.DataInputStream.readShort(DataInputStream.java:315) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226) > at java.lang.Thread.run(Thread.java:745) > Which also comes as client error "-get: 2401 is not an IP string literal." > This one has existing parsing logic which needs to shift to the last colon > rather than the first. Should also be a tiny bit faster by using lastIndexOf > rather than split. Could alternatively use the techniques above. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-12430) Fix HDFS client gets errors trying to connect to IPv6 DataNode
[ https://issues.apache.org/jira/browse/HADOOP-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina reopened HADOOP-12430: - > Fix HDFS client gets errors trying to to connect to IPv6 DataNode > - > > Key: HADOOP-12430 > URL: https://issues.apache.org/jira/browse/HADOOP-12430 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.6.0 >Reporter: Nate Edel >Assignee: Nate Edel >Priority: Major > Labels: BB2015-05-TBR, ipv6 > Attachments: HDFS-8078.10.patch, HDFS-8078.11.patch, > HDFS-8078.12.patch, HDFS-8078.13.patch, HDFS-8078.14.patch, > HDFS-8078.15.patch, HDFS-8078.9.patch, dummy.patch > > > 1st exception, on put: > 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception > java.lang.IllegalArgumentException: Does not contain a valid host:port > authority: 2401:db00:1010:70ba:face:0:8:0:50010 > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212) > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164) > at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153) > at > org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588) > Appears to actually stem from code in DataNodeID which assumes it's safe to > append together (ipaddr + ":" + port) -- which is OK for IPv4 and not OK for > IPv6. NetUtils.createSocketAddr( ) assembles a Java URI object, which > requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010 > Currently using InetAddress.getByName() to validate IPv6 (guava > InetAddresses.forString has been flaky) but could also use our own parsing. 
> (From logging this, it seems like a low-enough frequency call that the extra > object creation shouldn't be problematic, and for me the slight risk of > passing in bad input that is not actually an IPv4 or IPv6 address and thus > calling an external DNS lookup is outweighed by getting the address > normalized and avoiding rewriting parsing.) > Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress() > --- > 2nd exception (on datanode) > 15/04/13 13:18:07 ERROR datanode.DataNode: > dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown > operation src: /2401:db00:20:7013:face:0:7:0:54152 dst: > /2401:db00:11:d010:face:0:2f:0:50010 > java.io.EOFException > at java.io.DataInputStream.readShort(DataInputStream.java:315) > at > org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58) > at > org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226) > at java.lang.Thread.run(Thread.java:745) > Which also comes as client error "-get: 2401 is not an IP string literal." > This one has existing parsing logic which needs to shift to the last colon > rather than the first. Should also be a tiny bit faster by using lastIndexOf > rather than split. Could alternatively use the techniques above. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11630) Allow hadoop.sh to bind to ipv6 conditionally
[ https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-11630: Attachment: HADOOP-11630-HADOOP-17800.002.patch Status: Patch Available (was: Reopened) > Allow hadoop.sh to bind to ipv6 conditionally > - > > Key: HADOOP-11630 > URL: https://issues.apache.org/jira/browse/HADOOP-11630 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Affects Versions: 2.6.0 >Reporter: Elliott Neil Clark >Assignee: Elliott Neil Clark >Priority: Major > Labels: ipv6 > Fix For: HADOOP-11890 > > Attachments: HADOOP-11630-HADOOP-17800.001.patch, > HADOOP-11630-HADOOP-17800.002.patch, HDFS-7834-branch-2-0.patch, > HDFS-7834-trunk-0.patch > > > Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true > While this was needed a while ago. IPV6 on java works much better now and > there should be a way to allow it to bind dual stack if needed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11630) Allow hadoop.sh to bind to ipv6 conditionally
[ https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-11630: Attachment: HADOOP-11630-HADOOP-17800.001.patch > Allow hadoop.sh to bind to ipv6 conditionally > - > > Key: HADOOP-11630 > URL: https://issues.apache.org/jira/browse/HADOOP-11630 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Affects Versions: 2.6.0 >Reporter: Elliott Neil Clark >Assignee: Elliott Neil Clark >Priority: Major > Labels: ipv6 > Fix For: HADOOP-11890 > > Attachments: HADOOP-11630-HADOOP-17800.001.patch, > HDFS-7834-branch-2-0.patch, HDFS-7834-trunk-0.patch > > > Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true > While this was needed a while ago. IPV6 on java works much better now and > there should be a way to allow it to bind dual stack if needed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-11630) Allow hadoop.sh to bind to ipv6 conditionally
[ https://issues.apache.org/jira/browse/HADOOP-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina reopened HADOOP-11630: - > Allow hadoop.sh to bind to ipv6 conditionally > - > > Key: HADOOP-11630 > URL: https://issues.apache.org/jira/browse/HADOOP-11630 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Affects Versions: 2.6.0 >Reporter: Elliott Neil Clark >Assignee: Elliott Neil Clark >Priority: Major > Labels: ipv6 > Fix For: HADOOP-11890 > > Attachments: HADOOP-11630-HADOOP-17800.001.patch, > HDFS-7834-branch-2-0.patch, HDFS-7834-trunk-0.patch > > > Currently the bash scripts unconditionally add -Djava.net.preferIPv4Stack=true > While this was needed a while ago. IPV6 on java works much better now and > there should be a way to allow it to bind dual stack if needed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11890) Uber-JIRA: Hadoop should support IPv6
[ https://issues.apache.org/jira/browse/HADOOP-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17375835#comment-17375835 ] Hemanth Boyina commented on HADOOP-11890: - Yes [~arp]. Earlier, while we were working on IPv6, we planned to rebase the IPv6 branch (HADOOP-11890) onto trunk, but there were many conflicts, so we manually rebased all the IPv6 Jiras onto our local trunk and proceeded from there. We have now come up with an approach: under trunk, create a new branch for IPv6 and rebase all the Jiras committed on branch HADOOP-11890 onto the new IPv6 branch. Do you have any suggestions or thoughts on this? > Uber-JIRA: Hadoop should support IPv6 > - > > Key: HADOOP-11890 > URL: https://issues.apache.org/jira/browse/HADOOP-11890 > Project: Hadoop Common > Issue Type: Improvement > Components: net >Reporter: Nate Edel >Assignee: Nate Edel >Priority: Major > Labels: ipv6 > Attachments: hadoop_2.7.3_ipv6_commits.txt > > > Hadoop currently treats IPv6 as unsupported. Track related smaller issues to > support IPv6. > (Current case here is mainly HBase on HDFS, so any suggestions about other > test cases/workload are really appreciated.) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11890) Uber-JIRA: Hadoop should support IPv6
[ https://issues.apache.org/jira/browse/HADOOP-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17363591#comment-17363591 ] Hemanth Boyina commented on HADOOP-11890: - Thanks for the ping [~arp], and sorry for the late response. Yes, we have tried these changes. The subtasks under this Jira were written on top of branch-2.7.3, so we manually rebased all of them on top of trunk and deployed the cluster, but there was an issue parsing IPv6 addresses in NetUtils#createSocketAddr. NetUtils#createSocketAddr is a common utility that parses an IP address and creates an InetSocketAddress, and it is used all over Hadoop, so we modified this utility to support IPv6 addresses. In HADOOP-17542 we have attached the test scenarios that we verified on a successful deployment of Hadoop with IPv6. > Uber-JIRA: Hadoop should support IPv6 > - > > Key: HADOOP-11890 > URL: https://issues.apache.org/jira/browse/HADOOP-11890 > Project: Hadoop Common > Issue Type: Improvement > Components: net >Reporter: Nate Edel >Assignee: Nate Edel >Priority: Major > Labels: ipv6 > Attachments: hadoop_2.7.3_ipv6_commits.txt > > > Hadoop currently treats IPv6 as unsupported. Track related smaller issues to > support IPv6. > (Current case here is mainly HBase on HDFS, so any suggestions about other > test cases/workload are really appreciated.) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17542) IPV6 support in Netutils#createSocketAddress
[ https://issues.apache.org/jira/browse/HADOOP-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17363447#comment-17363447 ] Hemanth Boyina commented on HADOOP-17542: - [~gb.ana...@gmail.com] can you please raise this as a GitHub PR? > IPV6 support in Netutils#createSocketAddress > - > > Key: HADOOP-17542 > URL: https://issues.apache.org/jira/browse/HADOOP-17542 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.1.1 >Reporter: ANANDA G B >Priority: Minor > Labels: ipv6 > Attachments: HADOOP-17542-HADOOP-11890-001.patch, Test Scenarios > Verified in IPV6 cluster.doc > > > Currently NetUtils#createSocketAddr does not support an IPv6 target: if the target is an IPv6 address, it throws "Does not contain a valid host:port authority: ". This needs to be supported.
> {code:java}
> public static InetSocketAddress createSocketAddr(String target,
>     int defaultPort,
>     String configName,
>     boolean useCacheIfPresent) {
>   String helpText = "";
>   if (configName != null) {
>     helpText = " (configuration property '" + configName + "')";
>   }
>   if (target == null) {
>     throw new IllegalArgumentException("Target address cannot be null." + helpText);
>   }
>   target = target.trim();
>   boolean hasScheme = target.contains("://");
>   URI uri = createURI(target, hasScheme, helpText, useCacheIfPresent);
>   String host = uri.getHost();
>   int port = uri.getPort();
>   if (port == -1) {
>     port = defaultPort;
>   }
>   String path = uri.getPath();
>   if ((host == null) || (port < 0) ||
>       (!hasScheme && path != null && !path.isEmpty())) {
>     throw new IllegalArgumentException(
>         "Does not contain a valid host:port authority: " + target + helpText);
>   }
>   return createSocketAddrForHost(host, port);
> }
> {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
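For context, URI-based parsing rejects IPv6 targets, so the fix has to accept bracketed IPv6 literals alongside the existing host:port form. Below is a minimal sketch of bracket-aware parsing; the class and method names are hypothetical illustrations, not the actual HADOOP-17542 patch.

```java
import java.net.InetSocketAddress;

// Hypothetical illustration of bracket-aware host:port parsing;
// not the actual HADOOP-17542 patch.
public final class HostPortParser {
  public static InetSocketAddress parseHostPort(String target, int defaultPort) {
    String host;
    int port = defaultPort;
    if (target.startsWith("[")) {
      // IPv6 literal in RFC 3986 bracket form, e.g. "[::1]:8020"
      int close = target.indexOf(']');
      if (close < 0) {
        throw new IllegalArgumentException("Unclosed IPv6 literal: " + target);
      }
      host = target.substring(1, close);
      if (close + 1 < target.length()) {
        // anything after "]" must be ":<port>"
        if (target.charAt(close + 1) != ':') {
          throw new IllegalArgumentException("Expected ':' after ']': " + target);
        }
        port = Integer.parseInt(target.substring(close + 2));
      }
    } else {
      int colon = target.lastIndexOf(':');
      if (colon >= 0 && target.indexOf(':') == colon) {
        // exactly one colon: hostname or IPv4 address plus port
        host = target.substring(0, colon);
        port = Integer.parseInt(target.substring(colon + 1));
      } else {
        // no colon (bare host) or several colons (raw IPv6 without a port)
        host = target;
      }
    }
    return InetSocketAddress.createUnresolved(host, port);
  }
}
```

With this shape, "[2001:db8::1]:9000", "nn1:9000", and a bare hostname all parse, which is the case the thread says plain host:port parsing gets wrong for IPv6 literals.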
[jira] [Updated] (HADOOP-17588) CryptoInputStream#close() should be syncronized
[ https://issues.apache.org/jira/browse/HADOOP-17588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-17588: Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) > CryptoInputStream#close() should be syncronized > --- > > Key: HADOOP-17588 > URL: https://issues.apache.org/jira/browse/HADOOP-17588 > Project: Hadoop Common > Issue Type: Bug >Reporter: Renukaprasad C >Assignee: Renukaprasad C >Priority: Major > Fix For: 3.4.0 > > Attachments: HADOOP-17588.001.patch, image-2021-03-13-23-56-18-865.png > > > org.apache.hadoop.crypto.CryptoInputStream.close() - when 2 threads try to > close the stream second thread, fails with error. > This operation should be synchronized to avoid multiple threads to perform > the close operation concurrently. > !image-2021-03-13-23-56-18-865.png|thumbnail! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17588) CryptoInputStream#close() should be syncronized
[ https://issues.apache.org/jira/browse/HADOOP-17588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17315497#comment-17315497 ] Hemanth Boyina commented on HADOOP-17588: - +1, committing to trunk thanks [~prasad-acit] for your contribution , thanks [~brahmareddy] for the review > CryptoInputStream#close() should be syncronized > --- > > Key: HADOOP-17588 > URL: https://issues.apache.org/jira/browse/HADOOP-17588 > Project: Hadoop Common > Issue Type: Bug >Reporter: Renukaprasad C >Assignee: Renukaprasad C >Priority: Major > Attachments: HADOOP-17588.001.patch, image-2021-03-13-23-56-18-865.png > > > org.apache.hadoop.crypto.CryptoInputStream.close() - when 2 threads try to > close the stream second thread, fails with error. > This operation should be synchronized to avoid multiple threads to perform > the close operation concurrently. > !image-2021-03-13-23-56-18-865.png|thumbnail! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
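The race described above is the classic double-close problem; the HADOOP-17588 fix makes close() synchronized so a second caller becomes a no-op. A minimal sketch of the pattern (a simplified stand-in, not the real CryptoInputStream):

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;

// Simplified sketch of the close-once pattern applied in HADOOP-17588;
// not the real CryptoInputStream.
class GuardedStream implements Closeable {
  private final InputStream wrapped;
  private boolean closed = false;

  GuardedStream(InputStream wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public synchronized void close() throws IOException {
    if (closed) {
      return;  // a second thread sees the flag and returns instead of failing
    }
    closed = true;       // mark first, so the stream is closed at most once
    wrapped.close();     // release underlying resources exactly once
  }
}
```

Because the method is synchronized, two threads calling close() concurrently serialize on the monitor: the first releases the resources, the second observes the flag and returns.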
[jira] [Commented] (HADOOP-17532) Yarn Job execution get failed when LZ4 Compression Codec is used
[ https://issues.apache.org/jira/browse/HADOOP-17532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285892#comment-17285892 ] Hemanth Boyina commented on HADOOP-17532: - this is a Hadoop Common issue, hence moved > Yarn Job execution get failed when LZ4 Compression Codec is used > > > Key: HADOOP-17532 > URL: https://issues.apache.org/jira/browse/HADOOP-17532 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bhavik Patel >Priority: Major > Attachments: HDFS-15838.001.patch, LZ4.png > > > When we try to compress a file using the LZ4 codec compression type, the > yarn job fails with the error message: > {code:java} > net.jpountz.lz4.LZ4Compressor.compress(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)V > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-17532) Yarn Job execution get failed when LZ4 Compression Codec is used
[ https://issues.apache.org/jira/browse/HADOOP-17532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina moved HDFS-15838 to HADOOP-17532: Component/s: (was: hdfs) Key: HADOOP-17532 (was: HDFS-15838) Project: Hadoop Common (was: Hadoop HDFS) > Yarn Job execution get failed when LZ4 Compression Codec is used > > > Key: HADOOP-17532 > URL: https://issues.apache.org/jira/browse/HADOOP-17532 > Project: Hadoop Common > Issue Type: Bug >Reporter: Bhavik Patel >Priority: Major > Attachments: HDFS-15838.001.patch, LZ4.png > > > When we try to compress a file using the LZ4 codec compression type, the > yarn job fails with the error message: > {code:java} > net.jpountz.lz4.LZ4Compressor.compress(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)V > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-17144: Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Fix For: 3.4.0 > > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch, HADOOP-17144.005.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17216185#comment-17216185 ] Hemanth Boyina commented on HADOOP-17144: - thanks for the review [~iwasakims] committed to trunk > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch, HADOOP-17144.005.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208798#comment-17208798 ] Hemanth Boyina commented on HADOOP-17144: - sorry for being late , thanks for the review [~iwasakims] updated the patch , please review > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch, HADOOP-17144.005.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-17144: Attachment: HADOOP-17144.005.patch > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch, HADOOP-17144.005.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195051#comment-17195051 ] Hemanth Boyina commented on HADOOP-17144: - thanks for the comment [~iwasakims], and sorry for the late response {quote}Adding a test case similar to TestLz4CompressorDecompressor#testSetInputWithBytesSizeMoreThenDefaultLz4CompressorByfferSize for decompressor would make the point clear {quote} we do have a test case similar to this scenario in TestCompressorDecompressor#testCompressorDecompressorWithExeedBufferLimit. We modified the lz4 constructors to use the default buffer size; the compressor worked the same way as you mentioned, but the decompressor did not: the lz4 decompressor API returned a negative value for this scenario, which is incorrect. Please correct me if I am missing something here. > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188694#comment-17188694 ] Hemanth Boyina edited comment on HADOOP-17144 at 9/1/20, 5:41 PM: -- {quote}Lz4Decompressor#setInputFromSavedData copies the part of the userBuf to compressedDirectBuf (working buffer) considering the capacit {quote} directBufferSize will be the size of raw data , so the compressedDirectBuf and UncompressedDirectBuf will have of same size {code:java} ((ByteBuffer) compressedDirectBuf).put(userBuf, userBufOff, compressedDirectBufLen); {code} userBuf will be total compressed buffer and compressedDirectBufLen is total number of compressed bytes , so the total userBuf will be kept in compressedDirectBuf here and the same is used for decompression With existing code for a raw data of length 204800 , the compressedDirectBuf and UncompressedDirectBuf will have capacity of 204800 , the compressed bytes size is 197622 , so the total compressed bytes is getting kept in compressedDirectBuf kindly correct me if i am missing something here was (Author: hemanthboyina): {quote}Lz4Decompressor#setInputFromSavedData copies the part of the userBuf to compressedDirectBuf (working buffer) considering the capacit {quote} directBufferSize will be the size of raw data , so the compressedDirectBuf and UncompressedDirectBuf will have of same size {code:java} ((ByteBuffer) compressedDirectBuf).put(userBuf, userBufOff, compressedDirectBufLen); {code} > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: 
common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17188694#comment-17188694 ] Hemanth Boyina commented on HADOOP-17144: - {quote}Lz4Decompressor#setInputFromSavedData copies the part of the userBuf to compressedDirectBuf (working buffer) considering the capacit {quote} directBufferSize will be the size of raw data , so the compressedDirectBuf and UncompressedDirectBuf will have of same size {code:java} ((ByteBuffer) compressedDirectBuf).put(userBuf, userBufOff, compressedDirectBufLen); {code} > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17222) Create socket address combined with cache to speed up hdfs client choose DataNode
[ https://issues.apache.org/jira/browse/HADOOP-17222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17185086#comment-17185086 ] Hemanth Boyina commented on HADOOP-17222: - thanks for the reporting the issue [~fanrui] {quote}The CPU usage of DFSInputStream.getBestNodeDNAddrPair has been optimized from 4.86% to 0.54%. {quote} can you report the CPU usage for same number of samples for DFSInputStream.getBestNodeDNAddrPair {quote}I’m thinking two new configs seem a bit overkill - unrelated to this but Hadoop has too many configs while we seldom clean them up {quote} agree with this , there has been increasing number of configurations , we can avoid configs here > Create socket address combined with cache to speed up hdfs client choose > DataNode > - > > Key: HADOOP-17222 > URL: https://issues.apache.org/jira/browse/HADOOP-17222 > Project: Hadoop Common > Issue Type: Improvement > Components: common, hdfs-client > Environment: HBase version: 2.1.0 > JVM: -Xmx2g -Xms2g > hadoop hdfs version: 2.7.4 > disk:SSD > OS:CentOS Linux release 7.4.1708 (Core) > JMH Benchmark: @Fork(value = 1) > @Warmup(iterations = 300) > @Measurement(iterations = 300) >Reporter: fanrui >Assignee: fanrui >Priority: Major > Attachments: After Optimization remark.png, After optimization.svg, > Before Optimization remark.png, Before optimization.svg > > > Note:Not only the hdfs client can get the current benefit, all callers of > NetUtils.createSocketAddr will get the benefit. Just use hdfs client as an > example. > > Hdfs client selects best DN for hdfs Block. method call stack: > DFSInputStream.chooseDataNode -> getBestNodeDNAddrPair -> > NetUtils.createSocketAddr > NetUtils.createSocketAddr creates the corresponding InetSocketAddress based > on the host and port. There are some heavier operations in the > NetUtils.createSocketAddr method, for example: URI.create(target), so > NetUtils.createSocketAddr takes more time to execute. > The following is my performance report. 
The report is based on HBase calling > hdfs. HBase is a high-frequency access client for hdfs, because HBase read > operations often access a small DataBlock (about 64k) instead of the entire > HFile. In the case of high frequency access, the NetUtils.createSocketAddr > method is time-consuming. > h3. Test Environment: > > {code:java} > HBase version: 2.1.0 > JVM: -Xmx2g -Xms2g > hadoop hdfs version: 2.7.4 > disk:SSD > OS:CentOS Linux release 7.4.1708 (Core) > JMH Benchmark: @Fork(value = 1) > @Warmup(iterations = 300) > @Measurement(iterations = 300) > {code} > h4. Before Optimization FlameGraph: > In the figure, we can see that DFSInputStream.getBestNodeDNAddrPair accounts > for 4.86% of the entire CPU, and the creation of URIs accounts for a larger > proportion. > !Before Optimization remark.png! > h3. Optimization ideas: > NetUtils.createSocketAddr creates InetSocketAddress based on host and port. > Here we can add Cache to InetSocketAddress. The key of Cache is host and > port, and the value is InetSocketAddress. > h4. After Optimization FlameGraph: > In the figure, we can see that DFSInputStream.getBestNodeDNAddrPair accounts > for 0.54% of the entire CPU. Here, ConcurrentHashMap is used as the Cache, > and the ConcurrentHashMap.get() method gets data from the Cache. The CPU > usage of DFSInputStream.getBestNodeDNAddrPair has been optimized from 4.86% > to 0.54%. > !After Optimization remark.png! > h3. Original FlameGraph link: > [Before > Optimization|https://drive.google.com/file/d/133L5m75u2tu_KgKfGHZLEUzGR0XAfUl6/view?usp=sharing] > [After Optimization > FlameGraph|https://drive.google.com/file/d/133L5m75u2tu_KgKfGHZLEUzGR0XAfUl6/view?usp=sharing] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
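The optimization described above amounts to memoizing the result of NetUtils.createSocketAddr, keyed by host and port. A minimal sketch of the idea follows; the class name and structure are illustrative assumptions, and the real patch must also consider cache invalidation so stale DNS entries are not pinned forever.

```java
import java.net.InetSocketAddress;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the caching idea in HADOOP-17222; the actual
// change lives in NetUtils and handles resolution and invalidation.
final class SocketAddrCache {
  private static final ConcurrentHashMap<String, InetSocketAddress> CACHE =
      new ConcurrentHashMap<>();

  static InetSocketAddress get(String host, int port) {
    // computeIfAbsent runs the expensive construction only on the
    // first lookup for a given host:port key
    return CACHE.computeIfAbsent(host + ":" + port,
        k -> InetSocketAddress.createUnresolved(host, port));
  }
}
```

The first call pays the construction cost; every later call for the same host:port is a single hash lookup, which is what moves DFSInputStream.getBestNodeDNAddrPair off the flame graph in the report above.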
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184169#comment-17184169 ] Hemanth Boyina commented on HADOOP-17144: - thanks for the review [~iwasakims] {quote}Since user of Lz4Decompressor provides already compressed data as input, we do not need to expand the internal buffer? {quote} The user's compressed data input is first kept in userBuf, and in setInputFromSavedData the userBuf is put into compressedDirectBuf; since the compressed data length can be greater than the source length, we need to expand the buffer. {quote}This looks incorrect since the userBufLen could be greater than directBufferSize. {quote} In the LZ4 decompressor, userBufLen is nothing but the compressed data length, so since the compressed data length can be greater than the source length, we need to set compressedDirectBufLen to userBufLen. LZ4 states that compression is guaranteed to succeed if 'dstCapacity' >= LZ4_compressBound(srcSize). {quote}cc and whitespace warnings should be addressed too updating the calculation of the maxlength {quote} will update in the next patch > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-17219) ZStandardCodec compression mail fail(generic error) when encounter specific file
[ https://issues.apache.org/jira/browse/HADOOP-17219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina moved HDFS-15445 to HADOOP-17219: Component/s: (was: hdfs) Key: HADOOP-17219 (was: HDFS-15445) Affects Version/s: (was: 2.6.5) 2.6.5 Project: Hadoop Common (was: Hadoop HDFS) > ZStandardCodec compression mail fail(generic error) when encounter specific > file > > > Key: HADOOP-17219 > URL: https://issues.apache.org/jira/browse/HADOOP-17219 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.6.5 > Environment: zstd 1.3.3 > hadoop 2.6.5 > > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java > @@ -62,10 +62,8 @@ > @BeforeClass > public static void beforeClass() throws Exception { > CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64); > - uncompressedFile = new File(TestZStandardCompressorDecompressor.class > - .getResource("/zstd/test_file.txt").toURI()); > - compressedFile = new File(TestZStandardCompressorDecompressor.class > - .getResource("/zstd/test_file.txt.zst").toURI()); > + uncompressedFile = new File("/tmp/badcase.data"); > + compressedFile = new File("/tmp/badcase.data.zst"); >Reporter: Igloo >Priority: Blocker > Attachments: HDFS-15445.patch, badcase.data, > image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, > image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png > > > *Problem:* > In our production environment, we put file in hdfs with zstd compressor, > recently, we find that a specific file may leads to zstandard compressor > failures. > And we can reproduce the issue with specific file(attached file: badcase.data) > !image-2020-06-30-11-51-18-026.png|width=1031,height=230! 
> > *Analysis*: > ZStandardCompressor uses buffersize (from the zstd recommended compress output buffer > size) for both inBufferSize and outBufferSize > !image-2020-06-30-11-35-46-859.png|width=1027,height=387! > but zstd indeed provides two separate recommended sizes, inputBufferSize and > outputBufferSize > !image-2020-06-30-11-39-17-861.png! > > *Workaround* > One workaround, using the recommended in/out buffer sizes provided by the zstd lib, > can avoid the problem, but we don't know why. > zstd recommended input buffer size: 131072 (128 * 1024) > zstd recommended output buffer size: 131591 > !image-2020-06-30-11-42-44-585.png|width=1023,height=196! > > > > > > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17182638#comment-17182638 ] Hemanth Boyina commented on HADOOP-17144: - thanks for the review [~iwasakims] updated the patch by fixing the comments , please review > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-17144: Attachment: HADOOP-17144.004.patch > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch, HADOOP-17144.004.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17179503#comment-17179503 ] Hemanth Boyina commented on HADOOP-17144: - thanks for the comment [~iwasakims] i have verified in CentOS 7, SUSE and in UBUNTU > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178798#comment-17178798 ] Hemanth Boyina commented on HADOOP-17144: - [~aajisaka] [~iwasakims] can you please review the patch > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175444#comment-17175444 ] Hemanth Boyina commented on HADOOP-17144: - test failures were not related please review > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-17144: Attachment: HADOOP-17144.003.patch > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, > HADOOP-17144.003.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17172544#comment-17172544 ] Hemanth Boyina commented on HADOOP-17144: - LZ4 states that compression is guaranteed to succeed if 'dstCapacity' >= LZ4_compressBound(srcSize), where LZ4_compressBound is {code:java} LZ4_compressBound(isize) = (isize) + ((isize)/255) + 16{code} updated the patch as per the above rule and changed the buffer capacity in hadoop > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
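The bound quoted above can be checked directly. Here is a Java mirror of the C macro (illustrative only; Hadoop's native code computes it in C), showing the worst-case output size the resized buffers must accommodate:

```java
// Java mirror of lz4's LZ4_COMPRESSBOUND macro: the worst-case compressed
// size for an input of isize bytes. A destination buffer at least this
// large guarantees compression succeeds.
final class Lz4Bound {
  static int maxCompressedLength(int isize) {
    return isize + (isize / 255) + 16;
  }
}
```

For the 204800-byte raw data discussed earlier in this thread, the bound works out to 205619 bytes, slightly larger than the input itself, which is why sizing compressedDirectBuf to exactly the raw data length is not enough.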
[jira] [Updated] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-17144: Attachment: HADOOP-17144.002.patch > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17167804#comment-17167804 ] Hemanth Boyina commented on HADOOP-17144: - hi [~cyan4973] [~cnauroth] [~cmccabe] , i was trying to update LZ4 to v1.9.2 , i could find some incompatibility by using LZ4_compress in lz4compressor.c , so i tried with LZ4_compress_default , still i could see some related test failures any suggestions ? > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HADOOP-17144: Attachment: HADOOP-17144.001.patch Status: Patch Available (was: Open) > Update Hadoop's lz4 to v1.9.2 > - > > Key: HADOOP-17144 > URL: https://issues.apache.org/jira/browse/HADOOP-17144 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Major > Attachments: HADOOP-17144.001.patch > > > Update hadoop's native lz4 to v1.9.2 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
Hemanth Boyina created HADOOP-17144:
---------------------------------------

             Summary: Update Hadoop's lz4 to v1.9.2
                 Key: HADOOP-17144
                 URL: https://issues.apache.org/jira/browse/HADOOP-17144
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Hemanth Boyina
            Assignee: Hemanth Boyina

Update hadoop's native lz4 to v1.9.2
[jira] [Commented] (HADOOP-17140) KMSClientProvider Sends HTTP GET with null "Content-Type" Header
[ https://issues.apache.org/jira/browse/HADOOP-17140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161424#comment-17161424 ]

Hemanth Boyina commented on HADOOP-17140:
-----------------------------------------

Thanks [~agrams] for the report. Can you provide a patch with a UT?

> KMSClientProvider Sends HTTP GET with null "Content-Type" Header
> ----------------------------------------------------------------
>
>                 Key: HADOOP-17140
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17140
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: kms
>    Affects Versions: 2.7.3
>            Reporter: Axton Grams
>            Priority: Major
>
> Hive Server uses 'org.apache.hadoop.crypto.key.kms.KMSClientProvider' when interacting with HDFS TDE zones. This triggers a call to the KMS server. If the request method is a GET, the HTTP Content-Type header is sent with a null value.
> When using Ranger KMS, the embedded Tomcat server returns an HTTP 400 error with the following error message:
> {quote}HTTP Status 400 - Bad Content-Type header value: ''
> The request sent by the client was syntactically incorrect.
> {quote}
> This only occurs with HTTP GET method calls.
> This is a captured HTTP request:
> {code:java}
> GET /kms/v1/key/xxx/_metadata?doAs=yyy=yyy HTTP/1.1
> Cookie: hadoop.auth="u=hive=hive/domain@domain.com=kerberos-dt=123789456=xxx="
> Content-Type:
> Cache-Control: no-cache
> Pragma: no-cache
> User-Agent: Java/1.8.0_241
> Host: kms.domain.com:9292
> Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
> Connection: keep-alive{code}
> Note the empty 'Content-Type' header.
> And the corresponding response:
> {code:java}
> HTTP/1.1 400 Bad Request
> Server: Apache-Coyote/1.1
> Content-Type: text/html;charset=utf-8
> Content-Language: en
> Content-Length: 1034
> Date: Thu, 16 Jul 2020 04:23:18 GMT
> Connection: close{code}
> This is the stack trace from the Hive server:
> {code:java}
> Caused by: java.io.IOException: HTTP status [400], message [Bad Request]
> at org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:608)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:597)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:566)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.getMetadata(KMSClientProvider.java:861)
> at org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.compareKeyStrength(Hadoop23Shims.java:1506)
> at org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.comparePathKeyStrength(Hadoop23Shims.java:1442)
> at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.comparePathKeyStrength(SemanticAnalyzer.java:1990)
> ...
> 38 more{code}
> This looks to occur in [https://github.com/hortonworks/hadoop-release/blob/HDP-2.6.5.165-3-tag/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java#L591-L599]
> {code:java}
> if (authRetryCount > 0) {
>   String contentType = conn.getRequestProperty(CONTENT_TYPE);
>   String requestMethod = conn.getRequestMethod();
>   URL url = conn.getURL();
>   conn = createConnection(url, requestMethod);
>   conn.setRequestProperty(CONTENT_TYPE, contentType);
>   return call(conn, jsonOutput, expectedResponse, klass,
>       authRetryCount - 1);
> }{code}
> I think when a GET method is received, the Content-Type header is not defined. Then, in line 592:
> {code:java}
> String contentType = conn.getRequestProperty(CONTENT_TYPE);
> {code}
> the code attempts to retrieve the CONTENT_TYPE request property, which returns null.
> Then, in line 596:
> {code:java}
> conn.setRequestProperty(CONTENT_TYPE, contentType);
> {code}
> the null content type is used to construct the HTTP call to the KMS server. A null Content-Type header is not allowed and is considered malformed by the receiving KMS server.
> I propose this code be updated to check the content type read from the original connection, and not use a null value to construct the new KMS connection.
> Proposed pseudo-patch:
> {code:java}
> --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
> +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
> @@ -593,7 +593,9 @@ public HttpURLConnection run() throws Exception {
>      String requestMethod = conn.getRequestMethod();
>      URL url = conn.getURL();
>      conn = createConnection(url, requestMethod);
> -    conn.setRequestProperty(CONTENT_TYPE, contentType);
> +    if (contentType != null) {
> +      conn.setRequestProperty(CONTENT_TYPE, contentType);
> +    }
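A unit test along the lines requested above could exercise the guarded header copy directly, without any network traffic, since request properties can be read and written before a connection is opened. The helper and class names below are hypothetical illustrations of the proposed fix, not part of KMSClientProvider's API:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ContentTypeCopy {
    static final String CONTENT_TYPE = "Content-Type";

    // Copy the Content-Type header from the original connection to the
    // retried one only if it was actually set. Propagating the null that
    // getRequestProperty() returns for an unset header is what produced
    // the empty "Content-Type:" line the KMS rejected with HTTP 400.
    static void copyContentType(HttpURLConnection from, HttpURLConnection to) {
        String contentType = from.getRequestProperty(CONTENT_TYPE);
        if (contentType != null) {
            to.setRequestProperty(CONTENT_TYPE, contentType);
        }
    }

    public static void main(String[] args) throws Exception {
        // openConnection() does not connect, so no KMS endpoint is needed.
        HttpURLConnection get = (HttpURLConnection) new URL("http://localhost/").openConnection();
        HttpURLConnection retryOfGet = (HttpURLConnection) new URL("http://localhost/").openConnection();
        copyContentType(get, retryOfGet);
        // The GET carried no Content-Type, so the retry must not get one either.
        System.out.println(retryOfGet.getRequestProperty(CONTENT_TYPE)); // null

        HttpURLConnection post = (HttpURLConnection) new URL("http://localhost/").openConnection();
        post.setRequestProperty(CONTENT_TYPE, "application/json");
        HttpURLConnection retryOfPost = (HttpURLConnection) new URL("http://localhost/").openConnection();
        copyContentType(post, retryOfPost);
        // A request that did have a Content-Type keeps it on retry.
        System.out.println(retryOfPost.getRequestProperty(CONTENT_TYPE)); // application/json
    }
}
```

This mirrors the intent of the pseudo-patch: the retry path preserves an explicitly set Content-Type while leaving header-less GETs untouched.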