[jira] [Commented] (HADOOP-14313) Replace/improve Hadoop's byte[] comparator
[ https://issues.apache.org/jira/browse/HADOOP-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979767#comment-15979767 ]

Vikas Vishwakarma commented on HADOOP-14313:

Yes, I will go through all the compare calls carefully before making the change. We could also write a wrapper over the Guava comparator that hands it the input arrays after adjusting for the offset and length parameters that Hadoop provides.

> Replace/improve Hadoop's byte[] comparator
> --
>
> Key: HADOOP-14313
> URL: https://issues.apache.org/jira/browse/HADOOP-14313
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Reporter: Vikas Vishwakarma
> Attachments: HADOOP-14313.master.001.patch
>
> Hi,
> Recently we were looking at the lexicographic byte array comparison in HBase.
> We ran a microbenchmark of the byte array comparator of Hadoop
> (https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/io/FastByteComparisons.java#L161)
> and of HBase against the latest byte array comparator from Guava
> (https://github.com/google/guava/blob/master/guava/src/com/google/common/primitives/UnsignedBytes.java#L362)
> and observed that the Guava main-branch version is much faster.
> Specifically, we see a very good improvement when byteArraySize % 8 != 0 and
> also for large byte arrays. I will post the benchmark results using JMH for
> Hadoop vs Guava. For the corresponding HBase jira, see HBASE-17877.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
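The wrapper idea can be sketched as follows. This is only an illustrative stand-in (the class and method names are hypothetical), and the inner loop substitutes a plain unsigned byte comparison for Guava's whole-array comparator, since {{UnsignedBytes.lexicographicalComparator()}} does not accept offset/length arguments:

```java
import java.util.Arrays;

public class OffsetAwareComparator {
    // Hadoop's FastByteComparisons API takes (b1, off1, len1, b2, off2, len2);
    // a whole-array comparator can only be reused by comparing copied
    // sub-ranges. The copies add overhead, so this sketches the idea rather
    // than a drop-in replacement.
    public static int compareTo(byte[] b1, int off1, int len1,
                                byte[] b2, int off2, int len2) {
        byte[] s1 = Arrays.copyOfRange(b1, off1, off1 + len1);
        byte[] s2 = Arrays.copyOfRange(b2, off2, off2 + len2);
        // Unsigned lexicographic comparison, standing in for
        // Guava's UnsignedBytes.lexicographicalComparator().
        int n = Math.min(s1.length, s2.length);
        for (int i = 0; i < n; i++) {
            int cmp = (s1[i] & 0xff) - (s2[i] & 0xff);
            if (cmp != 0) {
                return cmp;
            }
        }
        return s1.length - s2.length;
    }
}
```

Whether the copy cost outweighs the faster comparison would need to show up in the JMH numbers mentioned above.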
[jira] [Updated] (HADOOP-14339) Fix warnings from Spotbugs in hadoop-mapreduce
[ https://issues.apache.org/jira/browse/HADOOP-14339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HADOOP-14339:
    Attachment: HADOOP-14339.003.patch

> Fix warnings from Spotbugs in hadoop-mapreduce
> --
>
> Key: HADOOP-14339
> URL: https://issues.apache.org/jira/browse/HADOOP-14339
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Attachments: HADOOP-14339.001.patch, HADOOP-14339.002.patch, HADOOP-14339.003.patch
>
> Fix warnings from Spotbugs in hadoop-mapreduce since the switch from FindBugs
> to Spotbugs.
[jira] [Updated] (HADOOP-14338) Fix warnings from Spotbugs in hadoop-yarn
[ https://issues.apache.org/jira/browse/HADOOP-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HADOOP-14338:
    Attachment: HADOOP-14338.003.patch

> Fix warnings from Spotbugs in hadoop-yarn
> --
>
> Key: HADOOP-14338
> URL: https://issues.apache.org/jira/browse/HADOOP-14338
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Attachments: HADOOP-14338.001.patch, HADOOP-14338.002.patch, HADOOP-14338.003.patch
>
> Fix warnings from Spotbugs in hadoop-yarn since the switch from FindBugs to
> Spotbugs.
[jira] [Comment Edited] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979662#comment-15979662 ]

Tsuyoshi Ozawa edited comment on HADOOP-14284 at 4/22/17 1:02 AM:

[~andrew.wang] [~busbey] I see. Let me try this weekend. Thanks for pointing this out.

was (Author: ozawa): [~andrew.wang] [~busbey] ah, I see. Let me try in this weekend. Thanks for your pointing.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 3.0.0-alpha3
> Reporter: Andrew Wang
> Assignee: Tsuyoshi Ozawa
> Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts.
> Unfortunately, these projects also consume our private artifacts like
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams.
> This isn't a requirement for all dependency upgrades, but it's necessary for
> known-bad dependencies like Guava.
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979662#comment-15979662 ]

Tsuyoshi Ozawa commented on HADOOP-14284:

[~andrew.wang] [~busbey] ah, I see. Let me try this weekend. Thanks for pointing this out.
[jira] [Updated] (HADOOP-14331) HadoopScheduledThreadPoolExecutor broken for periodic task running
[ https://issues.apache.org/jira/browse/HADOOP-14331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HADOOP-14331:
    Affects Version/s: (was: 3.0.0-alpha3)
                       2.8.0
                       3.0.0-alpha1
     Target Version/s: 2.8.1, 3.0.0-alpha3

> HadoopScheduledThreadPoolExecutor broken for periodic task running
> --
>
> Key: HADOOP-14331
> URL: https://issues.apache.org/jira/browse/HADOOP-14331
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Affects Versions: 2.8.0, 3.0.0-alpha1
> Reporter: Gabriel Reid
> Assignee: Gabriel Reid
> Attachments: HADOOP-14331_demonstrate_bug.patch
>
> The HadoopScheduledThreadPoolExecutor (introduced in HADOOP-12749) is broken
> for the scheduling of periodic tasks (i.e. for tasks submitted with
> {{scheduleAtFixedRate}} and {{scheduleWithFixedDelay}}).
> The behavior of the executor with these methods is that the underlying task
> is executed once, and then blocks the running thread indefinitely in
> {{ExecutorHelper::logThrowableFromAfterExecute}}, meaning further executions
> of the task won't run, and the thread is also blocked from running any other
> tasks.
> A quick scan of the source shows that these methods are used on
> HadoopScheduledThreadPoolExecutor instances in several places in the code
> base, at least in {{JobHistory}} and {{CleanerService}}, which appears to
> mean that these classes also no longer function correctly.
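The hang can be reproduced outside Hadoop: the {{Future}} of a {{scheduleAtFixedRate}} task only completes on cancellation or failure, so a plain {{get()}} on it (roughly what {{logThrowableFromAfterExecute}} does to surface task exceptions) blocks forever. A minimal sketch, using a timed {{get()}} so the demo terminates:

```java
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class PeriodicGetDemo {
    // Returns true if get() on a periodic task's future times out, i.e. the
    // future never completes even though the task itself keeps running.
    public static boolean getTimesOut() {
        ScheduledThreadPoolExecutor pool = new ScheduledThreadPoolExecutor(1);
        ScheduledFuture<?> f =
            pool.scheduleAtFixedRate(() -> { }, 0, 10, TimeUnit.MILLISECONDS);
        try {
            // An untimed f.get() here would block indefinitely, which is the
            // bug: afterExecute never returns for periodic tasks.
            f.get(200, TimeUnit.MILLISECONDS);
            return false;
        } catch (TimeoutException expected) {
            return true;
        } catch (Exception other) {
            return false;
        } finally {
            pool.shutdownNow();
        }
    }
}
```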
[jira] [Created] (HADOOP-14345) S3Guard: S3GuardTool to support provisioning metadata store
Mingliang Liu created HADOOP-14345:

             Summary: S3Guard: S3GuardTool to support provisioning metadata store
                 Key: HADOOP-14345
                 URL: https://issues.apache.org/jira/browse/HADOOP-14345
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
            Reporter: Mingliang Liu
            Priority: Minor

I don't know whether this counts as a requested feature for S3Guard, as the user can always provision the DDB tables via the cloud portal. Implementing it should be straightforward: {{DynamoDBMetadataStore}} already has a provision method to use.
[jira] [Updated] (HADOOP-14341) Support multi-line value for ssl.server.exclude.cipher.list
[ https://issues.apache.org/jira/browse/HADOOP-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge updated HADOOP-14341:
    Attachment: HADOOP-14341.003.patch

Patch 003
* Fix checkstyle

> Support multi-line value for ssl.server.exclude.cipher.list
> --
>
> Key: HADOOP-14341
> URL: https://issues.apache.org/jira/browse/HADOOP-14341
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.4
> Reporter: John Zhuge
> Assignee: John Zhuge
> Attachments: HADOOP-14341.001.patch, HADOOP-14341.002.patch, HADOOP-14341.003.patch
>
> The multi-line value for {{ssl.server.exclude.cipher.list}} shown in
> {{ssl-server.xml.example}} does not work. The property value
> {code}
> <property>
>   <name>ssl.server.exclude.cipher.list</name>
>   <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
>     SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
>     SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
>     SSL_RSA_WITH_RC4_128_MD5</value>
>   <description>Optional. The weak security cipher suites that you want excluded
>     from SSL communication.</description>
> </property>
> {code}
> is actually parsed into:
> * "TLS_ECDHE_RSA_WITH_RC4_128_SHA"
> * "SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA"
> * "\nSSL_RSA_WITH_DES_CBC_SHA"
> * "SSL_DHE_RSA_WITH_DES_CBC_SHA"
> * "\nSSL_RSA_EXPORT_WITH_RC4_40_MD5"
> * "SSL_RSA_EXPORT_WITH_DES40_CBC_SHA"
> * "\nSSL_RSA_WITH_RC4_128_MD5"
[jira] [Updated] (HADOOP-14341) Support multi-line value for ssl.server.exclude.cipher.list
[ https://issues.apache.org/jira/browse/HADOOP-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge updated HADOOP-14341:
    Attachment: HADOOP-14341.002.patch

Patch 002
* Address Steve's comment

Wait for community input for a few days, since the patch changes {{StringUtils.getTrimmedStrings}}, which impacts {{Configuration.getTrimmedStrings}} as well. All users of {{Configuration.getTrimmedStrings}} will get multi-line property value support for free. Is there any case where a multi-line property value is not desired?

> Support multi-line value for ssl.server.exclude.cipher.list
> --
>
> Key: HADOOP-14341
> URL: https://issues.apache.org/jira/browse/HADOOP-14341
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.7.4
> Reporter: John Zhuge
> Assignee: John Zhuge
> Attachments: HADOOP-14341.001.patch, HADOOP-14341.002.patch
>
> The multi-line value for {{ssl.server.exclude.cipher.list}} shown in
> {{ssl-server.xml.example}} does not work. The property value
> {code}
> <property>
>   <name>ssl.server.exclude.cipher.list</name>
>   <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
>     SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
>     SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
>     SSL_RSA_WITH_RC4_128_MD5</value>
>   <description>Optional. The weak security cipher suites that you want excluded
>     from SSL communication.</description>
> </property>
> {code}
> is actually parsed into:
> * "TLS_ECDHE_RSA_WITH_RC4_128_SHA"
> * "SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA"
> * "\nSSL_RSA_WITH_DES_CBC_SHA"
> * "SSL_DHE_RSA_WITH_DES_CBC_SHA"
> * "\nSSL_RSA_EXPORT_WITH_RC4_40_MD5"
> * "SSL_RSA_EXPORT_WITH_DES40_CBC_SHA"
> * "\nSSL_RSA_WITH_RC4_128_MD5"
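The parsing bug comes down to splitting on commas without trimming the whitespace left on each token by the wrapped XML value. A minimal before/after illustration (the helper names are hypothetical; the actual change lives in {{StringUtils.getTrimmedStrings}}):

```java
public class TrimmedSplitDemo {
    // Splitting only on commas leaves the newline and indentation from the
    // XML value glued to the front of the next token.
    public static String[] naiveSplit(String value) {
        return value.split(",");
    }

    // Trimming each token after the split (roughly what the patch does)
    // yields clean cipher suite names regardless of line wrapping.
    public static String[] trimmedSplit(String value) {
        String[] parts = value.split(",");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim();
        }
        return parts;
    }
}
```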
[jira] [Updated] (HADOOP-14267) Make DistCpOptions class immutable
[ https://issues.apache.org/jira/browse/HADOOP-14267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HADOOP-14267:
    Release Note: DistCpOptions has been changed to be constructed with a Builder pattern. This potentially affects applications that invoke DistCp with the Java API.

> Make DistCpOptions class immutable
> --
>
> Key: HADOOP-14267
> URL: https://issues.apache.org/jira/browse/HADOOP-14267
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/distcp
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Fix For: 3.0.0-alpha3
> Attachments: HDFS-10533.000.patch, HDFS-10533.000.patch, HDFS-10533.001.patch, HDFS-10533.002.patch, HDFS-10533.003.patch, HDFS-10533.004.patch, HDFS-10533.005.patch, HDFS-10533.006.patch, HDFS-10533.007.patch, HDFS-10533.008.patch, HDFS-10533.009.patch, HDFS-10533.010.patch, HDFS-10533.011.patch, HDFS-10533.012.patch
>
> Currently the {{DistCpOptions}} class encapsulates all DistCp options, which
> may be set from the command line (via the {{OptionsParser}}) or manually
> (e.g. by constructing an instance and calling setters). As there are multiple
> option fields and more to add (e.g. [HDFS-9868], [HDFS-10314]), validating
> them can be cumbersome. Ideally, the {{DistCpOptions}} object should be
> immutable. The benefits are:
> # {{DistCpOptions}} is simpler, easier to use and share, and scales well
> # validation is automatic, e.g. a manually constructed {{DistCpOptions}} gets
> validated before usage
> # validation error messages are well-defined and do not depend on the order
> of setters
> This jira tracks the effort of making {{DistCpOptions}} immutable by
> using a Builder pattern for creation.
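The general shape of such an immutable-options Builder, sketched with illustrative names rather than the actual DistCp API: setters mutate only the Builder, {{build()}} validates once, and the resulting object has no setters at all.

```java
// Illustrative sketch of the Builder pattern described above; "Options",
// the field names, and the default values are all hypothetical.
public final class Options {
    private final boolean overwrite;
    private final int maxMaps;

    private Options(Builder b) {
        this.overwrite = b.overwrite;
        this.maxMaps = b.maxMaps;
    }

    public boolean isOverwrite() { return overwrite; }
    public int getMaxMaps() { return maxMaps; }

    public static final class Builder {
        private boolean overwrite;
        private int maxMaps = 20;

        public Builder withOverwrite(boolean v) { overwrite = v; return this; }
        public Builder withMaxMaps(int v) { maxMaps = v; return this; }

        public Options build() {
            // Validation runs exactly once, independent of setter order,
            // so error messages are well-defined.
            if (maxMaps <= 0) {
                throw new IllegalArgumentException("maxMaps must be > 0");
            }
            return new Options(this);
        }
    }
}
```

Callers that previously constructed the options object and called setters would instead chain Builder calls and finish with {{build()}}.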
[jira] [Assigned] (HADOOP-13606) swift FS to add a service load metadata file
[ https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge reassigned HADOOP-13606:
    Assignee: Steve Loughran (was: John Zhuge)

> swift FS to add a service load metadata file
> --
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/swift
> Affects Versions: 2.7.3
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
> Attachments: HADOOP-13606.002.patch, HADOOP-13606-branch-2-001.patch, unit_test_results_for_HADOOP-13606.002
>
> add a metadata file giving the FS impl of swift; remove the entry from
> core-default.xml
[jira] [Commented] (HADOOP-13606) swift FS to add a service load metadata file
[ https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979519#comment-15979519 ]

Andrew Wang commented on HADOOP-13606:

Thanks John!

> swift FS to add a service load metadata file
> --
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/swift
> Affects Versions: 2.7.3
> Reporter: Steve Loughran
> Assignee: John Zhuge
> Fix For: 2.8.0, 3.0.0-alpha2
> Attachments: HADOOP-13606.002.patch, HADOOP-13606-branch-2-001.patch, unit_test_results_for_HADOOP-13606.002
>
> add a metadata file giving the FS impl of swift; remove the entry from
> core-default.xml
[jira] [Updated] (HADOOP-13606) swift FS to add a service load metadata file
[ https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge updated HADOOP-13606:
    Target Version/s: (was: 3.0.0-alpha3)
[jira] [Commented] (HADOOP-13606) swift FS to add a service load metadata file
[ https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979515#comment-15979515 ]

John Zhuge commented on HADOOP-13606:

Done, created the revert JIRA HADOOP-14344 and updated the fixed versions of this JIRA.
[jira] [Updated] (HADOOP-13606) swift FS to add a service load metadata file
[ https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge updated HADOOP-13606:
    Fix Version/s: (was: 3.0.0-alpha3)
                   (was: 2.8.1)
                   (was: 2.9.0)
                   3.0.0-alpha2
                   2.8.0
[jira] [Resolved] (HADOOP-14344) Revert HADOOP-13606 swift FS to add a service load metadata file
[ https://issues.apache.org/jira/browse/HADOOP-14344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge resolved HADOOP-14344.
       Resolution: Fixed
    Fix Version/s: 3.0.0-alpha3
                   2.8.1
                   2.9.0

The revert patch is in HADOOP-13606: https://issues.apache.org/jira/secure/attachment/12856766/HADOOP-13606.002.patch

> Revert HADOOP-13606 swift FS to add a service load metadata file
> --
>
> Key: HADOOP-14344
> URL: https://issues.apache.org/jira/browse/HADOOP-14344
> Project: Hadoop Common
> Issue Type: Task
> Affects Versions: 2.8.0
> Reporter: John Zhuge
> Assignee: John Zhuge
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Create the revert JIRA for release notes.
[jira] [Updated] (HADOOP-14344) Revert HADOOP-13606 swift FS to add a service load metadata file
[ https://issues.apache.org/jira/browse/HADOOP-14344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge updated HADOOP-14344:
    Description: Create the revert JIRA for release notes. (was: As titled)

> Revert HADOOP-13606 swift FS to add a service load metadata file
> --
>
> Key: HADOOP-14344
> URL: https://issues.apache.org/jira/browse/HADOOP-14344
> Project: Hadoop Common
> Issue Type: Task
> Affects Versions: 2.8.0
> Reporter: John Zhuge
> Assignee: John Zhuge
>
> Create the revert JIRA for release notes.
[jira] [Commented] (HADOOP-14338) Fix warnings from Spotbugs in hadoop-yarn
[ https://issues.apache.org/jira/browse/HADOOP-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979511#comment-15979511 ]

Hadoop QA commented on HADOOP-14338:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 36s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| 0 | mvndep | 0m 44s | Maven dependency ordering for branch |
| +1 | mvninstall | 14m 23s | trunk passed |
| +1 | compile | 10m 48s | trunk passed |
| +1 | checkstyle | 0m 55s | trunk passed |
| +1 | mvnsite | 2m 44s | trunk passed |
| +1 | mvneclipse | 1m 49s | trunk passed |
| -1 | findbugs | 1m 3s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 2 extant Findbugs warnings. |
| -1 | findbugs | 0m 47s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. |
| -1 | findbugs | 0m 32s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice in trunk has 1 extant Findbugs warning. |
| -1 | findbugs | 1m 5s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in trunk has 8 extant Findbugs warnings. |
| -1 | findbugs | 0m 36s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client in trunk has 2 extant Findbugs warnings. |
| +1 | javadoc | 2m 7s | trunk passed |
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 18s | the patch passed |
| +1 | compile | 9m 34s | the patch passed |
| +1 | javac | 9m 34s | the patch passed |
| -0 | checkstyle | 0m 59s | hadoop-yarn-project/hadoop-yarn: The patch generated 5 new + 385 unchanged - 3 fixed = 390 total (was 388) |
| +1 | mvnsite | 3m 4s | the patch passed |
| +1 | mvneclipse | 1m 47s | the patch passed |
| +1 | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | findbugs | 1m 11s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 0 new + 0 unchanged - 2 fixed = 0 total (was 2) |
| +1 | findbugs | 0m 59s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager generated 0 new + 1 unchanged - 4 fixed = 1 total (was 5) |
| +1 | findbugs | 0m 39s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice genera
[jira] [Created] (HADOOP-14344) Revert HADOOP-13606 swift FS to add a service load metadata file
John Zhuge created HADOOP-14344:

             Summary: Revert HADOOP-13606 swift FS to add a service load metadata file
                 Key: HADOOP-14344
                 URL: https://issues.apache.org/jira/browse/HADOOP-14344
             Project: Hadoop Common
          Issue Type: Task
    Affects Versions: 2.8.0
            Reporter: John Zhuge
            Assignee: John Zhuge

As titled
[jira] [Commented] (HADOOP-13606) swift FS to add a service load metadata file
[ https://issues.apache.org/jira/browse/HADOOP-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979470#comment-15979470 ]

Andrew Wang commented on HADOOP-13606:

[~jzhuge] can you file a new JIRA to track the revert? This was released in at least 3.0.0-alpha2.

> swift FS to add a service load metadata file
> --
>
> Key: HADOOP-13606
> URL: https://issues.apache.org/jira/browse/HADOOP-13606
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/swift
> Affects Versions: 2.7.3
> Reporter: Steve Loughran
> Assignee: John Zhuge
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
> Attachments: HADOOP-13606.002.patch, HADOOP-13606-branch-2-001.patch, unit_test_results_for_HADOOP-13606.002
>
> add a metadata file giving the FS impl of swift; remove the entry from
> core-default.xml
[jira] [Updated] (HADOOP-14114) S3A can no longer handle unencoded + in URIs
[ https://issues.apache.org/jira/browse/HADOOP-14114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HADOOP-14114:
    Fix Version/s: 3.0.0-alpha3

> S3A can no longer handle unencoded + in URIs
> --
>
> Key: HADOOP-14114
> URL: https://issues.apache.org/jira/browse/HADOOP-14114
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 2.8.0
> Reporter: Sean Mackrory
> Assignee: Sean Mackrory
> Priority: Minor
> Fix For: 2.8.1, 3.0.0-alpha3
> Attachments: HADOOP-14114.001.patch
>
> Amazon secret access keys can include alphanumeric characters, but also / and
> + (I wish there were an official source that was really specific about what they
> can contain, but I have to rely on a few blog posts and my own experience).
> Keys containing slashes used to be impossible to embed in the URL (e.g.
> s3a://access_key:secret_key@bucket/) but that is now possible via URL
> encoding. Pluses used to work, but are now *only* usable via URL encoding.
> In the case of pluses, they don't appear to cause any other problems for
> parsing. So IMO the best all-around solution here is for people to always
> URL-encode these keys, but so that keys that used to work fine can continue
> to work, all we need to do is detect the old form, log a warning, and
> re-encode it for the user.
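Encoding a secret key before embedding it in an s3a:// URI can be done with the JDK's {{URLEncoder}}. A small sketch (the key below is fake, and the class name is illustrative):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class S3aKeyEncode {
    // URL-encode a secret key so that '/' and '+' survive URI parsing:
    // '/' becomes %2F and '+' becomes %2B.
    public static String encode(String secret) {
        try {
            return URLEncoder.encode(secret, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is guaranteed by the JLS to be supported.
            throw new AssertionError(e);
        }
    }
}
```

Note that {{URLEncoder}} also turns spaces into {{+}}, which is harmless here since AWS secret keys do not contain spaces.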
[jira] [Updated] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HADOOP-13665:
    Fix Version/s: (was: 3.0.0-alpha2)
                   3.0.0-alpha3

> Erasure Coding codec should support fallback coder
> --
>
> Key: HADOOP-13665
> URL: https://issues.apache.org/jira/browse/HADOOP-13665
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: io
> Reporter: Wei-Chiu Chuang
> Assignee: Kai Sasaki
> Priority: Blocker
> Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
> Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch, HADOOP-13665.03.patch, HADOOP-13665.04.patch, HADOOP-13665.05.patch, HADOOP-13665.06.patch, HADOOP-13665.07.patch, HADOOP-13665.08.patch, HADOOP-13665.09.patch, HADOOP-13665.10.patch, HADOOP-13665.11.patch, HADOOP-13665.12.patch
>
> The current EC codec supports a single coder only (by default the pure Java
> implementation). If the native coder is specified but is unavailable, the
> codec should fall back to the pure Java implementation.
> One possible solution is to follow the convention of existing Hadoop native
> codecs, such as transport encryption (see {{CryptoCodec.java}}): support
> fallback by specifying two or more coders as the property value, and load
> the coders in order.
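Following the CryptoCodec convention the description cites, the configuration might take this shape. The property name and coder identifiers below are illustrative, not the committed API:

```xml
<!-- Hypothetical sketch: list raw coders in preference order. The first
     coder that is available on the platform is used; later entries act as
     fallbacks (e.g. the pure-Java implementation). -->
<property>
  <name>io.erasurecode.codec.rs.rawcoders</name>
  <value>rs_native,rs_java</value>
</property>
```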
[jira] [Updated] (HADOOP-14028) S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or handle part upload failures
[ https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14028: - Fix Version/s: 3.0.0-alpha3 > S3A BlockOutputStreams doesn't delete temporary files in multipart uploads or > handle part upload failures > - > > Key: HADOOP-14028 > URL: https://issues.apache.org/jira/browse/HADOOP-14028 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.8.0 > Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2 >Reporter: Seth Fitzsimmons >Assignee: Steve Loughran >Priority: Critical > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HADOOP-14028-006.patch, HADOOP-14028-007.patch, > HADOOP-14028-branch-2-001.patch, HADOOP-14028-branch-2-008.patch, > HADOOP-14028-branch-2-009.patch, HADOOP-14028-branch-2.8-002.patch, > HADOOP-14028-branch-2.8-003.patch, HADOOP-14028-branch-2.8-004.patch, > HADOOP-14028-branch-2.8-005.patch, HADOOP-14028-branch-2.8-007.patch, > HADOOP-14028-branch-2.8-008.patch > > > I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I > was looking for after running into the same OOM problems) and don't see it > cleaning up the disk-cached blocks. > I'm generating a ~50GB file on an instance with ~6GB free when the process > starts. My expectation is that local copies of the blocks would be deleted > after those parts finish uploading, but I'm seeing more than 15 blocks in > /tmp (and none of them have been deleted thus far). > I see that DiskBlock deletes temporary files when closed, but is it closed > after individual blocks have finished uploading or when the entire file has > been fully written to the FS (full upload completed, including all parts)? 
> As a temporary workaround to avoid running out of space, I'm listing files, > sorting by atime, and deleting anything older than the first 20: `ls -ut | > tail -n +21 | xargs rm` > Steve Loughran says: > > They should be deleted as soon as the upload completes; the close() call > > that the AWS httpclient makes on the input stream triggers the deletion. > > Though there aren't tests for it, as I recall. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
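The lifecycle question raised above (is a disk-backed block's temp file deleted per part, or only at the end?) can be made concrete with a small sketch. This is an illustration of the pattern, not the real S3A DiskBlock class; names are mine:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hedged sketch: a disk-backed block buffers data in a temp file, and
// close() is the only place the file is reclaimed. If close() is deferred
// until the whole upload completes rather than per part, temp files
// accumulate exactly as described in the report above.
public class DiskBlockSketch implements AutoCloseable {
    private final File backing;
    private final OutputStream out;

    public DiskBlockSketch(File dir) throws IOException {
        backing = File.createTempFile("s3ablock", ".tmp", dir);
        out = new FileOutputStream(backing);
    }

    public void write(byte[] data) throws IOException {
        out.write(data);
    }

    public File backingFile() {
        return backing;
    }

    @Override
    public void close() throws IOException {
        out.close();
        backing.delete();   // temp data is reclaimed only here
    }
}
```

A quick check that the file exists while the block is open and is gone after close:

```java
// (continuing the sketch above)
class DiskBlockDemo {
    static boolean demoCleanup() throws IOException {
        DiskBlockSketch block =
            new DiskBlockSketch(new File(System.getProperty("java.io.tmpdir")));
        block.write(new byte[]{1, 2, 3});
        boolean existedWhileOpen = block.backingFile().exists();
        block.close();
        return existedWhileOpen && !block.backingFile().exists();
    }
}
```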
[jira] [Commented] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"
[ https://issues.apache.org/jira/browse/HADOOP-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979460#comment-15979460 ] Andrew Wang commented on HADOOP-14204: -- Please set the appropriate 3.x fix version when committing to trunk, thanks! > S3A multipart commit failing, "UnsupportedOperationException at > java.util.Collections$UnmodifiableList.sort" > > > Key: HADOOP-14204 > URL: https://issues.apache.org/jira/browse/HADOOP-14204 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Critical > Fix For: 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-14204-branch-2.8-001.patch > > > Stack trace seen trying to commit a multipart upload: the EMR code (which > takes a {{List etags}}) tries to sort that list in place, which it can't do > if the list doesn't support sorting. > Later versions of the SDK clone the list before sorting. > We need to make sure that the list passed in can be sorted. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"
[ https://issues.apache.org/jira/browse/HADOOP-14204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14204: - Fix Version/s: 3.0.0-alpha3 > S3A multipart commit failing, "UnsupportedOperationException at > java.util.Collections$UnmodifiableList.sort" > > > Key: HADOOP-14204 > URL: https://issues.apache.org/jira/browse/HADOOP-14204 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Critical > Fix For: 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-14204-branch-2.8-001.patch > > > Stack trace seen trying to commit a multipart upload: the EMR code (which > takes a {{List etags}}) tries to sort that list in place, which it can't do > if the list doesn't support sorting. > Later versions of the SDK clone the list before sorting. > We need to make sure that the list passed in can be sorted. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
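The HADOOP-14204 failure mode and its fix can be demonstrated in isolation. The sketch below stands in for the SDK side with a hypothetical method that sorts its argument in place (as the older EMR code does); the fix on the caller side is the defensive copy that newer SDK versions also make:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustration only: completeMultipartUpload() mimics SDK code that sorts
// the etag list it receives in place. Java's UnmodifiableList.sort()
// throws UnsupportedOperationException, hence the bug; copying first fixes it.
public class EtagSortSketch {
    static void completeMultipartUpload(List<String> etags) {
        Collections.sort(etags);   // in-place sort, like the old EMR code
    }

    /** The fix: hand the SDK a fresh, sortable copy of the caller's list. */
    public static List<String> sortSafely(List<String> etags) {
        List<String> copy = new ArrayList<>(etags);
        completeMultipartUpload(copy);
        return copy;
    }

    /** The bug: an unmodifiable list rejects the in-place sort. */
    public static boolean unmodifiableListRejectsSort(List<String> etags) {
        try {
            completeMultipartUpload(Collections.unmodifiableList(etags));
            return false;
        } catch (UnsupportedOperationException expected) {
            return true;
        }
    }
}
```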
[jira] [Updated] (HADOOP-14081) S3A: Consider avoiding array copy in S3ABlockOutputStream (ByteArrayBlock)
[ https://issues.apache.org/jira/browse/HADOOP-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14081: - Fix Version/s: 3.0.0-alpha3 > S3A: Consider avoiding array copy in S3ABlockOutputStream (ByteArrayBlock) > -- > > Key: HADOOP-14081 > URL: https://issues.apache.org/jira/browse/HADOOP-14081 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HADOOP-14081.001.patch > > > In {{S3ADataBlocks::ByteArrayBlock}}, data is copied whenever {{startUpload}} > is called. It might be possible to directly access the byte[] array from > ByteArrayOutputStream. > Might have to extend ByteArrayOutputStream and create a method like > getInputStream() which can return ByteArrayInputStream. This would avoid > expensive array copy during large upload. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
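The approach suggested in HADOOP-14081 (extend ByteArrayOutputStream and expose a copy-free input stream) can be sketched like this; {{getInputStream()}} is the hypothetical method name from the issue description, not an existing API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

// Sketch of the zero-copy idea: toByteArray() copies the internal buffer,
// but the protected fields buf/count let a subclass wrap the existing
// array in a ByteArrayInputStream without any copy.
public class DirectByteArrayOutputStream extends ByteArrayOutputStream {
    public DirectByteArrayOutputStream(int capacity) {
        super(capacity);
    }

    /** Returns an input stream over the internal buffer, with no array copy. */
    public synchronized ByteArrayInputStream getInputStream() {
        return new ByteArrayInputStream(buf, 0, count);
    }

    /** Demo: write three bytes, read them back through the zero-copy stream. */
    public static int[] demoRoundTrip() {
        DirectByteArrayOutputStream out = new DirectByteArrayOutputStream(16);
        out.write(new byte[]{10, 20, 30}, 0, 3);
        ByteArrayInputStream in = out.getInputStream();
        return new int[]{in.read(), in.read(), in.read(), in.read()};
    }
}
```

The caveat implied by the issue applies here too: the returned stream aliases the live buffer, so further writes to the output stream must not happen while the upload reads from it.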
[jira] [Updated] (HADOOP-14173) Remove unused AdlConfKeys#ADL_EVENTS_TRACKING_SOURCE
[ https://issues.apache.org/jira/browse/HADOOP-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14173: - Fix Version/s: 3.0.0-alpha3 > Remove unused AdlConfKeys#ADL_EVENTS_TRACKING_SOURCE > > > Key: HADOOP-14173 > URL: https://issues.apache.org/jira/browse/HADOOP-14173 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/adl >Affects Versions: 2.8.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Trivial > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HADOOP-14173.001.patch > > > Split off from a big patch in HADOOP-14038. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14019) fix some typos in the s3a docs
[ https://issues.apache.org/jira/browse/HADOOP-14019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14019: - Fix Version/s: 3.0.0-alpha3 > fix some typos in the s3a docs > -- > > Key: HADOOP-14019 > URL: https://issues.apache.org/jira/browse/HADOOP-14019 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HADOOP-14019-001.patch > > > There's a few errors in the s3a docs, including one cut-and-paste error > related to the per-bucket config and JCEKS files which is potentially > misleading. > fix -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14321) Explicitly exclude S3A root dir ITests from parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14321: - Fix Version/s: 3.0.0-alpha3 > Explicitly exclude S3A root dir ITests from parallel runs > - > > Key: HADOOP-14321 > URL: https://issues.apache.org/jira/browse/HADOOP-14321 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-14321-branch-2-001.patch > > > the s3 root dir tests are running, even though they are meant to be excluded > via the statement > {code} > **/ITest*Root*.java > {code} > Maybe the double * in the pattern is causing confusion. Fix: explicitly list > the relevant tests (s3, s3n, s3a instead) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14339) Fix warnings from Spotbugs in hadoop-mapreduce
[ https://issues.apache.org/jira/browse/HADOOP-14339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979446#comment-15979446 ] Hadoop QA commented on HADOOP-14339: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 56s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core in trunk has 3 extant Findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app in trunk has 3 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 37s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs in trunk has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 35s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-examples in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 38s{color} | {color:orange} hadoop-mapreduce-project: The patch generated 2 new + 142 unchanged - 4 fixed = 144 total (was 146) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 53s{color} | {color:green} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} hadoop-mapreduce-project/hadoop-mapreduce-examples generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m
[jira] [Updated] (HADOOP-13962) Update ADLS SDK to 2.1.4
[ https://issues.apache.org/jira/browse/HADOOP-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13962: - Fix Version/s: (was: 3.0.0-alpha2) 3.0.0-alpha3 > Update ADLS SDK to 2.1.4 > > > Key: HADOOP-13962 > URL: https://issues.apache.org/jira/browse/HADOOP-13962 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/adl >Affects Versions: 3.0.0-alpha2 >Reporter: John Zhuge >Assignee: John Zhuge > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HADOOP-13962.001.patch > > > ADLS has multiple upgrades since the version 2.0.11 we are using: 2.1.1, > 2.1.2, and 2.1.4. Change list: > https://github.com/Azure/azure-data-lake-store-java/blob/master/CHANGES.md. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13119) Add ability to secure log servlet using proxy users
[ https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13119: - Fix Version/s: (was: 3.0.0-alpha2) 3.0.0-alpha3 > Add ability to secure log servlet using proxy users > --- > > Key: HADOOP-13119 > URL: https://issues.apache.org/jira/browse/HADOOP-13119 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.4 >Reporter: Jeffrey E Rodriguez >Assignee: Yuanbo Liu > Labels: security > Fix For: 2.7.4, 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, > HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, > HADOOP-13119.005.patch, screenshot-1.png > > > User Hadoop on secure mode. > login as kdc user, kinit. > start firefox and enable Kerberos > access http://localhost:50070/logs/ > Get 403 authorization errors. > only hdfs user could access logs. > Would expect as a user to be able to web interface logs link. > Same results if using curl: > curl -v --negotiate -u tester: http://localhost:50070/logs/ > HTTP/1.1 403 User tester is unauthorized to access this page. > so: > 1. either don't show links if hdfs user is able to access. > 2. provide mechanism to add users to web application realm. > 3. note that we are pass authentication so the issue is authorization to > /logs/ > suspect that /logs/ path is secure in webdescriptor so suspect users by > default don't have access to secure paths. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13119) Add ability to secure log servlet using proxy users
[ https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13119: - Summary: Add ability to secure log servlet using proxy users (was: Web UI error accessing links which need authorization when Kerberos) > Add ability to secure log servlet using proxy users > --- > > Key: HADOOP-13119 > URL: https://issues.apache.org/jira/browse/HADOOP-13119 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.4 >Reporter: Jeffrey E Rodriguez >Assignee: Yuanbo Liu > Labels: security > Fix For: 2.7.4, 3.0.0-alpha2, 2.8.1 > > Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, > HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, > HADOOP-13119.005.patch, screenshot-1.png > > > User Hadoop on secure mode. > login as kdc user, kinit. > start firefox and enable Kerberos > access http://localhost:50070/logs/ > Get 403 authorization errors. > only hdfs user could access logs. > Would expect as a user to be able to web interface logs link. > Same results if using curl: > curl -v --negotiate -u tester: http://localhost:50070/logs/ > HTTP/1.1 403 User tester is unauthorized to access this page. > so: > 1. either don't show links if hdfs user is able to access. > 2. provide mechanism to add users to web application realm. > 3. note that we are pass authentication so the issue is authorization to > /logs/ > suspect that /logs/ path is secure in webdescriptor so suspect users by > default don't have access to secure paths. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14092) Typo in hadoop-aws index.md
[ https://issues.apache.org/jira/browse/HADOOP-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14092: - Fix Version/s: 3.0.0-alpha3 > Typo in hadoop-aws index.md > --- > > Key: HADOOP-14092 > URL: https://issues.apache.org/jira/browse/HADOOP-14092 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.0.0-alpha3 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Trivial > Labels: newbie > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HADOOP-14092.001.patch > > > In section {{Testing against different regions}}, {{contract-tests.xml}} > should be {{contract-test-options.xml}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14099) Split S3 testing documentation out into its own file
[ https://issues.apache.org/jira/browse/HADOOP-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14099: - Fix Version/s: 3.0.0-alpha3 > Split S3 testing documentation out into its own file > > > Key: HADOOP-14099 > URL: https://issues.apache.org/jira/browse/HADOOP-14099 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HADOOP-14099.002.patch, HADOOP-14099-branch-2-001.patch, > HADOOP-14099-branch-2-001.patch, HADOOP-14099-branch-2.002.patch, > HADOOP-14099-branch-2.003.patch > > > The aws/index.md file is way too big. > We should split out the testing section into its own file, as its of > relevance to developers, not users. > This is also the time to clean it up and add a section on what you have to do > to get a patch reviewed, what we want from tests, etc. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14120) needless S3AFileSystem.setOptionalPutRequestParameters in S3ABlockOutputStream putObject()
[ https://issues.apache.org/jira/browse/HADOOP-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14120: - Fix Version/s: 3.0.0-alpha3 > needless S3AFileSystem.setOptionalPutRequestParameters in > S3ABlockOutputStream putObject() > -- > > Key: HADOOP-14120 > URL: https://issues.apache.org/jira/browse/HADOOP-14120 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Yuanbo Liu >Priority: Minor > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HADOOP-14120.001.patch > > > There's a call to {{S3AFileSystem.setOptionalPutRequestParameters()}} in {{ > S3ABlockOutputStream putObject()}} > The put request has already been created by the FS; this call is only > superflous and potentially confusing. > Proposed: cut it, make the {{setOptionalPutRequestParameters()}} method > private. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14055) SwiftRestClient includes pass length in exception if auth fails
[ https://issues.apache.org/jira/browse/HADOOP-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14055: - Fix Version/s: 3.0.0-alpha3 > SwiftRestClient includes pass length in exception if auth fails > > > Key: HADOOP-14055 > URL: https://issues.apache.org/jira/browse/HADOOP-14055 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Marcell Hegedus >Assignee: Marcell Hegedus >Priority: Minor > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HADOOP-14055.01.patch, HADOOP-14055.02.patch > > > SwiftRestClient.exec(M method) throws SwiftAuthenticationFailedException if > auth fails and its message will contain the pass length that may leak into > logs. > Fix is trivial. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set
[ https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979409#comment-15979409 ] Andrew Wang commented on HADOOP-14233: -- Little reminder, please include the JIRA ID in the first component of the commit message. > Delay construction of PreCondition.check failure message in Configuration#set > - > > Key: HADOOP-14233 > URL: https://issues.apache.org/jira/browse/HADOOP-14233 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles > Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-14233.1.patch > > > The String in the precondition check is constructed prior to failure > detection. Since the normal case is no error, we can gain performance by > delaying the construction of the string until the failure is detected. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
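The optimization in HADOOP-14233 is simply to defer building the failure message until a check actually fails. A minimal sketch of the lazy pattern (this {{checkArgument}} is a stand-in; Guava's Preconditions achieves the same effect with its template-string overloads):

```java
import java.util.function.Supplier;

// Hedged sketch: the message Supplier is only invoked on the failure path,
// so the common (non-failing) case pays nothing for string construction.
public class LazyPrecondition {
    public static void checkArgument(boolean ok, Supplier<String> message) {
        if (!ok) {
            throw new IllegalArgumentException(message.get());
        }
    }

    /** Demo: counts how many times the message actually gets built. */
    public static int messageBuildsCounted(boolean ok) {
        int[] builds = {0};
        try {
            checkArgument(ok, () -> {
                builds[0]++;
                return "bad property name";
            });
        } catch (IllegalArgumentException expected) {
            // failure path: the message was constructed exactly once
        }
        return builds[0];
    }
}
```

On a hot path like {{Configuration#set}}, skipping the concatenation on every successful call is where the win comes from.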
[jira] [Commented] (HADOOP-13760) S3Guard: add delete tracking
[ https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979411#comment-15979411 ] Mingliang Liu commented on HADOOP-13760: I'll review this next week. Thanks for working on this, [~mackrorysd]! > S3Guard: add delete tracking > > > Key: HADOOP-13760 > URL: https://issues.apache.org/jira/browse/HADOOP-13760 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Sean Mackrory > Attachments: HADOOP-13760-HADOOP-13345.001.patch, > HADOOP-13760-HADOOP-13345.002.patch > > > Following the S3AFileSystem integration patch in HADOOP-13651, we need to add > delete tracking. > Current behavior on delete is to remove the metadata from the MetadataStore. > To make deletes consistent, we need to add a {{isDeleted}} flag to > {{PathMetadata}} and check it when returning results from functions like > {{getFileStatus()}} and {{listStatus()}}. In HADOOP-13651, I added TODO > comments in most of the places these new conditions are needed. The work > does not look too bad. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13760) S3Guard: add delete tracking
[ https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979405#comment-15979405 ] Aaron Fabbri commented on HADOOP-13760: --- OK, looking at the v2 patch now. Just to set expectations: this is a complex little feature and I expect it to take some time to get right. Some initial questions: 1. It looks like you make significant changes to rename. - This makes me nervous (rename should make anybody nervous; that is healthy). I'd suggest we break it out into a separate "pre-refactor" patch if we still need it. In general, the more we can split out pre-refactoring, the better. It can be done later, when the patch is closer to complete. - You change from iterating over S3's ObjectListing (which is a subset of the actual directory subtree vertices) to iterating over the actual directory tree via listFilesAndDirectories(). I think the problem here is that you will be (A) deleting directory blobs that don't exist on S3, and (B) creating directory blobs that *should not* exist on S3. I'd expect this rename code to cause destination subtrees to disappear, as those directory keys you write would be interpreted as empty dirs by S3A later on. *If* I'm correct on the last point, I hope there is an integration or unit test that fails with this code. If not, we should create one. 2. In innerMkdirs(), I think the try/catch block around your checkPathForDirectory() is no longer needed. You use return values, not an FNF exception. 3. The switch statements should probably just be if/else in these cases. Any savings were given up by having to add a fallthrough findbugs exclusion, IMO. I'm continuing to look at the patch but wanted to get you some early feedback. 
> S3Guard: add delete tracking > > > Key: HADOOP-13760 > URL: https://issues.apache.org/jira/browse/HADOOP-13760 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Sean Mackrory > Attachments: HADOOP-13760-HADOOP-13345.001.patch, > HADOOP-13760-HADOOP-13345.002.patch > > > Following the S3AFileSystem integration patch in HADOOP-13651, we need to add > delete tracking. > Current behavior on delete is to remove the metadata from the MetadataStore. > To make deletes consistent, we need to add a {{isDeleted}} flag to > {{PathMetadata}} and check it when returning results from functions like > {{getFileStatus()}} and {{listStatus()}}. In HADOOP-13651, I added TODO > comments in most of the places these new conditions are needed. The work > does not look too bad. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
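The tombstone mechanism described in HADOOP-13760 (an {{isDeleted}} flag on {{PathMetadata}}, checked when answering {{getFileStatus()}} and {{listStatus()}}) can be sketched in miniature. This is not the actual S3Guard MetadataStore API; the class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch of delete tracking: instead of removing an entry on delete,
// mark it with an isDeleted tombstone, and have lookup/listing filter
// tombstoned paths out. This makes deletes visible to readers even when the
// backing store (S3) is still eventually consistent.
public class MetadataStoreSketch {
    static final class PathMetadata {
        final String path;
        boolean isDeleted;
        PathMetadata(String path) { this.path = path; }
    }

    private final Map<String, PathMetadata> entries = new HashMap<>();

    public void put(String path) {
        entries.put(path, new PathMetadata(path));
    }

    public void delete(String path) {
        PathMetadata meta = entries.get(path);
        if (meta != null) {
            meta.isDeleted = true;   // tombstone, rather than entries.remove(path)
        }
    }

    /** Like getFileStatus(): a tombstoned path reads as nonexistent. */
    public boolean exists(String path) {
        PathMetadata meta = entries.get(path);
        return meta != null && !meta.isDeleted;
    }

    /** Like listStatus(): tombstones are filtered from listings. */
    public List<String> list() {
        List<String> visible = new ArrayList<>();
        for (PathMetadata meta : entries.values()) {
            if (!meta.isDeleted) {
                visible.add(meta.path);
            }
        }
        return visible;
    }
}
```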
[jira] [Commented] (HADOOP-14323) ITestS3GuardListConsistency failure w/ Local, authoritative metadata store
[ https://issues.apache.org/jira/browse/HADOOP-14323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979391#comment-15979391 ] Mingliang Liu commented on HADOOP-14323: Sorry didn't mean to resolve. Re-opened. > ITestS3GuardListConsistency failure w/ Local, authoritative metadata store > -- > > Key: HADOOP-14323 > URL: https://issues.apache.org/jira/browse/HADOOP-14323 > Project: Hadoop Common > Issue Type: Sub-task > Components: s3 >Affects Versions: HADOOP-13345 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri >Priority: Minor > Fix For: HADOOP-13345 > > Attachments: HADOOP-14323-HADOOP-13345.001.patch, > HADOOP-14323-HADOOP-13345.002.patch > > > When doing some testing for HADOOP-14266 I noticed this test failure: > {noformat} > java.lang.NullPointerException: null > at > org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testListStatusWriteBack(ITestS3GuardListConsistency.java:317) > {noformat} > I was running with LocalMetadataStore and > {{fs.s3a.metadatastore.authoritative}} set to true. I haven't been testing > this mode recently so not sure if this case ever worked. Lower priority but > we should fix it. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14323) ITestS3GuardListConsistency failure w/ Local, authoritative metadata store
[ https://issues.apache.org/jira/browse/HADOOP-14323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14323: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I saw test failures: {code} $ mvn -Dit.test='ITestS3GuardListConsistency#testListStatusWriteBack' -Dtest=none -Ds3guard -Ddynamo -q clean verify --- T E S T S --- Running org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.441 sec - in org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency Results : Tests run: 1, Failures: 0, Errors: 0, Skipped: 0 $ mvn -Dit.test='ITestS3GuardListConsistency#testListStatusWriteBack' -Dtest=none -Ds3guard -Ddynamodblocal -q verify --- T E S T S --- Running org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.032 sec - in org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency Results : Tests run: 1, Failures: 0, Errors: 0, Skipped: 0 $ mvn -Dit.test='ITestS3GuardListConsistency#testListStatusWriteBack' -Dtest=none -Ds3guard -Dlocal -q verify --- T E S T S --- Running org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.273 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency testListStatusWriteBack(org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency) Time elapsed: 3.179 sec <<< FAILURE! 
java.lang.AssertionError: Metadata store without write back should still only know about /OnS3AndMS, but it has: DirListingMetadata{path=s3a://mliu-s3guard/test/ListStatusWriteBack, listMap={s3a://mliu-s3guard/test/ListStatusWriteBack/OnS3AndMS=PathMetadata{fileStatus=S3AFileStatus{path=s3a://mliu-s3guard/test/ListStatusWriteBack/OnS3AndMS; isDirectory=true; modification_time=0; access_time=0; owner=mliu; group=mliu; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=false; isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN}, s3a://mliu-s3guard/test/ListStatusWriteBack/OnS3=PathMetadata{fileStatus=S3AFileStatus{path=s3a://mliu-s3guard/test/ListStatusWriteBack/OnS3; isDirectory=true; modification_time=0; access_time=0; owner=mliu; group=mliu; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=false; isErasureCoded=false} isEmptyDirectory=FALSE; isEmptyDirectory=UNKNOWN}}, isAuthoritative=false} expected:<1> but was:<2> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testListStatusWriteBack(ITestS3GuardListConsistency.java:322) {code} If this wasn't failing before, is it because of the recent change in listFiles() (which I just committed)? One nit: {{asS3AFS()}} could be static. 
> ITestS3GuardListConsistency failure w/ Local, authoritative metadata store > -- > > Key: HADOOP-14323 > URL: https://issues.apache.org/jira/browse/HADOOP-14323 > Project: Hadoop Common > Issue Type: Sub-task > Components: s3 >Affects Versions: HADOOP-13345 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri >Priority: Minor > Fix For: HADOOP-13345 > > Attachments: HADOOP-14323-HADOOP-13345.001.patch, > HADOOP-14323-HADOOP-13345.002.patch > > > When doing some testing for HADOOP-14266 I noticed this test failure: > {noformat} > java.lang.NullPointerException: null > at > org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testListStatusWriteBack(ITestS3GuardListConsistency.java:317) > {noformat} > I was running with LocalMetadataStore and > {{fs.s3a.metadatastore.authoritative}} set to true. I haven't been testing > this mode recently so not sure if this case ever worked. Lower priority but > we should fix it. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-14323) ITestS3GuardListConsistency failure w/ Local, authoritative metadata store
[ https://issues.apache.org/jira/browse/HADOOP-14323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu reopened HADOOP-14323: > ITestS3GuardListConsistency failure w/ Local, authoritative metadata store > -- > > Key: HADOOP-14323 > URL: https://issues.apache.org/jira/browse/HADOOP-14323 > Project: Hadoop Common > Issue Type: Sub-task > Components: s3 >Affects Versions: HADOOP-13345 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri >Priority: Minor > Fix For: HADOOP-13345 > > Attachments: HADOOP-14323-HADOOP-13345.001.patch, > HADOOP-14323-HADOOP-13345.002.patch > > > When doing some testing for HADOOP-14266 I noticed this test failure: > {noformat} > java.lang.NullPointerException: null > at > org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testListStatusWriteBack(ITestS3GuardListConsistency.java:317) > {noformat} > I was running with LocalMetadataStore and > {{fs.s3a.metadatastore.authoritative}} set to true. I haven't been testing > this mode recently so not sure if this case ever worked. Lower priority but > we should fix it. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14138) Remove S3A ref from META-INF service discovery, rely on existing core-default entry
[ https://issues.apache.org/jira/browse/HADOOP-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979369#comment-15979369 ] Siddharth Seth commented on HADOOP-14138: - bq. those JIRAs are so old they are implicitly dead. I don't think they're any less relevant today than they were when they were filed. Realistically though, the JIRAs will likely not be fixed: 1) Incompatible, and incompatible in a manner that is not easy to find, since this is not a compilation breakage. 2) Someone needs to actually put in some work to make this happen. bq. To me, having to change defaults is pretty common (we frequently have to tweak core-default settings for a shipping product), and being able to do that in a default config is very low-friction compared to code changes. Isn't that what the site files are for? A lot of people treat the core-default files as documentation: available config, default value, description. In Tez we took the approach of explicitly not having a default file, and generated an output file from the code defaults. Hive uses a nice approach where HiveConf.get(ParamName) implicitly picks up default values. No *-default.xml file there either. That said, if we're moving to discussing core-default.xml vs code defaults, that probably needs a wider audience. The change helps with performance, so that's really good. I think this affects simple invocations like hadoop fs -ls, and it's really good to see this run faster. Hoping that a longer-term change to fix service loaders goes in. Unfortunately I will not be able to contribute to the patch in any case. 
> Remove S3A ref from META-INF service discovery, rely on existing core-default > entry > --- > > Key: HADOOP-14138 > URL: https://issues.apache.org/jira/browse/HADOOP-14138 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Critical > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha3 > > Attachments: HADOOP-14138.001.patch, HADOOP-14138-branch-2-001.patch > > > As discussed in HADOOP-14132, the shaded AWS library is killing performance > starting all hadoop operations, due to classloading on FS service discovery. > This is despite the fact that there is an entry for fs.s3a.impl in > core-default.xml, *we don't need service discovery here* > Proposed: > # cut the entry from > {{/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}} > # when HADOOP-14132 is in, move to that, including declaring an XML file > exclusively for s3a entries > I want this one in first as it's a major performance regression, and one we > could actually backport to 2.7.x, just to improve load time slightly there too -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
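The "defaults live in code" pattern the comment above attributes to Hive and Tez can be sketched in a few lines. This is a minimal, hypothetical model (the class and parameter names below are illustrative, not the actual HiveConf or Hadoop Configuration API): each parameter carries its default value in code, lookups fall back to it, and documentation can be generated from the enum rather than maintained in a *-default.xml file.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: defaults are declared next to the parameter itself,
// so there is no *-default.xml to keep in sync with the code.
class ConfigDefaults {
    enum Param {
        // Keys and defaults below are illustrative examples only.
        FS_S3A_IMPL("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem"),
        IO_BUFFER_SIZE("io.file.buffer.size", "4096");

        final String key;
        final String defaultValue;

        Param(String key, String defaultValue) {
            this.key = key;
            this.defaultValue = defaultValue;
        }
    }

    // Site-style overrides, e.g. loaded from a site file.
    private final Map<String, String> overrides = new HashMap<>();

    void set(String key, String value) {
        overrides.put(key, value);
    }

    // Like HiveConf.get(ParamName): the code-level default is picked up
    // implicitly when no override is present.
    String get(Param p) {
        return overrides.getOrDefault(p.key, p.defaultValue);
    }

    public static void main(String[] args) {
        ConfigDefaults conf = new ConfigDefaults();
        System.out.println(conf.get(Param.IO_BUFFER_SIZE)); // default from code
        conf.set("io.file.buffer.size", "65536");           // site-style override
        System.out.println(conf.get(Param.IO_BUFFER_SIZE)); // override wins
    }
}
```

A generated reference document would simply iterate over {{Param.values()}}, which is how Tez's approach of emitting an output file from code defaults could work.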
[jira] [Updated] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.
[ https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13826: - Fix Version/s: 3.0.0-alpha3 > S3A Deadlock in multipart copy due to thread pool limits. > - > > Key: HADOOP-13826 > URL: https://issues.apache.org/jira/browse/HADOOP-13826 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Sean Mackrory >Assignee: Sean Mackrory >Priority: Critical > Fix For: 2.8.0, 3.0.0-alpha3 > > Attachments: HADOOP-13206-branch-2-005.patch, HADOOP-13826.001.patch, > HADOOP-13826.002.patch, HADOOP-13826.003.patch, HADOOP-13826.004.patch, > HADOOP-13826-branch-2-006.patch, HADOOP-13826-branch-2-007.patch > > > In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The > TransferManager javadocs > (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html) > explain how this is possible: > {quote}It is not recommended to use a single threaded executor or a thread > pool with a bounded work queue as control tasks may submit subtasks that > can't complete until all sub tasks complete. Using an incorrectly configured > thread pool may cause a deadlock (I.E. the work queue is filled with control > tasks that can't finish until subtasks complete but subtasks can't execute > because the queue is filled).{quote} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
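The deadlock pattern quoted from the TransferManager javadocs can be reproduced with plain java.util.concurrent: a "control" task blocks on a subtask submitted to the same pool. The pool sizes below are illustrative, not S3A's actual configuration; the point is that with a single worker (or a full bounded queue) the subtask can never be scheduled, while a pool with spare capacity completes normally.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the control-task/subtask pattern that deadlocks a
// single-threaded or bounded-queue pool.
class CopyPoolDemo {
    static String runCopy(int poolThreads) {
        ExecutorService pool = Executors.newFixedThreadPool(poolThreads);
        try {
            // Control task: submits a subtask to the SAME pool and waits on it.
            Future<String> control = pool.submit(() -> {
                Future<String> subtask = pool.submit(() -> "part-copied");
                // With only one worker thread, this get() would block forever:
                // the control task holds the sole worker while its subtask
                // sits in the queue, never running.
                return subtask.get();
            });
            return control.get();
        } catch (Exception e) {
            return "error: " + e;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // Two threads + unbounded queue: control and subtask run concurrently.
        System.out.println(runCopy(2)); // prints part-copied
        // runCopy(1) would hang, which is the deadlock this issue fixes.
    }
}
```

Giving the copy pool an unbounded work queue, or a rejection policy such as CallerRunsPolicy that runs overflow tasks on the submitting thread, are the usual ways out of this trap.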
[jira] [Updated] (HADOOP-14338) Fix warnings from Spotbugs in hadoop-yarn
[ https://issues.apache.org/jira/browse/HADOOP-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-14338: - Attachment: HADOOP-14338.002.patch > Fix warnings from Spotbugs in hadoop-yarn > - > > Key: HADOOP-14338 > URL: https://issues.apache.org/jira/browse/HADOOP-14338 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HADOOP-14338.001.patch, HADOOP-14338.002.patch > > > Fix warnings from Spotbugs in hadoop-yarn since switched from findbugs to > spotbugs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14266) S3Guard: S3AFileSystem::listFiles() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14266: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HADOOP-13345 Status: Resolved (was: Patch Available) I committed to the feature branch (javadoc fixed). Thank you very much [~fabbri] and [~ste...@apache.org] for your help here. I really appreciate that. Thanks [~rajesh.balamohan] for offline discussion. Now I see no blockers for merging back to {{trunk}} as initial preview. I'll +1 on the vote from now on. But if we can have the delete tracking committed before that, it will be great. Thanks, > S3Guard: S3AFileSystem::listFiles() to employ MetadataStore > --- > > Key: HADOOP-14266 > URL: https://issues.apache.org/jira/browse/HADOOP-14266 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Fix For: HADOOP-13345 > > Attachments: HADOOP-14266-HADOOP-13345.000.patch, > HADOOP-14266-HADOOP-13345.001.patch, HADOOP-14266-HADOOP-13345.002.patch, > HADOOP-14266-HADOOP-13345.003.patch, HADOOP-14266-HADOOP-13345.003.patch, > HADOOP-14266-HADOOP-13345.004.patch, HADOOP-14266-HADOOP-13345-005.patch, > HADOOP-14266-HADOOP-13345.005.patch, HADOOP-14266-HADOOP-13345.006.patch, > HADOOP-14266-HADOOP-13345.007.patch > > > Similar to [HADOOP-13926], this is to track the effort of employing > MetadataStore in {{S3AFileSystem::listFiles()}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14339) Fix warnings from Spotbugs in hadoop-mapreduce
[ https://issues.apache.org/jira/browse/HADOOP-14339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979343#comment-15979343 ] Weiwei Yang commented on HADOOP-14339: -- Hi [~aw] Would you please take a look at this? I am a bit confused by the Jenkins result. I thought I had fixed those warnings, as I see the following {noformat} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) hadoop-mapreduce-project/hadoop-mapreduce-examples generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {noformat} but it still complains in the first few lines, which looks like the old result. What does this mean? Thank you > Fix warnings from Spotbugs in hadoop-mapreduce > -- > > Key: HADOOP-14339 > URL: https://issues.apache.org/jira/browse/HADOOP-14339 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HADOOP-14339.001.patch, HADOOP-14339.002.patch > > > Fix warnings from Spotbugs in hadoop-mapreduce since switched from findbugs > to spotbugs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14339) Fix warnings from Spotbugs in hadoop-mapreduce
[ https://issues.apache.org/jira/browse/HADOOP-14339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-14339: - Attachment: HADOOP-14339.002.patch > Fix warnings from Spotbugs in hadoop-mapreduce > -- > > Key: HADOOP-14339 > URL: https://issues.apache.org/jira/browse/HADOOP-14339 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HADOOP-14339.001.patch, HADOOP-14339.002.patch > > > Fix warnings from Spotbugs in hadoop-mapreduce since switched from findbugs > to spotbugs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13760) S3Guard: add delete tracking
[ https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-13760: --- Attachment: HADOOP-13760-HADOOP-13345.002.patch [~fabbri] - Just the unit tests with Null, Local, and Dynamo implementations. I'm also getting an encryption test and the one after it failing - haven't entirely looked into it yet, but they succeed in isolation so I'm assuming it's HADOOP-14305. As you pointed out offline, -Dlocal doesn't do anything, but because Local's the default it still ran the tests as I intended. And it definitely exercised all 3 implementations, because I saw failures definitely related to each one that I had to fix. I'm getting ready to run some actual workloads on an actual cluster, too. [~ste...@apache.org] - schema versioning aside, this would cause clusters running the old code to continue including deleted items in lists. So it effectively prolongs the inconsistency I'm trying to eliminate until the tombstone gets pruned or otherwise removed. Attaching another incremental patch. I've implemented the TODO to filter out deleted children server-side when we're deciding if a directory is empty. I'm not sure I like this - the docs indicate there are limits that apply to the pre-filtering data size, which very large directories may hit. I'm not clear on whether regular queries would hit the same limits, but with large directories this saves us some network traffic (but not read-bandwidth-against-quotas usage). I also need to dig into the use of .withMaxResults. In my .001. patch I was applying that limit before filtering out deletes, so it's only luck / coincidence that tests didn't fail thinking non-empty directories were empty. So I need to add a test to catch that. Also not sure if that limit applies before or after filtering. If it applies before, I shouldn't use it. Also added a test that does a circular series of renames, and a few fixes that it required. 
Most notably if a directory is created and then renamed fast enough that S3 doesn't return it in lists yet, we used to throw a FileNotFoundException trying to decide if it was empty. We now assume it IS empty. > S3Guard: add delete tracking > > > Key: HADOOP-13760 > URL: https://issues.apache.org/jira/browse/HADOOP-13760 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Sean Mackrory > Attachments: HADOOP-13760-HADOOP-13345.001.patch, > HADOOP-13760-HADOOP-13345.002.patch > > > Following the S3AFileSystem integration patch in HADOOP-13651, we need to add > delete tracking. > Current behavior on delete is to remove the metadata from the MetadataStore. > To make deletes consistent, we need to add a {{isDeleted}} flag to > {{PathMetadata}} and check it when returning results from functions like > {{getFileStatus()}} and {{listStatus()}}. In HADOOP-13651, I added TODO > comments in most of the places these new conditions are needed. The work > does not look too bad. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
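The tombstone filtering discussed above can be sketched with a tiny model. The classes and methods here are hypothetical stand-ins for the real PathMetadata/MetadataStore API, but they show why both listings and the is-this-directory-empty check must skip entries flagged as deleted rather than counting them as live children.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

// Minimal model of delete tracking: each entry may carry a tombstone flag.
class TombstoneDemo {
    static class PathMetadata {
        final String path;
        final boolean isDeleted; // true = tombstone for a deleted entry

        PathMetadata(String path, boolean isDeleted) {
            this.path = path;
            this.isDeleted = isDeleted;
        }
    }

    // listStatus-style view: tombstoned entries must not be returned,
    // otherwise old listings keep resurrecting deleted files.
    static List<String> listChildren(Collection<PathMetadata> entries) {
        return entries.stream()
                .filter(e -> !e.isDeleted)
                .map(e -> e.path)
                .collect(Collectors.toList());
    }

    // A directory whose only children are tombstones is empty for FS
    // purposes (this is the filtering the patch pushes server-side).
    static boolean isEmptyDirectory(Collection<PathMetadata> entries) {
        return listChildren(entries).isEmpty();
    }

    public static void main(String[] args) {
        List<PathMetadata> dir = Arrays.asList(
                new PathMetadata("/dir/live", false),
                new PathMetadata("/dir/deleted", true));
        System.out.println(listChildren(dir));     // only /dir/live
        System.out.println(isEmptyDirectory(dir)); // false
        System.out.println(isEmptyDirectory(
                Collections.singletonList(new PathMetadata("/dir/gone", true)))); // true
    }
}
```

This also illustrates the .withMaxResults pitfall mentioned above: applying a result limit before the tombstone filter can truncate away the live entry and wrongly report a non-empty directory as empty.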
[jira] [Commented] (HADOOP-14338) Fix warnings from Spotbugs in hadoop-yarn
[ https://issues.apache.org/jira/browse/HADOOP-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979329#comment-15979329 ] Hadoop QA commented on HADOOP-14338: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} HADOOP-14338 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-14338 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12864562/HADOOP-14338.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12155/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix warnings from Spotbugs in hadoop-yarn > - > > Key: HADOOP-14338 > URL: https://issues.apache.org/jira/browse/HADOOP-14338 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HADOOP-14338.001.patch > > > Fix warnings from Spotbugs in hadoop-yarn since switched from findbugs to > spotbugs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14338) Fix warnings from Spotbugs in hadoop-yarn
[ https://issues.apache.org/jira/browse/HADOOP-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-14338: - Status: Patch Available (was: Open) > Fix warnings from Spotbugs in hadoop-yarn > - > > Key: HADOOP-14338 > URL: https://issues.apache.org/jira/browse/HADOOP-14338 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HADOOP-14338.001.patch > > > Fix warnings from Spotbugs in hadoop-yarn since switched from findbugs to > spotbugs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14338) Fix warnings from Spotbugs in hadoop-yarn
[ https://issues.apache.org/jira/browse/HADOOP-14338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-14338: - Attachment: HADOOP-14338.001.patch > Fix warnings from Spotbugs in hadoop-yarn > - > > Key: HADOOP-14338 > URL: https://issues.apache.org/jira/browse/HADOOP-14338 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HADOOP-14338.001.patch > > > Fix warnings from Spotbugs in hadoop-yarn since switched from findbugs to > spotbugs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14335) Improve DynamoDB schema update story
[ https://issues.apache.org/jira/browse/HADOOP-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979286#comment-15979286 ] Aaron Fabbri commented on HADOOP-14335: --- Let's say we define: "Backward" schema compatibility: old code, new schema. "Forward" schema compatibility: new code, old schema. It seems like the ability to support forward / backward compatibility for schema versions depends on the semantics of the change. Take this example of adding an {{is_deleted}} boolean "tombstone" to the schema (HADOOP-13760): Since we're just adding a field / column, you'd think we could gracefully provide backwards compatibility, since old code could simply ignore the new field. However, since old code doesn't know what a tombstone is, it silently drops the {{is_deleted=true}} and thinks the file exists. In this example, I'm not sure how we can provide backward compatibility in a clean way. For forward compatibility, we could runtime-disable any tombstone value writes when the schema version is older. This essentially allows an older schema version to disable delete tracking. The offline marker is an interesting idea, but I'm not sure how we handle running clusters. Checking for an offline marker on every operation seems expensive, but necessary, to make this robust. (?) I'm wondering if there is an administrative way to make the table unavailable, i.e. by temporarily changing access credentials (you really want a temporary "single user" mode during schema upgrade). In practice, though, I expect a schema upgrade to consist of nuking the table and updating all your clusters' software. Having a small inconsistency window when you bring the clusters back up with a new empty table seems workable versus having to deal with a schema upgrade script. Thoughts? 
> Improve DynamoDB schema update story > > > Key: HADOOP-14335 > URL: https://issues.apache.org/jira/browse/HADOOP-14335 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > > On HADOOP-13760 I'm realizing that changes to the DynamoDB schema aren't > great to deal with. Currently a build of Hadoop is hard-coded to a specific > schema version. So if you upgrade from one to the next you have to upgrade > everything (and then update the version in the table - which we don't have a > tool or document for) before you can keep using S3Guard. We could possibly > also make the definition of compatibility a bit more flexible, but it's going > to be very tough to do that without knowing what kind of future schema > changes we might want ahead of time. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
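The forward-compatibility idea in the comment above, runtime-disabling tombstone writes when the table reports an older schema version, can be sketched as a simple version gate. The version number and method names below are hypothetical, not the actual S3Guard schema or API:

```java
// Sketch: new code checks the table's schema version and falls back to
// pre-tombstone behaviour (physical removal) when the table is too old,
// so older readers never see an is_deleted field they cannot interpret.
class SchemaGate {
    // Assumed version at which the is_deleted attribute was introduced.
    static final int TOMBSTONE_MIN_VERSION = 2;

    // Per-write decision: may the is_deleted attribute be persisted?
    static boolean tombstonesEnabled(int tableSchemaVersion) {
        return tableSchemaVersion >= TOMBSTONE_MIN_VERSION;
    }

    // A delete against an old-schema table removes the entry outright,
    // accepting the inconsistency window instead of breaking old readers.
    static String deleteAction(int tableSchemaVersion) {
        return tombstonesEnabled(tableSchemaVersion)
                ? "write-tombstone"
                : "remove-entry";
    }

    public static void main(String[] args) {
        System.out.println(deleteAction(1)); // old schema: remove-entry
        System.out.println(deleteAction(2)); // new schema: write-tombstone
    }
}
```

The trade-off matches the comment: delete tracking is effectively disabled on old tables, which is the same window of inconsistency as nuking and recreating the table, but without an upgrade script.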
[jira] [Commented] (HADOOP-14339) Fix warnings from Spotbugs in hadoop-mapreduce
[ https://issues.apache.org/jira/browse/HADOOP-14339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979256#comment-15979256 ] Hadoop QA commented on HADOOP-14339: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 49s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core in trunk has 3 extant Findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 35s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app in trunk has 3 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs in trunk has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-examples in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 28s{color} | {color:orange} hadoop-mapreduce-project: The patch generated 2 new + 142 unchanged - 4 fixed = 144 total (was 146) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 51s{color} | {color:green} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} hadoop-mapreduce-project/hadoop-mapreduce-examples generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 5
[jira] [Commented] (HADOOP-14261) Some refactoring work for erasure coding raw coder
[ https://issues.apache.org/jira/browse/HADOOP-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979221#comment-15979221 ] Hudson commented on HADOOP-14261: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11620 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11620/]) HADOOP-14261. Some refactoring work for erasure coding raw coder. (wang: rev a22fe02fba66280a8e994282e9ead23d9e20669a) * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestRSRawCoderInteroperable2.java * (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderBenchmark.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestXORRawCoderInteroperable2.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestDummyRawCoder.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestRSRawCoderInteroperable1.java * (delete) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactoryLegacy.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestRawCoderBase.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCodecRawCoderMapping.java * (delete) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestRSRawCoderLegacy.java * (delete) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoderLegacy.java * (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestXORRawCoder.java * (delete) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoderLegacy.java * (add) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestRSLegacyRawCoder.java * (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSLegacyRawEncoder.java * (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSLegacyRawErasureCoderFactory.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestRSRawCoder.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestNativeXORRawCoder.java * (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSLegacyRawDecoder.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestNativeRSRawCoder.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestXORRawCoderInteroperable1.java > Some refactoring work for erasure coding raw coder > -- > > Key: HADOOP-14261 > URL: https://issues.apache.org/jira/browse/HADOOP-14261 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Lin Zeng > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14261.001.patch > > > This is from [~andrew.wang] commented in HADOOP-13200: > {quote} > Other questions/comments from looking at this code: > We also should rename RSRawEncoderLegacy to RSLegacyRawEncoder and > RSRawErasureCoderFactoryLegacy to RSLegacyErasureCoderFactory, to match the > naming of other subclasses. > TestRawEncoderBase, should this use a configured factory to get the raw > coders, rather than referencing the raw coders directly? 
I didn't check other > usages, but it seems like we should be creating via the appropriate factory > whenever possible. > {quote} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
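The review suggestion above, creating raw coders through a configured factory rather than referencing concrete classes, might look like this simplified sketch. The interfaces are stand-ins for the real rawcoder API; the benefit is that renames such as RSRawEncoderLegacy -> RSLegacyRawEncoder stay localized to the factory registry instead of rippling through every test.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch: callers name a codec, never a concrete coder class.
class CoderFactoryDemo {
    interface RawEncoder {
        String name();
    }

    static class RSRawEncoder implements RawEncoder {
        public String name() { return "rs"; }
    }

    static class RSLegacyRawEncoder implements RawEncoder {
        public String name() { return "rs-legacy"; }
    }

    // Registry keyed by codec name, standing in for a configured factory
    // (the real code resolves factories via Configuration/CodecUtil).
    static final Map<String, Supplier<RawEncoder>> FACTORIES = new HashMap<>();
    static {
        FACTORIES.put("rs", RSRawEncoder::new);
        FACTORIES.put("rs-legacy", RSLegacyRawEncoder::new);
    }

    static RawEncoder createEncoder(String codec) {
        Supplier<RawEncoder> f = FACTORIES.get(codec);
        if (f == null) {
            throw new IllegalArgumentException("unknown codec: " + codec);
        }
        return f.get();
    }

    public static void main(String[] args) {
        // Tests written this way survive coder-class renames unchanged.
        System.out.println(createEncoder("rs-legacy").name());
    }
}
```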
[jira] [Updated] (HADOOP-14261) Some refactoring work for erasure coding raw coder
[ https://issues.apache.org/jira/browse/HADOOP-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14261: - Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk, thanks for the contribution [~zenglinx]! > Some refactoring work for erasure coding raw coder > -- > > Key: HADOOP-14261 > URL: https://issues.apache.org/jira/browse/HADOOP-14261 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Lin Zeng > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14261.001.patch > > > This is from [~andrew.wang] commented in HADOOP-13200: > {quote} > Other questions/comments from looking at this code: > We also should rename RSRawEncoderLegacy to RSLegacyRawEncoder and > RSRawErasureCoderFactoryLegacy to RSLegacyErasureCoderFactory, to match the > naming of other subclasses. > TestRawEncoderBase, should this use a configured factory to get the raw > coders, rather than referencing the raw coders directly? I didn't check other > usages, but it seems like we should be creating via the appropriate factory > whenever possible. > {quote} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14261) Some refactoring work for erasure coding raw coder
[ https://issues.apache.org/jira/browse/HADOOP-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979194#comment-15979194 ] Andrew Wang commented on HADOOP-14261: -- +1 LGTM, will commit shortly > Some refactoring work for erasure coding raw coder > -- > > Key: HADOOP-14261 > URL: https://issues.apache.org/jira/browse/HADOOP-14261 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Lin Zeng > Fix For: 3.0.0-alpha3 > > Attachments: HADOOP-14261.001.patch > > > This is from [~andrew.wang] commented in HADOOP-13200: > {quote} > Other questions/comments from looking at this code: > We also should rename RSRawEncoderLegacy to RSLegacyRawEncoder and > RSRawErasureCoderFactoryLegacy to RSLegacyErasureCoderFactory, to match the > naming of other subclasses. > TestRawEncoderBase, should this use a configured factory to get the raw > coders, rather than referencing the raw coders directly? I didn't check other > usages, but it seems like we should be creating via the appropriate factory > whenever possible. > {quote} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14339) Fix warnings from Spotbugs in hadoop-mapreduce
[ https://issues.apache.org/jira/browse/HADOOP-14339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-14339: - Status: Patch Available (was: Open) > Fix warnings from Spotbugs in hadoop-mapreduce > -- > > Key: HADOOP-14339 > URL: https://issues.apache.org/jira/browse/HADOOP-14339 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HADOOP-14339.001.patch > > > Fix warnings from Spotbugs in hadoop-mapreduce since switched from findbugs > to spotbugs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14339) Fix warnings from Spotbugs in hadoop-mapreduce
[ https://issues.apache.org/jira/browse/HADOOP-14339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979191#comment-15979191 ] Weiwei Yang commented on HADOOP-14339: -- Got 8 warnings by running {code} dev-support/bin/qbt --plugins=findbugs --console-report-file=/tmp/myrpt.txt --dirty-workspace {code} they are in the following packages {noformat} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core in trunk has 3 extant Findbugs warnings. hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app in trunk has 3 extant Findbugs warnings. hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs in trunk has 1 extant Findbugs warnings. hadoop-mapreduce-project/hadoop-mapreduce-examples in trunk has 1 extant Findbugs warnings. {noformat} Submitting v1 patch to fix. > Fix warnings from Spotbugs in hadoop-mapreduce > -- > > Key: HADOOP-14339 > URL: https://issues.apache.org/jira/browse/HADOOP-14339 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HADOOP-14339.001.patch > > > Fix warnings from Spotbugs in hadoop-mapreduce since switched from findbugs > to spotbugs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14339) Fix warnings from Spotbugs in hadoop-mapreduce
[ https://issues.apache.org/jira/browse/HADOOP-14339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-14339: - Attachment: HADOOP-14339.001.patch > Fix warnings from Spotbugs in hadoop-mapreduce > -- > > Key: HADOOP-14339 > URL: https://issues.apache.org/jira/browse/HADOOP-14339 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HADOOP-14339.001.patch > > > Fix warnings from Spotbugs in hadoop-mapreduce since switched from findbugs > to spotbugs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978968#comment-15978968 ] Wei-Chiu Chuang edited comment on HADOOP-13200 at 4/21/17 4:01 PM: --- I saw a few nits in the 06 patch: * Could you switch to slf4j? That way you can avoid {{LOG.isDebugEnabled()}} and can use curly-brace based parameterization. * Would it be possible to add a test for the case where two coders register the same name? * The following code {code} " cannot be registered because its coder name " + coderFactory {code} shouldn't it be coderFactory.getCoderName()? * Still in CodecRegistry constructor. I think you want to continue instead of break if a coder has a conflict. Otherwise you would throw an exception instead of just logging an error message. was (Author: jojochuang): I saw a few nits in the 06 patch: * Could you switch to slf4j? That way you can avoid {{LOG.isDebugEnabled()}} and can use curly-brace based parameterization. * Could you add a test for the case where two coders register the same name? * The following code {code} " cannot be registered because its coder name " + coderFactory {code} shouldn't it be coderFactory.getCoderName()? > Seeking a better approach allowing to customize and configure erasure coders > > > Key: HADOOP-13200 > URL: https://issues.apache.org/jira/browse/HADOOP-13200 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Tim Yao >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13200.02.patch, HADOOP-13200.03.patch, > HADOOP-13200.04.patch, HADOOP-13200.05.patch, HADOOP-13200.06.patch > > > This is a follow-on task for HADOOP-13010 as discussed over there. There may > be some better approach allowing to customize and configure erasure coders > than the current having raw coder factory, as [~cmccabe] suggested. Will copy > the relevant comments here to continue the discussion. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978968#comment-15978968 ] Wei-Chiu Chuang commented on HADOOP-13200: -- I saw a few nits in the 06 patch: * Could you switch to slf4j? That way you can avoid {{LOG.isDebugEnabled()}} and can use curly-brace based parameterization. * Could you add a test for the case where two coders register the same name? * The following code {code} " cannot be registered because its coder name " + coderFactory {code} shouldn't it be coderFactory.getCoderName()? > Seeking a better approach allowing to customize and configure erasure coders > > > Key: HADOOP-13200 > URL: https://issues.apache.org/jira/browse/HADOOP-13200 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Tim Yao >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13200.02.patch, HADOOP-13200.03.patch, > HADOOP-13200.04.patch, HADOOP-13200.05.patch, HADOOP-13200.06.patch > > > This is a follow-on task for HADOOP-13010 as discussed over there. There may > be some better approach allowing to customize and configure erasure coders > than the current having raw coder factory, as [~cmccabe] suggested. Will copy > the relevant comments here to continue the discussion. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
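The slf4j suggestion above — drop the explicit {{LOG.isDebugEnabled()}} guard in favor of {} placeholders — can be sketched with a tiny stand-in formatter. This is not the real slf4j API, just a self-contained illustration of why parameterization makes the guard unnecessary: the message is only assembled when the level is enabled.

```java
// Stand-in for slf4j-style parameterized logging: arguments are passed
// unformatted, and formatting happens only if the level is enabled.
public class LazyLogDemo {
    static boolean debugEnabled = false;
    static int formatCalls = 0; // counts how often formatting actually ran

    // Substitutes each "{}" with the next argument, like slf4j's formatter.
    static String format(String msg, Object... args) {
        formatCalls++;
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, i = 0;
        while (i < msg.length()) {
            if (i + 1 < msg.length() && msg.charAt(i) == '{' && msg.charAt(i + 1) == '}') {
                sb.append(args[argIdx++]);
                i += 2;
            } else {
                sb.append(msg.charAt(i++));
            }
        }
        return sb.toString();
    }

    static String debug(String msg, Object... args) {
        if (!debugEnabled) {
            return null; // level disabled: no formatting work at all
        }
        return format(msg, args);
    }

    public static void main(String[] args) {
        debug("registered coder {} for codec {}", "RSRawEncoder", "rs");
        System.out.println("formatCalls=" + formatCalls); // prints "formatCalls=0"

        debugEnabled = true;
        System.out.println(debug("registered coder {} for codec {}", "RSRawEncoder", "rs"));
    }
}
```

With the real slf4j {{Logger}}, the call site is simply {{LOG.debug("registered coder {} for codec {}", name, codec)}} and no guard is needed unless building the arguments themselves is expensive.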
[jira] [Comment Edited] (HADOOP-14295) Authentication proxy filter may fail authorization because of getRemoteAddr
[ https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978757#comment-15978757 ] Wei-Chiu Chuang edited comment on HADOOP-14295 at 4/21/17 3:45 PM: --- Sorry to come back to this late. I took a step back and thought about this in depth. I am not quite familiar with this part of the code, but I think it is used by other HDFS daemons (KMS, Httpfs, etc), and it looks like the RM http server also references this class (AuthenticationWithProxyUserFilter). So I think it's not prudent to assume it affects just the NN and DN. On a separate note, should the class {{AuthenticationWithProxyUserFilter}} be annotated with {{\@InterfaceAudience.Private \@InterfaceStability.Unstable}} just like {{AuthenticationFilter}}? Otherwise downstream applications can use it. Would other jira watchers like to chime in? I still would like to know if there are better alternatives than logging a warning and saying it can be a false positive. was (Author: jojochuang): Sorry to come back to this later. I took a step back and thought about this in depth. I am not quite familiar with this part of the code, but I think it is used by other HDFS daemons (KMS, Httpfs, etc), and it looks like the RM http server also references this class (AuthenticationWithProxyUserFilter). So I think it's not prudent to assume it affects just the NN and DN. On a separate note, should the class {{AuthenticationWithProxyUserFilter}} be annotated with {{\@InterfaceAudience.Private \@InterfaceStability.Unstable}} just like {{AuthenticationFilter}}? Otherwise downstream applications can use it. Would other jira watchers like to chime in? 
> Authentication proxy filter may fail authorization because of getRemoteAddr > --- > > Key: HADOOP-14295 > URL: https://issues.apache.org/jira/browse/HADOOP-14295 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1 >Reporter: Jeffrey E Rodriguez >Assignee: Jeffrey E Rodriguez >Priority: Critical > Fix For: 3.0.0-alpha2 > > Attachments: hadoop-14295.001.patch, HADOOP-14295.002.patch, > HADOOP-14295.003.patch, HADOOP-14295.004.patch > > > When we turn on Hadoop UI Kerberos and try to access Datanode /logs, the proxy > (Knox) would get an Authorization failure and its host would show as > 127.0.0.1 even though Knox wasn't local to the Datanode; error message: > {quote} > "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter > (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify > proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1" > {quote} > We were able to figure out that the Datanode has Jetty listening on localhost > and that Netty is used to serve requests to the DataNode; this was a measure to > improve performance because of Netty's async NIO design. > I propose to add a check for the x-forwarded-for header, since proxies usually > inject that header, before we do a getRemoteAddr -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978907#comment-15978907 ] Wei-Chiu Chuang commented on HADOOP-13200: -- bq. This is a good point. I don’t worry about this much and guess JVM can manage the temp objects well, so I’d prefer to leave this simpler. If to avoid this and use another map managing coder names, it will incur some kinds of complexity overhead. Thanks for the comment. This might be true for other projects, but untrue for Hadoop, especially in NameNodes. For DataNodes, this patch returns a new array every time a connection is established. Given that a DataNode can have thousands of concurrent client connections on a busy cluster, adding this extra overhead is not a good idea. Plus, this array will not be updated after initialization, so I think we can do a better job than that. If you think the improvement incurs extra code complexity, I don't mind filing a new jira to improve this. > Seeking a better approach allowing to customize and configure erasure coders > > > Key: HADOOP-13200 > URL: https://issues.apache.org/jira/browse/HADOOP-13200 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Tim Yao >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13200.02.patch, HADOOP-13200.03.patch, > HADOOP-13200.04.patch, HADOOP-13200.05.patch, HADOOP-13200.06.patch > > > This is a follow-on task for HADOOP-13010 as discussed over there. There may > be some better approach allowing to customize and configure erasure coders > than the current having raw coder factory, as [~cmccabe] suggested. Will copy > the relevant comments here to continue the discussion. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
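The per-connection allocation concern above — a fresh array built on every call even though the registry never changes after initialization — is usually addressed by computing the collection once and handing out an unmodifiable view. The sketch below is hypothetical (the names are illustrative, not the actual CodecRegistry API):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: copy the coder names once at construction time and
// return the same unmodifiable list on every call, producing no
// per-connection garbage.
public class CoderNames {
    private final List<String> names;

    CoderNames(String... registered) {
        // One defensive copy; the registry is not updated after initialization.
        this.names = Collections.unmodifiableList(Arrays.asList(registered.clone()));
    }

    List<String> getCoderNames() {
        return names; // same instance every call, no allocation
    }
}
```

Callers that mutate the result get an {{UnsupportedOperationException}} instead of silently diverging from the registry.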
[jira] [Commented] (HADOOP-14295) Authentication proxy filter may fail authorization because of getRemoteAddr
[ https://issues.apache.org/jira/browse/HADOOP-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978757#comment-15978757 ] Wei-Chiu Chuang commented on HADOOP-14295: -- Sorry to come back to this late. I took a step back and thought about this in depth. I am not quite familiar with this part of the code, but I think it is used by other HDFS daemons (KMS, Httpfs, etc), and it looks like the RM http server also references this class (AuthenticationWithProxyUserFilter). So I think it's not prudent to assume it affects just the NN and DN. On a separate note, should the class {{AuthenticationWithProxyUserFilter}} be annotated with {{\@InterfaceAudience.Private \@InterfaceStability.Unstable}} just like {{AuthenticationFilter}}? Otherwise downstream applications can use it. Would other jira watchers like to chime in? > Authentication proxy filter may fail authorization because of getRemoteAddr > --- > > Key: HADOOP-14295 > URL: https://issues.apache.org/jira/browse/HADOOP-14295 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.7.4, 3.0.0-alpha2, 2.8.1 >Reporter: Jeffrey E Rodriguez >Assignee: Jeffrey E Rodriguez >Priority: Critical > Fix For: 3.0.0-alpha2 > > Attachments: hadoop-14295.001.patch, HADOOP-14295.002.patch, > HADOOP-14295.003.patch, HADOOP-14295.004.patch > > > When we turn on Hadoop UI Kerberos and try to access Datanode /logs, the proxy > (Knox) would get an Authorization failure and its host would show as > 127.0.0.1 even though Knox wasn't local to the Datanode; error message: > {quote} > "2017-04-08 07:01:23,029 ERROR security.AuthenticationWithProxyUserFilter > (AuthenticationWithProxyUserFilter.java:getRemoteUser(94)) - Unable to verify > proxy user: Unauthorized connection for super-user: knox from IP 127.0.0.1" > {quote} > We were able to figure out that the Datanode has Jetty listening on localhost > and that Netty is used to serve requests to the DataNode; this was a measure to improve
performance because of Netty's async NIO design. > I propose to add a check for the x-forwarded-for header, since proxies usually > inject that header, before we do a getRemoteAddr -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
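The proposal above — consult the X-Forwarded-For header injected by proxies such as Knox before trusting the raw socket address — can be sketched as follows. This is a hedged illustration, not the actual patch; in a real filter the two parameters would come from {{request.getHeader("X-Forwarded-For")}} and {{request.getRemoteAddr()}}:

```java
// Hypothetical helper: prefer the proxy-injected X-Forwarded-For header over
// the raw socket address, which may be 127.0.0.1 when Jetty binds localhost.
public class RemoteAddrResolver {
    static String effectiveRemoteAddr(String xForwardedFor, String remoteAddr) {
        if (xForwardedFor != null && !xForwardedFor.isEmpty()) {
            // X-Forwarded-For may carry a list "client, proxy1, proxy2";
            // the first entry is the original client.
            return xForwardedFor.split(",")[0].trim();
        }
        return remoteAddr; // no proxy header: fall back to the socket address
    }
}
```

One caveat worth noting in review: X-Forwarded-For is client-supplied, so it should only be trusted for requests arriving from known proxy addresses.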
[jira] [Commented] (HADOOP-14343) Wrong pid file name in error message when starting secure daemon
[ https://issues.apache.org/jira/browse/HADOOP-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978732#comment-15978732 ] Hadoop QA commented on HADOOP-14343: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 2m 34s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 10s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 4s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 5s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | HADOOP-14343 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12864500/HADOOP-14343.01.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs | | uname | Linux a87668cfd936 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b080338 | | shellcheck | v0.4.6 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12153/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12153/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Wrong pid file name in error message when starting secure daemon > > > Key: HADOOP-14343 > URL: https://issues.apache.org/jira/browse/HADOOP-14343 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-14343.01.patch > > > {code}# this is for the daemon pid creation > #shellcheck disable=SC2086 > echo $! > "${jsvcpidfile}" 2>/dev/null > if [[ $? -gt 0 ]]; then > hadoop_error "ERROR: Cannot write ${daemonname} pid ${daemonpidfile}." > fi{code} > It will log datanode's pid file instead of JSVC's pid file. 
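The bug quoted in the description above is that the error message names {{${daemonpidfile}}} while the write actually targets {{${jsvcpidfile}}}. A minimal sketch of the corrected logic (variable names follow the quoted snippet; this is an illustration, not the committed patch):

```shell
# write_jsvc_pid PID FILE: write the jsvc pid, and on failure report the
# jsvc pid file that was actually being written, not the daemon pid file.
write_jsvc_pid() {
  local pid="$1" jsvcpidfile="$2" daemonname="datanode"
  if ! echo "${pid}" > "${jsvcpidfile}" 2>/dev/null; then
    echo "ERROR: Cannot write ${daemonname} jsvc pid ${jsvcpidfile}." >&2
    return 1
  fi
}
```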
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14333) New exception thrown by (private) DFSClient API isHDFSEncryptionEnabled broke hacky hive code
[ https://issues.apache.org/jira/browse/HADOOP-14333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978721#comment-15978721 ] Daryn Sharp commented on HADOOP-14333: -- Quite bothered by hive using private hdfs apis with the added touch of reflection. It's one thing to hack your own project, because when you break it you fix it, but completely different to hack another project... There should be a followup jira to remove the method to ensure hive makes the change. +1 after adding @Deprecated to the method and moving this jira to the hdfs project. > New exception thrown by (private) DFSClient API isHDFSEncryptionEnabled broke > hacky hive code > -- > > Key: HADOOP-14333 > URL: https://issues.apache.org/jira/browse/HADOOP-14333 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.1, 3.0.0-alpha3 >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > Attachments: HADOOP-14333.001.patch, HADOOP-14333.002.patch, > HADOOP-14333.003.patch > > > Though Hive should be fixed not to access DFSClient which is private to > HADOOP, removing the throws added by HADOOP-14104 is a quicker solution to > unblock hive. > Hive code > {code} > private boolean isEncryptionEnabled(DFSClient client, Configuration conf) { > try { > DFSClient.class.getMethod("isHDFSEncryptionEnabled"); > } catch (NoSuchMethodException e) { > // the method is available since Hadoop-2.7.1 > // if we run with an older Hadoop, check this ourselves > return !conf.getTrimmed(DFSConfigKeys.DFS_ENCRYPTION_KEY_PROVIDER_URI, > "").isEmpty(); > } > return client.isHDFSEncryptionEnabled(); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14343) Wrong pid file name in error message when starting secure daemon
[ https://issues.apache.org/jira/browse/HADOOP-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-14343: -- Priority: Minor (was: Major) > Wrong pid file name in error message when starting secure daemon > > > Key: HADOOP-14343 > URL: https://issues.apache.org/jira/browse/HADOOP-14343 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-14343.01.patch > > > {code}# this is for the daemon pid creation > #shellcheck disable=SC2086 > echo $! > "${jsvcpidfile}" 2>/dev/null > if [[ $? -gt 0 ]]; then > hadoop_error "ERROR: Cannot write ${daemonname} pid ${daemonpidfile}." > fi{code} > It will log datanode's pid file instead of JSVC's pid file. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14229) hadoop.security.auth_to_local example is incorrect in the documentation
[ https://issues.apache.org/jira/browse/HADOOP-14229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-14229: -- Priority: Major (was: Trivial) > hadoop.security.auth_to_local example is incorrect in the documentation > --- > > Key: HADOOP-14229 > URL: https://issues.apache.org/jira/browse/HADOOP-14229 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor > Attachments: HADOOP-14229.01.patch, HADOOP-14229.02.patch > > > Let's see jhs as example: > {code}RULE:[2:$1@$0](jhs/.*@.*REALM.TLD)s/.*/mapred/{code} > That means principal has 2 components (jhs/myhost@REALM). > The second column converts this to jhs@REALM. So the regex will not match on > this since regex expects / in the principal. > My suggestion is > {code}RULE:[2:$1](jhs)s/.*/mapred/{code} > https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14343) Wrong pid file name in error message when starting secure daemon
[ https://issues.apache.org/jira/browse/HADOOP-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-14343: -- Priority: Major (was: Minor) > Wrong pid file name in error message when starting secure daemon > > > Key: HADOOP-14343 > URL: https://issues.apache.org/jira/browse/HADOOP-14343 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor > Attachments: HADOOP-14343.01.patch > > > {code}# this is for the daemon pid creation > #shellcheck disable=SC2086 > echo $! > "${jsvcpidfile}" 2>/dev/null > if [[ $? -gt 0 ]]; then > hadoop_error "ERROR: Cannot write ${daemonname} pid ${daemonpidfile}." > fi{code} > It will log datanode's pid file instead of JSVC's pid file. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14231) Using parentheses is not allowed in auth_to_local regex
[ https://issues.apache.org/jira/browse/HADOOP-14231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-14231: -- Target Version/s: 3.0.0-alpha3 > Using parentheses is not allowed in auth_to_local regex > --- > > Key: HADOOP-14231 > URL: https://issues.apache.org/jira/browse/HADOOP-14231 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-14231.01.patch > > > I tried to set the following property for auth_to_local property: > {code}"RULE:[2:$1]((n|d)n)s/.*/hdfs//{code} > but I got the following exception: > {code}Exception in thread "main" java.util.regex.PatternSyntaxException: > Unclosed group near index 9 > (nn|dn|jn{code} > I found that this occurs because {{ruleParser}} in > {{org.apache.hadoop.security.authentication.util.KerberosName}} excludes > closing parentheses. > I do not really see the value of excluding parentheses (do I miss something?) > so I would remove this restriction to be able to use more regex > functionalities. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14343) Wrong pid file name in error message when starting secure daemon
[ https://issues.apache.org/jira/browse/HADOOP-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-14343: -- Target Version/s: 3.0.0-alpha3 > Wrong pid file name in error message when starting secure daemon > > > Key: HADOOP-14343 > URL: https://issues.apache.org/jira/browse/HADOOP-14343 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-14343.01.patch > > > {code}# this is for the daemon pid creation > #shellcheck disable=SC2086 > echo $! > "${jsvcpidfile}" 2>/dev/null > if [[ $? -gt 0 ]]; then > hadoop_error "ERROR: Cannot write ${daemonname} pid ${daemonpidfile}." > fi{code} > It will log datanode's pid file instead of JSVC's pid file. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13238) pid handling is failing on secure datanode
[ https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-13238: -- Target Version/s: 3.0.0-alpha3 > pid handling is failing on secure datanode > -- > > Key: HADOOP-13238 > URL: https://issues.apache.org/jira/browse/HADOOP-13238 > Project: Hadoop Common > Issue Type: Bug > Components: scripts, security >Reporter: Allen Wittenauer >Assignee: Andras Bokor > Attachments: HADOOP-13238.01.patch > > > {code} > hdfs --daemon stop datanode > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: pid has changed for datanode, skip deleting pid file > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: daemon pid has changed for datanode, skip deleting daemon pid file > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14343) Wrong pid file name in error message when starting secure daemon
[ https://issues.apache.org/jira/browse/HADOOP-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-14343: -- Status: Patch Available (was: Open) > Wrong pid file name in error message when starting secure daemon > > > Key: HADOOP-14343 > URL: https://issues.apache.org/jira/browse/HADOOP-14343 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-14343.01.patch > > > {code}# this is for the daemon pid creation > #shellcheck disable=SC2086 > echo $! > "${jsvcpidfile}" 2>/dev/null > if [[ $? -gt 0 ]]; then > hadoop_error "ERROR: Cannot write ${daemonname} pid ${daemonpidfile}." > fi{code} > It will log datanode's pid file instead of JSVC's pid file. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14343) Wrong pid file name in error message when starting secure daemon
[ https://issues.apache.org/jira/browse/HADOOP-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-14343: -- Attachment: HADOOP-14343.01.patch > Wrong pid file name in error message when starting secure daemon > > > Key: HADOOP-14343 > URL: https://issues.apache.org/jira/browse/HADOOP-14343 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Minor > Attachments: HADOOP-14343.01.patch > > > {code}# this is for the daemon pid creation > #shellcheck disable=SC2086 > echo $! > "${jsvcpidfile}" 2>/dev/null > if [[ $? -gt 0 ]]; then > hadoop_error "ERROR: Cannot write ${daemonname} pid ${daemonpidfile}." > fi{code} > It will log datanode's pid file instead of JSVC's pid file. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-14343) Wrong pid file name in error message when starting secure daemon
[ https://issues.apache.org/jira/browse/HADOOP-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor moved MAPREDUCE-6880 to HADOOP-14343: -- Key: HADOOP-14343 (was: MAPREDUCE-6880) Project: Hadoop Common (was: Hadoop Map/Reduce) > Wrong pid file name in error message when starting secure daemon > > > Key: HADOOP-14343 > URL: https://issues.apache.org/jira/browse/HADOOP-14343 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Minor > > {code}# this is for the daemon pid creation > #shellcheck disable=SC2086 > echo $! > "${jsvcpidfile}" 2>/dev/null > if [[ $? -gt 0 ]]; then > hadoop_error "ERROR: Cannot write ${daemonname} pid ${daemonpidfile}." > fi{code} > It will log datanode's pid file instead of JSVC's pid file. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13238) pid handling is failing on secure datanode
[ https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978683#comment-15978683 ] Hadoop QA commented on HADOOP-13238: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 2m 33s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 9s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 2s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | HADOOP-13238 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12864494/HADOOP-13238.01.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs | | uname | Linux 457f56a91e80 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b080338 | | shellcheck | v0.4.6 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12152/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12152/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > pid handling is failing on secure datanode > -- > > Key: HADOOP-13238 > URL: https://issues.apache.org/jira/browse/HADOOP-13238 > Project: Hadoop Common > Issue Type: Bug > Components: scripts, security >Reporter: Allen Wittenauer >Assignee: Andras Bokor > Attachments: HADOOP-13238.01.patch > > > {code} > hdfs --daemon stop datanode > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: pid has changed for datanode, skip deleting pid file > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: daemon pid has changed for datanode, skip deleting daemon pid file > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14342) hadoop binary tarball has doubled in size and is multiple GB unpacked
[ https://issues.apache.org/jira/browse/HADOOP-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-14342. - Resolution: Duplicate Fix Version/s: 2.8.1 duplicate of HADOOP-14270; closing Thanks for catching this, you weren't the first person :) > hadoop binary tarball has doubled in size and is multiple GB unpacked > - > > Key: HADOOP-14342 > URL: https://issues.apache.org/jira/browse/HADOOP-14342 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: André Kelpe >Priority: Minor > Fix For: 2.8.1 > > > I downloaded the binary tarball to play with hadoop 2.8.0 and noticed that > the size has almost doubled. The unpacked tarball is multiple GB big: > {code} > $ du -sh hadoop-2.8.0 > 2.2G hadoop-2.8.0 > {code} > The latest hadoop 2.7.x is only 332MB unpacked: > {code} > $ du -sh hadoop-2.7.3 > 332M hadoop-2.7.3 > {code} > The size increase seems to be in share/doc/ > {code} > $ du -sh hadoop-2.8.0/share/doc/ > 2.0G hadoop-2.8.0/share/doc/ > {code} > {code} > $ du -sh hadoop-2.7.3/share/doc/ > 94M hadoop-2.7.3/share/doc/ > {code} > It looks like something went wrong during the build. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14341) Support multi-line value for ssl.server.exclude.cipher.list
[ https://issues.apache.org/jira/browse/HADOOP-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978680#comment-15978680 ] Steve Loughran commented on HADOOP-14341: - LGTM +1, after one minor change: the logDebug line 145 should be guarded to avoid the joinstrings when debug==false > Support multi-line value for ssl.server.exclude.cipher.list > --- > > Key: HADOOP-14341 > URL: https://issues.apache.org/jira/browse/HADOOP-14341 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.4 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14341.001.patch > > > The multi-line value for {{ssl.server.exclude.cipher.list}} shown in > {{ssl-server.xml.exmple}} does not work. The property value > {code} > > ssl.server.exclude.cipher.list > TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, > SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA, > SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, > SSL_RSA_WITH_RC4_128_MD5 > Optional. The weak security cipher suites that you want > excluded > from SSL communication. > > {code} > is actually parsed into: > * "TLS_ECDHE_RSA_WITH_RC4_128_SHA" > * "SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA" > * "\nSSL_RSA_WITH_DES_CBC_SHA" > * "SSL_DHE_RSA_WITH_DES_CBC_SHA" > * "\nSSL_RSA_EXPORT_WITH_RC4_40_MD5" > * "SSL_RSA_EXPORT_WITH_DES40_CBC_SHA" > * "\nSSL_RSA_WITH_RC4_128_MD5" -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
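The mis-parsing listed above is an untrimmed comma split: the multi-line XML value carries a newline and indentation into every second token, so names like "\nSSL_RSA_WITH_DES_CBC_SHA" never match a cipher suite. A minimal illustration of the fix (trim each token) using a stand-in parser, not Hadoop's actual Configuration/SSLFactory code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Stand-in parser: split on commas, then trim each token so the leading
// "\n        " from the multi-line XML value is discarded.
class CipherListParser {
    static List<String> parse(String raw) {
        return Arrays.stream(raw.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toList());
    }
}
```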
[jira] [Commented] (HADOOP-14313) Replace/improve Hadoop's byte[] comparator
[ https://issues.apache.org/jira/browse/HADOOP-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978676#comment-15978676 ] Steve Loughran commented on HADOOP-14313: - if there's work in comparing array offsets, assume it's being done to minimise array copies and hence boost performance. Have a look at where it's used to see what's happening > Replace/improve Hadoop's byte[] comparator > -- > > Key: HADOOP-14313 > URL: https://issues.apache.org/jira/browse/HADOOP-14313 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Vikas Vishwakarma > Attachments: HADOOP-14313.master.001.patch > > > Hi, > Recently we were looking at the Lexicographic byte array comparison in HBase. > We did microbenchmark for the byte array comparator of HADOOP ( > https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/io/FastByteComparisons.java#L161 > ) , HBase Vs the latest byte array comparator from guava ( > https://github.com/google/guava/blob/master/guava/src/com/google/common/primitives/UnsignedBytes.java#L362 > ) and observed that the guava main branch version is much faster. > Specifically we see very good improvement when the byteArraySize%8 != 0 and > also for large byte arrays. I will update the benchmark results using JMH for > Hadoop vs Guava. For the jira on HBase, please refer HBASE-17877. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
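A sketch of what an offset/length-aware lexicographic compare looks like, and why it matters: wrapping a whole-array comparator (the workaround proposed earlier in the thread) would force copies of each slice, while comparing in place does not. Class and method names here are illustrative, not Hadoop's FastByteComparisons API:

```java
// Illustrative offset/length-aware comparator: treats each byte as unsigned
// and compares slices in place, with no intermediate array copies.
final class ByteSlices {
    /** Lexicographic compare of b1[o1..o1+l1) vs b2[o2..o2+l2). */
    static int compare(byte[] b1, int o1, int l1,
                       byte[] b2, int o2, int l2) {
        int n = Math.min(l1, l2);
        for (int i = 0; i < n; i++) {
            int a = b1[o1 + i] & 0xff;   // unsigned view of each byte
            int b = b2[o2 + i] & 0xff;
            if (a != b) {
                return a - b;
            }
        }
        return l1 - l2;                  // shorter slice sorts first
    }
}
```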
[jira] [Commented] (HADOOP-14138) Remove S3A ref from META-INF service discovery, rely on existing core-default entry
[ https://issues.apache.org/jira/browse/HADOOP-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978670#comment-15978670 ] Steve Loughran commented on HADOOP-14138: - +use XSD to define schema, allow for type of defval to be declared too. {code} <property> <name>fs.s3a.experimental.input.fadvise</name> <description>Which input strategy to use for buffering, seeking and similar when reading data.</description> <default>normal</default> </property> {code} (or, if you declare the fieldname, the DEFAULT value falls out). scope (public, private) would be @ scope attribute; deprecated would set the deprecated tag on both {code} /** * Which input strategy to use for buffering, seeking and similar when * reading data. * Value: {@value} */ @InterfaceStability.Unstable @InterfaceAudience.Public public static final String INPUT_FADVISE = "fs.s3a.experimental.input.fadvise"; /** Default value for {@link #INPUT_FADVISE}: value: {@value}. */ @InterfaceAudience.Public @InterfaceStability.Unstable public static final String INPUT_FADVISE_DEFAULT = "normal"; {code} Build-wise you'd need a new src/xml area, a build section calling ant for {{}} then {{}} to generate things. Oh, and we need somebody who understands XSL. > Remove S3A ref from META-INF service discovery, rely on existing core-default > entry > --- > > Key: HADOOP-14138 > URL: https://issues.apache.org/jira/browse/HADOOP-14138 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Critical > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha3 > > Attachments: HADOOP-14138.001.patch, HADOOP-14138-branch-2-001.patch > > > As discussed in HADOOP-14132, the shaded AWS library is killing performance > starting all hadoop operations, due to classloading on FS service discovery.
> This is despite the fact that there is an entry for fs.s3a.impl in > core-default.xml, *we don't need service discovery here* > Proposed: > # cut the entry from > {{/hadoop-aws/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem}} > # when HADOOP-14132 is in, move to that, including declaring an XML file > exclusively for s3a entries > I want this one in first as it's a major performance regression, and one we > could actually backport to 2.7.x, just to improve load time slightly there too -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14333) New exception thrown by (private) DFSClient API isHDFSEncryptionEnabled broke hacky hive code
[ https://issues.apache.org/jira/browse/HADOOP-14333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978642#comment-15978642 ] Steve Loughran commented on HADOOP-14333: - Filesystems are a special case, not just because HDFS adds stuff, but because there are fundamental differences between different filesystems (case sensitivity, full posix seek+write, atomic dir rename, o(1) File rename, consistent world view). You can't declare that something supports this just through an interface, as (a) it varies at runtime and (b) {{FSDataOutputStream}} shows how base classes declare functionality which subclasses end up rejecting by dynamically throwing exceptions. without getting into the versioning row, note HADOOP-9565 has narrowed down to some method on FileSystem to probe for features, something like {code} boolean hasFeature(Path, String) {code} Implementations can switch on the feature string, return true iff the feature is present and enabled. There's been discussion of a similar problem related to output stream features, we could do some similar interface here. > New exception thrown by (private) DFSClient API isHDFSEncryptionEnabled broke > hacky hive code > -- > > Key: HADOOP-14333 > URL: https://issues.apache.org/jira/browse/HADOOP-14333 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.1, 3.0.0-alpha3 >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > Attachments: HADOOP-14333.001.patch, HADOOP-14333.002.patch, > HADOOP-14333.003.patch > > > Though Hive should be fixed not to access DFSClient which is private to > HADOOP, removing the throws added by HADOOP-14104 is a quicker solution to > unblock hive. 
> Hive code > {code} > private boolean isEncryptionEnabled(DFSClient client, Configuration conf) { > try { > DFSClient.class.getMethod("isHDFSEncryptionEnabled"); > } catch (NoSuchMethodException e) { > // the method is available since Hadoop-2.7.1 > // if we run with an older Hadoop, check this ourselves > return !conf.getTrimmed(DFSConfigKeys.DFS_ENCRYPTION_KEY_PROVIDER_URI, > "").isEmpty(); > } > return client.isHDFSEncryptionEnabled(); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
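For the {{hasFeature}} probe floated in the comment above, calling code might look roughly like this. The interface, method, and feature key are illustrative sketches of the HADOOP-9565 idea (the Path argument is dropped for brevity), not a shipped Hadoop API:

```java
// Sketch: probe a filesystem for a capability via a feature string instead
// of reflecting on private client classes, as the Hive snippet above does.
interface FeatureProbe {
    boolean hasFeature(String feature);   // hypothetical probe method
}

class EncryptionCheck {
    static boolean encryptionEnabled(FeatureProbe fs) {
        // "hdfs.encryption.enabled" is a made-up feature key for illustration.
        return fs.hasFeature("hdfs.encryption.enabled");
    }
}
```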
[jira] [Commented] (HADOOP-13743) error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials has too many spaces
[ https://issues.apache.org/jira/browse/HADOOP-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978634#comment-15978634 ] Hadoop QA commented on HADOOP-13743: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 30s{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_121. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:8515d35 | | JIRA Issue | HADOOP-13743 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12864488/HADOOP-14373-branch-2-002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f2793b664015 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | branch-2 / 3b7bb7b | | Default Java | 1.7.0_121 | | Multi-JDK versions | /u
[jira] [Commented] (HADOOP-14335) Improve DynamoDB schema update story
[ https://issues.apache.org/jira/browse/HADOOP-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978614#comment-15978614 ] Steve Loughran commented on HADOOP-14335: - + also should add this to hadoop compatibility guidelines > Improve DynamoDB schema update story > > > Key: HADOOP-14335 > URL: https://issues.apache.org/jira/browse/HADOOP-14335 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > > On HADOOP-13760 I'm realizing that changes to the DynamoDB schema aren't > great to deal with. Currently a build of Hadoop is hard-coded to a specific > schema version. So if you upgrade from one to the next you have to upgrade > everything (and then update the version in the table - which we don't have a > tool or document for) before you can keep using S3Guard. We could possibly > also make the definition of compatibility a bit more flexible, but it's going > to be very tough to do that without knowing what kind of future schema > changes we might want ahead of time. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13760) S3Guard: add delete tracking
[ https://issues.apache.org/jira/browse/HADOOP-13760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978613#comment-15978613 ] Steve Loughran commented on HADOOP-13760: - commented on the other schema version stuff. One q here is: what will a v1 client do when it hits a table with delete markers? Ignore them? Fail? > S3Guard: add delete tracking > > > Key: HADOOP-13760 > URL: https://issues.apache.org/jira/browse/HADOOP-13760 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Sean Mackrory > Attachments: HADOOP-13760-HADOOP-13345.001.patch > > > Following the S3AFileSystem integration patch in HADOOP-13651, we need to add > delete tracking. > Current behavior on delete is to remove the metadata from the MetadataStore. > To make deletes consistent, we need to add a {{isDeleted}} flag to > {{PathMetadata}} and check it when returning results from functions like > {{getFileStatus()}} and {{listStatus()}}. In HADOOP-13651, I added TODO > comments in most of the places these new conditions are needed. The work > does not look too bad. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
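The delete tracking described above, an {{isDeleted}} tombstone checked by {{getFileStatus()}}- and {{listStatus()}}-style calls, can be sketched with stand-in classes; these are not the real S3Guard {{PathMetadata}}/{{MetadataStore}} types:

```java
import java.util.*;
import java.util.stream.Collectors;

// Stand-in metadata entry carrying the tombstone flag from the description.
class PathMetadata {
    final String path;
    final boolean isDeleted;          // tombstone marker
    PathMetadata(String path, boolean isDeleted) {
        this.path = path;
        this.isDeleted = isDeleted;
    }
}

// Stand-in store: tombstoned entries read as absent and are filtered from
// listings, which is what makes deletes consistent.
class MetadataStore {
    private final Map<String, PathMetadata> entries = new HashMap<>();

    void put(PathMetadata m) { entries.put(m.path, m); }

    /** getFileStatus-style lookup: a tombstoned path looks like a miss. */
    Optional<PathMetadata> get(String path) {
        PathMetadata m = entries.get(path);
        return (m == null || m.isDeleted) ? Optional.empty() : Optional.of(m);
    }

    /** listStatus-style listing: tombstones are filtered out. */
    List<String> list() {
        return entries.values().stream()
                .filter(m -> !m.isDeleted)
                .map(m -> m.path)
                .sorted()
                .collect(Collectors.toList());
    }
}
```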
[jira] [Updated] (HADOOP-13238) pid handling is failing on secure datanode
[ https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-13238: -- Attachment: HADOOP-13238.01.patch > pid handling is failing on secure datanode > -- > > Key: HADOOP-13238 > URL: https://issues.apache.org/jira/browse/HADOOP-13238 > Project: Hadoop Common > Issue Type: Bug > Components: scripts, security >Reporter: Allen Wittenauer >Assignee: Andras Bokor > Attachments: HADOOP-13238.01.patch > > > {code} > hdfs --daemon stop datanode > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: pid has changed for datanode, skip deleting pid file > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: daemon pid has changed for datanode, skip deleting daemon pid file > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13238) pid handling is failing on secure datanode
[ https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-13238: -- Status: Patch Available (was: In Progress) > pid handling is failing on secure datanode > -- > > Key: HADOOP-13238 > URL: https://issues.apache.org/jira/browse/HADOOP-13238 > Project: Hadoop Common > Issue Type: Bug > Components: scripts, security >Reporter: Allen Wittenauer >Assignee: Andras Bokor > Attachments: HADOOP-13238.01.patch > > > {code} > hdfs --daemon stop datanode > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: pid has changed for datanode, skip deleting pid file > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: daemon pid has changed for datanode, skip deleting daemon pid file > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14335) Improve DynamoDB schema update story
[ https://issues.apache.org/jira/browse/HADOOP-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978608#comment-15978608 ] Steve Loughran commented on HADOOP-14335: - I added the version in there so we can at least recognise that there are version mismatches; without that we'd be stuck. At least now we can recognise the problem. I didn't add any logic about forward/backward compat as I didn't know what to add. Same for an update tool: didn't write one until it was needed. Maybe we could add a minor version marker and have logic of "guaranteed compatibility across minor versions, but not major ones". One aspect of table update is we may also want to add an offline marker to the DB too; when doing a schema update or other maintenance, you should be able to mark the DB as offline; clients would be expected to see this and for non-auth: downgrade, for auth: fail. > Improve DynamoDB schema update story > > > Key: HADOOP-14335 > URL: https://issues.apache.org/jira/browse/HADOOP-14335 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > > On HADOOP-13760 I'm realizing that changes to the DynamoDB schema aren't > great to deal with. Currently a build of Hadoop is hard-coded to a specific > schema version. So if you upgrade from one to the next you have to upgrade > everything (and then update the version in the table - which we don't have a > tool or document for) before you can keep using S3Guard. We could possibly > also make the definition of compatibility a bit more flexible, but it's going > to be very tough to do that without knowing what kind of future schema > changes we might want ahead of time. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
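The "compatible across minor versions, but not major ones" rule and the offline maintenance marker proposed in the comment above can be sketched as follows; all fields and the policy itself are the comment's proposal, not shipped S3Guard code:

```java
// Sketch of the proposed table state: a major/minor schema version plus an
// offline flag set during schema updates or other maintenance.
class TableState {
    final int major;
    final int minor;
    final boolean offline;

    TableState(int major, int minor, boolean offline) {
        this.major = major;
        this.minor = minor;
        this.offline = offline;
    }
}

// Sketch of the client-side policy: minor drift is tolerated, a major
// mismatch or an offline table is not.
class SchemaPolicy {
    final int clientMajor;
    SchemaPolicy(int clientMajor) { this.clientMajor = clientMajor; }

    boolean canUse(TableState t) {
        return !t.offline && t.major == clientMajor;
    }
}
```

(For the non-auth/auth split in the comment, a non-authoritative client could downgrade instead of failing when {{canUse}} returns false.)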
[jira] [Commented] (HADOOP-14324) Refine S3 server-side-encryption key as encryption secret; improve error reporting and diagnostics
[ https://issues.apache.org/jira/browse/HADOOP-14324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978606#comment-15978606 ] Steve Loughran commented on HADOOP-14324: - [~jzhuge] OK, I see HADOOP-12451: you're aware of it, and it looks like I've seen it at one point. This patch does do the hoops of deprecation by looking for the old one, and using that as the default for the new one. Even though the ASF hasn't shipped this code and we could opt not to care, having CDH's S3 in sync with the ASF one can only be good all round: consistent stack traces, etc. > Refine S3 server-side-encryption key as encryption secret; improve error > reporting and diagnostics > -- > > Key: HADOOP-14324 > URL: https://issues.apache.org/jira/browse/HADOOP-14324 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HADOOP-14324-branch-2-001.patch, > HADOOP-14324-branch-2-002.patch, HADOOP-14324-branch-2-003.patch, > HADOOP-14324-trunk-003.patch > > > Before this ships, can we rename {{fs.s3a.server-side-encryption-key}} to > {{fs.s3a.server-side-encryption.key}}. > This makes it consistent with all other .key secrets in S3A, so > * simplifies documentation > * reduces confusion "is it a - or a ."? This confusion is going to surface in > config and support > I know that CDH is shipping with the old key, but it'll be easy for them to > add a deprecation property to handle the migration. I do at least want the > ASF release to be stable before it ships. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
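The deprecation handling the comment describes ("looking for the old one, and using that as the default for the new one") amounts to a fallback resolution. A minimal sketch with a plain map standing in for a Configuration; the key names come from the issue, the helper is illustrative:

```java
import java.util.Map;

// Resolve the new dotted key, falling back to the old hyphenated key when
// only the old one is set, so existing deployments keep working.
class EncryptionKeyResolver {
    static final String OLD_KEY = "fs.s3a.server-side-encryption-key";
    static final String NEW_KEY = "fs.s3a.server-side-encryption.key";

    static String resolve(Map<String, String> conf) {
        String legacy = conf.getOrDefault(OLD_KEY, "");
        return conf.getOrDefault(NEW_KEY, legacy);   // new key wins when set
    }
}
```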
[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978587#comment-15978587 ] Hadoop QA commented on HADOOP-13200: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 44s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 17 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 2s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 9s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 1s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}175m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | | | hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics | | | hadoop.hdfs.TestMaintenanceState | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestCrcCorruption | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | HADOOP-13200 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12864457/HADOOP-13200.06.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 69be201ec9d4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Bui
[jira] [Updated] (HADOOP-13743) error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials has too many spaces
[ https://issues.apache.org/jira/browse/HADOOP-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13743: Attachment: HADOOP-14373-branch-2-002.patch This is patch 001 rebased onto branch-2 (no actual change in the patch). testing: Azure EU > error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials > has too many spaces > > > Key: HADOOP-13743 > URL: https://issues.apache.org/jira/browse/HADOOP-13743 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 2.8.0, 2.7.3 >Reporter: Steve Loughran >Priority: Trivial > Attachments: HADOOP-13743-branch-2-001.patch, > HADOOP-14373-branch-2-002.patch > > > The error message on a failed hadoop fs -ls command against an unauthed azure > container has an extra space in {{" them in"}} > {code} > ls: org.apache.hadoop.fs.azure.AzureException: Unable to access container > demo in account example.blob.core.windows.net using anonymous credentials, > and no credentials found for them in the configuration. > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13743) error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials has too many spaces
[ https://issues.apache.org/jira/browse/HADOOP-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13743: Assignee: Steve Loughran Affects Version/s: 2.8.0 Status: Patch Available (was: Open) > error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials > has too many spaces > > > Key: HADOOP-13743 > URL: https://issues.apache.org/jira/browse/HADOOP-13743 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 2.7.3, 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Trivial > Attachments: HADOOP-13743-branch-2-001.patch, > HADOOP-14373-branch-2-002.patch > > > The error message on a failed hadoop fs -ls command against an unauthed azure > container has an extra space in {{" them in"}} > {code} > ls: org.apache.hadoop.fs.azure.AzureException: Unable to access container > demo in account example.blob.core.windows.net using anonymous credentials, > and no credentials found for them in the configuration. > {code}
[jira] [Updated] (HADOOP-13743) error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials has too many spaces
[ https://issues.apache.org/jira/browse/HADOOP-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13743: Status: Open (was: Patch Available) > error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials > has too many spaces > > > Key: HADOOP-13743 > URL: https://issues.apache.org/jira/browse/HADOOP-13743 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Priority: Trivial > Attachments: HADOOP-13743-branch-2-001.patch > > > The error message on a failed hadoop fs -ls command against an unauthed azure > container has an extra space in {{" them in"}} > {code} > ls: org.apache.hadoop.fs.azure.AzureException: Unable to access container > demo in account example.blob.core.windows.net using anonymous credentials, > and no credentials found for them in the configuration. > {code}
[jira] [Comment Edited] (HADOOP-13238) pid handling is failing on secure datanode
[ https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978504#comment-15978504 ] Andras Bokor edited comment on HADOOP-13238 at 4/21/17 11:56 AM: - [~aw] The root cause here is that JSVC deletes its own pid file, which was passed to it with the {{-pidfile}} option, so after stop {{cat}} fails. Honestly, I feel HADOOP-12364 fixes a bug in an external monitoring tool rather than in Hadoop. That is a pretty rare case (I cannot even imagine how it could happen), so I think it is enough here to check whether the pid file exists. If it does not, JSVC deleted the file, so we do not need to do the check-and-delete. In addition, the error message shows up twice because both {{hadoop_stop_daemon}} and {{hadoop_stop_secure_daemon}} do the same check and delete the same pid file; the second one can be removed from the code. After my patch the test still passes. {{hadoop_stop_daemon.bats}} and {{hadoop_stop_secure_daemon.bats}} run the same test, so the first one seems unnecessary. Also, I added a new test to prove that the pid file is deleted when everything goes well. {code}abokor$ bats hadoop_stop_secure_daemon.bats ✓ hadoop_stop_secure_daemon_when_pid_file_changes ✓ hadoop_stop_secure_daemon_deletes_pid_file 2 tests, 0 failures{code} Output after patch: {code}root@abokor-practice-5:/grid/0# hadoop-3.0.0-alpha2/sbin/start-dfs.sh Starting namenodes on [abokor-practice-2.openstacklocal] Starting datanodes Starting secondary namenodes [abokor-practice-5] root@abokor-practice-5:/grid/0# hadoop-3.0.0-alpha2/sbin/stop-dfs.sh Stopping namenodes on [abokor-practice-2.openstacklocal] Stopping datanodes Stopping secondary namenodes [abokor-practice-5]{code} was (Author: boky01): [~aw] The root cause here is that JSVC will delete the pid file which was passed to it with {{-pidfile}} option. So after stop {{cat}} will fail. 
Honestly, I feel HADOOP-12364 solves a bug in an external monitoring tool rather than in Hadoop. That is a pretty rare case (I cannot even imagine how can it happen) so I think here it is enough to check that whether the pid file exists or not. If not that means JSVC deleted the file so we do not need to do check and delete. In addition the error message shows up twice because either {{hadoop_stop_daemon.bats}} or {{hadoop_stop_secure_daemon.bats}} do the same check and deletes the same pid file. The second one can be removed from the code. After my patch the test still passes. {{hadoop_stop_daemon.bats}} and {{hadoop_stop_secure_daemon.bats}} do the same test so the first seems unnecessary. {code}abokor$ bats hadoop_stop_secure_daemon.bats ✓ hadoop_stop_secure_daemon 1 test, 0 failures{code} > pid handling is failing on secure datanode > -- > > Key: HADOOP-13238 > URL: https://issues.apache.org/jira/browse/HADOOP-13238 > Project: Hadoop Common > Issue Type: Bug > Components: scripts, security >Reporter: Allen Wittenauer >Assignee: Andras Bokor > > {code} > hdfs --daemon stop datanode > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: pid has changed for datanode, skip deleting pid file > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: daemon pid has changed for datanode, skip deleting daemon pid file > {code}
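The existence check described above can be sketched in shell. This is an illustration of the idea, not the actual Hadoop shell library: the function and file names are made up, and the real {{hadoop_stop_secure_daemon}} does more (pid comparison, privilege handling).

```shell
# Sketch: skip the cat/compare/delete steps entirely when jsvc has
# already removed its own pid file, instead of emitting
# "No such file or directory" warnings.
stop_secure_daemon_sketch() {
  pidfile=$1
  if [ ! -f "${pidfile}" ]; then
    # jsvc deleted the file on shutdown; nothing left to check or remove.
    return 0
  fi
  pid=$(cat "${pidfile}")
  kill "${pid}" 2>/dev/null
  rm -f "${pidfile}"
}

tmpdir=$(mktemp -d)

# Case 1: pid file already gone (the secure-datanode case) -- no warning.
stop_secure_daemon_sketch "${tmpdir}/hadoop-hdfs-root-datanode.pid" \
  && echo "no spurious warning"

# Case 2: pid file still present -- it is removed as before.
sleep 30 &
echo $! > "${tmpdir}/hadoop-hdfs-root-datanode.pid"
stop_secure_daemon_sketch "${tmpdir}/hadoop-hdfs-root-datanode.pid"
[ ! -f "${tmpdir}/hadoop-hdfs-root-datanode.pid" ] && echo "pid file deleted"
```

The point of the early return is that a missing pid file after a secure-daemon stop is the expected outcome, not an error worth warning about.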
[jira] [Commented] (HADOOP-14261) Some refactoring work for erasure coding raw coder
[ https://issues.apache.org/jira/browse/HADOOP-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978563#comment-15978563 ] Hadoop QA commented on HADOOP-14261: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 14 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 0s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 17 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 48s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 43s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 new + 26 unchanged - 2 fixed = 28 total (was 28) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 3s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}109m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | HADOOP-14261 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12864465/HADOOP-14261.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux c8896eff1c46 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b080338 | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/12149/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12149/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12149/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12149/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HADOOP-13238) pid handling is failing on secure datanode
[ https://issues.apache.org/jira/browse/HADOOP-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-13238: -- Attachment: (was: HADOOP-13238.01.patch) > pid handling is failing on secure datanode > -- > > Key: HADOOP-13238 > URL: https://issues.apache.org/jira/browse/HADOOP-13238 > Project: Hadoop Common > Issue Type: Bug > Components: scripts, security >Reporter: Allen Wittenauer >Assignee: Andras Bokor > > {code} > hdfs --daemon stop datanode > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: pid has changed for datanode, skip deleting pid file > cat: /home/hadoop/H/pids/hadoop-hdfs-root-datanode.pid: No such file or > directory > WARNING: daemon pid has changed for datanode, skip deleting daemon pid file > {code}
[jira] [Created] (HADOOP-14342) hadoop binary tarball has doubled in size and is multiple GB unpacked
André Kelpe created HADOOP-14342: Summary: hadoop binary tarball has doubled in size and is multiple GB unpacked Key: HADOOP-14342 URL: https://issues.apache.org/jira/browse/HADOOP-14342 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.8.0 Reporter: André Kelpe Priority: Minor I downloaded the binary tarball to play with hadoop 2.8.0 and noticed that the size has almost doubled. The unpacked tarball is multiple GB big: {code} $ du -sh hadoop-2.8.0 2.2G hadoop-2.8.0 {code} The latest hadoop 2.7.x is only 332MB unpacked: {code} $ du -sh hadoop-2.7.3 332M hadoop-2.7.3 {code} The size increase seems to be in share/doc/ {code} $ du -sh hadoop-2.8.0/share/doc/ 2.0G hadoop-2.8.0/share/doc/ {code} {code} $ du -sh hadoop-2.7.3/share/doc/ 94M hadoop-2.7.3/share/doc/ {code} It looks like something went wrong during the build.
[jira] [Commented] (HADOOP-14310) RolloverSignerSecretProvider.LOG should be @VisibleForTesting
[ https://issues.apache.org/jira/browse/HADOOP-14310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978527#comment-15978527 ] Daniel Templeton commented on HADOOP-14310: --- Typically it should be private, but this class' logger is used in {{TestZKSignerSecretProvider}} and {{TestRandomSignerSecretProvider}}. > RolloverSignerSecretProvider.LOG should be @VisibleForTesting > - > > Key: HADOOP-14310 > URL: https://issues.apache.org/jira/browse/HADOOP-14310 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0 >Reporter: Daniel Templeton >Assignee: Arun Shanmugam Kumar >Priority: Minor > Labels: newbie >
[jira] [Commented] (HADOOP-14341) Support multi-line value for ssl.server.exclude.cipher.list
[ https://issues.apache.org/jira/browse/HADOOP-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15978523#comment-15978523 ] Hadoop QA commented on HADOOP-14341: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 26s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 17 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 14s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 new + 170 unchanged - 1 fixed = 172 total (was 171) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 7s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | HADOOP-14341 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12864466/HADOOP-14341.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 3a0b9758325d 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b080338 | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/12150/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12150/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12150/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12150/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Support multi-line value for ssl.server.exclude.cipher.list > --- > >
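In practical terms, supporting a multi-line value for {{ssl.server.exclude.cipher.list}} amounts to stripping the newlines and indentation that a wrapped XML property value carries before splitting on commas. The sketch below illustrates just that trimming step in shell; the cipher names are made-up examples, and the real fix lives in Hadoop's Java config handling, not in a script.

```shell
# A multi-line property value as it would appear inside <value>...</value>,
# wrapped for readability in ssl-server.xml.
raw='TLS_ECDHE_RSA_WITH_RC4_128_SHA,
     SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
     SSL_RSA_WITH_DES_CBC_SHA'

# Delete spaces, tabs, and newlines so the list parses the same as a
# single-line value would.
cleaned=$(printf '%s' "$raw" | tr -d ' \t\n')
echo "$cleaned"
# TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,SSL_RSA_WITH_DES_CBC_SHA
```

Without the trimming, each wrapped line yields a cipher name with embedded whitespace that never matches anything, which is why long single-line values were previously required.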