[jira] [Resolved] (HADOOP-13191) FileSystem#listStatus should not return null
[ https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-13191. - Resolution: Duplicate Thanks [~boky01], it is a dup of HADOOP-7352. One idea is to add {{@NotNull}} to {{FileSystem#listStatus}}: {code} @NotNull public abstract FileStatus[] listStatus(Path f) throws IOException; {code} Then run IntelliJ - Analyze - "Run Inspection by Name ..." - "@NotNull/@Nullable problems" (multiple times if necessary) in order to: * Propagate {{@NotNull}} to subclasses' {{listStatus}} methods * Detect any implementation of {{listStatus}} that may return {{null}} > FileSystem#listStatus should not return null > > > Key: HADOOP-13191 > URL: https://issues.apache.org/jira/browse/HADOOP-13191 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > > This came out of the discussion in HADOOP-12718. The {{FileSystem#listStatus}} > contract does not indicate that {{null}} is a valid return value, and some callers do not > test for {{null}} before use: > AbstractContractGetFileStatusTest#testListStatusEmptyDirectory: > {code} > assertEquals("ls on an empty directory not of length 0", 0, > fs.listStatus(subfolder).length); > {code} > ChecksumFileSystem#copyToLocalFile: > {code} > FileStatus[] srcs = listStatus(src); > for (FileStatus srcFile : srcs) { > {code} > SimpleCopyListing#getFileStatus: > {code} > FileStatus[] fileStatuses = fileSystem.listStatus(path); > if (excludeList != null && excludeList.size() > 0) { > ArrayList fileStatusList = new ArrayList<>(); > for(FileStatus status : fileStatuses) { > {code} > IMHO, there is no good reason for {{listStatus}} to return {{null}}. It > should return an empty list instead. > To enforce the contract that null is an invalid return value, update the javadoc and > consider IntelliJ IDEA's @Nullable and @NotNull annotations. 
> So far, I am only aware of the following functions that can return null: > * RawLocalFileSystem#listStatus -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
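The "return an empty list instead of null" contract above can be sketched with a small defensive helper; this is a self-contained illustration (String stands in for FileStatus, and the class/method names are hypothetical, not Hadoop API):

```java
// Hypothetical sketch: guarding a listing call whose contract may return null.
// String stands in for FileStatus to keep the example self-contained.
public class ListStatusGuard {
    // Returns the input array, or an empty array when the callee returned
    // null, so callers can use .length or iterate without a null check.
    public static String[] orEmpty(String[] listing) {
        return listing == null ? new String[0] : listing;
    }

    public static void main(String[] args) {
        // Simulates a RawLocalFileSystem-style null return on an empty dir.
        String[] fromEmptyDir = orEmpty(null);
        System.out.println(fromEmptyDir.length); // 0 -- safe, no NPE
        for (String name : orEmpty(new String[] {"a", "b"})) {
            System.out.println(name);
        }
    }
}
```

The same shape is what the issue asks of every `listStatus` implementation: normalize the "nothing found" case to a zero-length array at the source rather than in each caller.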
[jira] [Updated] (HADOOP-13155) Implement TokenRenewer in KMS and HttpFS
[ https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13155: --- Attachment: HADOOP-13155.02.patch Thanks Wei-Chiu for the review comments. # I think the APIs in the KMS documentation are the HTTP REST APIs, which live on {{org.apache.hadoop.crypto.key.kms.server.KMS}}, so there is no need to update them. # Good catch on the xml; added it to core-default.xml as well. The only intention was that KMSUtil cannot access HDFSConfigKeys, so I had to expose the key somewhere that common can use. I guess we have to keep it in both places... Patch 2 also fixes the javac/checkstyle issues above. Please take a look and provide your feedback. Thanks! > Implement TokenRenewer in KMS and HttpFS > > > Key: HADOOP-13155 > URL: https://issues.apache.org/jira/browse/HADOOP-13155 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, > HADOOP-13155.pre.patch > > > Service DelegationToken (DT) renewal is done in Yarn by > {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}}, > where it calls {{Token#renew}} and uses a ServiceLoader to find the renewer > class > ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]), > and invokes the renew method on it. > We seem to be missing the token renewer class in KMS / HttpFSFileSystem, and hence > Yarn defaults to {{TrivialRenewer}} for DTs of such kinds, resulting in the > token not being renewed. > As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} > API, but I don't see it invoked in the hadoop code base. KMS does not have any > renew hook.
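The ServiceLoader lookup described in the issue can be sketched as follows; the interface and class names here are illustrative stand-ins, not Hadoop's actual `TokenRenewer` API. The point is why a missing registration silently degrades to a no-op renewer:

```java
import java.util.ServiceLoader;

// Hypothetical sketch of a ServiceLoader-based renewer lookup: scan the
// registered providers for one that handles the token kind, and fall back
// to a no-op "trivial" renewer when none is found -- which is why a missing
// KMS/HttpFS renewer class means the token is never actually renewed.
public class RenewerLookup {
    public interface Renewer {
        boolean handles(String tokenKind);
        long renew(String tokenKind);
    }

    // No-op fallback, analogous in spirit to Hadoop's TrivialRenewer.
    static final Renewer TRIVIAL = new Renewer() {
        public boolean handles(String kind) { return true; }
        public long renew(String kind) { return -1; } // cannot actually renew
    };

    public static Renewer find(String tokenKind) {
        // Providers would be registered under META-INF/services; in this
        // self-contained example nothing is registered, so the loop is empty.
        for (Renewer r : ServiceLoader.load(Renewer.class)) {
            if (r.handles(tokenKind)) {
                return r;
            }
        }
        return TRIVIAL;
    }

    public static void main(String[] args) {
        // No provider registered -> trivial fallback, mirroring the bug report.
        System.out.println(find("kms-dt").renew("kms-dt")); // -1
    }
}
```

Adding a renewer class (and its service registration) is what makes `find` return something that can actually talk to the KMS/HttpFS renewal endpoint.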
[jira] [Commented] (HADOOP-12754) Client.handleSaslConnectionFailure() uses wrong user in exception text
[ https://issues.apache.org/jira/browse/HADOOP-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297669#comment-15297669 ] Vinayakumar B commented on HADOOP-12754: {code} } else { - String msg = "Couldn't setup connection for " - + UserGroupInformation.getLoginUser().getUserName() + " to " - + remoteId; + String msg = + "Couldn't setup connection for " + ugi + " to " + remoteId; LOG.warn(msg, ex); {code} How about the above change? It would give the full details. > Client.handleSaslConnectionFailure() uses wrong user in exception text > -- > > Key: HADOOP-12754 > URL: https://issues.apache.org/jira/browse/HADOOP-12754 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc, security >Affects Versions: 2.7.2 >Reporter: Steve Loughran >Priority: Minor > Attachments: HADOOP-12754-001.patch > > > {{Client.handleSaslConnectionFailure()}} includes the user in SASL failure > messages, but it calls {{UGI.getLoginUser()}} for its text. If there's an > auth problem in a {{doAs()}} context, this exception is fundamentally > misleading
[jira] [Comment Edited] (HADOOP-13196) DF default interval value is not consistent
[ https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297543#comment-15297543 ] Masatake Iwasaki edited comment on HADOOP-13196 at 5/24/16 2:21 AM: bq. Changed the default value to 3000 to make sure its backward compatible. I think this is an incompatible change for code using {{DF(File path, Configuration conf)}}. 3000 is used if {{DF}} is run as the main class (or core-default.xml is not on the classpath); otherwise 60000 has been the default value. was (Author: iwasakims): bq. Changed the default value to 3000 to make sure its backward compatible. I think this is an incompatible change for code using {{DF(File path, Configuration conf)}}. 3000 is used if {{DF}} is run as the main class (or core-default.xml is not on the classpath); otherwise 60000 has been the default value since HADOOP-6233. > DF default interval value is not consistent > --- > > Key: HADOOP-13196 > URL: https://issues.apache.org/jira/browse/HADOOP-13196 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Attachments: HADOOP-13196.001.patch > > > In {{core-default.xml}}, the value of the property {{fs.df.interval}} is > 60000. This value is defined in > {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}; however, this value > is never used. > When this property is used in {{DF}}, the default value is > {{DF.DF_INTERVAL_DEFAULT}} = 3000. > This can cause potential confusion and should be fixed.
[jira] [Commented] (HADOOP-13196) DF default interval value is not consistent
[ https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297543#comment-15297543 ] Masatake Iwasaki commented on HADOOP-13196: --- bq. Changed the default value to 3000 to make sure its backward compatible. I think this is an incompatible change for code using {{DF(File path, Configuration conf)}}. 3000 is used if {{DF}} is run as the main class (or core-default.xml is not on the classpath); otherwise 60000 has been the default value since HADOOP-6233. > DF default interval value is not consistent > --- > > Key: HADOOP-13196 > URL: https://issues.apache.org/jira/browse/HADOOP-13196 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Attachments: HADOOP-13196.001.patch > > > In {{core-default.xml}}, the value of the property {{fs.df.interval}} is > 60000. This value is defined in > {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}; however, this value > is never used. > When this property is used in {{DF}}, the default value is > {{DF.DF_INTERVAL_DEFAULT}} = 3000. > This can cause potential confusion and should be fixed.
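Until the two defaults are reconciled, a deployment can sidestep the ambiguity by setting the key explicitly; a minimal core-site.xml fragment (assuming 60000 ms, the core-default.xml value discussed in this issue):

```xml
<!-- core-site.xml: pin the df refresh interval explicitly (milliseconds) so
     both the DF main-class path and the Configuration-based path agree. -->
<property>
  <name>fs.df.interval</name>
  <value>60000</value>
</property>
```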
[jira] [Commented] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
[ https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297478#comment-15297478 ] Hadoop QA commented on HADOOP-13162: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 32s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 22s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s {color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 21s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s {color} | {color:green} branch-2 passed with JDK 
v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s {color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 13s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 20s {color} | {color:green} root: The patch generated 0 new + 23 unchanged - 2 fixed = 23 total (was 25) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 38s {color} | {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s {color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 54s {color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s {color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {colo
[jira] [Commented] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.
[ https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297346#comment-15297346 ] Hadoop QA commented on HADOOP-13105: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 47s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 27s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 
26s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s {color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 24 new + 36 unchanged - 0 fixed = 60 total (was 36) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 29s {color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 48s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.net.TestDNS | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805768/HADOOP-13105.001.patch | | JIRA Issue | HADOOP-13105 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 137fd747300d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4b0f55b | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9563/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9563/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HADOOP-Build/9563/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9563/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCo
[jira] [Updated] (HADOOP-13135) Encounter response code 500 when accessing /metrics endpoint
[ https://issues.apache.org/jira/browse/HADOOP-13135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-13135: Description: When accessing /metrics endpoint on hbase master through hadoop 2.7.1, I got: {code} HTTP ERROR 500 Problem accessing /metrics. Reason: INTERNAL_SERVER_ERROR Caused by: java.lang.NullPointerException at org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029) at org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221) at org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) {code} [~ajisakaa] suggested that code 500 should be 404 (NOT FOUND). was: When accessing /metrics endpoint on hbase master through hadoop 2.7.1, I got: {code} HTTP ERROR 500 Problem accessing /metrics. 
Reason: INTERNAL_SERVER_ERROR Caused by: java.lang.NullPointerException at org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029) at org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221) at org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) {code} [~ajisakaa] suggested that code 500 should be 404 (NOT FOUND). > Encounter response code 500 when accessing /metrics endpoint > > > Key: HADOOP-13135 > URL: https://issues.apache.org/jira/browse/HADOOP-13135 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.1 >Reporter: Ted Yu > > When accessing /metrics endpoint on hbase master through hadoop 2.7.1, I got: > {code} > HTTP ERROR 500 > Problem accessing /metrics. 
Reason: > INTERNAL_SERVER_ERROR > Caused by: > java.lang.NullPointerException > at > org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029) > at > org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) > at > org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221) > at > org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > {code} > [~ajisakaa] suggested that code 500 should be 404 (NOT FOUND).
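The suggested behaviour change can be sketched as a tiny status-selection function; this is a hypothetical illustration of the logic, not the actual HttpServer2/MetricsServlet API:

```java
// Hypothetical sketch: when the /metrics servlet is not wired up (the source
// of the NullPointerException in the stack trace above), answer 404 rather
// than letting the NPE surface as a 500. Names are illustrative only.
public class MetricsEndpoint {
    public static int statusFor(Object metricsSystem, boolean accessAllowed) {
        if (metricsSystem == null) {
            return 404; // endpoint not configured -> NOT_FOUND, not a server error
        }
        return accessAllowed ? 200 : 403;
    }

    public static void main(String[] args) {
        System.out.println(statusFor(null, true));         // 404
        System.out.println(statusFor(new Object(), true)); // 200
    }
}
```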
[jira] [Updated] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
[ https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HADOOP-13162: -- Status: Patch Available (was: Open) > Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs > --- > > Key: HADOOP-13162 > URL: https://issues.apache.org/jira/browse/HADOOP-13162 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13162-branch-2-002.patch, > HADOOP-13162-branch-2-003.patch, HADOOP-13162.001.patch > > > getFileStatus is a relatively expensive call and mkdirs invokes it multiple > times depending on how deep the directory structure is. It would be good to > reduce the number of getFileStatus calls in such cases.
[jira] [Updated] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.
[ https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13105: --- Attachment: HADOOP-13105.001.patch Thanks [~cnauroth] for the suggestion. I had a look at minikdc and found it's not straightforward to simply extend it. Actually I figured out a way similar to your last comment about {{TestWebHdfsTimeouts}}. The only magic is {{AUTHENTICATE_SUCCESS_MSG}}. I don't like this hacky message, but it's the best I can come up with. The bright side is that we're testing both the connect and the read timeout using a dummy server. As you stated, the JNDI documentation clearly spells out how to set both the connection and read timeouts. Still, in case the JNDI env variables ever stop working in an upstream package, we'll find out sooner rather than later. As to exploring ApacheDS for testing the LDAP mapping code, I like the idea. Thanks for letting me know about the in-progress [HADOOP-8145] work, [~jojochuang]. Actually I was expecting something like that before I checked out {{TestLdapGroupsMapping}}, and was disappointed that we were just mocking the stuff. However, since 1) the change will bring new dependencies (the ApacheDS test module), 2) it is heavy to use (I personally don't like the aspect-like annotations), and 3) I don't know an easy way to make the server delay for a specific period, I suggest we consolidate the effort of testing these features against a real LDAP server along with the other test cases in [HADOOP-8145], clearly in a new class as you're doing. > Support timeouts in LDAP queries in LdapGroupsMapping. > -- > > Key: HADOOP-13105 > URL: https://issues.apache.org/jira/browse/HADOOP-13105 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Reporter: Chris Nauroth >Assignee: Mingliang Liu > Attachments: HADOOP-13105.000.patch, HADOOP-13105.001.patch > > > {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries. 
> This can create a risk of a very long/infinite wait on a connection.
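The JNDI settings the patch revolves around can be sketched with the JDK's own LDAP provider; the `com.sun.jndi.ldap.*` keys are the documented JNDI properties, while the helper class and the idea of feeding them from Hadoop configuration keys are assumptions here. No connection is made in this example:

```java
import java.util.Hashtable;
import javax.naming.Context;

// Sketch: build a JNDI environment with bounded connect/read timeouts so an
// LDAP query cannot hang indefinitely. Only the environment is constructed;
// an InitialDirContext created from it would honor these limits.
public class LdapTimeoutEnv {
    public static Hashtable<String, String> buildEnv(String url,
                                                     int connectTimeoutMs,
                                                     int readTimeoutMs) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        // Bounded waits instead of a potentially infinite hang:
        env.put("com.sun.jndi.ldap.connect.timeout",
                String.valueOf(connectTimeoutMs));
        env.put("com.sun.jndi.ldap.read.timeout",
                String.valueOf(readTimeoutMs));
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env =
                buildEnv("ldap://ldap.example.com:389", 5000, 10000);
        System.out.println(env.get("com.sun.jndi.ldap.connect.timeout")); // 5000
    }
}
```

The dummy-server test strategy discussed above then exercises both properties: a server that accepts but never responds should trip the read timeout, and an unreachable address the connect timeout.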
[jira] [Commented] (HADOOP-13112) Change CredentialShell to use CommandShell base class
[ https://issues.apache.org/jira/browse/HADOOP-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297063#comment-15297063 ] Hudson commented on HADOOP-13112: - SUCCESS: Integrated in Hadoop-trunk-Commit #9843 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9843/]) HADOOP-13112. Change CredentialShell to use CommandShell base class (aw: rev eebb39a56fe504672b79ea04c6040e360496b6d7) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyShell.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/CommandShell.java * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialShell.java > Change CredentialShell to use CommandShell base class > - > > Key: HADOOP-13112 > URL: https://issues.apache.org/jira/browse/HADOOP-13112 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Matthew Paduano >Assignee: Matthew Paduano >Priority: Minor > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-13112.01.patch, HADOOP-13112.02.patch, > HADOOP-13112.03.patch, HADOOP-13112.04.patch, HADOOP-13112.05.patch, > HADOOP-13112.06.patch > > > org.apache.hadoop.tools.CommandShell is a base class created for use by > DtUtilShell. It was inspired by CredentialShell and much of it was taken > verbatim. It should be a simple change to get CredentialShell to use the > base class and simplify its code without changing its functionality.
[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297060#comment-15297060 ] Hadoop QA commented on HADOOP-13171: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 7 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 17s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 30s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} branch-2 passed with JDK 
v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s {color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 6 new + 53 unchanged - 12 fixed = 59 total (was 65) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s {color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s {color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 8s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:babe025 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805734/HADOOP-13171-branch-2-006.patch | | JIRA Issue | HADOOP-13171 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a5982f8d9d6b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/pro
[jira] [Updated] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-13197: Assignee: Xiaoyu Yao > Add non-decayed call metrics for DecayRpcScheduler > -- > > Key: HADOOP-13197 > URL: https://issues.apache.org/jira/browse/HADOOP-13197 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc, metrics >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > > DecayRpcScheduler currently exposes the decayed call count over time. It will > be useful to expose the non-decayed raw count for monitoring applications. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler
Xiaoyu Yao created HADOOP-13197: --- Summary: Add non-decayed call metrics for DecayRpcScheduler Key: HADOOP-13197 URL: https://issues.apache.org/jira/browse/HADOOP-13197 Project: Hadoop Common Issue Type: Improvement Components: ipc, metrics Reporter: Xiaoyu Yao DecayRpcScheduler currently exposes the decayed call count over time. It will be useful to expose the non-decayed raw count for monitoring applications.
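The distinction the issue draws can be modelled in a few lines (all names below are invented for the sketch; the real logic lives in {{DecayRpcScheduler}}): the decayed count is periodically multiplied by a decay factor so it reflects recent traffic, while the proposed raw count is never decayed and gives monitoring tools a lifetime total.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal model of the two counters: both increment per call, but only the
// decayed one is halved on each decay sweep (decay factor 0.5 assumed here).
public class CallCounters {
    private final AtomicLong decayedCount = new AtomicLong();
    private final AtomicLong rawCount = new AtomicLong();

    void onCall() {
        decayedCount.incrementAndGet();
        rawCount.incrementAndGet();
    }

    // Invoked periodically by the scheduler's decay timer.
    void decaySweep() {
        decayedCount.set(decayedCount.get() / 2);
    }

    long decayed() { return decayedCount.get(); }
    long raw()     { return rawCount.get(); }

    public static void main(String[] args) {
        CallCounters c = new CallCounters();
        for (int i = 0; i < 8; i++) {
            c.onCall();
        }
        c.decaySweep();
        // After one sweep the decayed count under-reports total volume.
        System.out.println(c.decayed() + " / " + c.raw());  // 4 / 8
    }
}
```

After a few sweeps the decayed value of an idle caller approaches zero even though its raw total is large, which is exactly why a separate non-decayed metric is useful for dashboards.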
[jira] [Commented] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials
[ https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297046#comment-15297046 ] Sean Mackrory commented on HADOOP-12537: I'm getting a 301 Moved Permanently as though my bucket and S3 endpoint weren't matching up, but I've double-checked they're right. I also just tested with version 1.11.2 of the SDK and did not have success. I'm less concerned by the fact that it seems to require matching STS and S3 regions much of the time, although it sounds from my previous comments like it was more flexible last time around. > s3a: Add flag for session ID to allow Amazon STS temporary credentials > -- > > Key: HADOOP-12537 > URL: https://issues.apache.org/jira/browse/HADOOP-12537 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: 2.7.1 >Reporter: Sean Mackrory >Priority: Minor > Attachments: HADOOP-12537.001.patch, HADOOP-12537.002.patch, > HADOOP-12537.003.patch, HADOOP-12537.004.patch, HADOOP-12537.diff, > HADOOP-12537.diff > > > Amazon STS allows you to issue temporary access key id / secret key pairs for > a user / role. However, using these credentials also requires specifying > a session ID. There is currently no such configuration property or the > required code to pass it through to the API (at least not that I can find) in > any of the S3 connectors.
[jira] [Updated] (HADOOP-13112) Change CredentialShell to use CommandShell base class
[ https://issues.apache.org/jira/browse/HADOOP-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13112: -- Resolution: Fixed Fix Version/s: 3.0.0-alpha1 Status: Resolved (was: Patch Available) +1 committed to trunk Thanks! > Change CredentialShell to use CommandShell base class > - > > Key: HADOOP-13112 > URL: https://issues.apache.org/jira/browse/HADOOP-13112 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Matthew Paduano >Assignee: Matthew Paduano >Priority: Minor > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-13112.01.patch, HADOOP-13112.02.patch, > HADOOP-13112.03.patch, HADOOP-13112.04.patch, HADOOP-13112.05.patch, > HADOOP-13112.06.patch > > > org.apache.hadoop.tools.CommandShell is a base class created for use by > DtUtilShell. It was inspired by CredentialShell and much of it was taken > verbatim. It should be a simple change to get CredentialShell to use the > base class and simplify its code without changing its functionality.
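The refactor described above — a shared base class owning argument handling and exit codes, with subclasses supplying only their subcommands — can be sketched in miniature. The classes below are invented for the sketch and only mirror the shape of {{org.apache.hadoop.tools.CommandShell}} and {{CredentialShell}}, not their actual APIs:

```java
// Base class owns argument parsing, dispatch, and exit codes; subclasses
// only map subcommand names to behaviour (hypothetical mini-classes).
abstract class MiniShell {
    /** Returns the subcommand's output, or null if the name is unknown. */
    protected abstract String runSubCommand(String name);

    public int run(String[] args) {
        if (args.length == 0) {
            System.out.println("usage: <create|list|delete>");
            return 1;
        }
        String out = runSubCommand(args[0]);
        if (out == null) {
            System.out.println("unknown command: " + args[0]);
            return 1;
        }
        System.out.println(out);
        return 0;
    }
}

public class MiniCredentialShell extends MiniShell {
    @Override
    protected String runSubCommand(String name) {
        switch (name) {
            case "create": return "created credential";
            case "list":   return "listing credentials";
            case "delete": return "deleted credential";
            default:       return null;
        }
    }

    public static void main(String[] args) {
        int rc = new MiniCredentialShell().run(new String[] {"list"});
        System.out.println("exit code " + rc);  // exit code 0
    }
}
```

The payoff is the one the issue claims: the subclass shrinks to a table of subcommands, and usage/error handling stays in one place.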
[jira] [Commented] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials
[ https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297026#comment-15297026 ] Hadoop QA commented on HADOOP-12537: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 19s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | 
{color:blue} mvndep {color} | {color:blue} 0m 13s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 5s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 19s {color} | {color:red} root: The patch generated 8 new + 22 unchanged - 0 fixed = 30 total (was 22) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s {color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 38s {color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s {color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 56s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805726/HADOOP-12537.004.patch | | JIRA Issue | HADOOP-12537 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux 43d479c8bffb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/pa
[jira] [Commented] (HADOOP-13181) WASB append support: getPos incorrect
[ https://issues.apache.org/jira/browse/HADOOP-13181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297020#comment-15297020 ] Bogdan Raducanu commented on HADOOP-13181: -- I attached a small patch for this, with the solution I described. I've also added 2 assertions on getPos in {{AbstractContractAppendTest}} as suggested. Please review when possible > WASB append support: getPos incorrect > - > > Key: HADOOP-13181 > URL: https://issues.apache.org/jira/browse/HADOOP-13181 > Project: Hadoop Common > Issue Type: Bug > Components: azure >Affects Versions: 2.8.0 >Reporter: Bogdan Raducanu > Attachments: HADOOP-13181.001.patch, append.java > > > See attached code. > Cause: > In NativeAzureFileSystem.java: the append method returns > {code} > new FSDataOutputStream(appendStream, statistics) > {code} > Instead, it should probably return > {code} > new FSDataOutputStream(appendStream, statistics, meta.getLength()) > {code}
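The bug shape in the diagnosis above can be shown with a toy position tracker. The class below is a simplified stand-in for {{FSDataOutputStream}}'s position cache (names are illustrative, not Hadoop's internals): when appending, the position must be seeded with the current file length, or {{getPos()}} starts from zero.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Simplified model of output-stream position tracking: the constructor's
// second argument plays the role of the third FSDataOutputStream argument
// (meta.getLength()) that the WASB append path omitted.
class CountingStream extends OutputStream {
    private final OutputStream inner;
    private long pos;

    CountingStream(OutputStream inner, long initialPos) {
        this.inner = inner;
        this.pos = initialPos;
    }

    @Override
    public void write(int b) throws IOException {
        inner.write(b);
        pos++;
    }

    long getPos() {
        return pos;
    }
}

public class GetPosDemo {
    static long posAfterOneByte(long initialPos) throws IOException {
        CountingStream out = new CountingStream(new ByteArrayOutputStream(), initialPos);
        out.write(1);
        return out.getPos();
    }

    public static void main(String[] args) throws IOException {
        // Bug shape: appending one byte to a 100-byte file, position seeded
        // with 0, reports getPos() == 1 although the file is 101 bytes long.
        System.out.println(posAfterOneByte(0));    // 1
        // Fix shape: seed with the existing length, as meta.getLength() would.
        System.out.println(posAfterOneByte(100));  // 101
    }
}
```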
[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13171: Status: Patch Available (was: Open) > Add StorageStatistics to S3A; instrument some more operations > - > > Key: HADOOP-13171 > URL: https://issues.apache.org/jira/browse/HADOOP-13171 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13171-branch-2-001.patch, > HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, > HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, > HADOOP-13171-branch-2-006.patch > > > Add {{StorageStatistics}} support to S3A, collecting the same metrics as the > instrumentation, but sharing across all instances.
[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13171: Attachment: HADOOP-13171-branch-2-006.patch HADOOP-13171 Patch 006: checkstyle, plus a bit more logging on one of the cost tests. > Add StorageStatistics to S3A; instrument some more operations > - > > Key: HADOOP-13171 > URL: https://issues.apache.org/jira/browse/HADOOP-13171 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13171-branch-2-001.patch, > HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, > HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, > HADOOP-13171-branch-2-006.patch > > > Add {{StorageStatistics}} support to S3A, collecting the same metrics as the > instrumentation, but sharing across all instances.
[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13171: Status: Open (was: Patch Available) > Add StorageStatistics to S3A; instrument some more operations > - > > Key: HADOOP-13171 > URL: https://issues.apache.org/jira/browse/HADOOP-13171 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13171-branch-2-001.patch, > HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, > HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch > > > Add {{StorageStatistics}} support to S3A, collecting the same metrics as the > instrumentation, but sharing across all instances.
[jira] [Commented] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials
[ https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297004#comment-15297004 ] Steve Loughran commented on HADOOP-12537: - Interesting: Frankfurt is rejecting me with a 400, even if I use normal credentials. Does moving to a later AWS library make a difference? > s3a: Add flag for session ID to allow Amazon STS temporary credentials > -- > > Key: HADOOP-12537 > URL: https://issues.apache.org/jira/browse/HADOOP-12537 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: 2.7.1 >Reporter: Sean Mackrory >Priority: Minor > Attachments: HADOOP-12537.001.patch, HADOOP-12537.002.patch, > HADOOP-12537.003.patch, HADOOP-12537.004.patch, HADOOP-12537.diff, > HADOOP-12537.diff > > > Amazon STS allows you to issue temporary access key id / secret key pairs for > a user / role. However, using these credentials also requires specifying > a session ID. There is currently no such configuration property or the > required code to pass it through to the API (at least not that I can find) in > any of the S3 connectors.
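The configuration flag the issue asks for implies a credential-selection step: if a session token is configured alongside the key pair, temporary STS credentials are in play; otherwise fall back to plain long-lived credentials. A minimal sketch of that decision follows — the property name {{fs.s3a.session.token}} is illustrative, so check the committed patch for the name actually adopted:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of choosing a credential provider from configuration. The Map
// stands in for a Hadoop Configuration object; property names are assumed.
public class StsCredentialChooser {
    static String chooseProvider(Map<String, String> conf) {
        String token = conf.get("fs.s3a.session.token");
        if (token != null && !token.isEmpty()) {
            return "temporary";  // access key + secret key + session token
        }
        return "simple";         // access key + secret key only
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.s3a.access.key", "AKIAEXAMPLE");
        conf.put("fs.s3a.secret.key", "secret");
        System.out.println(chooseProvider(conf));        // simple
        conf.put("fs.s3a.session.token", "FQoDEXAMPLE");
        System.out.println(chooseProvider(conf));        // temporary
    }
}
```

In the real connector the "temporary" branch would construct session credentials from all three values before handing them to the S3 client; the sketch only captures the branching the new property introduces.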
[jira] [Updated] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials
[ https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-12537: --- Attachment: HADOOP-12537.004.patch Forgot to git add the new files... > s3a: Add flag for session ID to allow Amazon STS temporary credentials > -- > > Key: HADOOP-12537 > URL: https://issues.apache.org/jira/browse/HADOOP-12537 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: 2.7.1 >Reporter: Sean Mackrory >Priority: Minor > Attachments: HADOOP-12537.001.patch, HADOOP-12537.002.patch, > HADOOP-12537.003.patch, HADOOP-12537.004.patch, HADOOP-12537.diff, > HADOOP-12537.diff > > > Amazon STS allows you to issue temporary access key id / secret key pairs for > a user / role. However, using these credentials also requires specifying > a session ID. There is currently no such configuration property or the > required code to pass it through to the API (at least not that I can find) in > any of the S3 connectors.
[jira] [Updated] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials
[ https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HADOOP-12537: --- Attachment: HADOOP-12537.003.patch Attaching an updated patch. Most (but not all) of the regions seem to be failing if the bucket and STS endpoint are for different regions. I'm having additional issues with Frankfurt and Seoul that I still need to look into. > s3a: Add flag for session ID to allow Amazon STS temporary credentials > -- > > Key: HADOOP-12537 > URL: https://issues.apache.org/jira/browse/HADOOP-12537 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: 2.7.1 >Reporter: Sean Mackrory >Priority: Minor > Attachments: HADOOP-12537.001.patch, HADOOP-12537.002.patch, > HADOOP-12537.003.patch, HADOOP-12537.diff, HADOOP-12537.diff > > > Amazon STS allows you to issue temporary access key id / secret key pairs for > a user / role. However, using these credentials also requires specifying > a session ID. There is currently no such configuration property or the > required code to pass it through to the API (at least not that I can find) in > any of the S3 connectors.
[jira] [Commented] (HADOOP-7352) Contracts of LocalFileSystem and DistributedFileSystem should require FileSystem::listStatus throw IOException not return null upon access error
[ https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296947#comment-15296947 ] Colin Patrick McCabe commented on HADOOP-7352: -- This should be easier with the new jdk7 changes. We now have access to directory listing APIs like DirectoryStream that throw IOEs on problems instead of returning null. > Contracts of LocalFileSystem and DistributedFileSystem should require > FileSystem::listStatus throw IOException not return null upon access error > > > Key: HADOOP-7352 > URL: https://issues.apache.org/jira/browse/HADOOP-7352 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, fs/s3 >Reporter: Matt Foley >Assignee: Matt Foley > > In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should > throw FileNotFoundException instead of returning null, when the target > directory did not exist. > However, in LocalFileSystem implementation today, FileSystem::listStatus > still may return null, when the target directory exists but does not grant > read permission. This causes NPE in many callers, for all the reasons cited > in HADOOP-6201 and HDFS-538. See HADOOP-7327 and its linked issues for > examples.
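The contrast in that comment can be shown with the JDK's own APIs: the old {{File.listFiles()}} conflates "missing" and "unreadable" into a {{null}} return, while the jdk7 {{Files.newDirectoryStream()}} throws a typed {{IOException}} instead, so an empty directory is unambiguously zero entries. A minimal sketch:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ListingContrast {
    // Old style: callers must null-check or risk an NPE, and the cause of
    // the failure (missing? unreadable?) is lost.
    static int countWithListFiles(String dir) {
        File[] entries = new File(dir).listFiles();
        return entries == null ? -1 : entries.length;  // -1 means "some error"
    }

    // jdk7 style: a missing directory raises NoSuchFileException and an
    // unreadable one AccessDeniedException, so 0 always means "empty".
    static int countWithDirectoryStream(String dir) throws IOException {
        int n = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(Paths.get(dir))) {
            for (Path ignored : stream) {
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) {
        String missing = "no-such-directory";
        System.out.println(countWithListFiles(missing));  // -1, cause unknown
        try {
            countWithDirectoryStream(missing);
        } catch (IOException e) {
            // A typed exception carrying the failing path, not a bare null.
            System.out.println(e.getClass().getSimpleName());
        }
    }
}
```

This is why building {{listStatus}} on the nio APIs makes the "throw, don't return null" contract easy to honor.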
[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296901#comment-15296901 ] Hadoop QA commented on HADOOP-13171: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 56s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 7 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 29s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s {color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} branch-2 passed with JDK 
v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s {color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 9 new + 54 unchanged - 12 fixed = 63 total (was 66) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s {color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s {color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 32s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:babe025 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805709/HADOOP-13171-branch-2-005.patch | | JIRA Issue | HADOOP-13171 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1ae09c6edcbe 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personali
[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296876#comment-15296876 ] Steve Loughran commented on HADOOP-13171: - For the curious, here's the debug level output of the test of {{copyFromLocalFile()}} Everyone of those object_*_requests events is an HTTPS round trip, with all the overhead of TCP setup and HTTPS auth. Very expensive long haul Here: setup: 500ms upload: 250ms cleanup: 400ms That's almost pathological: long-haul connection, small file upload. It just highlights: don't use small objects. And anything which can be done on start/stop is worthwhile doing, as here there's about 4x more time spent round the upload than the upload itself. {code} 2016-05-23 19:43:20,775 [Thread-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initUserAgent(388)) - Using User-Agent: Hadoop 2.9.0-SNAPSHOT 2016-05-23 19:43:22,483 [Thread-0] INFO contract.AbstractFSContractTestBase (AbstractFSContractTestBase.java:setup(172)) - Test filesystem = s3a://stevel-ireland implemented by S3AFileSystem{uri=s3a://stevel-ireland, workingDir=s3a://stevel-ireland/user/stevel, partSize=104857600, enableMultiObjectsDelete=true, maxKeys=5000, readAhead=65536, blockSize=33554432, multiPartThreshold=2147483647, statistics {0 bytes read, 0 bytes written, 0 read ops, 0 large read ops, 0 write ops}, metrics {{Context=S3AFileSystem} {FileSystemId=ef20e446-8481-4da5-a406-7ae687fa49de-stevel-ireland} {fsURI=s3a://stevel-ireland/} {files_created=0} {files_copied=0} {files_copied_bytes=0} {files_deleted=0} {directories_created=0} {directories_deleted=0} {ignored_errors=0} {invocations_copyfromlocalfile=0} {invocations_exists=0} {invocations_getfilestatus=0} {invocations_globstatus=0} {invocations_is_directory=0} {invocations_is_file=0} {invocations_listlocatedstatus=0} {invocations_liststatus=0} {invocations_mdkirs=0} {invocations_rename=0} {object_copy_requests=0} {object_delete_requests=0} 
{object_list_requests=0} {object_metadata_requests=0} {object_multipart_aborted=0} {object_put_bytes=0} {object_put_requests=0} {streamReadOperations=0} {streamForwardSeekOperations=0} {streamBytesRead=0} {streamSeekOperations=0} {streamReadExceptions=0} {streamOpened=0} {streamReadOperationsIncomplete=0} {streamAborted=0} {streamReadFullyOperations=0} {streamClosed=0} {streamBytesSkippedOnSeek=0} {streamCloseOperations=0} {streamBytesBackwardsOnSeek=0} {streamBackwardSeekOperations=0} }} 2016-05-23 19:43:22,485 [Thread-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:innerMkdirs(1370)) - Making directory: /test 2016-05-23 19:43:22,485 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - invocations_mdkirs += 1 -> 1 2016-05-23 19:43:22,486 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - invocations_getfilestatus += 1 -> 1 2016-05-23 19:43:22,486 [Thread-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:getFileStatus(1412)) - Getting path status for /test (test) 2016-05-23 19:43:22,486 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - object_metadata_requests += 1 -> 1 2016-05-23 19:43:22,519 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - object_metadata_requests += 1 -> 2 2016-05-23 19:43:22,549 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - object_list_requests += 1 -> 1 2016-05-23 19:43:22,620 [Thread-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:getFileStatus(1505)) - Not Found: /test 2016-05-23 19:43:22,620 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - invocations_getfilestatus += 1 -> 2 2016-05-23 19:43:22,620 [Thread-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:getFileStatus(1412)) - Getting path status for /test (test) 2016-05-23 19:43:22,620 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - 
object_metadata_requests += 1 -> 3 2016-05-23 19:43:22,652 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - object_metadata_requests += 1 -> 4 2016-05-23 19:43:22,682 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - object_list_requests += 1 -> 2 2016-05-23 19:43:22,720 [Thread-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:getFileStatus(1505)) - Not Found: /test 2016-05-23 19:43:22,721 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - invocations_getfilestatus += 1 -> 3 2016-05-23 19:43:22,721 [Thread-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:getFileStatus(1412)) - Getting path status for / () 2016-05-23 19:43:22,721 [Thread-0] DEBUG s3a.S3AFileSystem (S3AStorageStatistics.java:incrementCounter(59)) - object_list_requests += 1 -> 3 2016-05-23 19:43:22,759 [Thread-0]
[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13171: Attachment: HADOOP-13171-branch-2-005.patch Patch 005: statistics on put and delete operations, more overloaded methods (exists(), isDirectory()), and a test on copyFileToLocal to verify that the method works with the metrics updated in the process. The ProgressListener class has been factored out and cleaned up for easier reporting; in the process the various output streams have had much of their use of the s3a client moved into S3AFileSystem, with public methods provided which include the instrumentation counter updates. Also fixes a possible race condition wherein the creation of empty directories was not being awaited on, which may have been a trigger for some intermittent race conditions. > Add StorageStatistics to S3A; instrument some more operations > - > > Key: HADOOP-13171 > URL: https://issues.apache.org/jira/browse/HADOOP-13171 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13171-branch-2-001.patch, > HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, > HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch > > > Add {{StorageStatistics}} support to S3A, collecting the same metrics as the > instrumentation, but sharing across all instances.
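The race mentioned in that patch note has a common shape: work submitted to a thread pool fire-and-forget style, letting the caller proceed before the uploads complete. A sketch of the fix pattern — collect the returned {{Future}}s and block on them — with all names invented for the sketch (the simulated "PUT" is just a list append, not the real S3A upload path):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AwaitDirMarkers {
    // Submits one simulated PUT per directory-marker key, then awaits them
    // all. Dropping the second loop reintroduces the race: the caller could
    // observe the store before every marker exists.
    static void createMarkers(ExecutorService pool, List<String> keys,
                              List<String> store) throws Exception {
        List<Future<?>> pending = new ArrayList<>();
        for (String key : keys) {
            pending.add(pool.submit(() -> store.add(key)));  // simulated PUT
        }
        for (Future<?> f : pending) {
            f.get();  // the missing await: block until each upload completes
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<String> store = new CopyOnWriteArrayList<>();
        createMarkers(pool, Arrays.asList("dir/", "dir/sub/"), store);
        System.out.println(store.size());  // 2: both markers are visible
        pool.shutdown();
    }
}
```

Awaiting also surfaces upload failures as {{ExecutionException}} at the call site instead of losing them on a pool thread, which is often the bigger win than closing the timing window.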
[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations
[ https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-13171:
------------------------------------
    Status: Patch Available  (was: Open)

> Add StorageStatistics to S3A; instrument some more operations
> -------------------------------------------------------------
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.8.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the instrumentation, but sharing across all instances.
[jira] [Commented] (HADOOP-13137) TraceAdmin should support Kerberized cluster
[ https://issues.apache.org/jira/browse/HADOOP-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296849#comment-15296849 ]

Hadoop QA commented on HADOOP-13137:
------------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 0m 31s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 14s | trunk passed |
| +1 | compile | 7m 57s | trunk passed |
| +1 | checkstyle | 1m 23s | trunk passed |
| +1 | mvnsite | 1m 58s | trunk passed |
| +1 | mvneclipse | 0m 26s | trunk passed |
| +1 | findbugs | 3m 41s | trunk passed |
| +1 | javadoc | 2m 17s | trunk passed |
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 27s | the patch passed |
| +1 | compile | 6m 21s | the patch passed |
| -1 | javac | 6m 21s | root generated 1 new + 698 unchanged - 0 fixed = 699 total (was 698) |
| -1 | checkstyle | 1m 19s | root: The patch generated 1 new + 11 unchanged - 2 fixed = 12 total (was 13) |
| +1 | mvnsite | 1m 42s | the patch passed |
| +1 | mvneclipse | 0m 24s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 12s | the patch passed |
| +1 | javadoc | 2m 8s | the patch passed |
| +1 | unit | 8m 32s | hadoop-common in the patch passed. |
| +1 | unit | 60m 57s | hadoop-hdfs in the patch passed. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 112m 9s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805663/HADOOP-13137.003.patch |
| JIRA Issue | HADOOP-13137 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 0e3e77fe3b42 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac95448 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/9554/artifact/patchprocess/diff-compile-javac-root.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9554/artifact/patchprocess/diff-checkstyle-root.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9554/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://build
[jira] [Commented] (HADOOP-13112) Change CredentialShell to use CommandShell base class
[ https://issues.apache.org/jira/browse/HADOOP-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296847#comment-15296847 ]

Hadoop QA commented on HADOOP-13112:
------------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 9s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 7m 22s | trunk passed |
| +1 | compile | 7m 43s | trunk passed |
| +1 | checkstyle | 0m 23s | trunk passed |
| +1 | mvnsite | 1m 3s | trunk passed |
| +1 | mvneclipse | 0m 12s | trunk passed |
| +1 | findbugs | 1m 32s | trunk passed |
| +1 | javadoc | 0m 56s | trunk passed |
| +1 | mvninstall | 0m 40s | the patch passed |
| +1 | compile | 7m 27s | the patch passed |
| +1 | javac | 7m 27s | the patch passed |
| +1 | checkstyle | 0m 23s | hadoop-common-project/hadoop-common: The patch generated 0 new + 3 unchanged - 1 fixed = 3 total (was 4) |
| +1 | mvnsite | 0m 51s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 32s | the patch passed |
| +1 | javadoc | 0m 58s | the patch passed |
| +1 | unit | 7m 47s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 40m 13s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805685/HADOOP-13112.06.patch |
| JIRA Issue | HADOOP-13112 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 3b15d93d1662 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac95448 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9558/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9558/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Change CredentialShell to use CommandShell base class
> -----------------------------------------------------
>
> Key: HADOOP-13112
> URL: https://issues.apache.org/jira/browse/HADOOP-13112
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Matthew Paduano
> Assignee: Matthew Paduano
> Prio
[jira] [Commented] (HADOOP-13190) LoadBalancingKMSClientProvider should be documented
[ https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296841#comment-15296841 ]

Hadoop QA commented on HADOOP-13190:
------------------------------------

(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 10s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | mvninstall | 7m 10s | trunk passed |
| +1 | mvnsite | 0m 20s | trunk passed |
| +1 | mvnsite | 0m 20s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | asflicense | 0m 14s | The patch does not generate ASF License warnings. |
| | | 8m 27s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805701/HADOOP-13190.001.patch |
| JIRA Issue | HADOOP-13190 |
| Optional Tests | asflicense mvnsite |
| uname | Linux 6bcf3f462e30 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac95448 |
| modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9559/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> LoadBalancingKMSClientProvider should be documented
> ---------------------------------------------------
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
> Issue Type: Improvement
> Components: documentation, kms
> Affects Versions: 2.7.0
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Labels: supportability
> Attachments: HADOOP-13190.001.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first, and the only documented one, is running multiple KMS instances behind a load balancer.
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way is to make use of LoadBalancingKMSClientProvider, which was added in HADOOP-11620. However, its usage is undocumented.
> I think we should update the KMS document to introduce LoadBalancingKMSClientProvider, provide examples, and also update kms-site.xml to explain it.
[jira] [Commented] (HADOOP-13196) DF default interval value is not consistent
[ https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296825#comment-15296825 ]

Hadoop QA commented on HADOOP-13196:
------------------------------------

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 10s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 5m 58s | trunk passed |
| +1 | compile | 6m 40s | trunk passed |
| +1 | checkstyle | 0m 25s | trunk passed |
| +1 | mvnsite | 1m 3s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 1m 30s | trunk passed |
| +1 | javadoc | 0m 59s | trunk passed |
| +1 | mvninstall | 0m 44s | the patch passed |
| +1 | compile | 7m 40s | the patch passed |
| +1 | javac | 7m 40s | the patch passed |
| +1 | checkstyle | 0m 25s | the patch passed |
| +1 | mvnsite | 0m 59s | the patch passed |
| +1 | mvneclipse | 0m 14s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 1m 45s | the patch passed |
| +1 | javadoc | 0m 56s | the patch passed |
| -1 | unit | 7m 47s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 20s | The patch does not generate ASF License warnings. |
| | | 38m 32s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805683/HADOOP-13196.001.patch |
| JIRA Issue | HADOOP-13196 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 128506bb0326 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac95448 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9557/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HADOOP-Build/9557/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9557/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9557/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT http://yetus.apache
[jira] [Updated] (HADOOP-13190) LoadBalancingKMSClientProvider should be documented
[ https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-13190:
-------------------------------------
    Attachment: HADOOP-13190.001.patch

v01: added a section to describe configuration for HA in this alternative setup.

> LoadBalancingKMSClientProvider should be documented
> ---------------------------------------------------
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
> Issue Type: Improvement
> Components: documentation, kms
> Affects Versions: 2.7.0
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Labels: supportability
> Attachments: HADOOP-13190.001.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first, and the only documented one, is running multiple KMS instances behind a load balancer.
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way is to make use of LoadBalancingKMSClientProvider, which was added in HADOOP-11620. However, its usage is undocumented.
> I think we should update the KMS document to introduce LoadBalancingKMSClientProvider, provide examples, and also update kms-site.xml to explain it.
[jira] [Updated] (HADOOP-13190) LoadBalancingKMSClientProvider should be documented
[ https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-13190:
-------------------------------------
    Status: Patch Available  (was: Open)

> LoadBalancingKMSClientProvider should be documented
> ---------------------------------------------------
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
> Issue Type: Improvement
> Components: documentation, kms
> Affects Versions: 2.7.0
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Labels: supportability
> Attachments: HADOOP-13190.001.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first, and the only documented one, is running multiple KMS instances behind a load balancer.
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way is to make use of LoadBalancingKMSClientProvider, which was added in HADOOP-11620. However, its usage is undocumented.
> I think we should update the KMS document to introduce LoadBalancingKMSClientProvider, provide examples, and also update kms-site.xml to explain it.
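For readers following along: the kind of example the documentation patch above argues for is a client-side key provider configuration that lists several KMS hosts. When multiple hosts are given, separated by semicolons, the client uses LoadBalancingKMSClientProvider to spread requests across them. The snippet below is an illustrative sketch only; the hostnames and port are placeholders, not values from the patch.

```xml
<!-- Client-side configuration (sketch). Listing two KMS hosts,
     semicolon-separated, engages LoadBalancingKMSClientProvider. -->
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://https@kms01.example.com;kms02.example.com:16000/kms</value>
</property>
```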
[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster
[ https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296785#comment-15296785 ]

Hadoop QA commented on HADOOP-12847:
------------------------------------

(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 7m 16s | trunk passed |
| +1 | compile | 7m 42s | trunk passed |
| +1 | checkstyle | 0m 26s | trunk passed |
| +1 | mvnsite | 1m 4s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 1m 31s | trunk passed |
| +1 | javadoc | 0m 59s | trunk passed |
| +1 | mvninstall | 0m 44s | the patch passed |
| +1 | compile | 7m 28s | the patch passed |
| +1 | javac | 7m 27s | the patch passed |
| +1 | checkstyle | 0m 24s | hadoop-common-project/hadoop-common: The patch generated 0 new + 10 unchanged - 13 fixed = 10 total (was 23) |
| +1 | mvnsite | 0m 59s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 33s | the patch passed |
| +1 | javadoc | 1m 0s | the patch passed |
| +1 | unit | 8m 36s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 25s | The patch does not generate ASF License warnings. |
| | | 41m 31s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805679/HADOOP-12847.008.patch |
| JIRA Issue | HADOOP-12847 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux f5513c31693d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac95448 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9556/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9556/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> -----------------------------------------------------------------------
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
> Issue Type: New Feature
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, HADOOP-12847.003.patch, HADO
[jira] [Commented] (HADOOP-13180) Encryption Zone data Run mr with execption:AuthenticationException can't be found in cache
[ https://issues.apache.org/jira/browse/HADOOP-13180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296770#comment-15296770 ]

Wei-Chiu Chuang commented on HADOOP-13180:
------------------------------------------

[~xiaochen] thanks, you're right! It's a client-only configuration. So it's the {{dfs.encryption.key.provider.uri}} in hdfs-site.xml that should be configured.

> Encryption Zone data Run mr with execption:AuthenticationException can't be found in cache
> ------------------------------------------------------------------------------------------
>
> Key: HADOOP-13180
> URL: https://issues.apache.org/jira/browse/HADOOP-13180
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.6.1
> Reporter: lushuai
>
> org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.security.authentication.client.AuthenticationException: org.apache.hadoop.security.token.SecretManager$InvalidToken: token (owner=hive, renewer=yarn, realUser=, issueDate=1463627282514, maxDate=1464232082514, sequenceNumber=217, masterKeyId=2) can't be found in cache
> at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249)
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:622)
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:566)
> at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:675)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
> at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
> at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97)
> at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:162)
> at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:508)
> at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: org.apache.hadoop.security.token.SecretManager$InvalidToken: token (owner=hive, renewer=yarn, realUser=, issueDate=1463627282514, maxDate=1464232082514, sequenceNumber=217, masterKeyId=2) can't be found in cache
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> at org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:487)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:445)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:719)
> at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1347)
> at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1446)
> at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1431)
> at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400)
> at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
> at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:801)
> at
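Wei-Chiu's point in the comment above — that the key provider is resolved on the client side — comes down to an hdfs-site.xml entry on the client along these lines. This is an illustrative sketch; the KMS hostname and port are placeholders, not values from the issue.

```xml
<!-- Client-side hdfs-site.xml (sketch): tell the HDFS client which KMS
     to contact when decrypting EDEKs for encryption-zone files. -->
<property>
  <name>dfs.encryption.key.provider.uri</name>
  <value>kms://http@kms.example.com:16000/kms</value>
</property>
```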
[jira] [Commented] (HADOOP-13194) Document property fs.getspaceused.classname
[ https://issues.apache.org/jira/browse/HADOOP-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296751#comment-15296751 ]

Wei-Chiu Chuang commented on HADOOP-13194:
------------------------------------------

The test failure seems unrelated.

> Document property fs.getspaceused.classname
> -------------------------------------------
>
> Key: HADOOP-13194
> URL: https://issues.apache.org/jira/browse/HADOOP-13194
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 2.8.0
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Minor
> Attachments: HADOOP-13194.001.patch, HADOOP-13194.002.patch
>
>
> HADOOP-12973 introduced a new property {{fs.getspaceused.classname}} which makes it configurable to change the mechanism for estimating disk usage. This is great work, thanks [~eclark]!
> In Hadoop convention, this property should be declared as a string constant (or, in Java's terminology, a public static final variable) and be documented in core-default.xml.
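The convention the issue above asks for — declaring the property name as a string constant instead of scattering the literal through the code — looks roughly like this. The class and constant names here are illustrative; the real declaration would live in Hadoop's configuration-keys classes alongside an entry in core-default.xml.

```java
/**
 * Sketch of the Hadoop convention: property names are declared once as
 * public static final string constants. Class and constant names are
 * illustrative, not the actual Hadoop declarations.
 */
public class ConfigurationKeysExample {
    /** Property documented (with its default) in core-default.xml. */
    public static final String FS_GETSPACEUSED_CLASSNAME_KEY =
        "fs.getspaceused.classname";

    public static void main(String[] args) {
        // Callers reference the constant, never the raw literal.
        System.out.println(FS_GETSPACEUSED_CLASSNAME_KEY);
    }
}
```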
[jira] [Updated] (HADOOP-13112) Change CredentialShell to use CommandShell base class
[ https://issues.apache.org/jira/browse/HADOOP-13112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthew Paduano updated HADOOP-13112: - Attachment: HADOOP-13112.06.patch patch #06 addresses findbugs. for patch #06
{code}
---
 T E S T S
---
Running org.apache.hadoop.crypto.key.TestKeyShell
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0
Running org.apache.hadoop.security.token.TestDtUtilShell
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0
Running org.apache.hadoop.tools.TestCommandShell
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
{code}
> Change CredentialShell to use CommandShell base class > - > > Key: HADOOP-13112 > URL: https://issues.apache.org/jira/browse/HADOOP-13112 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Matthew Paduano >Assignee: Matthew Paduano >Priority: Minor > Attachments: HADOOP-13112.01.patch, HADOOP-13112.02.patch, > HADOOP-13112.03.patch, HADOOP-13112.04.patch, HADOOP-13112.05.patch, > HADOOP-13112.06.patch > > > org.apache.hadoop.tools.CommandShell is a base class created for use by > DtUtilShell. It was inspired by CredentialShell and much of it was taken > verbatim. It should be a simple change to get CredentialShell to use the > base class and simplify its code without changing its functionality.
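The refactoring described above, a shared base class that owns argument dispatch while subclasses only supply their subcommands, can be sketched minimally. The classes below are illustrative stand-ins, not the real org.apache.hadoop.tools.CommandShell API:

```java
// Illustrative sketch of the base-class extraction described above: a
// CommandShell-style base owns run()/dispatch once, and a CredentialShell-style
// subclass only maps subcommand names to actions. Hypothetical classes, not
// Hadoop's actual API.
abstract class MiniShell {
    // Subclasses resolve a subcommand name to an action, or null if unknown.
    protected abstract Runnable lookup(String cmd);

    // Shared dispatch logic lives once in the base class.
    public int run(String[] args) {
        if (args.length == 0) {
            return 1; // usage error: no subcommand given
        }
        Runnable action = lookup(args[0]);
        if (action == null) {
            return 1; // unknown subcommand
        }
        action.run();
        return 0;
    }
}

public class MiniCredShell extends MiniShell {
    @Override
    protected Runnable lookup(String cmd) {
        if ("list".equals(cmd)) {
            return () -> System.out.println("listing credentials");
        }
        return null;
    }

    public static void main(String[] args) {
        int rc = new MiniCredShell().run(new String[] {"list"});
        System.out.println("exit=" + rc);
    }
}
```

The subclass shrinks to a lookup table while behavior stays unchanged, which is the point of the patch.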
[jira] [Commented] (HADOOP-13194) Document property fs.getspaceused.classname
[ https://issues.apache.org/jira/browse/HADOOP-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296734#comment-15296734 ] Hadoop QA commented on HADOOP-13194:
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s {color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 39s {color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 6s {color} | {color:black} {color} |
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805668/HADOOP-13194.002.patch |
| JIRA Issue | HADOOP-13194 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux bd097599ac51 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac95448 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9555/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HADOOP-Build/9555/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9555/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9555/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT http
[jira] [Updated] (HADOOP-13196) DF default interval value is not consistent
[ https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13196: - Attachment: (was: HADOOP-13196.001.patch) > DF default interval value is not consistent > --- > > Key: HADOOP-13196 > URL: https://issues.apache.org/jira/browse/HADOOP-13196 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Attachments: HADOOP-13196.001.patch > > > In {{core-default.xml}}, the value of the property {{fs.df.interval}} is > 6. This value is defined in > {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}, however, this value > is never used. > When this property is used in {{DF}}, the default value is > {{DF.DF_INTERVAL_DEFAULT}} = 3000. > This can cause potential confusion and should be fixed.
[jira] [Updated] (HADOOP-13196) DF default interval value is not consistent
[ https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13196: - Attachment: HADOOP-13196.001.patch Changed the default value to 3000 to make sure it's backward compatible. > DF default interval value is not consistent > --- > > Key: HADOOP-13196 > URL: https://issues.apache.org/jira/browse/HADOOP-13196 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Attachments: HADOOP-13196.001.patch > > > In {{core-default.xml}}, the value of the property {{fs.df.interval}} is > 6. This value is defined in > {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}, however, this value > is never used. > When this property is used in {{DF}}, the default value is > {{DF.DF_INTERVAL_DEFAULT}} = 3000. > This can cause potential confusion and should be fixed.
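The pitfall this issue describes, two sources of truth for one default, can be illustrated with a self-contained stub. This is not Hadoop's Configuration class, and the 60000 below is illustrative (the issue text shows the core-default.xml value truncated):

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained illustration of the two-defaults pitfall described above:
// the in-code fallback (DF.DF_INTERVAL_DEFAULT = 3000) disagrees with the
// default shipped in core-default.xml, so behavior depends on whether the
// XML defaults were loaded. The Map-based "conf" is a stub, not Hadoop's
// Configuration; 60000 stands in for the truncated XML value.
public class DfIntervalDemo {
    static final String FS_DF_INTERVAL_KEY = "fs.df.interval";
    static final long DF_INTERVAL_DEFAULT = 3000L; // in-code fallback

    // Mimics conf.getLong(key, default): the fallback only applies when the
    // key is absent, which is exactly where the inconsistency bites.
    static long getInterval(Map<String, String> conf) {
        String v = conf.get(FS_DF_INTERVAL_KEY);
        return v == null ? DF_INTERVAL_DEFAULT : Long.parseLong(v);
    }

    public static void main(String[] args) {
        Map<String, String> withXmlDefaults = new HashMap<>();
        withXmlDefaults.put(FS_DF_INTERVAL_KEY, "60000"); // XML default loaded
        System.out.println(getInterval(withXmlDefaults));
        System.out.println(getInterval(new HashMap<>())); // XML not loaded
    }
}
```

The patch resolves the ambiguity by making both paths agree on 3000, the value existing deployments already get.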
[jira] [Updated] (HADOOP-13196) DF default interval value is not consistent
[ https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13196: - Status: Patch Available (was: Open) > DF default interval value is not consistent > --- > > Key: HADOOP-13196 > URL: https://issues.apache.org/jira/browse/HADOOP-13196 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Attachments: HADOOP-13196.001.patch > > > In {{core-default.xml}}, the value of the property {{fs.df.interval}} is > 6. This value is defined in > {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}, however, this value > is never used. > When this property is used in {{DF}}, the default value is > {{DF.DF_INTERVAL_DEFAULT}} = 3000. > This can cause potential confusion and should be fixed.
[jira] [Updated] (HADOOP-13196) DF default interval value is not consistent
[ https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13196: - Attachment: HADOOP-13196.001.patch v01: use {{CommonConfigurationKeys.FS_DF_INTERVAL_DEFAULT}} = 6 for the value. > DF default interval value is not consistent > --- > > Key: HADOOP-13196 > URL: https://issues.apache.org/jira/browse/HADOOP-13196 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > Attachments: HADOOP-13196.001.patch > > > In {{core-default.xml}}, the value of the property {{fs.df.interval}} is > 6. This value is defined in > {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}, however, this value > is never used. > When this property is used in {{DF}}, the default value is > {{DF.DF_INTERVAL_DEFAULT}} = 3000. > This can cause potential confusion and should be fixed.
[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-12893: --- Attachment: HADOOP-12893.005.patch Ah, thanks for pointing out, Allen. Sorry for not reading that jira through and misunderstanding it. Patch 5 keeps the junit and mockito entries untouched. > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.01.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt.
[jira] [Updated] (HADOOP-13196) DF default interval value is not consistent
[ https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13196: - Priority: Trivial (was: Minor) > DF default interval value is not consistent > --- > > Key: HADOOP-13196 > URL: https://issues.apache.org/jira/browse/HADOOP-13196 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Trivial > > In {{core-default.xml}}, the value of the property {{fs.df.interval}} is > 6. This value is defined in > {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}, however, this value > is never used. > When this property is used in {{DF}}, the default value is > {{DF.DF_INTERVAL_DEFAULT}} = 3000. > This can cause potential confusion and should be fixed.
[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster
[ https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-12847: - Attachment: HADOOP-12847.008.patch v08. fixed checkstyle, javac and whitespace issues. > hadoop daemonlog should support https and SPNEGO for Kerberized cluster > --- > > Key: HADOOP-12847 > URL: https://issues.apache.org/jira/browse/HADOOP-12847 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, > HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, > HADOOP-12847.006.patch, HADOOP-12847.008.patch > > > {{hadoop daemonlog}} is a simple, yet useful tool for debugging. > However, it does not support https, nor does it support a Kerberized Hadoop > cluster. > Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation > with a Kerberized name node web ui. It will also fall back to simple > authentication if the cluster is not Kerberized.
[jira] [Updated] (HADOOP-13196) DF default interval value is not consistent
[ https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13196: - Summary: DF default interval value is not consistent (was: DF interval value is not consistent) > DF default interval value is not consistent > --- > > Key: HADOOP-13196 > URL: https://issues.apache.org/jira/browse/HADOOP-13196 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > > In {{core-default.xml}}, the value of the property {{fs.df.interval}} is > 6. This value is defined in > {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}, however, this value > is never used. > When this property is used in {{DF}}, the default value is > {{DF.DF_INTERVAL_DEFAULT}} = 3000. > This can cause potential confusion and should be fixed.
[jira] [Created] (HADOOP-13196) DF interval value is not consistent
Wei-Chiu Chuang created HADOOP-13196: Summary: DF interval value is not consistent Key: HADOOP-13196 URL: https://issues.apache.org/jira/browse/HADOOP-13196 Project: Hadoop Common Issue Type: Bug Reporter: Wei-Chiu Chuang Assignee: Wei-Chiu Chuang Priority: Minor In {{core-default.xml}}, the value of the property {{fs.df.interval}} is 6. This value is defined in {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}, however, this value is never used. When this property is used in {{DF}}, the default value is {{DF.DF_INTERVAL_DEFAULT}} = 3000. This can cause potential confusion and should be fixed.
[jira] [Updated] (HADOOP-13196) DF interval value is not consistent
[ https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13196: - Component/s: fs > DF interval value is not consistent > --- > > Key: HADOOP-13196 > URL: https://issues.apache.org/jira/browse/HADOOP-13196 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > > In {{core-default.xml}}, the value of the property {{fs.df.interval}} is > 6. This value is defined in > {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}, however, this value > is never used. > When this property is used in {{DF}}, the default value is > {{DF.DF_INTERVAL_DEFAULT}} = 3000. > This can cause potential confusion and should be fixed.
[jira] [Updated] (HADOOP-13194) Document property fs.getspaceused.classname
[ https://issues.apache.org/jira/browse/HADOOP-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13194: - Attachment: HADOOP-13194.002.patch v02. fixed test error and checkstyle warning. > Document property fs.getspaceused.classname > --- > > Key: HADOOP-13194 > URL: https://issues.apache.org/jira/browse/HADOOP-13194 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.8.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Attachments: HADOOP-13194.001.patch, HADOOP-13194.002.patch > > > HADOOP-12973 introduced a new property {{fs.getspaceused.classname}} which > makes it configurable to change the mechanism for estimating disk usage. This > is great work, thanks [~eclark]! > In Hadoop convention, this property should be declared as a string constant > (or in Java's terminology, public static final variable), and be documented > in core-default.xml
[jira] [Updated] (HADOOP-13137) TraceAdmin should support Kerberized cluster
[ https://issues.apache.org/jira/browse/HADOOP-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13137: - Attachment: HADOOP-13137.003.patch v03. Thanks [~steve_l] and [~cmccabe]!
bq. No need to wrap the LOG.debug() with a condition; SLF4J is low cost if is not invoked
done
bq. Could you replace the {{${dfs.namenode.kerberos.principal}}}
done
bq. I do wonder why we need a new file, TestKerberizedTraceAdmin.java, when it could have been a test in TestTraceAdmin.java
I needed a subclass of {{SaslDataTransferTestCase}} to set up a Kerberized mini cluster. I removed the new test file and instead let {{TestTraceAdmin}} extend {{SaslDataTransferTestCase}}.
bq. I think there are a few other commands that might need to get an argument like this
I believe so. I'm working on a patch to support {{hadoop daemonlog}} in a Kerberized cluster, and I suspect erasure coding commands and other new commands should also be fixed.
> TraceAdmin should support Kerberized cluster > > > Key: HADOOP-13137 > URL: https://issues.apache.org/jira/browse/HADOOP-13137 > Project: Hadoop Common > Issue Type: Bug > Components: tracing >Affects Versions: 2.6.0, 3.0.0-alpha1 > Environment: CDH5.5.1 cluster with Kerberos >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Labels: Kerberos > Attachments: HADOOP-13137.001.patch, HADOOP-13137.002.patch, > HADOOP-13137.003.patch > > > When I run {{hadoop trace}} command for a Kerberized NameNode, it failed with > the following error: > [hdfs@weichiu-encryption-1 root]$ hadoop trace -list -host > weichiu-encryption-1.vpc.cloudera.com:8022 16/05/12 00:02:13 WARN ipc.Client: > Exception encountered while connecting to the server : > java.lang.IllegalArgumentException: Failed to specify server's Kerberos > principal name > 16/05/12 00:02:13 WARN security.UserGroupInformation: > PriviledgedActionException as:h...@vpc.cloudera.com (auth:KERBEROS) > cause:java.io.IOException: 
> java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
> Exception in thread "main" java.io.IOException: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name; Host Details : local host is: "weichiu-encryption-1.vpc.cloudera.com/172.26.8.185"; destination host is: "weichiu-encryption-1.vpc.cloudera.com":8022;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
> at org.apache.hadoop.ipc.Client.call(Client.java:1470)
> at org.apache.hadoop.ipc.Client.call(Client.java:1403)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy11.listSpanReceivers(Unknown Source)
> at org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.listSpanReceivers(TraceAdminProtocolTranslatorPB.java:58)
> at org.apache.hadoop.tracing.TraceAdmin.listSpanReceivers(TraceAdmin.java:68)
> at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:177)
> at org.apache.hadoop.tracing.TraceAdmin.main(TraceAdmin.java:195)
> Caused by: java.io.IOException: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:682)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:645)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:733)
> at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
> at org.apache.hadoop.ipc.Client.call(Client.java:1442)
> ... 7 more
> Caused by: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
> at org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:322)
> at org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:231)
> at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159)
> at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
> at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:555)
> at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:370)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
> at org.apache.hadoop.ipc.Client$C
[jira] [Commented] (HADOOP-13194) Document property fs.getspaceused.classname
[ https://issues.apache.org/jira/browse/HADOOP-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296611#comment-15296611 ] Hadoop QA commented on HADOOP-13194:
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 50s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s {color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 1 new + 187 unchanged - 0 fixed = 188 total (was 187) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s {color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 15s {color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 59s {color} | {color:black} {color} |
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805654/HADOOP-13194.001.patch |
| JIRA Issue | HADOOP-13194 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 28fdba943304 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ac95448 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9553/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9553/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HADOOP-Build/9553/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9553/testReport/
[jira] [Commented] (HADOOP-13195) hadoop-azure: page blob append support
[ https://issues.apache.org/jira/browse/HADOOP-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296582#comment-15296582 ] Chris Nauroth commented on HADOOP-13195: [~dchickabasapa], would you please watch this issue and help with code review on patches? Thank you. > hadoop-azure: page blob append support > -- > > Key: HADOOP-13195 > URL: https://issues.apache.org/jira/browse/HADOOP-13195 > Project: Hadoop Common > Issue Type: Improvement > Components: azure >Reporter: Bogdan Raducanu > > The use case for this is storing transaction logs, which, unlike HBase logs, > need to be appended, instead of created every time. > Currently, hadoop-azure has append support but only for Block Blobs, which > are not suited for transaction logging. > After a quick look, I think the existing {{PageBlobOutputStream}} can be > easily adapted to support appends. It already contains logic to re-upload the > last page if it's not full.
[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster
[ https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296568#comment-15296568 ] Hadoop QA commented on HADOOP-12847:
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 39s {color} | {color:red} root generated 2 new + 698 unchanged - 0 fixed = 700 total (was 698) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s {color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 18 new + 9 unchanged - 13 fixed = 27 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 4s {color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 45s {color} | {color:black} {color} |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805649/HADOOP-12847.006.patch |
| JIRA Issue | HADOOP-12847 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 33463e7e3b99 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d1df026 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/9552/artifact/patchprocess/diff-compile-javac-root.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9552/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/9552/artifact/patchprocess/whitespace-eol.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9552/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9552/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> hadoop daemonlog should support https and S
[jira] [Updated] (HADOOP-13194) Document property fs.getspaceused.classname
[ https://issues.apache.org/jira/browse/HADOOP-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13194: - Status: Patch Available (was: Open) > Document property fs.getspaceused.classname > --- > > Key: HADOOP-13194 > URL: https://issues.apache.org/jira/browse/HADOOP-13194 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.8.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Attachments: HADOOP-13194.001.patch > > > HADOOP-12973 introduced a new property {{fs.getspaceused.classname}} which > makes it configurable to change the mechanism for estimating disk usage. This > is great work, thanks [~eclark]! > In Hadoop convention, this property should be declared as a string constant > (or in Java's terminology, public static final variable), and be documented > in core-default.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
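For reference, a core-default.xml entry for this property might look like the sketch below. The value and description shown are illustrative assumptions, not the committed text; the actual default is resolved in code (when the property is unset, the {{GetSpaceUsed}} builder falls back to a DU-based implementation).

```xml
<property>
  <name>fs.getspaceused.classname</name>
  <!-- Illustrative value: any implementation of org.apache.hadoop.fs.GetSpaceUsed -->
  <value>org.apache.hadoop.fs.DU</value>
  <description>
    The class used to estimate disk usage of a directory tree.
    Leave empty to use the built-in default mechanism.
  </description>
</property>
```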
[jira] [Updated] (HADOOP-13194) Document property fs.getspaceused.classname
[ https://issues.apache.org/jira/browse/HADOOP-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13194: - Attachment: HADOOP-13194.001.patch v01: moved the property to {{CommonConfigurationKeysPublic.FS_GETSPACEUSED_CLASSNAME_KEY}}, and document it in core-default.xml > Document property fs.getspaceused.classname > --- > > Key: HADOOP-13194 > URL: https://issues.apache.org/jira/browse/HADOOP-13194 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.8.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Attachments: HADOOP-13194.001.patch > > > HADOOP-12973 introduced a new property {{fs.getspaceused.classname}} which > makes it configurable to change the mechanism for estimating disk usage. This > is great work, thanks [~eclark]! > In Hadoop convention, this property should be declared as a string constant > (or in Java's terminology, public static final variable), and be documented > in core-default.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13195) hadoop-azure: page blob append support
[ https://issues.apache.org/jira/browse/HADOOP-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296531#comment-15296531 ] Bogdan Raducanu commented on HADOOP-13195: -- I will work on a patch; please let me know if there are suggestions > hadoop-azure: page blob append support > -- > > Key: HADOOP-13195 > URL: https://issues.apache.org/jira/browse/HADOOP-13195 > Project: Hadoop Common > Issue Type: Improvement > Components: azure >Reporter: Bogdan Raducanu > > The use case for this is storing transaction logs, which, unlike HBase logs, > need to be appended, instead of created every time. > Currently, hadoop-azure has append support but only for Block Blobs, which > are not suited for transaction logging. > After a quick look, I think the existing {{PageBlobOutputStream}} can be > easily adapted to support appends. It already contains logic to re-upload the > last page if it's not full. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13195) hadoop-azure: page blob append support
[ https://issues.apache.org/jira/browse/HADOOP-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bogdan Raducanu updated HADOOP-13195: - Description: The use case for this is storing transaction logs, which, unlike HBase logs, need to be appended, instead of created every time. Currently, hadoop-azure has append support but only for Block Blobs, which are not suited for transaction logging. After a quick look, I think the existing {{PageBlobOutputStream}} can be easily adapted to support appends. It already contains logic to re-upload the last page if it's not full. was: The use case for this is storing transaction logs, which, unlike HBase logs, need to be appended, instead of created every time. Currently, hadoop-azure has append support but only for Block Blobs, which are not suited for transaction logging. After a quick look, I think the existing {code}PageBlobOutputStream{code} can be easily adapted to support appends. It already contains logic to re-upload the last page if it's not full. > hadoop-azure: page blob append support > -- > > Key: HADOOP-13195 > URL: https://issues.apache.org/jira/browse/HADOOP-13195 > Project: Hadoop Common > Issue Type: Improvement > Components: azure >Reporter: Bogdan Raducanu > > The use case for this is storing transaction logs, which, unlike HBase logs, > need to be appended, instead of created every time. > Currently, hadoop-azure has append support but only for Block Blobs, which > are not suited for transaction logging. > After a quick look, I think the existing {{PageBlobOutputStream}} can be > easily adapted to support appends. It already contains logic to re-upload the > last page if it's not full. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13195) hadoop-azure: page blob append support
Bogdan Raducanu created HADOOP-13195: Summary: hadoop-azure: page blob append support Key: HADOOP-13195 URL: https://issues.apache.org/jira/browse/HADOOP-13195 Project: Hadoop Common Issue Type: Improvement Components: azure Reporter: Bogdan Raducanu The use case for this is storing transaction logs, which, unlike HBase logs, need to be appended, instead of created every time. Currently, hadoop-azure has append support but only for Block Blobs, which are not suited for transaction logging. After a quick look, I think the existing {code}PageBlobOutputStream{code} can be easily adapted to support appends. It already contains logic to re-upload the last page if it's not full. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-13194) Document property fs.getspaceused.classname
[ https://issues.apache.org/jira/browse/HADOOP-13194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang moved HDFS-10435 to HADOOP-13194: - Affects Version/s: (was: 2.8.0) 2.8.0 Component/s: (was: fs) fs Key: HADOOP-13194 (was: HDFS-10435) Project: Hadoop Common (was: Hadoop HDFS) > Document property fs.getspaceused.classname > --- > > Key: HADOOP-13194 > URL: https://issues.apache.org/jira/browse/HADOOP-13194 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.8.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > > HADOOP-12973 introduced a new property {{fs.getspaceused.classname}} which > makes it configurable to change the mechanism for estimating disk usage. This > is great work, thanks [~eclark]! > In Hadoop convention, this property should be declared as a string constant > (or in Java's terminology, public static final variable), and be documented > in core-default.xml -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster
[ https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-12847: - Attachment: HADOOP-12847.006.patch v06: trigger precommit build again. > hadoop daemonlog should support https and SPNEGO for Kerberized cluster > --- > > Key: HADOOP-12847 > URL: https://issues.apache.org/jira/browse/HADOOP-12847 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, > HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, > HADOOP-12847.006.patch > > > {{hadoop daemonlog}} is a simple, yet useful tool for debugging. > However, it does not support https, nor does it support a Kerberized Hadoop > cluster. > Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation > with a Kerberized name node web ui. It will also fall back to simple > authentication if the cluster is not Kerberized. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-9819) FileSystem#rename is broken, deletes target when renaming link to itself
[ https://issues.apache.org/jira/browse/HADOOP-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296490#comment-15296490 ] Hadoop QA commented on HADOOP-9819: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
6m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 36s {color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 10s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805639/HADOOP-9819.03.patch | | JIRA Issue | HADOOP-9819 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux afd5754725af 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6161d9b | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9551/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9551/console | | Powered by | Apache Yetus 0.3.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > FileSystem#rename is broken, deletes target when renaming link to itself > > > Key: HADOOP-9819 > URL: https://issues.apache.org/jira/browse/HADOOP-9819 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 3.0.0-alpha1 >Reporter: Arpit Agarwal >Assignee: Andras Bokor > Attachments: HADOOP-9819.01.patch, HADOOP-9819.02.patch, > HADOOP-9819.03.patch > > > Uncovered while fixing TestSymlinkLocalFsFileSyst
[jira] [Comment Edited] (HADOOP-9819) FileSystem#rename is broken, deletes target when renaming link to itself
[ https://issues.apache.org/jira/browse/HADOOP-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296423#comment-15296423 ] Andras Bokor edited comment on HADOOP-9819 at 5/23/16 2:27 PM: --- [^HADOOP-9819.03.patch] Fixing checkstyle issues. was (Author: boky01): Fixing checkstyle issues. > FileSystem#rename is broken, deletes target when renaming link to itself > > > Key: HADOOP-9819 > URL: https://issues.apache.org/jira/browse/HADOOP-9819 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 3.0.0-alpha1 >Reporter: Arpit Agarwal >Assignee: Andras Bokor > Attachments: HADOOP-9819.01.patch, HADOOP-9819.02.patch, > HADOOP-9819.03.patch > > > Uncovered while fixing TestSymlinkLocalFsFileSystem on Windows. > This block of code deletes the symlink, the correct behavior is to do nothing. > {code:java} > try { > dstStatus = getFileLinkStatus(dst); > } catch (IOException e) { > dstStatus = null; > } > if (dstStatus != null) { > if (srcStatus.isDirectory() != dstStatus.isDirectory()) { > throw new IOException("Source " + src + " Destination " + dst > + " both should be either file or directory"); > } > if (!overwrite) { > throw new FileAlreadyExistsException("rename destination " + dst > + " already exists."); > } > // Delete the destination that is a file or an empty directory > if (dstStatus.isDirectory()) { > FileStatus[] list = listStatus(dst); > if (list != null && list.length != 0) { > throw new IOException( > "rename cannot overwrite non empty destination directory " + > dst); > } > } > delete(dst, false); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-9819) FileSystem#rename is broken, deletes target when renaming link to itself
[ https://issues.apache.org/jira/browse/HADOOP-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-9819: - Attachment: HADOOP-9819.03.patch Fixing checkstyle issues. > FileSystem#rename is broken, deletes target when renaming link to itself > > > Key: HADOOP-9819 > URL: https://issues.apache.org/jira/browse/HADOOP-9819 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 3.0.0-alpha1 >Reporter: Arpit Agarwal >Assignee: Andras Bokor > Attachments: HADOOP-9819.01.patch, HADOOP-9819.02.patch, > HADOOP-9819.03.patch > > > Uncovered while fixing TestSymlinkLocalFsFileSystem on Windows. > This block of code deletes the symlink, the correct behavior is to do nothing. > {code:java} > try { > dstStatus = getFileLinkStatus(dst); > } catch (IOException e) { > dstStatus = null; > } > if (dstStatus != null) { > if (srcStatus.isDirectory() != dstStatus.isDirectory()) { > throw new IOException("Source " + src + " Destination " + dst > + " both should be either file or directory"); > } > if (!overwrite) { > throw new FileAlreadyExistsException("rename destination " + dst > + " already exists."); > } > // Delete the destination that is a file or an empty directory > if (dstStatus.isDirectory()) { > FileStatus[] list = listStatus(dst); > if (list != null && list.length != 0) { > throw new IOException( > "rename cannot overwrite non empty destination directory " + > dst); > } > } > delete(dst, false); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
[ https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HADOOP-13162: -- Attachment: HADOOP-13162-branch-2-003.patch > Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs > --- > > Key: HADOOP-13162 > URL: https://issues.apache.org/jira/browse/HADOOP-13162 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13162-branch-2-002.patch, > HADOOP-13162-branch-2-003.patch, HADOOP-13162.001.patch > > > getFileStatus is relatively expensive call and mkdirs invokes it multiple > times depending on how deep the directory structure is. It would be good to > reduce the number of getFileStatus calls in such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
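To make the proposal concrete, the sketch below (hypothetical names, not the actual {{S3AFileSystem}} code) probes from the deepest path upward and stops at the first existing ancestor, so the number of stand-in getFileStatus calls is bounded by the number of missing components plus one, rather than one call per component unconditionally:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class MkdirsProbeDemo {
    static int probes = 0;
    static Set<String> existing = new HashSet<>(Arrays.asList("/", "/a"));

    // Stand-in for a getFileStatus existence probe against the object store.
    static boolean exists(String path) {
        probes++;
        return existing.contains(path);
    }

    static void mkdirs(String path) {
        // Collect missing ancestors deepest-first, stopping at the first
        // ancestor that already exists.
        Deque<String> toCreate = new ArrayDeque<>();
        for (String p = path; !exists(p); p = parent(p)) {
            toCreate.push(p);
        }
        while (!toCreate.isEmpty()) {
            existing.add(toCreate.pop()); // stand-in for the object-store PUT
        }
    }

    static String parent(String p) {
        int i = p.lastIndexOf('/');
        return i <= 0 ? "/" : p.substring(0, i);
    }

    public static void main(String[] args) {
        mkdirs("/a/b/c/d");
        // Probes /a/b/c/d, /a/b/c, /a/b, then /a (which exists) and stops.
        System.out.println(probes); // prints 4
    }
}
```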
[jira] [Updated] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
[ https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HADOOP-13162: -- Status: Open (was: Patch Available) > Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs > --- > > Key: HADOOP-13162 > URL: https://issues.apache.org/jira/browse/HADOOP-13162 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13162-branch-2-002.patch, HADOOP-13162.001.patch > > > getFileStatus is relatively expensive call and mkdirs invokes it multiple > times depending on how deep the directory structure is. It would be good to > reduce the number of getFileStatus calls in such cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Esther Kundin updated HADOOP-12291: --- Status: In Progress (was: Patch Available) Attached the fixed patch on 13/May/16. > Add support for nested groups in LdapGroupsMapping > -- > > Key: HADOOP-12291 > URL: https://issues.apache.org/jira/browse/HADOOP-12291 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0 >Reporter: Gautam Gopalakrishnan >Assignee: Esther Kundin > Labels: features, patch > Fix For: 2.8.0 > > Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, > HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, > HADOOP-12291.006.patch > > > When using {{LdapGroupsMapping}} with Hadoop, nested groups are not > supported. So for example if user {{jdoe}} is part of group A which is a > member of group B, the group mapping currently returns only group A. > Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and > SSSD (or similar tools) but would be good to have this feature as part of > {{LdapGroupsMapping}} directly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13181) WASB append support: getPos incorrect
[ https://issues.apache.org/jira/browse/HADOOP-13181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bogdan Raducanu updated HADOOP-13181: - Attachment: HADOOP-13181.001.patch > WASB append support: getPos incorrect > - > > Key: HADOOP-13181 > URL: https://issues.apache.org/jira/browse/HADOOP-13181 > Project: Hadoop Common > Issue Type: Bug > Components: azure >Affects Versions: 2.8.0 >Reporter: Bogdan Raducanu > Attachments: HADOOP-13181.001.patch, append.java > > > See attached code. > Cause: > In NativeAzureFileSystem.java: the append method returns > {code} > new FSDataOutputStream(appendStream, statistics) > {code} > Instead, it should probably return > {code} > new FSDataOutputStream(appendStream, statistics, meta.getLength()) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
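The effect of the missing third constructor argument can be illustrated with a tiny stand-in for the position counter inside {{FSDataOutputStream}} (hypothetical class, not WASB code): unless the stream's position starts at the existing blob length, getPos reports only the bytes written since the stream was opened.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Minimal model of a position-tracking output stream: the position starts
// at the supplied initial value and advances as bytes are written.
public class PosTrackingStream extends OutputStream {
    private final OutputStream out;
    private long pos;

    PosTrackingStream(OutputStream out, long initialPos) {
        this.out = out;
        this.pos = initialPos;
    }

    public long getPos() { return pos; }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        pos++;
    }

    public static void main(String[] args) throws IOException {
        long existingLength = 10; // bytes already in the blob before the append
        PosTrackingStream broken = new PosTrackingStream(new ByteArrayOutputStream(), 0);
        PosTrackingStream fixed = new PosTrackingStream(new ByteArrayOutputStream(), existingLength);
        broken.write('x');
        fixed.write('x');
        System.out.println(broken.getPos()); // prints 1  -- wrong for an append
        System.out.println(fixed.getPos());  // prints 11 -- existing length + bytes written
    }
}
```

This mirrors why the append path should pass {{meta.getLength()}} as the initial position when constructing the returned stream.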
[jira] [Created] (HADOOP-13193) Upgrade to Apache Yetus 0.3.0
Allen Wittenauer created HADOOP-13193: - Summary: Upgrade to Apache Yetus 0.3.0 Key: HADOOP-13193 URL: https://issues.apache.org/jira/browse/HADOOP-13193 Project: Hadoop Common Issue Type: Improvement Components: documentation, test Affects Versions: 3.0.0-alpha1 Reporter: Allen Wittenauer Assignee: Allen Wittenauer Upgrade yetus-wrapper to be 0.3.0 now that it has passed vote. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296312#comment-15296312 ] Allen Wittenauer commented on HADOOP-12893: --- bq. junit and mockito are currently bundled, and I'm removing them from being bundled in this patch. Also thanks Allen for the pointer jira. I don't think you fully understand what's happening in that other JIRA. Removal of junit was specifically reverted because: bq. This broke the mapreduce jobclient tests which ship in the distro, see MAPREDUCE-4644. So a decision needs to be made whether the -test jars that ship with Hadoop need to be fully functional or not. That's pretty much outside the scope of this JIRA and why I pointed to that other issue. > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.01.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296265#comment-15296265 ] ASF GitHub Bot commented on HADOOP-13192: - GitHub user zhudebin opened a pull request: https://github.com/apache/hadoop/pull/96 fix bug HADOOP-13192 fix bug HADOOP-13192 You can merge this pull request into a Git repository by running: $ git pull https://github.com/zhudebin/hadoop branch-2.6-fixbug Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hadoop/pull/96.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #96 commit d6f4ab4fd1423910824ffee5365cba2dbbfcd081 Author: zhudebin Date: 2016-05-23T11:38:52Z fix bug HADOOP-13192 > org.apache.hadoop.util.LineReader match recordDelimiter has a bug > -- > > Key: HADOOP-13192 > URL: https://issues.apache.org/jira/browse/HADOOP-13192 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 2.6.2 >Reporter: binde > Original Estimate: 5m > Remaining Estimate: 5m > > org.apache.hadoop.util.LineReader.readCustomLine() has a bug, > when the line is aaaabccc and the recordDelimiter is aaab, the result should be a,ccc, > see the code at line 310: > for (; bufferPosn < bufferLength; ++bufferPosn) { > if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) { > delPosn++; > if (delPosn >= recordDelimiterBytes.length) { > bufferPosn++; > break; > } > } else if (delPosn != 0) { > bufferPosn--; > delPosn = 0; > } > } > should be: > for (; bufferPosn < bufferLength; ++bufferPosn) { > if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) { > delPosn++; > if (delPosn >= recordDelimiterBytes.length) { > bufferPosn++; > break; > } > } else if (delPosn != 0) { > // - change here - start > bufferPosn -= delPosn; > // - change here - end > > delPosn = 0; > } > } -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To
unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
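The proposed fix can be exercised with a simplified, self-contained version of the scan loop (this models only the matching logic; the real LineReader processes the stream in buffer-sized chunks). On a mismatch after a partial delimiter match, rewinding by delPosn re-examines the bytes that began the false partial match:

```java
import java.util.ArrayList;
import java.util.List;

public class DelimiterScanDemo {
    // Splits buffer on delim using the corrected backtracking: on a mismatch
    // after a partial match, rewind bufferPosn by delPosn so the bytes that
    // began the false partial match are scanned again.
    static List<String> split(byte[] buffer, byte[] delim) {
        List<String> records = new ArrayList<>();
        int start = 0, delPosn = 0;
        for (int bufferPosn = 0; bufferPosn < buffer.length; ++bufferPosn) {
            if (buffer[bufferPosn] == delim[delPosn]) {
                delPosn++;
                if (delPosn >= delim.length) {
                    // Full delimiter matched; emit the record before it.
                    records.add(new String(buffer, start,
                            bufferPosn + 1 - delim.length - start));
                    start = bufferPosn + 1;
                    delPosn = 0;
                }
            } else if (delPosn != 0) {
                bufferPosn -= delPosn; // the fix: rewind the whole partial match
                delPosn = 0;
            }
        }
        if (start < buffer.length) {
            records.add(new String(buffer, start, buffer.length - start));
        }
        return records;
    }

    public static void main(String[] args) {
        System.out.println(split("aaaabccc".getBytes(), "aaab".getBytes())); // prints [a, ccc]
    }
}
```

With this rewind, splitting aaaabccc on the delimiter aaab yields the records a and ccc, matching the expected result in the report; the original one-byte bufferPosn-- rewind misses the overlapping match and returns the whole line as one record.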
[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296260#comment-15296260 ]

ASF GitHub Bot commented on HADOOP-13192:
-----------------------------------------

GitHub user zhudebin opened a pull request:

    https://github.com/apache/hadoop/pull/95

    fix bug HADOOP-13192

    fix bug HADOOP-13192

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/zhudebin/hadoop branch-2.6-fixbug

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/hadoop/pull/95.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #95

commit 0b1e66f01dae1c5558e897e35b1cbe533d9c4542
Author: Andrew Wang
Date:   2015-03-24T05:00:34Z

    HDFS-7960. The full block report should prune zombie storages even if
    they're not empty. Contributed by Colin McCabe and Eddy Xu.

    (cherry picked from commit 50ee8f4e67a66aa77c5359182f61f3e951844db6)
    (cherry picked from commit 2f46ee50bd4efc82ba3d30bd36f7637ea9d9714e)

    Conflicts:
        hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
        hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
        hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
        hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
        hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestBlockListAsLongs.java
        hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
        hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
        hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java

    (cherry picked from commit 03d4af39e794dc03d764122077b434d658b6405e)

commit bc8728cd27870e048fd90d1e07ea92e8c9ed310d
Author: Kihwal Lee
Date:   2015-03-30T15:11:25Z

    HDFS-7742. Favoring decommissioning node for replication can cause a block
    to stay underreplicated for long periods. Contributed by Nathan Roberts.

    (cherry picked from commit 04ee18ed48ceef34598f954ff40940abc9fde1d2)
    (cherry picked from commit c4cedfc1d601127430c70ca8ca4d4e2ee2d1003d)
    (cherry picked from commit c6b68a82adea8de488b255594d35db8e01f5fc8f)

commit 8a9665a586624cfe7f11ad9e21976465e0bb0e21
Author: Junping Du
Date:   2015-04-02T19:13:03Z

    MAPREDUCE-6303. Read timeout when retrying a fetch error can be fatal
    to a reducer. Contributed by Jason Lowe.

    (cherry picked from commit eccb7d46efbf07abcc6a01bd5e7d682f6815b824)
    (cherry picked from commit cacadea632f7ab6fe4fdb1432e1a2c48e8ebd55f)
    (cherry picked from commit 2abd4f61075739514fb3e63b118448895be02a30)

commit c3f5ea11eca30a617cab2a716dd08dff20db3791
Author: Colin Patrick Mccabe
Date:   2015-04-06T15:54:46Z

    HDFS-7999. FsDatasetImpl#createTemporary sometimes holds the FSDatasetImpl
    lock for a very long time (sinago via cmccabe)

    (cherry picked from commit 28bebc81db8bb6d1bc2574de7564fe4c595cfe09)
    (cherry picked from commit a827089905524e10638c783ba908a895d621911d)

    Conflicts:
        hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java

    (cherry picked from commit c3a3092c37926eca75ea149c4c061742f6599b40)

commit 31d30e8111d7c4ef6400b5fe51cc67b17ab34908
Author: Arpit Agarwal
Date:   2015-04-08T18:38:21Z

    HDFS-8072. Reserved RBW space is not released if client terminates while
    writing block. (Arpit Agarwal)

    (cherry picked from commit f0324738c9db4f45d2b1ec5cfb46c5f2b7669571)

    Conflicts:
        hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalReplicaInPipeline.java

    (cherry picked from commit de21de7e2243ef8a89082121d838b88e3c10f05b)

commit 619f7938466e907f335941d928c6272a0482
Author: Kihwal Lee
Date:   2015-04-08T20:39:25Z

    HDFS-8046. Allow better control of getContentSummary. Contributed by
    Kihwal Lee.

    (cherry picked from commit 285b31e75e51ec8e3a796c2cb0208739368ca9b8)
    (cherry picked from commit 7e622076d41a85fc9a8600fb270564a085f5cd83)

    Conflicts:
        hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java
        hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
        hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/serve
[jira] [Commented] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296261#comment-15296261 ]

ASF GitHub Bot commented on HADOOP-13192:
-----------------------------------------

Github user zhudebin closed the pull request at:

    https://github.com/apache/hadoop/pull/95

> org.apache.hadoop.util.LineReader match recordDelimiter has a bug
> -----------------------------------------------------------------
>
>                 Key: HADOOP-13192
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13192
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: util
>    Affects Versions: 2.6.2
>            Reporter: binde
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug,
> when line is bccc, recordDelimiter is aaab, the result should be a,ccc,
> show the code on line 310:
>     for (; bufferPosn < bufferLength; ++bufferPosn) {
>       if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>         delPosn++;
>         if (delPosn >= recordDelimiterBytes.length) {
>           bufferPosn++;
>           break;
>         }
>       } else if (delPosn != 0) {
>         bufferPosn--;
>         delPosn = 0;
>       }
>     }
> should be:
>     for (; bufferPosn < bufferLength; ++bufferPosn) {
>       if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>         delPosn++;
>         if (delPosn >= recordDelimiterBytes.length) {
>           bufferPosn++;
>           break;
>         }
>       } else if (delPosn != 0) {
>         // - change here - start
>         bufferPosn -= delPosn;
>         // - change here - end
>         delPosn = 0;
>       }
>     }

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296246#comment-15296246 ]

Hadoop QA commented on HADOOP-12943:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 22s {color} | {color:red} root: The patch generated 2 new + 186 unchanged - 25 fixed = 188 total (was 211) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 40s {color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 57s {color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 107m 24s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805605/HADOOP-12943.004.patch |
| JIRA Issue | HADOOP-12943 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 17aa537908e5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6161d9b |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9550/artifact/patchprocess/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9550/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HADOOP-Build/9550/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results
[jira] [Updated] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
[ https://issues.apache.org/jira/browse/HADOOP-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

binde updated HADOOP-13192:
---------------------------
    Description: 
org.apache.hadoop.util.LineReader.readCustomLine() has a bug,
when line is bccc, recordDelimiter is aaab, the result should be a,ccc,
show the code on line 310:

    for (; bufferPosn < bufferLength; ++bufferPosn) {
      if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
        delPosn++;
        if (delPosn >= recordDelimiterBytes.length) {
          bufferPosn++;
          break;
        }
      } else if (delPosn != 0) {
        bufferPosn--;
        delPosn = 0;
      }
    }

should be:

    for (; bufferPosn < bufferLength; ++bufferPosn) {
      if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
        delPosn++;
        if (delPosn >= recordDelimiterBytes.length) {
          bufferPosn++;
          break;
        }
      } else if (delPosn != 0) {
        // - change here - start
        bufferPosn -= delPosn;
        // - change here - end
        delPosn = 0;
      }
    }

  was:
org.apache.hadoop.util.LineReader.readCustomLine() has a bug,
when line is bccc, recordDelimiter is aaab, the result should be a,ccc,
show the code on line 310:

    for (; bufferPosn < bufferLength; ++bufferPosn) {
      if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
        delPosn++;
        if (delPosn >= recordDelimiterBytes.length) {
          bufferPosn++;
          break;
        }
      } else if (delPosn != 0) {
        bufferPosn--;
        delPosn = 0;
      }
    }

should be:

    for (; bufferPosn < bufferLength; ++bufferPosn) {
      if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
        delPosn++;
        if (delPosn >= recordDelimiterBytes.length) {
          bufferPosn++;
          break;
        }
      } else if (delPosn != 0) {
        // - change here - start
        bufferPosn -= delPosn;
        // - change here - end
        delPosn = 0;
      }
    }

> org.apache.hadoop.util.LineReader match recordDelimiter has a bug
> -----------------------------------------------------------------
>
>                 Key: HADOOP-13192
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13192
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: util
>    Affects Versions: 2.6.2
>            Reporter: binde
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> org.apache.hadoop.util.LineReader.readCustomLine() has a bug,
> when line is bccc, recordDelimiter is aaab, the result should be a,ccc,
> show the code on line 310:
>     for (; bufferPosn < bufferLength; ++bufferPosn) {
>       if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>         delPosn++;
>         if (delPosn >= recordDelimiterBytes.length) {
>           bufferPosn++;
>           break;
>         }
>       } else if (delPosn != 0) {
>         bufferPosn--;
>         delPosn = 0;
>       }
>     }
> should be:
>     for (; bufferPosn < bufferLength; ++bufferPosn) {
>       if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
>         delPosn++;
>         if (delPosn >= recordDelimiterBytes.length) {
>           bufferPosn++;
>           break;
>         }
>       } else if (delPosn != 0) {
>         // - change here - start
>         bufferPosn -= delPosn;
>         // - change here - end
>         delPosn = 0;
>       }
>     }

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13192) org.apache.hadoop.util.LineReader match recordDelimiter has a bug
binde created HADOOP-13192:
---------------------------

             Summary: org.apache.hadoop.util.LineReader match recordDelimiter has a bug
                 Key: HADOOP-13192
                 URL: https://issues.apache.org/jira/browse/HADOOP-13192
             Project: Hadoop Common
          Issue Type: Bug
          Components: util
    Affects Versions: 2.6.2
            Reporter: binde

org.apache.hadoop.util.LineReader.readCustomLine() has a bug,
when line is bccc, recordDelimiter is aaab, the result should be a,ccc,
show the code on line 310:

    for (; bufferPosn < bufferLength; ++bufferPosn) {
      if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
        delPosn++;
        if (delPosn >= recordDelimiterBytes.length) {
          bufferPosn++;
          break;
        }
      } else if (delPosn != 0) {
        bufferPosn--;
        delPosn = 0;
      }
    }

should be:

    for (; bufferPosn < bufferLength; ++bufferPosn) {
      if (buffer[bufferPosn] == recordDelimiterBytes[delPosn]) {
        delPosn++;
        if (delPosn >= recordDelimiterBytes.length) {
          bufferPosn++;
          break;
        }
      } else if (delPosn != 0) {
        // - change here - start
        bufferPosn -= delPosn;
        // - change here - end
        delPosn = 0;
      }
    }

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
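[Editor's note] To make the reported failure mode concrete, here is a small self-contained Java sketch (class and method names are illustrative, not Hadoop's) of the matching loop from readCustomLine(), with both the current backtracking (bufferPosn--) and the proposed one (bufferPosn -= delPosn). It uses delimiter "aab" against buffer "aaab", a minimal case where the current loop misses a delimiter that overlaps a failed partial match.

```java
// Sketch of LineReader.readCustomLine()'s delimiter-matching loop.
// Returns the buffer position just past the delimiter, or -1 if no
// full delimiter match is found in the buffer.
class DelimiterMatchDemo {
    static int findDelimiterEnd(byte[] buffer, byte[] delim, boolean patched) {
        int delPosn = 0;
        int bufferPosn = 0;
        for (; bufferPosn < buffer.length; ++bufferPosn) {
            if (buffer[bufferPosn] == delim[delPosn]) {
                delPosn++;
                if (delPosn >= delim.length) {
                    bufferPosn++;
                    return bufferPosn; // position just past the delimiter
                }
            } else if (delPosn != 0) {
                if (patched) {
                    // Proposed fix: rewind over the whole failed partial match,
                    // so scanning resumes one byte after where it began.
                    bufferPosn -= delPosn;
                } else {
                    // Current code: steps back one byte only, silently skipping
                    // the bytes consumed by the failed partial match.
                    bufferPosn--;
                }
                delPosn = 0;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        byte[] buffer = "aaab".getBytes();
        byte[] delim  = "aab".getBytes();
        // Buggy loop: the partial match "aa" fails at the third 'a', and the
        // single-byte backtrack never re-tries from offset 1, so the delimiter
        // occupying offsets 1..3 is missed entirely.
        System.out.println(findDelimiterEnd(buffer, delim, false)); // -1
        // Patched loop finds the delimiter, returning the offset just past it.
        System.out.println(findDelimiterEnd(buffer, delim, true));  // 4
    }
}
```

This is the same class of bug as a naive substring search without proper backtracking; the patch restarts the scan one byte after the start of the failed partial match rather than at the byte that broke it.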
[jira] [Commented] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
[ https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296193#comment-15296193 ]

Steve Loughran commented on HADOOP-13164:
-----------------------------------------

The goal of the call is to eliminate upstream pseudo-directory blobs. I fear removing it would do bad things. But if it is called after every file is written, it will be expensive, especially as there is {{getStatus()}} in there (2 x {{getObjectMetadata()}} + 1 x {{listObjects()}}), plus the {{deleteObjects()}} call. As this goes up the tree, the cost will be O(depth).

Given that a file has just been written, it is known that every parent directory has a child (i.e. is non-empty), so you don't need to check so much. You look for the existence of a path, and if it is there: delete.

More deviously, you could say "delete the path without checking to see if it exists". If it's not there, a failed delete is harmless. That'd still be O(depth), but one S3 call per level, rather than 3 or 4. And, once you go down that path, you could say "queue up a delete for all parent paths and fire them off in one go", going from O(depth) to O(1). Even better, you could maybe even do that asynchronously. I'd worry a bit there about race conditions between the current thread and process, but given this is just a cleanup, it might be safe, and I don't see it being any worse race-wise than what exists today, except now it may be more visible to a single thread. That would need very, very careful testing. The one thing nobody wants is an over-zealous delete operation to lose data.
> Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
> --------------------------------------------------------
>
>                 Key: HADOOP-13164
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13164
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Rajesh Balamohan
>            Priority: Minor
>
> https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224
> deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename
> and on outputstream close() to purge any fake directories. Depending on the
> nesting in the folder structure, it might take a lot longer time as it
> invokes getFileStatus multiple times. Instead, it should be able to break
> out of the loop once a non-empty directory is encountered.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
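[Editor's note] The "queue up a delete for all parent paths and fire them off in one go" idea from the comment above can be sketched as follows. This is a hypothetical illustration, not S3AFileSystem's actual code: it only shows how the candidate parent "fake directory" keys for one object key could be collected, so that a single bulk delete request replaces the per-level check-then-delete round trips.

```java
// Collects the parent "fake directory" keys for an S3 object key, so they can
// be handed to one bulk delete call (O(1) round trips) instead of a
// getFileStatus + delete per directory level (O(depth) round trips).
// FakeDirKeys and parentFakeDirKeys are illustrative names, not S3A's API.
import java.util.ArrayList;
import java.util.List;

class FakeDirKeys {
    /** For "a/b/c/file", returns ["a/b/c/", "a/b/", "a/"]. */
    static List<String> parentFakeDirKeys(String objectKey) {
        List<String> keys = new ArrayList<>();
        int slash = objectKey.lastIndexOf('/');
        while (slash > 0) {
            // S3 pseudo-directory markers are zero-byte objects whose key
            // ends in '/'; collect one candidate per ancestor level.
            keys.add(objectKey.substring(0, slash + 1));
            slash = objectKey.lastIndexOf('/', slash - 1);
        }
        return keys;
    }

    public static void main(String[] args) {
        // A single bulk delete request carrying these keys replaces the
        // per-level existence check: deleting a nonexistent key is harmless,
        // so no getFileStatus() calls are needed at all.
        System.out.println(parentFakeDirKeys("a/b/c/file"));
    }
}
```

Whether firing that bulk delete synchronously or asynchronously is safe is exactly the race-condition question raised in the comment; the key collection itself is pure string work and costs nothing extra.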
[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HADOOP-12943:
---------------------------------
    Attachment: HADOOP-12943.004.patch

> Add -w -r options in dfs -test command
> --------------------------------------
>
>                 Key: HADOOP-12943
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12943
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs, scripts, tools
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12943.001.patch, HADOOP-12943.002.patch, HADOOP-12943.003.patch, HADOOP-12943.004.patch
>
>
> Currently the dfs -test command only supports
>   -d, -e, -f, -s, -z
> options. It would be helpful if we add
>   -w, -r
> to verify permission of r/w before actual read or write. This will help
> script programming.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HADOOP-12943:
---------------------------------
    Status: Patch Available  (was: In Progress)

> Add -w -r options in dfs -test command
> --------------------------------------
>
>                 Key: HADOOP-12943
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12943
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs, scripts, tools
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12943.001.patch, HADOOP-12943.002.patch, HADOOP-12943.003.patch, HADOOP-12943.004.patch
>
>
> Currently the dfs -test command only supports
>   -d, -e, -f, -s, -z
> options. It would be helpful if we add
>   -w, -r
> to verify permission of r/w before actual read or write. This will help
> script programming.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang updated HADOOP-12943:
---------------------------------
    Status: In Progress  (was: Patch Available)

> Add -w -r options in dfs -test command
> --------------------------------------
>
>                 Key: HADOOP-12943
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12943
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs, scripts, tools
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-12943.001.patch, HADOOP-12943.002.patch, HADOOP-12943.003.patch
>
>
> Currently the dfs -test command only supports
>   -d, -e, -f, -s, -z
> options. It would be helpful if we add
>   -w, -r
> to verify permission of r/w before actual read or write. This will help
> script programming.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13164) Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
[ https://issues.apache.org/jira/browse/HADOOP-13164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296117#comment-15296117 ]

Rajesh Balamohan commented on HADOOP-13164:
-------------------------------------------

Instead of optimizing deleteUnnecessaryFakeDirectories to reduce the number of calls to S3, we need to understand whether it is mandatory to invoke it from S3*OutputStream.close() / S3AFileSystem.innerCopyFromLocalFile / S3AFileSystem.innerRename at all. Thoughts?

> Optimize S3AFileSystem::deleteUnnecessaryFakeDirectories
> --------------------------------------------------------
>
>                 Key: HADOOP-13164
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13164
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Rajesh Balamohan
>            Priority: Minor
>
> https://github.com/apache/hadoop/blob/27c4e90efce04e1b1302f668b5eb22412e00d033/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L1224
> deleteUnnecessaryFakeDirectories is invoked in S3AFileSystem during rename
> and on outputstream close() to purge any fake directories. Depending on the
> nesting in the folder structure, it might take a lot longer time as it
> invokes getFileStatus multiple times. Instead, it should be able to break
> out of the loop once a non-empty directory is encountered.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org