[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320013#comment-15320013 ] Yi Liu commented on HADOOP-13184: - option 1 is more beautiful, +1. > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.
[ https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320012#comment-15320012 ] Vinitha Reddy Gankidi commented on HADOOP-13189: [~xyao] Thanks for the review. The 2nd parameter of the FairCallQueue constructor is the per queue capacity in the current implementation. However, this patch makes it the total capacity of all subqueues. So, this change is needed. It would be better to keep the capacity allocation to the subqueues flexible. Instead of validating the internal subqueue capacity allocation, I will add a test that validates that the total capacity of all subqueues equals the maxQueueSize. > FairCallQueue makes callQueue larger than the configured capacity. > -- > > Key: HADOOP-13189 > URL: https://issues.apache.org/jira/browse/HADOOP-13189 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Affects Versions: 2.6.0 >Reporter: Konstantin Shvachko >Assignee: Vinitha Reddy Gankidi > Attachments: HADOOP-13189.001.patch > > > {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) > sub-queues, with each sub-queue corresponding to a different level of > priority. The constructor for {{FairCallQueue}} takes the same parameter > {{capacity}} as the default CallQueue implementation, and allocates all its > sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by > default it results in the total callQueue size 4 times larger than it should > be based on the configuration. > {{capacity}} should be divided by the number of sub-queues at some place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
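The capacity split described above can be sketched as follows. This is a hypothetical illustration, not the actual HADOOP-13189 patch: the class name, and the policy of giving the division remainder to the first (highest-priority) sub-queue, are assumptions. It only demonstrates the invariant the proposed test checks, namely that the sub-queue capacities sum to the configured maximum instead of N times the maximum.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SubQueueCapacityDemo {
  /**
   * Splits totalCapacity into numQueues shares; any remainder goes to the
   * first (highest-priority) sub-queue so no capacity is lost to rounding.
   */
  static int[] split(int totalCapacity, int numQueues) {
    int[] caps = new int[numQueues];
    int share = totalCapacity / numQueues;
    int remainder = totalCapacity % numQueues;
    for (int i = 0; i < numQueues; i++) {
      caps[i] = share + (i == 0 ? remainder : 0);
    }
    return caps;
  }

  public static void main(String[] args) {
    int maxQueueSize = 1000; // the configured callQueue capacity
    int numLevels = 4;       // default number of priority levels
    List<BlockingQueue<Object>> subQueues = new ArrayList<>();
    int total = 0;
    for (int cap : split(maxQueueSize, numLevels)) {
      subQueues.add(new ArrayBlockingQueue<>(cap));
      total += cap;
    }
    // The invariant the test validates: total sub-queue capacity equals
    // the configured maximum, not 4x the maximum.
    System.out.println(total); // prints 1000
  }
}
```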
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320009#comment-15320009 ] Xiao Chen commented on HADOOP-13184: +1 on option 1 > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop.
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320007#comment-15320007 ] Abhishek commented on HADOOP-13184: --- I'll add that in the final version. Thanks! > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop.
[jira] [Commented] (HADOOP-13227) AsyncCallHandler should use an event-driven architecture to handle async calls
[ https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319976#comment-15319976 ] Hadoop QA commented on HADOOP-13227: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 51s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 38s{color} | {color:red} root: The patch generated 1 new + 219 unchanged - 1 fixed = 220 total (was 220) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 47s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 59s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}128m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | | | hadoop.hdfs.server.balancer.TestBalancer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808822/c13227_20160608b.patch | | JIRA Issue | HADOOP-13227 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7c9f44517612 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 76f0800 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9684/artifact/patchprocess/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9684/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Commented] (HADOOP-13227) AsyncCallHandler should use an event-driven architecture to handle async calls
[ https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319958#comment-15319958 ] Hadoop QA commented on HADOOP-13227: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 45s{color} | {color:red} root: The patch generated 1 new + 219 unchanged - 1 fixed = 220 total (was 220) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 17s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}127m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics | | | hadoop.ha.TestZKFailoverController | | | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808819/c13227_20160608.patch | | JIRA Issue | HADOOP-13227 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux eb80c47f0174 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 76f0800 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9683/artifact/patchprocess/diff-checkstyle-root.txt | | unit |
[jira] [Commented] (HADOOP-9956) RPC listener inefficiently assigns connections to readers
[ https://issues.apache.org/jira/browse/HADOOP-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319922#comment-15319922 ] Hudson commented on HADOOP-9956: SUCCESS: Integrated in HBase-Trunk_matrix #1008 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1008/]) Revert "HBASE-15948 Port "HADOOP-9956 RPC listener inefficiently assigns (stack: rev e66ecd7db68d6ef57084543d08f7774c82f22f45) * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SimpleRpcSchedulerFactory.java * hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/AbstractTestIPC.java * hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java HBASE-15948 Port "HADOOP-9956 RPC listener inefficiently assigns (stack: rev 3a95552cfe6205ae845e1a7e1b5907da55b1a044) * hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/AbstractTestIPC.java * hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SimpleRpcSchedulerFactory.java > RPC listener inefficiently assigns connections to readers > - > > Key: HADOOP-9956 > URL: https://issues.apache.org/jira/browse/HADOOP-9956 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc >Affects Versions: 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Fix For: 0.23.10, 2.3.0 > > Attachments: HADOOP-9956.branch-23.patch, HADOOP-9956.patch, > HADOOP-9956.patch > > > The socket listener and readers use a complex synchronization to update the > reader's NIO {{Selector}}. 
Updating active selectors is not thread-safe so > precautions are required. > However, the current locking choreography results in a serialized > distribution of new connections to the parallel socket readers. A > slower/busier reader can stall the listener and throttle performance. > The problem manifests as unexpectedly low cpu utilization by the listener and > readers (~20-30%) under heavy load. The call queue is shallow when it should > be overflowing.
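The hand-off pattern that avoids the stall described above can be sketched as follows. This is a simplified illustration of the idea, not Hadoop's actual Listener/Reader classes: the listener never touches a reader's Selector directly; it drops the accepted connection on that reader's pending queue (followed by a selector wakeup in real code) and returns immediately, while the reader registers the channel on its own thread.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ReaderHandoffDemo {
  /** One RPC reader: owns its own pending queue and, in real code, its own Selector. */
  static class Reader implements Runnable {
    final BlockingQueue<String> pending = new ArrayBlockingQueue<>(100);
    volatile int registered = 0;

    /** Called by the listener thread; returns immediately instead of
     *  blocking while this reader updates its selector. */
    void addConnection(String conn) {
      pending.offer(conn);
      // real code would also call selector.wakeup() here
    }

    @Override
    public void run() {
      try {
        while (true) {
          String conn = pending.take(); // drained on the reader's own thread
          registered++;                 // stand-in for channel.register(selector, OP_READ)
        }
      } catch (InterruptedException ie) {
        // shutting down
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Reader r = new Reader();
    Thread t = new Thread(r);
    t.start();
    for (int i = 0; i < 5; i++) {
      r.addConnection("conn-" + i); // the listener never waits on the reader
    }
    while (r.registered < 5) {
      Thread.sleep(1);
    }
    t.interrupt();
    System.out.println(r.registered); // prints 5
  }
}
```

Because the queue absorbs bursts, a slow reader no longer serializes the listener's accept loop, which is the throughput problem the issue describes.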
[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fails
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319914#comment-15319914 ] linbao111 commented on HADOOP-13240: Sorry, I reran the test on branch 2.7.1 and on trunk, and I see the same error: [@test1.heracles.com surefire-reports]# cat org.apache.hadoop.fs.shell.TestAclCommands.txt --- Test set: org.apache.hadoop.fs.shell.TestAclCommands --- Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.892 sec <<< FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time elapsed: 0.817 sec <<< FAILURE! java.lang.AssertionError: setfacl should fail ACL spec missing at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertFalse(Assert.java:64) at org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE!
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
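The proposed test change swaps the argument `""` for `":"`, and the reason can be sketched with a hypothetical mini-parser. This is not Hadoop's actual `AclEntry.parseAclSpec`; it only assumes, as the comment suggests, that after the HADOOP-10277 change an empty spec parses to zero entries without error, so only a genuinely malformed entry such as `":"` still triggers the validation failure the test expects.

```java
import java.util.ArrayList;
import java.util.List;

public class AclSpecDemo {
  /** Returns the parsed entries, or throws on a malformed entry. */
  static List<String[]> parse(String spec) {
    List<String[]> entries = new ArrayList<>();
    if (spec.isEmpty()) {
      return entries; // nothing to parse -- no validation failure here
    }
    for (String entry : spec.split(",")) {
      String[] parts = entry.split(":");
      // ":" splits to zero fields; an entry also needs a non-empty type
      if (parts.length < 2 || parts[0].isEmpty()) {
        throw new IllegalArgumentException("Invalid ACL entry: " + entry);
      }
      entries.add(parts);
    }
    return entries;
  }

  public static void main(String[] args) {
    System.out.println(parse("").size());              // prints 0 (no error)
    System.out.println(parse("user:alice:rw").size()); // prints 1
    try {
      parse(":");
    } catch (IllegalArgumentException e) {
      System.out.println("rejected"); // prints rejected
    }
  }
}
```

Under that assumption, `-setfacl -m "" /path` no longer fails in the parser, so the test must pass a malformed spec like `":"` to keep asserting the failure path.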
[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fails
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] linbao111 updated HADOOP-13240: --- Affects Version/s: 2.7.1 > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.4.1, 2.7.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! > java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 
+80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test
[jira] [Commented] (HADOOP-13247) The CACHE entry in FileSystem is not removed if an exception happens in close
[ https://issues.apache.org/jira/browse/HADOOP-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319881#comment-15319881 ] Hadoop QA commented on HADOOP-13247: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 5s{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808824/HADOOP-13247.000.patch | | JIRA Issue | HADOOP-13247 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9686/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > The CACHE entry in FileSystem is not removed if exception happened in close > --- > > Key: HADOOP-13247 > URL: https://issues.apache.org/jira/browse/HADOOP-13247 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.8.0 >Reporter: zhihai xu >Assignee: zhihai xu > Attachments: HADOOP-13247.000.patch > > > The CACHE entry in FileSystem is not removed if exception happened in close. > It causes "Filesystem closed" IOException if the same filesystem is used > later. 
> The following is stack trace for the exception coming out of close: > {code} > 2016-06-07 18:21:18,201 ERROR hive.ql.exec.DDLTask: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.reflect.UndeclaredThrowableException > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:756) > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4022) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:172) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1679) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1422) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1205) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1052) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1047) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:158) > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:76) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:219) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:231) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.reflect.UndeclaredThrowableException > at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source) > at 
org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988) > at > org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118) > at > org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400) > at > org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1383) > at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2006) > at > org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:900) > at > org.apache.hadoop.hive.metastore.Warehouse.closeFs(Warehouse.java:122) > at > org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:497) > at > org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.createTempTable(SessionHiveMetaStoreClient.java:345) > at >
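The fix this issue calls for can be sketched as follows. This is a minimal stand-in for `FileSystem.Cache`, not the actual Hadoop classes or patch; the class and field names are assumptions. The point it illustrates is evicting the cache entry in a `finally` block, so that even when `close()` throws (as `processDeleteOnExit` does in the stack trace above), a later lookup returns a fresh, open instance instead of the closed one that causes "Filesystem closed".

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class FsCacheDemo {
  static final Map<String, FsCacheDemo> CACHE = new HashMap<>();
  final String key;
  boolean failOnClose = false;

  FsCacheDemo(String key) { this.key = key; }

  static synchronized FsCacheDemo get(String key) {
    return CACHE.computeIfAbsent(key, FsCacheDemo::new);
  }

  public void close() throws IOException {
    try {
      if (failOnClose) {
        // stands in for processDeleteOnExit() failing during close
        throw new IOException("close failed");
      }
    } finally {
      synchronized (FsCacheDemo.class) {
        CACHE.remove(key); // evict even when close() throws -- the point of the fix
      }
    }
  }

  public static void main(String[] args) {
    FsCacheDemo fs = get("hdfs://nn1");
    fs.failOnClose = true;
    try {
      fs.close();
    } catch (IOException expected) {
      // ignore: the cache entry must still be gone
    }
    // Without the finally-based eviction, get() would hand back the closed
    // instance and later callers would hit "Filesystem closed".
    System.out.println(get("hdfs://nn1") == fs); // prints false
  }
}
```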
[jira] [Updated] (HADOOP-13247) The CACHE entry in FileSystem is not removed if an exception happens in close
[ https://issues.apache.org/jira/browse/HADOOP-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhihai xu updated HADOOP-13247: --- Attachment: HADOOP-13247.000.patch > The CACHE entry in FileSystem is not removed if exception happened in close > --- > > Key: HADOOP-13247 > URL: https://issues.apache.org/jira/browse/HADOOP-13247 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.8.0 >Reporter: zhihai xu >Assignee: zhihai xu > Attachments: HADOOP-13247.000.patch > > > The CACHE entry in FileSystem is not removed if exception happened in close. > It causes "Filesystem closed" IOException if the same filesystem is used > later. > The following is stack trace for the exception coming out of close: > {code} > 2016-06-07 18:21:18,201 ERROR hive.ql.exec.DDLTask: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.reflect.UndeclaredThrowableException > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:756) > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4022) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:172) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1679) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1422) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1205) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1052) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1047) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:158) > at > org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:76) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:219) > at java.security.AccessController.doPrivileged(Native Method) > at 
javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:231) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.reflect.UndeclaredThrowableException > at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988) > at > org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118) > at > org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114) > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400) > at > org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1383) > at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2006) > at > org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:900) > at > org.apache.hadoop.hive.metastore.Warehouse.closeFs(Warehouse.java:122) > at > org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:497) > at > org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.createTempTable(SessionHiveMetaStoreClient.java:345) > at > org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:93) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:664) > at > 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:652) > at sun.reflect.GeneratedMethodAccessor108.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90) > at com.sun.proxy.$Proxy8.createTable(Unknown Source) >
[jira] [Updated] (HADOOP-13247) The CACHE entry in FileSystem is not removed if an exception happens in close
[ https://issues.apache.org/jira/browse/HADOOP-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhihai xu updated HADOOP-13247: --- Attachment: (was: HADOOP-13247.000.patch)
> The CACHE entry in FileSystem is not removed if exception happened in close
> ---
>
> Key: HADOOP-13247
> URL: https://issues.apache.org/jira/browse/HADOOP-13247
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 2.8.0
> Reporter: zhihai xu
> Assignee: zhihai xu
> Attachments: HADOOP-13247.000.patch
>
> The CACHE entry in FileSystem is not removed if exception happened in close.
> It causes "Filesystem closed" IOException if the same filesystem is used later.
> The following is stack trace for the exception coming out of close:
> {code}
> 2016-06-07 18:21:18,201 ERROR hive.ql.exec.DDLTask: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.reflect.UndeclaredThrowableException
> 	at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:756)
> 	at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4022)
> 	at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306)
> 	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:172)
> 	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
> 	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1679)
> 	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1422)
> 	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1205)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1052)
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1047)
> 	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:158)
> 	at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:76)
> 	at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:219)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:231)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.UndeclaredThrowableException
> 	at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
> 	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
> 	at org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1383)
> 	at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2006)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:900)
> 	at org.apache.hadoop.hive.metastore.Warehouse.closeFs(Warehouse.java:122)
> 	at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:497)
> 	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.createTempTable(SessionHiveMetaStoreClient.java:345)
> 	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:93)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:664)
> 	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:652)
> 	at sun.reflect.GeneratedMethodAccessor108.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)
> 	at com.sun.proxy.$Proxy8.createTable(Unknown Source)
> {code}
[jira] [Commented] (HADOOP-13247) The CACHE entry in FileSystem is not removed if exception happened in close
[ https://issues.apache.org/jira/browse/HADOOP-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319874#comment-15319874 ] zhihai xu commented on HADOOP-13247: I attached a patch HADOOP-13247.000.patch which removes the entry from the CACHE in a {{finally}} block, so even if an exception happens in {{processDeleteOnExit}}, the entry will still be removed from the CACHE. > The CACHE entry in FileSystem is not removed if exception happened in close > --- > > Key: HADOOP-13247 > URL: https://issues.apache.org/jira/browse/HADOOP-13247 > Project: Hadoop Common > Issue Type: Bug > Components: fs > Affects Versions: 2.8.0 > Reporter: zhihai xu > Assignee: zhihai xu > Attachments: HADOOP-13247.000.patch > > > The CACHE entry in FileSystem is not removed if exception happened in close. > It causes "Filesystem closed" IOException if the same filesystem is used later.
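The pattern described in the comment above — evicting the cache entry in a {{finally}} block so it survives a failure inside close() — can be sketched as follows. This is a simplified, hypothetical model, not the actual FileSystem code: the class name {{CachedFs}}, the string key, and the {{failCleanup}} flag are illustration-only.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch (assumption: not the real FileSystem/CACHE code).
// The point: remove() sits in a finally block, so the stale entry is
// dropped even when processDeleteOnExit() throws during close().
public class CachedFs {
    static final Map<String, CachedFs> CACHE = new HashMap<>();
    final String key;
    final boolean failCleanup;   // simulate an exception during cleanup

    CachedFs(String key, boolean failCleanup) {
        this.key = key;
        this.failCleanup = failCleanup;
        CACHE.put(key, this);    // cached on creation, like FileSystem.get()
    }

    void processDeleteOnExit() throws IOException {
        if (failCleanup) {
            throw new IOException("cleanup failed");
        }
    }

    public void close() throws IOException {
        try {
            processDeleteOnExit();   // may throw
        } finally {
            CACHE.remove(key);       // always evict the closed instance
        }
    }

    public static void main(String[] args) {
        CachedFs fs = new CachedFs("hdfs://ns1", true);
        try {
            fs.close();
        } catch (IOException expected) {
            // the exception still propagates to the caller...
        }
        // ...but the cache no longer hands out the closed instance,
        // which is what prevents the later "Filesystem closed" error.
        System.out.println(CACHE.containsKey("hdfs://ns1"));
    }
}
```

Without the {{finally}}, the throwing cleanup path would leave the closed instance cached, and the next lookup would return it and fail with "Filesystem closed".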
[jira] [Commented] (HADOOP-13247) The CACHE entry in FileSystem is not removed if exception happened in close
[ https://issues.apache.org/jira/browse/HADOOP-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319873#comment-15319873 ] Hadoop QA commented on HADOOP-13247: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 6s{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808823/HADOOP-13247.000.patch | | JIRA Issue | HADOOP-13247 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9685/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > The CACHE entry in FileSystem is not removed if exception happened in close > --- > > Key: HADOOP-13247 > URL: https://issues.apache.org/jira/browse/HADOOP-13247 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.8.0 >Reporter: zhihai xu >Assignee: zhihai xu > Attachments: HADOOP-13247.000.patch > > > The CACHE entry in FileSystem is not removed if exception happened in close. > It causes "Filesystem closed" IOException if the same filesystem is used > later. 
[jira] [Updated] (HADOOP-13247) The CACHE entry in FileSystem is not removed if exception happened in close
[ https://issues.apache.org/jira/browse/HADOOP-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhihai xu updated HADOOP-13247: --- Attachment: HADOOP-13247.000.patch > The CACHE entry in FileSystem is not removed if exception happened in close > --- > > Key: HADOOP-13247 > URL: https://issues.apache.org/jira/browse/HADOOP-13247 > Project: Hadoop Common > Issue Type: Bug > Components: fs > Affects Versions: 2.8.0 > Reporter: zhihai xu > Assignee: zhihai xu > Attachments: HADOOP-13247.000.patch > > > The CACHE entry in FileSystem is not removed if exception happened in close. > It causes "Filesystem closed" IOException if the same filesystem is used later.
[jira] [Updated] (HADOOP-13247) The CACHE entry in FileSystem is not removed if exception happened in close
[ https://issues.apache.org/jira/browse/HADOOP-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhihai xu updated HADOOP-13247: --- Status: Patch Available (was: Open) > The CACHE entry in FileSystem is not removed if exception happened in close > --- > > Key: HADOOP-13247 > URL: https://issues.apache.org/jira/browse/HADOOP-13247 > Project: Hadoop Common > Issue Type: Bug > Components: fs > Affects Versions: 2.8.0 > Reporter: zhihai xu > Assignee: zhihai xu > Attachments: HADOOP-13247.000.patch > > > The CACHE entry in FileSystem is not removed if exception happened in close. > It causes "Filesystem closed" IOException if the same filesystem is used later.
[jira] [Updated] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls
[ https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HADOOP-13227: - Attachment: c13227_20160608b.patch c13227_20160608b.patch: checkEmpty() should be right after remove() > AsyncCallHandler should use a event driven architecture to handle async calls > - > > Key: HADOOP-13227 > URL: https://issues.apache.org/jira/browse/HADOOP-13227 > Project: Hadoop Common > Issue Type: Improvement > Components: io, ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: c13227_20160602.patch, c13227_20160606.patch, > c13227_20160607.patch, c13227_20160608.patch, c13227_20160608b.patch > > > This JIRA is to address [Jing's > comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630] > in HADOOP-13226. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
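The ordering constraint noted above — {{checkEmpty()}} immediately after {{remove()}} — matters in an event-driven handler because checking before the remove (or much later) can let the handler go idle while an entry is still queued. The sketch below is a hypothetical illustration of that pattern, not the actual AsyncCallHandler code; the class and method names are invented for the example.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: an "active" flag that is cleared only right after
// an element is removed and the queue is observed empty, so the handler
// never parks while a call is still pending.
public class CallQueue<T> {
    private final ConcurrentLinkedQueue<T> calls = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean active = new AtomicBoolean(false);

    public void add(T call) {
        calls.add(call);
        active.set(true);        // there is work to process
    }

    public T poll() {
        T call = calls.poll();
        checkEmpty();            // right after remove(): park only when drained
        return call;
    }

    private void checkEmpty() {
        if (calls.isEmpty()) {
            active.set(false);   // nothing left; the handler may stop polling
        }
    }

    public boolean isActive() {
        return active.get();
    }
}
```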
[jira] [Updated] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls
[ https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HADOOP-13227: - Attachment: c13227_20160608.patch c13227_20160608.patch: - uses RetryDecision ordering to further simplify newRetryInfo. - checkCalls should check if the queue is empty at the end. > AsyncCallHandler should use a event driven architecture to handle async calls > - > > Key: HADOOP-13227 > URL: https://issues.apache.org/jira/browse/HADOOP-13227 > Project: Hadoop Common > Issue Type: Improvement > Components: io, ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: c13227_20160602.patch, c13227_20160606.patch, > c13227_20160607.patch, c13227_20160608.patch > > > This JIRA is to address [Jing's > comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630] > in HADOOP-13226. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials
[ https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319843#comment-15319843 ] Hadoop QA commented on HADOOP-13237: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 34s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 42s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 26s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 27s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 39s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 25s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 22s{color} | {color:green} root: The patch generated 0 new + 9 unchanged - 2 fixed = 9 total (was 11) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 13s{color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 76m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_91 Timed out junit tests |
[jira] [Commented] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319836#comment-15319836 ] Weiwei Yang commented on HADOOP-12943: -- Hello [~ajisakaa], I would appreciate it if you could take a look at this one when you have time. Thanks a lot. > Add -w -r options in dfs -test command > -- > > Key: HADOOP-12943 > URL: https://issues.apache.org/jira/browse/HADOOP-12943 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, scripts, tools > Reporter: Weiwei Yang > Assignee: Weiwei Yang > Fix For: 2.8.0 > > Attachments: HADOOP-12943.001.patch, HADOOP-12943.002.patch, > HADOOP-12943.003.patch, HADOOP-12943.004.patch > > > Currently the dfs -test command only supports > -d, -e, -f, -s, -z > options. It would be helpful if we add > -w, -r > to verify read/write permission before the actual read or write. This will help > shell scripting. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
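The scripting use case described in the issue above could look like this. The snippet is a sketch: it assumes a hadoop client with the -w option from this patch installed, and /user/alice/output is a placeholder path.

```shell
# Check write permission before attempting a write (assumes the -w flag
# from HADOOP-12943 is available; path is a placeholder).
if hadoop fs -test -w /user/alice/output; then
  echo "path is writable"
else
  echo "no write permission"
fi
```

As with the existing -d/-e/-f/-s/-z options, the result is conveyed via the exit code (0 on success), which is what makes it composable in shell scripts.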
[jira] [Created] (HADOOP-13247) The CACHE entry in FileSystem is not removed if exception happened in close
zhihai xu created HADOOP-13247:
--
Summary: The CACHE entry in FileSystem is not removed if exception happened in close
Key: HADOOP-13247
URL: https://issues.apache.org/jira/browse/HADOOP-13247
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 2.8.0
Reporter: zhihai xu
Assignee: zhihai xu

The CACHE entry in FileSystem is not removed if exception happened in close. It causes "Filesystem closed" IOException if the same filesystem is used later. The following is stack trace for the exception coming out of close:
{code}
2016-06-07 18:21:18,201 ERROR hive.ql.exec.DDLTask: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.reflect.UndeclaredThrowableException
	at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:756)
	at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4022)
	at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:172)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1679)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1422)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1205)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1052)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1047)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:158)
	at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:76)
	at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:219)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:231)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.UndeclaredThrowableException
	at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
	at org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1383)
	at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2006)
	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:900)
	at org.apache.hadoop.hive.metastore.Warehouse.closeFs(Warehouse.java:122)
	at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:497)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.createTempTable(SessionHiveMetaStoreClient.java:345)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:93)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:664)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:652)
	at sun.reflect.GeneratedMethodAccessor108.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)
	at com.sun.proxy.$Proxy8.createTable(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor108.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:1909)
	at com.sun.proxy.$Proxy8.createTable(Unknown Source)
	at
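The failure mode reported above can be modeled in a few lines. This is a hedged sketch, not the real FileSystem code: the {{CachedFs}} class and its key scheme are illustrative stand-ins, showing the fix direction of evicting the cache entry before doing any close-time work that can throw, so a failed close never leaves a stale, closed instance cached.

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal model of the bug: if the cache entry is only evicted after close()
// steps that can throw (processDeleteOnExit in the report), a closed instance
// stays cached and every later lookup returns it. Evicting first avoids that.
class CachedFs {
    static final Map<String, CachedFs> CACHE = new ConcurrentHashMap<>();
    private final String key;
    private boolean closed = false;

    private CachedFs(String key) { this.key = key; }

    static CachedFs get(String key) {
        return CACHE.computeIfAbsent(key, CachedFs::new);
    }

    boolean isClosed() { return closed; }

    // Fix sketch: remove from the cache first, then do the fallible work.
    void close(boolean simulateFailure) throws IOException {
        CACHE.remove(key, this);
        closed = true;
        if (simulateFailure) {
            throw new IOException("simulated failure during close");
        }
    }

    // Demonstrates that a failed close still evicts the entry, so the next
    // lookup yields a fresh, open instance instead of the closed one.
    static boolean demoFailedCloseEvicts() {
        CachedFs fs = get("hdfs://ns1");
        try {
            fs.close(true);
        } catch (IOException expected) {
            // close failed, but the cache entry is already gone
        }
        CachedFs fresh = get("hdfs://ns1");
        return fresh != fs && !fresh.isClosed();
    }
}
```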
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319804#comment-15319804 ] Craig L Russell commented on HADOOP-13184: -- Great idea to add Apache to the logo. While you're at it, how about adding (R) to Hadoop to show that it's a registered trademark? > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319781#comment-15319781 ] Chris Douglas commented on HADOOP-13184: +1 on option 1 > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials
[ https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13237: --- Attachment: HADOOP-13237-branch-2.002.patch Here is patch 002 for branch-2. I'm currently doing a full test run against an S3 bucket in US-west-2. * Documentation updated. * Tests added. bq. we could have an anon provider subclass which has the constructor; that would eliminate the need to have a handler. I'm not sure I understood this comment. {{AnonymousAWSCredentialsProvider}} is our own code in S3A, so we have control over the constructors we want it to provide. I considered providing a constructor that accepts and ignores a {{URI}} and {{Configuration}}, but I thought it would cause confusion to see a constructor with unused arguments. Instead, I expanded the reflection logic to support calling the default constructor. I haven't yet made any changes related to this in this revision of the patch, so if you still want to request changes, please let me know. bq. maybe also: log @ Info? I looked into this. Unfortunately, info-level logging would propagate out to stderr in the shell example I gave earlier, and this would be undesirable output. Maybe the existing debug-level logging is sufficient? > s3a initialization against public bucket fails if caller lacks any credentials > -- > > Key: HADOOP-13237 > URL: https://issues.apache.org/jira/browse/HADOOP-13237 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Chris Nauroth >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13237-branch-2.002.patch, HADOOP-13237.001.patch > > > If an S3 bucket is public, anyone should be able to read from it. > However, you cannot create an s3a client bonded to a public bucket unless you > have some credentials; the {{doesBucketExist()}} check rejects the call. 
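The "expanded the reflection logic to support calling the default constructor" change described in the comment above can be sketched roughly as follows. This is a hypothetical illustration: {{Configuration}} and {{AnonProvider}} below are stand-ins for the real Hadoop and S3A classes, and the actual S3A factory code may differ.

```java
import java.lang.reflect.Constructor;
import java.net.URI;

// Stand-in for org.apache.hadoop.conf.Configuration.
class Configuration {}

// Stand-in for AnonymousAWSCredentialsProvider: default constructor only,
// so it needs no unused (URI, Configuration) arguments.
class AnonProvider {
    AnonProvider() {}
}

class ProviderFactory {
    // Prefer a (URI, Configuration) constructor; fall back to the default
    // constructor when the provider class does not declare one.
    static Object create(Class<?> cls, URI uri, Configuration conf)
            throws ReflectiveOperationException {
        try {
            Constructor<?> c = cls.getDeclaredConstructor(URI.class, Configuration.class);
            return c.newInstance(uri, conf);
        } catch (NoSuchMethodException e) {
            return cls.getDeclaredConstructor().newInstance();
        }
    }

    // Shows the fallback path being taken for a default-constructor provider.
    static boolean demo() {
        try {
            Object p = create(AnonProvider.class, URI.create("s3a://bucket/"),
                              new Configuration());
            return p instanceof AnonProvider;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }
}
```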
[jira] [Updated] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials
[ https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13237: --- Status: Patch Available (was: Reopened) > s3a initialization against public bucket fails if caller lacks any credentials > -- > > Key: HADOOP-13237 > URL: https://issues.apache.org/jira/browse/HADOOP-13237 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Chris Nauroth >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13237-branch-2.002.patch, HADOOP-13237.001.patch > > > If an S3 bucket is public, anyone should be able to read from it. > However, you cannot create an s3a client bonded to a public bucket unless you > have some credentials; the {{doesBucketExist()}} check rejects the call. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319769#comment-15319769 ] John Zhuge commented on HADOOP-13240: - [~cnauroth] Feel free to close it. Thanks for the heads up. I planned to reproduce but haven't attempted yet. > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.4.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13245) Fix up some misc create-release issues
[ https://issues.apache.org/jira/browse/HADOOP-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319732#comment-15319732 ] Hadoop QA commented on HADOOP-13245: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 51s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 12s{color} | {color:green} The patch generated 0 new + 76 unchanged - 5 fixed = 76 total (was 81) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 9s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 25s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 72m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.impl.TestMetricsSystemImpl | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808775/HADOOP-13245.00.patch | | JIRA Issue | HADOOP-13245 | | Optional Tests | asflicense shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 13d836ab9bfb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 733f3f1 | | Default Java | 1.8.0_91 | | shellcheck | v0.4.4 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9681/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9681/testReport/ | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9681/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix up some misc create-release issues > -- > > Key: HADOOP-13245 > URL: https://issues.apache.org/jira/browse/HADOOP-13245 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen
[jira] [Commented] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls
[ https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319679#comment-15319679 ] Jing Zhao commented on HADOOP-13227: The latest patch looks good to me. The only minor comment: {code} final RetryAction a = failover != null? failover : retry == null? fail: null; {code} Here {{a}} can be assigned with {{retry}} itself if {{retry}} is not failover/null? Other than this +1. > AsyncCallHandler should use a event driven architecture to handle async calls > - > > Key: HADOOP-13227 > URL: https://issues.apache.org/jira/browse/HADOOP-13227 > Project: Hadoop Common > Issue Type: Improvement > Components: io, ipc >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: c13227_20160602.patch, c13227_20160606.patch, > c13227_20160607.patch > > > This JIRA is to address [Jing's > comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630] > in HADOOP-13226. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
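Jing Zhao's suggestion above amounts to returning {{retry}} itself instead of mapping a non-null retry to {{null}}, so the expression becomes a plain failover-then-retry-then-fail priority chain. A small illustration, with {{RetryAction}} modeled as a plain String:

```java
// The original: failover != null ? failover : (retry == null ? fail : null)
// The suggestion: return retry directly in the middle case.
class RetryChoice {
    static String choose(String failover, String retry, String fail) {
        return failover != null ? failover : (retry != null ? retry : fail);
    }
}
```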
[jira] [Updated] (HADOOP-13246) Support Mutable Short Gauge In Metrics2 lib
[ https://issues.apache.org/jira/browse/HADOOP-13246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-13246: Description: Currently, MutableGaugeInt and MutableGaugeLong are the supported types of MutableGauge. Add MutableGaugeShort to this list for keeping track of metrics for which the int range is more than is required. (was: Currently, MutableGaugeInt and MutableGaugeLong are the supported types of MutableGauge. Add MutableGaugeShort to this list for keeping track of metrics for which the int range is too big.) > Support Mutable Short Gauge In Metrics2 lib > --- > > Key: HADOOP-13246 > URL: https://issues.apache.org/jira/browse/HADOOP-13246 > Project: Hadoop Common > Issue Type: Improvement > Components: metrics >Affects Versions: 3.0.0-alpha1 >Reporter: Hanisha Koneru > > Currently, MutableGaugeInt and MutableGaugeLong are the supported types of > MutableGauge. Add MutableGaugeShort to this list for keeping track of metrics > for which the int range is more than is required. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13246) Support Mutable Short Gauge In Metrics2 lib
Hanisha Koneru created HADOOP-13246: --- Summary: Support Mutable Short Gauge In Metrics2 lib Key: HADOOP-13246 URL: https://issues.apache.org/jira/browse/HADOOP-13246 Project: Hadoop Common Issue Type: Improvement Components: metrics Affects Versions: 3.0.0-alpha1 Reporter: Hanisha Koneru Currently, MutableGaugeInt and MutableGaugeLong are the supported types of MutableGauge. Add MutableGaugeShort to this list for keeping track of metrics for which the int range is too big. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
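The proposed gauge might look like the following. This is a hedged sketch, not the eventual patch: the real class would extend {{MutableGauge}} and publish through a {{MetricsRecordBuilder}} (omitted here), and backing the short with an {{AtomicInteger}} is an assumption modeled on {{MutableGaugeInt}}.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of MutableGaugeShort following the shape of MutableGaugeInt,
// narrowing the stored value to the short range on read.
class MutableGaugeShort {
    private final AtomicInteger value = new AtomicInteger();

    public short value() { return (short) value.get(); }
    public void set(short v) { value.set(v); }
    public void incr() { value.incrementAndGet(); }
    public void decr() { value.decrementAndGet(); }
}
```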
[jira] [Commented] (HADOOP-9956) RPC listener inefficiently assigns connections to readers
[ https://issues.apache.org/jira/browse/HADOOP-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319655#comment-15319655 ] Hudson commented on HADOOP-9956: FAILURE: Integrated in HBase-Trunk_matrix #1007 (See [https://builds.apache.org/job/HBase-Trunk_matrix/1007/]) HBASE-15948 Port "HADOOP-9956 RPC listener inefficiently assigns (stack: rev e0b70c00e74aeaac33570508e3732a53daea839e) * hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/AbstractTestIPC.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java * hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java * hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SimpleRpcSchedulerFactory.java > RPC listener inefficiently assigns connections to readers > - > > Key: HADOOP-9956 > URL: https://issues.apache.org/jira/browse/HADOOP-9956 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc >Affects Versions: 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Fix For: 0.23.10, 2.3.0 > > Attachments: HADOOP-9956.branch-23.patch, HADOOP-9956.patch, > HADOOP-9956.patch > > > The socket listener and readers use a complex synchronization to update the > reader's NIO {{Selector}}. Updating active selectors is not thread-safe so > precautions are required. > However, the current locking choreography results in a serialized > distribution of new connections to the parallel socket readers. A > slower/busier reader can stall the listener and throttle performance. > The problem manifests as unexpectedly low cpu utilization by the listener and > readers (~20-30%) under heavy load. The call queue is shallow when it should > be overflowing. 
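The hand-off pattern described in the issue above — avoiding a serialized distribution of new connections through the readers' Selector locks — can be sketched as a per-reader pending queue. This is an assumption-laden model, not the actual HADOOP-9956 patch: connections are modeled as ints and the Selector registration step is only described in comments.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// The listener round-robins accepted connections into per-reader queues in
// O(1) without touching any reader's Selector; each reader drains its own
// queue on its own thread and registers the channel with its private
// Selector there, so a slow reader cannot stall the listener.
class ReaderPool {
    private final List<Queue<Integer>> pending = new ArrayList<>();
    private int next = 0;

    ReaderPool(int readers) {
        for (int i = 0; i < readers; i++) {
            pending.add(new ConcurrentLinkedQueue<>());
        }
    }

    // Called by the listener thread: enqueue and move on, never block.
    void assign(int connection) {
        pending.get(next).add(connection);
        next = (next + 1) % pending.size();
    }

    // Called by reader i: take the next pending connection, if any.
    Integer poll(int reader) {
        return pending.get(reader).poll();
    }
}
```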
[jira] [Commented] (HADOOP-12892) fix/rewrite create-release
[ https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319636#comment-15319636 ] Allen Wittenauer commented on HADOOP-12892: --- I've updated HowToRelease based upon having dev-support/bin/create-release and fixing quite a few things that were missing (jdiff!) or wrong (people.apache.org!). Also note that I filed HADOOP-13245 to fix up a few more things in the source and add some functionality. > fix/rewrite create-release > -- > > Key: HADOOP-12892 > URL: https://issues.apache.org/jira/browse/HADOOP-12892 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer >Priority: Blocker > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, > HADOOP-12892.02.patch, HADOOP-12892.03.patch > > > create-release needs some major surgery. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13245) Fix up some misc create-release issues
[ https://issues.apache.org/jira/browse/HADOOP-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319628#comment-15319628 ] Hadoop QA commented on HADOOP-13245: (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://builds.apache.org/job/PreCommit-HADOOP-Build/9681/console in case of problems. > Fix up some misc create-release issues > -- > > Key: HADOOP-13245 > URL: https://issues.apache.org/jira/browse/HADOOP-13245 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer >Priority: Blocker > Attachments: HADOOP-13245.00.patch > > > 1. Apache Yetus 0.3.0 requires the dateutil.parser module for Python. This > needs to get added to the Dockerfile > 2. Missing -Pdocs so that documentation build is complete -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13245) Fix up some misc create-release issues
[ https://issues.apache.org/jira/browse/HADOOP-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13245: -- Attachment: HADOOP-13245.00.patch -00: * add -Pdocs * fix some shellcheck errors * add -Psign * fix missing version for maven-gpg-plugin * add python-dateutil to dockerfile * fix naked gpg usage * warm the gpg-agent cache * add a label to the dockerfile to make them easier to remove > Fix up some misc create-release issues > -- > > Key: HADOOP-13245 > URL: https://issues.apache.org/jira/browse/HADOOP-13245 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer >Priority: Blocker > Attachments: HADOOP-13245.00.patch > > > 1. Apache Yetus 0.3.0 requires the dateutil.parser module for Python. This > needs to get added to the Dockerfile > 2. Missing -Pdocs so that documentation build is complete -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13245) Fix up some misc create-release issues
[ https://issues.apache.org/jira/browse/HADOOP-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13245: -- Status: Patch Available (was: Open) > Fix up some misc create-release issues > -- > > Key: HADOOP-13245 > URL: https://issues.apache.org/jira/browse/HADOOP-13245 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer >Priority: Blocker > Attachments: HADOOP-13245.00.patch > > > 1. Apache Yetus 0.3.0 requires the dateutil.parser module for Python. This > needs to get added to the Dockerfile > 2. Missing -Pdocs so that documentation build is complete -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319587#comment-15319587 ] Jitendra Nath Pandey commented on HADOOP-12291: --- [~ekundin], could you please rebase the patch once again against the latest trunk? There are some small conflicts, but I don't think it changes the logic significantly. I will review and commit the rebased patch quickly. Thanks. > Add support for nested groups in LdapGroupsMapping > -- > > Key: HADOOP-12291 > URL: https://issues.apache.org/jira/browse/HADOOP-12291 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0 >Reporter: Gautam Gopalakrishnan >Assignee: Esther Kundin > Labels: features, patch > Fix For: 2.8.0 > > Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, > HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, > HADOOP-12291.006.patch, HADOOP-12291.007.patch > > > When using {{LdapGroupsMapping}} with Hadoop, nested groups are not > supported. So for example if user {{jdoe}} is part of group A which is a > member of group B, the group mapping currently returns only group A. > Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and > SSSD (or similar tools) but would be good to have this feature as part of > {{LdapGroupsMapping}} directly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
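The nested-group lookup this issue adds can be sketched as a breadth-first walk over group parentage with a visited set to survive membership cycles. This is a hedged sketch only: the LDAP query is stubbed as a map, whereas the real {{LdapGroupsMapping}} patch issues a directory search per level and is bounded by a configurable depth.

```java
import java.util.*;

// Resolve a user's full group set: start from the direct groups and
// repeatedly add each group's parent groups, skipping any group already
// seen so cyclic memberships terminate.
class NestedGroups {
    static Set<String> resolve(Set<String> direct, Map<String, Set<String>> parents) {
        Set<String> all = new LinkedHashSet<>();
        Deque<String> toVisit = new ArrayDeque<>(direct);
        while (!toVisit.isEmpty()) {
            String g = toVisit.poll();
            if (all.add(g)) {
                toVisit.addAll(parents.getOrDefault(g, Collections.emptySet()));
            }
        }
        return all;
    }
}
```

With the example from the issue description — {{jdoe}} in group A, which is a member of group B — the resolved set contains both A and B instead of A alone.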
[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail
[ https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319576#comment-15319576 ] Chris Nauroth commented on HADOOP-13240: [~jzhuge], I noticed you assigned this issue to yourself. Do you have a repro? I haven't seen the failure yet. bq. i run test only on my hadoop2.4.1,and i am sure it will be failed on trunk or 2.7 version If the failure only repros on 2.4.1, but it succeeds in later versions, then we'll likely close this issue. There is no active maintenance of the 2.4 line now. > TestAclCommands.testSetfaclValidations fail > --- > > Key: HADOOP-13240 > URL: https://issues.apache.org/jira/browse/HADOOP-13240 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.4.1 > Environment: hadoop 2.4.1,as6.5 >Reporter: linbao111 >Assignee: John Zhuge >Priority: Minor > > mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console > -Dtest=TestAclCommands#testSetfaclValidations failed with following message: > --- > Test set: org.apache.hadoop.fs.shell.TestAclCommands > --- > Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< > FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands > testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands) Time > elapsed: 0.534 sec <<< FAILURE! 
> java.lang.AssertionError: setfacl should fail ACL spec missing > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at org.junit.Assert.assertFalse(Assert.java:68) > at > org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81) > i notice from > HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java > code changed > should > hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe > changed to: > diff --git > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > index b14cd37..463bfcd > --- > a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > +++ > b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java > @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception { > "/path" })); > assertFalse("setfacl should fail ACL spec missing", > 0 == runCommand(new String[] { "-setfacl", "-m", > -"", "/path" })); > +":", "/path" })); >} > >@Test -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.
[ https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319513#comment-15319513 ] Chris Nauroth commented on HADOOP-13223: bq. It's not clear to me why a DLL would be less prone to path problems than an EXE. It seems like we should just be putting a version number on the EXE, so that we avoid these conflicts. We have the same problem with libhadoop-- see HADOOP-11127. You're correct that hadoop.dll suffers the same challenges as libhadoop.so, but on Windows, the challenges are even greater. First, there is the simple matter that there are 2 binaries to grapple with instead of 1. Both hadoop.dll and winutils.exe are required. Second, there is the problem of understanding how a Hadoop process loads winutils.exe. In the case of hadoop.dll, the running process uses well-known, well-defined dynamic linking mechanisms to find the dll. Experienced Windows developers and admins will be familiar with the [DLL search path|https://msdn.microsoft.com/en-us/library/windows/desktop/ms682586(v=vs.85).aspx]. For winutils.exe, there is no such familiar ground on which developers and admins can build an understanding. Hadoop uses separate, arbitrary logic to find the winutils.exe binary. Unlike the DLL search path, this is not consistent with typical dynamic linking practices, so it can be a source of confusion. I am +1 for migrating more functionality into hadoop.dll and eventually eliminating winutils.exe. This addresses the additional difficulty of coordinating 2 binaries and the additional difficulty of understanding how it gets loaded. It does not address the challenge of version compatibility between Java and native code during dynamic linking, but that issue is tracked elsewhere. > winutils.exe is a bug nexus and should be killed with an axe. 
> - > > Key: HADOOP-13223 > URL: https://issues.apache.org/jira/browse/HADOOP-13223 > Project: Hadoop Common > Issue Type: Improvement > Components: bin >Affects Versions: 2.6.0 > Environment: Microsoft Windows, all versions >Reporter: john lilley > > winutils.exe was apparently created as a stopgap measure to allow Hadoop to > "work" on Windows platforms, because the NativeIO libraries aren't > implemented there (edit: even NativeIO probably doesn't cover the operations > that winutils.exe is used for). Rather than building a DLL that makes native > OS calls, the creators of winutils.exe must have decided that it would be > more expedient to create an EXE to carry out file system operations in a > linux-like fashion. Unfortunately, like many stopgap measures in software, > this one has persisted well beyond its expected lifetime and usefulness. My > team creates software that runs on Windows and Linux, and winutils.exe is > probably responsible for 20% of all issues we encounter, both during > development and in the field. > Problem #1 with winutils.exe is that it is simply missing from many popular > distros and/or the client-side software installation for said distros, when > supplied, fails to install winutils.exe. Thus, as software developers, we > are forced to pick one version and distribute and install it with our > software. > Which leads to problem #2: winutils.exe are not always compatible. In > particular, MapR MUST have its winutils.exe in the system path, but doing so > breaks the Hadoop distro for every other Hadoop vendor. This makes creating > and maintaining test environments that work with all of the Hadoop distros we > want to test unnecessarily tedious and error-prone. > Problem #3 is that the mechanism by which you inform the Hadoop client > software where to find winutils.exe is poorly documented and fragile. First, > it can be in the PATH. If it is in the PATH, that is where it is found. 
> However, the documentation, such as it is, makes no mention of this, and > instead says that you should set the HADOOP_HOME environment variable, which > does NOT override the winutils.exe found in your system PATH. > Which leads to problem #4: There is no logging that says where winutils.exe > was actually found and loaded. Because of this, fixing problems of finding > the wrong winutils.exe are extremely difficult. > Problem #5 is that most of the time, such as when accessing straight up HDFS > and YARN, one does not *need* winutils.exe. But if it is missing, the log > messages complain about its absence. When we are trying to diagnose an > obscure issue in Hadoop (of which there are many), the presence of this red > herring leads to all sorts of time wasted until someone on the team points > out that winutils.exe is not the problem, at least not this time. > Problem #6 is that errors and stack traces from issues involving
[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping
[ https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319486#comment-15319486 ] Jitendra Nath Pandey commented on HADOOP-12291: --- +1 > Add support for nested groups in LdapGroupsMapping > -- > > Key: HADOOP-12291 > URL: https://issues.apache.org/jira/browse/HADOOP-12291 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.8.0 >Reporter: Gautam Gopalakrishnan >Assignee: Esther Kundin > Labels: features, patch > Fix For: 2.8.0 > > Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, > HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, > HADOOP-12291.006.patch, HADOOP-12291.007.patch > > > When using {{LdapGroupsMapping}} with Hadoop, nested groups are not > supported. So for example if user {{jdoe}} is part of group A which is a > member of group B, the group mapping currently returns only group A. > Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and > SSSD (or similar tools) but would be good to have this feature as part of > {{LdapGroupsMapping}} directly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.
[ https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319400#comment-15319400 ] john lilley commented on HADOOP-13223: -- [~cmccabe], Looking at the various issues we've encountered, I agree that most of them can be addressed with keeping winutils.exe and doing these things: 1: Taking steps to ensure that winutils.exe is always available on client library downloads IN A CONSISTENT PLACE 2: #1 can be made automatic by bundling winutils.exe into the RawLocalFileSystem jar (or perhaps NativeIO?) and caching it to a temporary place before invoking it. 3: Removing HADOOP_HOME, hadoop.home.dir, and PATH as alternate ways of finding winutils.exe. If #2 is done, this should always yield a full path to exactly the winutils.exe that we want. 4: Hiding all access to winutils under a consistent API (in RawLocalFileSystem or NativeIO) for performing file operations (chown, chmod, symlink, readlink, etc). This means removing or privatizing almost everything in the Shell class, but especially the following: Shell.getWinUtilsPath(), Shell.WINUTILS, Shell.get*Command(). > winutils.exe is a bug nexus and should be killed with an axe. > - > > Key: HADOOP-13223 > URL: https://issues.apache.org/jira/browse/HADOOP-13223 > Project: Hadoop Common > Issue Type: Improvement > Components: bin >Affects Versions: 2.6.0 > Environment: Microsoft Windows, all versions >Reporter: john lilley > > winutils.exe was apparently created as a stopgap measure to allow Hadoop to > "work" on Windows platforms, because the NativeIO libraries aren't > implemented there (edit: even NativeIO probably doesn't cover the operations > that winutils.exe is used for). Rather than building a DLL that makes native > OS calls, the creators of winutils.exe must have decided that it would be > more expedient to create an EXE to carry out file system operations in a > linux-like fashion. 
Unfortunately, like many stopgap measures in software, > this one has persisted well beyond its expected lifetime and usefulness. My > team creates software that runs on Windows and Linux, and winutils.exe is > probably responsible for 20% of all issues we encounter, both during > development and in the field. > Problem #1 with winutils.exe is that it is simply missing from many popular > distros and/or the client-side software installation for said distros, when > supplied, fails to install winutils.exe. Thus, as software developers, we > are forced to pick one version and distribute and install it with our > software. > Which leads to problem #2: winutils.exe are not always compatible. In > particular, MapR MUST have its winutils.exe in the system path, but doing so > breaks the Hadoop distro for every other Hadoop vendor. This makes creating > and maintaining test environments that work with all of the Hadoop distros we > want to test unnecessarily tedious and error-prone. > Problem #3 is that the mechanism by which you inform the Hadoop client > software where to find winutils.exe is poorly documented and fragile. First, > it can be in the PATH. If it is in the PATH, that is where it is found. > However, the documentation, such as it is, makes no mention of this, and > instead says that you should set the HADOOP_HOME environment variable, which > does NOT override the winutils.exe found in your system PATH. > Which leads to problem #4: There is no logging that says where winutils.exe > was actually found and loaded. Because of this, fixing problems of finding > the wrong winutils.exe are extremely difficult. > Problem #5 is that most of the time, such as when accessing straight up HDFS > and YARN, one does not *need* winutils.exe. But if it is missing, the log > messages complain about its absence. 
When we are trying to diagnose an > obscure issue in Hadoop (of which there are many), the presence of this red > herring leads to all sorts of time wasted until someone on the team points > out that winutils.exe is not the problem, at least not this time. > Problem #6 is that errors and stack traces from issues involving winutils.exe > are not helpful. The Java stack trace ends at the ProcessBuilder call. Only > through bitter experience is one able to connect the dots from > "ProcessBuilder is the last thing on the stack" to "something is wrong with > winutils.exe". > Note that none of these involve running Hadoop on Windows. They are only > encountered when using Hadoop client libraries to access a cluster from > Windows. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail:
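Point #2 of the comment above (bundle winutils.exe into a jar and cache it to a temporary place before invoking it) could be sketched as below. This is a hypothetical helper, not Hadoop code: the class name, resource name, and caching scheme are all illustrative; the point is that the caller always ends up with a full, known path and neither PATH nor HADOOP_HOME is consulted.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of bundling winutils.exe as a jar resource:
// extract it once to a temp location and hand back its absolute path.
public class BundledWinutils {
    private static volatile Path cached;

    public static Path winutilsPath(String resource) throws IOException {
        if (cached == null) {
            synchronized (BundledWinutils.class) {
                if (cached == null) {
                    cached = extract(resource);
                }
            }
        }
        return cached;
    }

    private static Path extract(String resource) throws IOException {
        try (InputStream in =
                 BundledWinutils.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("bundled resource missing: " + resource);
            }
            // Copy the bundled binary next to a fresh temp dir; the
            // returned path is absolute, so logging it (problem #4) and
            // invoking it (point #3) are both unambiguous.
            Path exe = Files.createTempDirectory("winutils")
                            .resolve("winutils.exe");
            Files.copy(in, exe, StandardCopyOption.REPLACE_EXISTING);
            return exe.toAbsolutePath();
        }
    }
}
```

A missing bundled resource fails loudly with an explicit message instead of the ProcessBuilder dead-end described in problem #6.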
[jira] [Commented] (HADOOP-13243) TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently
[ https://issues.apache.org/jira/browse/HADOOP-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319322#comment-15319322 ] Daniel Templeton commented on HADOOP-13243: --- I just ran it 4122 times without a failure. I think it's fixed. :) The unit test failure from Jenkins is unrelated. > TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently > > > Key: HADOOP-13243 > URL: https://issues.apache.org/jira/browse/HADOOP-13243 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 2.9.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Minor > Attachments: HADOOP-13243.001.patch > > > Because of poor checking of boundary conditions, the test fails 1% of the > time: > {noformat} > The initial flush time was calculated incorrectly: 0 > Stacktrace > java.lang.AssertionError: The initial flush time was calculated incorrectly: 0 > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hadoop.metrics2.sink.TestRollingFileSystemSink.testSetInitialFlushTime(TestRollingFileSystemSink.java:120) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.
[ https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319318#comment-15319318 ] Colin Patrick McCabe commented on HADOOP-13223: --- Hmm. It's not clear to me why a DLL would be less prone to path problems than an EXE. It seems like we should just be putting a version number on the EXE, so that we avoid these conflicts. We have the same problem with libhadoop-- see HADOOP-11127. > winutils.exe is a bug nexus and should be killed with an axe. > - > > Key: HADOOP-13223 > URL: https://issues.apache.org/jira/browse/HADOOP-13223 > Project: Hadoop Common > Issue Type: Improvement > Components: bin >Affects Versions: 2.6.0 > Environment: Microsoft Windows, all versions >Reporter: john lilley > > winutils.exe was apparently created as a stopgap measure to allow Hadoop to > "work" on Windows platforms, because the NativeIO libraries aren't > implemented there (edit: even NativeIO probably doesn't cover the operations > that winutils.exe is used for). Rather than building a DLL that makes native > OS calls, the creators of winutils.exe must have decided that it would be > more expedient to create an EXE to carry out file system operations in a > linux-like fashion. Unfortunately, like many stopgap measures in software, > this one has persisted well beyond its expected lifetime and usefulness. My > team creates software that runs on Windows and Linux, and winutils.exe is > probably responsible for 20% of all issues we encounter, both during > development and in the field. > Problem #1 with winutils.exe is that it is simply missing from many popular > distros and/or the client-side software installation for said distros, when > supplied, fails to install winutils.exe. Thus, as software developers, we > are forced to pick one version and distribute and install it with our > software. > Which leads to problem #2: winutils.exe are not always compatible. 
In > particular, MapR MUST have its winutils.exe in the system path, but doing so > breaks the Hadoop distro for every other Hadoop vendor. This makes creating > and maintaining test environments that work with all of the Hadoop distros we > want to test unnecessarily tedious and error-prone. > Problem #3 is that the mechanism by which you inform the Hadoop client > software where to find winutils.exe is poorly documented and fragile. First, > it can be in the PATH. If it is in the PATH, that is where it is found. > However, the documentation, such as it is, makes no mention of this, and > instead says that you should set the HADOOP_HOME environment variable, which > does NOT override the winutils.exe found in your system PATH. > Which leads to problem #4: There is no logging that says where winutils.exe > was actually found and loaded. Because of this, fixing problems of finding > the wrong winutils.exe are extremely difficult. > Problem #5 is that most of the time, such as when accessing straight up HDFS > and YARN, one does not *need* winutils.exe. But if it is missing, the log > messages complain about its absence. When we are trying to diagnose an > obscure issue in Hadoop (of which there are many), the presence of this red > herring leads to all sorts of time wasted until someone on the team points > out that winutils.exe is not the problem, at least not this time. > Problem #6 is that errors and stack traces from issues involving winutils.exe > are not helpful. The Java stack trace ends at the ProcessBuilder call. Only > through bitter experience is one able to connect the dots from > "ProcessBuilder is the last thing on the stack" to "something is wrong with > winutils.exe". > Note that none of these involve running Hadoop on Windows. They are only > encountered when using Hadoop client libraries to access a cluster from > Windows. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13245) Fix up some misc create-release issues
[ https://issues.apache.org/jira/browse/HADOOP-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13245: -- Description: 1. Apache Yetus 0.3.0 requires the dateutil.parser module for Python. This needs to get added to the Dockerfile 2. Missing -Pdocs so that documentation build is complete was: 1. Apache Yetus 0.3.0 requires the dateutil.parser module for Python. 2. Missing -Pdocs > Fix up some misc create-release issues > -- > > Key: HADOOP-13245 > URL: https://issues.apache.org/jira/browse/HADOOP-13245 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer >Priority: Blocker > > 1. Apache Yetus 0.3.0 requires the dateutil.parser module for Python. This > needs to get added to the Dockerfile > 2. Missing -Pdocs so that documentation build is complete -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13245) Fix up some misc create-release issues
[ https://issues.apache.org/jira/browse/HADOOP-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319167#comment-15319167 ] Allen Wittenauer commented on HADOOP-13245: --- Ping [~andrew.wang], since he'll care about getting these fixed. :) > Fix up some misc create-release issues > -- > > Key: HADOOP-13245 > URL: https://issues.apache.org/jira/browse/HADOOP-13245 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer >Priority: Blocker > > 1. Apache Yetus 0.3.0 requires the dateutil.parser module for Python. > 2. Missing -Pdocs -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-13245) Fix up some misc create-release issues
[ https://issues.apache.org/jira/browse/HADOOP-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer reassigned HADOOP-13245: - Assignee: Allen Wittenauer > Fix up some misc create-release issues > -- > > Key: HADOOP-13245 > URL: https://issues.apache.org/jira/browse/HADOOP-13245 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer >Priority: Blocker > > 1. Apache Yetus 0.3.0 requires the dateutil.parser module for Python. > 2. Missing -Pdocs -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13245) Fix up some misc create-release issues
Allen Wittenauer created HADOOP-13245: - Summary: Fix up some misc create-release issues Key: HADOOP-13245 URL: https://issues.apache.org/jira/browse/HADOOP-13245 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 3.0.0-alpha1 Reporter: Allen Wittenauer Priority: Blocker 1. Apache Yetus 0.3.0 requires the dateutil.parser module for Python. 2. Missing -Pdocs -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-9680) Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials
[ https://issues.apache.org/jira/browse/HADOOP-9680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319160#comment-15319160 ] Dmitry Vasilenko commented on HADOOP-9680: -- +1 > Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials > -- > > Key: HADOOP-9680 > URL: https://issues.apache.org/jira/browse/HADOOP-9680 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 2.1.0-beta, 3.0.0-alpha1 >Reporter: Robert Gibbon >Priority: Minor > Attachments: s3fs-temp-iam-creds.diff.patch > > > Here is a patch in unified diff format to enable Amazon Web Services IAM > Temporary Security Credentials secured interactions with S3 from Hadoop. > It bumps the JetS3t release version up to 0.9.0. > To use a temporary security credential set, you need to provide the following > properties, depending on the implementation (s3 or s3native): > fs.s3.awsAccessKeyId or fs.s3n.awsAccessKeyId - the temporary access key id > issued by AWS IAM > fs.s3.awsSecretAccessKey or fs.s3n.awsSecretAccessKey - the temporary secret > access key issued by AWS IAM > fs.s3.awsSessionToken or fs.s3n.awsSessionToken - the session ticket issued > by AWS IAM along with the temporary key > fs.s3.awsTokenFriendlyName or fs.s3n.awsTokenFriendlyName - any string -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
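Given the property names listed in the description, a core-site.xml fragment for the s3native case would look roughly like this. Sketch only: the property names come from the issue description above, the values are placeholders, and the s3 (non-native) variants follow the same pattern with the fs.s3. prefix.

```xml
<!-- Sketch: AWS IAM temporary security credentials for s3n.
     Values are placeholders, not real credentials. -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>TEMP_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>TEMP_SECRET_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3n.awsSessionToken</name>
  <value>TEMP_SESSION_TOKEN</value>
</property>
<property>
  <name>fs.s3n.awsTokenFriendlyName</name>
  <value>my-temp-session</value>
</property>
```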
[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319073#comment-15319073 ] Hadoop QA commented on HADOOP-12893: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 49s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 
21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 57s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 41s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ipc.TestIPC | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808716/HADOOP-12893.011.patch | | JIRA Issue | HADOOP-12893 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 3ab222a74791 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c14c1b2 | | Default Java | 1.8.0_91 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9680/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9680/testReport/ | | modules | C: hadoop-build-tools hadoop-project hadoop-project-dist . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9680/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Attachments: HADOOP-12893.002.patch,
[jira] [Commented] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters
[ https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318991#comment-15318991 ] John Zhuge commented on HADOOP-13079: - Unit test failure is not related: {noformat} TestGangliaMetrics.testGangliaMetrics2:139->checkMetrics:161 Missing metrics: test.s1rec.Xxx {noformat} > Add dfs -ls -q to print ? instead of non-printable characters > - > > Key: HADOOP-13079 > URL: https://issues.apache.org/jira/browse/HADOOP-13079 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch, > HADOOP-13079.003.patch, HADOOP-13079.004.patch > > > Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". > Non-printable characters are defined by > [isprint(3)|http://linux.die.net/man/3/isprint] according to the current > locale. > Default to {{-q}} behavior on terminal; otherwise, print raw characters. See > the difference in these 2 command lines: > * {{hadoop fs -ls /dir}} > * {{hadoop fs -ls /dir | od -c}} > In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a > terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C > {{isatty()}} because the closest test {{System.console() == null}} does not > work in some cases. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
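The -q behavior described above can be sketched in plain Java. Note the actual patch defers printability to isprint(3) under the current locale; this hypothetical helper only checks printable ASCII, which is an assumption made to keep the example self-contained.

```java
// Hedged sketch of "-ls -q": replace characters that are not printable
// ASCII with '?'. The real implementation is locale-aware via isprint(3).
public class QuoteNonPrintable {
    public static String scrub(String name) {
        StringBuilder sb = new StringBuilder(name.length());
        for (char c : name.toCharArray()) {
            // 0x20..0x7E is the printable ASCII range; everything else
            // (tabs, newlines, control chars) becomes '?'.
            sb.append(c >= 0x20 && c < 0x7f ? c : '?');
        }
        return sb.toString();
    }
}
```

For example, a file named "a\tb" would list as "a?b" when the output is a terminal, while piping through od -c would still show the raw bytes.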
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318946#comment-15318946 ] Bikas Saha commented on HADOOP-13184: - +1 on 5 > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-12893: --- Attachment: HADOOP-12893.011.patch Thanks [~ajisakaa] for the review and good catch. Patch 011 addresses both comments. Verified again that all jars under {{hadoop-dist/target}} after {{mvn package}} from a clean cache contain L > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, > HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, > HADOOP-12893.01.patch, HADOOP-12893.011.patch, HADOOP-12893.10.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13241) document s3a better
[ https://issues.apache.org/jira/browse/HADOOP-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318887#comment-15318887 ] Hadoop QA commented on HADOOP-13241: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 22s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 46s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 20s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | 
{color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 59s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 16s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:babe025 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808699/HADOOP-13241-branch-2-001.patch | | JIRA Issue | HADOOP-13241 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 5a6c1be61dc6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318835#comment-15318835 ] Daniel Templeton commented on HADOOP-13184: --- +1 on #4. > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem
[ https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318824#comment-15318824 ] Hudson commented on HADOOP-10048: - SUCCESS: Integrated in Hadoop-trunk-Commit #9921 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9921/]) HADOOP-10048. LocalDirAllocator should avoid holding locks while (junping_du: rev c14c1b298e29e799f7c8f15ff24d7eba6e0cd39b) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java > LocalDirAllocator should avoid holding locks while accessing the filesystem > --- > > Key: HADOOP-10048 > URL: https://issues.apache.org/jira/browse/HADOOP-10048 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.3.0 >Reporter: Jason Lowe >Assignee: Jason Lowe > Fix For: 2.8.0 > > Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, > HADOOP-10048.005.patch, HADOOP-10048.006.patch, HADOOP-10048.patch, > HADOOP-10048.trunk.patch > > > As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a > bottleneck for multithreaded setups like the ShuffleHandler. We should > consider moving to a lockless design or minimizing the critical sections to a > very small amount of time that does not involve I/O operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem
[ https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HADOOP-10048: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) I have committed the patch to trunk, branch-2 and branch-2.8. Thanks [~jlowe] for the patch contribution! > LocalDirAllocator should avoid holding locks while accessing the filesystem > --- > > Key: HADOOP-10048 > URL: https://issues.apache.org/jira/browse/HADOOP-10048 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.3.0 >Reporter: Jason Lowe >Assignee: Jason Lowe > Fix For: 2.8.0 > > Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, > HADOOP-10048.005.patch, HADOOP-10048.006.patch, HADOOP-10048.patch, > HADOOP-10048.trunk.patch > > > As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a > bottleneck for multithreaded setups like the ShuffleHandler. We should > consider moving to a lockless design or minimizing the critical sections to a > very small amount of time that does not involve I/O operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
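[Editor's note] The suggestion in the issue description — shrink the critical section so it never spans I/O — can be sketched roughly as below. `DirPicker` is a made-up stand-in for illustration, not the actual LocalDirAllocator code or the committed patch:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the idea in HADOOP-10048: hold the lock only while
// reading/updating shared state, and run the filesystem probe outside it.
class DirPicker {
    private final List<String> dirs = new ArrayList<>();
    private int nextIndex = 0;

    synchronized void addDir(String dir) {
        dirs.add(dir);
    }

    String pickDir() {
        final String candidate;
        synchronized (this) {                 // short critical section: state only
            candidate = dirs.get(nextIndex);
            nextIndex = (nextIndex + 1) % dirs.size();
        }
        // the slow disk check runs outside the lock, so other threads are
        // not blocked behind this thread's I/O
        return new File(candidate).canWrite() ? candidate : null;
    }
}
```

Under contention this keeps lock hold times to a few field accesses, which is the property the JIRA asks for.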
[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls
[ https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318747#comment-15318747 ] stack commented on HADOOP-12910: Why the insistence on doing the async twice? Once for branch-2 and then with a totally different API in branch-3? Wouldn't doing it once be better all around, given it is tricky at the best of times getting async correct and performant? Why do the work in branch-2 and then keep it private "if it gets complicated..."? Where does that leave willing contributors/users like [~Apache9] (see his note above)? Why invent an API (based on AWT experience with mouse-moved listeners (?)) rather than take on a proven one whose author is trying to help here and whose API surface is considerably less than the CompletableFuture kitchen-sink? > Add new FileSystem API to support asynchronous method calls > --- > > Key: HADOOP-12910 > URL: https://issues.apache.org/jira/browse/HADOOP-12910 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > Attachments: HADOOP-12910-HDFS-9924.000.patch, > HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch > > > Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a > better name). All the APIs in FutureFileSystem are the same as FileSystem > except that the return type is wrapped by Future, e.g. > {code} > //FileSystem > public boolean rename(Path src, Path dst) throws IOException; > //FutureFileSystem > public Future<Boolean> rename(Path src, Path dst) throws IOException; > {code} > Note that FutureFileSystem does not extend FileSystem. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
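[Editor's note] The Future-wrapped API shape under discussion can be mocked up with a plain `ExecutorService`. `FutureRenamer` and its placeholder logic are illustrations only, not part of any proposed patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustration only: same method signature as the blocking call, with the
// return type wrapped in Future, as the issue description shows.
class FutureRenamer {
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    // stand-in for the blocking FileSystem#rename
    private boolean doRename(String src, String dst) {
        return !src.equals(dst);   // placeholder logic, not real rename semantics
    }

    // async variant: the caller gets a Future<Boolean> back immediately
    Future<Boolean> rename(String src, String dst) {
        return pool.submit(() -> doRename(src, dst));
    }

    void shutdown() {
        pool.shutdown();
    }
}
```

A caller would then do `Future<Boolean> f = renamer.rename("/a", "/b");` and block on `f.get()` only when the result is actually needed.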
[jira] [Updated] (HADOOP-13241) document s3a better
[ https://issues.apache.org/jira/browse/HADOOP-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13241: Attachment: HADOOP-13241-branch-2-001.patch Patch 001 > document s3a better > --- > > Key: HADOOP-13241 > URL: https://issues.apache.org/jira/browse/HADOOP-13241 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13241-branch-2-001.patch > > > s3a can be documented better, things like classpath, troubleshooting, etc. > sit down and do it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13241) document s3a better
[ https://issues.apache.org/jira/browse/HADOOP-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13241: Status: Patch Available (was: Open) > document s3a better > --- > > Key: HADOOP-13241 > URL: https://issues.apache.org/jira/browse/HADOOP-13241 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation, fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13241-branch-2-001.patch > > > s3a can be documented better, things like classpath, troubleshooting, etc. > sit down and do it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-9328) INSERT INTO a S3 external table with no reduce phase results in FileNotFoundException
[ https://issues.apache.org/jira/browse/HADOOP-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-9328: --- Affects Version/s: (was: 0.9.0) 2.0.2-alpha > INSERT INTO a S3 external table with no reduce phase results in > FileNotFoundException > - > > Key: HADOOP-9328 > URL: https://issues.apache.org/jira/browse/HADOOP-9328 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.0.2-alpha > Environment: YARN, Hadoop 2.0.2-alpha > Ubuntu >Reporter: Marc Limotte >Priority: Minor > > With Yarn and Hadoop 2.0.2-alpha, hive 0.9.0. > The destination is an S3 table, the source for the query is a small hive > managed table. > CREATE EXTERNAL TABLE payout_state_product ( > state STRING, > product_id STRING, > element_id INT, > element_value DOUBLE, > number_of_fields INT) > ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' > STORED AS TEXTFILE > LOCATION 's3://com.weatherbill.foo/bar/payout_state_product/'; > A simple query to copy the results from the hive managed table into a S3. > hive> INSERT OVERWRITE TABLE payout_state_product > SELECT * FROM payout_state_product_cached; > Total MapReduce jobs = 2 > Launching Job 1 out of 2 > Number of reduce tasks is set to 0 since there's no reduce operator > Starting Job = job_1360884012490_0014, Tracking URL = > http://i-9ff9e9ef.us-east-1.production.climatedna.net:8088/proxy/application_1360884012490_0014/ > > Kill Command = /usr/lib/hadoop/bin/hadoop job > -Dmapred.job.tracker=i-9ff9e9ef.us-east-1.production.climatedna.net:8032 > -kill job_1360884012490_0014 > Hadoop job information for Stage-1: number of mappers: 100; number of > reducers: 0 > 2013-02-22 19:15:46,709 Stage-1 map = 0%, reduce = 0% > ...snip... 
> 2013-02-22 19:17:02,374 Stage-1 map = 100%, reduce = 0%, Cumulative CPU > 427.13 sec > MapReduce Total cumulative CPU time: 7 minutes 7 seconds 130 msec > Ended Job = job_1360884012490_0014 > Ended Job = -1776780875, job is filtered out (removed at runtime). > Launching Job 2 out of 2 > Number of reduce tasks is set to 0 since there's no reduce operator > java.io.FileNotFoundException: File does not exist: > /tmp/hive-marc/hive_2013-02-22_19-15-31_691_7365912335285010827/-ext-10002/00_0 > > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:782) > > at > org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat$OneFileInfo.(CombineFileInputFormat.java:493) > > at > org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:284) > > at > org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:244) > > at > org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:69) > > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:386) > > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:352) > > at > org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.processPaths(CombineHiveInputFormat.java:419) > > at > org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:390) > > at > org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:479) > > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:471) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:366) > > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:396) > 
at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367) > > at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215) > at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:617) > at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:612) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:396) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367) > > at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:612) > at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435) > at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:137) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57) > at
[jira] [Updated] (HADOOP-9328) INSERT INTO a S3 external table with no reduce phase results in FileNotFoundException
[ https://issues.apache.org/jira/browse/HADOOP-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-9328: --- Priority: Minor (was: Major) Component/s: fs/s3 > INSERT INTO a S3 external table with no reduce phase results in > FileNotFoundException > - > > Key: HADOOP-9328 > URL: https://issues.apache.org/jira/browse/HADOOP-9328 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.0.2-alpha > Environment: YARN, Hadoop 2.0.2-alpha > Ubuntu >Reporter: Marc Limotte >Priority: Minor > > With Yarn and Hadoop 2.0.2-alpha, hive 0.9.0. > The destination is an S3 table, the source for the query is a small hive > managed table. > CREATE EXTERNAL TABLE payout_state_product ( > state STRING, > product_id STRING, > element_id INT, > element_value DOUBLE, > number_of_fields INT) > ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' > STORED AS TEXTFILE > LOCATION 's3://com.weatherbill.foo/bar/payout_state_product/'; > A simple query to copy the results from the hive managed table into a S3. > hive> INSERT OVERWRITE TABLE payout_state_product > SELECT * FROM payout_state_product_cached; > Total MapReduce jobs = 2 > Launching Job 1 out of 2 > Number of reduce tasks is set to 0 since there's no reduce operator > Starting Job = job_1360884012490_0014, Tracking URL = > http://i-9ff9e9ef.us-east-1.production.climatedna.net:8088/proxy/application_1360884012490_0014/ > > Kill Command = /usr/lib/hadoop/bin/hadoop job > -Dmapred.job.tracker=i-9ff9e9ef.us-east-1.production.climatedna.net:8032 > -kill job_1360884012490_0014 > Hadoop job information for Stage-1: number of mappers: 100; number of > reducers: 0 > 2013-02-22 19:15:46,709 Stage-1 map = 0%, reduce = 0% > ...snip... 
> 2013-02-22 19:17:02,374 Stage-1 map = 100%, reduce = 0%, Cumulative CPU > 427.13 sec > MapReduce Total cumulative CPU time: 7 minutes 7 seconds 130 msec > Ended Job = job_1360884012490_0014 > Ended Job = -1776780875, job is filtered out (removed at runtime). > Launching Job 2 out of 2 > Number of reduce tasks is set to 0 since there's no reduce operator > java.io.FileNotFoundException: File does not exist: > /tmp/hive-marc/hive_2013-02-22_19-15-31_691_7365912335285010827/-ext-10002/00_0 > > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:782) > > at > org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat$OneFileInfo.(CombineFileInputFormat.java:493) > > at > org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:284) > > at > org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:244) > > at > org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:69) > > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:386) > > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:352) > > at > org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.processPaths(CombineHiveInputFormat.java:419) > > at > org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:390) > > at > org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:479) > > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:471) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:366) > > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:396) > 
at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367) > > at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215) > at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:617) > at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:612) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:396) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367) > > at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:612) > at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435) > at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:137) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57) > at
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318601#comment-15318601 ] Karthik Kambatla commented on HADOOP-13184: --- +1 on option 4. > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12892) fix/rewrite create-release
[ https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318519#comment-15318519 ] Allen Wittenauer commented on HADOOP-12892: --- Sorry, branch-2 is not a priority of my volunteer time. > fix/rewrite create-release > -- > > Key: HADOOP-12892 > URL: https://issues.apache.org/jira/browse/HADOOP-12892 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer >Priority: Blocker > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, > HADOOP-12892.02.patch, HADOOP-12892.03.patch > > > create-release needs some major surgery. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318511#comment-15318511 ] Akira AJISAKA commented on HADOOP-13184: +1 on option 4. > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318485#comment-15318485 ] Junping Du commented on HADOOP-13184: - +1 on option 4 too. > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo
[ https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318477#comment-15318477 ] Thomas Graves commented on HADOOP-13184: my vote would be option 4. > Add "Apache" to Hadoop project logo > --- > > Key: HADOOP-13184 > URL: https://issues.apache.org/jira/browse/HADOOP-13184 > Project: Hadoop Common > Issue Type: Task >Reporter: Chris Douglas >Assignee: Abhishek > > Many ASF projects include "Apache" in their logo. We should add it to Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318457#comment-15318457 ] Hadoop QA commented on HADOOP-12756: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 14 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-tools/hadoop-tools-dist hadoop-tools . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 10s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 8s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project . 
hadoop-tools hadoop-tools/hadoop-tools-dist {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 26s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 83m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808654/HADOOP-12756.004.patch | | JIRA Issue | HADOOP-12756 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux 144154f2d3a6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e620530 | | Default Java | 1.8.0_91 | | unit |
[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-12756: --- Attachment: HADOOP-12756.004.patch Uploaded the updated patch for Ling. > Incorporate Aliyun OSS file system implementation > - > > Key: HADOOP-12756 > URL: https://issues.apache.org/jira/browse/HADOOP-12756 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: shimingfei >Assignee: shimingfei > Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, > HADOOP-12756.004.patch, HCFS User manual.md, OSS integration.pdf, OSS > integration.pdf > > > Aliyun OSS is widely used among China’s cloud users, but currently it is not > easy to access data stored on OSS from a user’s Hadoop/Spark application, > because Hadoop has no native support for OSS. > This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, > Spark/Hadoop applications can read/write data from OSS without any code > change, narrowing the gap between the user’s application and data storage, like what has > been done for S3 in Hadoop. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13186) GenericOptionParser -libjars option validity check not always working because of bad local FS equality check
[ https://issues.apache.org/jira/browse/HADOOP-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318254#comment-15318254 ] Arnaud Linz commented on HADOOP-13186: -- {{impl.disable.cache}} was not set in my core-site.xml when this issue occurred. > GenericOptionParser -libjars option validity check not always working because > of bad local FS equality check > > > Key: HADOOP-13186 > URL: https://issues.apache.org/jira/browse/HADOOP-13186 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 2.6.0, 2.7.1 > Environment: Occurred on: CentOS 6.3 or Ubuntu 14.04, Oracle JDK > 1.7.0_45 or 1.8.0_66 >Reporter: Arnaud Linz > Original Estimate: 24h > Remaining Estimate: 24h > > _Class concerned :_ > {{org.apache.hadoop.util.GenericOptionParser}} > _Method :_ > {{public static URL[] getLibJars(final Configuration conf)}} > _Line :_ > {{if (tmp.getFileSystem(conf).equals(FileSystem.getLocal(conf)))}} > In this method we check whether the provided jars on the command line are on a > local file system, else we emit a warning log and ignore them. > I've got the case where the two file systems retrieved (the one from > {{Path.getFileSystem(conf)}} and the one returned by > {{FileSystem.getLocal(conf)}}) were two local file systems but different > objects, and the {{equals()}} method, not having been implemented, defaulted > to object pointer equality, leading to my jar files not being taken into > account. > I've quickly patched it to > {{tmp.getFileSystem(conf).getUri().equals(FileSystem.getLocal(conf).getUri())}} > to make my application work. > I did not dig further into determining whether the objects should have been > the same, or whether the {{equals()}} method should have been implemented, > but it has to be done. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13186) GenericOptionParser -libjars option validity check not always working because of bad local FS equality check
[ https://issues.apache.org/jira/browse/HADOOP-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318236#comment-15318236 ] Yuanbo Liu commented on HADOOP-13186: - Please search for "impl.disable.cache" in core-site.xml. Each specific file system is instantiated only once, unless you set "fs.*(scheme).impl.disable.cache" to true. > GenericOptionParser -libjars option validity check not always working because > of bad local FS equality check > > > Key: HADOOP-13186 > URL: https://issues.apache.org/jira/browse/HADOOP-13186 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 2.6.0, 2.7.1 > Environment: Occurred on: CentOS 6.3 or Ubuntu 14.04, Oracle JDK > 1.7.0_45 or 1.8.0_66 >Reporter: Arnaud Linz > Original Estimate: 24h > Remaining Estimate: 24h > > _Class concerned :_ > {{org.apache.hadoop.util.GenericOptionParser}} > _Method :_ > {{public static URL[] getLibJars(final Configuration conf)}} > _Line :_ > {{if (tmp.getFileSystem(conf).equals(FileSystem.getLocal(conf)))}} > In this method we check whether the provided jars on the command line are on a > local file system, else we emit a warning log and ignore them. > I've got the case where the two file systems retrieved (the one from > {{Path.getFileSystem(conf)}} and the one returned by > {{FileSystem.getLocal(conf)}}) were two local file systems but different > objects, and the {{equals()}} method, not having been implemented, defaulted > to object pointer equality, leading to my jar files not being taken into > account. > I've quickly patched it to > {{tmp.getFileSystem(conf).getUri().equals(FileSystem.getLocal(conf).getUri())}} > to make my application work. > I did not dig further into determining whether the objects should have been > the same, or whether the {{equals()}} method should have been implemented, > but it has to be done. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
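[Editor's note] The URI-based workaround quoted in the issue description above can be sketched as follows. `FsUriCheck.sameFileSystem` is a hypothetical helper, not the patched Hadoop method; it compares scheme and authority (the parts that identify a filesystem) rather than relying on object identity:

```java
import java.net.URI;

// Hypothetical helper illustrating the workaround: treat two FileSystem
// instances as "the same" filesystem when their URIs agree on scheme and
// authority, instead of relying on Object#equals.
final class FsUriCheck {
    private FsUriCheck() {}

    static boolean sameFileSystem(URI a, URI b) {
        return eqIgnoreCase(a.getScheme(), b.getScheme())
            && eqIgnoreCase(a.getAuthority(), b.getAuthority());
    }

    // null-safe, case-insensitive comparison (schemes/hosts are case-insensitive)
    private static boolean eqIgnoreCase(String x, String y) {
        return x == null ? y == null : x.equalsIgnoreCase(y);
    }
}
```

With this, two distinct local-filesystem objects for `file:///...` paths compare equal, while `file://` vs `hdfs://` still differ, which is the behavior the `-libjars` check needs.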
[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318228#comment-15318228 ] Akira AJISAKA commented on HADOOP-12893: Thanks [~xiaochen] for the continuous work. * In hadoop-build-tools/pom.xml, we need not to set the version of maven-remote-resources-plugin because it is already set in hadoop-project/pom.xml. * In hadoop-project/pom.xml, would you set a parameter as follows and use it to set the version of maven-remote-resources-plugin? {code:title=hadoop-project/pom.xml} 2.5 3.1 2.5.1 {code} > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, > HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, > HADOOP-12893.01.patch, HADOOP-12893.10.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
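[Editor's note] The {code} block in the comment above appears to have lost its XML tags in transit; only the version numbers 2.5, 3.1 and 2.5.1 survive. The suggested pattern — define the plugin version once as a property in hadoop-project/pom.xml and reference it where the plugin is declared — would look roughly like this. The property name is a guess for illustration, not taken from the actual patch:

```xml
<!-- hadoop-project/pom.xml: hypothetical property name; only the pattern
     (version defined once, referenced via ${...}) is what the comment asks for.
     Which of the quoted versions (2.5, 3.1, 2.5.1) maps to which plugin is
     not recoverable from the stripped snippet, hence the "..." placeholder. -->
<properties>
  <maven-remote-resources-plugin.version>...</maven-remote-resources-plugin.version>
</properties>

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-remote-resources-plugin</artifactId>
  <version>${maven-remote-resources-plugin.version}</version>
</plugin>
```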
[jira] [Commented] (HADOOP-9427) use jUnit assumptions to skip platform-specific tests
[ https://issues.apache.org/jira/browse/HADOOP-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318178#comment-15318178 ] Hadoop QA commented on HADOOP-9427: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 12m 28s{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808616/HADOOP-9427.002.patch | | JIRA Issue | HADOOP-9427 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9677/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > use jUnit assumptions to skip platform-specific tests > - > > Key: HADOOP-9427 > URL: https://issues.apache.org/jira/browse/HADOOP-9427 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 1-win, 3.0.0-alpha1 >Reporter: Arpit Agarwal >Assignee: Gergely Novák > Attachments: HADOOP-9427.001.patch, HADOOP-9427.002.patch > > > Certain tests for platform-specific functionality are either executed only on > Windows or bypass on Windows using checks like "if (Path.WINDOWS)" e.g. > TestNativeIO. > Prefer using jUnit assumptions instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-9427) use jUnit assumptions to skip platform-specific tests
[ https://issues.apache.org/jira/browse/HADOOP-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gergely Novák updated HADOOP-9427: -- Attachment: HADOOP-9427.002.patch > use jUnit assumptions to skip platform-specific tests > - > > Key: HADOOP-9427 > URL: https://issues.apache.org/jira/browse/HADOOP-9427 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 1-win, 3.0.0-alpha1 >Reporter: Arpit Agarwal >Assignee: Gergely Novák > Attachments: HADOOP-9427.001.patch, HADOOP-9427.002.patch > > > Certain tests for platform-specific functionality are either executed only on > Windows or bypass on Windows using checks like "if (Path.WINDOWS)" e.g. > TestNativeIO. > Prefer using jUnit assumptions instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-9427) use jUnit assumptions to skip platform-specific tests
[ https://issues.apache.org/jira/browse/HADOOP-9427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gergely Novák updated HADOOP-9427: -- Attachment: (was: HADOOP-9427.002.patch) > use jUnit assumptions to skip platform-specific tests > - > > Key: HADOOP-9427 > URL: https://issues.apache.org/jira/browse/HADOOP-9427 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 1-win, 3.0.0-alpha1 >Reporter: Arpit Agarwal >Assignee: Gergely Novák > Attachments: HADOOP-9427.001.patch > > > Certain tests for platform-specific functionality are either executed only on > Windows or bypass on Windows using checks like "if (Path.WINDOWS)" e.g. > TestNativeIO. > Prefer using jUnit assumptions instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-3584) Add an explicit HadoopConfigurationException that extends RuntimeException
[ https://issues.apache.org/jira/browse/HADOOP-3584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-3584. Resolution: Won't Fix Fix Version/s: 2.8.0 Just noticed this JIRA hanging around. Clearly nobody else cares for it. Closing for now > Add an explicit HadoopConfigurationException that extends RuntimeException > -- > > Key: HADOOP-3584 > URL: https://issues.apache.org/jira/browse/HADOOP-3584 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 0.19.0 >Reporter: Steve Loughran >Priority: Minor > Fix For: 2.8.0 > > > It is possible for a get() or set() operation to throw an exception today, > especially if a security manager is blocking property access. As more complex > cross-references are used, the likelihood for failure is higher. > Yet there is no way for a Configuration or subclass to throw an exception > today except by throwing a general purpose RuntimeException. > I propose having a specific HadoopConfigurationException that extends > RuntimeException. Classes that read in configurations can explicitly catch > and handle these. The exception could > * be raised on some parse error (a float attribute is not a parseable float, > etc) > * be raised on some error caused by an implementation of a configuration > service API > * wrap underlying errors from different implementations (like JNDI exceptions) > * wrap security errors and other generic problems > I'm not going to propose having specific errors for parsing problems versus > undefined name,value pair though that may be useful feature creep. It > certainly makes bridging from different back-ends trickier. > This would not be incompatible with the existing code, at least from my > current experiments. What is more likely to cause problems is having the > get() operations failing, as that is not something that is broadly tested > (yet). 
If we do want to test it, we could have a custom mock back-end that > could be configured to fail on a get() of a specific option. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
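A minimal sketch of the proposed exception (the class itself is the proposal, not shipped Hadoop API), together with the parse-error use case from the description; parseFloatProperty is a hypothetical helper, not a real Configuration method:

```java
// Sketch of the proposed unchecked exception: callers that read
// configurations can catch it explicitly, and it can wrap underlying
// errors (parse failures, security exceptions, JNDI errors, ...).
class HadoopConfigurationException extends RuntimeException {
    HadoopConfigurationException(String message) {
        super(message);
    }

    HadoopConfigurationException(String message, Throwable cause) {
        super(message, cause);
    }

    // Hypothetical helper: raise the exception on a parse error for a
    // float-valued property, wrapping the underlying cause.
    static float parseFloatProperty(String name, String raw) {
        try {
            return Float.parseFloat(raw);
        } catch (NumberFormatException e) {
            throw new HadoopConfigurationException(
                "Property " + name + " is not a parseable float: " + raw, e);
        }
    }
}
```

Because it extends RuntimeException, existing callers of get()/set() compile unchanged; only callers that want to handle configuration failures explicitly need to catch it.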
[jira] [Commented] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials
[ https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318089#comment-15318089 ] Steve Loughran commented on HADOOP-13237: - wow, good find. # we could have an anon provider subclass which has the constructor; that would eliminate the need to have a handler. # maybe also: log @ Info? # this should be straightforward to test > s3a initialization against public bucket fails if caller lacks any credentials > -- > > Key: HADOOP-13237 > URL: https://issues.apache.org/jira/browse/HADOOP-13237 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Chris Nauroth >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13237.001.patch > > > If an S3 bucket is public, anyone should be able to read from it. > However, you cannot create an s3a client bonded to a public bucket unless you > have some credentials; the {{doesBucketExist()}} check rejects the call. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318075#comment-15318075 ] Hadoop QA commented on HADOOP-12756: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 5s{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808594/HADOOP-12756.003.patch | | JIRA Issue | HADOOP-12756 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9676/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Incorporate Aliyun OSS file system implementation > - > > Key: HADOOP-12756 > URL: https://issues.apache.org/jira/browse/HADOOP-12756 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: shimingfei >Assignee: shimingfei > Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, HCFS > User manual.md, OSS integration.pdf, OSS integration.pdf > > > Aliyun OSS is widely used among China’s cloud users, but currently it is not > easy to access data laid on OSS storage from user’s Hadoop/Spark application, > because of no original support for OSS in Hadoop. > This work aims to integrate Aliyun OSS with Hadoop. By simple configuration, > Spark/Hadoop applications can read/write data from OSS without any code > change. Narrowing the gap between user’s APP and data storage, like what have > been done for S3 in Hadoop -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318068#comment-15318068 ] Ling Zhou commented on HADOOP-12756: Thanks Kai, the patch is updated. 1. Resolve commons-beanutils dependency conflict. 2. Update pom in hadoop-tools-dist. 3. Fix coding style issues to pass checkstyle checks. > Incorporate Aliyun OSS file system implementation > - > > Key: HADOOP-12756 > URL: https://issues.apache.org/jira/browse/HADOOP-12756 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: shimingfei >Assignee: shimingfei > Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, HCFS > User manual.md, OSS integration.pdf, OSS integration.pdf > > > Aliyun OSS is widely used among China’s cloud users, but currently it is not > easy to access data laid on OSS storage from user’s Hadoop/Spark application, > because of no original support for OSS in Hadoop. > This work aims to integrate Aliyun OSS with Hadoop. By simple configuration, > Spark/Hadoop applications can read/write data from OSS without any code > change. Narrowing the gap between user’s APP and data storage, like what have > been done for S3 in Hadoop -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls
[ https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318065#comment-15318065 ] Tsz Wo Nicholas Sze commented on HADOOP-12910: -- > One of the key things that Deferred and Finagle's Future both enable is > composition. This allows you to chain multiple asynchronous operations in a > type-safe fashion. ... Thanks for the comments. The dilemma here is that we want to use CompletableFuture in trunk but it is unavailable in branch-2. My suggestion indeed is that we only provide minimum async support in branch-2 and have full support in trunk. I originally thought that Future was good enough. However, as you can see in the past comments, people want callbacks, so I proposed FutureWithCallback. You are right that it won't support chaining. > Add new FileSystem API to support asynchronous method calls > --- > > Key: HADOOP-12910 > URL: https://issues.apache.org/jira/browse/HADOOP-12910 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > Attachments: HADOOP-12910-HDFS-9924.000.patch, > HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch > > > Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a > better name). All the APIs in FutureFileSystem are the same as FileSystem > except that the return type is wrapped by Future, e.g. > {code} > //FileSystem > public boolean rename(Path src, Path dst) throws IOException; > //FutureFileSystem > public Future<Boolean> rename(Path src, Path dst) throws IOException; > {code} > Note that FutureFileSystem does not extend FileSystem. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
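The composition argument can be made concrete with a small CompletableFuture sketch (Java 8, hence trunk-only): stages chain type-safely without blocking between steps, which a plain Future or a single-shot callback cannot express. The asyncCreate/asyncRename stubs below are stand-ins for illustration, not the actual FileSystem API:

```java
import java.util.concurrent.CompletableFuture;

// Sketch: chaining one asynchronous file-system operation onto another.
// The bodies just pretend to succeed; a real implementation would issue
// RPCs and complete the futures when the responses arrive.
class AsyncChainSketch {
    static CompletableFuture<Boolean> asyncCreate(String path) {
        return CompletableFuture.supplyAsync(() -> true);   // pretend create
    }

    static CompletableFuture<Boolean> asyncRename(String src, String dst) {
        return CompletableFuture.supplyAsync(() -> true);   // pretend rename
    }

    public static void main(String[] args) {
        boolean ok = asyncCreate("/tmp/a")
            // run the rename only after the create completes, without blocking
            .thenCompose(created -> asyncRename("/tmp/a", "/tmp/b"))
            .join();                          // block only at the very end
        System.out.println(ok);               // prints true
    }
}
```

With a plain Future, the .thenCompose step would instead require a blocking get() or a hand-rolled callback between the two operations.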
[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-12756: --- Attachment: HADOOP-12756.003.patch Uploading the update patch on behalf of Mingfei and Ling > Incorporate Aliyun OSS file system implementation > - > > Key: HADOOP-12756 > URL: https://issues.apache.org/jira/browse/HADOOP-12756 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.8.0 >Reporter: shimingfei >Assignee: shimingfei > Attachments: HADOOP-12756-v02.patch, HADOOP-12756.003.patch, HCFS > User manual.md, OSS integration.pdf, OSS integration.pdf > > > Aliyun OSS is widely used among China’s cloud users, but currently it is not > easy to access data laid on OSS storage from user’s Hadoop/Spark application, > because of no original support for OSS in Hadoop. > This work aims to integrate Aliyun OSS with Hadoop. By simple configuration, > Spark/Hadoop applications can read/write data from OSS without any code > change. Narrowing the gap between user’s APP and data storage, like what have > been done for S3 in Hadoop -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls
[ https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318041#comment-15318041 ] Tsz Wo Nicholas Sze commented on HADOOP-12910: -- > As to your FutureWithCallback, where does this come from? Have you built any > event-driven apps with it? At first blush, it is lacking in vocabulary at > least when put against Deferred or CompletableFuture. Thanks. The idea of Callback and FutureWithCallback is similar to the Java Event Model, a well-known model that many apps are built on. I did not intend it to replace Deferred or CompletableFuture. As mentioned previously, I propose using CompletableFuture for trunk. Unfortunately, CompletableFuture is not available in branch-2, so we need something like FutureWithCallback to support callbacks. > Add new FileSystem API to support asynchronous method calls > --- > > Key: HADOOP-12910 > URL: https://issues.apache.org/jira/browse/HADOOP-12910 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou > Attachments: HADOOP-12910-HDFS-9924.000.patch, > HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch > > > Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a > better name). All the APIs in FutureFileSystem are the same as FileSystem > except that the return type is wrapped by Future, e.g. > {code} > //FileSystem > public boolean rename(Path src, Path dst) throws IOException; > //FutureFileSystem > public Future<Boolean> rename(Path src, Path dst) throws IOException; > {code} > Note that FutureFileSystem does not extend FileSystem. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15318024#comment-15318024 ] Hadoop QA commented on HADOOP-12893: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 19s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 
50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 6s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 42s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 82m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.impl.TestMetricsSystemImpl | | | hadoop.metrics2.impl.TestGangliaMetrics | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808570/HADOOP-12893.10.patch | | JIRA Issue | HADOOP-12893 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 5dc9ceb268ef 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bddea5f | | Default Java | 1.8.0_91 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9675/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9675/testReport/ | | modules | C: hadoop-build-tools hadoop-project hadoop-project-dist . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9675/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >
[jira] [Created] (HADOOP-13244) o.a.h.ipc.Server#Server should honor handlerCount when queueSizePerHandler is specified in constructor
Xiaoyu Yao created HADOOP-13244: --- Summary: o.a.h.ipc.Server#Server should honor handlerCount when queueSizePerHandler is specified in constructor Key: HADOOP-13244 URL: https://issues.apache.org/jira/browse/HADOOP-13244 Project: Hadoop Common Issue Type: Bug Components: ipc Reporter: Xiaoyu Yao Priority: Minor In the code below, {{this.maxQueueSize = queueSizePerHandler;}} should be {{this.maxQueueSize = handlerCount * queueSizePerHandler;}}. Luckily, I searched the code base and found that most callers invoke the constructor with queueSizePerHandler=-1. This ticket is opened to make it correct for the case when queueSizePerHandler is not -1. {code} if (queueSizePerHandler != -1) { this.maxQueueSize = queueSizePerHandler; } else { this.maxQueueSize = handlerCount * conf.getInt( CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_KEY, CommonConfigurationKeys.IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT); } {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
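The proposed fix, extracted into a standalone function for illustration (class, method, and parameter names here are hypothetical; 100 in the usage below stands in for IPC_SERVER_HANDLER_QUEUE_SIZE_DEFAULT):

```java
// Sketch of the corrected sizing logic: queueSizePerHandler is a
// per-handler value, so the total queue size must be scaled by
// handlerCount, mirroring what the else branch already does with the
// configured default.
class ServerQueueSizing {
    static int maxQueueSize(int handlerCount, int queueSizePerHandler,
                            int queueSizePerHandlerDefault) {
        if (queueSizePerHandler != -1) {
            // proposed fix: multiply by handlerCount
            return handlerCount * queueSizePerHandler;
        }
        return handlerCount * queueSizePerHandlerDefault;
    }

    public static void main(String[] args) {
        // 10 handlers, 50 slots per handler -> total 500, not 50
        System.out.println(maxQueueSize(10, 50, 100));   // prints 500
        // queueSizePerHandler == -1 -> fall back to the default per handler
        System.out.println(maxQueueSize(10, -1, 100));   // prints 1000
    }
}
```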
[jira] [Commented] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.
[ https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317957#comment-15317957 ] Xiaoyu Yao commented on HADOOP-13189: - Thanks for the heads up [~arpiagariu]. The code change looks good. I'm not sure whether the unit test change is needed as the 2nd parameter of the FairCallQueue constructor is already based on per queue capacity. I would suggest we test with mock (e.g., mockito) to validate internal subqueue capacity allocation given different {{ipc.server.handler.queue.size}}. {code} -fcq = new FairCallQueue(2, 5, "ns", conf); +fcq = new FairCallQueue(2, 10, "ns", conf); {code} > FairCallQueue makes callQueue larger than the configured capacity. > -- > > Key: HADOOP-13189 > URL: https://issues.apache.org/jira/browse/HADOOP-13189 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Affects Versions: 2.6.0 >Reporter: Konstantin Shvachko >Assignee: Vinitha Reddy Gankidi > Attachments: HADOOP-13189.001.patch > > > {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) > sub-queues, with each sub-queue corresponding to a different level of > priority. The constructor for {{FairCallQueue}} takes the same parameter > {{capacity}} as the default CallQueue implementation, and allocates all its > sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by > default it results in the total callQueue size 4 times larger than it should > be based on the configuration. > {{capacity}} should be divided by the number of sub-queues at some place. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
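The invariant under discussion, that the sub-queue capacities must sum to the configured total, can be sketched as a standalone function; the split scheme below (remainder to the first sub-queues) is an assumption for illustration, not necessarily the patch's exact allocation:

```java
// Sketch: divide a total callQueue capacity across priority sub-queues
// so that the capacities always sum to the configured total, even when
// it does not divide evenly.
class FairCallQueueSizing {
    static int[] subQueueCapacities(int numSubQueues, int totalCapacity) {
        int[] caps = new int[numSubQueues];
        int base = totalCapacity / numSubQueues;
        int remainder = totalCapacity % numSubQueues;
        for (int i = 0; i < numSubQueues; i++) {
            // first 'remainder' sub-queues absorb one extra slot each
            caps[i] = base + (i < remainder ? 1 : 0);
        }
        return caps;
    }

    public static void main(String[] args) {
        // 4 priority levels sharing a total capacity of 10 -> 3 3 2 2
        for (int c : subQueueCapacities(4, 10)) {
            System.out.print(c + " ");
        }
    }
}
```

A test in the spirit of the comment above would assert only that the sum equals the configured maxQueueSize, leaving the per-queue allocation flexible.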
[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request
[ https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317922#comment-15317922 ] Chris Nauroth commented on HADOOP-13203: Rajesh, thank you for patch 003. That addresses my comments 1 and 2, though it looks like we still need to come to consensus on point 3 (optimized forward seek/scan vs. optimized backward seek). > S3a: Consider reducing the number of connection aborts by setting correct > length in s3 request > -- > > Key: HADOOP-13203 > URL: https://issues.apache.org/jira/browse/HADOOP-13203 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13203-branch-2-001.patch, > HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch > > > Currently the file's "contentLength" is set as the "requestedStreamLen" when > invoking S3AInputStream::reopen(). As a part of lazySeek(), sometimes the > stream had to be closed and reopened. But lots of times the stream was closed > with abort(), causing the internal http connection to be unusable. This incurs > lots of connection establishment cost in some jobs. It would be good to set > the correct value for the stream length to avoid connection aborts. > I will post the patch once the AWS tests pass on my machine. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org