[jira] [Updated] (HADOOP-8602) Passive mode support for FTPFileSystem

2015-08-10 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HADOOP-8602:
--
Attachment: (was: HADOOP-8602.007.patch)

 Passive mode support for FTPFileSystem
 --

 Key: HADOOP-8602
 URL: https://issues.apache.org/jira/browse/HADOOP-8602
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Nemon Lou
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-8602.004.patch, HADOOP-8602.005.patch, 
 HADOOP-8602.006.patch, HADOOP-8602.patch, HADOOP-8602.patch, HADOOP-8602.patch


  FTPFileSystem uses active mode as its default data connection mode. We should 
 be able to choose passive mode when active mode doesn't work (behind a 
 firewall, for example).
  My thought is to add an option fs.ftp.data.connection.mode in 
 core-site.xml. Since FTPClient (in the org.apache.commons.net.ftp package) 
 already supports passive mode, we just need to add a little code to the 
 FTPFileSystem.connect() method.
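
A minimal sketch of the proposed setting in core-site.xml. The property name comes from the description above; the accepted value shown here follows commons-net's data connection mode naming and is an assumption, not confirmed by this thread:

```xml
<!-- core-site.xml: hypothetical value switching FTPFileSystem to passive mode -->
<property>
  <name>fs.ftp.data.connection.mode</name>
  <value>PASSIVE_LOCAL_DATA_CONNECTION_MODE</value>
</property>
```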



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-12313) Some tests in TestRMAdminService fails with NPE

2015-08-10 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S moved YARN-4035 to HADOOP-12313:
--

Affects Version/s: (was: 2.8.0)
 Target Version/s: 2.8.0  (was: 2.8.0)
  Key: HADOOP-12313  (was: YARN-4035)
  Project: Hadoop Common  (was: Hadoop YARN)

 Some tests in TestRMAdminService fails with NPE 
 

 Key: HADOOP-12313
 URL: https://issues.apache.org/jira/browse/HADOOP-12313
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Rohith Sharma K S
Assignee: Gabor Liptak
 Attachments: YARN-4035.1.patch


 It is observed that after YARN-4019 some tests are failing in 
 TestRMAdminService with null pointer exceptions in build [build failure 
 |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt]
 {noformat}
 Running org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
 Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec 
  FAILURE! - in 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
 testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
   Time elapsed: 0.132 sec   ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824)
 testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
   Time elapsed: 0.121 sec   ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867)
 {noformat}
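
The NPE at JvmPauseMonitor.stop() (line 86 in both traces) is consistent with stop() being reached before the monitor thread was ever initialized. A minimal sketch of a defensive null guard, with simplified, hypothetical names; this is not the actual Hadoop code or patch:

```java
public class PauseMonitorSketch {
    private Thread monitorThread; // stays null until start() is called

    public void start() {
        monitorThread = new Thread(() -> {
            try {
                Thread.sleep(60_000); // stand-in for the pause-monitoring loop
            } catch (InterruptedException ignored) {
                // interrupted by stop(): exit quietly
            }
        });
        monitorThread.setDaemon(true);
        monitorThread.start();
    }

    // Returns false when there was nothing to stop. The null check is the point:
    // without it, calling stop() before start() dereferences a null field and
    // throws exactly the kind of NullPointerException seen in the traces above.
    public boolean stop() {
        if (monitorThread == null) {
            return false;
        }
        monitorThread.interrupt();
        try {
            monitorThread.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        monitorThread = null;
        return true;
    }

    public static void main(String[] args) {
        PauseMonitorSketch m = new PauseMonitorSketch();
        System.out.println(m.stop()); // false: stop before start, but no NPE
        m.start();
        System.out.println(m.stop()); // true: a running thread was stopped
    }
}
```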



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12313) Some tests in TestRMAdminService fails with NPE

2015-08-10 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned HADOOP-12313:
--

Assignee: Rohith Sharma K S  (was: Gabor Liptak)

 Some tests in TestRMAdminService fails with NPE 
 

 Key: HADOOP-12313
 URL: https://issues.apache.org/jira/browse/HADOOP-12313
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Rohith Sharma K S
Assignee: Rohith Sharma K S
 Attachments: YARN-4035.1.patch


 It is observed that after YARN-4019 some tests are failing in 
 TestRMAdminService with null pointer exceptions in build [build failure 
 |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt]
 {noformat}
 Running org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
 Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec 
  FAILURE! - in 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
 testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
   Time elapsed: 0.132 sec   ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824)
 testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
   Time elapsed: 0.121 sec   ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12313) Possible NPE in JvmPauseMonitor.stop()

2015-08-10 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-12313:
---
Summary: Possible NPE in JvmPauseMonitor.stop()  (was: Some tests in 
TestRMAdminService fails with NPE )

 Possible NPE in JvmPauseMonitor.stop()
 --

 Key: HADOOP-12313
 URL: https://issues.apache.org/jira/browse/HADOOP-12313
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Rohith Sharma K S
Assignee: Rohith Sharma K S
 Attachments: YARN-4035.1.patch


 It is observed that after YARN-4019 some tests are failing in 
 TestRMAdminService with null pointer exceptions in build [build failure 
 |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt]
 {noformat}
 Running org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
 Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec 
  FAILURE! - in 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
 testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
   Time elapsed: 0.132 sec   ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824)
 testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
   Time elapsed: 0.121 sec   ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12313) Possible NPE in JvmPauseMonitor.stop()

2015-08-10 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned HADOOP-12313:
--

Assignee: Gabor Liptak  (was: Rohith Sharma K S)

Assigned to me by mistake, assigned back to [~gliptak]

 Possible NPE in JvmPauseMonitor.stop()
 --

 Key: HADOOP-12313
 URL: https://issues.apache.org/jira/browse/HADOOP-12313
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Rohith Sharma K S
Assignee: Gabor Liptak
 Attachments: YARN-4035.1.patch


 It is observed that after YARN-4019 some tests are failing in 
 TestRMAdminService with null pointer exceptions in build [build failure 
 |https://builds.apache.org/job/PreCommit-YARN-Build/8792/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt]
 {noformat}
 Running org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
 Tests run: 19, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.541 sec 
  FAILURE! - in 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
 testModifyLabelsOnNodesWithDistributedConfigurationDisabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
   Time elapsed: 0.132 sec   ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testModifyLabelsOnNodesWithDistributedConfigurationDisabled(TestRMAdminService.java:824)
 testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
   Time elapsed: 0.121 sec   ERROR!
 java.lang.NullPointerException: null
   at org.apache.hadoop.util.JvmPauseMonitor.stop(JvmPauseMonitor.java:86)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStop(ResourceManager.java:601)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.stopActiveServices(ResourceManager.java:983)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:1038)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1085)
   at 
 org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
   at 
 org.apache.hadoop.service.AbstractService.close(AbstractService.java:250)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRemoveClusterNodeLabelsWithDistributedConfigurationEnabled(TestRMAdminService.java:867)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9654) IPC timeout doesn't seem to be kicking in

2015-08-10 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681262#comment-14681262
 ] 

Rohith Sharma K S commented on HADOOP-9654:
---

Is it same as HADOOP-11252?

 IPC timeout doesn't seem to be kicking in
 -

 Key: HADOOP-9654
 URL: https://issues.apache.org/jira/browse/HADOOP-9654
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.1.0-beta
Reporter: Roman Shaposhnik
Assignee: Ajith S

  During my Bigtop testing I made the NN go OOM. This, in turn, left all of the 
 clients stuck in the IPC call (even new clients that I ran *after* the NN 
 went OOM). Here's an example of jstack output on a client that was 
 running:
 {noformat}
 $ hadoop fs -lsr /
 {noformat}
 Stacktrace:
 {noformat}
 /usr/java/jdk1.6.0_21/bin/jstack 19078
 2013-06-19 23:14:00
 Full thread dump Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode):
 Attach Listener daemon prio=10 tid=0x7fcd8c8c1800 nid=0x5105 waiting on 
 condition [0x]
java.lang.Thread.State: RUNNABLE
 IPC Client (1223039541) connection to 
 ip-10-144-82-213.ec2.internal/10.144.82.213:17020 from root daemon prio=10 
 tid=0x7fcd8c7ea000 nid=0x4aa0 runnable [0x7fcd443e2000]
java.lang.Thread.State: RUNNABLE
   at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
   at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
   at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
   - locked 0x7fcd7529de18 (a sun.nio.ch.Util$1)
   - locked 0x7fcd7529de00 (a java.util.Collections$UnmodifiableSet)
   - locked 0x7fcd7529da80 (a sun.nio.ch.EPollSelectorImpl)
   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
   at 
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
   at 
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
   at 
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
   at 
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
   at java.io.FilterInputStream.read(FilterInputStream.java:116)
   at java.io.FilterInputStream.read(FilterInputStream.java:116)
   at 
 org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:421)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
   - locked 0x7fcd752aaf18 (a java.io.BufferedInputStream)
   at java.io.DataInputStream.readInt(DataInputStream.java:370)
   at 
 org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:943)
   at org.apache.hadoop.ipc.Client$Connection.run(Client.java:840)
 Low Memory Detector daemon prio=10 tid=0x7fcd8c09 nid=0x4a9b 
 runnable [0x]
java.lang.Thread.State: RUNNABLE
 CompilerThread1 daemon prio=10 tid=0x7fcd8c08d800 nid=0x4a9a waiting on 
 condition [0x]
java.lang.Thread.State: RUNNABLE
 CompilerThread0 daemon prio=10 tid=0x7fcd8c08a800 nid=0x4a99 waiting on 
 condition [0x]
java.lang.Thread.State: RUNNABLE
 Signal Dispatcher daemon prio=10 tid=0x7fcd8c088800 nid=0x4a98 runnable 
 [0x]
java.lang.Thread.State: RUNNABLE
 Finalizer daemon prio=10 tid=0x7fcd8c06a000 nid=0x4a97 in Object.wait() 
 [0x7fcd902e9000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 0x7fcd75fc0470 (a java.lang.ref.ReferenceQueue$Lock)
   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
   - locked 0x7fcd75fc0470 (a java.lang.ref.ReferenceQueue$Lock)
   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
   at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
 Reference Handler daemon prio=10 tid=0x7fcd8c068000 nid=0x4a96 in 
 Object.wait() [0x7fcd903ea000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 0x7fcd75fc0550 (a java.lang.ref.Reference$Lock)
   at java.lang.Object.wait(Object.java:485)
   at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
   - locked 0x7fcd75fc0550 (a java.lang.ref.Reference$Lock)
 main prio=10 tid=0x7fcd8c00a800 nid=0x4a92 in Object.wait() 
 [0x7fcd91b06000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 0x7fcd752528e8 (a org.apache.hadoop.ipc.Client$Call)
   at java.lang.Object.wait(Object.java:485)
 

[jira] [Commented] (HADOOP-12253) ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0

2015-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681265#comment-14681265
 ] 

Hadoop QA commented on HADOOP-12253:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  23m  2s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |  13m 15s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 12s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 52s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 24s | The applied patch generated  3 
new checkstyle issues (total was 64, now 67). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 44s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 43s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 34s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  25m 19s | Tests failed in 
hadoop-common. |
| | |  82m  8s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.net.TestNetUtils |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12749752/HADOOP-12253.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / fa1d84a |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7435/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7435/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7435/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7435/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7435/console |


This message was automatically generated.

 ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0
 

 Key: HADOOP-12253
 URL: https://issues.apache.org/jira/browse/HADOOP-12253
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
 Environment: hadoop 2.6.0, hive 1.1.0, tez 0.7, centos 6.4
Reporter: tangjunjie
Assignee: Ajith S
 Attachments: HADOOP-12253.patch


  When I enabled HDFS federation and ran a query on Hive on Tez, it threw an 
 exception:
 {noformat}
 8.784 PM  WARNorg.apache.hadoop.security.UserGroupInformation No 
 groups available for user tangjijun
 3:12:28.784 PMERROR   org.apache.hadoop.hive.ql.exec.Task Failed 
 to execute tez graph.
 java.lang.ArrayIndexOutOfBoundsException: 0
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$InternalDirOfViewFs.getFileStatus(ViewFileSystem.java:771)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getFileStatus(ViewFileSystem.java:359)
   at 
 org.apache.tez.client.TezClientUtils.checkAncestorPermissionsForAllUsers(TezClientUtils.java:955)
   at 
 org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:184)
   at 
 org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:787)
   at org.apache.tez.client.TezClient.start(TezClient.java:337)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:191)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:234)
   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:136)
   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
   at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640)
   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399)
   at 

[jira] [Commented] (HADOOP-2) Reused Keys and Values fail with a Combiner

2015-08-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681266#comment-14681266
 ] 

Hudson commented on HADOOP-2:
-

FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1026 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1026/])
HBASE-5878 Use getVisibleLength public api from HdfsDataInputStream from 
Hadoop-2. (apurtell: rev b69569f512068d795199310ce662ab381bb6b6b7)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java
Revert HBASE-5878 Use getVisibleLength public api from HdfsDataInputStream 
from Hadoop-2. (apurtell: rev fabfb423f9cf48ddd52e9583ca6664f42349)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java


 Reused Keys and Values fail with a Combiner
 ---

 Key: HADOOP-2
 URL: https://issues.apache.org/jira/browse/HADOOP-2
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.1.0

 Attachments: clone-map-output.patch


  If the map function reuses the key or value by destructively modifying it 
 after the output.collect(key, value) call and your application uses a 
 combiner, the data is corrupted: the buffer ends up holding many instances of 
 the last key or value.
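
 The failure mode above can be sketched in plain Java: a collector that buffers 
 the reused mutable object itself sees only the last value written, while copying 
 at collect time (the intent suggested by the clone-map-output.patch attachment 
 name) preserves each record. Class and method names here are illustrative, not 
 the actual Hadoop code:

```java
import java.util.ArrayList;
import java.util.List;

public class ReuseDemo {
    // Buggy pattern: the collector stores references to one shared, reused object.
    static List<StringBuilder> collectAliased(String[] tokens) {
        StringBuilder reused = new StringBuilder();
        List<StringBuilder> out = new ArrayList<>();
        for (String t : tokens) {
            reused.setLength(0);
            reused.append(t); // destructive reuse after "collect"
            out.add(reused);  // stores the shared reference, not a copy
        }
        return out;
    }

    // Safe pattern: clone (copy) the value's contents at collect time.
    static List<String> collectCloned(String[] tokens) {
        StringBuilder reused = new StringBuilder();
        List<String> out = new ArrayList<>();
        for (String t : tokens) {
            reused.setLength(0);
            reused.append(t);
            out.add(reused.toString()); // copies the current contents
        }
        return out;
    }

    public static void main(String[] args) {
        String[] tokens = {"a", "b", "c"};
        System.out.println(collectAliased(tokens)); // [c, c, c] - corrupted
        System.out.println(collectCloned(tokens));  // [a, b, c] - correct
    }
}
```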



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8602) Passive mode support for FTPFileSystem

2015-08-10 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HADOOP-8602:
--
Attachment: HADOOP-8602.007.patch

test bugs fixed.

 Passive mode support for FTPFileSystem
 --

 Key: HADOOP-8602
 URL: https://issues.apache.org/jira/browse/HADOOP-8602
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Nemon Lou
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-8602.004.patch, HADOOP-8602.005.patch, 
 HADOOP-8602.006.patch, HADOOP-8602.007.patch, HADOOP-8602.patch, 
 HADOOP-8602.patch, HADOOP-8602.patch


  FTPFileSystem uses active mode as its default data connection mode. We should 
 be able to choose passive mode when active mode doesn't work (behind a 
 firewall, for example).
  My thought is to add an option fs.ftp.data.connection.mode in 
 core-site.xml. Since FTPClient (in the org.apache.commons.net.ftp package) 
 already supports passive mode, we just need to add a little code to the 
 FTPFileSystem.connect() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8602) Passive mode support for FTPFileSystem

2015-08-10 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HADOOP-8602:
--
Fix Version/s: 2.7.1
   Status: Patch Available  (was: Open)

 Passive mode support for FTPFileSystem
 --

 Key: HADOOP-8602
 URL: https://issues.apache.org/jira/browse/HADOOP-8602
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.0.0-alpha, 1.0.3
Reporter: Nemon Lou
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.7.1

 Attachments: HADOOP-8602.004.patch, HADOOP-8602.005.patch, 
 HADOOP-8602.006.patch, HADOOP-8602.007.patch, HADOOP-8602.patch, 
 HADOOP-8602.patch, HADOOP-8602.patch


  FTPFileSystem uses active mode as its default data connection mode. We should 
 be able to choose passive mode when active mode doesn't work (behind a 
 firewall, for example).
  My thought is to add an option fs.ftp.data.connection.mode in 
 core-site.xml. Since FTPClient (in the org.apache.commons.net.ftp package) 
 already supports passive mode, we just need to add a little code to the 
 FTPFileSystem.connect() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12275) releasedocmaker: unreleased should still be dated

2015-08-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12275:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

+1 committing.

thanks!

 releasedocmaker: unreleased should still be dated
 -

 Key: HADOOP-12275
 URL: https://issues.apache.org/jira/browse/HADOOP-12275
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
Priority: Trivial
  Labels: newbie
 Fix For: HADOOP-12111

 Attachments: HADOOP-12275.HADOOP-12111.00.patch


 releasedocmaker should still date unreleased versions. Instead of 
 {{Unreleased}} it should be {{Unreleased (as of YYYY-MM-DD)}}.  This way, if 
 versions are later released, there will be no confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-10 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12258:
---
Attachment: (was: HADOOP-12258.002.patch)

 Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
 avoid regression
 -

 Key: HADOOP-12258
 URL: https://issues.apache.org/jira/browse/HADOOP-12258
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Critical
 Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch


  We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
 to avoid a regression.
 HADOOP-12045 added NIO to support access time, but NIO throws 
 java.nio.file.NoSuchFileException instead of FileNotFoundException.
 Much Hadoop code depends on FileNotFoundException to decide whether a file 
 exists, for example {{FileContext.util().exists()}}: 
 {code}
 public boolean exists(final Path f) throws AccessControlException,
   UnsupportedFileSystemException, IOException {
   try {
 FileStatus fs = FileContext.this.getFileStatus(f);
 assert fs != null;
 return true;
   } catch (FileNotFoundException e) {
 return false;
   }
 }
 {code}
 same for {{FileSystem#exists}}
 {code}
   public boolean exists(Path f) throws IOException {
 try {
   return getFileStatus(f) != null;
 } catch (FileNotFoundException e) {
   return false;
 }
   }
 {code}
 NoSuchFileException will break these functions.
 Since {{exists}} is one of the most heavily used APIs in FileSystem, this 
 issue is very critical.
 Several test failures for TestDeletionService are caused by this issue:
 https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
 https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/
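
 A minimal sketch of the proposed translation: wrap the NIO call, catch 
 NoSuchFileException, and rethrow it as the FileNotFoundException that callers 
 like {{exists}} already handle. Method names here are hypothetical, not the 
 actual patch:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

public class NioTranslate {
    // NIO-based attribute lookup, translating the NIO exception so that
    // FileNotFoundException-based exists() checks keep working.
    static long accessTimeMillis(String path) throws IOException {
        try {
            return Files.readAttributes(Paths.get(path), BasicFileAttributes.class)
                        .lastAccessTime().toMillis();
        } catch (NoSuchFileException e) {
            FileNotFoundException fnfe = new FileNotFoundException(e.getMessage());
            fnfe.initCause(e); // keep the original NIO exception as the cause
            throw fnfe;
        }
    }

    // Mirrors the exists() pattern quoted above: missing file -> false, not an error.
    static boolean exists(String path) throws IOException {
        try {
            accessTimeMillis(path);
            return true;
        } catch (FileNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(exists("/definitely/not/a/real/path")); // false
    }
}
```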



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12310) final memory report sometimes generates spurious errors

2015-08-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12310:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

+1 committing.

Thanks!

 final memory report sometimes generates spurious errors
 ---

 Key: HADOOP-12310
 URL: https://issues.apache.org/jira/browse/HADOOP-12310
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
 Fix For: HADOOP-12111

 Attachments: HADOOP-12310.HADOOP-12111.00.patch


 There are spurious sort write pipeline failures coming from the maven memory 
 check on Jenkins.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7422/console
 with bash debug turned on:
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7423/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12248) Add native support for TAP

2015-08-10 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680354#comment-14680354
 ] 

Sean Busbey commented on HADOOP-12248:
--

nit: is the change  to have scala files not require a javadoc check intentional?

 Add native support for TAP
 --

 Key: HADOOP-12248
 URL: https://issues.apache.org/jira/browse/HADOOP-12248
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12248.HADOOP-12111.00.patch, 
 HADOOP-12248.HADOOP-12111.01.patch


 test-patch should support TAP-output files similarly to how we support JUnit 
 XML files.  This is an enabler for bats support for our own unit testing!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-10 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12258:
---
Attachment: HADOOP-12258.002.patch

 Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
 avoid regression
 -

 Key: HADOOP-12258
 URL: https://issues.apache.org/jira/browse/HADOOP-12258
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Critical
 Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
 HADOOP-12258.002.patch


 We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
 to avoid a regression.
 HADOOP-12045 adds NIO to support access time, but NIO throws 
 java.nio.file.NoSuchFileException instead of FileNotFoundException.
 Much Hadoop code depends on FileNotFoundException to decide whether a file 
 exists, for example {{FileContext.util().exists()}}: 
 {code}
 public boolean exists(final Path f) throws AccessControlException,
     UnsupportedFileSystemException, IOException {
   try {
     FileStatus fs = FileContext.this.getFileStatus(f);
     assert fs != null;
     return true;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 The same holds for {{FileSystem#exists}}:
 {code}
 public boolean exists(Path f) throws IOException {
   try {
     return getFileStatus(f) != null;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 NoSuchFileException breaks these methods.
 Since {{exists}} is one of the most heavily used APIs in FileSystem, this issue 
 is very critical.
 Several TestDeletionService test failures are caused by this issue:
 https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
 https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-08-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680314#comment-14680314
 ] 

Allen Wittenauer commented on HADOOP-12312:
---

Have you actually duplicated this with HADOOP-12111?

 Findbugs HTML report link shows 0 warnings despite errors
 -

 Key: HADOOP-12312
 URL: https://issues.apache.org/jira/browse/HADOOP-12312
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Varun Saxena

 Refer to the Hadoop QA report below:
 https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146
 The report shows -1 for findbugs because 7 new findbugs warnings have been 
 introduced, but the HTML report at the link shows 0 findbugs warnings.
 I verified locally and the warnings do indeed exist, so there must be some 
 problem in findbugs HTML report generation in test-patch.sh.
 This inconsistency between the -1 for findbugs and the HTML report led to these 
 findbugs warnings leaking into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12248) Add native support for TAP

2015-08-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680335#comment-14680335
 ] 

Allen Wittenauer commented on HADOOP-12248:
---

ping [~busbey], [~cnauroth].  

 Add native support for TAP
 --

 Key: HADOOP-12248
 URL: https://issues.apache.org/jira/browse/HADOOP-12248
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12248.HADOOP-12111.00.patch, 
 HADOOP-12248.HADOOP-12111.01.patch


 test-patch should support TAP-output files similarly to how we support JUnit 
 XML files.  This is an enabler for bats support for our own unit testing!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12248) Add native support for TAP

2015-08-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680364#comment-14680364
 ] 

Allen Wittenauer commented on HADOOP-12248:
---

Yes.  When I first wrote that code, I made an assumption that Scala actually 
did generate javadoc.  While I was going through there for the other stuff, I 
figured it was safer to remove that assumption until someone can verify that 
it's true. :)

 Add native support for TAP
 --

 Key: HADOOP-12248
 URL: https://issues.apache.org/jira/browse/HADOOP-12248
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12248.HADOOP-12111.00.patch, 
 HADOOP-12248.HADOOP-12111.01.patch


 test-patch should support TAP-output files similarly to how we support JUnit 
 XML files.  This is an enabler for bats support for our own unit testing!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-08-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680476#comment-14680476
 ] 

Allen Wittenauer commented on HADOOP-12312:
---


YARN-3232 generated the following with HADOOP-12111+HADOOP-12129 and unit tests 
turned off:

{code}
| Vote |  Subsystem |  Runtime   | Comment

|  +1  |mvninstall  |  4m 12s| trunk passed 
|  +1  | javac  |  4m 24s| trunk passed 
|  +1  |   javadoc  |  2m 45s| trunk passed 
|  +1  |  site  |  2m 04s| trunk passed 
|  +1  |   @author  |  0m 00s| The patch does not contain any @author 
|  ||| tags.
|  +1  |test4tests  |  0m 00s| The patch appears to include 6 new or 
|  ||| modified test files.
|  +1  |checkstyle  |  0m 31s| trunk passed 
|  +1  | javac  |  4m 25s| the patch passed 
|  +1  |  site  |  2m 04s| the patch passed 
|  +1  |asflicense  |  0m 16s| Patch does not generate ASF License 
|  ||| warnings.
|  -1  |checkstyle  |  0m 32s| Patch generated 1 new checkstyle issues 
|  ||| in . (total was 299, now 296).
|  +1  |whitespace  |  0m 00s| Patch has no whitespace issues. 
|  +1  |mvninstall  |  1m 59s| the patch passed 
|  +1  |   javadoc  |  2m 49s| the patch passed 
|  +1  |   eclipse  |  1m 19s| the patch passed 
|  +1  |  findbugs  |  4m 35s| the patch passed 
|  ||  36m 51s   | 


|| Subsystem || Report/Notes ||

| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12749469/YARN-3232.02.patch |
| JIRA Issue | YARN-3232 |
| git revision | trunk / 8f73bdd |
| Optional Tests | asflicense javac javadoc mvninstall unit findbugs checkstyle 
site |
| uname | Darwin aw-mbp-work.local 13.4.0 Darwin Kernel Version 13.4.0: Wed Mar 
18 16:20:14 PDT 2015; root:xnu-2422.115.14~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | 
/Users/aw/Src/aw-github/hadoop-yetus/dev-support/personality/hadoop.sh |
| Default Java | 1.7.0_67 |
| findbugs | v3.0.0 |
| checkstyle | /Users/aw/Src/drd-github/patchprocess/diff-checkstyle-root.txt |
| Max memory used | 95MB |
{code}

So this has likely already been fixed.

 Findbugs HTML report link shows 0 warnings despite errors
 -

 Key: HADOOP-12312
 URL: https://issues.apache.org/jira/browse/HADOOP-12312
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Varun Saxena

 Refer to the Hadoop QA report below:
 https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146
 The report shows -1 for findbugs because 7 new findbugs warnings have been 
 introduced, but the HTML report at the link shows 0 findbugs warnings.
 I verified locally and the warnings do indeed exist, so there must be some 
 problem in findbugs HTML report generation in test-patch.sh.
 This inconsistency between the -1 for findbugs and the HTML report led to these 
 findbugs warnings leaking into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12248) Add native support for TAP

2015-08-10 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680380#comment-14680380
 ] 

Sean Busbey commented on HADOOP-12248:
--

+1

 Add native support for TAP
 --

 Key: HADOOP-12248
 URL: https://issues.apache.org/jira/browse/HADOOP-12248
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12248.HADOOP-12111.00.patch, 
 HADOOP-12248.HADOOP-12111.01.patch


 test-patch should support TAP-output files similarly to how we support JUnit 
 XML files.  This is an enabler for bats support for our own unit testing!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11229) JobStoryProducer is not closed upon return from Gridmix#setupDistCacheEmulation()

2015-08-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-11229:

Description: 
Here is related code:
{code}
  JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
{code}
jsp should be closed upon return from setupDistCacheEmulation().

  was:
Here is related code:
{code}
  JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
{code}

jsp should be closed upon return from setupDistCacheEmulation().


 JobStoryProducer is not closed upon return from 
 Gridmix#setupDistCacheEmulation()
 -

 Key: HADOOP-11229
 URL: https://issues.apache.org/jira/browse/HADOOP-11229
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: skrho
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11229_001.patch, HADOOP-11229_002.patch


 Here is related code:
 {code}
   JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
   exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
 {code}
 jsp should be closed upon return from setupDistCacheEmulation().
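JobStoryProducer is Hadoop-specific, but the fix pattern is plain try-with-resources. A stdlib-only sketch of that pattern (StubProducer is a stand-in for JobStoryProducer, which is assumed to implement Closeable):

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseOnReturn {
  // Stand-in for JobStoryProducer: any Closeable resource.
  static class StubProducer implements Closeable {
    boolean closed = false;
    int produce() { return 42; }
    @Override public void close() { closed = true; }
  }

  // Before the fix the resource leaks on return or exception; with
  // try-with-resources, close() runs on every exit path.
  static int setup(StubProducer producer) throws IOException {
    try (StubProducer jsp = producer) {
      return jsp.produce();  // close() still runs after this return
    }
  }

  public static void main(String[] args) throws IOException {
    StubProducer p = new StubProducer();
    int exitCode = setup(p);
    System.out.println("exitCode=" + exitCode + " closed=" + p.closed);
    // prints: exitCode=42 closed=true
  }
}
```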



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12248) Add native support for TAP

2015-08-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12248:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

Thanks for the review!

Committing

 Add native support for TAP
 --

 Key: HADOOP-12248
 URL: https://issues.apache.org/jira/browse/HADOOP-12248
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: HADOOP-12111

 Attachments: HADOOP-12248.HADOOP-12111.00.patch, 
 HADOOP-12248.HADOOP-12111.01.patch


 test-patch should support TAP-output files similarly to how we support JUnit 
 XML files.  This is an enabler for bats support for our own unit testing!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12311) Implement stream-based Filesystem API

2015-08-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680506#comment-14680506
 ] 

Steve Loughran commented on HADOOP-12311:
-

Different thought: can we have a java8 module alongside hadoop 2.x which does 
the API bridge? that way, no need to wait for Hadoop 3 *and its adoption*

 Implement stream-based Filesystem API
 -

 Key: HADOOP-12311
 URL: https://issues.apache.org/jira/browse/HADOOP-12311
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Victor Malov 
Priority: Minor

 After looking at the FileSystem API, I couldn't find a stream-based API, which 
 would work well with Java lambda functions and allow chained calls.
 As Hadoop 3.0 is going to support JDK 8, I propose implementing a general 
 stream-based FileSystem API similar to the one in Java SE 8:
 static Stream<String> lines(Path path, Charset cs)
 It would probably look similar to this:
 try (Stream<Path> stream = Files.list(Paths.get(""))) { 
   String joined = stream
       .map(String::valueOf)
       .filter(path -> !path.startsWith("."))
       .sorted()
       .collect(Collectors.joining("; "));
   System.out.println("List: " + joined);
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8602) Passive mode support for FTPFileSystem

2015-08-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680508#comment-14680508
 ] 

Steve Loughran commented on HADOOP-8602:


you have to hit the submit patch button

 Passive mode support for FTPFileSystem
 --

 Key: HADOOP-8602
 URL: https://issues.apache.org/jira/browse/HADOOP-8602
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Nemon Lou
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-8602.004.patch, HADOOP-8602.005.patch, 
 HADOOP-8602.006.patch, HADOOP-8602.007.patch, HADOOP-8602.patch, 
 HADOOP-8602.patch, HADOOP-8602.patch


  FTPFileSystem uses active mode as the default data connection mode. We should 
 be able to choose passive mode when active mode doesn't work (behind a 
 firewall, for example).
  My thought is to add an option fs.ftp.data.connection.mode in 
 core-site.xml. Since FTPClient (in the org.apache.commons.net.ftp package) 
 already supports passive mode, we just need to add a little code to the 
 FTPFileSystem.connect() method.
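The proposed option might look like this in core-site.xml. The property name comes from the description above; the value strings are an assumption, chosen to mirror the mode constant names on org.apache.commons.net.ftp.FTPClient:

```xml
<property>
  <name>fs.ftp.data.connection.mode</name>
  <!-- Assumed values, mirroring commons-net's constants:
       ACTIVE_LOCAL_DATA_CONNECTION_MODE (current behavior) or
       PASSIVE_LOCAL_DATA_CONNECTION_MODE (for firewalled clients) -->
  <value>PASSIVE_LOCAL_DATA_CONNECTION_MODE</value>
</property>
```

FTPFileSystem.connect() would read this value and, when passive mode is selected, call the client's enterLocalPassiveMode() before transferring data.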



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-08-10 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11887:
---
Attachment: HADOOP-11887-v5.patch

Updated the patch addressing all the comments made by Colin. 
Colin, would you look at the changes to see if they followed your ideas well? 
Thanks.

 Introduce Intel ISA-L erasure coding library for the native support
 ---

 Key: HADOOP-11887
 URL: https://issues.apache.org/jira/browse/HADOOP-11887
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11887-v1.patch, HADOOP-11887-v2.patch, 
 HADOOP-11887-v3.patch, HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, 
 HADOOP-11887-v5.patch


 This is to introduce Intel ISA-L erasure coding library for the native 
 support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
 *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-08-10 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680311#comment-14680311
 ] 

Alan Burlison commented on HADOOP-11887:


Hello,

I'm on holiday from August 10th returning on August 17th, I'll reply to you
when I'm back. If it's something urgent, please contact bonnie.cor...@oracle.com

Thanks,

--
Alan Burlison
--


 Introduce Intel ISA-L erasure coding library for the native support
 ---

 Key: HADOOP-11887
 URL: https://issues.apache.org/jira/browse/HADOOP-11887
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11887-v1.patch, HADOOP-11887-v2.patch, 
 HADOOP-11887-v3.patch, HADOOP-11887-v4.patch, HADOOP-11887-v5.patch, 
 HADOOP-11887-v5.patch


 This is to introduce Intel ISA-L erasure coding library for the native 
 support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
 *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680808#comment-14680808
 ] 

Hadoop QA commented on HADOOP-12258:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 58s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 12 new or modified test files. |
| {color:green}+1{color} | javac |   7m 41s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 42s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 17s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 18s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 175m  7s | Tests failed in hadoop-hdfs. |
| | | 240m 10s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.net.TestNetUtils |
|   | hadoop.fs.sftp.TestSFTPFileSystem |
| Timed out tests | org.apache.hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12749614/HADOOP-12258.002.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 8f73bdd |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7433/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7433/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7433/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7433/console |


This message was automatically generated.

 Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
 avoid regression
 -

 Key: HADOOP-12258
 URL: https://issues.apache.org/jira/browse/HADOOP-12258
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Critical
 Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
 HADOOP-12258.002.patch


 We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
 to avoid a regression.
 HADOOP-12045 adds NIO to support access time, but NIO throws 
 java.nio.file.NoSuchFileException instead of FileNotFoundException.
 Much Hadoop code depends on FileNotFoundException to decide whether a file 
 exists, for example {{FileContext.util().exists()}}: 
 {code}
 public boolean exists(final Path f) throws AccessControlException,
     UnsupportedFileSystemException, IOException {
   try {
     FileStatus fs = FileContext.this.getFileStatus(f);
     assert fs != null;
     return true;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 The same holds for {{FileSystem#exists}}:
 {code}
 public boolean exists(Path f) throws IOException {
   try {
     return getFileStatus(f) != null;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 NoSuchFileException breaks these methods.
 Since {{exists}} is one of the most heavily used APIs in FileSystem, this issue 
 is very critical.
 Several TestDeletionService test failures are caused by this issue:
 https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
 https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12129) rework test-patch bug system support

2015-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680963#comment-14680963
 ] 

Hadoop QA commented on HADOOP-12129:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7434/console in case of 
problems.

 rework test-patch bug system support
 

 Key: HADOOP-12129
 URL: https://issues.apache.org/jira/browse/HADOOP-12129
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-12129.HADOOP-12111.00.patch, 
 HADOOP-12129.HADOOP-12111.01.patch, HADOOP-12129.HADOOP-12111.02.patch


 WARNING: this is a fairly big project.
 See first comment for a brain dump on the issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12129) rework test-patch bug system support

2015-08-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12129:
--
Attachment: HADOOP-12129.HADOOP-12111.02.patch

-02:
* fix some bugs with curl's usage in smart-apply-patch
* github support works! both basic and token for auth!
* jira plugin now has some simple/basic support for switching to github when it 
detects a pull request in the comments
* removed the jira-cmd parse arg since we don't need it anymore
* some hard-coded seds moved to use the sed var
* jira was forcing its header into github comments

 rework test-patch bug system support
 

 Key: HADOOP-12129
 URL: https://issues.apache.org/jira/browse/HADOOP-12129
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-12129.HADOOP-12111.00.patch, 
 HADOOP-12129.HADOOP-12111.01.patch, HADOOP-12129.HADOOP-12111.02.patch


 WARNING: this is a fairly big project.
 See first comment for a brain dump on the issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12129) rework test-patch bug system support

2015-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680967#comment-14680967
 ] 

Hadoop QA commented on HADOOP-12129:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} HADOOP-12111 passed {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} site {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 12s 
{color} | {color:red} The applied patch generated 2 new shellcheck issues 
(total was 22, now 24). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 48s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12749721/HADOOP-12129.HADOOP-12111.02.patch
 |
| JIRA Issue | HADOOP-12129 |
| git revision | HADOOP-12111 / 8726069 |
| Optional Tests | asflicense site unit shellcheck |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7434/artifact/patchprocess/diff-patch-shellcheck.txt
 |
| JDK v1.7.0_55  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7434/testReport/ |
| Max memory used | 49MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7434/console |


This message was automatically generated.



 rework test-patch bug system support
 

 Key: HADOOP-12129
 URL: https://issues.apache.org/jira/browse/HADOOP-12129
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-12129.HADOOP-12111.00.patch, 
 HADOOP-12129.HADOOP-12111.01.patch, HADOOP-12129.HADOOP-12111.02.patch


 WARNING: this is a fairly big project.
 See first comment for a brain dump on the issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12311) Implement stream-based Filesystem API

2015-08-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14680690#comment-14680690
 ] 

Chris Nauroth commented on HADOOP-12311:


[~steve_l], does this mean a new Maven sub-module configured to compile to 1.8 
within branch-2?  Then, the release process would need to have a JDK 8 
available at distro build time.

That seems feasible.  The majority of Hadoop jars would still be built for JDK 
7, and just this new API bridge jar would contain JDK 8 bytecode.

 Implement stream-based Filesystem API
 -

 Key: HADOOP-12311
 URL: https://issues.apache.org/jira/browse/HADOOP-12311
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Victor Malov 
Priority: Minor

 After looking at the FileSystem API, I couldn't find a stream-based API, which 
 would work well with Java lambda functions and allow chained calls.
 As Hadoop 3.0 is going to support JDK 8, I propose implementing a general 
 stream-based FileSystem API similar to the one in Java SE 8:
 static Stream<String> lines(Path path, Charset cs)
 It would probably look similar to this:
 try (Stream<Path> stream = Files.list(Paths.get(""))) { 
   String joined = stream
       .map(String::valueOf)
       .filter(path -> !path.startsWith("."))
       .sorted()
       .collect(Collectors.joining("; "));
   System.out.println("List: " + joined);
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-2) Reused Keys and Values fail with a Combiner

2015-08-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681176#comment-14681176
 ] 

Hudson commented on HADOOP-2:
-

FAILURE: Integrated in HBase-0.98 #1072 (See 
[https://builds.apache.org/job/HBase-0.98/1072/])
HBASE-5878 Use getVisibleLength public api from HdfsDataInputStream from 
Hadoop-2. (apurtell: rev b69569f512068d795199310ce662ab381bb6b6b7)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java
Revert HBASE-5878 Use getVisibleLength public api from HdfsDataInputStream 
from Hadoop-2. (apurtell: rev fabfb423f9cf48ddd52e9583ca6664f42349)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java


 Reused Keys and Values fail with a Combiner
 ---

 Key: HADOOP-2
 URL: https://issues.apache.org/jira/browse/HADOOP-2
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.1.0

 Attachments: clone-map-output.patch


 If the map function reuses the key or value by destructively modifying it 
 after the output.collect(key,value) call and your application uses a 
 combiner, the data is corrupted by having lots of instances with the last key 
 or value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-2) Reused Keys and Values fail with a Combiner

2015-08-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681195#comment-14681195
 ] 

Hudson commented on HADOOP-2:
-

FAILURE: Integrated in HBase-TRUNK #6712 (See 
[https://builds.apache.org/job/HBase-TRUNK/6712/])
HBASE-5878 Use getVisibleLength public api from HdfsDataInputStream from 
Hadoop-2. (apurtell: rev 6e8cdec242b6c40c09601982bad0a79a569e66c4)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java


 Reused Keys and Values fail with a Combiner
 ---

 Key: HADOOP-2
 URL: https://issues.apache.org/jira/browse/HADOOP-2
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.1.0

 Attachments: clone-map-output.patch


 If the map function reuses the key or value by destructively modifying it 
 after the output.collect(key,value) call and your application uses a 
 combiner, the data is corrupted by having lots of instances with the last key 
 or value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-2) Reused Keys and Values fail with a Combiner

2015-08-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681200#comment-14681200
 ] 

Hudson commented on HADOOP-2:
-

FAILURE: Integrated in HBase-1.2 #99 (See 
[https://builds.apache.org/job/HBase-1.2/99/])
HBASE-5878 Use getVisibleLength public api from HdfsDataInputStream from 
Hadoop-2. (apurtell: rev 7f33e6330a37b0401c2f9143ddbea67361217453)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java


 Reused Keys and Values fail with a Combiner
 ---

 Key: HADOOP-2
 URL: https://issues.apache.org/jira/browse/HADOOP-2
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.1.0

 Attachments: clone-map-output.patch


 If the map function reuses the key or value by destructively modifying it 
 after the output.collect(key,value) call and your application uses a 
 combiner, the data is corrupted by having lots of instances with the last key 
 or value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12311) Implement stream-based Filesystem API

2015-08-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681202#comment-14681202
 ] 

Steve Loughran commented on HADOOP-12311:
-

you could do it in something downstream of hadoop-core for now, built & released 
separately/in sync, but with the ultimate goal of pulling it back in future

 Implement stream-based Filesystem API
 -

 Key: HADOOP-12311
 URL: https://issues.apache.org/jira/browse/HADOOP-12311
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Victor Malov 
Priority: Minor

 After looking at the Filesystem API, I couldn't find a stream-based API that 
 would work well with Java lambda functions and allow chained calls.
 As Hadoop 3.0 is going to support JDK 8, I propose implementing a general 
 stream-based Filesystem API similar to the one in Java SE 8:
 static Stream<String> lines(Path path, Charset cs)
 It would probably look similar to this:
 try (Stream<Path> stream = Files.list(Paths.get(""))) { 
 String joined = stream
 .map(String::valueOf)
 .filter(path -> !path.startsWith("."))
 .sorted()
 .collect(Collectors.joining("; "));
 System.out.println("List: " + joined);
 }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9654) IPC timeout doesn't seem to be kicking in

2015-08-10 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681206#comment-14681206
 ] 

Ajith S commented on HADOOP-9654:
-

+1 for [~rvs]

I think we can introduce a new default *ipc.client.timeout* property which can 
be used when ipc.client.ping=false (which is the default now).
-1 is not a reasonable timeout value; we could set the new property to, say, 
3600 seconds. Would that be reasonable?
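For example, the proposal above might look like this in core-site.xml (ipc.client.timeout is the property being proposed here, not an existing Hadoop key, and the value is purely illustrative):

```xml
<!-- Hypothetical configuration sketch for the proposed timeout property -->
<property>
  <name>ipc.client.ping</name>
  <value>false</value>
</property>
<property>
  <name>ipc.client.timeout</name>
  <!-- e.g. 3600 seconds, expressed in milliseconds -->
  <value>3600000</value>
</property>
```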

 IPC timeout doesn't seem to be kicking in
 -

 Key: HADOOP-9654
 URL: https://issues.apache.org/jira/browse/HADOOP-9654
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.1.0-beta
Reporter: Roman Shaposhnik
Assignee: Ajith S

 During my Bigtop testing I made the NN OOM. This, in turn, made all of the 
 clients stuck in the IPC call (even the new clients that I run *after* the NN 
 went OOM). Here's an example of a jstack output on the client that was 
 running:
 {noformat}
 $ hadoop fs -lsr /
 {noformat}
 Stacktrace:
 {noformat}
 /usr/java/jdk1.6.0_21/bin/jstack 19078
 2013-06-19 23:14:00
 Full thread dump Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode):
 Attach Listener daemon prio=10 tid=0x7fcd8c8c1800 nid=0x5105 waiting on 
 condition [0x]
java.lang.Thread.State: RUNNABLE
 IPC Client (1223039541) connection to 
 ip-10-144-82-213.ec2.internal/10.144.82.213:17020 from root daemon prio=10 
 tid=0x7fcd8c7ea000 nid=0x4aa0 runnable [0x7fcd443e2000]
java.lang.Thread.State: RUNNABLE
   at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
   at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
   at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
   - locked 0x7fcd7529de18 (a sun.nio.ch.Util$1)
   - locked 0x7fcd7529de00 (a java.util.Collections$UnmodifiableSet)
   - locked 0x7fcd7529da80 (a sun.nio.ch.EPollSelectorImpl)
   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
   at 
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
   at 
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
   at 
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
   at 
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
   at java.io.FilterInputStream.read(FilterInputStream.java:116)
   at java.io.FilterInputStream.read(FilterInputStream.java:116)
   at 
 org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:421)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
   - locked 0x7fcd752aaf18 (a java.io.BufferedInputStream)
   at java.io.DataInputStream.readInt(DataInputStream.java:370)
   at 
 org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:943)
   at org.apache.hadoop.ipc.Client$Connection.run(Client.java:840)
 Low Memory Detector daemon prio=10 tid=0x7fcd8c09 nid=0x4a9b 
 runnable [0x]
java.lang.Thread.State: RUNNABLE
 CompilerThread1 daemon prio=10 tid=0x7fcd8c08d800 nid=0x4a9a waiting on 
 condition [0x]
java.lang.Thread.State: RUNNABLE
 CompilerThread0 daemon prio=10 tid=0x7fcd8c08a800 nid=0x4a99 waiting on 
 condition [0x]
java.lang.Thread.State: RUNNABLE
 Signal Dispatcher daemon prio=10 tid=0x7fcd8c088800 nid=0x4a98 runnable 
 [0x]
java.lang.Thread.State: RUNNABLE
 Finalizer daemon prio=10 tid=0x7fcd8c06a000 nid=0x4a97 in Object.wait() 
 [0x7fcd902e9000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 0x7fcd75fc0470 (a java.lang.ref.ReferenceQueue$Lock)
   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
   - locked 0x7fcd75fc0470 (a java.lang.ref.ReferenceQueue$Lock)
   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
   at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
 Reference Handler daemon prio=10 tid=0x7fcd8c068000 nid=0x4a96 in 
 Object.wait() [0x7fcd903ea000]
java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on 0x7fcd75fc0550 (a java.lang.ref.Reference$Lock)
   at java.lang.Object.wait(Object.java:485)
   at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
   - locked 0x7fcd75fc0550 (a java.lang.ref.Reference$Lock)
 main prio=10 tid=0x7fcd8c00a800 nid=0x4a92 in Object.wait() 
 [0x7fcd91b06000]

[jira] [Commented] (HADOOP-12057) swiftfs rename on partitioned file attempts to consolidate partitions

2015-08-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681205#comment-14681205
 ] 

Steve Loughran commented on HADOOP-12057:
-

what's the legal status of something from sahara? Who is the original 
contributor?

 swiftfs rename on partitioned file attempts to consolidate partitions
 -

 Key: HADOOP-12057
 URL: https://issues.apache.org/jira/browse/HADOOP-12057
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Reporter: David Dobbins
Assignee: David Dobbins
 Attachments: HADOOP-12057-006.patch, HADOOP-12057-008.patch, 
 HADOOP-12057.007.patch, HADOOP-12057.patch, HADOOP-12057.patch, 
 HADOOP-12057.patch, HADOOP-12057.patch, HADOOP-12057.patch


 In the swift filesystem for openstack, a rename operation on a partitioned 
 file uses the swift COPY operation, which attempts to consolidate all of the 
 partitions into a single object.  This causes the rename to fail when the 
 total size of all the partitions exceeds the maximum object size for swift.  
 Since partitioned files are primarily created to allow a file to exceed the 
 maximum object size, this bug makes writing to swift extremely unreliable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681212#comment-14681212
 ] 

Steve Loughran commented on HADOOP-12258:
-

LGTM, especially the test work.

Chris?

 Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
 avoid regression
 -

 Key: HADOOP-12258
 URL: https://issues.apache.org/jira/browse/HADOOP-12258
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Critical
 Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
 HADOOP-12258.002.patch


 We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
 to avoid a regression.
 HADOOP-12045 added NIO to support access time, but NIO throws 
 java.nio.file.NoSuchFileException instead of FileNotFoundException.
 Much Hadoop code depends on FileNotFoundException to decide whether a file 
 exists, for example {{FileContext.util().exists()}}. 
 {code}
 public boolean exists(final Path f) throws AccessControlException,
   UnsupportedFileSystemException, IOException {
   try {
 FileStatus fs = FileContext.this.getFileStatus(f);
 assert fs != null;
 return true;
   } catch (FileNotFoundException e) {
 return false;
   }
 }
 {code}
 same for {{FileSystem#exists}}
 {code}
   public boolean exists(Path f) throws IOException {
 try {
   return getFileStatus(f) != null;
 } catch (FileNotFoundException e) {
   return false;
 }
   }
 {code}
 NoSuchFileException will break these functions.
 Since {{exists}} is one of the most heavily used APIs in FileSystem, this 
 issue is critical.
 Several test failures for TestDeletionService are caused by this issue:
 https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
 https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/
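The translation described above can be sketched as a catch-and-rethrow around the NIO call. This is an illustrative helper, not the attached patch; the class and method names are hypothetical:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

public class Translate {
    // Hypothetical helper: read file attributes via NIO, translating the
    // exception so callers that catch FileNotFoundException (such as the
    // exists() implementations quoted above) keep working.
    static BasicFileAttributes readAttrs(String path) throws IOException {
        try {
            return Files.readAttributes(Paths.get(path), BasicFileAttributes.class);
        } catch (NoSuchFileException e) {
            // NoSuchFileException extends IOException, not FileNotFoundException,
            // so it must be translated explicitly to preserve old behaviour.
            FileNotFoundException fnfe = new FileNotFoundException(e.getFile());
            fnfe.initCause(e);
            throw fnfe;
        }
    }
}
```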



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11683) Need a plugin API to translate long principal names to local OS user names arbitrarily

2015-08-10 Thread roger mak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

roger mak updated HADOOP-11683:
---
Attachment: HADOOP-11683.001.patch

I uploaded a patch, HADOOP-11683.001.patch, for review. 

The patch allows HadoopKerberosName to use a pluggable user name mapping API, 
configured via the parameter hadoop.security.user.name.mapping, instead of the 
regular expressions specified in the parameter hadoop.security.auth_to_local. 

If the user name is not found by the API, or hadoop.security.user.name.mapping 
is not set, it defaults back to hadoop.security.auth_to_local for compatibility. 

Note: Similar to the existing CompositeGroupsMapping class, a new class, 
CompositeUserNameMapping, is added to handle multiple mapping providers. For 
simplicity, caching is not introduced in this version yet.
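The fallback behaviour described above can be sketched as follows. The interface and class names here are hypothetical illustrations, not the patch's actual API:

```java
import java.util.Optional;
import java.util.function.Function;

// Hypothetical pluggable mapper: empty result means "no mapping found".
interface UserNameMapping {
    Optional<String> map(String principal);
}

public class MappingResolver {
    // Try the pluggable mapper (hadoop.security.user.name.mapping) first;
    // default back to the auth_to_local rules when it is unset or finds
    // no mapping for this principal.
    static String resolve(String principal, UserNameMapping plugin,
                          Function<String, String> authToLocal) {
        if (plugin != null) {
            Optional<String> mapped = plugin.map(principal);
            if (mapped.isPresent()) {
                return mapped.get();
            }
        }
        return authToLocal.apply(principal);
    }
}
```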

 Need a plugin API to translate long principal names to local OS user names 
 arbitrarily
 --

 Key: HADOOP-11683
 URL: https://issues.apache.org/jira/browse/HADOOP-11683
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Sunny Cheung
Assignee: Sunny Cheung
 Attachments: HADOOP-11683.001.patch


 We need a plugin API to translate long principal names (e.g. 
 john@example.com) to local OS user names (e.g. user123456) arbitrarily.
 For some organizations the name translation is straightforward (e.g. 
 john@example.com to john_doe), and the hadoop.security.auth_to_local 
 configurable mapping is sufficient to resolve this (see HADOOP-6526). 
 However, in some other cases the name translation is arbitrary and cannot be 
 generalized by a set of translation rules easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-2) Reused Keys and Values fail with a Combiner

2015-08-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681114#comment-14681114
 ] 

Hudson commented on HADOOP-2:
-

FAILURE: Integrated in HBase-1.3 #99 (See 
[https://builds.apache.org/job/HBase-1.3/99/])
HBASE-5878 Use getVisibleLength public api from HdfsDataInputStream from 
Hadoop-2. (apurtell: rev 0862abd6599a6936fb8079f4c70afc660175ba11)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java


 Reused Keys and Values fail with a Combiner
 ---

 Key: HADOOP-2
 URL: https://issues.apache.org/jira/browse/HADOOP-2
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.1.0

 Attachments: clone-map-output.patch


 If the map function reuses the key or value by destructively modifying it 
 after the output.collect(key,value) call and your application uses a 
 combiner, the data is corrupted by having lots of instances with the last key 
 or value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12253) ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0

2015-08-10 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HADOOP-12253:
-
Attachment: HADOOP-12253.patch

Will avoid ArrayIndexOutOfBoundsException

 ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0
 

 Key: HADOOP-12253
 URL: https://issues.apache.org/jira/browse/HADOOP-12253
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
 Environment: hadoop 2.6.0, hive 1.1.0, tez 0.7, centos 6.4
Reporter: tangjunjie
Assignee: Ajith S
 Attachments: HADOOP-12253.patch


 When I enabled HDFS federation and ran a query on Hive on Tez, the following 
 exception occurred:
 {noformat}
 8.784 PM  WARNorg.apache.hadoop.security.UserGroupInformation No 
 groups available for user tangjijun
 3:12:28.784 PMERROR   org.apache.hadoop.hive.ql.exec.Task Failed 
 to execute tez graph.
 java.lang.ArrayIndexOutOfBoundsException: 0
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$InternalDirOfViewFs.getFileStatus(ViewFileSystem.java:771)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getFileStatus(ViewFileSystem.java:359)
   at 
 org.apache.tez.client.TezClientUtils.checkAncestorPermissionsForAllUsers(TezClientUtils.java:955)
   at 
 org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:184)
   at 
 org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:787)
   at org.apache.tez.client.TezClient.start(TezClient.java:337)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:191)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:234)
   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:136)
   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
   at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640)
   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1183)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1044)
   at 
 org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:144)
   at 
 org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:69)
   at 
 org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:196)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at 
 org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:208)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 {noformat}
 Digging into the issue, I found the following code snippet in 
 ViewFileSystem.java:
 {noformat}
  @Override
 public FileStatus getFileStatus(Path f) throws IOException {
   checkPathIsSlash(f);
   return new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getUserName(), ugi.getGroupNames()[0],
   new Path(theInternalDir.fullPath).makeQualified(
   myUri, ROOT_PATH));
 }
 {noformat}
 If the cluster node doesn't have a user like tangjijun, 
 ugi.getGroupNames()[0] will throw ArrayIndexOutOfBoundsException, because no 
 user means no groups.
 After I created the user tangjijun on that node, the job executed normally.
 I think this code should check whether ugi.getGroupNames() is empty and, when 
 it is, log a message instead of throwing an exception.
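The suggested guard can be sketched as follows. This is an illustrative helper rather than the actual fix, and falling back to the user name when no groups resolve is one possible choice (an assumption, not what the patch necessarily does):

```java
public class GroupGuard {
    // Hypothetical guard around ugi.getGroupNames()[0]: instead of indexing
    // into a possibly empty array (which throws ArrayIndexOutOfBoundsException),
    // fall back to the user name when no groups were resolved.
    static String primaryGroup(String user, String[] groups) {
        if (groups == null || groups.length == 0) {
            // A real implementation would log a warning here before falling back.
            return user;
        }
        return groups[0];
    }
}
```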



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12253) ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0

2015-08-10 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HADOOP-12253:
-
Status: Patch Available  (was: Open)

Submitting patch. Please review

 ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0
 

 Key: HADOOP-12253
 URL: https://issues.apache.org/jira/browse/HADOOP-12253
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
 Environment: hadoop 2.6.0, hive 1.1.0, tez 0.7, centos 6.4
Reporter: tangjunjie
Assignee: Ajith S
 Attachments: HADOOP-12253.patch


 When I enabled HDFS federation and ran a query on Hive on Tez, the following 
 exception occurred:
 {noformat}
 8.784 PM  WARNorg.apache.hadoop.security.UserGroupInformation No 
 groups available for user tangjijun
 3:12:28.784 PMERROR   org.apache.hadoop.hive.ql.exec.Task Failed 
 to execute tez graph.
 java.lang.ArrayIndexOutOfBoundsException: 0
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$InternalDirOfViewFs.getFileStatus(ViewFileSystem.java:771)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getFileStatus(ViewFileSystem.java:359)
   at 
 org.apache.tez.client.TezClientUtils.checkAncestorPermissionsForAllUsers(TezClientUtils.java:955)
   at 
 org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:184)
   at 
 org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:787)
   at org.apache.tez.client.TezClient.start(TezClient.java:337)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:191)
   at 
 org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:234)
   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:136)
   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
   at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640)
   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1183)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1044)
   at 
 org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:144)
   at 
 org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:69)
   at 
 org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:196)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
   at 
 org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:208)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 {noformat}
 Digging into the issue, I found the following code snippet in 
 ViewFileSystem.java:
 {noformat}
  @Override
 public FileStatus getFileStatus(Path f) throws IOException {
   checkPathIsSlash(f);
   return new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getUserName(), ugi.getGroupNames()[0],
   new Path(theInternalDir.fullPath).makeQualified(
   myUri, ROOT_PATH));
 }
 {noformat}
 If the cluster node doesn't have a user like tangjijun, 
 ugi.getGroupNames()[0] will throw ArrayIndexOutOfBoundsException, because no 
 user means no groups.
 After I created the user tangjijun on that node, the job executed normally.
 I think this code should check whether ugi.getGroupNames() is empty and, when 
 it is, log a message instead of throwing an exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-2) Reused Keys and Values fail with a Combiner

2015-08-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681099#comment-14681099
 ] 

Hudson commented on HADOOP-2:
-

SUCCESS: Integrated in HBase-1.2-IT #82 (See 
[https://builds.apache.org/job/HBase-1.2-IT/82/])
HBASE-5878 Use getVisibleLength public api from HdfsDataInputStream from 
Hadoop-2. (apurtell: rev 7f33e6330a37b0401c2f9143ddbea67361217453)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java


 Reused Keys and Values fail with a Combiner
 ---

 Key: HADOOP-2
 URL: https://issues.apache.org/jira/browse/HADOOP-2
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.1.0

 Attachments: clone-map-output.patch


 If the map function reuses the key or value by destructively modifying it 
 after the output.collect(key,value) call and your application uses a 
 combiner, the data is corrupted by having lots of instances with the last key 
 or value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11683) Need a plugin API to translate long principal names to local OS user names arbitrarily

2015-08-10 Thread Sunny Cheung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunny Cheung updated HADOOP-11683:
--
Assignee: roger mak  (was: Sunny Cheung)

 Need a plugin API to translate long principal names to local OS user names 
 arbitrarily
 --

 Key: HADOOP-11683
 URL: https://issues.apache.org/jira/browse/HADOOP-11683
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Sunny Cheung
Assignee: roger mak
 Attachments: HADOOP-11683.001.patch


 We need a plugin API to translate long principal names (e.g. 
 john@example.com) to local OS user names (e.g. user123456) arbitrarily.
 For some organizations the name translation is straightforward (e.g. 
 john@example.com to john_doe), and the hadoop.security.auth_to_local 
 configurable mapping is sufficient to resolve this (see HADOOP-6526). 
 However, in some other cases the name translation is arbitrary and cannot be 
 generalized by a set of translation rules easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11683) Need a plugin API to translate long principal names to local OS user names arbitrarily

2015-08-10 Thread Sunny Cheung (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681046#comment-14681046
 ] 

Sunny Cheung commented on HADOOP-11683:
---

Just reassigned this bug to [~roger.mak]. He is the colleague who implemented 
this feature. Thanks.

 Need a plugin API to translate long principal names to local OS user names 
 arbitrarily
 --

 Key: HADOOP-11683
 URL: https://issues.apache.org/jira/browse/HADOOP-11683
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Sunny Cheung
Assignee: roger mak
 Attachments: HADOOP-11683.001.patch


 We need a plugin API to translate long principal names (e.g. 
 john@example.com) to local OS user names (e.g. user123456) arbitrarily.
 For some organizations the name translation is straightforward (e.g. 
 john@example.com to john_doe), and the hadoop.security.auth_to_local 
 configurable mapping is sufficient to resolve this (see HADOOP-6526). 
 However, in some other cases the name translation is arbitrary and cannot be 
 generalized by a set of translation rules easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12160) Add snapshot APIs to the FileSystem specification

2015-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679836#comment-14679836
 ] 

Hadoop QA commented on HADOOP-12160:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m  5s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 10 new or modified test files. |
| {color:green}+1{color} | javac |   7m 46s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 55s | Site still builds. |
| {color:green}+1{color} | checkstyle |   2m 47s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 24s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m 47s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 11s | Tests failed in 
hadoop-common. |
| {color:green}+1{color} | tools/hadoop tests |   0m 13s | Tests passed in 
hadoop-aws. |
| {color:green}+1{color} | tools/hadoop tests |   0m 13s | Tests passed in 
hadoop-openstack. |
| {color:red}-1{color} | hdfs tests | 172m 32s | Tests failed in hadoop-hdfs. |
| | | 230m 48s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.net.TestNetUtils |
|   | hadoop.ha.TestZKFailoverController |
| Timed out tests | org.apache.hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12749521/HADOOP-12160.003.patch 
|
| Optional Tests | site javac unit findbugs checkstyle |
| git revision | trunk / 8f73bdd |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7429/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-aws test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7429/artifact/patchprocess/testrun_hadoop-aws.txt
 |
| hadoop-openstack test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7429/artifact/patchprocess/testrun_hadoop-openstack.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7429/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7429/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7429/console |


This message was automatically generated.

 Add snapshot APIs to the FileSystem specification
 -

 Key: HADOOP-12160
 URL: https://issues.apache.org/jira/browse/HADOOP-12160
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.7.1
Reporter: Arpit Agarwal
Assignee: Masatake Iwasaki
 Attachments: HADOOP-12160.002.patch, HADOOP-12160.003.patch


 The following snapshot APIs should be documented in the [FileSystem 
 specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html].
 # createSnapshot(Path path)
 # createSnapshot(Path path, String snapshotName)
 # renameSnapshot(Path path, String snapshotOldName, String snapshotNewName)
 # deleteSnapshot(Path path, String snapshotName)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-08-10 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679616#comment-14679616
 ] 

Varun Saxena commented on HADOOP-12312:
---

cc [~aw]

 Findbugs HTML report link shows 0 warnings despite errors
 -

 Key: HADOOP-12312
 URL: https://issues.apache.org/jira/browse/HADOOP-12312
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Varun Saxena

 Refer to the Hadoop QA report below:
 https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146
 The report shows -1 for findbugs because 7 findbugs warnings have been 
 introduced, but the HTML report at the link shows 0 findbugs warnings.
 I verified locally and the warnings did indeed exist, so there must be some 
 problem in the findbugs HTML report generation in test-patch.sh.
 This inconsistency between the -1 for findbugs and the HTML report led to 
 these findbugs warnings leaking into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-08-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated HADOOP-12312:
--
Component/s: yetus

 Findbugs HTML report link shows 0 warnings despite errors
 -

 Key: HADOOP-12312
 URL: https://issues.apache.org/jira/browse/HADOOP-12312
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Varun Saxena

 Refer to the Hadoop QA report below:
 https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146
 The report shows -1 for findbugs because 7 findbugs warnings have been 
 introduced, but the HTML report at the link shows 0 findbugs warnings.
 I verified locally and the warnings did indeed exist, so there must be some 
 problem in the findbugs HTML report generation in test-patch.sh.
 This inconsistency between the -1 for findbugs and the HTML report led to 
 these findbugs warnings leaking into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-08-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated HADOOP-12312:
--
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-12111

 Findbugs HTML report link shows 0 warnings despite errors
 -

 Key: HADOOP-12312
 URL: https://issues.apache.org/jira/browse/HADOOP-12312
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Varun Saxena

 Refer to the Hadoop QA report below:
 https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146
 The report shows -1 for findbugs because 7 findbugs warnings have been 
 introduced, but the HTML report at the link shows 0 findbugs warnings.
 I verified locally and the warnings did indeed exist, so there must be some 
 problem in the findbugs HTML report generation in test-patch.sh.
 This inconsistency between the -1 for findbugs and the HTML report led to 
 these findbugs warnings leaking into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-08-10 Thread Varun Saxena (JIRA)
Varun Saxena created HADOOP-12312:
-

 Summary: Findbugs HTML report link shows 0 warnings despite errors
 Key: HADOOP-12312
 URL: https://issues.apache.org/jira/browse/HADOOP-12312
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Saxena


Refer to the Hadoop QA report below:

https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146

The report shows -1 for findbugs because 7 findbugs warnings have been 
introduced, but the HTML report at the link shows 0 findbugs warnings.
I verified locally and the warnings did indeed exist, so there must be some 
problem in the findbugs HTML report generation in test-patch.sh.

This inconsistency between the -1 for findbugs and the HTML report led to 
these findbugs warnings leaking into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-7139) Allow appending to existing SequenceFiles

2015-08-10 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679666#comment-14679666
 ] 

kanaka kumar avvaru commented on HADOOP-7139:
-

[~vinayrpet], can you please check whether we can push this to 2.6.1?

 Allow appending to existing SequenceFiles
 -

 Key: HADOOP-7139
 URL: https://issues.apache.org/jira/browse/HADOOP-7139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 1.0.0
Reporter: Stephen Rose
Assignee: kanaka kumar avvaru
 Fix For: 2.8.0

 Attachments: HADOOP-7139-01.patch, HADOOP-7139-02.patch, 
 HADOOP-7139-03.patch, HADOOP-7139-04.patch, HADOOP-7139-05.patch, 
 HADOOP-7139-kt.patch, HADOOP-7139.patch, HADOOP-7139.patch, 
 HADOOP-7139.patch, HADOOP-7139.patch

   Original Estimate: 2h
  Remaining Estimate: 2h





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12312) Findbugs HTML report link shows 0 warnings despite errors

2015-08-10 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679632#comment-14679632
 ] 

Brahma Reddy Battula commented on HADOOP-12312:
---

Have you looked at HADOOP-12083? It looks like mostly the same issue...

 Findbugs HTML report link shows 0 warnings despite errors
 -

 Key: HADOOP-12312
 URL: https://issues.apache.org/jira/browse/HADOOP-12312
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Varun Saxena

 Refer to the Hadoop QA report below:
 https://issues.apache.org/jira/browse/YARN-3232?focusedCommentId=14679146&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14679146
 The report shows -1 for findbugs because 7 findbugs warnings have been 
 introduced, but the HTML report at the link shows 0 findbugs warnings.
 I verified locally and the warnings did indeed exist, so there must be some 
 problem in the findbugs HTML report generation in test-patch.sh.
 This inconsistency between the -1 for findbugs and the HTML report led to 
 these findbugs warnings leaking into trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-10 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679655#comment-14679655
 ] 

Vinayakumar B commented on HADOOP-12295:


+1, looks fine to me.
Hi [~chris.douglas], do you want to take a look?

 Improve NetworkTopology#InnerNode#remove logic
 --

 Key: HADOOP-12295
 URL: https://issues.apache.org/jira/browse/HADOOP-12295
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-12295.001.patch


 In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
 the parent node instead of looping over the {{children}} list. This is more 
 efficient, since in most cases the parent node is not deleted.
 Another nit in the current code:
 {code}
   String parent = n.getNetworkLocation();
   String currentPath = getPath(this);
 {code}
 These two lines can be moved inside the {{\!isAncestor\(n\)}} block.
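 A minimal sketch of the lookup change, using a simplified stand-in for 
 {{InnerNode}} (the real NetworkTopology class differs; field and method 
 names here are illustrative only):
 {code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical simplification of NetworkTopology#InnerNode: keeping a
// name-keyed map alongside the children list turns the parent lookup in
// remove() into an O(1) map get instead of an O(n) scan of the list.
class InnerNodeSketch {
  private final List<String> children = new ArrayList<>();
  private final Map<String, String> childrenMap = new HashMap<>();

  void add(String name) {
    children.add(name);
    childrenMap.put(name, name);
  }

  // Old approach: linear scan of the children list.
  String findByScan(String name) {
    for (String c : children) {
      if (c.equals(name)) {
        return c;
      }
    }
    return null;
  }

  // Proposed approach: constant-time lookup via childrenMap.
  String findByMap(String name) {
    return childrenMap.get(name);
  }
}
 {code}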



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-10 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12258:
---
Attachment: (was: HADOOP-12258.002.patch)

 Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
 avoid regression
 -

 Key: HADOOP-12258
 URL: https://issues.apache.org/jira/browse/HADOOP-12258
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Critical
 Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch


 We need to translate java.nio.file.NoSuchFileException to 
 FileNotFoundException to avoid a regression.
 HADOOP-12045 adds NIO to support access time, but NIO throws 
 java.nio.file.NoSuchFileException instead of FileNotFoundException.
 Much Hadoop code depends on FileNotFoundException to decide whether a file 
 exists, for example {{FileContext.util().exists()}}:
 {code}
 public boolean exists(final Path f) throws AccessControlException,
     UnsupportedFileSystemException, IOException {
   try {
     FileStatus fs = FileContext.this.getFileStatus(f);
     assert fs != null;
     return true;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 The same holds for {{FileSystem#exists}}:
 {code}
 public boolean exists(Path f) throws IOException {
   try {
     return getFileStatus(f) != null;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 NoSuchFileException will break these functions.
 Since {{exists}} is one of the most heavily used APIs in FileSystem, this 
 issue is critical.
 Several test failures for TestDeletionService are caused by this issue:
 https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
 https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/
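 A self-contained sketch of the requested translation, using plain JDK NIO 
 rather than Hadoop's actual filesystem code (the method names here are 
 illustrative, not Hadoop APIs):
 {code}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

public class NioTranslationSketch {
  // Reads attributes via NIO but preserves the FileNotFoundException
  // contract that exists()-style callers depend on.
  static BasicFileAttributes readAttributes(Path p) throws IOException {
    try {
      return Files.readAttributes(p, BasicFileAttributes.class);
    } catch (NoSuchFileException e) {
      // NIO signals a missing file with NoSuchFileException; translate it
      // so callers catching FileNotFoundException keep working.
      FileNotFoundException fnfe = new FileNotFoundException(p.toString());
      fnfe.initCause(e);
      throw fnfe;
    }
  }

  // Mirrors the FileSystem#exists pattern: a missing file yields false
  // instead of propagating an exception.
  static boolean exists(Path p) throws IOException {
    try {
      return readAttributes(p) != null;
    } catch (FileNotFoundException e) {
      return false;
    }
  }
}
 {code}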



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-10 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12258:
---
Attachment: HADOOP-12258.002.patch

 Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
 avoid regression
 -

 Key: HADOOP-12258
 URL: https://issues.apache.org/jira/browse/HADOOP-12258
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Critical
 Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
 HADOOP-12258.002.patch


 We need to translate java.nio.file.NoSuchFileException to 
 FileNotFoundException to avoid a regression.
 HADOOP-12045 adds NIO to support access time, but NIO throws 
 java.nio.file.NoSuchFileException instead of FileNotFoundException.
 Much Hadoop code depends on FileNotFoundException to decide whether a file 
 exists, for example {{FileContext.util().exists()}}:
 {code}
 public boolean exists(final Path f) throws AccessControlException,
     UnsupportedFileSystemException, IOException {
   try {
     FileStatus fs = FileContext.this.getFileStatus(f);
     assert fs != null;
     return true;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 The same holds for {{FileSystem#exists}}:
 {code}
 public boolean exists(Path f) throws IOException {
   try {
     return getFileStatus(f) != null;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 NoSuchFileException will break these functions.
 Since {{exists}} is one of the most heavily used APIs in FileSystem, this 
 issue is critical.
 Several test failures for TestDeletionService are caused by this issue:
 https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
 https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7139) Allow appending to existing SequenceFiles

2015-08-10 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-7139:
--
Labels: 2.6.1-candidate  (was: )

 Allow appending to existing SequenceFiles
 -

 Key: HADOOP-7139
 URL: https://issues.apache.org/jira/browse/HADOOP-7139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 1.0.0
Reporter: Stephen Rose
Assignee: kanaka kumar avvaru
  Labels: 2.6.1-candidate
 Fix For: 2.8.0

 Attachments: HADOOP-7139-01.patch, HADOOP-7139-02.patch, 
 HADOOP-7139-03.patch, HADOOP-7139-04.patch, HADOOP-7139-05.patch, 
 HADOOP-7139-kt.patch, HADOOP-7139.patch, HADOOP-7139.patch, 
 HADOOP-7139.patch, HADOOP-7139.patch

   Original Estimate: 2h
  Remaining Estimate: 2h





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException to avoid regression

2015-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679946#comment-14679946
 ] 

Hadoop QA commented on HADOOP-12258:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 12s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 12 new or modified test files. |
| {color:green}+1{color} | javac |   7m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 44s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 36s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 25s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 27s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 175m 31s | Tests failed in hadoop-hdfs. |
| | | 241m 12s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.net.TestNetUtils |
| Timed out tests | org.apache.hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12749535/HADOOP-12258.002.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 8f73bdd |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7430/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7430/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7430/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7430/console |


This message was automatically generated.

 Need translate java.nio.file.NoSuchFileException to FileNotFoundException to 
 avoid regression
 -

 Key: HADOOP-12258
 URL: https://issues.apache.org/jira/browse/HADOOP-12258
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Critical
 Attachments: HADOOP-12258.000.patch, HADOOP-12258.001.patch, 
 HADOOP-12258.002.patch


 We need to translate java.nio.file.NoSuchFileException to 
 FileNotFoundException to avoid a regression.
 HADOOP-12045 adds NIO to support access time, but NIO throws 
 java.nio.file.NoSuchFileException instead of FileNotFoundException.
 Much Hadoop code depends on FileNotFoundException to decide whether a file 
 exists, for example {{FileContext.util().exists()}}:
 {code}
 public boolean exists(final Path f) throws AccessControlException,
     UnsupportedFileSystemException, IOException {
   try {
     FileStatus fs = FileContext.this.getFileStatus(f);
     assert fs != null;
     return true;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 The same holds for {{FileSystem#exists}}:
 {code}
 public boolean exists(Path f) throws IOException {
   try {
     return getFileStatus(f) != null;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 NoSuchFileException will break these functions.
 Since {{exists}} is one of the most heavily used APIs in FileSystem, this 
 issue is critical.
 Several test failures for TestDeletionService are caused by this issue:
 https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
 https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12310) final memory report sometimes generates spurious errors

2015-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679945#comment-14679945
 ] 

Hadoop QA commented on HADOOP-12310:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
10s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 29s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12749568/HADOOP-12310.HADOOP-12111.00.patch
 |
| git revision | HADOOP-12111 / c393182 |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| Max memory used | 48MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7431/console |


This message was automatically generated.

 final memory report sometimes generates spurious errors
 ---

 Key: HADOOP-12310
 URL: https://issues.apache.org/jira/browse/HADOOP-12310
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
 Attachments: HADOOP-12310.HADOOP-12111.00.patch


 There are spurious sort write pipeline failures coming from the maven memory 
 check on Jenkins.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7422/console
 with bash debug turned on:
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7423/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12310) final memory report sometimes generates spurious errors

2015-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679944#comment-14679944
 ] 

Hadoop QA commented on HADOOP-12310:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7431/console in case of 
problems.

 final memory report sometimes generates spurious errors
 ---

 Key: HADOOP-12310
 URL: https://issues.apache.org/jira/browse/HADOOP-12310
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
 Attachments: HADOOP-12310.HADOOP-12111.00.patch


 There are spurious sort write pipeline failures coming from the maven memory 
 check on Jenkins.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7422/console
 with bash debug turned on:
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7423/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12297) test-patch's basedir and patch-dir must be directories under the user's home in docker mode if using boot2docker

2015-08-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12297:

Summary: test-patch's basedir and patch-dir must be directories under the 
user's home in docker mode if using boot2docker  (was: test-patch docker mode 
fails if patch-dir is not specified or specified as an absolute path)

 test-patch's basedir and patch-dir must be directories under the user's home 
 in docker mode if using boot2docker
 

 Key: HADOOP-12297
 URL: https://issues.apache.org/jira/browse/HADOOP-12297
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki

 Docker mode without a patch-dir option or with an absolute path seems not to 
 work:
 {code}
 [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
 --basedir=/Users/sekikn/dev/hadoop --project=hadoop --docker /tmp/test.patch
 (snip)
 Successfully built 37438de64e81
 JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home 
 does not exist. Dockermode: attempting to switch to another.
 /testptch/launch-test-patch.sh: line 42: cd: 
 /testptch/patchprocess/precommit/: No such file or directory
 /testptch/launch-test-patch.sh: line 45: 
 /testptch/patchprocess/precommit/test-patch.sh: No such file or directory
 {code}
 It succeeds if a relative directory is specified:
 {code}
 [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
 --basedir=/Users/sekikn/dev/hadoop --project=hadoop --docker --patch-dir=foo 
 /tmp/test.patch
 (snip)
 Successfully built 6ea5001987a7
 JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home 
 does not exist. Dockermode: attempting to switch to another.
 
 
 Bootstrapping test harness
 
 
 (snip)
 +1 overall
 (snip)
 
 
   Finished build.
 
 
 {code}
 If my setup or usage is wrong, please close this JIRA as invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12297) test-patch docker mode fails if patch-dir is not specified or specified as an absolute path

2015-08-10 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679883#comment-14679883
 ] 

Kengo Seki commented on HADOOP-12297:
-

Sorry, I guessed wrong. It's not a problem with test-patch, but probably a 
limitation of boot2docker (I tested this feature on a Mac).

https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/

says that docker run's -v option only works for directories under /Users. 
Indeed, test-patch succeeds if an absolute path under /Users is specified as 
patch-dir. Not only patch-dir but also basedir must be a directory under 
/Users.

 test-patch docker mode fails if patch-dir is not specified or specified as an 
 absolute path
 ---

 Key: HADOOP-12297
 URL: https://issues.apache.org/jira/browse/HADOOP-12297
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki

 Docker mode without a patch-dir option or with an absolute path seems not to 
 work:
 {code}
 [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
 --basedir=/Users/sekikn/dev/hadoop --project=hadoop --docker /tmp/test.patch
 (snip)
 Successfully built 37438de64e81
 JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home 
 does not exist. Dockermode: attempting to switch to another.
 /testptch/launch-test-patch.sh: line 42: cd: 
 /testptch/patchprocess/precommit/: No such file or directory
 /testptch/launch-test-patch.sh: line 45: 
 /testptch/patchprocess/precommit/test-patch.sh: No such file or directory
 {code}
 It succeeds if a relative directory is specified:
 {code}
 [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
 --basedir=/Users/sekikn/dev/hadoop --project=hadoop --docker --patch-dir=foo 
 /tmp/test.patch
 (snip)
 Successfully built 6ea5001987a7
 JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home 
 does not exist. Dockermode: attempting to switch to another.
 
 
 Bootstrapping test harness
 
 
 (snip)
 +1 overall
 (snip)
 
 
   Finished build.
 
 
 {code}
 If my setup or usage is wrong, please close this JIRA as invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12310) final memory report sometimes generates spurious errors

2015-08-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12310:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

 final memory report sometimes generates spurious errors
 ---

 Key: HADOOP-12310
 URL: https://issues.apache.org/jira/browse/HADOOP-12310
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
 Attachments: HADOOP-12310.HADOOP-12111.00.patch


 There are spurious sort write pipeline failures coming from the maven memory 
 check on Jenkins.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7422/console
 with bash debug turned on:
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7423/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12310) final memory report sometimes generates spurious errors

2015-08-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12310:

Attachment: HADOOP-12310.HADOOP-12111.00.patch

Attaching a patch. Because the failure is difficult to reproduce, I only 
confirmed that the final memory report still works.

 final memory report sometimes generates spurious errors
 ---

 Key: HADOOP-12310
 URL: https://issues.apache.org/jira/browse/HADOOP-12310
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
 Attachments: HADOOP-12310.HADOOP-12111.00.patch


 There are spurious sort write pipeline failures coming from the maven memory 
 check on Jenkins.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7422/console
 with bash debug turned on:
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7423/console



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12275) releasedocmaker: unreleased should still be dated

2015-08-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12275:

  Labels: newbie  (was: )
Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

 releasedocmaker: unreleased should still be dated
 -

 Key: HADOOP-12275
 URL: https://issues.apache.org/jira/browse/HADOOP-12275
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-12275.HADOOP-12111.00.patch


 releasedocmaker should still date unreleased versions. Instead of 
 {{Unreleased}} it should be {{Unreleased (as of YYYY-MM-DD)}}. This way, if 
 versions are later released, there will be no confusion.
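 The requested label is straightforward to build; a sketch in Java 
 (releasedocmaker itself is a Python script, so this only illustrates the 
 proposed format, not the actual implementation):
 {code}
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class UnreleasedLabelSketch {
  // Builds the proposed "Unreleased (as of YYYY-MM-DD)" label for a
  // version that has no release date yet.
  static String label(LocalDate asOf) {
    return "Unreleased (as of "
        + asOf.format(DateTimeFormatter.ISO_LOCAL_DATE) + ")";
  }
}
 {code}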



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12275) releasedocmaker: unreleased should still be dated

2015-08-10 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12275:

Attachment: HADOOP-12275.HADOOP-12111.00.patch

Attaching a patch.

 releasedocmaker: unreleased should still be dated
 -

 Key: HADOOP-12275
 URL: https://issues.apache.org/jira/browse/HADOOP-12275
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-12275.HADOOP-12111.00.patch


 releasedocmaker should still date unreleased versions. Instead of 
 {{Unreleased}} it should be {{Unreleased (as of YYYY-MM-DD)}}. This way, if 
 versions are later released, there will be no confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12275) releasedocmaker: unreleased should still be dated

2015-08-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14679972#comment-14679972
 ] 

Hadoop QA commented on HADOOP-12275:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12749572/HADOOP-12275.HADOOP-12111.00.patch
 |
| Optional Tests |  |
| git revision | HADOOP-12111 / c393182 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7432/console |


This message was automatically generated.

 releasedocmaker: unreleased should still be dated
 -

 Key: HADOOP-12275
 URL: https://issues.apache.org/jira/browse/HADOOP-12275
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-12275.HADOOP-12111.00.patch


 releasedocmaker should still date unreleased versions. Instead of 
 {{Unreleased}} it should be {{Unreleased (as of YYYY-MM-DD)}}. This way, if 
 versions are later released, there will be no confusion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)