[jira] [Created] (HADOOP-12268) AbstractContractAppendTest#testRenameFileBeingAppended missed rename operation.

2015-07-24 Thread zhihai xu (JIRA)
zhihai xu created HADOOP-12268:
--

 Summary: AbstractContractAppendTest#testRenameFileBeingAppended 
missed rename operation.
 Key: HADOOP-12268
 URL: https://issues.apache.org/jira/browse/HADOOP-12268
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu


AbstractContractAppendTest#testRenameFileBeingAppended misses the rename operation. 
Also, TestHDFSContractAppend can pass the original test after fixing the issue in 
{{AbstractContractAppendTest#testRenameFileBeingAppended}}.
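The scenario the contract test is meant to exercise — renaming a file while an append stream is still open on it — can be sketched against the local filesystem with plain java.nio. This is a simplified stand-in for the Hadoop FileSystem API, not the contract-test code itself; the class and file names are illustrative, and the inode-following behaviour assumes a POSIX filesystem:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class RenameWhileOpen {

    // Open a file for append, rename it while the stream is still open,
    // keep writing, and return what ends up at the destination path.
    static String renameDuringAppend() throws IOException {
        Path dir = Files.createTempDirectory("append-test");
        Path src = dir.resolve("target");
        Path dst = dir.resolve("renamed");
        Files.write(src, "head".getBytes(StandardCharsets.UTF_8));

        try (OutputStream out = Files.newOutputStream(src, StandardOpenOption.APPEND)) {
            // The rename step the contract test is supposed to perform.
            Files.move(src, dst);
            out.write("-tail".getBytes(StandardCharsets.UTF_8));
        }
        // On POSIX the open stream follows the inode, so the appended
        // bytes land at the renamed path.
        return new String(Files.readAllBytes(dst), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(renameDuringAppend());
    }
}
```

Without the `Files.move` call the test would merely append to an unmoved file, which is the gap the patch addresses.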



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12269) Update aws-sdk dependency version

2015-07-24 Thread Thomas Demoor (JIRA)
Thomas Demoor created HADOOP-12269:
--

 Summary: Update aws-sdk dependency version
 Key: HADOOP-12269
 URL: https://issues.apache.org/jira/browse/HADOOP-12269
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor








[jira] [Updated] (HADOOP-12268) AbstractContractAppendTest#testRenameFileBeingAppended missed rename operation.

2015-07-24 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12268:
---
Attachment: HADOOP-12268.000.patch

 AbstractContractAppendTest#testRenameFileBeingAppended missed rename 
 operation.
 ---

 Key: HADOOP-12268
 URL: https://issues.apache.org/jira/browse/HADOOP-12268
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu
 Attachments: HADOOP-12268.000.patch


 AbstractContractAppendTest#testRenameFileBeingAppended misses the rename 
 operation. Also, TestHDFSContractAppend can pass the original test after fixing 
 the issue in {{AbstractContractAppendTest#testRenameFileBeingAppended}}.





[jira] [Updated] (HADOOP-12268) AbstractContractAppendTest#testRenameFileBeingAppended missed rename operation.

2015-07-24 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12268:
---
Status: Patch Available  (was: Open)

 AbstractContractAppendTest#testRenameFileBeingAppended missed rename 
 operation.
 ---

 Key: HADOOP-12268
 URL: https://issues.apache.org/jira/browse/HADOOP-12268
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu
 Attachments: HADOOP-12268.000.patch


 AbstractContractAppendTest#testRenameFileBeingAppended misses the rename 
 operation. Also, TestHDFSContractAppend can pass the original test after fixing 
 the issue in {{AbstractContractAppendTest#testRenameFileBeingAppended}}.





[jira] [Updated] (HADOOP-7824) Native IO uses wrong constants almost everywhere

2015-07-24 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated HADOOP-7824:
--
Status: Patch Available  (was: Reopened)

 Native IO uses wrong constants almost everywhere
 

 Key: HADOOP-7824
 URL: https://issues.apache.org/jira/browse/HADOOP-7824
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.0.0-alpha, 0.23.0, 1.0.3, 0.20.205.0, 0.20.204.0, 3.0.0
 Environment: Mac OS X, Linux, Solaris, Windows, ... 
Reporter: Dmytro Shteflyuk
Assignee: Martin Walsh
  Labels: hadoop
 Fix For: 2.8.0

 Attachments: HADOOP-7824.001.patch, HADOOP-7824.002.patch, 
 HADOOP-7824.patch, HADOOP-7824.patch, hadoop-7824.txt


 Constants like O_CREAT, O_EXCL, etc. have different values on OS X and many 
 other operating systems.





[jira] [Updated] (HADOOP-12268) AbstractContractAppendTest#testRenameFileBeingAppended misses rename operation.

2015-07-24 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12268:
---
Summary: AbstractContractAppendTest#testRenameFileBeingAppended misses 
rename operation.  (was: AbstractContractAppendTest#testRenameFileBeingAppended 
missed rename operation.)

 AbstractContractAppendTest#testRenameFileBeingAppended misses rename 
 operation.
 ---

 Key: HADOOP-12268
 URL: https://issues.apache.org/jira/browse/HADOOP-12268
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu
 Attachments: HADOOP-12268.000.patch


 AbstractContractAppendTest#testRenameFileBeingAppended misses the rename 
 operation. Also, TestHDFSContractAppend can pass the original test after fixing 
 the issue in {{AbstractContractAppendTest#testRenameFileBeingAppended}}.





[jira] [Updated] (HADOOP-12268) AbstractContractAppendTest#testRenameFileBeingAppended misses rename operation.

2015-07-24 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12268:
---
Description: AbstractContractAppendTest#testRenameFileBeingAppended misses 
rename operation. Also TestHDFSContractAppend can pass the original test  after 
fix the issue at {{AbstractContractAppendTest#testRenameFileBeingAppended}}.  
(was: AbstractContractAppendTest#testRenameFileBeingAppended missed rename 
operation. Also TestHDFSContractAppend can pass the original test  after fix 
the issue at {{AbstractContractAppendTest#testRenameFileBeingAppended}}.)

 AbstractContractAppendTest#testRenameFileBeingAppended misses rename 
 operation.
 ---

 Key: HADOOP-12268
 URL: https://issues.apache.org/jira/browse/HADOOP-12268
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu
 Attachments: HADOOP-12268.000.patch


 AbstractContractAppendTest#testRenameFileBeingAppended misses the rename 
 operation. Also, TestHDFSContractAppend can pass the original test after fixing 
 the issue in {{AbstractContractAppendTest#testRenameFileBeingAppended}}.





[jira] [Commented] (HADOOP-12258) Need translate java.nio.file.NoSuchFileException to FileNotFoundException in DeprecatedRawLocalFileStatus constructor

2015-07-24 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640149#comment-14640149
 ] 

zhihai xu commented on HADOOP-12258:


Hi [~ste...@apache.org], [~cnauroth], thanks for the comments! Yes, it makes 
sense to add contract tests for setTimes and getFileStatus.
I just read the document [Testing the Filesystem 
Contract|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md].
It is an excellent guide, and I figured out how to add the contract tests based 
on it.

 Need translate java.nio.file.NoSuchFileException to FileNotFoundException in 
 DeprecatedRawLocalFileStatus constructor
 -

 Key: HADOOP-12258
 URL: https://issues.apache.org/jira/browse/HADOOP-12258
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Critical
 Attachments: HADOOP-12258.000.patch


 We need to translate java.nio.file.NoSuchFileException to FileNotFoundException 
 in the DeprecatedRawLocalFileStatus constructor.
 HADOOP-12045 added nio to support access time, but nio raises 
 java.nio.file.NoSuchFileException instead of FileNotFoundException.
 Much Hadoop code depends on FileNotFoundException to decide whether a file 
 exists, for example {{FileContext.util().exists()}}:
 {code}
 public boolean exists(final Path f) throws AccessControlException,
     UnsupportedFileSystemException, IOException {
   try {
     FileStatus fs = FileContext.this.getFileStatus(f);
     assert fs != null;
     return true;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 same for {{FileSystem#exists}}
 {code}
 public boolean exists(Path f) throws IOException {
   try {
     return getFileStatus(f) != null;
   } catch (FileNotFoundException e) {
     return false;
   }
 }
 {code}
 NoSuchFileException will break these functions.
 Since {{exists}} is one of the most heavily used APIs in FileSystem, this 
 issue is very critical.
 Several test failures for TestDeletionService are caused by this issue:
 https://builds.apache.org/job/PreCommit-YARN-Build/8630/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testRelativeDelete/
 https://builds.apache.org/job/PreCommit-YARN-Build/8632/testReport/org.apache.hadoop.yarn.server.nodemanager/TestDeletionService/testAbsDelete/
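A minimal sketch of the needed translation, using plain java.nio against the local filesystem. The helper names and the {{exists}} wrapper are illustrative, not the actual DeprecatedRawLocalFileStatus code; only the catch-and-rethrow pattern mirrors the proposed fix:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;

public class NioExceptionTranslation {

    // Read the access time via nio, but surface a missing file as the
    // FileNotFoundException that existing Hadoop callers expect.
    static long lastAccessTime(String path) throws IOException {
        try {
            FileTime t = (FileTime) Files.getAttribute(Paths.get(path), "lastAccessTime");
            return t.toMillis();
        } catch (NoSuchFileException e) {
            throw (FileNotFoundException) new FileNotFoundException(path).initCause(e);
        }
    }

    // Mirrors the exists() idiom quoted above. It only works because the
    // catch clause sees FileNotFoundException; NoSuchFileException is an
    // IOException but NOT a subclass of FileNotFoundException, so without
    // the translation exists() would propagate an error instead.
    static boolean exists(String path) throws IOException {
        try {
            lastAccessTime(path);
            return true;
        } catch (FileNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(exists("/definitely/not/a/real/path"));
    }
}
```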





[jira] [Commented] (HADOOP-12267) s3a failure due to integer overflow bug in AWS SDK

2015-07-24 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640184#comment-14640184
 ] 

Thomas Demoor commented on HADOOP-12267:


Hi Aaron,

In HADOOP-11684 I have bumped to 1.9.x (we have been testing this for a month 
now and all is well). Note that other bugs fixed in the aws-sdk (the multi-part 
threshold changing from int to long) require some code changes in s3a.

You will see in the comments that [~ste...@apache.org] requested pulling the 
aws-sdk upgrade out into a separate patch. I am doing that today and will link 
to the new issue then.

Another major benefit of 1.9+ is that s3 is now a separate library, so we no 
longer need to pull in the entire SDK.

 s3a failure due to integer overflow bug in AWS SDK
 --

 Key: HADOOP-12267
 URL: https://issues.apache.org/jira/browse/HADOOP-12267
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Aaron Fabbri
Assignee: Aaron Fabbri
 Attachments: HADOOP-12267.2.6.0.001.patch, 
 HADOOP-12267.2.7.1.001.patch


 Under high load writing to Amazon AWS S3 storage, a client can be throttled 
 enough to encounter 24 retries in a row.
 The Amazon HTTP client code (in the aws-java-sdk jar) has a bug in its 
 exponential-backoff retry code that causes integer overflow and a call to 
 Thread.sleep() with a negative value, which makes the client bail out with an 
 exception (see below).
 Bug has been fixed in aws-java-sdk:
 https://github.com/aws/aws-sdk-java/pull/388
 We need to pick this up for hadoop-tools/hadoop-aws.
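The overflow is easy to reproduce in plain Java. In the sketch below, the base delay and the cap are illustrative assumptions, not the SDK's actual constants; only the 32-bit shift-and-multiply pattern mirrors the bug:

```java
public class BackoffOverflow {

    static final int BASE_DELAY_MS = 300; // illustrative base delay, not the SDK's value

    // Buggy pattern: the shift and multiply happen in 32-bit int math,
    // so the computed delay wraps negative once the retry count grows.
    static int delayBuggy(int retries) {
        return (1 << retries) * BASE_DELAY_MS;
    }

    // Fixed pattern: compute in 64 bits and cap the result before sleeping.
    static long delayFixed(int retries) {
        long d = (1L << Math.min(retries, 30)) * BASE_DELAY_MS;
        return Math.min(d, 20_000L); // illustrative cap
    }

    public static void main(String[] args) {
        // At 23 retries the 32-bit product exceeds Integer.MAX_VALUE and wraps
        // negative; Thread.sleep(delayBuggy(23)) would then throw
        // IllegalArgumentException: timeout value is negative.
        System.out.println(delayBuggy(23));
        System.out.println(delayFixed(23));
    }
}
```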
 Error: java.io.IOException: File copy failed: hdfs://path-redacted -> 
 s3a://path-redacted
 at 
 org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:284)
 at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:252) 
 at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)  
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) 
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) 
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
 at java.security.AccessController.doPrivileged(Native Method) 
 at javax.security.auth.Subject.doAs(Subject.java:415) 
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163) Caused by: 
 java.io.IOException: Couldn't run retriable-command: Copying 
 hdfs://path-redacted to s3a://path-redacted
 at 
 org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
  
 at 
 org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:280)
  
 ... 10 more 
 Caused by: com.amazonaws.AmazonClientException: Unable to complete transfer: 
 timeout value is negative
 at 
 com.amazonaws.services.s3.transfer.internal.AbstractTransfer.unwrapExecutionException(AbstractTransfer.java:300)
 at 
 com.amazonaws.services.s3.transfer.internal.AbstractTransfer.rethrowExecutionException(AbstractTransfer.java:284)
 at 
 com.amazonaws.services.s3.transfer.internal.CopyImpl.waitForCopyResult(CopyImpl.java:67)
  
 at org.apache.hadoop.fs.s3a.S3AFileSystem.copyFile(S3AFileSystem.java:943) 
 at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:357) 
 at 
 org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.promoteTmpToTarget(RetriableFileCopyCommand.java:220)
 at 
 org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:137)
  
 at 
 org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:100)
 at 
 org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
  
 ... 11 more 
 Caused by: java.lang.IllegalArgumentException: timeout value is negative
 at java.lang.Thread.sleep(Native Method) 
 at 
 com.amazonaws.http.AmazonHttpClient.pauseBeforeNextRetry(AmazonHttpClient.java:864)
 at 
 com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:353) 
 at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232) 
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
 at 
 com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:1507)
 at 
 com.amazonaws.services.s3.transfer.internal.CopyCallable.copyInOneChunk(CopyCallable.java:143)
 at 
 com.amazonaws.services.s3.transfer.internal.CopyCallable.call(CopyCallable.java:131)
  
 at 
 com.amazonaws.services.s3.transfer.internal.CopyMonitor.copy(CopyMonitor.java:189)
  
 at 
 com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:134)
  
 at 
 com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:46)
   
 at 

[jira] [Commented] (HADOOP-7824) Native IO uses wrong constants almost everywhere

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640422#comment-14640422
 ] 

Hadoop QA commented on HADOOP-7824:
---

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 41s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m  1s | The applied patch generated  
67 new checkstyle issues (total was 81, now 145). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 22s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m  2s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m  2s | Tests failed in 
hadoop-common. |
| {color:green}+1{color} | mapreduce tests |   0m 19s | Tests passed in 
hadoop-mapreduce-client-shuffle. |
| {color:red}-1{color} | hdfs tests | 160m 53s | Tests failed in hadoop-hdfs. |
| | | 230m 35s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12746537/HADOOP-7824.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / e202efa |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7335/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7335/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-mapreduce-client-shuffle test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7335/artifact/patchprocess/testrun_hadoop-mapreduce-client-shuffle.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7335/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7335/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7335/console |


This message was automatically generated.

 Native IO uses wrong constants almost everywhere
 

 Key: HADOOP-7824
 URL: https://issues.apache.org/jira/browse/HADOOP-7824
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.20.204.0, 0.20.205.0, 1.0.3, 0.23.0, 2.0.0-alpha, 3.0.0
 Environment: Mac OS X, Linux, Solaris, Windows, ... 
Reporter: Dmytro Shteflyuk
Assignee: Martin Walsh
  Labels: hadoop
 Fix For: 2.8.0

 Attachments: HADOOP-7824.001.patch, HADOOP-7824.002.patch, 
 HADOOP-7824.patch, HADOOP-7824.patch, hadoop-7824.txt


 Constants like O_CREAT, O_EXCL, etc. have different values on OS X and many 
 other operating systems.





[jira] [Commented] (HADOOP-12269) Update aws-sdk dependency version

2015-07-24 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640248#comment-14640248
 ] 

Brahma Reddy Battula commented on HADOOP-12269:
---

Yes, update from 1.7.4 to 1.10.6.

 Update aws-sdk dependency version
 -

 Key: HADOOP-12269
 URL: https://issues.apache.org/jira/browse/HADOOP-12269
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor







[jira] [Commented] (HADOOP-12189) Improve CallQueueManager#swapQueue to make queue elements drop nearly impossible.

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640294#comment-14640294
 ] 

Hudson commented on HADOOP-12189:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #996 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/996/])
HADOOP-12189. Improve CallQueueManager#swapQueue to make queue elements drop 
nearly impossible. Contributed by Zhihai Xu. (wang: rev 
6736a1ab7033523ed5f304fdfed46d7f348665b4)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Improve CallQueueManager#swapQueue to make queue elements drop nearly 
 impossible.
 -

 Key: HADOOP-12189
 URL: https://issues.apache.org/jira/browse/HADOOP-12189
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, test
Affects Versions: 2.7.1
Reporter: zhihai xu
Assignee: zhihai xu
 Fix For: 2.8.0

 Attachments: HADOOP-12189.000.patch, HADOOP-12189.001.patch, 
 HADOOP-12189.none_guarantee.000.patch, HADOOP-12189.none_guarantee.001.patch, 
 HADOOP-12189.none_guarantee.002.patch


 Improve CallQueueManager#swapQueue to make dropping queue elements nearly 
 impossible. This is a trade-off between performance and functionality: even 
 in the very rare situation where we drop one element, it is not the end of 
 the world, since the client can still recover via timeout.
 CallQueueManager may sometimes drop elements from the queue when calling 
 {{swapQueue}}.
 The following test failure from TestCallQueueManager shows that some elements 
 in the queue were dropped:
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7150/testReport/org.apache.hadoop.ipc/TestCallQueueManager/testSwapUnderContention/
 {code}
 java.lang.AssertionError: expected:<27241> but was:<27245>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.ipc.TestCallQueueManager.testSwapUnderContention(TestCallQueueManager.java:220)
 {code}
 It looks like the elements in the queue are dropped by 
 {{CallQueueManager#swapQueue}}.
 Looking at the implementation of {{CallQueueManager#swapQueue}}, there is a 
 possibility that elements in the queue are dropped: if the queue is full, 
 the thread calling {{CallQueueManager#put}} is blocked for a long time. It 
 may put the element into the old queue after the queue in {{takeRef}} has been 
 changed by swapQueue, and this element in the old queue is then dropped.
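The race can be modelled in a few lines of plain Java. This is a deliberately simplified, single-threaded stand-in for CallQueueManager with the producer's stall made explicit, not the actual Hadoop code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

public class SwapDropSketch {

    // Producers put into whatever queue the reference currently points at.
    static final AtomicReference<BlockingQueue<String>> putRef =
            new AtomicReference<>(new ArrayBlockingQueue<>(8));

    // Returns how many calls are visible in the live queue after the race.
    static int raceAndCountDelivered() throws InterruptedException {
        // 1. A producer reads the reference, then stalls before its put
        //    completes (in the real code: blocked because the queue is full).
        BlockingQueue<String> seenByProducer = putRef.get();

        // 2. Meanwhile swapQueue installs a fresh queue and drains the old
        //    one into it.
        BlockingQueue<String> fresh = new ArrayBlockingQueue<>(8);
        seenByProducer.drainTo(fresh);
        putRef.set(fresh);

        // 3. The stalled producer finally completes its put -- into the old,
        //    now-abandoned queue. That call is silently dropped.
        seenByProducer.put("call-42");

        return putRef.get().size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(raceAndCountDelivered()); // 0: the call never reached the live queue
    }
}
```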





[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640290#comment-14640290
 ] 

Hudson commented on HADOOP-12009:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #996 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/996/])
HADOOP-12009: Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman) (jghoman: 
rev ab3197c20452e0dd908193d6854c204e6ee34645)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, 
 HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}
 /**
  * List the statuses of the files/directories in the given path if the path
  * is a directory.
  *
  * @param f given path
  * @return the statuses of the files/directories in the given patch
  * @throws FileNotFoundException when the path does not exist;
  *         IOException see specific implementation
  */
 public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                        IOException;
 {code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}
 Path[] testDirs = { path("/test/hadoop/a"),
                     path("/test/hadoop/b"),
                     path("/test/hadoop/c/1"), };

 // ...

 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());
 {code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.
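The order-independent check the issue argues for can be sketched against the local filesystem with java.nio. The class and helper names are illustrative; the real fix belongs in FileSystemContractBaseTest:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class UnorderedListing {

    // Collect just the entry names of a directory listing into a Set, so a
    // test can compare contents without assuming any particular order.
    static Set<String> listNames(Path dir) throws IOException {
        try (Stream<Path> entries = Files.list(dir)) {
            return entries.map(p -> p.getFileName().toString())
                          .collect(Collectors.toSet());
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("liststatus");
        for (String name : Arrays.asList("a", "b", "c")) {
            Files.createFile(dir.resolve(name));
        }
        // Order-independent assertion: passes however the filesystem sorts
        // its entries, which is all the contract actually guarantees.
        Set<String> expected = new HashSet<>(Arrays.asList("a", "b", "c"));
        System.out.println(listNames(dir).equals(expected));
    }
}
```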





[jira] [Commented] (HADOOP-12161) Add getStoragePolicy API to the FileSystem interface

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640289#comment-14640289
 ] 

Hudson commented on HADOOP-12161:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #996 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/996/])
HADOOP-12161. Add getStoragePolicy API to the FileSystem interface. 
(Contributed by Brahma Reddy Battula) (arp: rev 
adfa34ff9992295a6d2496b259d8c483ed90b566)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java


 Add getStoragePolicy API to the FileSystem interface
 

 Key: HADOOP-12161
 URL: https://issues.apache.org/jira/browse/HADOOP-12161
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HADOOP-12161-001.patch, HADOOP-12161-002.patch, 
 HADOOP-12161-003.patch, HADOOP-12161-004.patch


 HDFS-8345 added {{FileSystem#getAllStoragePolicies}} and 
 {{FileSystem#setStoragePolicy}}. Jira to
 # Add a corresponding {{FileSystem#getStoragePolicy}} to query the storage 
 policy for a given file/directory.
 # Add corresponding implementation for HDFS i.e. 
 {{DistributedFileSystem#getStoragePolicy}}.
 # Update the [FileSystem 
 specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html].
  This will require editing 
 _hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md_.





[jira] [Updated] (HADOOP-12269) Update aws-sdk dependency version

2015-07-24 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor updated HADOOP-12269:
---
Description: This was originally part of HADOOP-11684

 Update aws-sdk dependency version
 -

 Key: HADOOP-12269
 URL: https://issues.apache.org/jira/browse/HADOOP-12269
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor

 This was originally part of HADOOP-11684





[jira] [Commented] (HADOOP-12161) Add getStoragePolicy API to the FileSystem interface

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640634#comment-14640634
 ] 

Hudson commented on HADOOP-12161:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2212 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2212/])
HADOOP-12161. Add getStoragePolicy API to the FileSystem interface. 
(Contributed by Brahma Reddy Battula) (arp: rev 
adfa34ff9992295a6d2496b259d8c483ed90b566)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Add getStoragePolicy API to the FileSystem interface
 

 Key: HADOOP-12161
 URL: https://issues.apache.org/jira/browse/HADOOP-12161
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HADOOP-12161-001.patch, HADOOP-12161-002.patch, 
 HADOOP-12161-003.patch, HADOOP-12161-004.patch


 HDFS-8345 added {{FileSystem#getAllStoragePolicies}} and 
 {{FileSystem#setStoragePolicy}}. Jira to
 # Add a corresponding {{FileSystem#getStoragePolicy}} to query the storage 
 policy for a given file/directory.
 # Add corresponding implementation for HDFS i.e. 
 {{DistributedFileSystem#getStoragePolicy}}.
 # Update the [FileSystem 
 specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html].
  This will require editing 
 _hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md_.





[jira] [Commented] (HADOOP-12189) Improve CallQueueManager#swapQueue to make queue elements drop nearly impossible.

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640639#comment-14640639
 ] 

Hudson commented on HADOOP-12189:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2212 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2212/])
HADOOP-12189. Improve CallQueueManager#swapQueue to make queue elements drop 
nearly impossible. Contributed by Zhihai Xu. (wang: rev 
6736a1ab7033523ed5f304fdfed46d7f348665b4)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java


 Improve CallQueueManager#swapQueue to make queue elements drop nearly 
 impossible.
 -

 Key: HADOOP-12189
 URL: https://issues.apache.org/jira/browse/HADOOP-12189
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, test
Affects Versions: 2.7.1
Reporter: zhihai xu
Assignee: zhihai xu
 Fix For: 2.8.0

 Attachments: HADOOP-12189.000.patch, HADOOP-12189.001.patch, 
 HADOOP-12189.none_guarantee.000.patch, HADOOP-12189.none_guarantee.001.patch, 
 HADOOP-12189.none_guarantee.002.patch


 Improve CallQueueManager#swapQueue to make dropping queue elements nearly 
 impossible. This is a trade-off between performance and functionality: even 
 in the very rare situation where we drop one element, it is not the end of 
 the world, since the client can still recover via timeout.
 CallQueueManager may sometimes drop elements from the queue when calling 
 {{swapQueue}}.
 The following test failure from TestCallQueueManager shows that some elements 
 in the queue were dropped:
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7150/testReport/org.apache.hadoop.ipc/TestCallQueueManager/testSwapUnderContention/
 {code}
 java.lang.AssertionError: expected:<27241> but was:<27245>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.ipc.TestCallQueueManager.testSwapUnderContention(TestCallQueueManager.java:220)
 {code}
 It looked like the elements in the queue were dropped by 
 {{CallQueueManager#swapQueue}}.
 Looking at the implementation of {{CallQueueManager#swapQueue}}, there is a 
 possibility that elements in the queue are dropped: if the queue is full, the 
 calling thread in {{CallQueueManager#put}} blocks for a long time and may put 
 the element into the old queue after the queue in {{takeRef}} has been 
 changed by swapQueue; that element in the old queue is then dropped.
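 The race described above can be sketched with a simplified two-reference 
 design (modeled loosely on CallQueueManager's {{putRef}}/{{takeRef}}; this is 
 not the actual Hadoop code). A producer reads the queue reference and may 
 then block on a full queue; if the references are swapped in that window, the 
 element lands in the old queue and can be lost:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

// Simplified, illustrative sketch only: putRef/takeRef mirror the names in
// CallQueueManager, but this class omits the draining and synchronization
// the real implementation performs.
class SwapSketch {
    private final AtomicReference<BlockingQueue<Integer>> putRef;
    private final AtomicReference<BlockingQueue<Integer>> takeRef;

    SwapSketch(int capacity) {
        BlockingQueue<Integer> q = new LinkedBlockingQueue<>(capacity);
        putRef = new AtomicReference<>(q);
        takeRef = new AtomicReference<>(q);
    }

    void put(int e) throws InterruptedException {
        // Race window: putRef may be swapped between this read and the
        // (possibly blocking) put() call below.
        BlockingQueue<Integer> q = putRef.get();
        q.put(e);
    }

    // Swap in a fresh queue; anything still sitting in the returned old
    // queue after the swap is exactly what risks being dropped.
    BlockingQueue<Integer> swapQueue(int capacity) {
        BlockingQueue<Integer> newQ = new LinkedBlockingQueue<>(capacity);
        putRef.set(newQ);
        return takeRef.getAndSet(newQ);
    }
}
```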





[jira] [Commented] (HADOOP-12265) Pylint should be installed in test-patch docker environment

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640652#comment-14640652
 ] 

Hadoop QA commented on HADOOP-12265:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  1s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747048/HADOOP-12265.HADOOP-12111.00.patch
 |
| Optional Tests | shellcheck site |
| git revision | HADOOP-12111 / 1e4f361 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7336/console |


This message was automatically generated.

 Pylint should be installed in test-patch docker environment
 ---

 Key: HADOOP-12265
 URL: https://issues.apache.org/jira/browse/HADOOP-12265
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Kengo Seki
Assignee: Kengo Seki
 Attachments: HADOOP-12265.HADOOP-12111.00.patch


 HADOOP-12207 added the pylint plugin to test-patch, but pylint won't be 
 installed in the Docker environment because I forgot to modify the 
 Dockerfile :) It must be updated.





[jira] [Updated] (HADOOP-12269) Update aws-sdk dependency to 1.10.6

2015-07-24 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor updated HADOOP-12269:
---
Summary: Update aws-sdk dependency to 1.10.6  (was: Update aws-sdk 
dependency version)

 Update aws-sdk dependency to 1.10.6
 ---

 Key: HADOOP-12269
 URL: https://issues.apache.org/jira/browse/HADOOP-12269
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor

 This was originally part of HADOOP-11684, pulling out to this separate 
 subtask as requested by [~ste...@apache.org]





[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640559#comment-14640559
 ] 

Hudson commented on HADOOP-12009:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #255 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/255/])
HADOOP-12009: Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman) (jghoman: 
rev ab3197c20452e0dd908193d6854c204e6ee34645)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* hadoop-common-project/hadoop-common/CHANGES.txt


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, 
 HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}  /**
* List the statuses of the files/directories in the given path if the path 
 is
* a directory.
* 
* @param f given path
* @return the statuses of the files/directories in the given patch
* @throws FileNotFoundException when the path does not exist;
* IOException see specific implementation
*/
   public abstract FileStatus[] listStatus(Path f) throws 
 FileNotFoundException, 
  IOException;{code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}Path[] testDirs = { path("/test/hadoop/a"),
 path("/test/hadoop/b"),
 path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());{code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.
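 One way the contract test could accept any ordering is sketched below: sort 
 copies of the expected and actual path arrays before comparing, instead of 
 asserting position by position. (Illustrative only; the actual fix lives in 
 FileSystemContractBaseTest, and the helper name here is made up.)

```java
import java.util.Arrays;

// Order-insensitive comparison of two path lists: clone so the caller's
// arrays are untouched, normalize by sorting, then compare element-wise.
class ListStatusCheck {
    static boolean sameEntries(String[] expected, String[] actual) {
        String[] e = expected.clone();
        String[] a = actual.clone();
        Arrays.sort(e);
        Arrays.sort(a);
        return Arrays.equals(e, a);
    }
}
```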





[jira] [Commented] (HADOOP-12189) Improve CallQueueManager#swapQueue to make queue elements drop nearly impossible.

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640563#comment-14640563
 ] 

Hudson commented on HADOOP-12189:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #255 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/255/])
HADOOP-12189. Improve CallQueueManager#swapQueue to make queue elements drop 
nearly impossible. Contributed by Zhihai Xu. (wang: rev 
6736a1ab7033523ed5f304fdfed46d7f348665b4)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Improve CallQueueManager#swapQueue to make queue elements drop nearly 
 impossible.
 -

 Key: HADOOP-12189
 URL: https://issues.apache.org/jira/browse/HADOOP-12189
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, test
Affects Versions: 2.7.1
Reporter: zhihai xu
Assignee: zhihai xu
 Fix For: 2.8.0

 Attachments: HADOOP-12189.000.patch, HADOOP-12189.001.patch, 
 HADOOP-12189.none_guarantee.000.patch, HADOOP-12189.none_guarantee.001.patch, 
 HADOOP-12189.none_guarantee.002.patch


 Improve CallQueueManager#swapQueue to make dropping queue elements nearly 
 impossible. This is a trade-off between performance and functionality: even 
 in the very rare case where one element is dropped, it is not the end of the 
 world, since the client can still recover via timeout.
 CallQueueManager may sometimes drop elements from the queue when 
 {{swapQueue}} is called. 
 The following test failure from TestCallQueueManager showed that some 
 elements in the queue were dropped.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7150/testReport/org.apache.hadoop.ipc/TestCallQueueManager/testSwapUnderContention/
 {code}
 java.lang.AssertionError: expected:<27241> but was:<27245>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.ipc.TestCallQueueManager.testSwapUnderContention(TestCallQueueManager.java:220)
 {code}
 It looked like the elements in the queue were dropped by 
 {{CallQueueManager#swapQueue}}.
 Looking at the implementation of {{CallQueueManager#swapQueue}}, there is a 
 possibility that elements in the queue are dropped: if the queue is full, the 
 calling thread in {{CallQueueManager#put}} blocks for a long time and may put 
 the element into the old queue after the queue in {{takeRef}} has been 
 changed by swapQueue; that element in the old queue is then dropped.





[jira] [Commented] (HADOOP-12161) Add getStoragePolicy API to the FileSystem interface

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640558#comment-14640558
 ] 

Hudson commented on HADOOP-12161:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #255 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/255/])
HADOOP-12161. Add getStoragePolicy API to the FileSystem interface. 
(Contributed by Brahma Reddy Battula) (arp: rev 
adfa34ff9992295a6d2496b259d8c483ed90b566)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java


 Add getStoragePolicy API to the FileSystem interface
 

 Key: HADOOP-12161
 URL: https://issues.apache.org/jira/browse/HADOOP-12161
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HADOOP-12161-001.patch, HADOOP-12161-002.patch, 
 HADOOP-12161-003.patch, HADOOP-12161-004.patch


 HDFS-8345 added {{FileSystem#getAllStoragePolicies}} and 
 {{FileSystem#setStoragePolicy}}. Jira to
 # Add a corresponding {{FileSystem#getStoragePolicy}} to query the storage 
 policy for a given file/directory.
 # Add corresponding implementation for HDFS i.e. 
 {{DistributedFileSystem#getStoragePolicy}}.
 # Update the [FileSystem 
 specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html].
  This will require editing 
 _hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md_.
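 As a rough illustration of the shape of the accessor being requested, the 
 sketch below shows a per-path policy query alongside the enumerate/set calls 
 from HDFS-8345. The types and names here are placeholders (Hadoop's real 
 interface is {{BlockStoragePolicySpi}} and the method would live on 
 {{FileSystem}}), not the actual API the patch defines:

```java
import java.io.IOException;

// Placeholder type for illustration only, standing in for Hadoop's
// BlockStoragePolicySpi.
interface StoragePolicy {
    String getName();
}

abstract class SketchFileSystem {
    // New in this JIRA: query the effective storage policy of a single
    // file or directory, complementing the existing list-all and set calls.
    public abstract StoragePolicy getStoragePolicy(String path) throws IOException;
}
```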





[jira] [Updated] (HADOOP-12254) test-patch.sh should run findbugs if only findbugs-exclude.xml has changed

2015-07-24 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-12254:
-
Affects Version/s: HADOOP-12111

 test-patch.sh should run findbugs if only findbugs-exclude.xml has changed
 -

 Key: HADOOP-12254
 URL: https://issues.apache.org/jira/browse/HADOOP-12254
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Varun Saxena

 Refer to Hadoop QA report for YARN-3952
 https://issues.apache.org/jira/browse/YARN-3952?focusedCommentId=14636455&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14636455
 We can run findbugs for all the submodules if findbugs-exclude.xml has been 
 changed.





[jira] [Updated] (HADOOP-12254) test-patch.sh should run findbugs if only findbugs-exclude.xml has changed

2015-07-24 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-12254:
-
Component/s: yetus

 test-patch.sh should run findbugs if only findbugs-exclude.xml has changed
 -

 Key: HADOOP-12254
 URL: https://issues.apache.org/jira/browse/HADOOP-12254
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Varun Saxena

 Refer to Hadoop QA report for YARN-3952
 https://issues.apache.org/jira/browse/YARN-3952?focusedCommentId=14636455&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14636455
 We can run findbugs for all the submodules if findbugs-exclude.xml has been 
 changed.





[jira] [Commented] (HADOOP-12161) Add getStoragePolicy API to the FileSystem interface

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640528#comment-14640528
 ] 

Hudson commented on HADOOP-12161:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2193/])
HADOOP-12161. Add getStoragePolicy API to the FileSystem interface. 
(Contributed by Brahma Reddy Battula) (arp: rev 
adfa34ff9992295a6d2496b259d8c483ed90b566)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java


 Add getStoragePolicy API to the FileSystem interface
 

 Key: HADOOP-12161
 URL: https://issues.apache.org/jira/browse/HADOOP-12161
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HADOOP-12161-001.patch, HADOOP-12161-002.patch, 
 HADOOP-12161-003.patch, HADOOP-12161-004.patch


 HDFS-8345 added {{FileSystem#getAllStoragePolicies}} and 
 {{FileSystem#setStoragePolicy}}. Jira to
 # Add a corresponding {{FileSystem#getStoragePolicy}} to query the storage 
 policy for a given file/directory.
 # Add corresponding implementation for HDFS i.e. 
 {{DistributedFileSystem#getStoragePolicy}}.
 # Update the [FileSystem 
 specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html].
  This will require editing 
 _hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md_.





[jira] [Commented] (HADOOP-12189) Improve CallQueueManager#swapQueue to make queue elements drop nearly impossible.

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640533#comment-14640533
 ] 

Hudson commented on HADOOP-12189:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2193/])
HADOOP-12189. Improve CallQueueManager#swapQueue to make queue elements drop 
nearly impossible. Contributed by Zhihai Xu. (wang: rev 
6736a1ab7033523ed5f304fdfed46d7f348665b4)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Improve CallQueueManager#swapQueue to make queue elements drop nearly 
 impossible.
 -

 Key: HADOOP-12189
 URL: https://issues.apache.org/jira/browse/HADOOP-12189
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, test
Affects Versions: 2.7.1
Reporter: zhihai xu
Assignee: zhihai xu
 Fix For: 2.8.0

 Attachments: HADOOP-12189.000.patch, HADOOP-12189.001.patch, 
 HADOOP-12189.none_guarantee.000.patch, HADOOP-12189.none_guarantee.001.patch, 
 HADOOP-12189.none_guarantee.002.patch


 Improve CallQueueManager#swapQueue to make dropping queue elements nearly 
 impossible. This is a trade-off between performance and functionality: even 
 in the very rare case where one element is dropped, it is not the end of the 
 world, since the client can still recover via timeout.
 CallQueueManager may sometimes drop elements from the queue when 
 {{swapQueue}} is called. 
 The following test failure from TestCallQueueManager showed that some 
 elements in the queue were dropped.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7150/testReport/org.apache.hadoop.ipc/TestCallQueueManager/testSwapUnderContention/
 {code}
 java.lang.AssertionError: expected:<27241> but was:<27245>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.ipc.TestCallQueueManager.testSwapUnderContention(TestCallQueueManager.java:220)
 {code}
 It looked like the elements in the queue were dropped by 
 {{CallQueueManager#swapQueue}}.
 Looking at the implementation of {{CallQueueManager#swapQueue}}, there is a 
 possibility that elements in the queue are dropped: if the queue is full, the 
 calling thread in {{CallQueueManager#put}} blocks for a long time and may put 
 the element into the old queue after the queue in {{takeRef}} has been 
 changed by swapQueue; that element in the old queue is then dropped.





[jira] [Updated] (HADOOP-12265) Pylint should be installed in test-patch docker environment

2015-07-24 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12265:

Attachment: HADOOP-12265.HADOOP-12111.00.patch

Attaching a patch. I confirmed that pylint is installed and the pylint plugin 
works right after the Docker container is launched.

 Pylint should be installed in test-patch docker environment
 ---

 Key: HADOOP-12265
 URL: https://issues.apache.org/jira/browse/HADOOP-12265
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Kengo Seki
 Attachments: HADOOP-12265.HADOOP-12111.00.patch


 HADOOP-12207 added the pylint plugin to test-patch, but pylint won't be 
 installed in the Docker environment because I forgot to modify the 
 Dockerfile :) It must be updated.





[jira] [Commented] (HADOOP-12268) AbstractContractAppendTest#testRenameFileBeingAppended misses rename operation.

2015-07-24 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640682#comment-14640682
 ] 

zhihai xu commented on HADOOP-12268:


The failing test TestDistributedFileSystem.testDFSClientPeerWriteTimeout is not 
related to my change; it was already reported in HDFS-8785.


 AbstractContractAppendTest#testRenameFileBeingAppended misses rename 
 operation.
 ---

 Key: HADOOP-12268
 URL: https://issues.apache.org/jira/browse/HADOOP-12268
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu
 Attachments: HADOOP-12268.000.patch


 AbstractContractAppendTest#testRenameFileBeingAppended misses the rename 
 operation. Also, TestHDFSContractAppend can pass the original test after the 
 issue in {{AbstractContractAppendTest#testRenameFileBeingAppended}} is fixed.





[jira] [Commented] (HADOOP-12269) Update aws-sdk dependency to 1.10.6

2015-07-24 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640715#comment-14640715
 ] 

Sean Busbey commented on HADOOP-12269:
--

What's the target Hadoop version(s)?

 Update aws-sdk dependency to 1.10.6
 ---

 Key: HADOOP-12269
 URL: https://issues.apache.org/jira/browse/HADOOP-12269
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor
 Attachments: HADOOP-12269-001.patch


 This was originally part of HADOOP-11684, pulling out to this separate 
 subtask as requested by [~ste...@apache.org]





[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640635#comment-14640635
 ] 

Hudson commented on HADOOP-12009:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2212 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2212/])
HADOOP-12009: Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman) (jghoman: 
rev ab3197c20452e0dd908193d6854c204e6ee34645)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, 
 HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}  /**
* List the statuses of the files/directories in the given path if the path 
 is
* a directory.
* 
* @param f given path
* @return the statuses of the files/directories in the given patch
* @throws FileNotFoundException when the path does not exist;
* IOException see specific implementation
*/
   public abstract FileStatus[] listStatus(Path f) throws 
 FileNotFoundException, 
  IOException;{code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}Path[] testDirs = { path("/test/hadoop/a"),
 path("/test/hadoop/b"),
 path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());{code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.





[jira] [Updated] (HADOOP-12254) test-patch.sh should run findbugs if only findbugs-exclude.xml has changed

2015-07-24 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-12254:
-
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-12111

 test-patch.sh should run findbugs if only findbugs-exclude.xml has changed
 -

 Key: HADOOP-12254
 URL: https://issues.apache.org/jira/browse/HADOOP-12254
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Varun Saxena

 Refer to Hadoop QA report for YARN-3952
 https://issues.apache.org/jira/browse/YARN-3952?focusedCommentId=14636455&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14636455
 We can run findbugs for all the submodules if findbugs-exclude.xml has been 
 changed.





[jira] [Commented] (HADOOP-11487) FileNotFound on distcp to s3n due to creation inconsistency

2015-07-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640706#comment-14640706
 ] 

Steve Loughran commented on HADOOP-11487:
-

No, that looks like creation inconsistency, which is an AWS architecture issue:

bq. Amazon S3 buckets in the US Standard Region only provide read-after-write 
consistency when accessed through the Northern Virginia endpoint 
(s3-external-1.amazonaws.com).

 FileNotFound on distcp to s3n due to creation inconsistency 
 

 Key: HADOOP-11487
 URL: https://issues.apache.org/jira/browse/HADOOP-11487
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, fs/s3
Reporter: Paulo Motta

 I'm trying to copy a large number of files from HDFS to S3 via distcp and I'm 
 getting the following exception:
 {code:java}
 2015-01-16 20:53:18,187 ERROR [main] 
 org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying 
 hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz
 java.io.FileNotFoundException: No such file or directory 
 's3n://s3-bucket/file.gz'
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
   at 
 org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: 
 Exception running child : java.io.FileNotFoundException: No such file or 
 directory 's3n://s3-bucket/file.gz'
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
   at 
 org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 However, when I run hadoop fs -ls s3n://s3-bucket/file.gz, the file is 
 there, so the job failure is probably due to Amazon S3's eventual 
 consistency.
 In my opinion, to fix this problem, NativeS3FileSystem.getFileStatus must 
 honor the fs.s3.maxRetries property to avoid failures like this.
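 The retry idea proposed above can be sketched as a bounded re-attempt of the 
 metadata lookup, riding out S3's eventual consistency. The helper and 
 parameter names below are illustrative (the retry count only mirrors the 
 spirit of fs.s3.maxRetries); this is not the actual NativeS3FileSystem code:

```java
import java.io.FileNotFoundException;
import java.util.concurrent.Callable;

// Retry an operation that may transiently throw FileNotFoundException
// because the S3 object is not yet visible; give up after maxRetries
// additional attempts and rethrow the last failure.
class EventualConsistencyRetry {
    static <T> T withRetries(Callable<T> op, int maxRetries, long sleepMs)
            throws Exception {
        FileNotFoundException last = null;
        for (int i = 0; i <= maxRetries; i++) {
            try {
                return op.call();
            } catch (FileNotFoundException e) {
                last = e;               // object may not be visible yet
                Thread.sleep(sleepMs);  // back off before the next attempt
            }
        }
        throw last;
    }
}
```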





[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640529#comment-14640529
 ] 

Hudson commented on HADOOP-12009:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2193/])
HADOOP-12009: Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman) (jghoman: 
rev ab3197c20452e0dd908193d6854c204e6ee34645)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, 
 HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}  /**
* List the statuses of the files/directories in the given path if the path 
 is
* a directory.
* 
* @param f given path
* @return the statuses of the files/directories in the given patch
* @throws FileNotFoundException when the path does not exist;
* IOException see specific implementation
*/
   public abstract FileStatus[] listStatus(Path f) throws 
 FileNotFoundException, 
  IOException;{code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}Path[] testDirs = { path("/test/hadoop/a"),
 path("/test/hadoop/b"),
 path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());{code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.





[jira] [Updated] (HADOOP-12137) build environment and unit tests

2015-07-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12137:
--
Status: Open  (was: Patch Available)

Cancelling this patch since it's already out of date.  I'll post a script to do 
the git mv's.

 build environment and unit tests
 

 Key: HADOOP-12137
 URL: https://issues.apache.org/jira/browse/HADOOP-12137
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Critical
 Attachments: HADOOP-12137.HADOOP-12111.00.patch, 
 HADOOP-12137.HADOOP-12111.01.patch


 We need some way to build (esp. the documentation!) and run unit tests.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12161) Add getStoragePolicy API to the FileSystem interface

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640683#comment-14640683
 ] 

Hudson commented on HADOOP-12161:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #263 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/263/])
HADOOP-12161. Add getStoragePolicy API to the FileSystem interface. 
(Contributed by Brahma Reddy Battula) (arp: rev 
adfa34ff9992295a6d2496b259d8c483ed90b566)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


 Add getStoragePolicy API to the FileSystem interface
 

 Key: HADOOP-12161
 URL: https://issues.apache.org/jira/browse/HADOOP-12161
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HADOOP-12161-001.patch, HADOOP-12161-002.patch, 
 HADOOP-12161-003.patch, HADOOP-12161-004.patch


 HDFS-8345 added {{FileSystem#getAllStoragePolicies}} and 
 {{FileSystem#setStoragePolicy}}. Jira to
 # Add a corresponding {{FileSystem#getStoragePolicy}} to query the storage 
 policy for a given file/directory.
 # Add corresponding implementation for HDFS i.e. 
 {{DistributedFileSystem#getStoragePolicy}}.
 # Update the [FileSystem 
 specification|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html].
  This will require editing 
 _hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12189) Improve CallQueueManager#swapQueue to make queue elements drop nearly impossible.

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640688#comment-14640688
 ] 

Hudson commented on HADOOP-12189:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #263 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/263/])
HADOOP-12189. Improve CallQueueManager#swapQueue to make queue elements drop 
nearly impossible. Contributed by Zhihai Xu. (wang: rev 
6736a1ab7033523ed5f304fdfed46d7f348665b4)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java


 Improve CallQueueManager#swapQueue to make queue elements drop nearly 
 impossible.
 -

 Key: HADOOP-12189
 URL: https://issues.apache.org/jira/browse/HADOOP-12189
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, test
Affects Versions: 2.7.1
Reporter: zhihai xu
Assignee: zhihai xu
 Fix For: 2.8.0

 Attachments: HADOOP-12189.000.patch, HADOOP-12189.001.patch, 
 HADOOP-12189.none_guarantee.000.patch, HADOOP-12189.none_guarantee.001.patch, 
 HADOOP-12189.none_guarantee.002.patch


 Improve CallQueueManager#swapQueue to make dropping queue elements nearly 
 impossible. This is a trade-off between performance and functionality: even 
 in the very rare case where one element is dropped, it is not the end of the 
 world, since the client can still recover via timeout.
 CallQueueManager may sometimes drop elements from the queue when 
 {{swapQueue}} is called. 
 The following test failure from TestCallQueueManager showed that some 
 elements in the queue were dropped:
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7150/testReport/org.apache.hadoop.ipc/TestCallQueueManager/testSwapUnderContention/
 {code}
 java.lang.AssertionError: expected:27241 but was:27245
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.ipc.TestCallQueueManager.testSwapUnderContention(TestCallQueueManager.java:220)
 {code}
 It looked like the elements in the queue were dropped by 
 {{CallQueueManager#swapQueue}}.
 Looking at the implementation of {{CallQueueManager#swapQueue}}, there is a 
 window in which elements can be dropped: if the queue is full, a thread 
 calling {{CallQueueManager#put}} blocks for a long time, and it may insert 
 the element into the old queue after the queue reference in {{takeRef}} has 
 been changed by swapQueue. That element is then stranded in the old queue 
 and dropped.
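One way to close that window can be sketched with plain java.util.concurrent types (this is an illustrative sketch, not the actual CallQueueManager code; the class and method names are hypothetical): after a blocking put, re-check the queue reference, and if a swap raced with the insert, try to pull the element back out of the retired queue and re-insert it into the current one.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

public class SwapSafePut<E> {
    // Current queue; swapQueue() atomically replaces this reference.
    private final AtomicReference<BlockingQueue<E>> putRef =
        new AtomicReference<>(new LinkedBlockingQueue<>());

    // Re-check-after-insert: if the queue reference changed while we were
    // blocked in put(), migrate the element so it is not stranded in the
    // retired queue. If remove() fails, a consumer already took it.
    public void put(E e) throws InterruptedException {
        BlockingQueue<E> q = putRef.get();
        q.put(e);
        while (q != putRef.get() && q.remove(e)) {
            q = putRef.get();
            q.put(e);
        }
    }

    // Swap in a fresh queue and hand back the old one for draining.
    public BlockingQueue<E> swapQueue() {
        return putRef.getAndSet(new LinkedBlockingQueue<>());
    }

    public int size() { return putRef.get().size(); }
}
```

This makes a drop possible only if the element is removed from the old queue and the thread dies before the re-insert, which matches the "nearly impossible" wording of the issue title.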



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order fix FileSystemContractBaseTest:testListStatus

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640684#comment-14640684
 ] 

Hudson commented on HADOOP-12009:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #263 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/263/])
HADOOP-12009: Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman) (jghoman: 
rev ab3197c20452e0dd908193d6854c204e6ee34645)
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, 
 HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}  /**
* List the statuses of the files/directories in the given path if the path 
 is
* a directory.
* 
* @param f given path
* @return the statuses of the files/directories in the given patch
* @throws FileNotFoundException when the path does not exist;
* IOException see specific implementation
*/
   public abstract FileStatus[] listStatus(Path f) throws 
 FileNotFoundException, 
  IOException;{code}
 However, FileSystemContractBaseTest, expects the elements to come back sorted:
 {code}Path[] testDirs = { path("/test/hadoop/a"),
 path("/test/hadoop/b"),
 path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());{code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12269) Update aws-sdk dependency to 1.10.6

2015-07-24 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor updated HADOOP-12269:
---
Attachment: HADOOP-12269-001.patch

Patch bumps from aws-sdk-1.7.4 to aws-sdk-s3-1.10.6. 

* Depends on only the S3 library, so the binary is smaller (possible since SDK 1.9). 
* Multipart threshold changed from int to long, matching the corresponding 
bugfix in the AWS SDK.
* Added a config setting that makes it possible to override the signing 
algorithm, so object stores that still use the previous signing algorithm keep 
working with s3a. Set this config setting to {{S3Signer}} to get v2 authentication.

 Update aws-sdk dependency to 1.10.6
 ---

 Key: HADOOP-12269
 URL: https://issues.apache.org/jira/browse/HADOOP-12269
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor
 Attachments: HADOOP-12269-001.patch


 This was originally part of HADOOP-11684, pulling out to this separate 
 subtask as requested by [~ste...@apache.org]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11506) Configuration variable expansion regex expensive for long values

2015-07-24 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11506:
---
Labels: 2.6.1-candidate  (was: )

 Configuration variable expansion regex expensive for long values
 

 Key: HADOOP-11506
 URL: https://issues.apache.org/jira/browse/HADOOP-11506
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Reporter: Dmitriy V. Ryaboy
Assignee: Gera Shegalov
  Labels: 2.6.1-candidate
 Fix For: 2.7.0

 Attachments: HADOOP-11506.001.patch, HADOOP-11506.002.patch, 
 HADOOP-11506.003.patch, HADOOP-11506.004.patch


 Profiling several large Hadoop jobs, we discovered that a surprising amount 
 of time was spent inside Configuration.get, more specifically, in regex 
 matching caused by the substituteVars call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10854) unit tests for the shell scripts

2015-07-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10854:
--
Attachment: HADOOP-10854.04.patch

-04:
* no longer fail when bats isn't installed, despite the fact it probably should
* set the phase to run before the actual test phase so that mvn test 
-Pshelltest works

 unit tests for the shell scripts
 

 Key: HADOOP-10854
 URL: https://issues.apache.org/jira/browse/HADOOP-10854
 Project: Hadoop Common
  Issue Type: Test
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-10854.00.patch, HADOOP-10854.01.patch, 
 HADOOP-10854.02.patch, HADOOP-10854.03.patch, HADOOP-10854.04.patch


 With HADOOP-9902 moving a lot of the core functionality to functions, we 
 should build some unit tests for them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12261) Surefire needs to make sure the JVMs it fires up are 64-bit

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641051#comment-14641051
 ] 

Allen Wittenauer commented on HADOOP-12261:
---

Yup.  We've crapped all over our compatibility guidelines.  I'm just pointing 
out that if we do this in branch-2, we'll be continuing the trend of releasing 
another minor that breaks stuff.

 Surefire needs to make sure the JVMs it fires up are 64-bit
 ---

 Key: HADOOP-12261
 URL: https://issues.apache.org/jira/browse/HADOOP-12261
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.1
Reporter: Alan Burlison
Assignee: Alan Burlison

 hadoop-project/pom.xml sets maven-surefire-plugin.argLine to include 
 -Xmx4096m. Allocating that amount of memory requires a 64-bit JVM, but on 
 platforms with both 32 and 64-bit JVMs surefire runs the 32 bit version by 
 default and tests fail to start as a result. -d64 should be added to the 
 command-line arguments to ensure a 64-bit JVM is always used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12261) Surefire needs to make sure the JVMs it fires up are 64-bit

2015-07-24 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641065#comment-14641065
 ] 

Colin Patrick McCabe commented on HADOOP-12261:
---

I don't understand the negativity.  We are not robots blindly following 
guidelines even when they don't make sense.  If nobody is using 32-bit, then 
there seems to be little downside in removing it.  Clearly running unit tests 
on 32-bit is already broken and nobody noticed until now.

 Surefire needs to make sure the JVMs it fires up are 64-bit
 ---

 Key: HADOOP-12261
 URL: https://issues.apache.org/jira/browse/HADOOP-12261
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.1
Reporter: Alan Burlison
Assignee: Alan Burlison

 hadoop-project/pom.xml sets maven-surefire-plugin.argLine to include 
 -Xmx4096m. Allocating that amount of memory requires a 64-bit JVM, but on 
 platforms with both 32 and 64-bit JVMs surefire runs the 32 bit version by 
 default and tests fail to start as a result. -d64 should be added to the 
 command-line arguments to ensure a 64-bit JVM is always used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12261) Surefire needs to make sure the JVMs it fires up are 64-bit

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641071#comment-14641071
 ] 

Allen Wittenauer commented on HADOOP-12261:
---

There is a difference between the unit tests being broken and people actually 
using Hadoop in 32-bit environments. 

My negativity comes from a PMC that is completely untrustworthy because they 
don't do what they say they are going to do, such as upholding the 
compatibility guidelines. Hadoop's users deserve better.

 Surefire needs to make sure the JVMs it fires up are 64-bit
 ---

 Key: HADOOP-12261
 URL: https://issues.apache.org/jira/browse/HADOOP-12261
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.1
Reporter: Alan Burlison
Assignee: Alan Burlison

 hadoop-project/pom.xml sets maven-surefire-plugin.argLine to include 
 -Xmx4096m. Allocating that amount of memory requires a 64-bit JVM, but on 
 platforms with both 32 and 64-bit JVMs surefire runs the 32 bit version by 
 default and tests fail to start as a result. -d64 should be added to the 
 command-line arguments to ensure a 64-bit JVM is always used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12271) Hadoop Jar Error Should Be More Explanatory

2015-07-24 Thread Jesse Anderson (JIRA)
Jesse Anderson created HADOOP-12271:
---

 Summary: Hadoop Jar Error Should Be More Explanatory
 Key: HADOOP-12271
 URL: https://issues.apache.org/jira/browse/HADOOP-12271
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jesse Anderson
Priority: Minor


When the hadoop jar command is used and the JAR does not exist, the error 
message says "Not a valid JAR". The message should say that the JAR does not 
exist; the current wording makes it sound like the JAR is corrupt or not in 
JAR format.

https://github.com/apache/hadoop/blob/c1d50a91f7c05e4aaf4655380c8dcd11703ff158/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java#L151



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12261) Surefire needs to make sure the JVMs it fires up are 64-bit

2015-07-24 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641041#comment-14641041
 ] 

Colin Patrick McCabe commented on HADOOP-12261:
---

We have made incompatible changes in branch-2 before, including dropping 
support for JDK6 (something that people did actually use, unlike 32-bit).

 Surefire needs to make sure the JVMs it fires up are 64-bit
 ---

 Key: HADOOP-12261
 URL: https://issues.apache.org/jira/browse/HADOOP-12261
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.1
Reporter: Alan Burlison
Assignee: Alan Burlison

 hadoop-project/pom.xml sets maven-surefire-plugin.argLine to include 
 -Xmx4096m. Allocating that amount of memory requires a 64-bit JVM, but on 
 platforms with both 32 and 64-bit JVMs surefire runs the 32 bit version by 
 default and tests fail to start as a result. -d64 should be added to the 
 command-line arguments to ensure a 64-bit JVM is always used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12170) hadoop-common's JNIFlags.cmake is redundant and can be removed

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641081#comment-14641081
 ] 

Hudson commented on HADOOP-12170:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8218 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8218/])
HADOOP-12170. hadoop-common's JNIFlags.cmake is redundant and can be removed 
(Alan Burlison via Colin P. McCabe) (cmccabe: rev 
e4b0c74434b82c25256a59b03d62b1a66bb8ac69)
* hadoop-common-project/hadoop-common/src/JNIFlags.cmake
* hadoop-common-project/hadoop-common/CHANGES.txt


 hadoop-common's JNIFlags.cmake is redundant and can be removed
 --

 Key: HADOOP-12170
 URL: https://issues.apache.org/jira/browse/HADOOP-12170
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.1
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12170.001.patch


 With the integration of:
 * HADOOP-12036 Consolidate all of the cmake extensions in one *directory
 * HADOOP-12104 Migrate Hadoop Pipes native build to new CMake
 * HDFS-8635 Migrate HDFS native build to new CMake framework
 * MAPREDUCE-6407 Migrate MAPREDUCE native build to new CMake
 * YARN-3827 Migrate YARN native build to new CMake framework
 hadoop-common-project/hadoop-common/src/JNIFlags.cmake is now redundant and 
 can be removed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11505) hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, doesn't work on non-x86

2015-07-24 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641037#comment-14641037
 ] 

Colin Patrick McCabe commented on HADOOP-11505:
---

Thanks, [~alanburlison].  Your change is a great improvement.

In {{Buffers.h}}:

{code}
  uint32_t lengthConvertEndium() {
uint64_t value = hadoop_be64toh(*((uint64_t *)this));
...
{code}

This looks very wrong. I suppose it's going to typecast the structure to a 
long* and then dereference that?  I suppose it will do something as long as 
the compiler packs keyLength and valueLength into a single 8-byte region.  I 
suppose this is an existing problem.  If you can't fix it here, can you create 
a JIRA?

I would like to +1 this, but I'm concerned that this change hasn't been tested. 
 Although I think your change is absolutely correct, I'm concerned that we 
might expose a bug.  Can you do a quick test on this?

 hadoop-mapreduce-client-nativetask uses bswap where be32toh is needed, 
 doesn't work on non-x86
 --

 Key: HADOOP-11505
 URL: https://issues.apache.org/jira/browse/HADOOP-11505
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11505.001.patch, HADOOP-11505.003.patch, 
 HADOOP-11505.004.patch


 hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
 cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
 code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
 this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12261) Surefire needs to make sure the JVMs it fires up are 64-bit

2015-07-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12261:
--
Hadoop Flags: Incompatible change

 Surefire needs to make sure the JVMs it fires up are 64-bit
 ---

 Key: HADOOP-12261
 URL: https://issues.apache.org/jira/browse/HADOOP-12261
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.1
Reporter: Alan Burlison
Assignee: Alan Burlison

 hadoop-project/pom.xml sets maven-surefire-plugin.argLine to include 
 -Xmx4096m. Allocating that amount of memory requires a 64-bit JVM, but on 
 platforms with both 32 and 64-bit JVMs surefire runs the 32 bit version by 
 default and tests fail to start as a result. -d64 should be added to the 
 command-line arguments to ensure a 64-bit JVM is always used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12259) Utility to Dynamic port allocation

2015-07-24 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-12259:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~brahmareddy].  Committed to trunk and branch-2!

 Utility to Dynamic port allocation
 --

 Key: HADOOP-12259
 URL: https://issues.apache.org/jira/browse/HADOOP-12259
 Project: Hadoop Common
  Issue Type: Bug
  Components: test, util
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HADOOP-12259.patch


 As per discussion in YARN-3528 and [~rkanter] comment [here | 
 https://issues.apache.org/jira/browse/YARN-3528?focusedCommentId=14637700&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14637700
  ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11487) FileNotFound on distcp to s3n due to creation inconsistency

2015-07-24 Thread Philip Deegan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640743#comment-14640743
 ] 

Philip Deegan commented on HADOOP-11487:


Are you sure? It's a read-only op from the same region (eu-central).

 FileNotFound on distcp to s3n due to creation inconsistency 
 

 Key: HADOOP-11487
 URL: https://issues.apache.org/jira/browse/HADOOP-11487
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, fs/s3
Reporter: Paulo Motta

 I'm trying to copy a large amount of files from HDFS to S3 via distcp and I'm 
 getting the following exception:
 {code:java}
 2015-01-16 20:53:18,187 ERROR [main] 
 org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying 
 hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz
 java.io.FileNotFoundException: No such file or directory 
 's3n://s3-bucket/file.gz'
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
   at 
 org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: 
 Exception running child : java.io.FileNotFoundException: No such file or 
 directory 's3n://s3-bucket/file.gz'
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
   at 
 org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 However, when I try hadoop fs -ls s3n://s3-bucket/file.gz, the file is there, 
 so the job failure is probably due to Amazon S3's eventual consistency.
 In my opinion, in order to fix this problem NativeS3FileSystem.getFileStatus 
 must use the fs.s3.maxRetries property to avoid failures like this.
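The retry idea above can be sketched in isolation (this is an illustrative sketch, not the real fs.s3.maxRetries wiring; the class name, retry count, and sleep interval are all assumed): retry the metadata lookup a few times before surfacing FileNotFoundException, since a freshly written S3 key may not be visible immediately.

```java
import java.io.FileNotFoundException;
import java.util.concurrent.Callable;

public class EventuallyConsistentLookup {
    // Retry an operation that may transiently throw FileNotFoundException,
    // sleeping between attempts; rethrow the last failure once exhausted.
    static <T> T withRetries(Callable<T> op, int maxRetries, long sleepMs)
            throws Exception {
        FileNotFoundException last = null;
        for (int i = 0; i <= maxRetries; i++) {
            try {
                return op.call();
            } catch (FileNotFoundException e) {
                last = e;
                Thread.sleep(sleepMs);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated store: the "file" only becomes visible on the third call.
        final int[] calls = {0};
        String status = withRetries(() -> {
            if (++calls[0] < 3) throw new FileNotFoundException("not yet visible");
            return "FOUND";
        }, 5, 1L);
        System.out.println(status + " after " + calls[0] + " attempts");
    }
}
```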



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12269) Update aws-sdk dependency to 1.10.6

2015-07-24 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640847#comment-14640847
 ] 

Aaron Fabbri commented on HADOOP-12269:
---

Can we use HADOOP-12267 for 2.6.x and 2.7.x and do this latest-greatest sdk for 
trunk?  Eager to get this upstream as customers are hitting it.

 Update aws-sdk dependency to 1.10.6
 ---

 Key: HADOOP-12269
 URL: https://issues.apache.org/jira/browse/HADOOP-12269
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor
 Attachments: HADOOP-12269-001.patch


 This was originally part of HADOOP-11684, pulling out to this separate 
 subtask as requested by [~ste...@apache.org]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12259) Utility to Dynamic port allocation

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640733#comment-14640733
 ] 

Hudson commented on HADOOP-12259:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8214 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8214/])
HADOOP-12259. Utility to Dynamic port allocation (brahmareddy via rkanter) 
(rkanter: rev ee233ec95ce8cfc8309d3adc072d926cd85eba08)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Utility to Dynamic port allocation
 --

 Key: HADOOP-12259
 URL: https://issues.apache.org/jira/browse/HADOOP-12259
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test, util
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HADOOP-12259.patch


 As per discussion in YARN-3528 and [~rkanter] comment [here | 
 https://issues.apache.org/jira/browse/YARN-3528?focusedCommentId=14637700&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14637700
  ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12259) Utility to Dynamic port allocation

2015-07-24 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-12259:
---
Issue Type: Improvement  (was: Bug)

 Utility to Dynamic port allocation
 --

 Key: HADOOP-12259
 URL: https://issues.apache.org/jira/browse/HADOOP-12259
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test, util
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0

 Attachments: HADOOP-12259.patch


 As per discussion in YARN-3528 and [~rkanter] comment [here | 
 https://issues.apache.org/jira/browse/YARN-3528?focusedCommentId=14637700&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14637700
  ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12267) s3a failure due to integer overflow bug in AWS SDK

2015-07-24 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640584#comment-14640584
 ] 

Thomas Demoor commented on HADOOP-12267:


I have isolated the aws-sdk bump in HADOOP-12269

 s3a failure due to integer overflow bug in AWS SDK
 --

 Key: HADOOP-12267
 URL: https://issues.apache.org/jira/browse/HADOOP-12267
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Aaron Fabbri
Assignee: Aaron Fabbri
 Attachments: HADOOP-12267.2.6.0.001.patch, 
 HADOOP-12267.2.7.1.001.patch


 Under high load writing to Amazon AWS S3 storage, a client can be throttled 
 enough to encounter 24 retries in a row.
 The amazon http client code (in aws-java-sdk jar) has a bug in its 
 exponential backoff retry code, that causes integer overflow, and a call to 
 Thread.sleep() with a negative value, which causes client to bail out with an 
 exception (see below).
 Bug has been fixed in aws-java-sdk:
 https://github.com/aws/aws-sdk-java/pull/388
 We need to pick this up for hadoop-tools/hadoop-aws.
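The overflow is easy to reproduce in isolation. The sketch below is illustrative only: the scale factor and delay cap are assumed values, not the SDK's actual constants. With 32-bit arithmetic, (1 << retries) * scale wraps once the product exceeds Integer.MAX_VALUE, producing exactly the kind of negative value that gets handed to Thread.sleep(); widening to long and capping the delay avoids it.

```java
public class BackoffOverflow {
    // Hypothetical per-retry scale factor in ms; the real SDK value may differ.
    static final int SCALE_MS = 300;

    // Buggy variant: 32-bit arithmetic overflows for large retry counts.
    static int delayInt(int retries) {
        return (1 << retries) * SCALE_MS;
    }

    // Fixed variant: widen to long, bound the shift, and cap the delay.
    static long delaySafe(int retries) {
        long d = (1L << Math.min(retries, 30)) * SCALE_MS;
        return Math.min(d, 20_000L); // assumed 20s ceiling
    }

    public static void main(String[] args) {
        System.out.println(delayInt(23));  // negative: int product has wrapped
        System.out.println(delaySafe(23)); // positive, capped
    }
}
```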
 Error: java.io.IOException: File copy failed: hdfs://path-redacted --> 
 s3a://path-redacted
 at 
 org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:284)
 at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:252) 
 at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)  
 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) 
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) 
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
 at java.security.AccessController.doPrivileged(Native Method) 
 at javax.security.auth.Subject.doAs(Subject.java:415) 
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163) Caused by: 
 java.io.IOException: Couldn't run retriable-command: Copying 
 hdfs://path-redacted to s3a://path-redacted
 at 
 org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
  
 at 
 org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:280)
  
 ... 10 more 
 Caused by: com.amazonaws.AmazonClientException: Unable to complete transfer: 
 timeout value is negative
 at 
 com.amazonaws.services.s3.transfer.internal.AbstractTransfer.unwrapExecutionException(AbstractTransfer.java:300)
 at 
 com.amazonaws.services.s3.transfer.internal.AbstractTransfer.rethrowExecutionException(AbstractTransfer.java:284)
 at 
 com.amazonaws.services.s3.transfer.internal.CopyImpl.waitForCopyResult(CopyImpl.java:67)
  
 at org.apache.hadoop.fs.s3a.S3AFileSystem.copyFile(S3AFileSystem.java:943) 
 at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:357) 
 at 
 org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.promoteTmpToTarget(RetriableFileCopyCommand.java:220)
 at 
 org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:137)
  
 at 
 org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:100)
 at 
 org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
  
 ... 11 more 
 Caused by: java.lang.IllegalArgumentException: timeout value is negative
 at java.lang.Thread.sleep(Native Method) 
 at 
 com.amazonaws.http.AmazonHttpClient.pauseBeforeNextRetry(AmazonHttpClient.java:864)
 at 
 com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:353) 
 at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232) 
 at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
 at 
 com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:1507)
 at 
 com.amazonaws.services.s3.transfer.internal.CopyCallable.copyInOneChunk(CopyCallable.java:143)
 at 
 com.amazonaws.services.s3.transfer.internal.CopyCallable.call(CopyCallable.java:131)
  
 at 
 com.amazonaws.services.s3.transfer.internal.CopyMonitor.copy(CopyMonitor.java:189)
  
 at 
 com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:134)
  
 at 
 com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:46)
   
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  
 at java.lang.Thread.run(Thread.java:745) 
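
 The failure mode in the trace above can be reproduced in isolation. The sketch 
 below is a hypothetical simplification of the SDK's backoff arithmetic (the 
 method names and the scale factor of 300 ms are assumptions, not the SDK's 
 actual code): 32-bit int math wraps negative after enough retries, and passing 
 that value to Thread.sleep() throws the IllegalArgumentException seen above. 
 Computing in long with a capped shift avoids it.

 ```java
 // Hypothetical simplification of an exponential-backoff computation;
 // not the AWS SDK's actual code.
 class BackoffOverflowDemo {
     // Buggy variant: 32-bit int arithmetic overflows for large retry counts.
     // For a 300 ms scale factor, the value first wraps negative at retry 23.
     static int buggyDelayMillis(int retries, int scaleFactor) {
         return (1 << retries) * scaleFactor;
     }

     // Fixed variant: compute in long, cap the shift, clamp to a max backoff.
     static long fixedDelayMillis(int retries, int scaleFactor) {
         long delay = (1L << Math.min(retries, 30)) * scaleFactor;
         return Math.min(delay, 20_000L);   // never negative, never huge
     }

     public static void main(String[] args) {
         for (int r = 0; r <= 24; r++) {
             int d = buggyDelayMillis(r, 300);
             if (d < 0) {
                 // Thread.sleep(d) would throw IllegalArgumentException here
                 System.out.println("retry " + r + " -> negative delay " + d);
                 break;
             }
         }
         System.out.println("capped delay at retry 24: " + fixedDelayMillis(24, 300));
     }
 }
 ```

 The upstream fix (aws/aws-sdk-java pull 388) addresses the same wraparound in 
 the SDK itself; bumping the dependency picks it up.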



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12265) Pylint should be installed in test-patch docker environment

2015-07-24 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12265:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

 Pylint should be installed in test-patch docker environment
 ---

 Key: HADOOP-12265
 URL: https://issues.apache.org/jira/browse/HADOOP-12265
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Kengo Seki
Assignee: Kengo Seki
 Attachments: HADOOP-12265.HADOOP-12111.00.patch


 HADOOP-12207 added a pylint plugin to test-patch, but pylint won't be installed 
 in the Docker environment because I forgot to modify the Dockerfile :) It must 
 be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640676#comment-14640676
 ] 

Allen Wittenauer commented on HADOOP-11731:
---

I hope you realize there was more than just this JIRA for trunk's version of 
releasedocmaker (which is also several patches behind the one sitting in the 
Yetus branch).

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0, 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12272) Refactor ipc.Server and implementations to reduce constructor bloat

2015-07-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641222#comment-14641222
 ] 

Arpit Agarwal commented on HADOOP-12272:


Ran into this while adding a new Server constructor parameter for HADOOP-12250. 
Will probably perform the refactoring after fixing HADOOP-12250.

I have an incomplete patch to refactor the ipc code but need to fix usages in 
the rest of the code.

 Refactor ipc.Server and implementations to reduce constructor bloat
 ---

 Key: HADOOP-12272
 URL: https://issues.apache.org/jira/browse/HADOOP-12272
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Arpit Agarwal

 {{ipc.Server}} and its implementations have constructors taking large number 
 of parameters. This code can be simplified quite a bit by just moving 
 RPC.Builder to the Server class and passing the builder object to 
 constructors.
 The refactoring should be safe based on the class annotations but need to 
 confirm no dependent components will break.
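
 The simplification described above can be sketched as follows (a hypothetical 
 illustration only; the field and method names are invented, not Hadoop's actual 
 ipc.Server or RPC.Builder API):

 ```java
 // Hypothetical sketch of the proposed refactoring: collapse many positional
 // constructor parameters into a Builder passed to the constructor.
 class Server {
     private final String bindAddress;
     private final int port;
     private final int numHandlers;

     // Constructors take the builder instead of a long parameter list.
     private Server(Builder b) {
         this.bindAddress = b.bindAddress;
         this.port = b.port;
         this.numHandlers = b.numHandlers;
     }

     String bindAddress() { return bindAddress; }
     int port() { return port; }
     int numHandlers() { return numHandlers; }

     static class Builder {
         private String bindAddress = "0.0.0.0";
         private int port = 0;
         private int numHandlers = 1;   // defaults replace optional arguments

         Builder bindAddress(String a) { this.bindAddress = a; return this; }
         Builder port(int p) { this.port = p; return this; }
         Builder numHandlers(int n) { this.numHandlers = n; return this; }
         Server build() { return new Server(this); }
     }
 }
 ```

 With this shape, adding a new server parameter (as in HADOOP-12250) touches 
 only the Builder rather than every constructor signature in every subclass.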



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11696) update compatibility documentation to reflect only API changes matter

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641155#comment-14641155
 ] 

Allen Wittenauer commented on HADOOP-11696:
---

Ran some stats today:

* 2.3.0 - 1 
* 2.4.0 - 7
* 2.5.0 - 4
* 2.6.0 - 4
* 2.7.0 - 9
* 2.8.0 - 9 (not released)
* 3.0.0 - 36 (not released)



 update compatibility documentation to reflect only API changes matter
 -

 Key: HADOOP-11696
 URL: https://issues.apache.org/jira/browse/HADOOP-11696
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer

 Given the changes file generated by processing JIRA and current discussion in 
 common-dev, we should update the compatibility documents to reflect reality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12272) Refactor ipc.Server and implementations to reduce constructor bloat

2015-07-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12272:
---
Description: 
{{ipc.Server}} and its implementations have constructors taking large number of 
parameters. This code can be simplified quite a bit by just moving RPC.Builder 
to the Server class and passing the builder object to constructors.

The refactoring should be safe based on the class annotations but need to 
confirm no dependent components outside of HDFS, YARN and MR will break.

  was:
{{ipc.Server}} and its implementations have constructors taking large number of 
parameters. This code can be simplified quite a bit by just moving RPC.Builder 
to the Server class and passing the builder object to constructors.

The refactoring should be safe based on the class annotations but need to 
confirm no dependent components will break.


 Refactor ipc.Server and implementations to reduce constructor bloat
 ---

 Key: HADOOP-12272
 URL: https://issues.apache.org/jira/browse/HADOOP-12272
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Arpit Agarwal

 {{ipc.Server}} and its implementations have constructors taking large number 
 of parameters. This code can be simplified quite a bit by just moving 
 RPC.Builder to the Server class and passing the builder object to 
 constructors.
 The refactoring should be safe based on the class annotations but need to 
 confirm no dependent components outside of HDFS, YARN and MR will break.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11696) update compatibility documentation to reflect only API changes matter

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641155#comment-14641155
 ] 

Allen Wittenauer edited comment on HADOOP-11696 at 7/24/15 10:13 PM:
-

Ran some stats today:
|| Version || Incompatible Changes ||
| 2.3.0 | 1 |
| 2.4.0 | 7 |
| 2.5.0 | 4 |
| 2.6.0 | 4 |
| 2.7.0 | 9 |
| 2.8.0 | 9 (not released) |
| 3.0.0 | 36 (not released) |




was (Author: aw):
Ran some stats today:

* 2.3.0 - 1 
* 2.4.0 - 7
* 2.5.0 - 4
* 2.6.0 - 4
* 2.7.0 - 9
* 2.8.0 - 9 (not released)
* 3.0.0 - 36 (not released)



 update compatibility documentation to reflect only API changes matter
 -

 Key: HADOOP-11696
 URL: https://issues.apache.org/jira/browse/HADOOP-11696
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer

 Given the changes file generated by processing JIRA and current discussion in 
 common-dev, we should update the compatibility documents to reflect reality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12272) Refactor ipc.Server and implementations to reduce constructor bloat

2015-07-24 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-12272:
--

 Summary: Refactor ipc.Server and implementations to reduce 
constructor bloat
 Key: HADOOP-12272
 URL: https://issues.apache.org/jira/browse/HADOOP-12272
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Arpit Agarwal


{{ipc.Server}} and its implementations have constructors taking large number of 
parameters. This code can be simplified quite a bit by just moving RPC.Builder 
to the Server class and passing the builder object to constructors.

The refactoring should be safe based on the class annotations but need to 
confirm no dependent components will break.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12272) Refactor ipc.Server and implementations to reduce constructor bloat

2015-07-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12272:
---
Description: 
{{ipc.Server}} and its implementations have constructors taking large number of 
parameters. This code can be simplified quite a bit by just moving RPC.Builder 
to the Server class and passing the builder object to constructors.

The refactoring should be safe based on the class annotations but need to 
confirm no components outside of HDFS, YARN and MR will break.

  was:
{{ipc.Server}} and its implementations have constructors taking large number of 
parameters. This code can be simplified quite a bit by just moving RPC.Builder 
to the Server class and passing the builder object to constructors.

The refactoring should be safe based on the class annotations but need to 
confirm no dependent components outside of HDFS, YARN and MR will break.


 Refactor ipc.Server and implementations to reduce constructor bloat
 ---

 Key: HADOOP-12272
 URL: https://issues.apache.org/jira/browse/HADOOP-12272
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Arpit Agarwal

 {{ipc.Server}} and its implementations have constructors taking large number 
 of parameters. This code can be simplified quite a bit by just moving 
 RPC.Builder to the Server class and passing the builder object to 
 constructors.
 The refactoring should be safe based on the class annotations but need to 
 confirm no components outside of HDFS, YARN and MR will break.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12272) Refactor ipc.Server and implementations to reduce constructor bloat

2015-07-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641222#comment-14641222
 ] 

Arpit Agarwal edited comment on HADOOP-12272 at 7/24/15 11:18 PM:
--

Ran into this while adding a new Server constructor parameter for HADOOP-12250. 
Will work on the refactoring after fixing HADOOP-12250 so leaving unassigned 
for now.

I have an incomplete patch to refactor the ipc code but need to fix usages in 
the rest of the code.


was (Author: arpitagarwal):
Ran into this while adding a new Server constructor parameter for HADOOP-12250. 
Will probably perform the refactoring after fixing HADOOP-12250.

I have an incomplete patch to refactor the ipc code but need to fix usages in 
the rest of the code.

 Refactor ipc.Server and implementations to reduce constructor bloat
 ---

 Key: HADOOP-12272
 URL: https://issues.apache.org/jira/browse/HADOOP-12272
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Arpit Agarwal

 {{ipc.Server}} and its implementations have constructors taking large number 
 of parameters. This code can be simplified quite a bit by just moving 
 RPC.Builder to the Server class and passing the builder object to 
 constructors.
 The refactoring should be safe based on the class annotations but need to 
 confirm no dependent components will break.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased

2015-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11813:
-
Fix Version/s: (was: 3.0.0)
   2.8.0

 releasedocmaker.py should use today's date instead of unreleased
 

 Key: HADOOP-11813
 URL: https://issues.apache.org/jira/browse/HADOOP-11813
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Darrell Taylor
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-11813.001.patch, HADOOP-11813.patch


 After discussing with a few folks, it'd be more convenient if releasedocmaker 
 used the current date rather than unreleased when processing a version that 
 JIRA hasn't declared released.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640954#comment-14640954
 ] 

Allen Wittenauer commented on HADOOP-11731:
---

You'll need some modifications to pom.xml to use the version from yetus.  I 
wrote a patch somewhere, but haven't tested it very much.  Also keep in mind 
that releasedocmaker was built from the perspective that there would be 
multiple versions in the release directory so that it could build an index to 
all of them.



 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0, 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12270) builtin personality is too hadoop specific

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640914#comment-14640914
 ] 

Allen Wittenauer commented on HADOOP-12270:
---

I think a worthwhile exercise might be to pull *all* of the built-in tests out 
and force them into plug-ins.  This would probably highlight where we are 
making assumptions about Java.  Additionally, we clearly need a delete test 
and/or a substitute test method for personalities to call so that when they 
are passed, for example, a cc test, they can swap out javac instead.

 builtin personality is too hadoop specific
 --

 Key: HADOOP-12270
 URL: https://issues.apache.org/jira/browse/HADOOP-12270
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Allen Wittenauer

 As I work on TAP support and getting Hadoop to use it for shell unit tests, 
 I'm finding that the builtin personality is way too Hadoop (and maybe Apache) 
 specific.
 For example, if test-patch sees a .c file touched, why is it adding a javac 
 test?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-07-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640931#comment-14640931
 ] 

Andrew Wang commented on HADOOP-11731:
--

Aha, thanks for the heads up. I just pulled in HADOOP-11797 and HADOOP-11813.

I just did some test cherry-picks of everything that touches releasedocmaker in 
the HADOOP-12111 branch, and they came back clean. Cool if I just pull them 
down to trunk/branch-2?

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0, 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10854) unit tests for the shell scripts

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641153#comment-14641153
 ] 

Hadoop QA commented on HADOOP-10854:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 55s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 28 new or modified test files. |
| {color:green}+1{color} | javac |   7m 49s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 28s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck |   0m  6s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 21s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | common tests |  23m  7s | Tests passed in 
hadoop-common. |
| | |  59m 45s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747105/HADOOP-10854.04.patch |
| Optional Tests | shellcheck javadoc javac unit |
| git revision | trunk / 83fe34a |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7337/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7337/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7337/console |


This message was automatically generated.

 unit tests for the shell scripts
 

 Key: HADOOP-10854
 URL: https://issues.apache.org/jira/browse/HADOOP-10854
 Project: Hadoop Common
  Issue Type: Test
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-10854.00.patch, HADOOP-10854.01.patch, 
 HADOOP-10854.02.patch, HADOOP-10854.03.patch, HADOOP-10854.04.patch


 With HADOOP-9902 moving a lot of the core functionality to functions, we 
 should build some unit tests for them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-07-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641185#comment-14641185
 ] 

Andrew Wang commented on HADOOP-11731:
--

Thanks Allen. So if I understand you correctly, the remaining work is something 
like the following:

* Pull aforementioned changes from HADOOP-12111 branch to trunk/branch-2
* Fix the -Preleasedocs profile (looks like we need the per-project logic? I 
only see it in common's pom right now)
* Fix the create-release script (HADOOP-11793) and update the instructions 
telling RMs to run the lint mode.
* You mentioned the HowToRelease wiki instructions being incorrect, but I 
didn't catch what exactly was off. It says to close the JIRAs as the last step, 
which still seems okay.

Regarding the index, are you recommending we run the script for historical 
releases? Since we still host the docs for prior releases, seems like people 
could just look at the old CHANGES.txt there. I ask because I know you did a 
lot of JIRA gardening when testing the tool, and it sounded like we never fully 
reconciled JIRA and manually entered CHANGES.txt state.

Please add anything else I might have missed, I'm willing to take the brunt of 
the work (though your help would of course be appreciated too :)).

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0, 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10854) unit tests for the shell scripts

2015-07-24 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641184#comment-14641184
 ] 

Sean Busbey commented on HADOOP-10854:
--

{quote}
 * no longer fail when bats isn't installed, despite the fact it probably should
{quote}

Is failing with a nice error message only when {{-Pshelltest}} and no bats a 
problem for some reason? I'm just curious; I'd be happy to submit a follow on 
patch that did this with the enforcer plugin, for example, so that the failure 
was in the validate phase instead of test (or process-test-classes as the case 
may be).

 unit tests for the shell scripts
 

 Key: HADOOP-10854
 URL: https://issues.apache.org/jira/browse/HADOOP-10854
 Project: Hadoop Common
  Issue Type: Test
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-10854.00.patch, HADOOP-10854.01.patch, 
 HADOOP-10854.02.patch, HADOOP-10854.03.patch, HADOOP-10854.04.patch


 With HADOOP-9902 moving a lot of the core functionality to functions, we 
 should build some unit tests for them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641267#comment-14641267
 ] 

Allen Wittenauer commented on HADOOP-11731:
---


bq. Fix the -Preleasedocs profile (looks like we need the per-project logic? I 
only see it in common's pom right now)

You only need it in common with the proper flags.  The generated website points 
to the index which contains all of them. Separating them out is only useful to 
maybe some of the PMC and committers.  Everyone else treats Hadoop as one 
package.  Pretending otherwise is dumb.

bq. You mentioned the HowToRelease wiki instructions being incorrect, but I 
didn't catch what exactly was off. It says to close the JIRAs as the last step, 
which still seems okay.

There's some other stuff, but I don't remember off the top of my head.  It 
mainly had to do with the date reported in the generated notes vs. the tar ball 
and keeping them in sync.

bq. Regarding the index, are you recommending we run the script for historical 
releases? 

Yes.

bq. Since we still host the docs for prior releases, seems like people could 
just look at the old CHANGES.txt there.

It's not about backwards, it's about forwards.  At some point, (probably Hadoop 
2.12 or Hadoop 2.13, say autumn 2017 given current pace), there will be no more 
releases on the website that still have a CHANGES.txt file.   Then what?  If we 
generate the historical data, the answer is obvious.




 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0, 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11696) update compatibility documentation to reflect only API changes matter

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641268#comment-14641268
 ] 

Allen Wittenauer edited comment on HADOOP-11696 at 7/24/15 11:58 PM:
-

Yes. (except for the 3.x ones of course)


was (Author: aw):
Yes.

 update compatibility documentation to reflect only API changes matter
 -

 Key: HADOOP-11696
 URL: https://issues.apache.org/jira/browse/HADOOP-11696
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer

 Given the changes file generated by processing JIRA and current discussion in 
 common-dev, we should update the compatibility documents to reflect reality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11696) update compatibility documentation to reflect only API changes matter

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641268#comment-14641268
 ] 

Allen Wittenauer commented on HADOOP-11696:
---

Yes.

 update compatibility documentation to reflect only API changes matter
 -

 Key: HADOOP-11696
 URL: https://issues.apache.org/jira/browse/HADOOP-11696
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer

 Given the changes file generated by processing JIRA and current discussion in 
 common-dev, we should update the compatibility documents to reflect reality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10854) unit tests for the shell scripts

2015-07-24 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641290#comment-14641290
 ] 

Sean Busbey commented on HADOOP-10854:
--

I could make the enforcer plugin recognize that bats isn't installed and point 
folks to an installation HOWTO. That would leave things equally bad for the 
uninformed (i.e. those who don't pass {{-Pshelltest}}) while giving those at 
least willing to work on improvements a pointer on how to proceed.

 unit tests for the shell scripts
 

 Key: HADOOP-10854
 URL: https://issues.apache.org/jira/browse/HADOOP-10854
 Project: Hadoop Common
  Issue Type: Test
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-10854.00.patch, HADOOP-10854.01.patch, 
 HADOOP-10854.02.patch, HADOOP-10854.03.patch, HADOOP-10854.04.patch


 With HADOOP-9902 moving a lot of the core functionality to functions, we 
 should build some unit tests for them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12135) cleanup releasedocmaker

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641332#comment-14641332
 ] 

Hudson commented on HADOOP-12135:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8221 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8221/])
HADOOP-12135. cleanup releasedocmaker (wang: rev 
e8b62d11d460e9706e48df92a0b0a72f4a02d3f5)
* dev-support/releasedocmaker.py


 cleanup releasedocmaker
 ---

 Key: HADOOP-12135
 URL: https://issues.apache.org/jira/browse/HADOOP-12135
 Project: Hadoop Common
  Issue Type: Improvement
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0

 Attachments: HADOOP-12135.HADOOP-12111.00.patch


 For Yetus, releasedocmaker needs some work:
 * de-hadoop-ify it
 * still allow it to work w/4+ different projects simultaneously 
 * just running it w/out any options should provide more help I think
 probably other things too.





[jira] [Commented] (HADOOP-12237) releasedocmaker.py doesn't work behind a proxy

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641335#comment-14641335
 ] 

Hudson commented on HADOOP-12237:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8221 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8221/])
HADOOP-12237. releasedocmaker.py doesn't work behind a proxy (Tsuyoshi Ozawa 
via aw) (wang: rev adcf5dd94052481f66deaf402ac4ace1ffc06f49)
* dev-support/releasedocmaker.py


 releasedocmaker.py doesn't work behind a proxy
 --

 Key: HADOOP-12237
 URL: https://issues.apache.org/jira/browse/HADOOP-12237
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Fix For: 2.8.0

 Attachments: HADOOP-12237.001.patch


 HADOOP-12236 for Yetus.
 {quote}
 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.
 {quote}
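The fix idea can be sketched in Python 3 (hedged: this is illustrative only, not the actual HADOOP-12237 patch, which targets the older urllib used by dev-support/releasedocmaker.py). It makes the proxy handling explicit by seeding a {{ProxyHandler}} from {{urllib.request.getproxies()}}, which does read the proxy environment variables:

```python
# Illustrative sketch only -- not the actual HADOOP-12237 patch.
import urllib.request

def open_url(url):
    """Open a URL while honoring $http_proxy / $https_proxy."""
    # getproxies() reads the proxy environment variables that a bare
    # urlopen() call in the old urllib reportedly ignored.
    handler = urllib.request.ProxyHandler(urllib.request.getproxies())
    opener = urllib.request.build_opener(handler)
    return opener.open(url)
```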





[jira] [Commented] (HADOOP-10854) unit tests for the shell scripts

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641271#comment-14641271
 ] 

Allen Wittenauer commented on HADOOP-10854:
---

I think it's a problem because the people who will need unit testing the most 
are most likely the ones who won't have it installed.  I mean, even simple 
things like "hey, keep the case statements alphabetized" have been an uphill 
battle.  Manipulating global vars is downright dangerous for most of Hadoop's 
committers.

 unit tests for the shell scripts
 

 Key: HADOOP-10854
 URL: https://issues.apache.org/jira/browse/HADOOP-10854
 Project: Hadoop Common
  Issue Type: Test
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-10854.00.patch, HADOOP-10854.01.patch, 
 HADOOP-10854.02.patch, HADOOP-10854.03.patch, HADOOP-10854.04.patch


 With HADOOP-9902 moving a lot of the core functionality to functions, we 
 should build some unit tests for them.





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-07-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641320#comment-14641320
 ] 

Andrew Wang commented on HADOOP-11731:
--

Cool. I'm going to start by pulling in the changes from the branch to 
trunk/branch-2, and do whatever additional development is required there. Might 
ping you for reviews.

bq. You only need it in common with the proper flags

Hmm, so I work mostly on HDFS and Common, so would somewhat prefer not seeing 
YARN and MR mixed in. If you feel very strongly I can do 4-in-1, but I'm also 
willing to do the work to separate them out.

bq. It's not about backwards, it's about forwards

We've kept the docs for branch-1 releases and I hope 2.7.x would have similar 
legs, but fair enough. Do you have a sense for the state of JIRA fix versions? 
If it's just a matter of running the script and fixing the dates, easy. If more 
gardening is necessary, not so easy.

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0, 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.





[jira] [Updated] (HADOOP-12237) releasedocmaker.py doesn't work behind a proxy

2015-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12237:
-
Fix Version/s: (was: HADOOP-12111)
   2.8.0

 releasedocmaker.py doesn't work behind a proxy
 --

 Key: HADOOP-12237
 URL: https://issues.apache.org/jira/browse/HADOOP-12237
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Fix For: 2.8.0

 Attachments: HADOOP-12237.001.patch


 HADOOP-12236 for Yetus.
 {quote}
 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.
 {quote}





[jira] [Commented] (HADOOP-12202) releasedocmaker drops missing component and assignee entries

2015-07-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641328#comment-14641328
 ] 

Andrew Wang commented on HADOOP-12202:
--

I pulled this down to trunk/branch-2, thanks all.

 releasedocmaker drops missing component and assignee entries
 

 Key: HADOOP-12202
 URL: https://issues.apache.org/jira/browse/HADOOP-12202
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 2.8.0

 Attachments: HADOOP-12202.HADOOP-12111.00.patch, 
 HADOOP-12202.HADOOP-12111.01.patch


 After HADOOP-11807, releasedocmaker is dropping missing component and 
 assignee entries.  It shouldn't drop entries, even if they are errors that 
 lint mode will flag.





[jira] [Updated] (HADOOP-12202) releasedocmaker drops missing component and assignee entries

2015-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12202:
-
Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-12111)

 releasedocmaker drops missing component and assignee entries
 

 Key: HADOOP-12202
 URL: https://issues.apache.org/jira/browse/HADOOP-12202
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 2.8.0

 Attachments: HADOOP-12202.HADOOP-12111.00.patch, 
 HADOOP-12202.HADOOP-12111.01.patch


 After HADOOP-11807, releasedocmaker is dropping missing component and 
 assignee entries.  It shouldn't drop entries, even if they are errors that 
 lint mode will flag.





[jira] [Commented] (HADOOP-12237) releasedocmaker.py doesn't work behind a proxy

2015-07-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641329#comment-14641329
 ] 

Andrew Wang commented on HADOOP-12237:
--

I pulled this down to trunk/branch-2, thanks all.

 releasedocmaker.py doesn't work behind a proxy
 --

 Key: HADOOP-12237
 URL: https://issues.apache.org/jira/browse/HADOOP-12237
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Fix For: 2.8.0

 Attachments: HADOOP-12237.001.patch


 HADOOP-12236 for Yetus.
 {quote}
 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.
 {quote}





[jira] [Updated] (HADOOP-12237) releasedocmaker.py doesn't work behind a proxy

2015-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12237:
-
Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-12111)

 releasedocmaker.py doesn't work behind a proxy
 --

 Key: HADOOP-12237
 URL: https://issues.apache.org/jira/browse/HADOOP-12237
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Fix For: 2.8.0

 Attachments: HADOOP-12237.001.patch


 HADOOP-12236 for Yetus.
 {quote}
 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.
 {quote}





[jira] [Updated] (HADOOP-12202) releasedocmaker drops missing component and assignee entries

2015-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12202:
-
Fix Version/s: (was: HADOOP-12111)
   2.8.0

 releasedocmaker drops missing component and assignee entries
 

 Key: HADOOP-12202
 URL: https://issues.apache.org/jira/browse/HADOOP-12202
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 2.8.0

 Attachments: HADOOP-12202.HADOOP-12111.00.patch, 
 HADOOP-12202.HADOOP-12111.01.patch


 After HADOOP-11807, releasedocmaker is dropping missing component and 
 assignee entries.  It shouldn't drop entries, even if they are errors that 
 lint mode will flag.





[jira] [Updated] (HADOOP-12135) cleanup releasedocmaker

2015-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12135:
-
Fix Version/s: (was: HADOOP-12111)
   2.8.0

 cleanup releasedocmaker
 ---

 Key: HADOOP-12135
 URL: https://issues.apache.org/jira/browse/HADOOP-12135
 Project: Hadoop Common
  Issue Type: Improvement
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0

 Attachments: HADOOP-12135.HADOOP-12111.00.patch


 For Yetus, releasedocmaker needs some work:
 * de-hadoop-ify it
 * still allow it to work w/4+ different projects simultaneously 
 * just running it w/out any options should provide more help I think
 probably other things too.





[jira] [Commented] (HADOOP-12261) Surefire needs to make sure the JVMs it fires up are 64-bit

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641368#comment-14641368
 ] 

Allen Wittenauer commented on HADOOP-12261:
---

bq. I  disagree that the PMC is completely untrustworthy for occasionally 
allowing incompatible changes in branch-2.

Yes, you're right.  I should be totally forgiving of the data loss, broken 
jobs, lost functionality, and angry end users caused by these occasional (34 in 
branch-2 since 2.3.0, including the 9 in 2.8.0 already) 
actually-marked-as-incompatible changes.  (Who knows how many more have slipped 
through the cracks).

bq.  let's try to keep this discussion constructive and focused on the issue at 
hand.

Fine.  It won't matter anyway.  PMC members always use the "we broke JDK6 so 
now we can do anything" excuse for every incompatible change, so we'll drop 
32-bit in 2.8.0 regardless of the end user impact.

 Surefire needs to make sure the JVMs it fires up are 64-bit
 ---

 Key: HADOOP-12261
 URL: https://issues.apache.org/jira/browse/HADOOP-12261
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.1
Reporter: Alan Burlison
Assignee: Alan Burlison

 hadoop-project/pom.xml sets maven-surefire-plugin.argLine to include 
 -Xmx4096m. Allocating that amount of memory requires a 64-bit JVM, but on 
 platforms with both 32- and 64-bit JVMs surefire runs the 32-bit version by 
 default, and tests fail to start as a result. -d64 should be added to the 
 command-line arguments to ensure a 64-bit JVM is always used.





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641369#comment-14641369
 ] 

Allen Wittenauer commented on HADOOP-11731:
---

bq. Hmm, so I work mostly on HDFS and Common, so would somewhat prefer not 
seeing YARN and MR mixed in. If you feel very strongly I can do 4-in-1, but I'm 
also willing to do the work to separate them out.

Yes, I feel *very* strongly about this one.  Separating out the release notes 
into subprojects may be one of the most end user unfriendly things we do (#1 is 
the ridiculous number of incompatible changes).  Again, as I pointed out above, 
the *ONLY* people who truly, really care about these things being separated out 
are the committers and, frankly, the reasoning there is void, given that they 
should know how to build their own set of notes if they really are too lazy to 
grep them out.  Everyone else wants to see one file that has all the big 
changes, aka a single, consolidated release note file.

bq.  Do you have a sense for the state of JIRA fix versions? If it's just a 
matter of running the script and fixing the dates, easy. If more gardening is 
necessary, not so easy.

I know that up to 2.6.0 is correct.  But lint mode should tell you a lot about 
people doing things like assigning multiple versions to a JIRA.

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0, 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641372#comment-14641372
 ] 

Allen Wittenauer commented on HADOOP-11731:
---

BTW: https://github.com/aw-altiscale/eco-release-metadata is usually updated 
weekly.

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0, 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.





[jira] [Commented] (HADOOP-11696) update compatibility documentation to reflect only API changes matter

2015-07-24 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641260#comment-14641260
 ] 

Sean Busbey commented on HADOOP-11696:
--

are the above flagged changes incompatible in ways that the compat document 
says won't happen in minor releases?

 update compatibility documentation to reflect only API changes matter
 -

 Key: HADOOP-11696
 URL: https://issues.apache.org/jira/browse/HADOOP-11696
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer

 Given the changes file generated by processing JIRA and current discussion in 
 common-dev, we should update the compatibility documents to reflect reality.





[jira] [Commented] (HADOOP-12135) cleanup releasedocmaker

2015-07-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641326#comment-14641326
 ] 

Andrew Wang commented on HADOOP-12135:
--

I pulled this down to trunk/branch-2, thanks all.

 cleanup releasedocmaker
 ---

 Key: HADOOP-12135
 URL: https://issues.apache.org/jira/browse/HADOOP-12135
 Project: Hadoop Common
  Issue Type: Improvement
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0

 Attachments: HADOOP-12135.HADOOP-12111.00.patch


 For Yetus, releasedocmaker needs some work:
 * de-hadoop-ify it
 * still allow it to work w/4+ different projects simultaneously 
 * just running it w/out any options should provide more help I think
 probably other things too.





[jira] [Updated] (HADOOP-12135) cleanup releasedocmaker

2015-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12135:
-
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HADOOP-12111)

 cleanup releasedocmaker
 ---

 Key: HADOOP-12135
 URL: https://issues.apache.org/jira/browse/HADOOP-12135
 Project: Hadoop Common
  Issue Type: Improvement
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0

 Attachments: HADOOP-12135.HADOOP-12111.00.patch


 For Yetus, releasedocmaker needs some work:
 * de-hadoop-ify it
 * still allow it to work w/4+ different projects simultaneously 
 * just running it w/out any options should provide more help I think
 probably other things too.





[jira] [Updated] (HADOOP-11807) add a lint mode to releasedocmaker

2015-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11807:
-
Fix Version/s: (was: HADOOP-12111)
   2.8.0

I pulled this down to trunk/branch-2, thanks all.

 add a lint mode to releasedocmaker
 --

 Key: HADOOP-11807
 URL: https://issues.apache.org/jira/browse/HADOOP-11807
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, documentation, yetus
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: ramtin
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11807.001.patch, HADOOP-11807.002.patch, 
 HADOOP-11807.003.patch, HADOOP-11807.004.patch, HADOOP-11807.005.patch


 * check for missing components (error)
 * check for missing assignee (error)
 * check for common version problems (warning)
 * add an error message for missing release notes





[jira] [Commented] (HADOOP-12202) releasedocmaker drops missing component and assignee entries

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641333#comment-14641333
 ] 

Hudson commented on HADOOP-12202:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8221 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8221/])
HADOOP-12202. releasedocmaker drops missing component and assignee entries (aw) 
(wang: rev d7697831e3b24bec149990feed819e7d96f78184)
* dev-support/releasedocmaker.py


 releasedocmaker drops missing component and assignee entries
 

 Key: HADOOP-12202
 URL: https://issues.apache.org/jira/browse/HADOOP-12202
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 2.8.0

 Attachments: HADOOP-12202.HADOOP-12111.00.patch, 
 HADOOP-12202.HADOOP-12111.01.patch


 After HADOOP-11807, releasedocmaker is dropping missing component and 
 assignee entries.  It shouldn't drop entries, even if they are errors that 
 lint mode will flag.





[jira] [Commented] (HADOOP-11807) add a lint mode to releasedocmaker

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641334#comment-14641334
 ] 

Hudson commented on HADOOP-11807:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8221 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8221/])
HADOOP-11807. add a lint mode to releasedocmaker (ramtin via aw) (wang: rev 
098ba450cc98475b84d60bb5ac3bd7b558b2a67c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/releasedocmaker.py


 add a lint mode to releasedocmaker
 --

 Key: HADOOP-11807
 URL: https://issues.apache.org/jira/browse/HADOOP-11807
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, documentation, yetus
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: ramtin
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11807.001.patch, HADOOP-11807.002.patch, 
 HADOOP-11807.003.patch, HADOOP-11807.004.patch, HADOOP-11807.005.patch


 * check for missing components (error)
 * check for missing assignee (error)
 * check for common version problems (warning)
 * add an error message for missing release notes





[jira] [Commented] (HADOOP-12261) Surefire needs to make sure the JVMs it fires up are 64-bit

2015-07-24 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641331#comment-14641331
 ] 

Colin Patrick McCabe commented on HADOOP-12261:
---

[~alanburlison], did you try running the unit tests with {{\-Xmx2048m}}?  If 
that works, then it would be an easy solution to this issue.

[~aw], I disagree that the PMC is completely untrustworthy for occasionally 
allowing incompatible changes in branch-2.  Sometimes incompatible changes are 
the only way forward that makes sense.  For example, when we moved to JDK7, 
very few users were still on JDK6 and we received no major complaints.  I'm 
sure there are things that we could do better (especially in the area of 
stability) but let's try to keep this discussion constructive and focused on 
the issue at hand.

If we can't easily fix the unit tests to run in 2048 megs, we should start a 
thread on the dev and user list about whether running the unit tests under 
32-bit should still be supported.  (I'm not even talking about dropping support 
for 32-bit deployments, but just for unit tests.)

 Surefire needs to make sure the JVMs it fires up are 64-bit
 ---

 Key: HADOOP-12261
 URL: https://issues.apache.org/jira/browse/HADOOP-12261
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.1
Reporter: Alan Burlison
Assignee: Alan Burlison

 hadoop-project/pom.xml sets maven-surefire-plugin.argLine to include 
 -Xmx4096m. Allocating that amount of memory requires a 64-bit JVM, but on 
 platforms with both 32- and 64-bit JVMs surefire runs the 32-bit version by 
 default, and tests fail to start as a result. -d64 should be added to the 
 command-line arguments to ensure a 64-bit JVM is always used.





[jira] [Commented] (HADOOP-10854) unit tests for the shell scripts

2015-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14641370#comment-14641370
 ] 

Allen Wittenauer commented on HADOOP-10854:
---

Actually, if the code doesn't fail if bats isn't installed, then is there any 
reason not to have it turned on by default?  This would allow us to put up a 
big "Yo, if you are testing shell code, install bats" message.

 unit tests for the shell scripts
 

 Key: HADOOP-10854
 URL: https://issues.apache.org/jira/browse/HADOOP-10854
 Project: Hadoop Common
  Issue Type: Test
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-10854.00.patch, HADOOP-10854.01.patch, 
 HADOOP-10854.02.patch, HADOOP-10854.03.patch, HADOOP-10854.04.patch


 With HADOOP-9902 moving a lot of the core functionality to functions, we 
 should build some unit tests for them.





[jira] [Updated] (HADOOP-12269) Update aws-sdk dependency version

2015-07-24 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor updated HADOOP-12269:
---
Description: This was originally part of HADOOP-11684, pulling out to this 
separate subtask as requested by [~ste...@apache.org]  (was: This was 
originally part of HADOOP-11684)

 Update aws-sdk dependency version
 -

 Key: HADOOP-12269
 URL: https://issues.apache.org/jira/browse/HADOOP-12269
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor

 This was originally part of HADOOP-11684, pulling out to this separate 
 subtask as requested by [~ste...@apache.org]





[jira] [Commented] (HADOOP-11487) FileNotFound on distcp to s3n due to creation inconsistency

2015-07-24 Thread Philip Deegan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14640362#comment-14640362
 ] 

Philip Deegan commented on HADOOP-11487:


Is it possible DRILL-3546 is similar?

 FileNotFound on distcp to s3n due to creation inconsistency 
 

 Key: HADOOP-11487
 URL: https://issues.apache.org/jira/browse/HADOOP-11487
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, fs/s3
Reporter: Paulo Motta

 I'm trying to copy a large amount of files from HDFS to S3 via distcp and I'm 
 getting the following exception:
 {code:java}
 2015-01-16 20:53:18,187 ERROR [main] 
 org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying 
 hdfs://10.165.35.216/hdfsFolder/file.gz to s3n://s3-bucket/file.gz
 java.io.FileNotFoundException: No such file or directory 
 's3n://s3-bucket/file.gz'
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
   at 
 org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 2015-01-16 20:53:18,276 WARN [main] org.apache.hadoop.mapred.YarnChild: 
 Exception running child : java.io.FileNotFoundException: No such file or 
 directory 's3n://s3-bucket/file.gz'
   at 
 org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:445)
   at 
 org.apache.hadoop.tools.util.DistCpUtils.preserve(DistCpUtils.java:187)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:233)
   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
 {code}
 However, when I try {{hadoop fs -ls s3n://s3-bucket/file.gz}} the file is there. 
 So the job failure is probably due to Amazon's S3 eventual consistency.
 In my opinion, in order to fix this problem NativeS3FileSystem.getFileStatus 
 must use fs.s3.maxRetries property in order to avoid failures like this.
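The proposed fix is essentially "retry the metadata lookup until the newly written object becomes visible." A retry-with-backoff sketch of that idea (illustrative Python only; the actual change would live in the Java NativeS3FileSystem.getFileStatus and be bounded by fs.s3.maxRetries):

```python
# Retry-with-backoff sketch of the idea above. Illustrative only: the
# real fix belongs in NativeS3FileSystem.getFileStatus (Java), bounded
# by the fs.s3.maxRetries configuration property.
import time

def retry(fn, max_retries=4, base_delay=0.1, retry_on=(FileNotFoundError,)):
    """Call fn(), retrying on the given exceptions with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries:
                raise  # still failing after the last allowed retry
            time.sleep(base_delay * (2 ** attempt))
```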





[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14640372#comment-14640372
 ] 

Hudson commented on HADOOP-12009:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #266 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/266/])
HADOOP-12009: Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman) (jghoman: 
rev ab3197c20452e0dd908193d6854c204e6ee34645)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, 
 HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}  /**
* List the statuses of the files/directories in the given path if the path 
 is
* a directory.
* 
* @param f given path
* @return the statuses of the files/directories in the given patch
* @throws FileNotFoundException when the path does not exist;
* IOException see specific implementation
*/
   public abstract FileStatus[] listStatus(Path f) throws 
 FileNotFoundException, 
  IOException;{code}
 However, FileSystemContractBaseTest, expects the elements to come back sorted:
 {code}Path[] testDirs = { path("/test/hadoop/a"),
 path("/test/hadoop/b"),
 path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path(/test/hadoop));
 assertEquals(3, paths.length);
 assertEquals(path(/test/hadoop/a), paths[0].getPath());
 assertEquals(path(/test/hadoop/b), paths[1].getPath());
 assertEquals(path(/test/hadoop/c), paths[2].getPath());{code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.
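 A minimal sketch of an order-insensitive check, assuming a hypothetical helper
 (the real fix lives in FileSystemContractBaseTest; names here are illustrative):
 
```java
import java.util.Arrays;

// Sketch: instead of asserting paths[0], paths[1], ... in a fixed order,
// sort both the expected and the returned path strings before comparing.
public class ListStatusOrderInsensitive {

    // Returns true if the two listings contain the same paths,
    // regardless of the order the FileSystem implementation returned them in.
    static boolean samePaths(String[] expected, String[] actual) {
        String[] e = expected.clone();
        String[] a = actual.clone();
        Arrays.sort(e);
        Arrays.sort(a);
        return Arrays.equals(e, a);
    }

    public static void main(String[] args) {
        String[] expected = { "/test/hadoop/a", "/test/hadoop/b", "/test/hadoop/c" };
        // Simulated listStatus() result in a different, but still valid, order.
        String[] actual = { "/test/hadoop/c", "/test/hadoop/a", "/test/hadoop/b" };
        System.out.println(samePaths(expected, actual)); // prints "true"
    }
}
```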



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12189) Improve CallQueueManager#swapQueue to make queue elements drop nearly impossible.

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14640376#comment-14640376
 ] 

Hudson commented on HADOOP-12189:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #266 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/266/])
HADOOP-12189. Improve CallQueueManager#swapQueue to make dropping queue 
elements nearly impossible. Contributed by Zhihai Xu. (wang: rev 
6736a1ab7033523ed5f304fdfed46d7f348665b4)
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java


 Improve CallQueueManager#swapQueue to make queue elements drop nearly 
 impossible.
 -

 Key: HADOOP-12189
 URL: https://issues.apache.org/jira/browse/HADOOP-12189
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, test
Affects Versions: 2.7.1
Reporter: zhihai xu
Assignee: zhihai xu
 Fix For: 2.8.0

 Attachments: HADOOP-12189.000.patch, HADOOP-12189.001.patch, 
 HADOOP-12189.none_guarantee.000.patch, HADOOP-12189.none_guarantee.001.patch, 
 HADOOP-12189.none_guarantee.002.patch


 Improve CallQueueManager#swapQueue to make dropping queue elements nearly 
 impossible. This is a trade-off between performance and functionality: even 
 in the very rare case where one element is dropped, it is not the end of the 
 world, since the client can still recover via timeout.
 CallQueueManager may sometimes drop elements from the queue when calling 
 {{swapQueue}}. 
 The following test failure from TestCallQueueManager shows that some elements 
 in the queue were dropped.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7150/testReport/org.apache.hadoop.ipc/TestCallQueueManager/testSwapUnderContention/
 {code}
 java.lang.AssertionError: expected:<27241> but was:<27245>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.ipc.TestCallQueueManager.testSwapUnderContention(TestCallQueueManager.java:220)
 {code}
 It looks like the elements in the queue were dropped by 
 {{CallQueueManager#swapQueue}}.
 Looking at the implementation of {{CallQueueManager#swapQueue}}, there is a 
 possibility that elements in the queue are dropped. If the queue is full, the 
 thread calling {{CallQueueManager#put}} blocks for a long time. It may put the 
 element into the old queue after the queue in {{takeRef}} has been changed by 
 swapQueue; that element in the old queue will then be dropped.
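 The race, and one way to make such a drop nearly impossible, can be sketched
 as below. This is an illustrative mock, not the actual CallQueueManager code
 or the patch attached to this issue; the class and method names are
 hypothetical. The idea is to re-check the queue reference after a blocking
 put and retry on the new queue if a swap happened in between:
 
```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the race: a producer reads the queue reference, the swapper
// replaces it, and the element lands in the old, abandoned queue. The
// re-check-and-retry loop in put() is one mitigation under these assumptions.
public class SwapQueueSketch<E> {
    private final AtomicReference<BlockingQueue<E>> putRef =
        new AtomicReference<>(new LinkedBlockingQueue<>());

    public void put(E e) throws InterruptedException {
        while (true) {
            BlockingQueue<E> q = putRef.get();
            q.put(e); // may block for a long time if the queue is full
            // If the queue was swapped while we were blocked, try to pull
            // the element back out of the old queue and retry on the new one.
            if (q == putRef.get() || !q.remove(e)) {
                // Either no swap happened, or a consumer already took the
                // element from the old queue; in both cases it was delivered.
                return;
            }
        }
    }

    public void swapQueue(BlockingQueue<E> newQueue) {
        putRef.set(newQueue);
    }

    BlockingQueue<E> current() {
        return putRef.get();
    }

    public static void main(String[] args) throws InterruptedException {
        SwapQueueSketch<String> manager = new SwapQueueSketch<>();
        manager.put("call-1");
        manager.swapQueue(new LinkedBlockingQueue<>());
        manager.put("call-2");
        System.out.println(manager.current().size()); // prints "1"
    }
}
```
 
 In the real multi-threaded case the remove() can still race with a consumer,
 which is why a drop becomes "nearly impossible" rather than impossible.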



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

