[jira] [Commented] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-04-02 Thread Takenori Sato (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392251#comment-14392251
 ] 

Takenori Sato commented on HADOOP-11742:


_mkdir_ and _ls_ worked as expected with the fix.

{code}
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -Dfs.s3a.access.key=ACCESS_KEY 
-Dfs.s3a.secret.key=SECRET_KEY -ls s3a://s3atest/
15/04/02 06:52:55 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://s3atest/ ()
15/04/02 06:52:55 DEBUG s3a.S3AFileSystem: s3a://s3atest/ is empty? true
15/04/02 06:52:55 DEBUG s3a.S3AFileSystem: List status for path: s3a://s3atest/
15/04/02 06:52:55 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://s3atest/ ()
15/04/02 06:52:55 DEBUG s3a.S3AFileSystem: s3a://s3atest/ is empty? true
15/04/02 06:52:55 DEBUG s3a.S3AFileSystem: listStatus: doing listObjects for 
directory 
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -Dfs.s3a.access.key=ACCESS_KEY 
-Dfs.s3a.secret.key=SECRET_KEY -mkdir s3a://s3atest/root
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://s3atest/root (root)
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3atest/root
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://s3atest/ ()
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: s3a://s3atest/ is empty? true
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: Making directory: s3a://s3atest/root
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://s3atest/root (root)
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3atest/root
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://s3atest/root (root)
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3atest/root
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://s3atest/ ()
15/04/02 06:53:20 DEBUG s3a.S3AFileSystem: s3a://s3atest/ is empty? true
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -Dfs.s3a.access.key=ACCESS_KEY 
-Dfs.s3a.secret.key=SECRET_KEY -ls s3a://s3atest/
15/04/02 06:53:26 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://s3atest/ ()
15/04/02 06:53:26 DEBUG s3a.S3AFileSystem: s3a://s3atest/ is empty? false
15/04/02 06:53:26 DEBUG s3a.S3AFileSystem: List status for path: s3a://s3atest/
15/04/02 06:53:26 DEBUG s3a.S3AFileSystem: Getting path status for 
s3a://s3atest/ ()
15/04/02 06:53:26 DEBUG s3a.S3AFileSystem: s3a://s3atest/ is empty? false
15/04/02 06:53:26 DEBUG s3a.S3AFileSystem: listStatus: doing listObjects for 
directory 
15/04/02 06:53:26 DEBUG s3a.S3AFileSystem: Adding: rd: s3a://s3atest/root
Found 1 items
drwxrwxrwx   -  0 1970-01-01 00:00 s3a://s3atest/root 
{code}

The created directory didn't become visible immediately, but the subsequent 
_ls_ showed that it had been created successfully.

 mkdir by file system shell fails on an empty bucket
 ---

 Key: HADOOP-11742
 URL: https://issues.apache.org/jira/browse/HADOOP-11742
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.0
 Environment: CentOS 7
Reporter: Takenori Sato
Assignee: Takenori Sato
Priority: Minor
 Attachments: HADOOP-11742-branch-2.7.001.patch, 
 HADOOP-11742-branch-2.7.002.patch, HADOOP-11742-branch-2.7.003-1.patch, 
 HADOOP-11742-branch-2.7.003-2.patch


 I have built the latest 2.7 and tried S3AFileSystem, and found that _mkdir_ 
 fails on an empty bucket (named *s3a* here) as follows:
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for 
 s3a://s3a/foo (foo)
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 mkdir: `s3a://s3a/foo': No such file or directory
 {code}
 So does _ls_.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 ls: `s3a://s3a/': No such file or directory
 {code}
 This is how it works via s3n.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 Found 1 items
 drwxrwxrwx   -  0 1970-01-01 00:00 s3n://s3n/foo
 {code}
 The snapshot is the following:
 {quote}
 \# git branch
 \* branch-2.7
   trunk
 \# git log
 commit 929b04ce3a4fe419dece49ed68d4f6228be214c1
 Author: Harsh J ha...@cloudera.com
 Date:   Sun Mar 22 10:18:32 2015 +0530
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11784) failed to locate Winutils for win 32 platform

2015-04-02 Thread Gaurav Tiwari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392381#comment-14392381
 ] 

Gaurav Tiwari commented on HADOOP-11784:


This is basically related to winutils.exe. I couldn't find a winutils.exe build 
compatible with my 32-bit Windows system.
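For what it's worth, the check that produces the "Could not locate executable" error above can be sketched in a standalone form like this (an illustrative approximation, not the actual org.apache.hadoop.util.Shell code; the class and method names here are hypothetical):

```java
import java.io.File;
import java.io.IOException;

// Rough sketch of the lookup that Shell performs for winutils.exe:
// it resolves the executable under <hadoop.home>/bin and fails with
// an IOException naming the missing path when it is not present.
public class WinutilsLookupSketch {
    static String qualifiedBinPath(String hadoopHome, String executable)
            throws IOException {
        File exe = new File(new File(hadoopHome, "bin"), executable);
        if (!exe.isFile()) {
            // Mirrors the error text seen in the report above.
            throw new IOException("Could not locate executable " + exe
                + " in the Hadoop binaries.");
        }
        return exe.getAbsolutePath();
    }

    public static void main(String[] args) {
        try {
            qualifiedBinPath("C:/no-such-hadoop-home", "winutils.exe");
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why simply dropping any winutils.exe into the bin folder is not enough: the binary must exist at exactly the resolved path and be a build matching the platform (a 32-bit build for a 32-bit Windows system).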


 failed to locate Winutils for win 32 platform
 -

 Key: HADOOP-11784
 URL: https://issues.apache.org/jira/browse/HADOOP-11784
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.6.0
Reporter: Gaurav Tiwari

 While running a MapReduce example, I first got this error:
 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary 
 path
 So I downloaded a winutils build and updated the bin folder, but on executing 
 the same command again I get the same error:
 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary 
 path
 java.io.IOException: Could not locate executable 
 C:\hadoop-2.6.0\bin\winutils.exe in the Hadoop binaries.
 at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355)
 at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370)
 at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363)
 at 
 org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:438)
 at 
 org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:484)
 at 
 org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
 at 
 org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
 at org.apache.hadoop.examples.Grep.main(Grep.java:101)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
 at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
 at 
 org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
 15/04/01 19:12:51 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 15/04/01 19:12:52 INFO Configuration.deprecation: session.id is deprecated. 
 Instead, use dfs.metrics.session-id
 15/04/01 19:12:52 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
 processName=JobTracker, sessionId=
 java.lang.NullPointerException
 at java.lang.ProcessBuilder.start(ProcessBuilder.java:1011)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
 at org.apache.hadoop.util.Shell.run(Shell.java:455)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:656)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:444)
 at 
 org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:293)
 at 
 org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:437)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
 at org.apache.hadoop.examples.Grep.run(Grep.java:77)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.examples.Grep.main(Grep.java:101)

[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2015-04-02 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392409#comment-14392409
 ] 

Kai Zheng commented on HADOOP-11540:


I have an initial patch for this. For this rather early RS coder based on 
ISA-L, using the benchmark test framework from HADOOP-11588, I got the following 
results on my development machine. The speedup is about 20X! I will also test it 
on more serious servers.
{noformat}
Run encoding for JavaRSCoder takes: 12 seconds
Run decoding for JavaRSCoder takes: 12 seconds
Run encoding for ISARSCoder takes: 624 milliseconds
Run decoding for ISARSCoder takes: 737 milliseconds
{noformat}
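As a sanity check on the claimed speedup, the reported times work out to roughly 19x for encoding and 16x for decoding (the SpeedupCheck helper below is illustrative arithmetic, not part of the patch):

```java
// Back-of-the-envelope check of the reported benchmark numbers:
// JavaRSCoder took 12 s for each of encode and decode, while the
// ISA-L-based coder took 624 ms (encode) and 737 ms (decode).
public class SpeedupCheck {
    static long speedup(long baselineMs, long optimizedMs) {
        return baselineMs / optimizedMs;  // integer ratio is enough here
    }

    public static void main(String[] args) {
        System.out.println("encode: " + speedup(12_000, 624) + "x");
        System.out.println("decode: " + speedup(12_000, 737) + "x");
    }
}
```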

 Raw Reed-Solomon coder using Intel ISA-L library
 

 Key: HADOOP-11540
 URL: https://issues.apache.org/jira/browse/HADOOP-11540
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Zhe Zhang
Assignee: Kai Zheng

 This is to provide RS codec implementation using Intel ISA-L library for 
 encoding and decoding.





[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-04-02 Thread Takenori Sato (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takenori Sato updated HADOOP-11742:
---
Attachment: HADOOP-11742-branch-2.7.003-1.patch

This is the patch to fix _S3AFileSystem#getFileStatus_. A dedicated code path 
for handling the root directory was added, which is entered only when 
key.isEmpty() == true.
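The shape of that special case can be sketched in a standalone form (illustrative only, not the actual S3AFileSystem code; statusFor and its return strings are hypothetical stand-ins for the real getFileStatus logic):

```java
// Sketch of the root-directory special case: when the key is empty,
// the path is the bucket root, which always exists as a directory,
// even on a completely empty bucket. Without this branch, probing S3
// for the root of an empty bucket yields "Not Found", which is what
// made mkdir and ls fail.
public class RootStatusSketch {
    static String statusFor(String key, boolean objectExists) {
        if (key.isEmpty()) {
            return "directory";  // bucket root: never "not-found"
        }
        return objectExists ? "file-or-dir" : "not-found";
    }

    public static void main(String[] args) {
        System.out.println(statusFor("", false));    // root of an empty bucket
        System.out.println(statusFor("foo", false)); // missing object
    }
}
```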

 mkdir by file system shell fails on an empty bucket
 ---

 Key: HADOOP-11742
 URL: https://issues.apache.org/jira/browse/HADOOP-11742
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.0
 Environment: CentOS 7
Reporter: Takenori Sato
Assignee: Takenori Sato
Priority: Minor
 Attachments: HADOOP-11742-branch-2.7.001.patch, 
 HADOOP-11742-branch-2.7.002.patch, HADOOP-11742-branch-2.7.003-1.patch


 I have built the latest 2.7 and tried S3AFileSystem, and found that _mkdir_ 
 fails on an empty bucket (named *s3a* here) as follows:
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for 
 s3a://s3a/foo (foo)
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 mkdir: `s3a://s3a/foo': No such file or directory
 {code}
 So does _ls_.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 ls: `s3a://s3a/': No such file or directory
 {code}
 This is how it works via s3n.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 Found 1 items
 drwxrwxrwx   -  0 1970-01-01 00:00 s3n://s3n/foo
 {code}
 The snapshot is the following:
 {quote}
 \# git branch
 \* branch-2.7
   trunk
 \# git log
 commit 929b04ce3a4fe419dece49ed68d4f6228be214c1
 Author: Harsh J ha...@cloudera.com
 Date:   Sun Mar 22 10:18:32 2015 +0530
 {quote}





[jira] [Commented] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-04-02 Thread Takenori Sato (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392231#comment-14392231
 ] 

Takenori Sato commented on HADOOP-11742:


Patches are verified as follows.

1. run TestS3AContractRootDir to see it succeeds

{code}
---
 T E S T S
---
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.855 sec - in 
org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir

Results :

Tests run: 5, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 10.341 s
[INFO] Finished at: 2015-04-02T05:41:48+00:00
[INFO] Final Memory: 28M/407M
[INFO] 
{code}

2. apply the test patch (003-2), and run TestS3AContractRootDir

{code}
---
 T E S T S
---
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
Tests run: 5, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 21.296 sec  
FAILURE! - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir)
  Time elapsed: 4.608 sec   ERROR!
java.io.FileNotFoundException: No such file or directory: /
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:996)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.assertIsDirectory(ContractTestUtils.java:464)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

testRmRootRecursive(org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir)  
Time elapsed: 2.509 sec   ERROR!
java.io.FileNotFoundException: No such file or directory: /
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:996)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.assertIsDirectory(ContractTestUtils.java:464)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmRootRecursive(AbstractContractRootDirectoryTest.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

testCreateFileOverRoot(org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir)
  Time elapsed: 3.006 sec   ERROR!
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS 
Service: Amazon S3, AWS Request ID: 2B352694A5577C62, AWS Error Code: 
MalformedXML, AWS Error Message: 

[jira] [Commented] (HADOOP-11781) rewrite smart-apply-patch.sh

2015-04-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392341#comment-14392341
 ] 

Hadoop QA commented on HADOOP-11781:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12708928/HADOOP-11781-02.patch
  against trunk revision 867d5d2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6049//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6049//console

This message is automatically generated.

 rewrite smart-apply-patch.sh
 

 Key: HADOOP-11781
 URL: https://issues.apache.org/jira/browse/HADOOP-11781
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Raymie Stata
 Attachments: HADOOP-11781-01.patch, HADOOP-11781-02.patch


 smart-apply-patch.sh has a few race conditions and is just generally crufty.  
 It should be rewritten.





[jira] [Updated] (HADOOP-11781) rewrite smart-apply-patch.sh

2015-04-02 Thread Raymie Stata (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymie Stata updated HADOOP-11781:
--
Assignee: Raymie Stata
Release Note: Now auto-downloads patch from issue-id; fixed race 
conditions; fixed bug affecting some patches.
  Status: Patch Available  (was: Open)

 rewrite smart-apply-patch.sh
 

 Key: HADOOP-11781
 URL: https://issues.apache.org/jira/browse/HADOOP-11781
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Raymie Stata
 Attachments: HADOOP-11781-01.patch


 smart-apply-patch.sh has a few race conditions and is just generally crufty.  
 It should be rewritten.





[jira] [Commented] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-04-02 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392304#comment-14392304
 ] 

Kai Zheng commented on HADOOP-11740:


I have reviewed the code and discussed it with [~zhz] offline.
Zhe, would you update the patch? Or please let me know if there is anything I can help with.

 Combine erasure encoder and decoder interfaces
 --

 Key: HADOOP-11740
 URL: https://issues.apache.org/jira/browse/HADOOP-11740
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HADOOP-11740-000.patch


 Rationale [discussed | 
 https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
  under HDFS-7337.





[jira] [Commented] (HADOOP-11784) failed to locate Winutils for win 32 platform

2015-04-02 Thread Gaurav Tiwari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392365#comment-14392365
 ] 

Gaurav Tiwari commented on HADOOP-11784:


Yes, I have done that.

 failed to locate Winutils for win 32 platform
 -

 Key: HADOOP-11784
 URL: https://issues.apache.org/jira/browse/HADOOP-11784
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.6.0
Reporter: Gaurav Tiwari

 While running a MapReduce example, I first got this error:
 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary 
 path
 So I downloaded a winutils build and updated the bin folder, but on executing 
 the same command again I get the same error:
 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary 
 path
 java.io.IOException: Could not locate executable 
 C:\hadoop-2.6.0\bin\winutils.exe in the Hadoop binaries.
 at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355)
 at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370)
 at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363)
 at 
 org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:438)
 at 
 org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:484)
 at 
 org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
 at 
 org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
 at org.apache.hadoop.examples.Grep.main(Grep.java:101)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
 at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
 at 
 org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
 15/04/01 19:12:51 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 15/04/01 19:12:52 INFO Configuration.deprecation: session.id is deprecated. 
 Instead, use dfs.metrics.session-id
 15/04/01 19:12:52 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
 processName=JobTracker, sessionId=
 java.lang.NullPointerException
 at java.lang.ProcessBuilder.start(ProcessBuilder.java:1011)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
 at org.apache.hadoop.util.Shell.run(Shell.java:455)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:656)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:444)
 at 
 org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:293)
 at 
 org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
 at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:437)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
 at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
 at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
 at org.apache.hadoop.examples.Grep.run(Grep.java:77)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.examples.Grep.main(Grep.java:101)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 

[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-04-02 Thread Takenori Sato (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takenori Sato updated HADOOP-11742:
---
Attachment: HADOOP-11742-branch-2.7.003-2.patch

This is the patch to fix the unit test, _AbstractContractRootDirectoryTest_.

The changes are:
# setup() prepares an empty root directory
# an assertion was added to testRmEmptyRootDirNonRecursive() to make sure the 
root dir is empty before the test runs
# teardown() does nothing
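The three changes above can be sketched roughly as follows (illustrative names and an in-memory stand-in for the root directory, not the actual AbstractContractRootDirectoryTest code):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the test-patch shape: setup() starts from an empty root,
// the non-recursive-rm test first asserts the root is empty, and
// teardown() intentionally does nothing.
public class RootDirTestSketch {
    private final List<String> rootEntries = new ArrayList<>();

    void setup() {
        rootEntries.clear();  // 1. prepare an empty root directory
    }

    boolean rootIsEmpty() {
        return rootEntries.isEmpty();
    }

    void testRmEmptyRootDirNonRecursive() {
        // 2. added assertion: the root dir must be empty before the test
        if (!rootIsEmpty()) {
            throw new AssertionError("root dir must be empty before the test");
        }
        // ... the non-recursive delete of "/" would be exercised here ...
    }

    void teardown() {
        // 3. intentionally a no-op
    }

    public static void main(String[] args) {
        RootDirTestSketch t = new RootDirTestSketch();
        t.setup();
        t.testRmEmptyRootDirNonRecursive();
        t.teardown();
        System.out.println("ok");
    }
}
```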

 mkdir by file system shell fails on an empty bucket
 ---

 Key: HADOOP-11742
 URL: https://issues.apache.org/jira/browse/HADOOP-11742
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.0
 Environment: CentOS 7
Reporter: Takenori Sato
Assignee: Takenori Sato
Priority: Minor
 Attachments: HADOOP-11742-branch-2.7.001.patch, 
 HADOOP-11742-branch-2.7.002.patch, HADOOP-11742-branch-2.7.003-1.patch, 
 HADOOP-11742-branch-2.7.003-2.patch


 I have built the latest 2.7 and tried S3AFileSystem, and found that _mkdir_ 
 fails on an empty bucket (named *s3a* here) as follows:
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for 
 s3a://s3a/foo (foo)
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 mkdir: `s3a://s3a/foo': No such file or directory
 {code}
 So does _ls_.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 ls: `s3a://s3a/': No such file or directory
 {code}
 This is how it works via s3n.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 Found 1 items
 drwxrwxrwx   -  0 1970-01-01 00:00 s3n://s3n/foo
 {code}
 The snapshot is the following:
 {quote}
 \# git branch
 \* branch-2.7
   trunk
 \# git log
 commit 929b04ce3a4fe419dece49ed68d4f6228be214c1
 Author: Harsh J ha...@cloudera.com
 Date:   Sun Mar 22 10:18:32 2015 +0530
 {quote}





[jira] [Updated] (HADOOP-11588) Benchmark framework and test for erasure coders

2015-04-02 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11588:
---
Attachment: HADOOP-11588-v1.patch

Uploaded a patch that can measure the performance of a coder.

 Benchmark framework and test for erasure coders
 ---

 Key: HADOOP-11588
 URL: https://issues.apache.org/jira/browse/HADOOP-11588
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-11588-v1.patch


 Given that more than one erasure coder may be implemented for a code scheme, we 
 need benchmarks and tests to help evaluate which one performs better in a given 
 environment. This is to implement the benchmark framework.





[jira] [Updated] (HADOOP-11781) rewrite smart-apply-patch.sh

2015-04-02 Thread Raymie Stata (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymie Stata updated HADOOP-11781:
--
Attachment: HADOOP-11781-02.patch

-02: I applied the previous version to all 926 issues in the Patch Available 
state. This test uncovered a bug that is fixed by this version. (BTW, 448 of 
those patches apply successfully; 478 fail to.)

 rewrite smart-apply-patch.sh
 

 Key: HADOOP-11781
 URL: https://issues.apache.org/jira/browse/HADOOP-11781
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Raymie Stata
 Attachments: HADOOP-11781-01.patch, HADOOP-11781-02.patch


 smart-apply-patch.sh has a few race conditions and is just generally crufty.  
 It should be rewritten.





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393435#comment-14393435
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-11731:
--

Generating the change log from JIRA is a good idea. It is based on the 
assumption that each JIRA has an accurate summary (a.k.a. the JIRA title) 
reflecting the committed change. Unfortunately, that assumption is invalid in 
many cases, since we have never enforced that the JIRA summary must match the 
change log entry. Have you compared the current CHANGES.txt with the generated 
change log? I bet the diff is long.

Besides, after a release R1 is out, someone may (accidentally or intentionally) 
modify the JIRA summary.  Then, the entry for the same item in a later release 
R2 could be different from the one in R1.

Yet another concern is that non-committers can add or edit a JIRA summary, but 
only committers can modify CHANGES.txt.

I agree that manually editing CHANGES.txt is not a perfect solution. However, 
it has worked well for many past releases. I suggest we keep the current dev 
workflow and try using the new script provided here to generate the next 
release. If everything works well, we shall remove CHANGES.txt and revise the 
dev workflow. What do you think?

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.





[jira] [Updated] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11797:
--
Status: Patch Available  (was: Open)

 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393456#comment-14393456
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-11731:
--

An example of JIRA information being inaccurate: the assignee is missing on 
this very JIRA, HADOOP-11731, even though CHANGES.txt shows that aw has worked 
on it.
{noformat}
//hadoop-common-project/hadoop-common/CHANGES.txt 
HADOOP-11731. Rework the changelog and releasenotes (aw)
{noformat}

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.





[jira] [Commented] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393553#comment-14393553
 ] 

Hadoop QA commented on HADOOP-11797:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12709066/HADOOP-11797.000.patch
  against trunk revision 6a6a59d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6053//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6053//console

This message is automatically generated.

 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.





[jira] [Commented] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393579#comment-14393579
 ] 

Hadoop QA commented on HADOOP-11796:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12709064/HADOOP-11796.01.patch
  against trunk revision 6a6a59d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6052//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6052//console

This message is automatically generated.

 Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
 ---

 Key: HADOOP-11796
 URL: https://issues.apache.org/jira/browse/HADOOP-11796
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Attachments: HADOOP-11796.00.patch, HADOOP-11796.01.patch


 The test should be skipped on Windows.
 {code}
 Stacktrace
 java.util.NoSuchElementException: null
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
   at 
 com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
   at 
 com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
   at 
 org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
 Standard Output
 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:init(113)) - User configured user account update 
 time is less than 1 minute. Use 1 minute instead.
 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
 UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
 not exist.
 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
 supported:Windows Server 2008 R2. Can't update user map and group map and 
 'nobody' will be used for any user and group.
 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:init(113)) - User configured user account update 
 time is less than 1 minute. Use 1 minute instead.
 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
 UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
 not exist.
 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
 supported:Windows Server 2008 R2. Can't update user map and group map and 
 'nobody' will be used for any user and group.
 {code}
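
The skip described above is a small guard at the top of the test. Hadoop's 
Java tests use JUnit's {{Assume}} for this; the sketch below is a hedged 
Python analogue of the same pattern (illustrative names, not the actual 
TestShellBasedIdMapping code), using {{unittest.skipIf}}:

```python
# Hedged analogue of the fix: bail out of the test on an unsupported
# platform instead of letting it fail. Hadoop's Java tests use
# org.junit.Assume; unittest.skipIf plays the same role here.
import platform
import unittest

class TestStaticMapUpdate(unittest.TestCase):
    @unittest.skipIf(platform.system() == "Windows",
                     "static UID/GID mapping is not supported on Windows")
    def test_static_map_update(self):
        # placeholder for the real assertions against the static map
        self.assertTrue(True)
```

On Windows the test is reported as skipped rather than failed; on other 
platforms it runs normally.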





[jira] [Updated] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11791:
--
Attachment: HADOOP-11791.000.patch

 Update src/site/markdown/releases to include old versions of Hadoop
 ---

 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11791.000.patch


 With the commit of HADOOP-11731, we need to include the new format of release 
 information in trunk.  This JIRA is about including those old versions in the 
 tree.





[jira] [Updated] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11791:
--
Status: Patch Available  (was: Open)

 Update src/site/markdown/releases to include old versions of Hadoop
 ---

 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11791.000.patch


 With the commit of HADOOP-11731, we need to include the new format of release 
 information in trunk.  This JIRA is about including those old versions in the 
 tree.





[jira] [Created] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11797:
-

 Summary: releasedocmaker.py needs to put ASF headers on output
 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


... otherwise mvn rat check fails.





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393572#comment-14393572
 ] 

Allen Wittenauer commented on HADOOP-11731:
---

bq. Have you compared the current CHANGES.txt with the generated change log? I 
bet the diff is long.

The summary lines? No.  The content that matters? Very much so.  I spent about 
a man-month repairing fix-version information for all of branch-2 and trunk, 
to the point that I know this statement is patently false:

bq.  However, it worked well in the past for many releases.

I'd argue it has failed tremendously in past releases:

* the number of incompatible changes that have been committed as bug fixes or 
whatnot, with zero release notes to warn the user that we've screwed them over.

* CHANGES.txt is missing *hundreds* of commits in both branch-2 and trunk.  The 
number of JIRAs that needed 2.7.0 added to the fix version, versus the 
autogenerated list, was _3_.

* Right now, the branch-2 and 2.7.0 CHANGES.txt files even disagree about which 
patches are in 2.7.0.  That's not success at all.

The concern about the summary line is largely moot since:

a) the reverse is also true: it's rare that CHANGES.txt gets updated when 
it's incorrect vs. the JIRA summary
b) the generated results provide links to the JIRAs, giving easy click-through 
history to see what has happened.
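
The click-through point above is easy to make concrete. A minimal sketch (not 
the actual releasedocmaker.py code; the field names follow the shape of JIRA's 
public REST search response) that renders issues as linked changelog lines:

```python
def format_entries(issues):
    """Render JIRA search results as markdown changelog lines with links.

    `issues` is the `issues` array of a JIRA REST search response:
    each element carries a `key` and a `fields.summary`.
    """
    base = "https://issues.apache.org/jira/browse/"
    return ["* [{k}]({b}{k}) | {s}".format(k=i["key"], b=base,
                                           s=i["fields"]["summary"])
            for i in issues]

# Example entry shaped like a JIRA search result:
print(format_entries([{"key": "HADOOP-11731",
                       "fields": {"summary":
                                  "Rework the changelog and releasenotes"}}])[0])
# → * [HADOOP-11731](https://issues.apache.org/jira/browse/HADOOP-11731) | Rework the changelog and releasenotes
```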

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.





[jira] [Updated] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11797:
--
Attachment: HADOOP-11797.000.patch

-00:
* simple fix.
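
For context, the "simple fix" amounts to prepending the ASF license header to 
each generated file so the rat check passes. A hedged sketch of that idea (the 
header text is abbreviated here; the real one is the full Apache License, 
Version 2.0 notice, and the function name is illustrative):

```python
# Sketch only: prepend an ASF license header (as a markdown/HTML
# comment) so Apache RAT stops flagging the generated file.
ASF_HEADER = (
    "<!--\n"
    "Licensed to the Apache Software Foundation (ASF) under one\n"
    "or more contributor license agreements. (header abbreviated)\n"
    "-->\n"
)

def write_with_header(path, body):
    """Write a generated document with the license header prepended."""
    with open(path, "w") as f:
        f.write(ASF_HEADER)
        f.write(body)
```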

 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.





[jira] [Updated] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11791:
--
Attachment: HADOOP-11791.001.patch

-01:
* ASF license on the top to clear rat checks

 Update src/site/markdown/releases to include old versions of Hadoop
 ---

 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11791.001.patch


 With the commit of HADOOP-11731, we need to include the new format of release 
 information in trunk.  This JIRA is about including those old versions in the 
 tree.





[jira] [Resolved] (HADOOP-11650) configuration file generates syntax error on Ubuntu14.10 with bash 4.3

2015-04-02 Thread Jean-Pierre Matsumoto (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Pierre Matsumoto resolved HADOOP-11650.

  Resolution: Invalid
Assignee: Jean-Pierre Matsumoto
Hadoop Flags:   (was: Incompatible change)

I'm closing this as invalid, as the reporter probably used dash instead of 
bash by mistake.
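
The bashism in the quoted line can be demonstrated directly. A hedged repro 
(run under any Python 3 on a system with bash installed; the snippet condenses 
line 99 of hadoop-config.sh to a one-liner):

```python
# The `[[ ... && ... ]]` compound test is bash syntax. Interpreting the
# same line with a strict POSIX shell such as dash yields the reported
# "Syntax error: word unexpected"; bash parses it fine.
import subprocess

SNIPPET = (
    "HADOOP_SLAVES=''; HADOOP_SLAVE_NAMES=''; "
    "if [[ ( \"$HADOOP_SLAVES\" != '' ) && ( \"$HADOOP_SLAVE_NAMES\" != '' ) ]]; "
    "then echo both; fi; echo done"
)

result = subprocess.run(["bash", "-c", SNIPPET],
                        capture_output=True, text=True)
print(result.stdout.strip())  # → done  (bash accepts the bashism)
```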

 configuration file generates syntax error on Ubuntu14.10 with bash 4.3
 --

 Key: HADOOP-11650
 URL: https://issues.apache.org/jira/browse/HADOOP-11650
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.6.0
 Environment: hadoop: 2.6.0
 OS: Linux version 3.16.0-31-generic (buildd@batsu) (gcc version 4.9.1 (Ubuntu 
 4.9.1-16ubuntu6) )
 bash:GNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu)
Reporter: Tomorrow
Assignee: Jean-Pierre Matsumoto
Priority: Critical
 Attachments: bash trace.rtf


 hadoop configuration file : /path/to/hadoop/libexec/hadoop-config.sh
 line:99
 if [[ ( $HADOOP_SLAVES != '' ) && ( $HADOOP_SLAVE_NAMES != '' ) ]] ; then
 On system ubuntu 14.10 (Linux version 3.16.0-31-generic  Bash version 4.3.s) 
 , this'll generate a syntax error :
 hadoop-config.sh: 99: hadoop-config.sh: Syntax error: word unexpected 
 (expecting ))





[jira] [Updated] (HADOOP-11792) Remove all of the CHANGES.txt files

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11792:
--
Release Note: With the introduction of the markdown-formatted and 
automatically built changes file, the CHANGES.txt files have been eliminated.
Hadoop Flags: Incompatible change

 Remove all of the CHANGES.txt files
 ---

 Key: HADOOP-11792
 URL: https://issues.apache.org/jira/browse/HADOOP-11792
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
 should remove them.





[jira] [Updated] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-02 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-11796:

Status: Patch Available  (was: Open)

 Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
 ---

 Key: HADOOP-11796
 URL: https://issues.apache.org/jira/browse/HADOOP-11796
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Attachments: HADOOP-11796.00.patch


 The test should be skipped on Windows.
 {code}
 Stacktrace
 java.util.NoSuchElementException: null
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
   at 
 com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
   at 
 com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
   at 
 org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
 Standard Output
 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:init(113)) - User configured user account update 
 time is less than 1 minute. Use 1 minute instead.
 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
 UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
 not exist.
 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
 supported:Windows Server 2008 R2. Can't update user map and group map and 
 'nobody' will be used for any user and group.
 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:init(113)) - User configured user account update 
 time is less than 1 minute. Use 1 minute instead.
 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
 UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
 not exist.
 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
 supported:Windows Server 2008 R2. Can't update user map and group map and 
 'nobody' will be used for any user and group.
 {code}





[jira] [Updated] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-02 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9805:
--
Target Version/s: 2.8.0  (was: 3.0.0, 1-win, 1.3.0, 2.1.1-beta)
Hadoop Flags: Reviewed

+1 from me too.  Thanks!  I'll commit this in a few hours unless someone else 
beats me to it.

 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.
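
The quoted fallback pattern is easy to illustrate. A hedged sketch 
(illustrative names, not Hadoop's actual RawLocalFileSystem code): try the 
native rename first and return early on success, falling back to 
copy-then-delete on failure:

```python
import os
import shutil

def rename_with_fallback(src, dst):
    """Try the fast native rename; fall back to copy+delete on failure.

    Mirrors the shape of the logic described above: the early return on
    success means the fallback branch is rarely exercised by tests.
    """
    try:
        os.rename(src, dst)     # fast path; returns early on success
        return True
    except OSError:
        shutil.copy2(src, dst)  # fallback: copy contents and metadata
        os.remove(src)          # then delete the original
        return True
```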





[jira] [Commented] (HADOOP-11792) Remove all of the CHANGES.txt files

2015-04-02 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393442#comment-14393442
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-11792:
--

I think we should not remove CHANGES.txt until the next release; see also [this 
comment|https://issues.apache.org/jira/browse/HADOOP-11731?focusedCommentId=14393435&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14393435].

 Remove all of the CHANGES.txt files
 ---

 Key: HADOOP-11792
 URL: https://issues.apache.org/jira/browse/HADOOP-11792
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
 should remove them.





[jira] [Commented] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393405#comment-14393405
 ] 

Hadoop QA commented on HADOOP-11796:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12709023/HADOOP-11796.00.patch
  against trunk revision eccb7d4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6051//console

This message is automatically generated.

 Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
 ---

 Key: HADOOP-11796
 URL: https://issues.apache.org/jira/browse/HADOOP-11796
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Attachments: HADOOP-11796.00.patch


 The test should be skipped on Windows.
 {code}
 Stacktrace
 java.util.NoSuchElementException: null
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
   at 
 com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
   at 
 com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
   at 
 org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
 Standard Output
 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:init(113)) - User configured user account update 
 time is less than 1 minute. Use 1 minute instead.
 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
 UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
 not exist.
 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
 supported:Windows Server 2008 R2. Can't update user map and group map and 
 'nobody' will be used for any user and group.
 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:init(113)) - User configured user account update 
 time is less than 1 minute. Use 1 minute instead.
 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
 UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
 not exist.
 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
 supported:Windows Server 2008 R2. Can't update user map and group map and 
 'nobody' will be used for any user and group.
 {code}





[jira] [Resolved] (HADOOP-11718) CHANGES.TXT in trunk is incorrect

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11718.
---
Resolution: Won't Fix

 CHANGES.TXT in trunk is incorrect
 -

 Key: HADOOP-11718
 URL: https://issues.apache.org/jira/browse/HADOOP-11718
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11718.patch


 As part of my auditing of JIRA fixversions, it's becoming clear that there 
 are a few JIRAs listed as being only in trunk that were actually released as 
 part of  either 0.23 or 2.x.





[jira] [Commented] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-04-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393500#comment-14393500
 ] 

Hadoop QA commented on HADOOP-11791:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12709056/HADOOP-11791.000.patch
  against trunk revision eccb7d4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 190 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6050//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6050//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6050//console

This message is automatically generated.

 Update src/site/markdown/releases to include old versions of Hadoop
 ---

 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11791.001.patch


 With the commit of HADOOP-11731, we need to include the new format of release 
 information in trunk.  This JIRA is about including those old versions in the 
 tree.





[jira] [Updated] (HADOOP-11650) configuration file generates syntax error on Ubuntu14.10 with bash 4.3

2015-04-02 Thread Jean-Pierre Matsumoto (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Pierre Matsumoto updated HADOOP-11650:
---
Attachment: bash trace.rtf

I have reproduced the issue on my Ubuntu 14.10. Attached is a trace with the 
same error message at the end.

The reporter used {{sh}}, which is linked to {{dash}} on this Ubuntu version. 
I have no idea why.

 configuration file generates syntax error on Ubuntu14.10 with bash 4.3
 --

 Key: HADOOP-11650
 URL: https://issues.apache.org/jira/browse/HADOOP-11650
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.6.0
 Environment: hadoop: 2.6.0
 OS: Linux version 3.16.0-31-generic (buildd@batsu) (gcc version 4.9.1 (Ubuntu 
 4.9.1-16ubuntu6) )
 bash:GNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu)
Reporter: Tomorrow
Priority: Critical
 Attachments: bash trace.rtf


 hadoop configuration file : /path/to/hadoop/libexec/hadoop-config.sh
 line:99
 if [[ ( $HADOOP_SLAVES != '' ) && ( $HADOOP_SLAVE_NAMES != '' ) ]] ; then
 On system ubuntu 14.10 (Linux version 3.16.0-31-generic  Bash version 4.3.s) 
 , this'll generate a syntax error :
 hadoop-config.sh: 99: hadoop-config.sh: Syntax error: word unexpected 
 (expecting ))





[jira] [Updated] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-02 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9805:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I committed this to trunk and branch-2.  [~jpmat], thank you for contributing 
the patch.  Steve and Colin, thank you for the code reviews.

 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.





[jira] [Commented] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-02 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393869#comment-14393869
 ] 

Chris Douglas commented on HADOOP-11797:


+1 lgtm

 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393963#comment-14393963
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-11731:
--

 ... the changes.txt gets updated when it's incorrect vs the jira summary

We won't update it since CHANGES.txt and jira summary are not supposed to be 
the same.

 the generated results provide links to the jira, providing easy click history 
 to see what has happened.

The entry in CHANGES.txt is a concise statement about the change committed.  It 
takes much more time to understand what was going on by reading the JIRA itself, 
especially when there was a long discussion.


 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393732#comment-14393732
 ] 

Hudson commented on HADOOP-9805:


FAILURE: Integrated in Hadoop-trunk-Commit #7499 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7499/])
HADOOP-9805. Refactor RawLocalFileSystem#rename for improved testability. 
Contributed by Jean-Pierre Matsumoto. (cnauroth: rev 
5763b173d34dcf7372520076f00b576f493662cd)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractRename.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-04-02 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393815#comment-14393815
 ] 

Kai Zheng commented on HADOOP-11740:


Thanks for the update. I looked at the new patch; just two minor comments:
1. In the test code, it may be better to use {{ErasureCoder}} instead of 
{{AbstractErasureEncoder}} or {{AbstractErasureDecoder}}, since the interface 
type is good enough, which is why we're here. With this refinement, from the 
caller's point of view nothing differs between encoder and decoder, so the 
caller should use the common interface.
2. Those seemingly unnecessary Javadoc stubs are there to conform to Javadoc 
conventions and format; in the future someone may fill them in. I suggest we 
don't remove them; you can find many like them in the project.
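The first point, that callers should hold the common interface type, can be shown with a toy sketch. The names below only loosely mirror the ones in the patch and the bodies are invented:

```java
// Toy sketch: once encoder and decoder share one interface, a caller
// (including test code) can hold either behind the common type.
interface ErasureCoder {
    String getName();
}

class XorErasureEncoder implements ErasureCoder {
    public String getName() { return "xor-encoder"; }
}

class XorErasureDecoder implements ErasureCoder {
    public String getName() { return "xor-decoder"; }
}

public class CoderDemo {
    public static void main(String[] args) {
        // From the caller's point of view nothing differs between the two.
        ErasureCoder[] coders = { new XorErasureEncoder(), new XorErasureDecoder() };
        for (ErasureCoder c : coders) {
            System.out.println(c.getName());
        }
    }
}
```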

 Combine erasure encoder and decoder interfaces
 --

 Key: HADOOP-11740
 URL: https://issues.apache.org/jira/browse/HADOOP-11740
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HADOOP-11740-000.patch, HADOOP-11740-001.patch


 Rationale [discussed | 
 https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
  under HDFS-7337.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-02 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393661#comment-14393661
 ] 

Owen O'Malley commented on HADOOP-11717:


I think this looks good. +1

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.
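The expiration check mentioned above can be illustrated with a JDK-only sketch. The real handler uses the nimbus-jose-jwt library; this toy version decodes the payload of a compact JWT and pulls the RFC 7519 {{exp}} claim with a crude regex instead of a JSON parser, so it shows only the shape of the check, not production code, and it skips signature verification entirely:

```java
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JwtExpirySketch {
    // Crude stand-in for a JSON parser: finds the numeric "exp" claim.
    private static final Pattern EXP = Pattern.compile("\"exp\"\\s*:\\s*(\\d+)");

    /** Decodes the payload segment of a compact JWT (header.payload.signature). */
    static String payloadJson(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    /** True if the token's exp claim (seconds since the epoch) is still in the future. */
    static boolean notExpired(String jwt, long nowSeconds) {
        Matcher m = EXP.matcher(payloadJson(jwt));
        return m.find() && Long.parseLong(m.group(1)) > nowSeconds;
    }

    public static void main(String[] args) {
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"sub\":\"alice\",\"exp\":2000000000}".getBytes());
        String token = "eyJhbGciOiJSUzI1NiJ9." + payload + ".sig";
        System.out.println(notExpired(token, 1700000000L));  // before exp
        System.out.println(notExpired(token, 2100000000L));  // after exp
    }
}
```

A real handler must verify the token's signature before trusting any claim; that is exactly what nimbus-jose-jwt provides.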



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-02 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9805:
--
Issue Type: Improvement  (was: Bug)

 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393948#comment-14393948
 ] 

Hadoop QA commented on HADOOP-11789:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12709129/HADOOP-11789.001.patch
  against trunk revision bad070f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6055//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6055//console

This message is automatically generated.

 NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
 -

 Key: HADOOP-11789
 URL: https://issues.apache.org/jira/browse/HADOOP-11789
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.8.0
 Environment: ASF Jenkins
Reporter: Steve Loughran
Assignee: Yi Liu
 Attachments: HADOOP-11789.001.patch


 NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-02 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393982#comment-14393982
 ] 

Jiajia Li commented on HADOOP-11717:


Hi, I get errors when I try to apply this patch to trunk; can you update the 
patch?

patching file pom.xml
Hunk #1 FAILED at 98.
1 out of 1 hunk FAILED -- saving rejects to file pom.xml.rej
patching file JWTRedirectAuthenticationHandler.java
patching file CertificateUtil.java
patching file TestJWTRedirectAuthentictionHandler.java
patching file TestCertificateUtil.java
patching file pom.xml
Hunk #1 FAILED at 803.
1 out of 1 hunk FAILED -- saving rejects to file pom.xml.rej

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-02 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11789:

Attachment: HADOOP-11789.001.patch

The failure happens because openssl is not loaded or the test is not run with 
the -Pnative flag. Updated the patch.
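One common shape for such a fix is to skip the test, rather than let it NPE, when the native library is absent. A stdlib-only sketch of that guard pattern follows; the library name is hypothetical, and the actual patch works against Hadoop's openssl codec classes, which are not shown here:

```java
public class NativeGuardExample {

    /**
     * Stand-in for a loading check such as Hadoop's
     * OpensslCipher.getLoadingFailureReason(): null means the native
     * library loaded, otherwise the reason it did not.
     */
    static String loadingFailureReason() {
        try {
            // "opensslwrapper" is a hypothetical library name for illustration.
            System.loadLibrary("opensslwrapper");
            return null;
        } catch (UnsatisfiedLinkError e) {
            return e.getMessage();
        }
    }

    static boolean shouldRunNativeTests() {
        return loadingFailureReason() == null;
    }

    public static void main(String[] args) {
        if (!shouldRunNativeTests()) {
            // Skip instead of dereferencing a null codec and hitting an NPE.
            System.out.println("SKIPPED: " + loadingFailureReason());
            return;
        }
        System.out.println("native library available; running native test");
    }
}
```

In a JUnit test the same check would typically sit behind an {{Assume}} so the test reports as skipped rather than passed or failed.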

 NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
 -

 Key: HADOOP-11789
 URL: https://issues.apache.org/jira/browse/HADOOP-11789
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.8.0
 Environment: ASF Jenkins
Reporter: Steve Loughran
Assignee: Yi Liu
 Attachments: HADOOP-11789.001.patch


 NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-02 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11789:

Status: Patch Available  (was: Open)

 NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
 -

 Key: HADOOP-11789
 URL: https://issues.apache.org/jira/browse/HADOOP-11789
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.8.0
 Environment: ASF Jenkins
Reporter: Steve Loughran
Assignee: Yi Liu
 Attachments: HADOOP-11789.001.patch


 NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-04-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393644#comment-14393644
 ] 

Hadoop QA commented on HADOOP-11791:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12709071/HADOOP-11791.001.patch
  against trunk revision 6a6a59d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6054//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6054//console

This message is automatically generated.

 Update src/site/markdown/releases to include old versions of Hadoop
 ---

 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11791.001.patch


 With the commit of HADOOP-11731, we need to include the new format of release 
 information in trunk.  This JIRA is about including those old versions in the 
 tree.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-02 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393986#comment-14393986
 ] 

Larry McCay commented on HADOOP-11717:
--

Will do - thanks!



 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394081#comment-14394081
 ] 

Chris Douglas commented on HADOOP-11731:


bq. An example of JIRA information being inaccurate is that the assignee is 
missing...

Some of these inconsistencies are easier to spot with the tool, and then to fix 
in both JIRA and the release notes. [~aw], would it be difficult to add a report 
for cases where the assignee is missing, incompatible changes don't have release 
notes, etc.? I see it prints a warning for the latter, but a lint-style report 
could help RMs fix up JIRA as part of rolling a release.

bq. The single line summary at the top of a JIRA isn't what you are putting in 
CHANGES.txt?

I sometimes add more detail in the commit message, but if committers get used 
to setting the summary and release notes this seems workable.

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11797:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
 Assignee: Allen Wittenauer
   Status: Resolved  (was: Patch Available)

thanks for the review.

committed to trunk.

 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-11731:
-

Assignee: Allen Wittenauer

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394027#comment-14394027
 ] 

Allen Wittenauer commented on HADOOP-11731:
---

bq. We won't update it since CHANGES.txt and jira summary are not supposed to 
be the same.

Wait, what?  The single line summary at the top of a JIRA isn't what you are 
putting in CHANGES.txt?  I think you might be the *only* person making any sort 
of semi-regular commits who isn't using the summary line for CHANGES.txt.

bq. The entry in CHANGES.txt is a concise statement about the change committed.

In other words, exactly what the JIRA summary is supposed to be as well, but 
used prior to commit.  





 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394030#comment-14394030
 ] 

Hudson commented on HADOOP-11797:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7502 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7502/])
HADOOP-11797. releasedocmaker.py needs to put ASF headers on output (aw) (aw: 
rev 8d3c0f601d549a22648050bcc9a0e4acf37edc81)
* dev-support/releasedocmaker.py
* hadoop-common-project/hadoop-common/CHANGES.txt


 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11781) fix race conditions and add URL support to smart-apply-patch.sh

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11781:
--
Summary: fix race conditions and add URL support to smart-apply-patch.sh  
(was: rewrite smart-apply-patch.sh)

 fix race conditions and add URL support to smart-apply-patch.sh
 ---

 Key: HADOOP-11781
 URL: https://issues.apache.org/jira/browse/HADOOP-11781
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Raymie Stata
 Attachments: HADOOP-11781-01.patch, HADOOP-11781-02.patch


 smart-apply-patch.sh has a few race conditions and is just generally crufty.  
 It should be rewritten.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-04-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11791:
-

 Summary: Update src/site/markdown/releases to include old versions 
of Hadoop
 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


With the commit of HADOOP-11731, we need to include the new format of release 
information in trunk.  This JIRA is about including those old versions in the 
tree.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11791:
--
Attachment: HADOOP-11791.000.patch

-00:
* Release notes and changes for 2.0.0-alpha through 2.6.1

 Update src/site/markdown/releases to include old versions of Hadoop
 ---

 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11791.000.patch


 With the commit of HADOOP-11731, we need to include the new format of release 
 information in trunk.  This JIRA is about including those old versions in the 
 tree.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392546#comment-14392546
 ] 

Hudson commented on HADOOP-11757:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #885 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/885/])
HADOOP-11757. NFS gateway should shutdown when it can't start UDP or TCP 
server. Contributed by Brandon Li (brandonli: rev 
60ce825a71850fe0622d551159e8d66f32448bb5)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountdBase.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Base.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpServer.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpServer.java


 NFS gateway should shutdown when it can't start UDP or TCP server
 -

 Key: HADOOP-11757
 URL: https://issues.apache.org/jira/browse/HADOOP-11757
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch


 Unlike the Portmap, the Nfs3 class does not shut down when the service can't start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11790) Testcase failures in PowerPC due to leveldbjni artifact

2015-04-02 Thread Ayappan (JIRA)
Ayappan created HADOOP-11790:


 Summary: Testcase failures in PowerPC due to leveldbjni artifact
 Key: HADOOP-11790
 URL: https://issues.apache.org/jira/browse/HADOOP-11790
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.0
 Environment: PowerPC64LE
Reporter: Ayappan


The leveldbjni artifact in the Maven repository has been built only for the x86 
architecture, so some of the testcases fail on PowerPC. The leveldbjni 
community has no plans to support other platforms [ 
https://github.com/fusesource/leveldbjni/issues/54 ]. Right now the approach 
is to build leveldbjni locally prior to running the Hadoop testcases. Pushing 
a PowerPC-specific leveldbjni artifact to the central Maven repository and 
making pom.xml pick it up when running on PowerPC is another option, but I 
don't know whether this is a suitable one. Is there any other 
alternative/solution?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392600#comment-14392600
 ] 

Hudson commented on HADOOP-11757:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #151 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/151/])
HADOOP-11757. NFS gateway should shutdown when it can't start UDP or TCP 
server. Contributed by Brandon Li (brandonli: rev 
60ce825a71850fe0622d551159e8d66f32448bb5)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Base.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpServer.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountdBase.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpServer.java


 NFS gateway should shutdown when it can't start UDP or TCP server
 -

 Key: HADOOP-11757
 URL: https://issues.apache.org/jira/browse/HADOOP-11757
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch


 Unlike the Portmap, the Nfs3 class does not shut down when the service can't start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392593#comment-14392593
 ] 

Hudson commented on HADOOP-11731:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #151 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/151/])
HADOOP-11731. Rework the changelog and releasenotes (aw) (aw: rev 
f383fd9b6caf4557613250c5c218b1a1b65a212b)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/relnotes.py
* hadoop-common-project/hadoop-common/pom.xml
* BUILDING.txt
* hadoop-project/src/site/site.xml
* dev-support/releasedocmaker.py


 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11787) OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392599#comment-14392599
 ] 

Hudson commented on HADOOP-11787:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #151 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/151/])
HADOOP-11787. OpensslSecureRandom.c pthread_threadid_np usage signature is 
wrong on 32-bit Mac. Contributed by Kiran Kumar M R. (cnauroth: rev 
a3a96a07faf0c6f6aa3ed33608271c2b1657e437)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c


 OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit 
 Mac
 

 Key: HADOOP-11787
 URL: https://issues.apache.org/jira/browse/HADOOP-11787
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7938-001.patch


 In OpensslSecureRandom.c, pthread_threadid_np is being used with an unsigned 
 long, but the type signature requires a uint64_t.





[jira] [Commented] (HADOOP-11787) OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392660#comment-14392660
 ] 

Hudson commented on HADOOP-11787:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2083 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2083/])
HADOOP-11787. OpensslSecureRandom.c pthread_threadid_np usage signature is 
wrong on 32-bit Mac. Contributed by Kiran Kumar M R. (cnauroth: rev 
a3a96a07faf0c6f6aa3ed33608271c2b1657e437)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c


 OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit 
 Mac
 

 Key: HADOOP-11787
 URL: https://issues.apache.org/jira/browse/HADOOP-11787
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7938-001.patch


 In OpensslSecureRandom.c, pthread_threadid_np is being used with an unsigned 
 long, but the type signature requires a uint64_t.





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392654#comment-14392654
 ] 

Hudson commented on HADOOP-11731:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2083 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2083/])
HADOOP-11731. Rework the changelog and releasenotes (aw) (aw: rev 
f383fd9b6caf4557613250c5c218b1a1b65a212b)
* dev-support/relnotes.py
* BUILDING.txt
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-project/src/site/site.xml
* dev-support/releasedocmaker.py


 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.





[jira] [Updated] (HADOOP-11792) Remove all of the CHANGES.txt files

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11792:
--
Description: With the commit of HADOOP-11731, the CHANGES.txt files are now 
EOLed.  We should remove them.  (was: With the commit of HADOOP-11731, the 
CHANGES.TXT files are now EOLed.  We should remove them.)

 Remove all of the CHANGES.txt files
 ---

 Key: HADOOP-11792
 URL: https://issues.apache.org/jira/browse/HADOOP-11792
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
 should remove them.





[jira] [Updated] (HADOOP-11792) Remove all of the CHANGES.txt files

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11792:
--
Summary: Remove all of the CHANGES.txt files  (was: Remove all of the 
CHANGES.TXT files)

 Remove all of the CHANGES.txt files
 ---

 Key: HADOOP-11792
 URL: https://issues.apache.org/jira/browse/HADOOP-11792
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 With the commit of HADOOP-11731, the CHANGES.TXT files are now EOLed.  We 
 should remove them.





[jira] [Commented] (HADOOP-11781) fix race conditions and add URL support to smart-apply-patch.sh

2015-04-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392819#comment-14392819
 ] 

Allen Wittenauer commented on HADOOP-11781:
---

This is awesome! Changed the summary and description so we can push this in 
sooner rather than later.

One nit:  let's change the {{sort | uniq}}'s to {{sort -u}} so it runs a little 
faster.

 fix race conditions and add URL support to smart-apply-patch.sh
 ---

 Key: HADOOP-11781
 URL: https://issues.apache.org/jira/browse/HADOOP-11781
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Raymie Stata
 Attachments: HADOOP-11781-01.patch, HADOOP-11781-02.patch


 smart-apply-patch.sh has a few race conditions and is just generally crufty.  
 It should really be rewritten.





[jira] [Commented] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392875#comment-14392875
 ] 

Hudson commented on HADOOP-11757:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #151 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/151/])
HADOOP-11757. NFS gateway should shutdown when it can't start UDP or TCP 
server. Contributed by Brandon Li (brandonli: rev 
60ce825a71850fe0622d551159e8d66f32448bb5)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpServer.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Base.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpServer.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountdBase.java


 NFS gateway should shutdown when it can't start UDP or TCP server
 -

 Key: HADOOP-11757
 URL: https://issues.apache.org/jira/browse/HADOOP-11757
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch


 Unlike the Portmap class, the Nfs3 class doesn't shut down when the service can't start.





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392869#comment-14392869
 ] 

Hudson commented on HADOOP-11731:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #151 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/151/])
HADOOP-11731. Rework the changelog and releasenotes (aw) (aw: rev 
f383fd9b6caf4557613250c5c218b1a1b65a212b)
* dev-support/relnotes.py
* hadoop-project/src/site/site.xml
* hadoop-common-project/hadoop-common/pom.xml
* dev-support/releasedocmaker.py
* hadoop-common-project/hadoop-common/CHANGES.txt
* BUILDING.txt


 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.





[jira] [Commented] (HADOOP-11787) OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392874#comment-14392874
 ] 

Hudson commented on HADOOP-11787:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #151 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/151/])
HADOOP-11787. OpensslSecureRandom.c pthread_threadid_np usage signature is 
wrong on 32-bit Mac. Contributed by Kiran Kumar M R. (cnauroth: rev 
a3a96a07faf0c6f6aa3ed33608271c2b1657e437)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c


 OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit 
 Mac
 

 Key: HADOOP-11787
 URL: https://issues.apache.org/jira/browse/HADOOP-11787
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7938-001.patch


 In OpensslSecureRandom.c, pthread_threadid_np is being used with an unsigned 
 long, but the type signature requires a uint64_t.





[jira] [Updated] (HADOOP-11792) Remove all of the CHANGES.TXT files

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11792:
--
Affects Version/s: 3.0.0

 Remove all of the CHANGES.TXT files
 ---

 Key: HADOOP-11792
 URL: https://issues.apache.org/jira/browse/HADOOP-11792
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 With the commit of HADOOP-11731, the CHANGES.TXT files are now EOLed.  We 
 should remove them.





[jira] [Updated] (HADOOP-11792) Remove all of the CHANGES.TXT files

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11792:
--
Component/s: build

 Remove all of the CHANGES.TXT files
 ---

 Key: HADOOP-11792
 URL: https://issues.apache.org/jira/browse/HADOOP-11792
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 With the commit of HADOOP-11731, the CHANGES.TXT files are now EOLed.  We 
 should remove them.





[jira] [Created] (HADOOP-11792) Remove all of the CHANGES.TXT files

2015-04-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11792:
-

 Summary: Remove all of the CHANGES.TXT files
 Key: HADOOP-11792
 URL: https://issues.apache.org/jira/browse/HADOOP-11792
 Project: Hadoop Common
  Issue Type: Task
Reporter: Allen Wittenauer


With the commit of HADOOP-11731, the CHANGES.TXT files are now EOLed.  We 
should remove them.





[jira] [Commented] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-04-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392795#comment-14392795
 ] 

Allen Wittenauer commented on HADOOP-11764:
---

I'm starting to think more and more that using leveldb is a HUGE mistake.

a) There's this complete nonsense about requiring all this pre-configuration.

b) What prevents a user from inserting a malicious .so into this shared 
directory? Given that we have to default somewhere like /tmp or even 
hadoop.tmp.dir, this is a massive security hole that directly impacts the 
running daemons.

c) HADOOP-11790 means we've effectively broken the build for non-Linux, 
non-x86 platforms.

 Hadoop should have the option to use directory other than tmp for extracting 
 and loading leveldbjni
 ---

 Key: HADOOP-11764
 URL: https://issues.apache.org/jira/browse/HADOOP-11764
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Anubhav Dhoot
Assignee: Anubhav Dhoot
 Attachments: YARN-3331.001.patch, YARN-3331.002.patch


 /tmp can be required to be mounted noexec in many environments. This causes a 
 problem when the nodemanager tries to load the leveldbjni library, which gets 
 unpacked to and executed from /tmp.
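
 The workaround direction described above can be sketched outside Hadoop: point 
 the JVM's temp directory at a filesystem that is not mounted noexec before the 
 native library is first loaded. The directory below is purely illustrative, and 
 whether leveldbjni honors {{java.io.tmpdir}} for extraction is an assumption 
 here, not something confirmed in this thread.

```java
// Minimal sketch: redirect native-library extraction away from a noexec /tmp.
// Assumption: the JNI loader consults java.io.tmpdir (or an equivalent
// overridable property) when choosing its extraction directory; the path
// below is a hypothetical example, not a Hadoop default.
public class TmpDirCheck {
    public static void main(String[] args) {
        // Must be set before the native library is first loaded.
        System.setProperty("java.io.tmpdir", "/var/lib/hadoop/native-tmp");
        System.out.println(System.getProperty("java.io.tmpdir"));
    }
}
```

 In practice the same effect is usually achieved by passing the property on the 
 daemon's JVM command line rather than setting it programmatically.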





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392787#comment-14392787
 ] 

Hudson commented on HADOOP-11731:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #142 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/142/])
HADOOP-11731. Rework the changelog and releasenotes (aw) (aw: rev 
f383fd9b6caf4557613250c5c218b1a1b65a212b)
* BUILDING.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/relnotes.py
* dev-support/releasedocmaker.py
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-project/src/site/site.xml


 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.





[jira] [Commented] (HADOOP-11787) OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392793#comment-14392793
 ] 

Hudson commented on HADOOP-11787:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #142 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/142/])
HADOOP-11787. OpensslSecureRandom.c pthread_threadid_np usage signature is 
wrong on 32-bit Mac. Contributed by Kiran Kumar M R. (cnauroth: rev 
a3a96a07faf0c6f6aa3ed33608271c2b1657e437)
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
* hadoop-common-project/hadoop-common/CHANGES.txt


 OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit 
 Mac
 

 Key: HADOOP-11787
 URL: https://issues.apache.org/jira/browse/HADOOP-11787
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7938-001.patch


 In OpensslSecureRandom.c, pthread_threadid_np is being used with an unsigned 
 long, but the type signature requires a uint64_t.





[jira] [Commented] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392794#comment-14392794
 ] 

Hudson commented on HADOOP-11757:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #142 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/142/])
HADOOP-11757. NFS gateway should shutdown when it can't start UDP or TCP 
server. Contributed by Brandon Li (brandonli: rev 
60ce825a71850fe0622d551159e8d66f32448bb5)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountdBase.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Base.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpServer.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpServer.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 NFS gateway should shutdown when it can't start UDP or TCP server
 -

 Key: HADOOP-11757
 URL: https://issues.apache.org/jira/browse/HADOOP-11757
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch


 Unlike the Portmap class, the Nfs3 class doesn't shut down when the service can't start.





[jira] [Updated] (HADOOP-11781) fix race conditions and add URL support to smart-apply-patch.sh

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11781:
--
Description: smart-apply-patch.sh has a few race conditions and is just 
generally crufty.  It should really be rewritten.  (was: smart-apply-patch.sh 
has a few race conditions and is just generally crufty.  It should be 
rewritten.)

 fix race conditions and add URL support to smart-apply-patch.sh
 ---

 Key: HADOOP-11781
 URL: https://issues.apache.org/jira/browse/HADOOP-11781
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Raymie Stata
 Attachments: HADOOP-11781-01.patch, HADOOP-11781-02.patch


 smart-apply-patch.sh has a few race conditions and is just generally crufty.  
 It should really be rewritten.





[jira] [Resolved] (HADOOP-10907) Single Node Setup still thinks it is hadoop 1.x

2015-04-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HADOOP-10907.

Resolution: Invalid

The document was removed by HADOOP-10618. Closing.

 Single Node Setup still thinks it is hadoop 1.x
 ---

 Key: HADOOP-10907
 URL: https://issues.apache.org/jira/browse/HADOOP-10907
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Allen Wittenauer
  Labels: newbie

 # JDK 1.6 is deprecated
 # the *-all.sh scripts are now in sbin.
 # hadoop-*-examples.jar is no longer at the documented location
 # hadoop jar should be replaced with yarn jar





[jira] [Created] (HADOOP-11793) Update create-release for releasedocmaker.py

2015-04-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11793:
-

 Summary: Update create-release for releasedocmaker.py
 Key: HADOOP-11793
 URL: https://issues.apache.org/jira/browse/HADOOP-11793
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


With the commit of HADOOP-11731, the changelog and release note data is now 
automated with the build.  The create-release script needs to do the correct 
thing.





[jira] [Commented] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392661#comment-14392661
 ] 

Hudson commented on HADOOP-11757:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2083 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2083/])
HADOOP-11757. NFS gateway should shutdown when it can't start UDP or TCP 
server. Contributed by Brandon Li (brandonli: rev 
60ce825a71850fe0622d551159e8d66f32448bb5)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpServer.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpServer.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountdBase.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Base.java


 NFS gateway should shutdown when it can't start UDP or TCP server
 -

 Key: HADOOP-11757
 URL: https://issues.apache.org/jira/browse/HADOOP-11757
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch


 Unlike the Portmap class, the Nfs3 class doesn't shut down when the service can't start.





[jira] [Assigned] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-02 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu reassigned HADOOP-11789:
---

Assignee: Yi Liu

 NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
 -

 Key: HADOOP-11789
 URL: https://issues.apache.org/jira/browse/HADOOP-11789
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.8.0
 Environment: ASF Jenkins
Reporter: Steve Loughran
Assignee: Yi Liu

 NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins





[jira] [Commented] (HADOOP-10924) LocalDistributedCacheManager for concurrent sqoop processes fails to create unique directories

2015-04-02 Thread William Watson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392691#comment-14392691
 ] 

William Watson commented on HADOOP-10924:
-

Thanks [~zxu] for the great information.

So, just to be clear: you want me to add a test similar to what you're 
describing (again, thanks for that), but the jobID + UUID code that I have 
should be sufficient? I'd also like verification that the method signatures I 
changed are okay to change, or whether I should take a different approach so 
the signatures stay the same.

Sorry if I'm being burdensome with my questions.

 LocalDistributedCacheManager for concurrent sqoop processes fails to create 
 unique directories
 --

 Key: HADOOP-10924
 URL: https://issues.apache.org/jira/browse/HADOOP-10924
 Project: Hadoop Common
  Issue Type: Bug
Reporter: William Watson
Assignee: William Watson
 Attachments: HADOOP-10924.02.patch, 
 HADOOP-10924.03.jobid-plus-uuid.patch


 Kicking off many sqoop processes in different threads results in:
 {code}
 2014-08-01 13:47:24 -0400:  INFO - 14/08/01 13:47:22 ERROR tool.ImportTool: 
 Encountered IOException running import job: java.io.IOException: 
 java.util.concurrent.ExecutionException: java.io.IOException: Rename cannot 
 overwrite non empty destination directory 
 /tmp/hadoop-hadoop/mapred/local/1406915233073
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:149)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.init(LocalJobRunner.java:163)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
 2014-08-01 13:47:24 -0400:  INFO -at 
 java.security.AccessController.doPrivileged(Native Method)
 2014-08-01 13:47:24 -0400:  INFO -at 
 javax.security.auth.Subject.doAs(Subject.java:415)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.run(Sqoop.java:145)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.main(Sqoop.java:238)
 {code}
 This fails if two processes are kicked off in the same millisecond. The issue 
 is the following lines of code in the 
 org.apache.hadoop.mapred.LocalDistributedCacheManager class: 
 {code}
 // Generating unique numbers for FSDownload.
 AtomicLong uniqueNumberGenerator =
new AtomicLong(System.currentTimeMillis());
 {code}
 and 
 {code}
 Long.toString(uniqueNumberGenerator.incrementAndGet())),
 {code}





[jira] [Moved] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HDFS-8044 to HADOOP-11789:
---

Affects Version/s: (was: 3.0.0)
   2.8.0
   3.0.0
  Key: HADOOP-11789  (was: HDFS-8044)
  Project: Hadoop Common  (was: Hadoop HDFS)

 NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
 -

 Key: HADOOP-11789
 URL: https://issues.apache.org/jira/browse/HADOOP-11789
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.8.0
 Environment: ASF Jenkins
Reporter: Steve Loughran

 NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins





[jira] [Created] (HADOOP-11788) ZK failover tests failing: port in use

2015-04-02 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11788:
---

 Summary: ZK failover tests failing: port in use
 Key: HADOOP-11788
 URL: https://issues.apache.org/jira/browse/HADOOP-11788
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Steve Loughran


ZK failover tests failing, port in use. Looks like the tests try to find free 
ports between other tests *but don't actually check to see if the port is free*
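
A minimal sketch of the check described as missing above: rather than guessing 
a port number and hoping no other test still holds it, binding a ServerSocket 
to port 0 asks the kernel for a port that is actually free at that moment. This 
is the standard JDK idiom, not the actual fix applied to these tests.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePort {
    // Bind to port 0 and let the kernel pick a currently-free port.
    // The port can still be grabbed by another process after the socket
    // is closed, so callers should re-bind it promptly.
    static int findFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            socket.setReuseAddress(true);
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        int port = findFreePort();
        System.out.println(port > 0 && port <= 65535);
    }
}
```

Even this only narrows the race window; the robust pattern is to keep the 
socket bound until the server under test takes it over.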





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392539#comment-14392539
 ] 

Hudson commented on HADOOP-11731:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #885 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/885/])
HADOOP-11731. Rework the changelog and releasenotes (aw) (aw: rev 
f383fd9b6caf4557613250c5c218b1a1b65a212b)
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-project/src/site/site.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* BUILDING.txt
* dev-support/relnotes.py
* dev-support/releasedocmaker.py


 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.





[jira] [Commented] (HADOOP-11788) ZK failover tests failing: port in use

2015-04-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392501#comment-14392501
 ] 

Steve Loughran commented on HADOOP-11788:
-

{code}
org.apache.hadoop.ha.TestZKFailoverController.testOneOfEverything

Failing for the past 1 build (Since Failed#1453 )
Took 11 ms.
Error Message

Address already in use
Stacktrace

java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at 
org.apache.zookeeper.server.ServerCnxnFactory.createFactory(ServerCnxnFactory.java:126)
at 
org.apache.zookeeper.server.ServerCnxnFactory.createFactory(ServerCnxnFactory.java:119)
at 
org.apache.hadoop.ha.ClientBaseWithFixes.createNewServerInstance(ClientBaseWithFixes.java:348)
at 
org.apache.hadoop.ha.ClientBaseWithFixes.startServer(ClientBaseWithFixes.java:445)
at 
org.apache.hadoop.ha.ClientBaseWithFixes.setUp(ClientBaseWithFixes.java:409)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatchman$1.evaluate(TestWatchman.java:53)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{code}

 ZK failover tests failing: port in use
 --

 Key: HADOOP-11788
 URL: https://issues.apache.org/jira/browse/HADOOP-11788
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Steve Loughran

 ZK failover tests failing, port in use. Looks like the tests try to find free 
 ports between other tests *but don't actually check to see if the port is 
 free*
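For reference, the usual JVM idiom for grabbing a free port is to bind to port 0 and let the OS choose; the sketch below (generic Java, not the actual ClientBaseWithFixes code) also shows why this is still racy: another process can claim the port between close() and the test's own bind, which is exactly how "Address already in use" can still surface.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortFinder {
    // Bind to port 0 so the OS assigns a currently free ephemeral port.
    // The port is only a *candidate*: it can be taken by another process
    // between the implicit close() (try-with-resources) and the caller's
    // own bind, so callers should be prepared to retry on BindException.
    static int findFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            socket.setReuseAddress(true);
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("free port candidate: " + findFreePort());
    }
}
```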



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11787) OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392545#comment-14392545
 ] 

Hudson commented on HADOOP-11787:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #885 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/885/])
HADOOP-11787. OpensslSecureRandom.c pthread_threadid_np usage signature is 
wrong on 32-bit Mac. Contributed by Kiran Kumar M R. (cnauroth: rev 
a3a96a07faf0c6f6aa3ed33608271c2b1657e437)
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
* hadoop-common-project/hadoop-common/CHANGES.txt


 OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit 
 Mac
 

 Key: HADOOP-11787
 URL: https://issues.apache.org/jira/browse/HADOOP-11787
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7938-001.patch


 In OpensslSecureRandom.c, pthread_threadid_np is being used with an unsigned 
 long, but the type signature requires a uint64_t.





[jira] [Assigned] (HADOOP-11788) ZK failover tests failing: port in use

2015-04-02 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HADOOP-11788:
-

Assignee: Brahma Reddy Battula

 ZK failover tests failing: port in use
 --

 Key: HADOOP-11788
 URL: https://issues.apache.org/jira/browse/HADOOP-11788
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula

 ZK failover tests failing, port in use. Looks like the tests try to find free 
 ports between other tests *but don't actually check to see if the port is 
 free*





[jira] [Commented] (HADOOP-11790) Testcase failures in PowerPC due to leveldbjni artifact

2015-04-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392932#comment-14392932
 ] 

Steve Loughran commented on HADOOP-11790:
-

Getting a PPC binary into maven is the obvious best choice, though it is 
clearly not on the leveldbjni roadmap.

Unless you can come up with a way to skip those tests on 
PPC systems, you are going to have to build leveldbjni locally. 



 Testcase failures in PowerPC due to leveldbjni artifact
 ---

 Key: HADOOP-11790
 URL: https://issues.apache.org/jira/browse/HADOOP-11790
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.0
 Environment: PowerPC64LE
Reporter: Ayappan
Priority: Minor

 The leveldbjni artifact in the Maven repository has been built only for the 
 x86 architecture, so some of the testcases fail on PowerPC. The leveldbjni 
 community has no plans to support other platforms [ 
 https://github.com/fusesource/leveldbjni/issues/54 ]. Right now, the approach 
 is to build leveldbjni locally before running the Hadoop testcases. Pushing a 
 PowerPC-specific leveldbjni artifact to the central Maven repository and 
 making pom.xml pick it up when running on PowerPC is another option, but I 
 don't know whether that is suitable. Is there any other alternative?





[jira] [Resolved] (HADOOP-11059) relnotes.py should figure out the previous version by itself

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-11059.
---
Resolution: Won't Fix

Closing as Won't Fix, since relnotes.py no longer exists.

 relnotes.py should figure out the previous version by itself
 

 Key: HADOOP-11059
 URL: https://issues.apache.org/jira/browse/HADOOP-11059
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 2.5.0
Reporter: Karthik Kambatla







[jira] [Updated] (HADOOP-11790) Testcase failures in PowerPC due to leveldbjni artifact

2015-04-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11790:

Priority: Minor  (was: Major)
Target Version/s:   (was: 2.7.0)

 Testcase failures in PowerPC due to leveldbjni artifact
 ---

 Key: HADOOP-11790
 URL: https://issues.apache.org/jira/browse/HADOOP-11790
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.0
 Environment: PowerPC64LE
Reporter: Ayappan
Priority: Minor

 The leveldbjni artifact in the Maven repository has been built only for the 
 x86 architecture, so some of the testcases fail on PowerPC. The leveldbjni 
 community has no plans to support other platforms [ 
 https://github.com/fusesource/leveldbjni/issues/54 ]. Right now, the approach 
 is to build leveldbjni locally before running the Hadoop testcases. Pushing a 
 PowerPC-specific leveldbjni artifact to the central Maven repository and 
 making pom.xml pick it up when running on PowerPC is another option, but I 
 don't know whether that is suitable. Is there any other alternative?





[jira] [Commented] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2015-04-02 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392930#comment-14392930
 ] 

Zhe Zhang commented on HADOOP-11540:


Really impressive results! Thanks for the great work, Kai.

 Raw Reed-Solomon coder using Intel ISA-L library
 

 Key: HADOOP-11540
 URL: https://issues.apache.org/jira/browse/HADOOP-11540
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Zhe Zhang
Assignee: Kai Zheng

 This is to provide RS codec implementation using Intel ISA-L library for 
 encoding and decoding.





[jira] [Commented] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-02 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392911#comment-14392911
 ] 

Xiaoyu Yao commented on HADOOP-11789:
-

+1, I've seen this failure on our recent Windows runs as well. 

 NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
 -

 Key: HADOOP-11789
 URL: https://issues.apache.org/jira/browse/HADOOP-11789
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.8.0
 Environment: ASF Jenkins
Reporter: Steve Loughran
Assignee: Yi Liu

 NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins





[jira] [Moved] (HADOOP-11794) distcp can copy blocks in parallel

2015-04-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved MAPREDUCE-2257 to HADOOP-11794:
--

  Component/s: (was: distcp)
   tools/distcp
Affects Version/s: (was: 0.21.0)
   0.21.0
  Key: HADOOP-11794  (was: MAPREDUCE-2257)
  Project: Hadoop Common  (was: Hadoop Map/Reduce)

 distcp can copy blocks in parallel
 --

 Key: HADOOP-11794
 URL: https://issues.apache.org/jira/browse/HADOOP-11794
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 0.21.0
Reporter: dhruba borthakur
Assignee: Mithun Radhakrishnan
 Attachments: MAPREDUCE-2257.patch


 The minimum unit of work for a distcp task is a file. We have files that are 
 greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
 files, the tasks either take a very long time or eventually fail. A better 
 approach for distcp would be to copy all the source blocks in parallel, and 
 then stitch the blocks back into files at the destination via the HDFS Concat 
 API (HDFS-222)
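The idea can be illustrated with a generic, self-contained Java sketch that copies a local file in block-sized ranges in parallel. This is only an illustration of range-parallel copying, not distcp's actual implementation; on HDFS the per-block outputs would be separate files joined with the Concat API rather than ranges of one preallocated file.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelBlockCopy {
    // Split src into fixed-size ranges, copy each range in its own task,
    // and write every range into a preallocated destination file. The
    // preallocated file stands in for the "stitch" step; on HDFS the
    // pieces would instead be concatenated with the Concat API.
    static void copy(Path src, Path dst, long blockSize, int threads)
            throws IOException, InterruptedException, ExecutionException {
        long size = Files.size(src);
        try (RandomAccessFile pre = new RandomAccessFile(dst.toFile(), "rw")) {
            pre.setLength(size);  // preallocate so tasks can write their ranges
        }
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Void>> tasks = new ArrayList<>();
        for (long off = 0; off < size; off += blockSize) {
            final long start = off;
            final long len = Math.min(blockSize, size - off);
            tasks.add(pool.submit(() -> {
                // Each task opens its own channels so positions don't clash.
                try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
                     FileChannel out = FileChannel.open(dst, StandardOpenOption.WRITE)) {
                    long done = 0;
                    while (done < len) {  // transferTo may copy fewer bytes
                        out.position(start + done);
                        done += in.transferTo(start + done, len - done, out);
                    }
                }
                return null;
            }));
        }
        try {
            for (Future<Void> t : tasks) {
                t.get();  // propagate any per-range copy failure
            }
        } finally {
            pool.shutdown();
        }
    }
}
```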





[jira] [Commented] (HADOOP-11787) OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit Mac

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393124#comment-14393124
 ] 

Hudson commented on HADOOP-11787:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2101 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2101/])
HADOOP-11787. OpensslSecureRandom.c pthread_threadid_np usage signature is 
wrong on 32-bit Mac. Contributed by Kiran Kumar M R. (cnauroth: rev 
a3a96a07faf0c6f6aa3ed33608271c2b1657e437)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c


 OpensslSecureRandom.c pthread_threadid_np usage signature is wrong on 32-bit 
 Mac
 

 Key: HADOOP-11787
 URL: https://issues.apache.org/jira/browse/HADOOP-11787
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7938-001.patch


 In OpensslSecureRandom.c, pthread_threadid_np is being used with an unsigned 
 long, but the type signature requires a uint64_t.





[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393119#comment-14393119
 ] 

Hudson commented on HADOOP-11731:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2101 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2101/])
HADOOP-11731. Rework the changelog and releasenotes (aw) (aw: rev 
f383fd9b6caf4557613250c5c218b1a1b65a212b)
* hadoop-common-project/hadoop-common/pom.xml
* BUILDING.txt
* dev-support/releasedocmaker.py
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-project/src/site/site.xml
* dev-support/relnotes.py


 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful. Plus, they are 
 ugly and, in the case of the release notes, it is very hard to pick out what 
 is important.





[jira] [Commented] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-04-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393125#comment-14393125
 ] 

Hudson commented on HADOOP-11757:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2101 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2101/])
HADOOP-11757. NFS gateway should shutdown when it can't start UDP or TCP 
server. Contributed by Brandon Li (brandonli: rev 
60ce825a71850fe0622d551159e8d66f32448bb5)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleTcpServer.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/mount/MountdBase.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpServer.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/Nfs3Base.java


 NFS gateway should shutdown when it can't start UDP or TCP server
 -

 Key: HADOOP-11757
 URL: https://issues.apache.org/jira/browse/HADOOP-11757
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch


 Unlike the Portmap, the Nfs3 class doesn't shut down when the service can't start.





[jira] [Updated] (HADOOP-11796) Fix TestShellBasedIdMapping.testStaticMapUpdate failure on Windows

2015-04-02 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-11796:

Issue Type: Sub-task  (was: Test)
Parent: HADOOP-11795

 Fix TestShellBasedIdMapping.testStaticMapUpdate failure on Windows
 --

 Key: HADOOP-11796
 URL: https://issues.apache.org/jira/browse/HADOOP-11796
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor

 The test should be skipped on Windows.
 {code}
 Stacktrace
 java.util.NoSuchElementException: null
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
   at 
 com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
   at 
 com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
   at 
 org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
 Standard Output
 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:init(113)) - User configured user account update 
 time is less than 1 minute. Use 1 minute instead.
 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
 UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
 not exist.
 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
 supported:Windows Server 2008 R2. Can't update user map and group map and 
 'nobody' will be used for any user and group.
 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:init(113)) - User configured user account update 
 time is less than 1 minute. Use 1 minute instead.
 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
 UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
 not exist.
 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
 supported:Windows Server 2008 R2. Can't update user map and group map and 
 'nobody' will be used for any user and group.
 {code}
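 A minimal sketch of the platform check, in plain Java rather than the actual
 test class (Hadoop itself exposes an equivalent flag as
 org.apache.hadoop.util.Shell.WINDOWS):

```java
public class PlatformSkipSketch {
    // Mirrors the kind of check ShellBasedIdMapping performs: static
    // UID/GID mapping relies on Unix shell tools, so skip on Windows.
    static boolean isWindows() {
        return System.getProperty("os.name")
                     .toLowerCase(java.util.Locale.ROOT)
                     .startsWith("windows");
    }

    public static void main(String[] args) {
        if (isWindows()) {
            System.out.println("skipping testStaticMapUpdate on Windows");
            return;
        }
        // ...the static UID/GID mapping assertions would run here...
        System.out.println("running testStaticMapUpdate");
    }
}
```

 In the JUnit 4 test itself, the idiomatic form would be
 Assume.assumeTrue(!Shell.WINDOWS) as the first statement of
 testStaticMapUpdate().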





[jira] [Updated] (HADOOP-11552) Allow handoff on the server side for RPC requests

2015-04-02 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HADOOP-11552:

Target Version/s: 2.8.0  (was: 2.7.0)

 Allow handoff on the server side for RPC requests
 -

 Key: HADOOP-11552
 URL: https://issues.apache.org/jira/browse/HADOOP-11552
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Attachments: HADOOP-11552.1.wip.txt, HADOOP-11552.2.txt, 
 HADOOP-11552.3.txt, HADOOP-11552.3.txt, HADOOP-11552.4.txt


 An RPC server handler thread is tied up for each incoming RPC request. This 
 isn't ideal, since this essentially implies that RPC operations should be 
 short lived, and most operations which could take time end up falling back to 
 a polling mechanism.
 Some use cases where this is useful.
 - YARN submitApplication - which currently submits, followed by a poll to 
 check if the application is accepted while the submit operation is written 
 out to storage. This can be collapsed into a single call.
 - YARN allocate - requests and allocations use the same protocol. New 
 allocations are received via polling.
 The allocate protocol could be split into a request/heartbeat along with a 
 'awaitResponse'. The request/heartbeat is sent only when there's a request or 
 on a much longer heartbeat interval. awaitResponse is always left active with 
 the RM - and returns the moment something is available.
 MapReduce/Tez task to AM communication is another example of this pattern.
 The same pattern of splitting calls can be used for other protocols as well. 
 This should serve to improve latency, as well as reduce network traffic since 
 the keep-alive heartbeat can be sent less frequently.
 I believe there's some cases in HDFS as well, where the DN gets told to 
 perform some operations when they heartbeat into the NN.
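 The handoff pattern described above can be sketched generically with a
 CompletableFuture. This is an illustration of the idea only, not Hadoop's RPC
 internals; the class and method names are made up for the example.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RpcHandoffSketch {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Hand the request off to a worker pool and return a future at once:
    // the scarce handler thread is immediately free to pick up the next
    // call, and the response is sent whenever the future completes.
    CompletableFuture<String> handle(String request) {
        return CompletableFuture.supplyAsync(
                () -> "response:" + request,  // long-running work goes here
                workers);
    }

    void shutdown() {
        workers.shutdown();
    }
}
```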





[jira] [Commented] (HADOOP-11552) Allow handoff on the server side for RPC requests

2015-04-02 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393094#comment-14393094
 ] 

Siddharth Seth commented on HADOOP-11552:
-

cc/ [~vinodkv] - thoughts on making the YARN changes in a branch?

 Allow handoff on the server side for RPC requests
 -

 Key: HADOOP-11552
 URL: https://issues.apache.org/jira/browse/HADOOP-11552
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Attachments: HADOOP-11552.1.wip.txt, HADOOP-11552.2.txt, 
 HADOOP-11552.3.txt, HADOOP-11552.3.txt, HADOOP-11552.4.txt


 An RPC server handler thread is tied up for each incoming RPC request. This 
 isn't ideal, since this essentially implies that RPC operations should be 
 short lived, and most operations which could take time end up falling back to 
 a polling mechanism.
 Some use cases where this is useful.
 - YARN submitApplication - which currently submits, followed by a poll to 
 check if the application is accepted while the submit operation is written 
 out to storage. This can be collapsed into a single call.
 - YARN allocate - requests and allocations use the same protocol. New 
 allocations are received via polling.
 The allocate protocol could be split into a request/heartbeat along with a 
 'awaitResponse'. The request/heartbeat is sent only when there's a request or 
 on a much longer heartbeat interval. awaitResponse is always left active with 
 the RM - and returns the moment something is available.
 MapReduce/Tez task to AM communication is another example of this pattern.
 The same pattern of splitting calls can be used for other protocols as well. 
 This should serve to improve latency, as well as reduce network traffic since 
 the keep-alive heartbeat can be sent less frequently.
 I believe there's some cases in HDFS as well, where the DN gets told to 
 perform some operations when they heartbeat into the NN.





[jira] [Commented] (HADOOP-11552) Allow handoff on the server side for RPC requests

2015-04-02 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14393093#comment-14393093
 ] 

Siddharth Seth commented on HADOOP-11552:
-

I'm interested in getting this patch into a released version of Hadoop. Having 
it in a released version makes it easier for downstream projects to consume, 
and I do intend to use this feature in Tez, which can serve as another testbed. 
I was hoping to get this into 2.7, but it's too late for that. I will change 
the target version to 2.8, which gives more breathing room to have it reviewed 
and tried out in components within Hadoop.

There isn't that much work in the RPC layer itself. Follow up patches like the 
shared thread pool will be more disruptive. When this is used by YARN / HDFS - 
those patches are likely to be more involved, and a larger change set. I can 
create jiras for some of the YARN tasks, and would request folks in HDFS to 
create relevant jiras there.

This could absolutely be done in a branch. If this particular patch is 
considered 'safe' - it'd be good to get it into 2.8 even if the rest of the 
work to use it in sub-components isn't done.

HADOOP-10300 is related, and this patch borrows elements from there, as I 
mentioned in my first comment. If I'm not mistaken, HADOOP-10300 doesn't allow 
for a return value; Daryn can correct me if I've understood that incorrectly.

Multiplexing UGIs over a single connection - that's TBD, right? We still use 
distinct connections per UGI if I'm not mistaken, and I don't think the patch 
affects this path. Are there plans to support multiplexing responses on a 
connection, i.e. to allow a smaller response through even if the responder 
isn't done with a previous response on the same connection?


 Allow handoff on the server side for RPC requests
 -

 Key: HADOOP-11552
 URL: https://issues.apache.org/jira/browse/HADOOP-11552
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Attachments: HADOOP-11552.1.wip.txt, HADOOP-11552.2.txt, 
 HADOOP-11552.3.txt, HADOOP-11552.3.txt, HADOOP-11552.4.txt


 An RPC server handler thread is tied up for each incoming RPC request. This 
 isn't ideal, since this essentially implies that RPC operations should be 
 short lived, and most operations which could take time end up falling back to 
 a polling mechanism.
 Some use cases where this is useful.
 - YARN submitApplication - which currently submits, followed by a poll to 
 check if the application is accepted while the submit operation is written 
 out to storage. This can be collapsed into a single call.
 - YARN allocate - requests and allocations use the same protocol. New 
 allocations are received via polling.
 The allocate protocol could be split into a request/heartbeat along with a 
 'awaitResponse'. The request/heartbeat is sent only when there's a request or 
 on a much longer heartbeat interval. awaitResponse is always left active with 
 the RM - and returns the moment something is available.
 MapReduce/Tez task to AM communication is another example of this pattern.
 The same pattern of splitting calls can be used for other protocols as well. 
 This should serve to improve latency, as well as reduce network traffic since 
 the keep-alive heartbeat can be sent less frequently.
 I believe there's some cases in HDFS as well, where the DN gets told to 
 perform some operations when they heartbeat into the NN.





[jira] [Created] (HADOOP-11795) Fix Hadoop unit test failures on Windows

2015-04-02 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-11795:
---

 Summary: Fix Hadoop unit test failures on Windows
 Key: HADOOP-11795
 URL: https://issues.apache.org/jira/browse/HADOOP-11795
 Project: Hadoop Common
  Issue Type: Test
Reporter: Xiaoyu Yao
Priority: Minor








[jira] [Updated] (HADOOP-11796) Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows

2015-04-02 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-11796:

Summary: Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows  (was: 
Fix TestShellBasedIdMapping.testStaticMapUpdate failure on Windows)

 Skip TestShellBasedIdMapping.testStaticMapUpdate on Windows
 ---

 Key: HADOOP-11796
 URL: https://issues.apache.org/jira/browse/HADOOP-11796
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor

 The test should be skipped on Windows.
 {code}
 Stacktrace
 java.util.NoSuchElementException: null
   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:809)
   at java.util.HashMap$EntryIterator.next(HashMap.java:847)
   at java.util.HashMap$EntryIterator.next(HashMap.java:845)
   at 
 com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:314)
   at 
 com.google.common.collect.AbstractBiMap$EntrySet$1.next(AbstractBiMap.java:306)
   at 
 org.apache.hadoop.security.TestShellBasedIdMapping.testStaticMapUpdate(TestShellBasedIdMapping.java:151)
 Standard Output
 2015-03-30 00:44:30,267 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:init(113)) - User configured user account update 
 time is less than 1 minute. Use 1 minute instead.
 2015-03-30 00:44:30,274 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
 UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
 not exist.
 2015-03-30 00:44:30,274 ERROR security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
 supported:Windows Server 2008 R2. Can't update user map and group map and 
 'nobody' will be used for any user and group.
 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:init(113)) - User configured user account update 
 time is less than 1 minute. Use 1 minute instead.
 2015-03-30 00:44:30,275 INFO  security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:updateStaticMapping(322)) - Not doing static 
 UID/GID mapping because 'D:\tmp\hadoop-dal\nfs-6561166579146979876.map' does 
 not exist.
 2015-03-30 00:44:30,275 ERROR security.ShellBasedIdMapping 
 (ShellBasedIdMapping.java:checkSupportedPlatform(278)) - Platform is not 
 supported:Windows Server 2008 R2. Can't update user map and group map and 
 'nobody' will be used for any user and group.
 {code}




