[jira] [Commented] (HADOOP-11781) fix race conditions and add URL support to smart-apply-patch.sh

2015-04-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394157#comment-14394157
 ] 

Hadoop QA commented on HADOOP-11781:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12709187/HADOOP-11781-03.patch
  against trunk revision 72f6bd4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6056//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6056//console

This message is automatically generated.

 fix race conditions and add URL support to smart-apply-patch.sh
 ---

 Key: HADOOP-11781
 URL: https://issues.apache.org/jira/browse/HADOOP-11781
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Raymie Stata
 Attachments: HADOOP-11781-01.patch, HADOOP-11781-02.patch, 
 HADOOP-11781-03.patch


 smart-apply-patch.sh has a few race conditions and is just generally crufty.  
 It should really be rewritten.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11781) fix race conditions and add URL support to smart-apply-patch.sh

2015-04-03 Thread Raymie Stata (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymie Stata updated HADOOP-11781:
--
Attachment: HADOOP-11781-03.patch

-03: made the {{sort -u}} change.

 fix race conditions and add URL support to smart-apply-patch.sh
 ---

 Key: HADOOP-11781
 URL: https://issues.apache.org/jira/browse/HADOOP-11781
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Raymie Stata
 Attachments: HADOOP-11781-01.patch, HADOOP-11781-02.patch, 
 HADOOP-11781-03.patch


 smart-apply-patch.sh has a few race conditions and is just generally crufty.  
 It should really be rewritten.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-04-03 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11772:
---
Attachment: HADOOP-11772-wip-001.patch

Thanks [~gopalv] for your comment. Attaching a sample patch that creates a pool 
of Clients.
TODO:
* Add documentation for the new parameter
* Create a test

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-wip-001.patch, 
 dfs-sync-ipc.png, sync-client-bt.png, sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
   SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!
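The contention described above comes from every invoker funneling through one synchronized method. A minimal sketch of a lock-free alternative, assuming simplified stand-ins for the real {{Client}} and {{SocketFactory}} types (this is an illustration of the technique, not the attached patch):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified stand-in for the real Hadoop Client type; illustration only.
class PooledClient {
    private final AtomicInteger refCount = new AtomicInteger(1);
    void incCount() { refCount.incrementAndGet(); }
    int count() { return refCount.get(); }
}

class NonBlockingClientCache<K> {
    private final ConcurrentMap<K, PooledClient> clients = new ConcurrentHashMap<>();

    // Unlike a synchronized getClient, unrelated lookups never serialize on a
    // single monitor; computeIfAbsent creates at most one client per key.
    PooledClient getClient(K factory) {
        final boolean[] created = {false};
        PooledClient c = clients.computeIfAbsent(factory, f -> {
            created[0] = true;        // this thread created the client (count starts at 1)
            return new PooledClient();
        });
        if (!created[0]) {
            c.incCount();             // reusing an existing client
        }
        return c;
    }
}
```

The refcounting semantics mirror the snippet above (creator starts at 1, every reuse increments), but readers of an already-cached client no longer block each other.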



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394311#comment-14394311
 ] 

Hudson commented on HADOOP-9805:


FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #152 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/152/])
HADOOP-9805. Refactor RawLocalFileSystem#rename for improved testability. 
Contributed by Jean-Pierre Matsumoto. (cnauroth: rev 
5763b173d34dcf7372520076f00b576f493662cd)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractRename.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.
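The testability idea described above can be sketched as follows, with hypothetical method names rather than the actual RawLocalFileSystem code: keep the fast {{renameTo}} path, and extract the fallback into its own method so a test can exercise it directly on any platform.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

// Illustration only: fast platform rename with an extracted,
// independently testable fallback. Names are hypothetical.
class LocalRenameSketch {
    static boolean rename(File src, File dst) throws IOException {
        if (src.renameTo(dst)) {
            return true;              // fast path: platform rename succeeded
        }
        return renameFallback(src, dst);
    }

    // Fallback emulating POSIX rename semantics: copy, then delete the source.
    // Because it is a separate method, a test can call it directly even on
    // platforms where renameTo never takes the fallback path.
    static boolean renameFallback(File src, File dst) throws IOException {
        Files.copy(src.toPath(), dst.toPath(), StandardCopyOption.REPLACE_EXISTING);
        return src.delete();
    }
}
```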



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11798) Native raw erasure coder in XOR codes

2015-04-03 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11798:
--

 Summary: Native raw erasure coder in XOR codes
 Key: HADOOP-11798
 URL: https://issues.apache.org/jira/browse/HADOOP-11798
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


The raw XOR coder is used by the Reed-Solomon erasure coder as an optimization 
to recover a single erased block, which is the most common case. It can also be 
used by the HitchHiker coder. A native implementation would therefore be 
worthwhile for the performance gain.
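The single-erasure recovery mentioned above reduces to XOR: the parity block is the XOR of all data blocks, so any one erased block is the XOR of the parity and the survivors. A pure-Java illustration of the math only (the actual Hadoop raw coder operates on buffers and, per this issue, would go through a native path):

```java
// Illustration of the XOR erasure math; not the Hadoop coder API.
class XorMath {
    // parity[i] = data[0][i] ^ data[1][i] ^ ... for every byte position
    static byte[] parity(byte[][] data) {
        byte[] p = new byte[data[0].length];
        for (byte[] block : data) {
            for (int i = 0; i < block.length; i++) {
                p[i] ^= block[i];
            }
        }
        return p;
    }

    // Recover the single erased block by XOR-ing the parity with the survivors.
    static byte[] recover(byte[][] surviving, byte[] parity) {
        byte[][] all = new byte[surviving.length + 1][];
        System.arraycopy(surviving, 0, all, 0, surviving.length);
        all[surviving.length] = parity;
        return parity(all);
    }
}
```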



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11799) In Cluster setup documentation, many of the configurations do not have the default values

2015-04-03 Thread Jagadesh Kiran N (JIRA)
Jagadesh Kiran N created HADOOP-11799:
-

 Summary: In Cluster setup documentation, many of the 
configurations do not have the default values 
 Key: HADOOP-11799
 URL: https://issues.apache.org/jira/browse/HADOOP-11799
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jagadesh Kiran N
Priority: Trivial


It would be helpful to have default values for the configurations in the Cluster 
Setup documentation page, so that users can use them directly for installation 
rather than searching for the default values in *-default.xml, and modify only 
the configurations they need.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394310#comment-14394310
 ] 

Hudson commented on HADOOP-11797:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #152 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/152/])
HADOOP-11797. releasedocmaker.py needs to put ASF headers on output (aw) (aw: 
rev 8d3c0f601d549a22648050bcc9a0e4acf37edc81)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/releasedocmaker.py


 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394333#comment-14394333
 ] 

Hudson commented on HADOOP-9805:


FAILURE: Integrated in Hadoop-Yarn-trunk #886 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/886/])
HADOOP-9805. Refactor RawLocalFileSystem#rename for improved testability. 
Contributed by Jean-Pierre Matsumoto. (cnauroth: rev 
5763b173d34dcf7372520076f00b576f493662cd)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractRename.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394332#comment-14394332
 ] 

Hudson commented on HADOOP-11797:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #886 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/886/])
HADOOP-11797. releasedocmaker.py needs to put ASF headers on output (aw) (aw: 
rev 8d3c0f601d549a22648050bcc9a0e4acf37edc81)
* dev-support/releasedocmaker.py
* hadoop-common-project/hadoop-common/CHANGES.txt


 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-11800:
--

 Summary: Clean up some test methods in TestCodec.java
 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA


Found two issues when reviewing the patches in HADOOP-11627.
1. There is no {{@Test}} annotation, so the test is not executed.
{code}
  public void testCodecPoolAndGzipDecompressor() {
{code}
2. The method should be private: it is a helper called from other tests, not a test itself.
{code}
  public void testGzipCodecWrite(boolean useNative) throws IOException {
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-04-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394358#comment-14394358
 ] 

Akira AJISAKA commented on HADOOP-11627:


I found two issues in {{TestCodec.java}} while reviewing the patch, and filed 
HADOOP-11800 to fix them.


 Remove io.native.lib.available from trunk
 -

 Key: HADOOP-11627
 URL: https://issues.apache.org/jira/browse/HADOOP-11627
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
 HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
 HADOOP-11627.patch


 According to the discussion in HADOOP-8642, we should remove 
 {{io.native.lib.available}} from trunk, and always use native libraries if 
 they exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HADOOP-11800:
-

Assignee: Brahma Reddy Battula

 Clean up some test methods in TestCodec.java
 

 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie

 Found two issues when reviewing the patches in HADOOP-11627.
 1. There is no {{@Test}} annotation, so the test is not executed.
 {code}
   public void testCodecPoolAndGzipDecompressor() {
 {code}
 2. The method should be private: it is a helper called from other tests, not a test itself.
 {code}
   public void testGzipCodecWrite(boolean useNative) throws IOException {
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-04-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394349#comment-14394349
 ] 

Hadoop QA commented on HADOOP-11772:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12709214/HADOOP-11772-wip-001.patch
  against trunk revision 72f6bd4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestRPCCallBenchmark

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6057//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6057//console

This message is automatically generated.

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-wip-001.patch, 
 dfs-sync-ipc.png, sync-client-bt.png, sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
   SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11799) In Cluster setup documentation, many of the configurations do not have the default values

2015-04-03 Thread Gururaj Shetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gururaj Shetty reassigned HADOOP-11799:
---

Assignee: Gururaj Shetty

 In Cluster setup documentation, many of the configurations do not have the 
 default values 
 --

 Key: HADOOP-11799
 URL: https://issues.apache.org/jira/browse/HADOOP-11799
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jagadesh Kiran N
Assignee: Gururaj Shetty
Priority: Trivial

 It would be helpful to have default values for the configurations in the Cluster 
 Setup documentation page, so that users can use them directly for installation 
 rather than searching for the default values in *-default.xml, and modify only 
 the configurations they need.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-04-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394350#comment-14394350
 ] 

Akira AJISAKA commented on HADOOP-11627:


Thanks [~brahmareddy] for updating the patch. Some comments.
1. Would you remove the unused imports from the following files?
* ZlibFactory.java
* BZip2Factory.java
* NativeCodeLoader.java
* TestZlibCompressorDecompressor.java
* TestConcatenatedCompressedInput.java

2. In {{TestCodec.java}}, would you add 
{{ZlibFactory.setNativeZlibLoaded(false)}} to the tests below?
{code}
@@ -458,7 +461,6 @@ public void testCodecInitWithCompressionLevel() throws 
Exception {
+ : "native libs not loaded");
 }
 conf = new Configuration();
-conf.setBoolean(CommonConfigurationKeys.IO_NATIVE_LIB_AVAILABLE_KEY, 
false);
 codecTestWithNOCompression( conf,
  org.apache.hadoop.io.compress.DefaultCodec);
   }
{code}
{code}
 } else {
   LOG.warn("testCodecPoolCompressorReinit skipped: native libs not 
loaded");
 }
-conf.setBoolean(CommonConfigurationKeys.IO_NATIVE_LIB_AVAILABLE_KEY, 
false);
 DefaultCodec dfc = ReflectionUtils.newInstance(DefaultCodec.class, conf);
 gzipReinitTest(conf, dfc);
   }
{code}
{code}
@@ -901,8 +899,6 @@ public void testCodecPoolAndGzipDecompressor() {
 
 // Don't use native libs for this test.
 Configuration conf = new Configuration();
-conf.setBoolean(CommonConfigurationKeys.IO_NATIVE_LIB_AVAILABLE_KEY,
-false);
{code}

3. I think the try-finally statement in 
{{TestConcatenatedCompressedInput#testPrototypeInflaterGzip()}} is not needed. 
Adding {{ZlibFactory.loadNativeZLib()}} before 
{{doMultipleGzipBufferSizes(jobConf, true)}} would be sufficient.
{code}
@@ -352,7 +360,13 @@ public void testBuiltInGzipDecompressor() throws 
IOException {
  84, lineNum);
 
 // test BuiltInGzipDecompressor with lots of different input-buffer sizes
-doMultipleGzipBufferSizes(jobConf, false);
+// Don't use native libs
+try {
+  ZlibFactory.setNativeZlibLoaded(false);
+  doMultipleGzipBufferSizes(jobConf, false);
+} finally {
+  ZlibFactory.loadNativeZLib();
+}
{code}

 Remove io.native.lib.available from trunk
 -

 Key: HADOOP-11627
 URL: https://issues.apache.org/jira/browse/HADOOP-11627
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
 HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
 HADOOP-11627.patch


 According to the discussion in HADOOP-8642, we should remove 
 {{io.native.lib.available}} from trunk, and always use native libraries if 
 they exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-03 Thread Jean-Pierre Matsumoto (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394353#comment-14394353
 ] 

Jean-Pierre Matsumoto commented on HADOOP-9805:
---

You're welcome. I'm glad to contribute.

 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-10532) Jenkins test-patch timed out on a large patch touching files in multiple modules.

2015-04-03 Thread Jean-Pierre Matsumoto (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Pierre Matsumoto reassigned HADOOP-10532:
--

Assignee: Jean-Pierre Matsumoto

 Jenkins test-patch timed out on a large patch touching files in multiple 
 modules.
 -

 Key: HADOOP-10532
 URL: https://issues.apache.org/jira/browse/HADOOP-10532
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
 Attachments: PreCommit-HADOOP-Build-3821-consoleText.txt.gz


 On HADOOP-10503, I had posted a consolidated patch touching multiple files 
 across all sub-modules: Hadoop, HDFS, YARN and MapReduce.  The Jenkins 
 test-patch runs for these consolidated patches timed out.  I also 
 experimented with a dummy patch that simply added one-line comment changes to 
 files.  This patch also timed out, which seems to indicate a bug in our 
 automation rather than a problem with any patch in particular.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394362#comment-14394362
 ] 

Steve Loughran commented on HADOOP-11731:
-

I don't think CHANGES.TXT works that well. We may think it does, but that's 
because without the tooling to validate it, stuff doesn't get added, so it 
can omit a lot of work. Then there's the problem of merging across branches, 
and dealing with race conditions/commit conflicts between other people's work 
and yours.

Automated generation not only gives us the change list, it adds the option of 
generating a hyperlinked version where you can actually click through to the JIRAs.

I do recognise the concerns about JIRA naming and tagging, but that's a process 
thing that is correctable. If we can put the effort into maintaining 
CHANGES.TXT, we can do it for JIRAs. Many use cases will be easier. For example, 
committing something to trunk and then backporting it today means:
# pull trunk
# apply patch to trunk
# edit changes.txt
# commit
# attempt to push
# if unlucky: fix the CHANGES.TXT conflict & commit, push
# close the JIRA

The backport workflow becomes:
# switch to branch-2
# pull
# apply the code bits of the patch
# edit changes.txt
# commit
# go to trunk
# fix changes.txt
# push
# remember to edit the JIRA

Forward porting is easier: edit & commit on branch-2, cherry-pick the patch, 
push both.

Now think of a JIRA-only workflow:
# apply patch to trunk
# commit
# close in JIRA with the 3.0.0 fix version

You'll only get a commit conflict if someone patched the same source files, so 
commits are more likely to go through.

Backporting becomes:
# cherry-pick the patch
# commit & push
# in JIRA, change the fix version

It's backporting & cross-version code where this stuff really excels.



 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11801) Update BUILDING.txt

2015-04-03 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-11801:
--
Attachment: (was: HADOOP-11801.patch)

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Gabor Liptak
Priority: Minor

 ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11801) Update BUILDING.txt

2015-04-03 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-11801:
--
Attachment: HADOOP-11801.patch

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Gabor Liptak
Priority: Minor
 Attachments: HADOOP-11801.patch


 ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394540#comment-14394540
 ] 

Hadoop QA commented on HADOOP-11717:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12709235/HADOOP-11717-8.patch
  against trunk revision 72f6bd4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6059//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6059//console

This message is automatically generated.

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism, which buys us many 
 SSO integrations
 * It is cryptographically verifiable, so we can determine whether it can be trusted
 * Checking for expiration allows a limited lifetime and a limited window for 
 compromised use
 This will introduce the nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.
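One of the benefits listed, expiration checking, is cheap because the expiry claim sits in the base64url-encoded JWT payload. A stripped-down, stdlib-only sketch of reading that claim (illustration only: a real handler verifies the signature, e.g. via nimbus-jose-jwt, before trusting any claim, and parses the payload as JSON rather than the naive string scan used here):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustration only; not the Hadoop Auth handler. Signature verification
// is deliberately omitted and must happen first in real code.
class JwtExpirySketch {
    static boolean isExpired(String jwt, long nowSeconds) {
        String payloadB64 = jwt.split("\\.")[1];          // header.payload.signature
        String json = new String(Base64.getUrlDecoder().decode(payloadB64),
                                 StandardCharsets.UTF_8);
        int idx = json.indexOf("\"exp\":");
        if (idx < 0) {
            return true;              // no expiry claim: treat as not trustworthy
        }
        // naive digit scan standing in for proper JSON parsing
        long exp = Long.parseLong(
            json.substring(idx + 6).replaceAll("[^0-9].*", ""));
        return exp < nowSeconds;
    }
}
```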



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11801) Update BUILDING.txt

2015-04-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394554#comment-14394554
 ] 

Hadoop QA commented on HADOOP-11801:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12709237/HADOOP-11801.patch
  against trunk revision 72f6bd4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6060//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6060//console

This message is automatically generated.

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Gabor Liptak
Priority: Minor
 Attachments: HADOOP-11801.patch


 ProtocolBuffer is packaged in Ubuntu





[jira] [Commented] (HADOOP-11801) Update BUILDING.txt

2015-04-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394557#comment-14394557
 ] 

Hadoop QA commented on HADOOP-11801:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12709240/HADOOP-11801.patch
  against trunk revision 72f6bd4.

{color:red}-1 @author{color}.  The patch appears to contain  @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include  new 
or modified test files.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6061//console

This message is automatically generated.

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Gabor Liptak
Priority: Minor
 Attachments: HADOOP-11801.patch


 ProtocolBuffer is packaged in Ubuntu





[jira] [Commented] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394504#comment-14394504
 ] 

Hudson commented on HADOOP-11797:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #143 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/143/])
HADOOP-11797. releasedocmaker.py needs to put ASF headers on output (aw) (aw: 
rev 8d3c0f601d549a22648050bcc9a0e4acf37edc81)
* dev-support/releasedocmaker.py
* hadoop-common-project/hadoop-common/CHANGES.txt


 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.





[jira] [Commented] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394505#comment-14394505
 ] 

Hudson commented on HADOOP-9805:


FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #143 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/143/])
HADOOP-9805. Refactor RawLocalFileSystem#rename for improved testability. 
Contributed by Jean-Pierre Matsumoto. (cnauroth: rev 
5763b173d34dcf7372520076f00b576f493662cd)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractRename.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.





[jira] [Commented] (HADOOP-11801) Update BUILDING.txt

2015-04-03 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394512#comment-14394512
 ] 

Gabor Liptak commented on HADOOP-11801:
---

Patch available at https://github.com/gliptak/hadoop/tree/HADOOP-11801

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Gabor Liptak
Priority: Minor

 ProtocolBuffer is packaged in Ubuntu





[jira] [Commented] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394519#comment-14394519
 ] 

Hudson commented on HADOOP-11797:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2084 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2084/])
HADOOP-11797. releasedocmaker.py needs to put ASF headers on output (aw) (aw: 
rev 8d3c0f601d549a22648050bcc9a0e4acf37edc81)
* dev-support/releasedocmaker.py
* hadoop-common-project/hadoop-common/CHANGES.txt


 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.





[jira] [Commented] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394520#comment-14394520
 ] 

Hudson commented on HADOOP-9805:


FAILURE: Integrated in Hadoop-Hdfs-trunk #2084 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2084/])
HADOOP-9805. Refactor RawLocalFileSystem#rename for improved testability. 
Contributed by Jean-Pierre Matsumoto. (cnauroth: rev 
5763b173d34dcf7372520076f00b576f493662cd)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractRename.java


 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.





[jira] [Updated] (HADOOP-11801) Update BUILDING.txt

2015-04-03 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-11801:
--
Release Note: ProtocolBuffer is packaged in Ubuntu
  Status: Patch Available  (was: Open)

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Gabor Liptak
Priority: Minor

 ProtocolBuffer is packaged in Ubuntu





[jira] [Updated] (HADOOP-11801) Update BUILDING.txt

2015-04-03 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-11801:
--
Attachment: HADOOP-11801.patch

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Gabor Liptak
Priority: Minor
 Attachments: HADOOP-11801.patch


 ProtocolBuffer is packaged in Ubuntu





[jira] [Updated] (HADOOP-11798) Native raw erasure coder in XOR codes

2015-04-03 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11798:
---
Fix Version/s: HDFS-7285

 Native raw erasure coder in XOR codes
 -

 Key: HADOOP-11798
 URL: https://issues.apache.org/jira/browse/HADOOP-11798
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285


 The raw XOR coder is used by the Reed-Solomon erasure coder as an optimization 
 to recover a single erased block, which is the most common case. It can also 
 be used in the HitchHiker coder. A native implementation would therefore be 
 worthwhile for the performance gain.
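For illustration, the single-block recovery that motivates a fast XOR coder can be sketched in a few lines of plain Java. This is a standalone illustration of the underlying math, not the Hadoop coder API; class and method names are made up for the example.

```java
import java.util.Arrays;

public class XorRecovery {
    // XOR parity: parity[i] = data0[i] ^ data1[i] ^ ... ^ dataN[i]
    public static byte[] encodeParity(byte[][] dataBlocks) {
        byte[] parity = new byte[dataBlocks[0].length];
        for (byte[] block : dataBlocks) {
            for (int i = 0; i < parity.length; i++) {
                parity[i] ^= block[i];
            }
        }
        return parity;
    }

    // Recover a single erased data block: XOR the parity with all surviving blocks.
    public static byte[] recoverErased(byte[][] survivingBlocks, byte[] parity) {
        byte[] recovered = parity.clone();
        for (byte[] block : survivingBlocks) {
            for (int i = 0; i < recovered.length; i++) {
                recovered[i] ^= block[i];
            }
        }
        return recovered;
    }

    public static void main(String[] args) {
        byte[] d0 = {1, 2, 3};
        byte[] d1 = {4, 5, 6};
        byte[] d2 = {7, 8, 9};
        byte[] parity = encodeParity(new byte[][]{d0, d1, d2});
        // Pretend d1 was erased; rebuild it from d0, d2, and the parity.
        byte[] recovered = recoverErased(new byte[][]{d0, d2}, parity);
        System.out.println(Arrays.equals(recovered, d1)); // prints "true"
    }
}
```

Because the loop is a straight byte-wise XOR, it is exactly the kind of hot path that benefits from a native (SIMD-friendly) implementation.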





[jira] [Updated] (HADOOP-11801) Update BUILDING.txt

2015-04-03 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-11801:
--
Target Version/s: 2.6.1

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Gabor Liptak
Priority: Minor
 Attachments: HADOOP-11801.patch


 ProtocolBuffer is packaged in Ubuntu





[jira] [Created] (HADOOP-11802) DomainSocketWatcher#watcherThread encounters IllegalStateException in finally block when calling sendCallback

2015-04-03 Thread Eric Payne (JIRA)
Eric Payne created HADOOP-11802:
---

 Summary: DomainSocketWatcher#watcherThread encounters 
IllegalStateException in finally block when calling sendCallback
 Key: HADOOP-11802
 URL: https://issues.apache.org/jira/browse/HADOOP-11802
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Eric Payne


In the main finally block of {{DomainSocketWatcher#watcherThread}}, the call to 
{{sendCallback}} can encounter an {{IllegalStateException}} and leave some 
cleanup tasks undone.

{code}
  } finally {
lock.lock();
try {
  kick(); // allow the handler for notificationSockets[0] to read a byte
  for (Entry entry : entries.values()) {
// We do not remove from entries as we iterate, because that can
// cause a ConcurrentModificationException.
sendCallback(close, entries, fdSet, entry.getDomainSocket().fd);
  }
  entries.clear();
  fdSet.close();
} finally {
  lock.unlock();
}
  }
{code}

The exception causes {{watcherThread}} to skip the calls to {{entries.clear()}} 
and {{fdSet.close()}}.

{code}
2015-04-02 11:48:09,941 [DataXceiver for client 
unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
Thread[Thread-14,5,main] terminating on unexpected exception
java.lang.IllegalStateException: failed to remove 
b845649551b6b1eab5c17f630e42489d
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at 
org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.removeShm(ShortCircuitRegistry.java:119)
at 
org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry$RegisteredShm.handle(ShortCircuitRegistry.java:102)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher.sendCallback(DomainSocketWatcher.java:402)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher.access$1100(DomainSocketWatcher.java:52)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:522)
at java.lang.Thread.run(Thread.java:722)
{code}

Please note that this is not a duplicate of HADOOP-11333, HADOOP-11604, or 
HADOOP-10404. The cluster installation is running code with all of these fixes.
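One way to keep that cleanup from being skipped is to catch per-entry exceptions inside the loop so that the final {{entries.clear()}} and {{fdSet.close()}} calls are still reached. Below is a standalone sketch of the pattern only, not the actual Hadoop fix; the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class SafeCleanup {
    // Runs every callback even if some of them throw, then always performs the
    // final cleanup (analogous to entries.clear() / fdSet.close()).
    // Returns the number of callbacks that failed.
    public static int drainAll(List<Runnable> callbacks, Runnable finalCleanup) {
        int failures = 0;
        try {
            for (Runnable cb : callbacks) {
                try {
                    cb.run();
                } catch (RuntimeException e) {
                    // Swallow a per-entry failure (e.g. IllegalStateException)
                    // so the remaining entries are still processed.
                    failures++;
                }
            }
        } finally {
            finalCleanup.run();
        }
        return failures;
    }

    public static void main(String[] args) {
        List<Runnable> cbs = new ArrayList<>();
        List<String> log = new ArrayList<>();
        cbs.add(() -> log.add("a"));
        cbs.add(() -> { throw new IllegalStateException("failed to remove"); });
        cbs.add(() -> log.add("b"));
        int failed = drainAll(cbs, () -> log.add("cleanup"));
        System.out.println(failed + " " + log); // prints "1 [a, b, cleanup]"
    }
}
```

Whether the swallowed exception should be logged or rethrown after cleanup is a design decision for the real patch.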





[jira] [Commented] (HADOOP-11792) Remove all of the CHANGES.txt files

2015-04-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394761#comment-14394761
 ] 

Allen Wittenauer commented on HADOOP-11792:
---

This is only targeting trunk.  branch-2 is too old, too crufty, and too 
fundamentally broken for my time to be wasted on it.

 Remove all of the CHANGES.txt files
 ---

 Key: HADOOP-11792
 URL: https://issues.apache.org/jira/browse/HADOOP-11792
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
 should remove them.





[jira] [Updated] (HADOOP-11792) Remove all of the CHANGES.txt files

2015-04-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11792:
--
Target Version/s: 3.0.0

 Remove all of the CHANGES.txt files
 ---

 Key: HADOOP-11792
 URL: https://issues.apache.org/jira/browse/HADOOP-11792
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
 should remove them.





[jira] [Commented] (HADOOP-10924) LocalDistributedCacheManager for concurrent sqoop processes fails to create unique directories

2015-04-03 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394849#comment-14394849
 ] 

zhihai xu commented on HADOOP-10924:


[~wattsinabox], it is fine to use jobID + UUID. Your current patch looks OK to 
me; you just need to add a multi-threaded test case. Thanks.

 LocalDistributedCacheManager for concurrent sqoop processes fails to create 
 unique directories
 --

 Key: HADOOP-10924
 URL: https://issues.apache.org/jira/browse/HADOOP-10924
 Project: Hadoop Common
  Issue Type: Bug
Reporter: William Watson
Assignee: William Watson
 Attachments: HADOOP-10924.02.patch, 
 HADOOP-10924.03.jobid-plus-uuid.patch


 Kicking off many sqoop processes in different threads results in:
 {code}
 2014-08-01 13:47:24 -0400:  INFO - 14/08/01 13:47:22 ERROR tool.ImportTool: 
 Encountered IOException running import job: java.io.IOException: 
 java.util.concurrent.ExecutionException: java.io.IOException: Rename cannot 
 overwrite non empty destination directory 
 /tmp/hadoop-hadoop/mapred/local/1406915233073
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:149)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.init(LocalJobRunner.java:163)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
 2014-08-01 13:47:24 -0400:  INFO -at 
 java.security.AccessController.doPrivileged(Native Method)
 2014-08-01 13:47:24 -0400:  INFO -at 
 javax.security.auth.Subject.doAs(Subject.java:415)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.run(Sqoop.java:145)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.main(Sqoop.java:238)
 {code}
 This happens if two processes are kicked off in the same second. The issue is 
 in the following lines of code in the 
 org.apache.hadoop.mapred.LocalDistributedCacheManager class: 
 {code}
 // Generating unique numbers for FSDownload.
 AtomicLong uniqueNumberGenerator =
new AtomicLong(System.currentTimeMillis());
 {code}
 and 
 {code}
 Long.toString(uniqueNumberGenerator.incrementAndGet())),
 {code}
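The collision follows directly from the seed: two JVMs started in the same millisecond begin their counters at the same value, so both generate the same directory names. The jobID-plus-UUID approach discussed in the comments avoids this by adding a per-process random component. A minimal standalone sketch (class and method names are hypothetical, not the actual patch):

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class UniqueLocalDirs {
    // The counter gives uniqueness within one process; the random UUID gives
    // uniqueness across processes started in the same millisecond.
    private static final AtomicLong COUNTER =
        new AtomicLong(System.currentTimeMillis());
    private static final String PROCESS_ID = UUID.randomUUID().toString();

    public static String nextDirName(String jobId) {
        return jobId + "_" + PROCESS_ID + "_" + COUNTER.incrementAndGet();
    }

    public static void main(String[] args) {
        String a = nextDirName("job_001");
        String b = nextDirName("job_001");
        System.out.println(a.equals(b)); // prints "false"
    }
}
```

Two concurrent processes disagree on {{PROCESS_ID}} with overwhelming probability, so their generated directory names cannot collide even when their counters start from the same timestamp.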





[jira] [Commented] (HADOOP-6371) Misleading information in documentation - Directories don't use host file system space and don't count against the space quota.

2015-04-03 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394611#comment-14394611
 ] 

Gabor Liptak commented on HADOOP-6371:
--

Ravi, can you offer an example of these counts?

http://www.michael-noll.com/blog/2011/10/20/understanding-hdfs-quotas-and-hadoop-fs-and-fsck-tools/

 Misleading information in documentation - Directories don't use host file 
 system space and don't count against the space quota.
 ---

 Key: HADOOP-6371
 URL: https://issues.apache.org/jira/browse/HADOOP-6371
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Ravi Phulari
  Labels: newbie

 Need to remove misleading information from quota documentation.
 {noformat}
 Directories don't use host file system space and don't count against the 
 space quota. 
 {noformat}





[jira] [Commented] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394820#comment-14394820
 ] 

Hudson commented on HADOOP-9805:


FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #153 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/153/])
HADOOP-9805. Refactor RawLocalFileSystem#rename for improved testability. 
Contributed by Jean-Pierre Matsumoto. (cnauroth: rev 
5763b173d34dcf7372520076f00b576f493662cd)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractRename.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.





[jira] [Commented] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394819#comment-14394819
 ] 

Hudson commented on HADOOP-11797:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #153 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/153/])
HADOOP-11797. releasedocmaker.py needs to put ASF headers on output (aw) (aw: 
rev 8d3c0f601d549a22648050bcc9a0e4acf37edc81)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/releasedocmaker.py


 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.





[jira] [Updated] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-04-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11791:
--
Target Version/s: 3.0.0

 Update src/site/markdown/releases to include old versions of Hadoop
 ---

 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build, documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11791.001.patch


 With the commit of HADOOP-11731, we need to include the new format of release 
 information in trunk.  This JIRA is about including those old versions in the 
 tree.





[jira] [Commented] (HADOOP-11785) Reduce number of listStatus operation in distcp buildListing()

2015-04-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394930#comment-14394930
 ] 

Colin Patrick McCabe commented on HADOOP-11785:
---

+1.  Thanks, [~3opan]

 Reduce number of listStatus operation in distcp buildListing()
 --

 Key: HADOOP-11785
 URL: https://issues.apache.org/jira/browse/HADOOP-11785
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 3.0.0
Reporter: Zoran Dimitrijevic
Assignee: Zoran Dimitrijevic
Priority: Minor
 Attachments: distcp-liststatus.patch, distcp-liststatus2.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Distcp was taking a long time in copyListing.buildListing() for large source 
 trees (I was using a source of 1.5M files in a tree of about 50K directories). 
 For input on S3, buildListing was taking more than one hour. I've noticed a 
 performance bug in the current code: it does listStatus twice for each 
 directory, which doubles the number of RPCs in some cases (if most directories 
 do not contain 1000 files).
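The fix pattern amounts to listing each directory once and reusing that result both to decide whether to recurse and to enumerate entries. A standalone sketch of the idea using {{java.nio.file}} rather than the Hadoop FileSystem API (class and helper names are hypothetical, not the distcp code):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class SingleListing {
    // Lists each directory exactly once; the cached listing is reused both to
    // collect the entries and to find subdirectories to recurse into.
    public static List<Path> walk(Path root) throws IOException {
        List<Path> result = new ArrayList<>();
        List<Path> children = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(root)) {
            for (Path p : stream) {
                children.add(p); // one listing RPC per directory, cached here
            }
        }
        for (Path p : children) {
            result.add(p);
            if (Files.isDirectory(p)) {
                result.addAll(walk(p)); // recurse without re-listing the parent
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("listing");
        Files.createFile(tmp.resolve("f1"));
        Files.createDirectories(tmp.resolve("d1"));
        Files.createFile(tmp.resolve("d1").resolve("f2"));
        System.out.println(walk(tmp).size()); // prints "3"
    }
}
```

On a remote store where each listing is an RPC (or an S3 LIST request), issuing one listing per directory instead of two halves the request count for trees of small directories.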
  





[jira] [Assigned] (HADOOP-11802) DomainSocketWatcher#watcherThread encounters IllegalStateException in finally block when calling sendCallback

2015-04-03 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reassigned HADOOP-11802:
---

Assignee: Eric Payne

 DomainSocketWatcher#watcherThread encounters IllegalStateException in finally 
 block when calling sendCallback
 -

 Key: HADOOP-11802
 URL: https://issues.apache.org/jira/browse/HADOOP-11802
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Eric Payne
Assignee: Eric Payne

 In the main finally block of {{DomainSocketWatcher#watcherThread}}, the call 
 to {{sendCallback}} can encounter an {{IllegalStateException}} and leave some 
 cleanup tasks undone.
 {code}
   } finally {
 lock.lock();
 try {
   kick(); // allow the handler for notificationSockets[0] to read a 
 byte
   for (Entry entry : entries.values()) {
 // We do not remove from entries as we iterate, because that can
 // cause a ConcurrentModificationException.
 sendCallback(close, entries, fdSet, entry.getDomainSocket().fd);
   }
   entries.clear();
   fdSet.close();
 } finally {
   lock.unlock();
 }
   }
 {code}
 The exception causes {{watcherThread}} to skip the calls to 
 {{entries.clear()}} and {{fdSet.close()}}.
 {code}
 2015-04-02 11:48:09,941 [DataXceiver for client 
 unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
 DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
 e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
 2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
 Thread[Thread-14,5,main] terminating on unexpected exception
 java.lang.IllegalStateException: failed to remove 
 b845649551b6b1eab5c17f630e42489d
 at 
 com.google.common.base.Preconditions.checkState(Preconditions.java:145)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.removeShm(ShortCircuitRegistry.java:119)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry$RegisteredShm.handle(ShortCircuitRegistry.java:102)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.sendCallback(DomainSocketWatcher.java:402)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.access$1100(DomainSocketWatcher.java:52)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:522)
 at java.lang.Thread.run(Thread.java:722)
 {code}
 Please note that this is not a duplicate of HADOOP-11333, HADOOP-11604, or 
 HADOOP-10404. The cluster installation is running code with all of these 
 fixes.





[jira] [Created] (HADOOP-11803) Extend ECBlock allowing to customize how to read chunks from block

2015-04-03 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11803:
--

 Summary: Extend ECBlock allowing to customize how to read chunks 
from block
 Key: HADOOP-11803
 URL: https://issues.apache.org/jira/browse/HADOOP-11803
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


As discussed in HDFS-7715 with [~jack_liuquan] and [~rashmikv], we may need to 
extend {{ECBlock}} in the erasure coder layer, allowing erasure codes like 
Hitchhiker to customize how chunks are read from a block during a coding 
process.





[jira] [Updated] (HADOOP-11656) Classpath isolation for downstream clients

2015-04-03 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11656:
-
Attachment: HADOOP-11656_proposal.md

Attaching a more detailed proposal that can be included in branch-2 without 
being a breaking change. ([you can see the markdown rendered in this github 
gist|https://gist.github.com/busbey/4401f7b92e005e798242])

I'll make a subtask to start doing a POC on the client-side bit with HBase as a 
representative downstream client.

Once we iterate on the proposal enough that folks are convinced it's worth me 
moving forward, I'll remove the incompatible change flag until I know there's 
something that will break.

 Classpath isolation for downstream clients
 --

 Key: HADOOP-11656
 URL: https://issues.apache.org/jira/browse/HADOOP-11656
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Sean Busbey
Assignee: Sean Busbey
  Labels: classloading, classpath, dependencies, scripts, shell
 Attachments: HADOOP-11656_proposal.md


 Currently, Hadoop exposes downstream clients to a variety of third party 
 libraries. As our code base grows and matures we increase the set of 
 libraries we rely on. At the same time, as our user base grows we increase 
 the likelihood that some downstream project will run into a conflict while 
 attempting to use a different version of some library we depend on. This has 
 already happened with i.e. Guava several times for HBase, Accumulo, and Spark 
 (and I'm sure others).
 While YARN-286 and MAPREDUCE-1700 provided an initial effort, they default to 
 off and they don't do anything to help dependency conflicts on the driver 
 side or for folks talking to HDFS directly. This should serve as an umbrella 
 for changes needed to do things thoroughly on the next major version.
 We should ensure that downstream clients
 1) can depend on a client artifact for each of HDFS, YARN, and MapReduce that 
 doesn't pull in any third party dependencies
 2) only see our public API classes (or as close to this as feasible) when 
 executing user provided code, whether client side in a launcher/driver or on 
 the cluster in a container or within MR.
 This provides us with a double benefit: users get less grief when they want 
 to run substantially ahead or behind the versions we need and the project is 
 freer to change our own dependency versions because they'll no longer be in 
 our compatibility promises.
 Project specific task jiras to follow after I get some justifying use cases 
 written in the comments.





[jira] [Updated] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-04-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11791:
--
Component/s: documentation

 Update src/site/markdown/releases to include old versions of Hadoop
 ---

 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build, documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11791.001.patch


 With the commit of HADOOP-11731, we need to include the new format of release 
 information in trunk.  This JIRA is about including those old versions in the 
 tree.





[jira] [Updated] (HADOOP-11801) Update BUILDING.txt

2015-04-03 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-11801:
--
Component/s: (was: build)
 documentation

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Gabor Liptak
Priority: Minor
 Attachments: HADOOP-11801.patch


 ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394873#comment-14394873
 ] 

Hudson commented on HADOOP-9805:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #2102 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2102/])
HADOOP-9805. Refactor RawLocalFileSystem#rename for improved testability. 
Contributed by Jean-Pierre Matsumoto. (cnauroth: rev 
5763b173d34dcf7372520076f00b576f493662cd)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/rawlocal/TestRawlocalContractRename.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


 Refactor RawLocalFileSystem#rename for improved testability.
 

 Key: HADOOP-9805
 URL: https://issues.apache.org/jira/browse/HADOOP-9805
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Jean-Pierre Matsumoto
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-9805.001.patch, HADOOP-9805.002.patch, 
 HADOOP-9805.003.patch


 {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename 
 behavior on platforms where {{java.io.File#renameTo}} fails.  The method 
 returns early if {{java.io.File#renameTo}} succeeds, so test runs may not 
 cover the fallback logic depending on the platform.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11797) releasedocmaker.py needs to put ASF headers on output

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394872#comment-14394872
 ] 

Hudson commented on HADOOP-11797:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2102 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2102/])
HADOOP-11797. releasedocmaker.py needs to put ASF headers on output (aw) (aw: 
rev 8d3c0f601d549a22648050bcc9a0e4acf37edc81)
* dev-support/releasedocmaker.py
* hadoop-common-project/hadoop-common/CHANGES.txt


 releasedocmaker.py needs to put ASF headers on output
 -

 Key: HADOOP-11797
 URL: https://issues.apache.org/jira/browse/HADOOP-11797
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11797.000.patch


 ... otherwise mvn rat check fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11802) DomainSocketWatcher#watcherThread encounters IllegalStateException in finally block when calling sendCallback

2015-04-03 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394675#comment-14394675
 ] 

Eric Payne commented on HADOOP-11802:
-

The place in {{sendCallback}} where the exception is encountered is
{code}
if (entry.getHandler().handle(sock)) {
{code}

Once the {{IllegalStateException}} occurs, I am seeing 4069 datanode threads 
getting stuck in {{DomainSocketWatcher#add}} when {{DataXceiver}} is trying to 
request a new short circuit read. This is similar to the symptoms seen in 
HADOOP-11333, but, as I mentioned above, the cluster is already running with 
that fix.

Here is the stack trace from the stuck threads, for reference:
{noformat}
DataXceiver for client unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operat
ion #1] daemon prio=10 tid=0x7fcbbcae1000 nid=0x498a waiting on condition [
0x7fcb61132000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0xd06c3a78 (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
at 
org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
at java.lang.Thread.run(Thread.java:722)
{noformat}

 DomainSocketWatcher#watcherThread encounters IllegalStateException in finally 
 block when calling sendCallback
 -

 Key: HADOOP-11802
 URL: https://issues.apache.org/jira/browse/HADOOP-11802
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Eric Payne
Assignee: Eric Payne

 In the main finally block of the {{DomainSocketWatcher#watcherThread}}, the 
 call to {{sendCallback}} can encounter an {{IllegalStateException}} and 
 leave some cleanup tasks undone.
 {code}
   } finally {
 lock.lock();
 try {
   kick(); // allow the handler for notificationSockets[0] to read a 
 byte
   for (Entry entry : entries.values()) {
 // We do not remove from entries as we iterate, because that can
 // cause a ConcurrentModificationException.
 sendCallback(close, entries, fdSet, entry.getDomainSocket().fd);
   }
   entries.clear();
   fdSet.close();
 } finally {
   lock.unlock();
 }
   }
 {code}
 The exception causes {{watcherThread}} to skip the calls to 
 {{entries.clear()}} and {{fdSet.close()}}.
 {code}
 2015-04-02 11:48:09,941 [DataXceiver for client 
 unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
 DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
 e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
 2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
 Thread[Thread-14,5,main] terminating on unexpected exception
 java.lang.IllegalStateException: failed to remove 
 b845649551b6b1eab5c17f630e42489d
 at 
 com.google.common.base.Preconditions.checkState(Preconditions.java:145)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.removeShm(ShortCircuitRegistry.java:119)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry$RegisteredShm.handle(ShortCircuitRegistry.java:102)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.sendCallback(DomainSocketWatcher.java:402)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.access$1100(DomainSocketWatcher.java:52)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:522)
 at java.lang.Thread.run(Thread.java:722)
 {code}
 Please note that this is not a duplicate of HADOOP-11333, HADOOP-11604, or 
 HADOOP-10404. The cluster installation is running code with all of these 
 fixes.
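The failure mode above can be reproduced in miniature: one per-entry callback throwing inside the loop aborts the whole finally block, so the {{entries.clear()}} and {{fdSet.close()}} calls never run. A minimal, self-contained sketch of a resilient variant (the class, method, and entry values here are hypothetical, not taken from the actual patch) catches per-entry failures so the trailing cleanup is always reached:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ResilientCleanup {
    // Hypothetical stand-in for the per-entry sendCallback; entry 2 fails
    // the way ShortCircuitRegistry#removeShm does in the stack trace above.
    static void sendCallback(int fd) {
        if (fd == 2) {
            throw new IllegalStateException("failed to remove " + fd);
        }
    }

    // Catching each entry's failure means entries.clear() (and, in the real
    // code, fdSet.close()) is always reached; returns the failure count.
    static int cleanup(List<Integer> entries) {
        int failures = 0;
        for (int fd : entries) {
            try {
                sendCallback(fd);
            } catch (IllegalStateException e) {
                failures++; // log and continue rather than abort the cleanup
            }
        }
        entries.clear(); // reached even though one callback threw
        return failures;
    }

    public static void main(String[] args) {
        List<Integer> entries = new ArrayList<>(Arrays.asList(1, 2, 3));
        int failures = cleanup(entries);
        System.out.println("failures=" + failures + " remaining=" + entries.size());
    }
}
```

With the unguarded loop, the same input would leave two entries uncleaned; here the one failing entry is counted and the collection is still emptied.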



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10924) LocalDistributedCacheManager for concurrent sqoop processes fails to create unique directories

2015-04-03 Thread William Watson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394882#comment-14394882
 ] 

William Watson commented on HADOOP-10924:
-

Awesome, thanks for the clarification. I'll finish this ASAP.

 LocalDistributedCacheManager for concurrent sqoop processes fails to create 
 unique directories
 --

 Key: HADOOP-10924
 URL: https://issues.apache.org/jira/browse/HADOOP-10924
 Project: Hadoop Common
  Issue Type: Bug
Reporter: William Watson
Assignee: William Watson
 Attachments: HADOOP-10924.02.patch, 
 HADOOP-10924.03.jobid-plus-uuid.patch


 Kicking off many sqoop processes in different threads results in:
 {code}
 2014-08-01 13:47:24 -0400:  INFO - 14/08/01 13:47:22 ERROR tool.ImportTool: 
 Encountered IOException running import job: java.io.IOException: 
 java.util.concurrent.ExecutionException: java.io.IOException: Rename cannot 
 overwrite non empty destination directory 
 /tmp/hadoop-hadoop/mapred/local/1406915233073
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:149)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.init(LocalJobRunner.java:163)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
 2014-08-01 13:47:24 -0400:  INFO -at 
 java.security.AccessController.doPrivileged(Native Method)
 2014-08-01 13:47:24 -0400:  INFO -at 
 javax.security.auth.Subject.doAs(Subject.java:415)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.run(Sqoop.java:145)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
 2014-08-01 13:47:24 -0400:  INFO -at 
 org.apache.sqoop.Sqoop.main(Sqoop.java:238)
 {code}
 This happens if two processes are kicked off in the same second. The issue is the 
 following lines of code in the {{org.apache.hadoop.mapred.LocalDistributedCacheManager}} class: 
 {code}
 // Generating unique numbers for FSDownload.
 AtomicLong uniqueNumberGenerator =
new AtomicLong(System.currentTimeMillis());
 {code}
 and 
 {code}
 Long.toString(uniqueNumberGenerator.incrementAndGet())),
 {code}
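The race can be demonstrated in isolation: two processes seeding a counter from {{System.currentTimeMillis()}} in the same millisecond generate the same first directory name. The sketch below is hypothetical (class and method names are illustrative, not from the attached patch), and the UUID-based variant only hints at the direction the "jobid-plus-uuid" patch name suggests:

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class UniqueDirDemo {
    // Mimics LocalDistributedCacheManager: a counter seeded with the
    // process start time, formatted into a directory name.
    static String nextName(AtomicLong gen) {
        return Long.toString(gen.incrementAndGet());
    }

    // True when two counters seeded from the same timestamp (two processes
    // started in the same millisecond) produce identical first names.
    static boolean collides(long seedA, long seedB) {
        return nextName(new AtomicLong(seedA)).equals(nextName(new AtomicLong(seedB)));
    }

    // One possible fix: mix in a per-process random component so names are
    // unique regardless of start time.
    static String uniqueName(AtomicLong gen) {
        return UUID.randomUUID() + "-" + gen.incrementAndGet();
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        System.out.println("same-millisecond collision: " + collides(now, now));
        System.out.println("uuid-based name: " + uniqueName(new AtomicLong(now)));
    }
}
```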



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-04-03 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-11740:
---
Attachment: HADOOP-11740-002.patch

Thanks Kai for the review!

I read the test classes again and figured out more about the structure. Let me 
know if it looks OK now.

Regarding Javadoc, I believe we shouldn't have empty statements when merging to 
trunk. If a parameter or return value is self-descriptive, I think it's better 
not to add Javadoc than to add an empty one. But let's discuss that 
separately since it's not in the scope of this JIRA.

 Combine erasure encoder and decoder interfaces
 --

 Key: HADOOP-11740
 URL: https://issues.apache.org/jira/browse/HADOOP-11740
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HADOOP-11740-000.patch, HADOOP-11740-001.patch, 
 HADOOP-11740-002.patch


 Rationale [discussed | 
 https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
  under HDFS-7337.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-04-03 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395172#comment-14395172
 ] 

Zhe Zhang commented on HADOOP-11740:


Thanks Kai! I just committed the patch.

 Combine erasure encoder and decoder interfaces
 --

 Key: HADOOP-11740
 URL: https://issues.apache.org/jira/browse/HADOOP-11740
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HADOOP-11740-000.patch, HADOOP-11740-001.patch, 
 HADOOP-11740-002.patch


 Rationale [discussed | 
 https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
  under HDFS-7337.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-04-03 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HADOOP-11740.

   Resolution: Fixed
Fix Version/s: HDFS-7285

 Combine erasure encoder and decoder interfaces
 --

 Key: HADOOP-11740
 URL: https://issues.apache.org/jira/browse/HADOOP-11740
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HADOOP-11740-000.patch, HADOOP-11740-001.patch, 
 HADOOP-11740-002.patch


 Rationale [discussed | 
 https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
  under HDFS-7337.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11802) DomainSocketWatcher#watcherThread can encounter IllegalStateException in finally block when calling sendCallback

2015-04-03 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HADOOP-11802:

Summary: DomainSocketWatcher#watcherThread can encounter 
IllegalStateException in finally block when calling sendCallback  (was: 
DomainSocketWatcher#watcherThread encounters IllegalStateException in finally 
block when calling sendCallback)

 DomainSocketWatcher#watcherThread can encounter IllegalStateException in 
 finally block when calling sendCallback
 

 Key: HADOOP-11802
 URL: https://issues.apache.org/jira/browse/HADOOP-11802
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Eric Payne
Assignee: Eric Payne

 In the main finally block of the {{DomainSocketWatcher#watcherThread}}, the 
 call to {{sendCallback}} can encounter an {{IllegalStateException}}, and 
 leave some cleanup tasks undone.
 {code}
   } finally {
 lock.lock();
 try {
   kick(); // allow the handler for notificationSockets[0] to read a 
 byte
   for (Entry entry : entries.values()) {
 // We do not remove from entries as we iterate, because that can
 // cause a ConcurrentModificationException.
 sendCallback(close, entries, fdSet, entry.getDomainSocket().fd);
   }
   entries.clear();
   fdSet.close();
 } finally {
   lock.unlock();
 }
   }
 {code}
 The exception causes {{watcherThread}} to skip the calls to 
 {{entries.clear()}} and {{fdSet.close()}}.
 {code}
 2015-04-02 11:48:09,941 [DataXceiver for client 
 unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
 DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
 e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
 2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
 Thread[Thread-14,5,main] terminating on unexpected exception
 java.lang.IllegalStateException: failed to remove 
 b845649551b6b1eab5c17f630e42489d
 at 
 com.google.common.base.Preconditions.checkState(Preconditions.java:145)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.removeShm(ShortCircuitRegistry.java:119)
 at 
 org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry$RegisteredShm.handle(ShortCircuitRegistry.java:102)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.sendCallback(DomainSocketWatcher.java:402)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher.access$1100(DomainSocketWatcher.java:52)
 at 
 org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:522)
 at java.lang.Thread.run(Thread.java:722)
 {code}
 Please note that this is not a duplicate of HADOOP-11333, HADOOP-11604, or 
 HADOOP-10404. The cluster installation is running code with all of these 
 fixes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11802) DomainSocketWatcher#watcherThread encounters IllegalStateException in finally block when calling sendCallback

2015-04-03 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HADOOP-11802:

Description: 
In the main finally block of the {{DomainSocketWatcher#watcherThread}}, the 
call to {{sendCallback}} can encounter an {{IllegalStateException}}, and leave 
some cleanup tasks undone.

{code}
  } finally {
lock.lock();
try {
  kick(); // allow the handler for notificationSockets[0] to read a byte
  for (Entry entry : entries.values()) {
// We do not remove from entries as we iterate, because that can
// cause a ConcurrentModificationException.
sendCallback(close, entries, fdSet, entry.getDomainSocket().fd);
  }
  entries.clear();
  fdSet.close();
} finally {
  lock.unlock();
}
  }
{code}

The exception causes {{watcherThread}} to skip the calls to {{entries.clear()}} 
and {{fdSet.close()}}.

{code}
2015-04-02 11:48:09,941 [DataXceiver for client 
unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
Thread[Thread-14,5,main] terminating on unexpected exception
java.lang.IllegalStateException: failed to remove 
b845649551b6b1eab5c17f630e42489d
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at 
org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.removeShm(ShortCircuitRegistry.java:119)
at 
org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry$RegisteredShm.handle(ShortCircuitRegistry.java:102)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher.sendCallback(DomainSocketWatcher.java:402)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher.access$1100(DomainSocketWatcher.java:52)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:522)
at java.lang.Thread.run(Thread.java:722)
{code}

Please note that this is not a duplicate of HADOOP-11333, HADOOP-11604, or 
HADOOP-10404. The cluster installation is running code with all of these fixes.

  was:
In the main finally block of the {{DomainSocketWatcher#watcherThread}}, the 
call to {{sendCallback}} can encountering an {{IllegalStateException}}, and 
leave some cleanup tasks undone.

{code}
  } finally {
lock.lock();
try {
  kick(); // allow the handler for notificationSockets[0] to read a byte
  for (Entry entry : entries.values()) {
// We do not remove from entries as we iterate, because that can
// cause a ConcurrentModificationException.
sendCallback(close, entries, fdSet, entry.getDomainSocket().fd);
  }
  entries.clear();
  fdSet.close();
} finally {
  lock.unlock();
}
  }
{code}

The exception causes {{watcherThread}} to skip the calls to {{entries.clear()}} 
and {{fdSet.close()}}.

{code}
2015-04-02 11:48:09,941 [DataXceiver for client 
unix:/home/gs/var/run/hdfs/dn_socket [Waiting for operation #1]] INFO 
DataNode.clienttrace: cliID: DFSClient_NONMAPREDUCE_-807148576_1, src: 
127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_SHM, shmId: n/a, srvID: 
e6b6cdd7-1bf8-415f-a412-32d8493554df, success: false
2015-04-02 11:48:09,941 [Thread-14] ERROR unix.DomainSocketWatcher: 
Thread[Thread-14,5,main] terminating on unexpected exception
java.lang.IllegalStateException: failed to remove 
b845649551b6b1eab5c17f630e42489d
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at 
org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.removeShm(ShortCircuitRegistry.java:119)
at 
org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry$RegisteredShm.handle(ShortCircuitRegistry.java:102)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher.sendCallback(DomainSocketWatcher.java:402)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher.access$1100(DomainSocketWatcher.java:52)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:522)
at java.lang.Thread.run(Thread.java:722)
{code}

Please note that this is not a duplicate of HADOOP-11333, HADOOP-11604, or 
HADOOP-10404. The cluster installation is running code with all of these fixes.


 DomainSocketWatcher#watcherThread encounters IllegalStateException in finally 
 block when calling sendCallback
 -

 Key: HADOOP-11802
 URL: https://issues.apache.org/jira/browse/HADOOP-11802
 Project: Hadoop Common
 

[jira] [Commented] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-04-03 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395100#comment-14395100
 ] 

Kai Zheng commented on HADOOP-11740:


Thanks Zhe for the update. The new patch looks great. +1
For the Javadoc question, let's find chances to discuss it separately.

 Combine erasure encoder and decoder interfaces
 --

 Key: HADOOP-11740
 URL: https://issues.apache.org/jira/browse/HADOOP-11740
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HADOOP-11740-000.patch, HADOOP-11740-001.patch, 
 HADOOP-11740-002.patch


 Rationale [discussed | 
 https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
  under HDFS-7337.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2015-04-03 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-11804:


 Summary: POC Hadoop Client w/o transitive dependencies
 Key: HADOOP-11804
 URL: https://issues.apache.org/jira/browse/HADOOP-11804
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Sean Busbey
Assignee: Sean Busbey


make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
talk with a Hadoop cluster without seeing any of the implementation 
dependencies.

see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11805) Better to rename some raw erasure coders

2015-04-03 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11805:
--

 Summary: Better to rename some raw erasure coders
 Key: HADOOP-11805
 URL: https://issues.apache.org/jira/browse/HADOOP-11805
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


While working on more coders, it seemed better to rename some existing raw 
coders for consistency and to give them more meaningful names. As a result, we may have:
XORRawErasureCoder, in Java
NativeXORRawErasureCoder, in native
RSRawErasureCoder, in Java
NativeRSRawErasureCoder, in native and using ISA-L



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2015-04-03 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11804 started by Sean Busbey.

 POC Hadoop Client w/o transitive dependencies
 -

 Key: HADOOP-11804
 URL: https://issues.apache.org/jira/browse/HADOOP-11804
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Sean Busbey
Assignee: Sean Busbey

 make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
 talk with a Hadoop cluster without seeing any of the implementation 
 dependencies.
 see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-03 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395058#comment-14395058
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-11731:
--

Again, I do not oppose using the new tool.  However, we do need a transition 
period to see if it indeed works well.

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11785) Reduce number of listStatus operation in distcp buildListing()

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395094#comment-14395094
 ] 

Hudson commented on HADOOP-11785:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7508 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7508/])
HADOOP-11785. Reduce the number of listStatus operation in distcp buildListing 
(Zoran Dimitrijevic via Colin P. McCabe) (cmccabe: rev 
932730df7d62077f7356464ad27f69469965d77a)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Reduce number of listStatus operation in distcp buildListing()
 --

 Key: HADOOP-11785
 URL: https://issues.apache.org/jira/browse/HADOOP-11785
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 3.0.0
Reporter: Zoran Dimitrijevic
Assignee: Zoran Dimitrijevic
Priority: Minor
 Fix For: 2.8.0

 Attachments: distcp-liststatus.patch, distcp-liststatus2.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Distcp was taking a long time in copyListing.buildListing() for large source 
 trees (I was using a source of 1.5M files in a tree of about 50K directories). 
 For input on S3, buildListing was taking more than one hour. I've noticed a 
 performance bug in the current code: it does listStatus twice for each 
 directory, which doubles the number of RPCs in some cases (if most directories 
 do not contain 1000 files).
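The pattern behind the fix is simple to sketch: call listStatus once per directory and reuse the result for both the emptiness check and the traversal, halving the RPC count. The sketch below is hypothetical and self-contained (the lister and counter are stand-ins for FileSystem RPCs, not the actual SimpleCopyListing code):

```java
import java.util.Arrays;
import java.util.List;

public class ListOnceDemo {
    // Stand-in for FileSystem#listStatus; the counter models RPCs issued.
    static int rpcs = 0;

    static List<String> listStatus(String dir) {
        rpcs++;
        return Arrays.asList(dir + "/a", dir + "/b");
    }

    // The pattern the report describes: one listing to probe the directory,
    // a second listing to actually iterate over it -- two RPCs per directory.
    static int traverseTwice(String dir) {
        int children = 0;
        if (!listStatus(dir).isEmpty()) {
            for (String child : listStatus(dir)) {
                children++;
            }
        }
        return children;
    }

    // The fix: list once and reuse the result for both check and loop.
    static int traverseOnce(String dir) {
        int children = 0;
        List<String> listing = listStatus(dir);
        if (!listing.isEmpty()) {
            for (String child : listing) {
                children++;
            }
        }
        return children;
    }

    public static void main(String[] args) {
        rpcs = 0;
        traverseTwice("/src");
        System.out.println("double-listing RPCs: " + rpcs);
        rpcs = 0;
        traverseOnce("/src");
        System.out.println("single-listing RPCs: " + rpcs);
    }
}
```

Over ~50K directories, eliminating the redundant call removes ~50K RPCs, which matters most on high-latency stores like S3.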
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11789) NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec

2015-04-03 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394990#comment-14394990
 ] 

Colin Patrick McCabe commented on HADOOP-11789:
---

 The test is designed to catch cases where the openssl implementation is not 
loaded.  Perhaps we can avoid running the test when pnative is not set, but we 
should not pass the test when the openssl library can't be loaded.

 NPE in TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
 -

 Key: HADOOP-11789
 URL: https://issues.apache.org/jira/browse/HADOOP-11789
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.8.0
 Environment: ASF Jenkins
Reporter: Steve Loughran
Assignee: Yi Liu
 Attachments: HADOOP-11789.001.patch


 NPE surfacing in {{TestCryptoStreamsWithOpensslAesCtrCryptoCodec}} on  Jenkins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11785) Reduce number of listStatus operation in distcp buildListing()

2015-04-03 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11785:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

committed to 2.8

 Reduce number of listStatus operation in distcp buildListing()
 --

 Key: HADOOP-11785
 URL: https://issues.apache.org/jira/browse/HADOOP-11785
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 3.0.0
Reporter: Zoran Dimitrijevic
Assignee: Zoran Dimitrijevic
Priority: Minor
 Fix For: 2.8.0

 Attachments: distcp-liststatus.patch, distcp-liststatus2.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Distcp was taking a long time in copyListing.buildListing() for large source 
 trees (I was using a source of 1.5M files in a tree of about 50K directories). 
 For input on S3, buildListing was taking more than one hour. I've noticed a 
 performance bug in the current code: it does listStatus twice for each 
 directory, which doubles the number of RPCs in some cases (if most directories 
 do not contain 1000 files).
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-04-03 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395467#comment-14395467
 ] 

Brahma Reddy Battula commented on HADOOP-11627:
---

thanks a lot for review.. Updated the patch.. Kindly review

 Remove io.native.lib.available from trunk
 -

 Key: HADOOP-11627
 URL: https://issues.apache.org/jira/browse/HADOOP-11627
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
 HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
 HADOOP-11627-007.patch, HADOOP-11627.patch


 According to the discussion in HADOOP-8642, we should remove 
 {{io.native.lib.available}} from trunk, and always use native libraries if 
 they exist.





[jira] [Created] (HADOOP-11806) Test issue for JIRA automation scripts

2015-04-03 Thread Raymie Stata (JIRA)
Raymie Stata created HADOOP-11806:
-

 Summary: Test issue for JIRA automation scripts
 Key: HADOOP-11806
 URL: https://issues.apache.org/jira/browse/HADOOP-11806
 Project: Hadoop Common
  Issue Type: Test
Reporter: Raymie Stata
Assignee: Raymie Stata
Priority: Trivial


I'm writing some scripts to automate some JIRA clean-up activities.  I've 
created this issue for testing these scripts.  Please ignore...





[jira] [Commented] (HADOOP-11805) Better to rename some raw erasure coders

2015-04-03 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14395225#comment-14395225
 ] 

Kai Zheng commented on HADOOP-11805:


Hi [~zhz],

Would you help review this? I hope the renaming addresses your earlier 
concerns; some of it simply follows what you suggested previously. Thanks.

 Better to rename some raw erasure coders
 

 Key: HADOOP-11805
 URL: https://issues.apache.org/jira/browse/HADOOP-11805
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11805-v1.patch


 While working on more coders, we found it better to rename some existing raw 
 coders for consistency and clearer meaning. As a result, we may have:
 XORRawErasureCoder, in Java
 NativeXORRawErasureCoder, in native
 RSRawErasureCoder, in Java
 NativeRSRawErasureCoder, in native and using ISA-L





[jira] [Updated] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-04-03 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11627:
--
Attachment: HADOOP-11627-007.patch

 Remove io.native.lib.available from trunk
 -

 Key: HADOOP-11627
 URL: https://issues.apache.org/jira/browse/HADOOP-11627
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
 HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627-006.patch, 
 HADOOP-11627-007.patch, HADOOP-11627.patch


 According to the discussion in HADOOP-8642, we should remove 
 {{io.native.lib.available}} from trunk, and always use native libraries if 
 they exist.





[jira] [Updated] (HADOOP-11805) Better to rename some raw erasure coders

2015-04-03 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11805:
---
Attachment: HADOOP-11805-v1.patch

Uploaded a patch performing some renaming.

 Better to rename some raw erasure coders
 

 Key: HADOOP-11805
 URL: https://issues.apache.org/jira/browse/HADOOP-11805
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11805-v1.patch


 While working on more coders, we found it better to rename some existing raw 
 coders for consistency and clearer meaning. As a result, we may have:
 XORRawErasureCoder, in Java
 NativeXORRawErasureCoder, in native
 RSRawErasureCoder, in Java
 NativeRSRawErasureCoder, in native and using ISA-L





[jira] [Updated] (HADOOP-11806) Test issue for JIRA automation scripts

2015-04-03 Thread Raymie Stata (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymie Stata updated HADOOP-11806:
--
Status: Patch Available  (was: Open)

 Test issue for JIRA automation scripts
 --

 Key: HADOOP-11806
 URL: https://issues.apache.org/jira/browse/HADOOP-11806
 Project: Hadoop Common
  Issue Type: Test
Reporter: Raymie Stata
Assignee: Raymie Stata
Priority: Trivial

 I'm writing some scripts to automate some JIRA clean-up activities.  I've 
 created this issue for testing these scripts.  Please ignore...





[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394463#comment-14394463
 ] 

Hadoop QA commented on HADOOP-11800:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12709227/HADOOP-11800.patch
  against trunk revision 72f6bd4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6058//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6058//console

This message is automatically generated.

 Clean up some test methods in TestCodec.java
 

 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Attachments: HADOOP-11800.patch


 Found two issues when reviewing the patches in HADOOP-11627.
 1. There is no {{@Test}} annotation, so the test is not executed.
 {code}
   public void testCodecPoolAndGzipDecompressor() {
 {code}
 2. The method should be private because it is called from other tests.
 {code}
   public void testGzipCodecWrite(boolean useNative) throws IOException {
 {code}
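To illustrate why the missing annotation matters, here is a self-contained sketch of annotation-based test discovery. It uses a local `@Test` stand-in rather than real JUnit, so the names below are illustrative only: a public method without the annotation is silently skipped by the runner.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Sketch only: a local @Test annotation and a minimal "runner" that, like
// JUnit, collects only the methods carrying the annotation.
class TestDiscoverySketch {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Test {}

    static class Suite {
        @Test public void annotated() {}   // discovered and run
        public void notAnnotated() {}      // silently skipped: no @Test
    }

    static List<String> discovered() {
        List<String> names = new ArrayList<>();
        for (Method m : Suite.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                names.add(m.getName());
            }
        }
        return names;
    }
}
```

This is also why demoting the parameterized helper to private is harmless: the runner never discovers it anyway, and private visibility documents that it is only called from other tests.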





[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Open  (was: Patch Available)

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.
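The redirect decision described above can be sketched as follows. The class, method, and cookie names here are assumptions for illustration, not the handler's actual API: redirect to the external SSO service only when the request carries neither the hadoop.auth cookie nor a JWT.

```java
import java.util.Map;

// Hypothetical sketch of the decision only; cookie names are assumptions.
class WebSsoRedirectSketch {
    static final String AUTH_COOKIE = "hadoop.auth";
    static final String JWT_COOKIE = "hadoop-jwt";

    // Returns true when the request has no credential at all, i.e. the
    // handler should redirect the browser to the external SSO service.
    static boolean shouldRedirect(Map<String, String> cookies) {
        boolean authenticated = cookies.containsKey(AUTH_COOKIE)
                || cookies.containsKey(JWT_COOKIE);
        return !authenticated;
    }
}
```

In the real handler the JWT would additionally be signature-verified and checked for expiration before being accepted, per the benefits listed above.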





[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Attachment: HADOOP-11717-8.patch

Fixed the prefix issue with the previous patch. 
It should apply fine now.

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.





[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Patch Available  (was: Open)

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.





[jira] [Created] (HADOOP-11801) Update BUILDING.txt

2015-04-03 Thread Gabor Liptak (JIRA)
Gabor Liptak created HADOOP-11801:
-

 Summary: Update BUILDING.txt
 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Gabor Liptak
Priority: Minor


ProtocolBuffer is packaged in Ubuntu





[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394408#comment-14394408
 ] 

Brahma Reddy Battula commented on HADOOP-11800:
---

[~ajisakaa] Thanks for reporting this JIRA. I've attached the patch; kindly 
review.

 Clean up some test methods in TestCodec.java
 

 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Attachments: HADOOP-11800.patch


 Found two issues when reviewing the patches in HADOOP-11627.
 1. There is no {{@Test}} annotation, so the test is not executed.
 {code}
   public void testCodecPoolAndGzipDecompressor() {
 {code}
 2. The method should be private because it is called from other tests.
 {code}
   public void testGzipCodecWrite(boolean useNative) throws IOException {
 {code}





[jira] [Updated] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11800:
--
Attachment: HADOOP-11800.patch

 Clean up some test methods in TestCodec.java
 

 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Attachments: HADOOP-11800.patch


 Found two issues when reviewing the patches in HADOOP-11627.
 1. There is no {{@Test}} annotation, so the test is not executed.
 {code}
   public void testCodecPoolAndGzipDecompressor() {
 {code}
 2. The method should be private because it is called from other tests.
 {code}
   public void testGzipCodecWrite(boolean useNative) throws IOException {
 {code}





[jira] [Updated] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11800:
--
Status: Patch Available  (was: Open)

 Clean up some test methods in TestCodec.java
 

 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Attachments: HADOOP-11800.patch


 Found two issues when reviewing the patches in HADOOP-11627.
 1. There is no {{@Test}} annotation, so the test is not executed.
 {code}
   public void testCodecPoolAndGzipDecompressor() {
 {code}
 2. The method should be private because it is called from other tests.
 {code}
   public void testGzipCodecWrite(boolean useNative) throws IOException {
 {code}





[jira] [Updated] (HADOOP-11758) Add options to filter out too much granular tracing spans

2015-04-03 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11758:
--
Attachment: testWriteTraceHooks-HDFS-8026.html

Thanks [~cmccabe]! That is a nice improvement. I attached a graph with the 
patch applied.

 Add options to filter out too much granular tracing spans
 -

 Key: HADOOP-11758
 URL: https://issues.apache.org/jira/browse/HADOOP-11758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tracing
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: testWriteTraceHooks-HDFS-8026.html, 
 testWriteTraceHooks.html


 In order to avoid the queue in the span receiver spilling over.
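One possible filtering policy, sketched with illustrative names (the actual patch may filter on different criteria): drop spans below a duration threshold before they reach the receiver's bounded queue, so very fine-grained spans do not flood it.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: keep only spans at or above a minimum duration.
class SpanFilterSketch {
    static List<Long> filterByDuration(List<Long> durationsMicros, long minMicros) {
        List<Long> kept = new ArrayList<>();
        for (long d : durationsMicros) {
            if (d >= minMicros) {
                kept.add(d);   // coarse enough to be worth tracing
            }
        }
        return kept;
    }
}
```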





[jira] [Updated] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11800:
---
Target Version/s: 2.8.0

 Clean up some test methods in TestCodec.java
 

 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Attachments: HADOOP-11800.patch


 Found two issues when reviewing the patches in HADOOP-11627.
 1. There is no {{@Test}} annotation, so the test is not executed.
 {code}
   public void testCodecPoolAndGzipDecompressor() {
 {code}
 2. The method should be private because it is called from other tests.
 {code}
   public void testGzipCodecWrite(boolean useNative) throws IOException {
 {code}





[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394447#comment-14394447
 ] 

Akira AJISAKA commented on HADOOP-11800:


+1 pending Jenkins. Thanks [~brahmareddy] for taking this.

 Clean up some test methods in TestCodec.java
 

 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Attachments: HADOOP-11800.patch


 Found two issues when reviewing the patches in HADOOP-11627.
 1. There is no {{@Test}} annotation, so the test is not executed.
 {code}
   public void testCodecPoolAndGzipDecompressor() {
 {code}
 2. The method should be private because it is called from other tests.
 {code}
   public void testGzipCodecWrite(boolean useNative) throws IOException {
 {code}





[jira] [Updated] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11800:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~brahmareddy] for the 
contribution.

 Clean up some test methods in TestCodec.java
 

 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-11800.patch


 Found two issues when reviewing the patches in HADOOP-11627.
 1. There is no {{@Test}} annotation, so the test is not executed.
 {code}
   public void testCodecPoolAndGzipDecompressor() {
 {code}
 2. The method should be private because it is called from other tests.
 {code}
   public void testGzipCodecWrite(boolean useNative) throws IOException {
 {code}





[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394587#comment-14394587
 ] 

Brahma Reddy Battula commented on HADOOP-11800:
---

Thanks Akira!!!

 Clean up some test methods in TestCodec.java
 

 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-11800.patch


 Found two issues when reviewing the patches in HADOOP-11627.
 1. There is no {{@Test}} annotation, so the test is not executed.
 {code}
   public void testCodecPoolAndGzipDecompressor() {
 {code}
 2. The method should be private because it is called from other tests.
 {code}
   public void testGzipCodecWrite(boolean useNative) throws IOException {
 {code}





[jira] [Commented] (HADOOP-11800) Clean up some test methods in TestCodec.java

2015-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14394586#comment-14394586
 ] 

Hudson commented on HADOOP-11800:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7504 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7504/])
HADOOP-11800. Clean up some test methods in TestCodec.java. Contributed by 
Brahma Reddy Battula. (aajisaka: rev 228ae9aaa40750cb796bbdfd69ba5646c28cd4e7)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java


 Clean up some test methods in TestCodec.java
 

 Key: HADOOP-11800
 URL: https://issues.apache.org/jira/browse/HADOOP-11800
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-11800.patch


 Found two issues when reviewing the patches in HADOOP-11627.
 1. There is no {{@Test}} annotation, so the test is not executed.
 {code}
   public void testCodecPoolAndGzipDecompressor() {
 {code}
 2. The method should be private because it is called from other tests.
 {code}
   public void testGzipCodecWrite(boolean useNative) throws IOException {
 {code}


