[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545003#comment-14545003
 ] 

Hadoop QA commented on HADOOP-11934:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 28s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 31s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  4s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 38s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  23m  0s | Tests passed in 
hadoop-common. |
| | |  59m 39s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733063/HADOOP-11934.010.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / ee7beda |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6701/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6701/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6701/console |


This message was automatically generated.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934.001.patch, HADOOP-11934.002.patch, 
 HADOOP-11934.003.patch, HADOOP-11934.004.patch, HADOOP-11934.005.patch, 
 HADOOP-11934.006.patch, HADOOP-11934.007.patch, HADOOP-11934.008.patch, 
 HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 

[jira] [Commented] (HADOOP-11970) Replace uses of ThreadLocal<Random> with ThreadLocalRandom on branches that are JDK7+

2015-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545078#comment-14545078
 ] 

Hadoop QA commented on HADOOP-11970:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 51s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 7 new or modified test files. |
| {color:green}+1{color} | javac |   7m 33s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   3m 36s | The applied patch generated  2 
new checkstyle issues (total was 851, now 847). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m 43s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  22m 15s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | yarn tests |   6m  0s | Tests passed in 
hadoop-yarn-server-nodemanager. |
| {color:green}+1{color} | hdfs tests | 168m 52s | Tests passed in hadoop-hdfs. 
|
| | | 241m 19s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733054/HADOOP-11970.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cbc01ed |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6699/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6699/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-yarn-server-nodemanager test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6699/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6699/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6699/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6699/console |


This message was automatically generated.

 Replace uses of ThreadLocal<Random> with ThreadLocalRandom on branches that 
 are JDK7+
 -

 Key: HADOOP-11970
 URL: https://issues.apache.org/jira/browse/HADOOP-11970
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sean Busbey
Assignee: Sean Busbey
 Attachments: HADOOP-11970.1.patch, HADOOP-11970.2.patch, 
 HADOOP-11970.3.patch


 ThreadLocalRandom should be used when available in place of 
 ThreadLocal<Random>. For JDK7 the difference is minimal, but JDK8 starts 
 including optimizations for ThreadLocalRandom.
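 For illustration, a minimal JDK7-compatible sketch of the replacement (mine, 
 not taken from the patch):
 {code}
 import java.util.Random;
 import java.util.concurrent.ThreadLocalRandom;

 public class RandomMigration {
   // Before: a hand-rolled per-thread Random.
   private static final ThreadLocal<Random> OLD_RANDOM =
       new ThreadLocal<Random>() {
         @Override protected Random initialValue() { return new Random(); }
       };

   static int oldStyle() { return OLD_RANDOM.get().nextInt(100); }

   // After: the JDK7+ built-in; JDK8 adds further optimizations for it.
   static int newStyle() { return ThreadLocalRandom.current().nextInt(100); }
 }
 {code}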



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11938) Fix ByteBuffer version encode/decode API of raw erasure coder

2015-05-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11938:
---
Attachment: HADOOP-11938-HDFS-7285-v4.patch

Updated the patch, addressing Yi's comment and the Jenkins output. 
[~zhz] as you noted in HADOOP-11920 you may re-sort related commits, so would 
you commit this as well if everything is ok? Thanks a lot.

 Fix ByteBuffer version encode/decode API of raw erasure coder
 -

 Key: HADOOP-11938
 URL: https://issues.apache.org/jira/browse/HADOOP-11938
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11938-HDFS-7285-v1.patch, 
 HADOOP-11938-HDFS-7285-v2.patch, HADOOP-11938-HDFS-7285-v3.patch, 
 HADOOP-11938-HDFS-7285-v4.patch, HADOOP-11938-HDFS-7285-workaround.patch


 While investigating a test failure in {{TestRecoverStripedFile}}, an issue 
 was found in the raw erasure coder, caused by an optimization in the code 
 below. It assumes the heap buffer backed by the bytes array available for 
 reading or writing always starts at zero and takes the whole space.
 {code}
   protected static byte[][] toArrays(ByteBuffer[] buffers) {
     byte[][] bytesArr = new byte[buffers.length][];
     ByteBuffer buffer;
     for (int i = 0; i < buffers.length; i++) {
       buffer = buffers[i];
       if (buffer == null) {
         bytesArr[i] = null;
         continue;
       }
       if (buffer.hasArray()) {
         bytesArr[i] = buffer.array();
       } else {
         throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
             "expecting heap buffer");
       }
     }
     return bytesArr;
   }
 {code} 
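 For comparison, a hedged sketch of offset-aware access (a hypothetical helper, 
 not the actual fix in the patch) that honors arrayOffset(), position(), and 
 remaining() instead of assuming the array starts at zero and spans the whole 
 buffer:
 {code}
 import java.nio.ByteBuffer;

 public class BufferBytes {
   // Copies exactly the readable region of a heap buffer; buffer.array() alone
   // also exposes bytes before position() and outside a slice's window.
   static byte[] readableBytes(ByteBuffer buffer) {
     byte[] out = new byte[buffer.remaining()];
     System.arraycopy(buffer.array(),
         buffer.arrayOffset() + buffer.position(), // true start of the data
         out, 0, buffer.remaining());
     return out;
   }
 }
 {code}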



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545135#comment-14545135
 ] 

Sean Busbey commented on HADOOP-11969:
--

reopened HDFS-8332, which git bisect says is the cause of the test failures 
above.

 ThreadLocal initialization in several classes is not thread safe
 

 Key: HADOOP-11969
 URL: https://issues.apache.org/jira/browse/HADOOP-11969
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: thread-safety
 Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
 HADOOP-11969.3.patch, HADOOP-11969.4.patch


 Right now, the initialization of the thread-local factories for the encoder 
 and decoder in Text is not marked final. This means they end up with a static 
 initializer that is not guaranteed to have finished running before the 
 members are visible. 
 Under heavy contention, this means during initialization some users will get 
 an NPE:
 {code}
 (2015-05-05 08:58:03.974 : solr_server_log.log) 
  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
   at org.apache.hadoop.io.Text.decode(Text.java:406)
   at org.apache.hadoop.io.Text.decode(Text.java:389)
   at org.apache.hadoop.io.Text.toString(Text.java:280)
   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
   at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1305)
   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
   at 
 org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
 ... SNIP...
 {code} 
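 A minimal sketch of the fix direction described above (marking the factory 
 final); simplified, not the actual Text.java change:
 {code}
 import java.nio.charset.CharsetDecoder;
 import java.nio.charset.StandardCharsets;

 public class SafeInit {
   // Per this issue's reasoning: a non-final static may be observed before the
   // static initializer has finished under heavy contention; final avoids that.
   private static final ThreadLocal<CharsetDecoder> DECODER_FACTORY =
       new ThreadLocal<CharsetDecoder>() {
         @Override protected CharsetDecoder initialValue() {
           return StandardCharsets.UTF_8.newDecoder();
         }
       };

   static CharsetDecoder decoder() { return DECODER_FACTORY.get(); }
 }
 {code}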



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11975) Native code needs to be built to match the 32/64 bitness of the JVM

2015-05-15 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-11975:
--

 Summary: Native code needs to be built to match the 32/64 bitness 
of the JVM
 Key: HADOOP-11975
 URL: https://issues.apache.org/jira/browse/HADOOP-11975
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: Solaris
Reporter: Alan Burlison
Assignee: Alan Burlison


When building with a 64-bit JVM on Solaris the following error occurs at the 
link stage of building the native code:

 [exec] ld: fatal: file 
/usr/jdk/instances/jdk1.8.0/jre/lib/amd64/server/libjvm.so: wrong ELF class: 
ELFCLASS64
 [exec] collect2: error: ld returned 1 exit status
 [exec] make[2]: *** [target/usr/local/lib/libhadoop.so.1.0.0] Error 1
 [exec] make[1]: *** [CMakeFiles/hadoop.dir/all] Error 2

The compilation flags in the makefiles need to state explicitly whether 32-bit 
or 64-bit code is to be generated, to match the JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11713) ViewFileSystem should support snapshot methods.

2015-05-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544980#comment-14544980
 ] 

Rakesh R commented on HADOOP-11713:
---

Thanks a lot [~cnauroth] for the helpful code review and for committing the changes!

 ViewFileSystem should support snapshot methods.
 ---

 Key: HADOOP-11713
 URL: https://issues.apache.org/jira/browse/HADOOP-11713
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.2.0
Reporter: Chris Nauroth
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HADOOP-11713-001.patch, HADOOP-11713-002.patch, 
 HDFS-5641-001.patch


 Currently, {{ViewFileSystem}} does not dispatch snapshot methods through the 
 mount table.  All snapshot methods throw {{UnsupportedOperationException}}, 
 even though the underlying mount points could be HDFS instances that support 
 snapshots.  We need to update {{ViewFileSystem}} to implement the snapshot 
 methods.
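 For illustration, a sketch of the dispatch pattern for one snapshot method, 
 modeled on how {{ViewFileSystem}} resolves paths through its mount table 
 (simplified, not the verbatim patch):
 {code}
 @Override
 public Path createSnapshot(Path path, String snapshotName) throws IOException {
   // Resolve the view path to its mount point and the remaining sub-path...
   InodeTree.ResolveResult<FileSystem> res =
       fsState.resolve(getUriPath(path), true);
   // ...then delegate to the underlying file system (e.g. HDFS).
   return res.targetFileSystem.createSnapshot(res.remainingPath, snapshotName);
 }
 {code}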



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11938) Fix ByteBuffer version encode/decode API of raw erasure coder

2015-05-15 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545154#comment-14545154
 ] 

Yi Liu commented on HADOOP-11938:
-

Looks good now; one nit, +1 after addressing. In TestRawCoderBase.java:
{code}
Assert.fail("Encoding test with bad input passed");
{code}
We should write "Encoding test with bad input should fail"; the message is 
written the opposite way. The same applies to a few other Assert.fail calls.
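For clarity, the intended shape (a sketch; {{encode}} and {{badInput}} stand in 
for the test's own names):
{code}
try {
  encode(badInput);
  Assert.fail("Encoding test with bad input should fail");
} catch (IllegalArgumentException expected) {
  // expected: bad input must be rejected
}
{code}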

Furthermore, we need to fix the Jenkins warnings (release 
audit/checkstyle/whitespace) if they are related to this patch.

 Fix ByteBuffer version encode/decode API of raw erasure coder
 -

 Key: HADOOP-11938
 URL: https://issues.apache.org/jira/browse/HADOOP-11938
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11938-HDFS-7285-v1.patch, 
 HADOOP-11938-HDFS-7285-v2.patch, HADOOP-11938-HDFS-7285-v3.patch, 
 HADOOP-11938-HDFS-7285-workaround.patch


 While investigating a test failure in {{TestRecoverStripedFile}}, an issue 
 was found in the raw erasure coder, caused by an optimization in the code 
 below. It assumes the heap buffer backed by the bytes array available for 
 reading or writing always starts at zero and takes the whole space.
 {code}
   protected static byte[][] toArrays(ByteBuffer[] buffers) {
     byte[][] bytesArr = new byte[buffers.length][];
     ByteBuffer buffer;
     for (int i = 0; i < buffers.length; i++) {
       buffer = buffers[i];
       if (buffer == null) {
         bytesArr[i] = null;
         continue;
       }
       if (buffer.hasArray()) {
         bytesArr[i] = buffer.array();
       } else {
         throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
             "expecting heap buffer");
       }
     }
     return bytesArr;
   }
 {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11974) FIONREAD is not always in the same header

2015-05-15 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-11974:
--

 Summary: FIONREAD is not always in the same header
 Key: HADOOP-11974
 URL: https://issues.apache.org/jira/browse/HADOOP-11974
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
 Environment: Solaris
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Minor


The FIONREAD macro is found in sys/ioctl.h on Linux and sys/filio.h on 
Solaris. A conditional include block is required to make sure it is looked for 
in the right place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-7824) Native IO uses wrong constants almost everywhere

2015-05-15 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison reassigned HADOOP-7824:
-

Assignee: Alan Burlison  (was: Todd Lipcon)

 Native IO uses wrong constants almost everywhere
 

 Key: HADOOP-7824
 URL: https://issues.apache.org/jira/browse/HADOOP-7824
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.20.204.0, 0.20.205.0, 1.0.3, 0.23.0, 2.0.0-alpha, 3.0.0
 Environment: Mac OS X, Linux, Solaris, Windows, ... 
Reporter: Dmytro Shteflyuk
Assignee: Alan Burlison
  Labels: hadoop
 Attachments: HADOOP-7824.patch, HADOOP-7824.patch, hadoop-7824.txt


 Constants like O_CREAT, O_EXCL, etc. have different values on OS X and many 
 other operating systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545086#comment-14545086
 ] 

Sean Busbey commented on HADOOP-11969:
--

those three tests all fail on trunk for me without this patch.

 ThreadLocal initialization in several classes is not thread safe
 

 Key: HADOOP-11969
 URL: https://issues.apache.org/jira/browse/HADOOP-11969
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: thread-safety
 Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
 HADOOP-11969.3.patch, HADOOP-11969.4.patch


 Right now, the initialization of the thread-local factories for the encoder 
 and decoder in Text is not marked final. This means they end up with a static 
 initializer that is not guaranteed to have finished running before the 
 members are visible. 
 Under heavy contention, this means during initialization some users will get 
 an NPE:
 {code}
 (2015-05-05 08:58:03.974 : solr_server_log.log) 
  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
   at org.apache.hadoop.io.Text.decode(Text.java:406)
   at org.apache.hadoop.io.Text.decode(Text.java:389)
   at org.apache.hadoop.io.Text.toString(Text.java:280)
   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
   at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1305)
   at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:78)
   at 
 org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:107)
 ... SNIP...
 {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11974) FIONREAD is not always in the same header file

2015-05-15 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-11974:
---
Summary: FIONREAD is not always in the same header file  (was: FIONREAD is 
not always in the same header)

 FIONREAD is not always in the same header file
 --

 Key: HADOOP-11974
 URL: https://issues.apache.org/jira/browse/HADOOP-11974
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
 Environment: Solaris
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Minor

 The FIONREAD macro is found in sys/ioctl.h on Linux and sys/filio.h on 
 Solaris. A conditional include block is required to make sure it is looked 
 for in the right place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11970) Replace uses of ThreadLocal<Random> with ThreadLocalRandom on branches that are JDK7+

2015-05-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545081#comment-14545081
 ] 

Sean Busbey commented on HADOOP-11970:
--

remaining checkstyle warnings are 

{quote}
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:1:
 File length is 3,251 lines (max allowed is 2,000).
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:1:
 File length is 3,842 lines (max allowed is 2,000).
{quote}

refactoring DFSClient and BlockManager to be ~half the size is (hopefully) out 
of scope?

 Replace uses of ThreadLocal<Random> with ThreadLocalRandom on branches that 
 are JDK7+
 -

 Key: HADOOP-11970
 URL: https://issues.apache.org/jira/browse/HADOOP-11970
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sean Busbey
Assignee: Sean Busbey
 Attachments: HADOOP-11970.1.patch, HADOOP-11970.2.patch, 
 HADOOP-11970.3.patch


 ThreadLocalRandom should be used when available in place of 
 ThreadLocal<Random>. For JDK7 the difference is minimal, but JDK8 starts 
 including optimizations for ThreadLocalRandom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11954) Solaris does not support RLIMIT_MEMLOCK as in Linux

2015-05-15 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison reassigned HADOOP-11954:
--

Assignee: Alan Burlison  (was: Malcolm Kavalsky)

 Solaris does not support RLIMIT_MEMLOCK as in Linux
 ---

 Key: HADOOP-11954
 URL: https://issues.apache.org/jira/browse/HADOOP-11954
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0, 2.3.0, 2.4.1, 2.6.0, 2.7.0, 2.5.2
Reporter: Malcolm Kavalsky
Assignee: Alan Burlison
   Original Estimate: 24h
  Remaining Estimate: 24h

 This affects the JNI call to NativeIO_getMemlockLimit0.
 We can just return 0, as Windows does, which also does not support this 
 feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11976) submit job with oozie to between cluster

2015-05-15 Thread sunmeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545267#comment-14545267
 ] 

sunmeng commented on HADOOP-11976:
--

Hello, I have a problem submitting a job with Oozie between clusters. I have 
read some of the Hadoop code, and I think hadoop 2.5 (CDH 5.3.2) cannot submit 
a job with Oozie between clusters. Who can help me? Thank you very much!

protected static <T> T createRMProxy(final Configuration configuration,
    final Class<T> protocol, RMProxy instance) throws IOException {
  YarnConfiguration conf = (configuration instanceof YarnConfiguration)
      ? (YarnConfiguration) configuration
      : new YarnConfiguration(configuration);
  RetryPolicy retryPolicy = createRetryPolicy(conf);
  if (HAUtil.isHAEnabled(conf)) {
    RMFailoverProxyProvider<T> provider =
        instance.createRMFailoverProxyProvider(conf, protocol);
    return (T) RetryProxy.create(protocol, provider, retryPolicy);
  } else {
    InetSocketAddress rmAddress = instance.getRMAddress(conf, protocol);
    LOG.info("Connecting to ResourceManager at " + rmAddress);
    T proxy = RMProxy.<T>getProxy(conf, protocol, rmAddress);
    return (T) RetryProxy.create(protocol, proxy, retryPolicy);
  }
}
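
For context, this factory is normally reached through a concrete subclass; a 
sketch using the public YARN client API:

YarnConfiguration conf = new YarnConfiguration();
ApplicationClientProtocol rmClient =
    ClientRMProxy.createRMProxy(conf, ApplicationClientProtocol.class);
// Whether the HA or the plain branch above runs is decided by
// HAUtil.isHAEnabled(conf).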

 submit job with oozie to between cluster
 

 Key: HADOOP-11976
 URL: https://issues.apache.org/jira/browse/HADOOP-11976
 Project: Hadoop Common
  Issue Type: Bug
Reporter: sunmeng





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11713) ViewFileSystem should support snapshot methods.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545386#comment-14545386
 ] 

Hudson commented on HADOOP-11713:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #928 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/928/])
HADOOP-11713. ViewFileSystem should support snapshot methods. Contributed by 
Rakesh R. (cnauroth: rev 09fe16f166392a99e1e54001a9112c6a4632dfc8)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFs.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java


 ViewFileSystem should support snapshot methods.
 ---

 Key: HADOOP-11713
 URL: https://issues.apache.org/jira/browse/HADOOP-11713
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.2.0
Reporter: Chris Nauroth
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HADOOP-11713-001.patch, HADOOP-11713-002.patch, 
 HDFS-5641-001.patch


 Currently, {{ViewFileSystem}} does not dispatch snapshot methods through the 
 mount table.  All snapshot methods throw {{UnsupportedOperationException}}, 
 even though the underlying mount points could be HDFS instances that support 
 snapshots.  We need to update {{ViewFileSystem}} to implement the snapshot 
 methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11960) Enable Azure-Storage Client Side logging.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545388#comment-14545388
 ] 

Hudson commented on HADOOP-11960:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #928 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/928/])
HADOOP-11960. Enable Azure-Storage Client Side logging. Contributed by 
Dushyanth. (cnauroth: rev cb8e69a80cecb95abdfc93a787bea0bedef275ed)
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemClientLogging.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Enable Azure-Storage Client Side logging.
 -

 Key: HADOOP-11960
 URL: https://issues.apache.org/jira/browse/HADOOP-11960
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Dushyanth
Assignee: Dushyanth
 Fix For: 2.8.0

 Attachments: 
 0001-HADOOP-945-Enabling-WASB-Azure-Storage-Client-side-l.patch, 
 HADOOP-11960.002.patch


 AzureStorageExceptions currently are logged as part of the WASB code, which 
 often is not very informative. The Azure Storage SDK supports client-side 
 logging that can be enabled to log relevant information about requests made 
 from the Storage client. 
 This JIRA is created to enable Azure Storage client-side logging at the job 
 submission level. Users should be able to configure client-side logging on a 
 per-job basis.
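 A sketch of per-job enablement; the boolean key name below follows this 
 patch's test and should be treated as an assumption, not a documented 
 contract:
 {code}
 Configuration conf = new Configuration();
 conf.setBoolean("fs.azure.storage.client.logging", true);
 Job job = Job.getInstance(conf, "wasb-client-logging-demo");
 {code}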



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11960) Enable Azure-Storage Client Side logging.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545411#comment-14545411
 ] 

Hudson commented on HADOOP-11960:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #197 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/197/])
HADOOP-11960. Enable Azure-Storage Client Side logging. Contributed by 
Dushyanth. (cnauroth: rev cb8e69a80cecb95abdfc93a787bea0bedef275ed)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemClientLogging.java


 Enable Azure-Storage Client Side logging.
 -

 Key: HADOOP-11960
 URL: https://issues.apache.org/jira/browse/HADOOP-11960
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Dushyanth
Assignee: Dushyanth
 Fix For: 2.8.0

 Attachments: 
 0001-HADOOP-945-Enabling-WASB-Azure-Storage-Client-side-l.patch, 
 HADOOP-11960.002.patch


 AzureStorageExceptions currently are logged as part of the WASB code, which 
 often is not very informative. The Azure Storage SDK supports client-side 
 logging that can be enabled to log relevant information about requests made 
 from the Storage client. 
 This JIRA is created to enable Azure Storage client-side logging at the job 
 submission level. Users should be able to configure client-side logging on a 
 per-job basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11938) Fix ByteBuffer version encode/decode API of raw erasure coder

2015-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545269#comment-14545269
 ] 

Hadoop QA commented on HADOOP-11938:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 51s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  5s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m 11s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 42s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 41s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  22m 34s | Tests passed in 
hadoop-common. |
| | |  60m 14s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733092/HADOOP-11938-HDFS-7285-v4.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / a35936d |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6702/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6702/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6702/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6702/console |


This message was automatically generated.

 Fix ByteBuffer version encode/decode API of raw erasure coder
 -

 Key: HADOOP-11938
 URL: https://issues.apache.org/jira/browse/HADOOP-11938
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11938-HDFS-7285-v1.patch, 
 HADOOP-11938-HDFS-7285-v2.patch, HADOOP-11938-HDFS-7285-v3.patch, 
 HADOOP-11938-HDFS-7285-v4.patch, HADOOP-11938-HDFS-7285-workaround.patch


 While investigating a test failure in {{TestRecoverStripedFile}}, an issue 
 was found in the raw erasure coder, caused by an optimization in the code 
 below. It assumes the heap buffer backed by the bytes array available for 
 reading or writing always starts at zero and takes the whole space.
 {code}
   protected static byte[][] toArrays(ByteBuffer[] buffers) {
     byte[][] bytesArr = new byte[buffers.length][];
     ByteBuffer buffer;
     for (int i = 0; i < buffers.length; i++) {
       buffer = buffers[i];
       if (buffer == null) {
         bytesArr[i] = null;
         continue;
       }
       if (buffer.hasArray()) {
         bytesArr[i] = buffer.array();
       } else {
         throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
             "expecting heap buffer");
       }
     }
     return bytesArr;
   }
 {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11505) hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some cases

2015-05-15 Thread Georgios Kasapoglou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545294#comment-14545294
 ] 

Georgios Kasapoglou commented on HADOOP-11505:
--

Hello,
instead of doing calculations, can we use gcc's:
val = __builtin_bswap32(val);
and
val = __builtin_bswap64(val);

These builtins are supposed to work on any architecture. Is that wrong? I've 
used them, compilation was successful, and they seem to work on T1/SPARC.


 hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
 cases
 ---

 Key: HADOOP-11505
 URL: https://issues.apache.org/jira/browse/HADOOP-11505
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11505.001.patch


 hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
 cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
 code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
 this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11484) hadoop-mapreduce-client-nativetask fails to build on ARM AARCH64 due to x86 asm statements

2015-05-15 Thread Georgios Kasapoglou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545279#comment-14545279
 ] 

Georgios Kasapoglou commented on HADOOP-11484:
--

Hello,
I've used gcc's __builtin_bswap instead. 

+++ ./src/main/native/src/lib/primitives.h  2015-05-08 16:52:22.568992860 +0300
@@ -97,13 +97,15 @@
  * little-endian to big-endian or vice versa
  */
 inline uint32_t bswap(uint32_t val) {
-  __asm__("bswap %0" : "=r" (val) : "0" (val));
+//  __asm__("bswap %0" : "=r" (val) : "0" (val));
+  val = __builtin_bswap32(val);
   return val;
 }
 
 inline uint64_t bswap64(uint64_t val) {
 #ifdef __X64
-  __asm__("bswapq %0" : "=r" (val) : "0" (val));
+//  __asm__("bswapq %0" : "=r" (val) : "0" (val));
+  val = __builtin_bswap64(val);
 #else
 
   uint64_t lower = val & 0xffffffffU;


 hadoop-mapreduce-client-nativetask fails to build on ARM AARCH64 due to x86 
 asm statements
 --

 Key: HADOOP-11484
 URL: https://issues.apache.org/jira/browse/HADOOP-11484
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: ARM aarch64 development board
Reporter: Edward Nevill
Assignee: Edward Nevill
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-11484.001.patch


 Hadoop fails to build on ARM aarch64 (or any non x86 platform) because of the 
 following in
 hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/primitives.h
 {noformat}
 /**
  * little-endian to big-endian or vice versa
  */
 inline uint32_t bswap(uint32_t val) {
   __asm__("bswap %0" : "=r" (val) : "0" (val));
   return val;
 }
 inline uint64_t bswap64(uint64_t val) {
 #ifdef __X64
   __asm__("bswapq %0" : "=r" (val) : "0" (val));
 #else
   uint64_t lower = val & 0xffffffffU;
   uint32_t higher = (val >> 32) & 0xffffffffU;
   lower = bswap(lower);
   higher = bswap(higher);
   return (lower << 32) + higher;
 #endif
   return val;
 }
 {noformat}
 The following also fails in
 hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/util/Checksum.cc
 {noformat}
 static uint32_t cpuid(uint32_t eax_in) {
   uint32_t eax, ebx, ecx, edx;
 #  if defined(__PIC__) && !defined(__LP64__)
 // 32-bit PIC code uses the ebx register for the base offset --
 // have to save and restore it on the stack
   asm("pushl %%ebx\n\t"
       "cpuid\n\t"
       "movl %%ebx, %[ebx]\n\t"
       "popl %%ebx" : "=a" (eax), [ebx] "=r"(ebx), "=c"(ecx), "=d"(edx)
       : "a" (eax_in)
       : "cc");
 #  else
   asm("cpuid" : "=a" (eax), "=b"(ebx), "=c"(ecx), "=d"(edx) : "a"(eax_in)
       : "cc");
 #  endif
   return ecx;
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11505) hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some cases

2015-05-15 Thread Georgios Kasapoglou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545396#comment-14545396
 ] 

Georgios Kasapoglou commented on HADOOP-11505:
--

Thanks a lot, Binglin. I was not aware of this ticket.
Actually my comment was about primitives.h, which this patch (11505) covers, 
but you pointed me to the solution for my next issue. :)


 hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
 cases
 ---

 Key: HADOOP-11505
 URL: https://issues.apache.org/jira/browse/HADOOP-11505
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11505.001.patch


 hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
 cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
 code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
 this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11713) ViewFileSystem should support snapshot methods.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545409#comment-14545409
 ] 

Hudson commented on HADOOP-11713:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #197 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/197/])
HADOOP-11713. ViewFileSystem should support snapshot methods. Contributed by 
Rakesh R. (cnauroth: rev 09fe16f166392a99e1e54001a9112c6a4632dfc8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java


 ViewFileSystem should support snapshot methods.
 ---

 Key: HADOOP-11713
 URL: https://issues.apache.org/jira/browse/HADOOP-11713
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.2.0
Reporter: Chris Nauroth
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HADOOP-11713-001.patch, HADOOP-11713-002.patch, 
 HDFS-5641-001.patch


 Currently, {{ViewFileSystem}} does not dispatch snapshot methods through the 
 mount table.  All snapshot methods throw {{UnsupportedOperationException}}, 
 even though the underlying mount points could be HDFS instances that support 
 snapshots.  We need to update {{ViewFileSystem}} to implement the snapshot 
 methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11976) submit job with oozie to between cluster

2015-05-15 Thread sunmeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545262#comment-14545262
 ] 

sunmeng commented on HADOOP-11976:
--


Log Type: syslog
Log Length: 29472
2015-05-14 14:45:41,956 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
application appattempt_1431573307666_0426_01
2015-05-14 14:45:42,498 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; 
 Ignoring.
2015-05-14 14:45:42,503 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-05-14 14:45:42,505 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: hadoop.ssl.client.conf;  
Ignoring.
2015-05-14 14:45:42,508 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: 
hadoop.ssl.keystores.factory.class;  Ignoring.
2015-05-14 14:45:42,515 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: hadoop.ssl.server.conf;  
Ignoring.
2015-05-14 14:45:42,544 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-05-14 14:45:42,707 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2015-05-14 14:45:42,707 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, 
Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@344d638a)
2015-05-14 14:45:42,748 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: RM_DELEGATION_TOKEN, 
Service: 10.10.11.248:8032, Ident: (owner=mengsun, renewer=oozie mr token, 
realUser=oozie, issueDate=1431585938658, maxDate=1432190738658, 
sequenceNumber=2, masterKeyId=2)
2015-05-14 14:45:43,467 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; 
 Ignoring.
2015-05-14 14:45:43,470 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2015-05-14 14:45:43,471 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: hadoop.ssl.client.conf;  
Ignoring.
2015-05-14 14:45:43,472 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: 
hadoop.ssl.keystores.factory.class;  Ignoring.
2015-05-14 14:45:43,475 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: hadoop.ssl.server.conf;  
Ignoring.
2015-05-14 14:45:43,488 WARN [main] org.apache.hadoop.conf.Configuration: 
job.xml:an attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
2015-05-14 14:45:44,093 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable
2015-05-14 14:45:44,316 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config 
null
2015-05-14 14:45:44,318 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is 
org.apache.hadoop.mapred.FileOutputCommitter
2015-05-14 14:45:44,387 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.jobhistory.EventType for class 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2015-05-14 14:45:44,389 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2015-05-14 14:45:44,390 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2015-05-14 14:45:44,391 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2015-05-14 14:45:44,392 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2015-05-14 14:45:44,393 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class 

[jira] [Created] (HADOOP-11976) submit job with oozie to between cluster

2015-05-15 Thread sunmeng (JIRA)
sunmeng created HADOOP-11976:


 Summary: submit job with oozie to between cluster
 Key: HADOOP-11976
 URL: https://issues.apache.org/jira/browse/HADOOP-11976
 Project: Hadoop Common
  Issue Type: Bug
Reporter: sunmeng






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11505) hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some cases

2015-05-15 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545322#comment-14545322
 ] 

Binglin Chang commented on HADOOP-11505:


Yes, that's what HADOOP-11665 did.

 hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
 cases
 ---

 Key: HADOOP-11505
 URL: https://issues.apache.org/jira/browse/HADOOP-11505
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11505.001.patch


 hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
 cases.  Also, on some alternate, non-x86, non-ARM architectures the generated 
 code is incorrect.  Thanks to Steve Loughran and Edward Nevill for finding 
 this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11979) optionally have test-patch make a partial -1 before long run tests

2015-05-15 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-11979:


 Summary: optionally have test-patch make a partial -1 before long 
run tests
 Key: HADOOP-11979
 URL: https://issues.apache.org/jira/browse/HADOOP-11979
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Sean Busbey
Priority: Minor


If test-patch has -1s for things that are relatively fast (e.g. javac, 
checkstyle, whitespace, etc.), then it would be nice if it would post a "-1s so 
far" message to JIRA before running the long-run phase (i.e. test), similar to 
what it does now for reexec of dev-support patches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11713) ViewFileSystem should support snapshot methods.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545709#comment-14545709
 ] 

Hudson commented on HADOOP-11713:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #196 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/196/])
HADOOP-11713. ViewFileSystem should support snapshot methods. Contributed by 
Rakesh R. (cnauroth: rev 09fe16f166392a99e1e54001a9112c6a4632dfc8)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java


 ViewFileSystem should support snapshot methods.
 ---

 Key: HADOOP-11713
 URL: https://issues.apache.org/jira/browse/HADOOP-11713
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.2.0
Reporter: Chris Nauroth
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HADOOP-11713-001.patch, HADOOP-11713-002.patch, 
 HDFS-5641-001.patch


 Currently, {{ViewFileSystem}} does not dispatch snapshot methods through the 
 mount table.  All snapshot methods throw {{UnsupportedOperationException}}, 
 even though the underlying mount points could be HDFS instances that support 
 snapshots.  We need to update {{ViewFileSystem}} to implement the snapshot 
 methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11960) Enable Azure-Storage Client Side logging.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545711#comment-14545711
 ] 

Hudson commented on HADOOP-11960:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #196 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/196/])
HADOOP-11960. Enable Azure-Storage Client Side logging. Contributed by 
Dushyanth. (cnauroth: rev cb8e69a80cecb95abdfc93a787bea0bedef275ed)
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemClientLogging.java


 Enable Azure-Storage Client Side logging.
 -

 Key: HADOOP-11960
 URL: https://issues.apache.org/jira/browse/HADOOP-11960
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Dushyanth
Assignee: Dushyanth
 Fix For: 2.8.0

 Attachments: 
 0001-HADOOP-945-Enabling-WASB-Azure-Storage-Client-side-l.patch, 
 HADOOP-11960.002.patch


 AzureStorageExceptions currently are logged as part of the WASB code, which 
 often is not very informative. The Azure Storage SDK supports client-side 
 logging that can be enabled to log relevant information about requests made 
 from the Storage client. 
 This JIRA is created to enable Azure Storage client-side logging at the job 
 submission level. Users should be able to configure client-side logging on a 
 per-job basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11978) test-patch should allow pom-only changes without test change complaints

2015-05-15 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-11978:


 Summary: test-patch should allow pom-only changes without test 
change complaints
 Key: HADOOP-11978
 URL: https://issues.apache.org/jira/browse/HADOOP-11978
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sean Busbey
Priority: Minor


When a change only touches poms, we still need javac tests but we don't need to 
complain that there aren't more tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11979) optionally have test-patch make a partial -1 before long run checks

2015-05-15 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11979:
-
Summary: optionally have test-patch make a partial -1 before long run 
checks  (was: optionally have test-patch make a partial -1 before long run 
tests)

 optionally have test-patch make a partial -1 before long run checks
 ---

 Key: HADOOP-11979
 URL: https://issues.apache.org/jira/browse/HADOOP-11979
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Sean Busbey
Priority: Minor

 If test-patch has -1s for things that are relatively fast (e.g. javac, 
 checkstyle, whitespace, etc.), then it would be nice if it would post a "-1s 
 so far" message to JIRA before running the long-run phase (i.e. test), 
 similar to what it does now for reexec of dev-support patches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11713) ViewFileSystem should support snapshot methods.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545726#comment-14545726
 ] 

Hudson commented on HADOOP-11713:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2144 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2144/])
HADOOP-11713. ViewFileSystem should support snapshot methods. Contributed by 
Rakesh R. (cnauroth: rev 09fe16f166392a99e1e54001a9112c6a4632dfc8)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java


 ViewFileSystem should support snapshot methods.
 ---

 Key: HADOOP-11713
 URL: https://issues.apache.org/jira/browse/HADOOP-11713
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.2.0
Reporter: Chris Nauroth
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HADOOP-11713-001.patch, HADOOP-11713-002.patch, 
 HDFS-5641-001.patch


 Currently, {{ViewFileSystem}} does not dispatch snapshot methods through the 
 mount table.  All snapshot methods throw {{UnsupportedOperationException}}, 
 even though the underlying mount points could be HDFS instances that support 
 snapshots.  We need to update {{ViewFileSystem}} to implement the snapshot 
 methods.
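
For context, dispatching a snapshot method through the mount table amounts to 
resolving the path to its target file system and forwarding the call. Below is 
a minimal sketch of one such method using ViewFileSystem's usual 
{{fsState.resolve}} helper; treat it as an illustration of the pattern, not the 
committed code:

{code}
// Sketch: dispatch createSnapshot through the mount table instead of
// throwing UnsupportedOperationException.
@Override
public Path createSnapshot(Path path, String snapshotName) throws IOException {
  InodeTree.ResolveResult<FileSystem> res =
      fsState.resolve(getUriPath(path), true);
  return res.targetFileSystem.createSnapshot(res.remainingPath, snapshotName);
}
{code}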



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11960) Enable Azure-Storage Client Side logging.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14545728#comment-14545728
 ] 

Hudson commented on HADOOP-11960:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2144 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2144/])
HADOOP-11960. Enable Azure-Storage Client Side logging. Contributed by 
Dushyanth. (cnauroth: rev cb8e69a80cecb95abdfc93a787bea0bedef275ed)
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemClientLogging.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Enable Azure-Storage Client Side logging.
 -

 Key: HADOOP-11960
 URL: https://issues.apache.org/jira/browse/HADOOP-11960
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Dushyanth
Assignee: Dushyanth
 Fix For: 2.8.0

 Attachments: 
 0001-HADOOP-945-Enabling-WASB-Azure-Storage-Client-side-l.patch, 
 HADOOP-11960.002.patch


 AzureStorageExceptions are currently logged as part of the WASB code, which 
 is often not very informative. The Azure Storage SDK supports client-side 
 logging that, when enabled, logs relevant information about each request made 
 by the Storage client. 
 This JIRA is created to enable Azure Storage client-side logging at the job 
 submission level. Users should be able to configure client-side logging on a 
 per-job basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11918) Listing an empty s3a root directory throws FileNotFound.

2015-05-15 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14545741#comment-14545741
 ] 

Lei (Eddy) Xu commented on HADOOP-11918:


Hi, [~thodemoor] 

Are you pointing to another JIRA? HADOOP-11918 is this one.

Thanks!


 Listing an empty s3a root directory throws FileNotFound.
 

 Key: HADOOP-11918
 URL: https://issues.apache.org/jira/browse/HADOOP-11918
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: BB2015-05-TBR, s3
 Attachments: HADOOP-11918.000.patch, HADOOP-11918.001.patch


 With an empty s3 bucket, run
 {code}
 $ hadoop fs -D... -ls s3a://hdfs-s3a-test/
 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 ls: `s3a://hdfs-s3a-test/': No such file or directory
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-15 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546236#comment-14546236
 ] 

Haohui Mai commented on HADOOP-11772:
-

It seems that the profiling result points to the connection cache in the 
{{Client}} class.

The v4 patch uses Guava's LoadingCache to implement the connection cache, where 
the read path should be lock-free. [~t3rmin4t0r], can you try it out? 
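
For readers unfamiliar with the approach, below is a minimal sketch of a client 
cache built on Guava's LoadingCache; reads of an already-cached entry take no 
lock. This is only an illustration of the technique, not the v4 patch itself, 
and the {{Client}} stand-in is hypothetical.

{code}
import javax.net.SocketFactory;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class LoadingClientCache {
  // Stand-in for org.apache.hadoop.ipc.Client; details omitted.
  static class Client {
    Client(SocketFactory factory) {}
  }

  private final LoadingCache<SocketFactory, Client> clients =
      CacheBuilder.newBuilder().build(
          new CacheLoader<SocketFactory, Client>() {
            @Override
            public Client load(SocketFactory factory) {
              // Invoked at most once per key; Guava serializes concurrent
              // loads of the same key, so no synchronized block is needed.
              return new Client(factory);
            }
          });

  Client getClient(SocketFactory factory) {
    // Lock-free for cache hits, unlike the synchronized getClient() above.
    return clients.getUnchecked(factory);
  }
}
{code}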

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
 HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
 HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
 dfs-sync-ipc.png, sync-client-bt.png, sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
   SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11982) Inconsistency in handling URI without authority

2015-05-15 Thread Kannan Rajah (JIRA)
Kannan Rajah created HADOOP-11982:
-

 Summary: Inconsistency in handling URI without authority
 Key: HADOOP-11982
 URL: https://issues.apache.org/jira/browse/HADOOP-11982
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Kannan Rajah
Assignee: Kannan Rajah


There are some inconsistencies coming from the Hadoop class Path.java. This seems 
to have been the behavior for a very long time. I am not sure about the implications 
of correcting it, so I want to get some opinions.

When you use makeQualified, a NULL authority is converted into an empty authority. 
When the authority is NULL, toString() will not contain the "//" before the actual 
absolute path; when it is empty, it will. There are ecosystem components that may 
or may not use makeQualified consistently. We have hit cases where 
Path.toString() is used as a key in a HashMap, so lookups start failing when the 
entry was added with a Path constructed using makeQualified and the lookup key was not.

Proposal: Can we always default to an empty authority when it is NULL?

-
Examples
---
Path p = new Path("hdfs:/a/b/c");
p.toString()  ->  "hdfs:/a/b/c"    (a single slash)
p = p.makeQualified(fs);
p.toString()  ->  "hdfs:///a/b/c"  (three slashes)
-
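
A small self-contained sketch of the HashMap-key mismatch described above; it 
uses {{Path#makeQualified(URI, Path)}} with an authority-less default URI purely 
for illustration, mirroring the example rather than any particular cluster setup:

{code}
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.fs.Path;

public class PathKeyMismatch {
  public static void main(String[] args) {
    // Constructed directly, the authority is NULL: a single slash.
    Path unqualified = new Path("hdfs:/a/b/c");        // "hdfs:/a/b/c"

    // Qualified against an authority-less default URI, the NULL
    // authority becomes the empty string: three slashes.
    Path qualified = new Path("/a/b/c")
        .makeQualified(URI.create("hdfs:///"), new Path("/"));  // "hdfs:///a/b/c"

    Map<String, String> index = new HashMap<String, String>();
    index.put(qualified.toString(), "some value");

    // Both strings name the same file, but the lookup misses.
    System.out.println(index.get(unqualified.toString()));  // prints null
  }
}
{code}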



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546315#comment-14546315
 ] 

Allen Wittenauer edited comment on HADOOP-11929 at 5/15/15 10:43 PM:
-

Oh, because it's relatively easy to compare, here's what the new check_site 
test looks like.  The biggest change you'll notice is that what were formerly 
global tests are now module tests.  So instead of compiling the entire site, 
we only compile the modules that personality said we should do:

{code}
function check_site
{
  local -r mypwd=$(pwd)
  local results=0
  local i=0    # module index

  big_console_header "Determining if patched site still builds"

  verify_needed_test site

  if [[ $? == 0 ]]; then
    echo "This patch does not appear to need site checks."
    return 0
  fi

  start_clock

  personality patch site

  until [[ ${i} -eq ${#MODULE[@]} ]]; do
    pushd "${BASEDIR}/${MODULE[${i}]}" >/dev/null
    echo_and_redirect "${PATCH_DIR}/patchSiteWarnings-${MODULE[${i}]}.txt" \
      "${MVN}" clean site site:stage -Dmaven.javadoc.skip=true \
      ${MODULEEXTRAPARAM[${i}]} -D${PROJECT_NAME}PatchProcess
    if [[ $? != 0 ]]; then
      echo "Site compilation for ${MODULE[${i}]} is broken"
      add_jira_table -1 site "Site compilation for ${MODULE[${i}]} is broken."
      add_jira_footer site "@@BASE@@/patchSiteWarnings-${MODULE[${i}]}.txt"
      ((results = results + 1))
    fi
    popd >/dev/null
    ((i = i + 1))
  done

  if [[ ${results} -gt 0 ]]; then
    return 1
  fi

  add_jira_table +1 site "Site still builds."
  return 0
}
{code}


was (Author: aw):
Oh, because it's relatively easy to compare, here's what the new check_site 
test looks like.  The biggest change you'll notice is that what were formerly 
global tests are now module tests.  So instead of compiling the entire site, we 
only compile the modules that personality said we should do:

{code}
function check_site
{
  local -r mypwd=$(pwd)
  local results=0
  local i=0    # module index

  big_console_header "Determining if patched site still builds"

  verify_needed_test site

  if [[ $? == 0 ]]; then
    echo "This patch does not appear to need site checks."
    return 0
  fi

  start_clock

  personality patch site

  until [[ ${i} -eq ${#MODULE[@]} ]]; do
    pushd "${BASEDIR}/${MODULE[${i}]}" >/dev/null
    echo_and_redirect "${PATCH_DIR}/patchSiteWarnings-${MODULE[${i}]}.txt" \
      "${MVN}" clean site site:stage -Dmaven.javadoc.skip=true \
      ${MODULEEXTRAPARAM[${i}]} -D${PROJECT_NAME}PatchProcess
    if [[ $? != 0 ]]; then
      echo "Site compilation for ${MODULE[${i}]} is broken"
      add_jira_table -1 site "Site compilation for ${MODULE[${i}]} is broken."
      add_jira_footer site "@@BASE@@/patchSiteWarnings-${MODULE[${i}]}.txt"
      ((results = results + 1))
    fi
    popd >/dev/null
    ((i = i + 1))
  done

  if [[ ${results} -gt 0 ]]; then
    return 1
  fi

  add_jira_table +1 site "Site still builds."
  return 0
}
{code}

 add test-patch plugin points for customizing build layout
 -

 Key: HADOOP-11929
 URL: https://issues.apache.org/jira/browse/HADOOP-11929
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sean Busbey
Assignee: Allen Wittenauer
Priority: Minor
 Attachments: hadoop.sh


 nothing fancy, just something that doesn't have a top-level pom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9489) Eclipse instructions in BUILDING.txt don't work

2015-05-15 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546318#comment-14546318
 ] 

Ravi Prakash commented on HADOOP-9489:
--

Thanks for the patches Chris! I use Eclipse and I am able to import the 
projects without this patch. I *do* build the project before running mvn 
eclipse:eclipse, so I'm a +1 to adding that in BUILDING.txt. We should probably 
also level expectations (some newcomers may expect to be able to build and 
run the project from Eclipse; I've never been able to do this). I find Eclipse 
immensely useful for navigating the code, attaching remote debuggers and 
stepping through, but I build and run only on the command line using mvn. I 
wonder if that is other people's process too?

 Eclipse instructions in BUILDING.txt don't work
 ---

 Key: HADOOP-9489
 URL: https://issues.apache.org/jira/browse/HADOOP-9489
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
Reporter: Carl Steinbach
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9489.003.patch, HADOOP-9489.004.patch, 
 HADOOP-9489.1.patch, HADOOP-9489.2.patch, eclipse_hadoop_errors.txt, error.log


 I have tried several times to import Hadoop trunk into Eclipse following the 
 instructions in the BUILDING.txt file, but so far have not been able to get 
 it to work.
 If I use a fresh install of Eclipse 4.2.2, Eclipse will complain about an 
 undefined M2_REPO environment variable. I discovered that this is defined 
 automatically by the M2Eclipse plugin, and think that the BUILDING.txt doc 
 should be updated to explain this.
 After installing M2Eclipse I tried importing the code again, and now get over 
 2500 errors related to missing class dependencies. Many of these errors 
 correspond to missing classes in the oah*.proto namespace, which makes me 
 think that 'mvn eclipse:eclipse' is not triggering protoc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546431#comment-14546431
 ] 

Hadoop QA commented on HADOOP-11983:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 15s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck |   0m  4s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 22s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733278/HADOOP-11983.001.patch 
|
| Optional Tests | shellcheck |
| git revision | trunk / 8f37873 |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6706/console |


This message was automatically generated.

 HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
 ---

 Key: HADOOP-11983
 URL: https://issues.apache.org/jira/browse/HADOOP-11983
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-11983.001.patch


 The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
 should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
 appended.
 You can easily try it out by doing something like
 {noformat}
 HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
 {noformat}
 (HADOOP_CLASSPATH should point to an existing directory)
 I think the if clause in hadoop_add_to_classpath_userpath is reversed.
 This issue seems specific to the trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-1540) distcp should support an exclude list

2015-05-15 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546429#comment-14546429
 ] 

Jing Zhao commented on HADOOP-1540:
---

The patch looks pretty good to me. Just some minor comments:
# In {{RegexCopyFilter}} and {{TrueCopyFilter}}, please make sure the @Override 
annotation is added for overridden methods. Besides, {{@inheritDoc}} is actually 
unnecessary unless extra javadoc is added.
# {{RegexCopyFilter}}'s constructor can take the file name as the parameter 
instead of the Configuration object (see the sketch below).
# {{RegexCopyFilter}}'s default constructor can be removed. For tests we can 
pass a string to its constructor.
# The options parameter can be removed from 
{{SimpleCopyListing#writeToFileListing}}.
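
A rough sketch of the constructor shape suggested in item 2; the actual 
{{RegexCopyFilter}} in the patch is not shown here, so the field and type 
details below are assumptions for illustration only:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch: take the filters file name directly instead of
// digging it out of a Configuration. Tests can then pass any string
// without constructing a Configuration at all.
class RegexCopyFilter {
  private final String filtersFilename;
  private final List<Pattern> filters = new ArrayList<Pattern>();

  RegexCopyFilter(String filtersFilename) {
    this.filtersFilename = filtersFilename;
  }
}
{code}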

 distcp should support an exclude list
 -

 Key: HADOOP-1540
 URL: https://issues.apache.org/jira/browse/HADOOP-1540
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.6.0
Reporter: Senthil Subramanian
Assignee: Rich Haase
Priority: Minor
  Labels: BB2015-05-TBR, patch
 Attachments: HADOOP-1540.008.patch


 There should be a way to ignore specific paths (eg: those that have already 
 been copied over under the current srcPath). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11894) Bump the version of HTrace to 3.2.0-incubating

2015-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546440#comment-14546440
 ] 

Hadoop QA commented on HADOOP-11894:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 46s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 39s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 44s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 56s | Site still builds. |
| {color:red}-1{color} | checkstyle |   3m 11s | The applied patch generated  1 
new checkstyle issues (total was 118, now 118). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 39s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:red}-1{color} | common tests |  22m 34s | Tests failed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 169m  5s | Tests passed in hadoop-hdfs. 
|
| | | 240m  9s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733229/HADOOP-11894.002.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / 03a293a |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6704/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6704/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6704/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6704/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6704/console |


This message was automatically generated.

 Bump the version of HTrace to 3.2.0-incubating
 --

 Key: HADOOP-11894
 URL: https://issues.apache.org/jira/browse/HADOOP-11894
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
 Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch


 * update pom.xml
 * update documentation
 * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
 {{addKVAnnotation(String key, String value)}} (see the call-site sketch below)
 * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
 {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}
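
For reference, a sketch of what the call-site change implied by the new 
String-based API might look like; the span setup here is illustrative, not code 
from the patch:

{code}
import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class AnnotationExample {
  public static void main(String[] args) {
    TraceScope scope = Trace.startSpan("example", Sampler.ALWAYS);
    try {
      if (scope.getSpan() != null) {
        // With 3.2.0-incubating the annotation takes Strings
        // rather than byte[] key/value pairs.
        scope.getSpan().addKVAnnotation("operation", "read");
      }
    } finally {
      scope.close();
    }
  }
}
{code}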



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-15 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11772:

Attachment: HADOOP-11772.004.patch

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
 HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
 HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
 dfs-sync-ipc.png, sync-client-bt.png, sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
   SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-15 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546323#comment-14546323
 ] 

Gopal V commented on HADOOP-11772:
--

bq. reproduce the problem is to spawn a client that talks to 200 nodes 
concurrently, but unfortunately I don't have the access of the cluster nor 
YourKit.

The problem was reported as being visible in one process talking to one 
NameNode. You do not need 200 nodes to reproduce this bug - I reported it as 
observed with a single process and a single NameNode instance (not even HA).

I got my YourKit license for use with Apache Hive for free - see section (G) of 
their license and email their sales folks to get a free license.

Those arguments aside, the earlier patch had a unit test - the 
testClientCacheFromMultiThreads() that [~ajisakaa] wrote. When you run it with 
the new patch, does it show blocked threads or de-scheduled threads?

This is an important fix late in the cycle; the new patch should get as much 
testing as early as possible.

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
 HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
 HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
 dfs-sync-ipc.png, sync-client-bt.png, sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
   SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9489) Eclipse instructions in BUILDING.txt don't work

2015-05-15 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546335#comment-14546335
 ] 

Ravi Prakash commented on HADOOP-9489:
--

Also, I import as Existing Eclipse Projects (rather than using the m2eclipse 
plugin). With m2eclipse I find that it complains with "Plugin execution not 
covered by lifecycle configuration: 
org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (execution: 
compile-protoc, phase: generate-sources)".


 Eclipse instructions in BUILDING.txt don't work
 ---

 Key: HADOOP-9489
 URL: https://issues.apache.org/jira/browse/HADOOP-9489
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
Reporter: Carl Steinbach
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9489.003.patch, HADOOP-9489.004.patch, 
 HADOOP-9489.1.patch, HADOOP-9489.2.patch, eclipse_hadoop_errors.txt, error.log


 I have tried several times to import Hadoop trunk into Eclipse following the 
 instructions in the BUILDING.txt file, but so far have not been able to get 
 it to work.
 If I use a fresh install of Eclipse 4.2.2, Eclipse will complain about an 
 undefined M2_REPO environment variable. I discovered that this is defined 
 automatically by the M2Eclipse plugin, and think that the BUILDING.txt doc 
 should be updated to explain this.
 After installing M2Eclipse I tried importing the code again, and now get over 
 2500 errors related to missing class dependencies. Many of these errors 
 correspond to missing classes in the oah*.proto namespace, which makes me 
 think that 'mvn eclipse:eclipse' is not triggering protoc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-15 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee reassigned HADOOP-11983:


Assignee: Sangjin Lee

 HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
 ---

 Key: HADOOP-11983
 URL: https://issues.apache.org/jira/browse/HADOOP-11983
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee

 The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
 should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
 appended.
 You can easily try it out by doing something like
 {noformat}
 HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
 {noformat}
 (HADOOP_CLASSPATH should point to an existing directory)
 I think the if clause in hadoop_add_to_classpath_userpath is reversed.
 This issue seems specific to the trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-1540) distcp should support an exclude list

2015-05-15 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546384#comment-14546384
 ] 

Jing Zhao commented on HADOOP-1540:
---

Sorry for the delay, [~rhaase]. I will review it asap.

 distcp should support an exclude list
 -

 Key: HADOOP-1540
 URL: https://issues.apache.org/jira/browse/HADOOP-1540
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.6.0
Reporter: Senthil Subramanian
Assignee: Rich Haase
Priority: Minor
  Labels: BB2015-05-TBR, patch
 Attachments: HADOOP-1540.008.patch


 There should be a way to ignore specific paths (eg: those that have already 
 been copied over under the current srcPath). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-15 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546398#comment-14546398
 ] 

Sangjin Lee commented on HADOOP-11983:
--

The Windows side seems fine (hadoop-config.cmd).

 HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
 ---

 Key: HADOOP-11983
 URL: https://issues.apache.org/jira/browse/HADOOP-11983
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-11983.001.patch


 The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
 should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
 appended.
 You can easily try it out by doing something like
 {noformat}
 HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
 {noformat}
 (HADOOP_CLASSPATH should point to an existing directory)
 I think the if clause in hadoop_add_to_classpath_userpath is reversed.
 This issue seems specific to the trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2015-05-15 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546242#comment-14546242
 ] 

Jason Lowe commented on HADOOP-9984:


bq. In the case of globStatus, things are even worse if you choose to resolve 
symlinks, since then you can glob for '*foo' and get back 'bar'. A lot of 
software breaks if globs return back file names that the glob doesn't match.

As I understand it, globStatus is simply listStatus with filtering applied to 
the results.  If that's the case then globStatus should do whatever listStatus 
does with respect to symlinks, and that would be to resolve the symlink 
_except_ for the path in the resulting FileStatus.  This goes back to the 
readdir() + stat() analogy -- everything in the resulting FileStatus needs to 
be about where the symlink points _except_ the path.  The path would still be 
the path to the link, since that's what readdir() would see as well.  Every 
other field in FileStatus has to do with what stat() would return, so those 
fields should be reflective of what the symlink references.  So globStatus 
should not lead to surprises where 'foo*' returns 'bar' even in the presence of 
symlinks.
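
To make the readdir() + stat() analogy concrete, here is a minimal sketch of a 
helper with the semantics argued for above: attributes come from the resolved 
target, while the reported path stays the link's own. The helper name is 
hypothetical and this is not code from any patch:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LinkAwareStatus {
  // stat() through the link, but keep the link's own path in the result,
  // mirroring readdir() + stat() semantics.
  static FileStatus statKeepingLinkPath(FileSystem fs, Path link)
      throws IOException {
    FileStatus target = fs.getFileStatus(link);  // resolves the symlink
    return new FileStatus(target.getLen(), target.isDirectory(),
        target.getReplication(), target.getBlockSize(),
        target.getModificationTime(), target.getAccessTime(),
        target.getPermission(), target.getOwner(), target.getGroup(),
        null /* not reported as a symlink */, link /* the link's path */);
  }
}
{code}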

 FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by 
 default
 --

 Key: HADOOP-9984
 URL: https://issues.apache.org/jira/browse/HADOOP-9984
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Critical
  Labels: BB2015-05-TBR
 Attachments: HADOOP-9984.001.patch, HADOOP-9984.003.patch, 
 HADOOP-9984.005.patch, HADOOP-9984.007.patch, HADOOP-9984.009.patch, 
 HADOOP-9984.010.patch, HADOOP-9984.011.patch, HADOOP-9984.012.patch, 
 HADOOP-9984.013.patch, HADOOP-9984.014.patch, HADOOP-9984.015.patch


 During the process of adding symlink support to FileSystem, we realized that 
 many existing HDFS clients would be broken by listStatus and globStatus 
 returning symlinks.  One example is applications that assume that 
 !FileStatus#isFile implies that the inode is a directory.  As we discussed in 
 HADOOP-9972 and HADOOP-9912, we should default these APIs to returning 
 resolved paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11938) Fix ByteBuffer version encode/decode API of raw erasure coder

2015-05-15 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546289#comment-14546289
 ] 

Kai Zheng commented on HADOOP-11938:


Looks all good. I will commit this unless anybody suggests otherwise.

 Fix ByteBuffer version encode/decode API of raw erasure coder
 -

 Key: HADOOP-11938
 URL: https://issues.apache.org/jira/browse/HADOOP-11938
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11938-HDFS-7285-v1.patch, 
 HADOOP-11938-HDFS-7285-v2.patch, HADOOP-11938-HDFS-7285-v3.patch, 
 HADOOP-11938-HDFS-7285-v4.patch, HADOOP-11938-HDFS-7285-workaround.patch


 While investigating a test failure in {{TestRecoverStripedFile}}, an issue 
 was found in the raw erasure coder, caused by an optimization in the code 
 below. It assumes the heap buffer backed by the bytes array available for 
 reading or writing always starts at zero and takes the whole space.
 {code}
 protected static byte[][] toArrays(ByteBuffer[] buffers) {
   byte[][] bytesArr = new byte[buffers.length][];
   ByteBuffer buffer;
   for (int i = 0; i < buffers.length; i++) {
     buffer = buffers[i];
     if (buffer == null) {
       bytesArr[i] = null;
       continue;
     }
     if (buffer.hasArray()) {
       bytesArr[i] = buffer.array();
     } else {
       throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
           "expecting heap buffer");
     }
   }
   return bytesArr;
 }
 {code} 
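
A sketch of a position/limit-aware alternative, assuming the contract described 
above; it illustrates the direction of the fix rather than reproducing the 
committed patch:

{code}
import java.nio.ByteBuffer;

public class BufferWindows {
  // Copy only the readable window of each buffer, so inputs whose
  // position() is non-zero or whose limit() is short of capacity are
  // handled correctly. duplicate() leaves the caller's position intact.
  static byte[][] toArraysSafely(ByteBuffer[] buffers) {
    byte[][] bytesArr = new byte[buffers.length][];
    for (int i = 0; i < buffers.length; i++) {
      ByteBuffer buffer = buffers[i];
      if (buffer == null) {
        bytesArr[i] = null;
        continue;
      }
      byte[] bytes = new byte[buffer.remaining()];
      buffer.duplicate().get(bytes);
      bytesArr[i] = bytes;
    }
    return bytesArr;
  }
}
{code}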



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11929:
--
Attachment: hadoop.sh

hadoop.sh:
* an early cut at what the hadoop personality module might look like



 add test-patch plugin points for customizing build layout
 -

 Key: HADOOP-11929
 URL: https://issues.apache.org/jira/browse/HADOOP-11929
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sean Busbey
Assignee: Allen Wittenauer
Priority: Minor
 Attachments: hadoop.sh


 nothing fancy, just something that doesn't have a top-level pom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546315#comment-14546315
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

Oh, because it's relatively easy to compare, here's what the new check_site 
test looks like.  The biggest change you'll notice is that what were formerly 
global tests are now module tests.  So instead of compiling the entire site, we 
only compile the modules that personality said we should do:

{code}
function check_site
{
  local -r mypwd=$(pwd)
  local results=0
  local i=0    # module index

  big_console_header "Determining if patched site still builds"

  verify_needed_test site

  if [[ $? == 0 ]]; then
    echo "This patch does not appear to need site checks."
    return 0
  fi

  start_clock

  personality patch site

  until [[ ${i} -eq ${#MODULE[@]} ]]; do
    pushd "${BASEDIR}/${MODULE[${i}]}" >/dev/null
    echo_and_redirect "${PATCH_DIR}/patchSiteWarnings-${MODULE[${i}]}.txt" \
      "${MVN}" clean site site:stage -Dmaven.javadoc.skip=true \
      ${MODULEEXTRAPARAM[${i}]} -D${PROJECT_NAME}PatchProcess
    if [[ $? != 0 ]]; then
      echo "Site compilation for ${MODULE[${i}]} is broken"
      add_jira_table -1 site "Site compilation for ${MODULE[${i}]} is broken."
      add_jira_footer site "@@BASE@@/patchSiteWarnings-${MODULE[${i}]}.txt"
      ((results = results + 1))
    fi
    popd >/dev/null
    ((i = i + 1))
  done

  if [[ ${results} -gt 0 ]]; then
    return 1
  fi

  add_jira_table +1 site "Site still builds."
  return 0
}
{code}

 add test-patch plugin points for customizing build layout
 -

 Key: HADOOP-11929
 URL: https://issues.apache.org/jira/browse/HADOOP-11929
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sean Busbey
Assignee: Allen Wittenauer
Priority: Minor
 Attachments: hadoop.sh


 nothing fancy, just something that doesn't have a top-level pom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546350#comment-14546350
 ] 

Hadoop QA commented on HADOOP-11772:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 57s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 55s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  6s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 42s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:red}-1{color} | common tests |  22m 44s | Tests failed in 
hadoop-common. |
| | |  60m 47s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733258/HADOOP-11772.004.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f7e051c |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6705/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6705/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6705/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6705/console |


This message was automatically generated.

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
 HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
 HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
 dfs-sync-ipc.png, sync-client-bt.png, sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
   SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-15 Thread Sangjin Lee (JIRA)
Sangjin Lee created HADOOP-11983:


 Summary: HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
is supposed to do
 Key: HADOOP-11983
 URL: https://issues.apache.org/jira/browse/HADOOP-11983
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Sangjin Lee


The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
appended.

You can easily try it out by doing something like

{noformat}
HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
{noformat}
(HADOOP_CLASSPATH should point to an existing directory)

I think the if clause in hadoop_add_to_classpath_userpath is reversed.

This issue seems specific to the trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-15 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-11983:
-
Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

 HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
 ---

 Key: HADOOP-11983
 URL: https://issues.apache.org/jira/browse/HADOOP-11983
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-11983.001.patch


 The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
 should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
 appended.
 You can easily try it out by doing something like
 {noformat}
 HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
 {noformat}
 (HADOOP_CLASSPATH should point to an existing directory)
 I think the if clause in hadoop_add_to_classpath_userpath is reversed.
 This issue seems specific to the trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-15 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-11983:
-
Attachment: HADOOP-11983.001.patch

Patch posted.

 HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
 ---

 Key: HADOOP-11983
 URL: https://issues.apache.org/jira/browse/HADOOP-11983
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
 Attachments: HADOOP-11983.001.patch


 The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
 should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
 appended.
 You can easily try it out by doing something like
 {noformat}
 HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
 {noformat}
 (HADOOP_CLASSPATH should point to an existing directory)
 I think the if clause in hadoop_add_to_classpath_userpath is reversed.
 This issue seems specific to the trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11938) Enhance ByteBuffer version encode/decode API of raw erasure coder

2015-05-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11938:
---
Description: 
While investigating a test failure in {{TestRecoverStripedFile}}, it was found 
that the raw erasure coder may be used in ways that break the assumptions of its 
ByteBuffer version of the encode/decode API. Originally it assumed that the 
ByteBuffer inputs/outputs available for reading or writing always start at zero 
(position) and take the whole space (limit). Below is a code sample relying on 
that assumption:
{code}
  protected static byte[][] toArrays(ByteBuffer[] buffers) {
    byte[][] bytesArr = new byte[buffers.length][];

    ByteBuffer buffer;
    for (int i = 0; i < buffers.length; i++) {
      buffer = buffers[i];
      if (buffer == null) {
        bytesArr[i] = null;
        continue;
      }

      if (buffer.hasArray()) {
        bytesArr[i] = buffer.array();
      } else {
        throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
            "expecting heap buffer");
      }
    }

    return bytesArr;
  }
{code} 

  was:
While investigating a test failure in {{TestRecoverStripedFile}}, an issue was 
found in the raw erasure coder, caused by an optimization in the code below. It 
assumes the heap buffer backed by the bytes array available for reading or 
writing always starts at zero and takes the whole space.
{code}
  protected static byte[][] toArrays(ByteBuffer[] buffers) {
    byte[][] bytesArr = new byte[buffers.length][];

    ByteBuffer buffer;
    for (int i = 0; i < buffers.length; i++) {
      buffer = buffers[i];
      if (buffer == null) {
        bytesArr[i] = null;
        continue;
      }

      if (buffer.hasArray()) {
        bytesArr[i] = buffer.array();
      } else {
        throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
            "expecting heap buffer");
      }
    }

    return bytesArr;
  }
{code} 


 Enhance ByteBuffer version encode/decode API of raw erasure coder
 -

 Key: HADOOP-11938
 URL: https://issues.apache.org/jira/browse/HADOOP-11938
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11938-HDFS-7285-v1.patch, 
 HADOOP-11938-HDFS-7285-v2.patch, HADOOP-11938-HDFS-7285-v3.patch, 
 HADOOP-11938-HDFS-7285-v4.patch, HADOOP-11938-HDFS-7285-workaround.patch


 While investigating a test failure in {{TestRecoverStripedFile}}, it was 
 found that the raw erasure coder may be used in ways that break the 
 assumptions of its ByteBuffer version of the encode/decode API. Originally it 
 assumed that the ByteBuffer inputs/outputs available for reading or writing 
 always start at zero (position) and take the whole space (limit). Below is a 
 code sample relying on that assumption:
 {code}
 protected static byte[][] toArrays(ByteBuffer[] buffers) {
   byte[][] bytesArr = new byte[buffers.length][];
   ByteBuffer buffer;
   for (int i = 0; i < buffers.length; i++) {
     buffer = buffers[i];
     if (buffer == null) {
       bytesArr[i] = null;
       continue;
     }
     if (buffer.hasArray()) {
       bytesArr[i] = buffer.array();
     } else {
       throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
           "expecting heap buffer");
     }
   }
   return bytesArr;
 }
 {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11938) Enhance ByteBuffer version encode/decode API of raw erasure coder

2015-05-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11938:
---
  Resolution: Fixed
Release Note: This enhances the ByteBuffer version of the encode/decode API 
of the raw erasure coder, allowing it to process variable-length input data.
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

It was just committed. Thanks a lot, [~hitliuyi], for the great and thorough 
review!

 Enhance ByteBuffer version encode/decode API of raw erasure coder
 -

 Key: HADOOP-11938
 URL: https://issues.apache.org/jira/browse/HADOOP-11938
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11938-HDFS-7285-v1.patch, 
 HADOOP-11938-HDFS-7285-v2.patch, HADOOP-11938-HDFS-7285-v3.patch, 
 HADOOP-11938-HDFS-7285-v4.patch, HADOOP-11938-HDFS-7285-workaround.patch


 While investigating a test failure in {{TestRecoverStripedFile}}, it was 
 found that the raw erasure coder may be used in ways that break the 
 assumptions of its ByteBuffer version of the encode/decode API. Originally it 
 assumed that the ByteBuffer inputs/outputs available for reading or writing 
 always start at zero (position) and take the whole space (limit). Below is a 
 code sample relying on that assumption:
 {code}
 protected static byte[][] toArrays(ByteBuffer[] buffers) {
   byte[][] bytesArr = new byte[buffers.length][];
   ByteBuffer buffer;
   for (int i = 0; i < buffers.length; i++) {
     buffer = buffers[i];
     if (buffer == null) {
       bytesArr[i] = null;
       continue;
     }
     if (buffer.hasArray()) {
       bytesArr[i] = buffer.array();
     } else {
       throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
           "expecting heap buffer");
     }
   }
   return bytesArr;
 }
 {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-15 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546245#comment-14546245
 ] 

Gopal V commented on HADOOP-11772:
--

[~wheat9]: have you got any profiles on this with a multi-threaded test?

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
 HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
 HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
 dfs-sync-ipc.png, sync-client-bt.png, sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
   SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546307#comment-14546307
 ] 

Allen Wittenauer edited comment on HADOOP-11929 at 5/15/15 10:38 PM:
-

This cut clearly has some bugs\-\-like module manip not being called in the javac 
code\-\-but I wanted to post a sample of what I've been playing with to elicit 
feedback and see if this covers the usage cases.   This doesn't include the 
changes needed to test-patch.sh, which are pretty extensive, as you can imagine.

[~cnauroth], go ahead and file a jira to get the parallel tests in.  That's a 
minor thing and shouldn't make any rebasing for docker or this patch too hard. ;)

Thanks!


was (Author: aw):
This cut clearly has some bugs--like module manip not being called in the javac 
code--but I wanted to post a sample of what I've been playing with to elicit 
feedback and see if this covers the usage cases.   This doesn't include the 
changes needed to test-patch.sh, which are pretty extensive, as you can imagine.

[~cnauroth], go ahead and file a jira to get the parallel tests in.  That's a 
minor thing and shouldn't make any rebasing for docker or this patch too hard. ;)

Thanks!

 add test-patch plugin points for customizing build layout
 -

 Key: HADOOP-11929
 URL: https://issues.apache.org/jira/browse/HADOOP-11929
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sean Busbey
Assignee: Allen Wittenauer
Priority: Minor
 Attachments: hadoop.sh


 nothing fancy, just something that doesn't have a top-level pom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546307#comment-14546307
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

This cut clearly has some bugs--like module manip not being called in the javac 
code--but I wanted to post a sample of what I've been playing with to elicit 
feedback and see if this covers the usage cases.   This doesn't include the 
changes needed to test-patch.sh, which are pretty extensive, as you can imagine.

[~cnauroth], go ahead and file a jira to get the parallel tests in.  That's a 
minor thing and shouldn't make any rebasing for docker or this patch too hard. ;)

Thanks!

 add test-patch plugin points for customizing build layout
 -

 Key: HADOOP-11929
 URL: https://issues.apache.org/jira/browse/HADOOP-11929
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sean Busbey
Assignee: Allen Wittenauer
Priority: Minor
 Attachments: hadoop.sh


 nothing fancy, just something that doesn't have a top-level pom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-15 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-11984:
--

 Summary: Enable parallel JUnit tests in pre-commit.
 Key: HADOOP-11984
 URL: https://issues.apache.org/jira/browse/HADOOP-11984
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Reporter: Chris Nauroth
Assignee: Chris Nauroth


HADOOP-9287 and related issues implemented the parallel-tests Maven profile for 
running JUnit tests in multiple concurrent processes.  This issue proposes to 
activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-15 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546297#comment-14546297
 ] 

Haohui Mai commented on HADOOP-11772:
-

I think that one feasible way to reproduce the problem is to spawn a client 
that talks to 200 nodes concurrently, but unfortunately I have access to 
neither such a cluster nor YourKit.

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
 HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
 HADOOP-11772-wip-002.patch, HADOOP-11772.004.patch, after-ipc-fix.png, 
 dfs-sync-ipc.png, sync-client-bt.png, sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
   SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11938) Enhance ByteBuffer version encode/decode API of raw erasure coder

2015-05-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11938:
---
Summary: Enhance ByteBuffer version encode/decode API of raw erasure coder  
(was: Fix ByteBuffer version encode/decode API of raw erasure coder)

 Enhance ByteBuffer version encode/decode API of raw erasure coder
 -

 Key: HADOOP-11938
 URL: https://issues.apache.org/jira/browse/HADOOP-11938
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11938-HDFS-7285-v1.patch, 
 HADOOP-11938-HDFS-7285-v2.patch, HADOOP-11938-HDFS-7285-v3.patch, 
 HADOOP-11938-HDFS-7285-v4.patch, HADOOP-11938-HDFS-7285-workaround.patch


 While investigating a test failure in {{TestRecoverStripedFile}}, an issue 
 was found in the raw erasure coder, caused by an optimization in the code 
 below. It assumes the heap buffer backing the byte array available for 
 reading or writing always starts at offset zero and occupies the whole array.
 {code}
 protected static byte[][] toArrays(ByteBuffer[] buffers) {
   byte[][] bytesArr = new byte[buffers.length][];
   ByteBuffer buffer;
   for (int i = 0; i < buffers.length; i++) {
     buffer = buffers[i];
     if (buffer == null) {
       bytesArr[i] = null;
       continue;
     }
     if (buffer.hasArray()) {
       bytesArr[i] = buffer.array();
     } else {
       throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
           "expecting heap buffer");
     }
   }
   return bytesArr;
 }
 {code}
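 For illustration, a hedged sketch of how the {{hasArray()}} branch above 
 could honor the buffer's actual window instead of assuming it spans the 
 whole backing array (illustrative only, not necessarily what the patch does):
 {code}
 // Copy only the readable window of the heap buffer, respecting
 // arrayOffset() and position() rather than taking array() wholesale.
 byte[] bytes = new byte[buffer.remaining()];
 System.arraycopy(buffer.array(),
     buffer.arrayOffset() + buffer.position(), bytes, 0, bytes.length);
 bytesArr[i] = bytes;
 {code}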



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11894) Bump the version of HTrace to 3.2.0-incubating

2015-05-15 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546466#comment-14546466
 ] 

Masatake Iwasaki commented on HADOOP-11894:
---

bq. Failed unit tests   
hadoop.security.token.delegation.web.TestWebDelegationToken

This test failure is not related to the code path fixed by the patch. 
{{TestWebDelegationToken}} succeeded in my local environment.


 Bump the version of HTrace to 3.2.0-incubating
 --

 Key: HADOOP-11894
 URL: https://issues.apache.org/jira/browse/HADOOP-11894
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
 Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch


 * update pom.xml
 * update documentation
 * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
 {{addKVAnnotation(String key, String value)}}
 * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
 {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}
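
 For illustration, a hedged sketch of the new-style annotation call after 
 this bump (assuming the HTrace 3.2 {{Span}} API; the span-acquisition code 
 and the key/value strings here are illustrative):
 {code}
 import org.apache.htrace.Span;
 import org.apache.htrace.Trace;

 // HTrace 3.2 replaces byte[] key/value annotations with Strings.
 Span span = Trace.currentSpan();
 if (span != null) {
   span.addKVAnnotation("responseSize", "4096");
 }
 {code}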



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-11977) test-patch.sh should only run javadoc if comments are touched

2015-05-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved HDFS-8414 to HADOOP-11977:
-

Key: HADOOP-11977  (was: HDFS-8414)
Project: Hadoop Common  (was: Hadoop HDFS)

 test-patch.sh should only run javadoc if comments are touched
 -

 Key: HADOOP-11977
 URL: https://issues.apache.org/jira/browse/HADOOP-11977
 Project: Hadoop Common
  Issue Type: Test
Reporter: Allen Wittenauer

 I suspect an optimization could be made: when a patch to java code only 
 touches comments, the javac and unit test runs shouldn't fire off.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11713) ViewFileSystem should support snapshot methods.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14545566#comment-14545566
 ] 

Hudson commented on HADOOP-11713:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2126 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2126/])
HADOOP-11713. ViewFileSystem should support snapshot methods. Contributed by 
Rakesh R. (cnauroth: rev 09fe16f166392a99e1e54001a9112c6a4632dfc8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java


 ViewFileSystem should support snapshot methods.
 ---

 Key: HADOOP-11713
 URL: https://issues.apache.org/jira/browse/HADOOP-11713
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.2.0
Reporter: Chris Nauroth
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HADOOP-11713-001.patch, HADOOP-11713-002.patch, 
 HDFS-5641-001.patch


 Currently, {{ViewFileSystem}} does not dispatch snapshot methods through the 
 mount table.  All snapshot methods throw {{UnsupportedOperationException}}, 
 even though the underlying mount points could be HDFS instances that support 
 snapshots.  We need to update {{ViewFileSystem}} to implement the snapshot 
 methods.
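
 For illustration, a hedged sketch of the dispatch pattern such an 
 implementation would likely follow for each snapshot method, mirroring how 
 {{ViewFileSystem}} already routes other calls through the mount table (the 
 resolver call below is assumed from that existing pattern):
 {code}
 // Resolve the view path to its mount point, then delegate the snapshot
 // call to the underlying FileSystem with the remaining path.
 @Override
 public Path createSnapshot(Path path, String snapshotName) throws IOException {
   InodeTree.ResolveResult<FileSystem> res =
       fsState.resolve(getUriPath(path), true);
   return res.targetFileSystem.createSnapshot(res.remainingPath, snapshotName);
 }
 {code}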



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11960) Enable Azure-Storage Client Side logging.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14545568#comment-14545568
 ] 

Hudson commented on HADOOP-11960:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2126 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2126/])
HADOOP-11960. Enable Azure-Storage Client Side logging. Contributed by 
Dushyanth. (cnauroth: rev cb8e69a80cecb95abdfc93a787bea0bedef275ed)
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemClientLogging.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Enable Azure-Storage Client Side logging.
 -

 Key: HADOOP-11960
 URL: https://issues.apache.org/jira/browse/HADOOP-11960
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Dushyanth
Assignee: Dushyanth
 Fix For: 2.8.0

 Attachments: 
 0001-HADOOP-945-Enabling-WASB-Azure-Storage-Client-side-l.patch, 
 HADOOP-11960.002.patch


 Azure storage exceptions are currently logged as part of the WASB code, 
 which often is not very informative. The Azure Storage SDK supports 
 client-side logging that, when enabled, logs relevant information about 
 requests made by the storage client. 
 This JIRA is created to enable Azure Storage client-side logging at the job 
 submission level. Users should be able to configure client-side logging on a 
 per-job basis.
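
 For illustration, a hedged sketch of per-job enablement via job 
 configuration (the key name {{fs.azure.storage.client.logging}} is an 
 assumption here, not confirmed by this issue):
 {code}
 <!-- Hypothetical per-job setting: turn on Azure Storage SDK
      client-side logging for this job only. -->
 <property>
   <name>fs.azure.storage.client.logging</name>
   <value>true</value>
 </property>
 {code}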



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11713) ViewFileSystem should support snapshot methods.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14545596#comment-14545596
 ] 

Hudson commented on HADOOP-11713:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/186/])
HADOOP-11713. ViewFileSystem should support snapshot methods. Contributed by 
Rakesh R. (cnauroth: rev 09fe16f166392a99e1e54001a9112c6a4632dfc8)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java


 ViewFileSystem should support snapshot methods.
 ---

 Key: HADOOP-11713
 URL: https://issues.apache.org/jira/browse/HADOOP-11713
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.2.0
Reporter: Chris Nauroth
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HADOOP-11713-001.patch, HADOOP-11713-002.patch, 
 HDFS-5641-001.patch


 Currently, {{ViewFileSystem}} does not dispatch snapshot methods through the 
 mount table.  All snapshot methods throw {{UnsupportedOperationException}}, 
 even though the underlying mount points could be HDFS instances that support 
 snapshots.  We need to update {{ViewFileSystem}} to implement the snapshot 
 methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11960) Enable Azure-Storage Client Side logging.

2015-05-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14545598#comment-14545598
 ] 

Hudson commented on HADOOP-11960:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/186/])
HADOOP-11960. Enable Azure-Storage Client Side logging. Contributed by 
Dushyanth. (cnauroth: rev cb8e69a80cecb95abdfc93a787bea0bedef275ed)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemClientLogging.java


 Enable Azure-Storage Client Side logging.
 -

 Key: HADOOP-11960
 URL: https://issues.apache.org/jira/browse/HADOOP-11960
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Reporter: Dushyanth
Assignee: Dushyanth
 Fix For: 2.8.0

 Attachments: 
 0001-HADOOP-945-Enabling-WASB-Azure-Storage-Client-side-l.patch, 
 HADOOP-11960.002.patch


 Azure storage exceptions are currently logged as part of the WASB code, 
 which often is not very informative. The Azure Storage SDK supports 
 client-side logging that, when enabled, logs relevant information about 
 requests made by the storage client. 
 This JIRA is created to enable Azure Storage client-side logging at the job 
 submission level. Users should be able to configure client-side logging on a 
 per-job basis.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10971) Flag to make `hadoop fs -ls` print filenames only

2015-05-15 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-10971:

Attachment: HADOOP-10971.003.patch

I was wrong: TestLs#processPathDirectoryPathOnly failed in my environment as 
well, so I'm attaching a revised patch. In it, I:

* fixed the unit test failure
* removed the Found n items message when -C is specified
* updated the documentation
* made -C overrule -u, because the timestamp isn't displayed when -C is 
specified
* moved the variables and functions related to the -C option before the -d 
option, because they are sorted in alphabetical order in the existing code
* added test cases and modified existing ones, because the previous patch was 
not sufficient


 Flag to make `hadoop fs -ls` print filenames only
 -

 Key: HADOOP-10971
 URL: https://issues.apache.org/jira/browse/HADOOP-10971
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.3.0
Reporter: Ryan Williams
Assignee: Kengo Seki
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10971.001.patch, HADOOP-10971.002.patch, 
 HADOOP-10971.003.patch


 It would be useful to have a flag that made {{hadoop fs -ls}} only print 
 filenames, instead of full {{stat}} info. The {{-C}} flag from GNU {{ls}} 
 is the closest analog to this behavior that I've found, so I propose that as 
 the flag.
 Per [this stackoverflow answer|http://stackoverflow.com/a/21574829], I've 
 reluctantly added a {{hadoop-ls-C}} wrapper that expands to {{hadoop fs -ls 
 "$@" | sed 1d | perl -wlne'print +(split " ",$_,8)\[7\]'}} to a few projects 
 I've worked on, and it would obviously be nice to have hadoop save me (and 
 others) from such hackery.
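
 For illustration, with the proposed flag the wrapper above would reduce to 
 something like the following (hedged; the paths and output lines are 
 illustrative):
 {code}
 # Print paths only: no "Found n items" header and no
 # permissions/owner/size/timestamp columns.
 hadoop fs -ls -C /user/alice
 /user/alice/data.txt
 /user/alice/logs
 {code}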



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10971) Flag to make `hadoop fs -ls` print filenames only

2015-05-15 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-10971:

Labels: BB2015-05-RFC  (was: BB2015-05-TBR)

 Flag to make `hadoop fs -ls` print filenames only
 -

 Key: HADOOP-10971
 URL: https://issues.apache.org/jira/browse/HADOOP-10971
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.3.0
Reporter: Ryan Williams
Assignee: Kengo Seki
  Labels: BB2015-05-RFC
 Attachments: HADOOP-10971.001.patch, HADOOP-10971.002.patch, 
 HADOOP-10971.003.patch


 It would be useful to have a flag that made {{hadoop fs -ls}} only print 
 filenames, instead of full {{stat}} info. The {{-C}} flag from GNU {{ls}} 
 is the closest analog to this behavior that I've found, so I propose that as 
 the flag.
 Per [this stackoverflow answer|http://stackoverflow.com/a/21574829], I've 
 reluctantly added a {{hadoop-ls-C}} wrapper that expands to {{hadoop fs -ls 
 "$@" | sed 1d | perl -wlne'print +(split " ",$_,8)\[7\]'}} to a few projects 
 I've worked on, and it would obviously be nice to have hadoop save me (and 
 others) from such hackery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11847:
---
Attachment: HADOOP-11847-HDFS-7285-v5.patch

Rebased after HADOOP-11938.

 Enhance raw coder allowing to read least required inputs in decoding
 

 Key: HADOOP-11847
 URL: https://issues.apache.org/jira/browse/HADOOP-11847
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
 HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
 HADOOP-11847-v1.patch, HADOOP-11847-v2.patch, HADOOP-11847-v5.patch


 This is to enhance the raw erasure coder to allow reading only the least 
 required inputs while decoding. It will also refine and document the relevant 
 APIs for better understanding and usage. When using the least required 
 inputs, it may add computational overhead but will possibly perform better 
 overall, since less network traffic and disk IO are involved.
 This was already planned, but I was just reminded of it by [~zhz]'s question 
 raised in HDFS-7678, also copied here:
 bq. Kai Zheng I have a question about decoding: in a (6+3) schema, if block 
 #2 is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how 
 should I construct the inputs to RawErasureDecoder#decode?
 With this work, hopefully the answer to the above question will be obvious.
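
 For illustration, a hedged sketch of what the answer might look like once 
 this is done, for the (6+3) example above (the decode signature is taken 
 from {{RawErasureDecoder#decode}}; the buffer variables are illustrative, 
 and null is assumed to mark an input the caller did not read):
 {code}
 // (6+3) schema, block #2 missing, repaired from data blocks 0, 1, 3, 4, 5
 // and parity block 8; indexes 2, 6 and 7 stay null because they were not read.
 ByteBuffer[] inputs = new ByteBuffer[9];
 inputs[0] = block0; inputs[1] = block1;
 inputs[3] = block3; inputs[4] = block4; inputs[5] = block5;
 inputs[8] = parity2;
 int[] erasedIndexes = new int[] { 2 };            // what we want rebuilt
 ByteBuffer[] outputs = new ByteBuffer[] { recoveredBlock2 };
 decoder.decode(inputs, erasedIndexes, outputs);
 {code}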



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11847:
---
Attachment: (was: HADOOP-11847-v5.patch)

 Enhance raw coder allowing to read least required inputs in decoding
 

 Key: HADOOP-11847
 URL: https://issues.apache.org/jira/browse/HADOOP-11847
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
 HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
 HADOOP-11847-v1.patch, HADOOP-11847-v2.patch


 This is to enhance the raw erasure coder to allow reading only the least 
 required inputs while decoding. It will also refine and document the relevant 
 APIs for better understanding and usage. When using the least required 
 inputs, it may add computational overhead but will possibly perform better 
 overall, since less network traffic and disk IO are involved.
 This was already planned, but I was just reminded of it by [~zhz]'s question 
 raised in HDFS-7678, also copied here:
 bq. Kai Zheng I have a question about decoding: in a (6+3) schema, if block 
 #2 is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how 
 should I construct the inputs to RawErasureDecoder#decode?
 With this work, hopefully the answer to the above question will be obvious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546516#comment-14546516
 ] 

Hadoop QA commented on HADOOP-11847:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  6s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 47s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  8s | The applied patch generated  5 
new checkstyle issues (total was 8, now 13). |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 43s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  22m 56s | Tests passed in 
hadoop-common. |
| | |  60m 47s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733291/HADOOP-11847-HDFS-7285-v5.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 0180ef0 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6707/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6707/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6707/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6707/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6707/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6707/console |


This message was automatically generated.

 Enhance raw coder allowing to read least required inputs in decoding
 

 Key: HADOOP-11847
 URL: https://issues.apache.org/jira/browse/HADOOP-11847
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
 HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-HDFS-7285-v5.patch, 
 HADOOP-11847-v1.patch, HADOOP-11847-v2.patch


 This is to enhance the raw erasure coder to allow reading only the least 
 required inputs while decoding. It will also refine and document the relevant 
 APIs for better understanding and usage. When using the least required 
 inputs, it may add computational overhead but will possibly perform better 
 overall, since less network traffic and disk IO are involved.
 This was already planned, but I was just reminded of it by [~zhz]'s question 
 raised in HDFS-7678, also copied here:
 bq. Kai Zheng I have a question about decoding: in a (6+3) schema, if block 
 #2 is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how 
 should I construct the inputs to RawErasureDecoder#decode?
 With this work, hopefully the answer to the above question will be obvious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11980) Add HBase as user of DataChecksum library

2015-05-15 Thread Apekshit Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apekshit Sharma updated HADOOP-11980:
-
Status: Patch Available  (was: Open)

 Add HBase as user of DataChecksum library
 -

 Key: HADOOP-11980
 URL: https://issues.apache.org/jira/browse/HADOOP-11980
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Apekshit Sharma
Priority: Trivial
 Attachments: HADOOP-11980.patch


 HBASE-11927 adds functionality in HBase to use the native Hadoop library, if 
 available, via the DataChecksum library.
 Add HBase to InterfaceAudience.LimitedPrivate.
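
 For context, a hedged sketch of the kind of usage HBase would rely on 
 ({{DataChecksum}} implements {{java.util.zip.Checksum}}; the chunk size and 
 data here are illustrative):
 {code}
 import org.apache.hadoop.util.DataChecksum;

 // Native-accelerated CRC32C via Hadoop's DataChecksum library.
 DataChecksum sum = DataChecksum.newDataChecksum(
     DataChecksum.Type.CRC32C, 512 /* bytesPerChecksum */);
 byte[] chunk = new byte[512];          // illustrative data
 sum.update(chunk, 0, chunk.length);
 long crc = sum.getValue();
 {code}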



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11980) Add HBase as user of DataChecksum library

2015-05-15 Thread Apekshit Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apekshit Sharma updated HADOOP-11980:
-
Attachment: HADOOP-11980.patch

 Add HBase as user of DataChecksum library
 -

 Key: HADOOP-11980
 URL: https://issues.apache.org/jira/browse/HADOOP-11980
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Apekshit Sharma
Priority: Trivial
 Attachments: HADOOP-11980.patch


 HBASE-11927 adds functionality in HBase to use the native Hadoop library, if 
 available, via the DataChecksum library.
 Add HBase to InterfaceAudience.LimitedPrivate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11894) Bump the version of HTrace to 3.2.0-incubating

2015-05-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11894:
--
Attachment: HADOOP-11894.002.patch

I fixed the checkstyle issues except for the length of DFSClient.java.

bq. DFSClient.java:1: File length is 3,240 lines (max allowed is 2,000).

There is no quick fix for this and it should be addressed in another JIRA.

bq. The patch doesn't appear to include any new or modified tests. 

The changes in this patch are covered by existing tests.

bq. Failed unit tests   hadoop.tools.TestHdfsConfigFields

This should already be fixed by HDFS-8371.


 Bump the version of HTrace to 3.2.0-incubating
 --

 Key: HADOOP-11894
 URL: https://issues.apache.org/jira/browse/HADOOP-11894
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
 Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch


 * update pom.xml
 * update documentation
 * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
 {{addKVAnnotation(String key, String value)}}
 * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
 {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7723) Automatically generate good Release Notes

2015-05-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7723.
--
Resolution: Duplicate

Closing as a dupe, based upon HADOOP-11731.

 Automatically generate good Release Notes
 -

 Key: HADOOP-7723
 URL: https://issues.apache.org/jira/browse/HADOOP-7723
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.20.204.0, 0.23.0
Reporter: Matt Foley
Assignee: Matt Foley

 In branch-0.20-security, there is a tool, src/docs/relnotes.py, that 
 automatically generates Release Notes.  Fix its deficiencies and port it up 
 to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8364) Rationalize the way architecture-specific sub-components are built with ant in branch-1

2015-05-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8364.
--
Resolution: Won't Fix

Closing as Won't Fix given development on branch-1 has effectively ceased.

 Rationalize the way architecture-specific sub-components are built with ant 
 in branch-1
 ---

 Key: HADOOP-8364
 URL: https://issues.apache.org/jira/browse/HADOOP-8364
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Matt Foley
Assignee: Giridharan Kesavan

 Three different compile flags, compile.native, compile.c++, and 
 compile.libhdfs, turn on or off different architecture-specific subcomponent 
 builds, but they are generally all off or all on and there's no evident need 
 for three different ways to do things.  Also, in build.xml, jsvc and 
 task-controller are included in targets "package" and "bin-package" as 
 sub-ant tasks, while librecordio is included as a simple dependency.  We 
 should work through these and get them done in one understandable way.
 This is a matter of maintainability and understandability, and therefore 
 robustness under future changes in build.xml.  No substantial change in 
 functionality is proposed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-05-15 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14545934#comment-14545934
 ] 

Gopal V commented on HADOOP-11772:
--

bq. The RPC client will send out the request asynchronously.

Sending asynchronously is indeed what it does - so it does not fail even 
without this patch.

The problem is that it takes 200-300ms to send it out, by which time another 
IPC update has already queued up for the same connection.

See the two threads locked against each other in the bug report, where one is 
doing a NameNode operation and another is doing an ApplicationMaster update - 
operations which need never lock against each other in reality. They contend 
only because they both use the same {{ipc.Client}} singleton.

If you want to revisit this fix, please remove the Client singleton or find 
another way to remove the synchronization barrier around getConnection() and 
the way it prevents reopening connections for IPC.

The current IPC implementation works asynchronously, but is too slow to keep up 
with sub-second performance on a multi-threaded daemon which uses a singleton 
locked object for 24 cores doing everything (namenode lookups, app master 
heartbeats, data movement events, statistic updates, error recovery).
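
For illustration, a hedged sketch of one direction such a fix could take - 
replacing the coarse {{synchronized}} method with a concurrent map so lookups 
for different factories never serialize (this is a sketch, not the attached 
patch; it assumes Java 8's computeIfAbsent, and real refcounting needs more 
care than shown):

{code}
import java.net.SocketFactory;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

private final ConcurrentHashMap<SocketFactory, Client> clients =
    new ConcurrentHashMap<>();

Client getClient(Configuration conf, SocketFactory factory,
    Class<? extends Writable> valueClass) {
  final AtomicBoolean created = new AtomicBoolean(false);
  Client client = clients.computeIfAbsent(factory, f -> {
    created.set(true);
    return new Client(valueClass, conf, f);  // new Client starts at refcount 1
  });
  if (!created.get()) {
    client.incCount();                       // only bump cached clients
  }
  return client;
}
{code}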

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-002.patch, 
 HADOOP-11772-003.patch, HADOOP-11772-wip-001.patch, 
 HADOOP-11772-wip-002.patch, after-ipc-fix.png, dfs-sync-ipc.png, 
 sync-client-bt.png, sync-client-threads.png


 {code}
 private static ClientCache CLIENTS = new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
     SocketFactory factory, Class<? extends Writable> valueClass) {
   ...
   Client client = clients.get(factory);
   if (client == null) {
     client = new Client(valueClass, conf, factory);
     clients.put(factory, client);
   } else {
     client.incCount();
   }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-6700) Hadoop commands guide should include examples

2015-05-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6700.
--
Resolution: Fixed

 Hadoop commands guide should include examples
 -

 Key: HADOOP-6700
 URL: https://issues.apache.org/jira/browse/HADOOP-6700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Amareshwari Sriramadasu
  Labels: newbie

 Currently, the Hadoop commands guide 
 (http://hadoop.apache.org/common/docs/r0.20.0/commands_manual.html) just 
 lists all the available command line options with a description. It should 
 include examples for each command for more clarity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11980) Make DataChecksum APIs public

2015-05-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11980:

Summary: Make DataChecksum APIs public  (was: Add HBase as user of 
DataChecksum library)

 Make DataChecksum APIs public
 -

 Key: HADOOP-11980
 URL: https://issues.apache.org/jira/browse/HADOOP-11980
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Apekshit Sharma
Priority: Trivial
 Attachments: HADOOP-11980.patch


 HBASE-11927 adds functionality in HBase to use the native Hadoop library, if 
 available, via the DataChecksum library.
 Add HBase to InterfaceAudience.LimitedPrivate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11980) Make DataChecksum APIs public

2015-05-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546111#comment-14546111
 ] 

Steve Loughran commented on HADOOP-11980:
-

..so change the title. And yes, scoping things down is wrong: 
private-in-project, yes, but restricting use to selected outside projects is 
unfair to other apps, and you've just made a commitment to some form of API 
stability anyway.

I'm leaving it to others to judge whether this API is stable enough to make 
public.

 Make DataChecksum APIs public
 -

 Key: HADOOP-11980
 URL: https://issues.apache.org/jira/browse/HADOOP-11980
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Apekshit Sharma
Priority: Trivial
 Attachments: HADOOP-11980.patch


 HBASE-11927 adds functionality in HBase to use the native Hadoop library, if 
 available, via the DataChecksum library.
 Add HBase to InterfaceAudience.LimitedPrivate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11980) Make DataChecksum APIs public

2015-05-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546142#comment-14546142
 ] 

Hadoop QA commented on HADOOP-11980:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 32s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 33s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  5s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 40s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  21m 57s | Tests passed in 
hadoop-common. |
| | |  58m 44s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733227/HADOOP-11980.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 03a293a |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6703/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6703/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6703/console |


This message was automatically generated.

 Make DataChecksum APIs public
 -

 Key: HADOOP-11980
 URL: https://issues.apache.org/jira/browse/HADOOP-11980
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Apekshit Sharma
Priority: Trivial
 Attachments: HADOOP-11980.patch


 HBASE-11927 adds functionality in HBase to use the native Hadoop library, if 
 available, via the DataChecksum library.
 Add HBase to InterfaceAudience.LimitedPrivate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-15 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546153#comment-14546153
 ] 

Chris Nauroth commented on HADOOP-11929:


bq. Note for me: For hadoop, add -Pparallel-tests while we're here.

[~aw], I was thinking of turning on parallel-tests today in a separate jira and 
seeing how it goes for the hadoop-hdfs tests.  Any objection?  If you think 
you'll do this soon anyway, then I'll hold off.

 add test-patch plugin points for customizing build layout
 -

 Key: HADOOP-11929
 URL: https://issues.apache.org/jira/browse/HADOOP-11929
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sean Busbey
Assignee: Allen Wittenauer
Priority: Minor

 nothing fancy, just something that doesn't have a top level pom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11980) Add HBase as user of DataChecksum library

2015-05-15 Thread Apekshit Sharma (JIRA)
Apekshit Sharma created HADOOP-11980:


 Summary: Add HBase as user of DataChecksum library
 Key: HADOOP-11980
 URL: https://issues.apache.org/jira/browse/HADOOP-11980
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Apekshit Sharma
Priority: Trivial


[HBASE-11927|https://issues.apache.org/jira/browse/HBASE-11927] adds 
functionality in HBase to use the native Hadoop library, if available, via 
the DataChecksum library.
Add HBase to InterfaceAudience.LimitedPrivate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11980) Add HBase as user of DataChecksum library

2015-05-15 Thread Apekshit Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apekshit Sharma updated HADOOP-11980:
-
Description: 
HBASE-11927 adds functionality in HBase to use the native Hadoop library, if 
available, via the DataChecksum library.
Add HBase to InterfaceAudience.LimitedPrivate.

  was:
[HBASE-11927|https://issues.apache.org/jira/browse/HBASE-11927] adds 
functionality in hbase to use native hadoop library if available, by using 
DataChecksum library.
Add HBase to InterfaceAudience.LimitedPrivate.


 Add HBase as user of DataChecksum library
 -

 Key: HADOOP-11980
 URL: https://issues.apache.org/jira/browse/HADOOP-11980
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Apekshit Sharma
Priority: Trivial

 HBASE-11927 adds functionality in HBase to use the native Hadoop library, if 
 available, via the DataChecksum library.
 Add HBase to InterfaceAudience.LimitedPrivate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11980) Add HBase as user of DataChecksum library

2015-05-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546027#comment-14546027
 ] 

Allen Wittenauer commented on HADOOP-11980:
---

 InterfaceAudience.LimitedPrivate is one of the most useless classifications we 
have.  Just make it public and be done with it. 

 Add HBase as user of DataChecksum library
 -

 Key: HADOOP-11980
 URL: https://issues.apache.org/jira/browse/HADOOP-11980
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Apekshit Sharma
Priority: Trivial

 HBASE-11927 adds functionality in HBase to use the native Hadoop library, if 
 available, via the DataChecksum library.
 Add HBase to InterfaceAudience.LimitedPrivate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11976) submit job with oozie to between cluster

2015-05-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11976.
---
Resolution: Invalid

Sunmeng! JIRA is for tracking development changes to Hadoop. For user queries, 
please send an email to u...@hadoop.apache.org. Being able to submit jobs to 
YARN using oozie is a pretty basic feature that I doubt would be broken in 
2.5.x.

 submit job with oozie to between cluster
 

 Key: HADOOP-11976
 URL: https://issues.apache.org/jira/browse/HADOOP-11976
 Project: Hadoop Common
  Issue Type: Bug
Reporter: sunmeng





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11981) Add storage policy APIs to filesystem docs

2015-05-15 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-11981:
--

 Summary: Add storage policy APIs to filesystem docs
 Key: HADOOP-11981
 URL: https://issues.apache.org/jira/browse/HADOOP-11981
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Arpit Agarwal


HDFS-8345 exposed the storage policy APIs via the FileSystem.

The FileSystem docs should be updated accordingly.
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11980) Make DataChecksum APIs public

2015-05-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14546205#comment-14546205
 ] 

Sean Busbey commented on HADOOP-11980:
--

{quote}
Leaving it to others to cover whether this API is stable enough to make public.
{quote}

Nothing jumps out as problematic. Some of the methods could use better javadocs.

 Make DataChecksum APIs public
 -

 Key: HADOOP-11980
 URL: https://issues.apache.org/jira/browse/HADOOP-11980
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Apekshit Sharma
Priority: Trivial
 Attachments: HADOOP-11980.patch


 HBASE-11927 adds functionality in HBase to use the native Hadoop library, if 
 available, via the DataChecksum library.
 Add HBase to InterfaceAudience.LimitedPrivate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)