[jira] [Commented] (HADOOP-11997) CMake CMAKE_C_FLAGS are non-portable

2015-05-27 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560753#comment-14560753
 ] 

Alan Burlison commented on HADOOP-11997:


I've currently modified the various CMake files to set RelWithDebInfo if no 
build mode is specified; I'll remove that and just set CFLAGS explicitly if 
that's considered to be a better option.

However, the -D_GNU_SOURCE flag is toxic to portability and will just repeatedly 
cause portability breakage if it's left in, so I'm intending to remove it and 
explicitly bracket all such Linux-specific code in the source with 
#define/#undef blocks. That will mean that people doing port work to other 
platforms will have an easier time of it in future.

Adding Solaris Studio compiler support is a long-term goal, but initially 
targeting gcc is the simplest option, so I'm proposing to add Solaris/gcc 
support first. Note also that Solaris Studio is also available on Linux 
(http://www.oracle.com/technetwork/server-storage/solarisstudio/downloads/index-jsp-141149.html)
 so when Studio support is added it will need to be done separately from the 
platform detection in any case.

 CMake CMAKE_C_FLAGS are non-portable
 

 Key: HADOOP-11997
 URL: https://issues.apache.org/jira/browse/HADOOP-11997
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Critical

 hadoop-common-project/hadoop-common/src/CMakeLists.txt 
 (https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt#L110)
  contains the following unconditional assignments to CMAKE_C_FLAGS:
 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -g -Wall -O2")
 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_REENTRANT -D_GNU_SOURCE")
 set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_LARGEFILE_SOURCE 
 -D_FILE_OFFSET_BITS=64")
 There are several issues here:
 1. -D_GNU_SOURCE globally enables the use of all Linux-only extensions in 
 hadoop-common native source. This is probably a major contributor to the poor 
 cross-platform portability of Hadoop native code to non-Linux platforms as it 
 makes it easy for developers to use non-portable Linux features without 
 realising. Use of Linux-specific features should be correctly bracketed with 
 conditional macro blocks that provide an alternative for non-Linux platforms.
 2. -g -Wall -O2 turns on debugging for all builds. I believe the correct 
 mechanism is to set the CMAKE_BUILD_TYPE CMake variable. If it is still 
 necessary to override CFLAGS, it should probably be done conditionally, 
 dependent on the value of CMAKE_BUILD_TYPE.
 3. -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 On Solaris these flags are 
 only needed for largefile support in ILP32 applications; LP64 applications 
 are largefile by default. I believe the same is true on Linux, so these flags 
 are harmless but redundant for 64-bit compilation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12037) Fix wrong classname in example configuration of hadoop-auth documentation

2015-05-27 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12037:
--
Attachment: HADOOP-12037.002.patch

 Fix wrong classname in example configuration of hadoop-auth documentation
 -

 Key: HADOOP-12037
 URL: https://issues.apache.org/jira/browse/HADOOP-12037
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Attachments: HADOOP-12037.001.patch, HADOOP-12037.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12037) Fix wrong classname in example configuration of hadoop-auth documentation

2015-05-27 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12037:
--
Status: Patch Available  (was: Open)

 Fix wrong classname in example configuration of hadoop-auth documentation
 -

 Key: HADOOP-12037
 URL: https://issues.apache.org/jira/browse/HADOOP-12037
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Attachments: HADOOP-12037.001.patch, HADOOP-12037.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12037) Fix wrong classname in example configuration of hadoop-auth documentation

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560687#comment-14560687
 ] 

Hadoop QA commented on HADOOP-12037:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   3m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 19s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  1s | Site still builds. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   6m 23s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735561/HADOOP-12037.002.patch 
|
| Optional Tests | site |
| git revision | trunk / bb18163 |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6844/console |


This message was automatically generated.

 Fix wrong classname in example configuration of hadoop-auth documentation
 -

 Key: HADOOP-12037
 URL: https://issues.apache.org/jira/browse/HADOOP-12037
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Attachments: HADOOP-12037.001.patch, HADOOP-12037.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560809#comment-14560809
 ] 

Hudson commented on HADOOP-11242:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #940 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/940/])
HADOOP-11242. Record the time of calling in tracing span of IPC server. 
Contributed by Masatake Iwasaki. (aajisaka: rev 
bb1816328a36ec3f8c6bd9fdb950d9a4ec8388c8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560811#comment-14560811
 ] 

Hudson commented on HADOOP-11969:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #940 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/940/])
HADOOP-11969. ThreadLocal initialization in several classes is not thread safe 
(Sean Busbey via Colin P. McCabe) (cmccabe: rev 
7dba7005b79994106321b0f86bc8f4ea51a3c185)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestDirHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordOutput.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSMDCFilter.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableOutput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesInput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableInput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordInput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordInput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MD5Hash.java


 ThreadLocal initialization in several classes is not thread safe
 

 Key: HADOOP-11969
 URL: https://issues.apache.org/jira/browse/HADOOP-11969
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: thread-safety
 Fix For: 2.8.0

 Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
 HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch


 Right now, the initialization of the thread-local factories for encoder / 
 decoder in Text is not marked final. This means they end up with a static 
 initializer that is not guaranteed to have finished running before the members 
 are visible. 
 Under heavy contention, this means that during initialization some users will 
 get an NPE:
 {code}
 (2015-05-05 08:58:03.974 : solr_server_log.log) 
  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
   at org.apache.hadoop.io.Text.decode(Text.java:406)
   at org.apache.hadoop.io.Text.decode(Text.java:389)
   at org.apache.hadoop.io.Text.toString(Text.java:280)
   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
   at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
   at 

[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560677#comment-14560677
 ] 

Hudson commented on HADOOP-11242:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7907 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7907/])
HADOOP-11242. Record the time of calling in tracing span of IPC server. 
Contributed by Masatake Iwasaki. (aajisaka: rev 
bb1816328a36ec3f8c6bd9fdb950d9a4ec8388c8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560725#comment-14560725
 ] 

Hadoop QA commented on HADOOP-11242:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 30s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 26s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 16s | The applied patch generated  1 
new checkstyle issues (total was 218, now 218). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 42s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 28s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 163m 18s | Tests passed in hadoop-hdfs. 
|
| | | 229m  3s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731539/HADOOP-11242.003.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6842/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6842/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6842/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6842/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6842/console |


This message was automatically generated.

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12037) Fix wrong classname in example configuration of hadoop-auth documentation

2015-05-27 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12037:
--
Status: Open  (was: Patch Available)

 Fix wrong classname in example configuration of hadoop-auth documentation
 -

 Key: HADOOP-12037
 URL: https://issues.apache.org/jira/browse/HADOOP-12037
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Attachments: HADOOP-12037.001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11952) Native compilation on Solaris fails on Yarn due to use of FTS

2015-05-27 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560730#comment-14560730
 ] 

Alan Burlison commented on HADOOP-11952:


If they are in HDFS, please log them under the HDFS top-level JIRA that I 
opened rather than the YARN one; see HDFS-8478.

There's little point in targeting the 32-bit JVM as, apart from the CMake 
adjustments, the code changes to build 64-bit on Solaris are mostly identical 
to those needed for 32-bit. Also, as of Java 8 there is no 32-bit JVM for 
Solaris, only 64-bit is available 
(http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8023288), and Java 7 is 
EOL (https://www.java.com/en/download/faq/java_7.xml).

I think it's premature at this point to talk about timelines as we are still 
evaluating the work that needs to be done and any targets we gave would just be 
guesses.

 Native compilation on Solaris fails on Yarn due to use of FTS
 -

 Key: HADOOP-11952
 URL: https://issues.apache.org/jira/browse/HADOOP-11952
 Project: Hadoop Common
  Issue Type: Sub-task
 Environment: Solaris 11.2
Reporter: Malcolm Kavalsky
Assignee: Alan Burlison
   Original Estimate: 24h
  Remaining Estimate: 24h

 Compiling the Yarn Node Manager results in "fts not found". On Solaris we 
 have an alternative, ftw, with similar functionality.
 This is isolated to a single file, container-executor.c.
 Note that this will just fix the compilation error. A more serious issue is 
 that Solaris does not support cgroups as Linux does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560769#comment-14560769
 ] 

Larry McCay commented on HADOOP-11934:
--

[~cnauroth] - thanks again for the review!
Regarding #4 above: ugh. Wouldn't leveraging FileUtil for this reintroduce the 
lookup of groups through LdapGroupsMapping, putting us back at square one?

The reason for adding the LocalJavaKeyStoreProvider was to avoid the recursive 
dependency on LdapGroupsMapping that we get when using LDAP based group lookup 
with the credential provider.

Perhaps we can use java.nio.file.attribute.AclFileAttributeView 
(http://docs.oracle.com/javase/7/docs/api/java/nio/file/attribute/AclFileAttributeView.html)
 for this.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 

[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560825#comment-14560825
 ] 

Hudson commented on HADOOP-11242:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/210/])
HADOOP-11242. Record the time of calling in tracing span of IPC server. 
Contributed by Masatake Iwasaki. (aajisaka: rev 
bb1816328a36ec3f8c6bd9fdb950d9a4ec8388c8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11242:
---
Labels:   (was: BB2015-05-RFC)

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560642#comment-14560642
 ] 

Akira AJISAKA commented on HADOOP-11242:


Looks good to me, +1. I used HADOOP-11242-test.patch to delay putting the Call 
into the callQueue by 10ms, to verify that the time is recorded and delayed.

!ScreenShot-test.png!
Yeah! A red dot, which marks the elapsed time when the Call got out of the 
callQueue, is added 10ms after the span starts.

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11242:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~iwasakims] for contribution.

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12037) Fix wrong classname in example configuration of hadoop-auth documentation

2015-05-27 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-12037:
-

 Summary: Fix wrong classname in example configuration of 
hadoop-auth documentation
 Key: HADOOP-12037
 URL: https://issues.apache.org/jira/browse/HADOOP-12037
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12009) FileSystemContractBaseTest:testListStatus should not assume listStatus returns sorted results

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560747#comment-14560747
 ] 

Hadoop QA commented on HADOOP-12009:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 50s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  5s | The applied patch generated  1 
new checkstyle issues (total was 142, now 142). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 4  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 40s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 48s | Tests passed in 
hadoop-common. |
| | |  61m  9s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735557/HADOOP-12009.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / bb18163 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6845/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6845/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6845/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6845/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6845/console |


This message was automatically generated.

 FileSystemContractBaseTest:testListStatus should not assume listStatus 
 returns sorted results
 -

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Attachments: HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}
   /**
    * List the statuses of the files/directories in the given path if the path
    * is a directory.
    *
    * @param f given path
    * @return the statuses of the files/directories in the given path
    * @throws FileNotFoundException when the path does not exist;
    *         IOException see specific implementation
    */
   public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                          IOException;
 {code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}
 Path[] testDirs = { path("/test/hadoop/a"),
                     path("/test/hadoop/b"),
                     path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());
 {code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.
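As a sketch of the order-independent comparison the issue calls for — the class and method names below are invented for illustration and are not taken from the attached patch:

```java
import java.util.Arrays;

// Illustrative sketch only: compare listStatus results without assuming
// any ordering. Names here are hypothetical, not from HADOOP-12009.1.patch.
public class ListStatusCheck {

    // Sort copies of both sides so element order no longer matters,
    // then compare element-by-element.
    static boolean sameEntries(String[] actual, String[] expected) {
        String[] a = actual.clone();
        String[] e = expected.clone();
        Arrays.sort(a);
        Arrays.sort(e);
        return Arrays.equals(a, e);
    }

    public static void main(String[] args) {
        String[] returned = { "/test/hadoop/b", "/test/hadoop/c/1", "/test/hadoop/a" };
        String[] expected = { "/test/hadoop/a", "/test/hadoop/b", "/test/hadoop/c/1" };
        if (!sameEntries(returned, expected)) {
            throw new AssertionError("entries differ");
        }
        System.out.println("entries match regardless of order");
    }
}
```

This keeps the test's "all three paths are present" guarantee while dropping the ordering assumption.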





[jira] [Updated] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11242:
---
Attachment: ScreenShot-test.png
HADOOP-11242-test.patch

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.





[jira] [Updated] (HADOOP-12009) FileSystemContractBaseTest:testListStatus should not assume listStatus returns sorted results

2015-05-27 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HADOOP-12009:

Attachment: HADOOP-12009.1.patch

Attached an initial patch.
Please review.

 FileSystemContractBaseTest:testListStatus should not assume listStatus 
 returns sorted results
 -

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Attachments: HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}
   /**
    * List the statuses of the files/directories in the given path if the path
    * is a directory.
    *
    * @param f given path
    * @return the statuses of the files/directories in the given path
    * @throws FileNotFoundException when the path does not exist;
    *         IOException see specific implementation
    */
   public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                          IOException;
 {code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}
 Path[] testDirs = { path("/test/hadoop/a"),
                     path("/test/hadoop/b"),
                     path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());
 {code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.





[jira] [Updated] (HADOOP-12009) FileSystemContractBaseTest:testListStatus should not assume listStatus returns sorted results

2015-05-27 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HADOOP-12009:

Status: Patch Available  (was: Open)

 FileSystemContractBaseTest:testListStatus should not assume listStatus 
 returns sorted results
 -

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Attachments: HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}
   /**
    * List the statuses of the files/directories in the given path if the path
    * is a directory.
    *
    * @param f given path
    * @return the statuses of the files/directories in the given path
    * @throws FileNotFoundException when the path does not exist;
    *         IOException see specific implementation
    */
   public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                          IOException;
 {code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}
 Path[] testDirs = { path("/test/hadoop/a"),
                     path("/test/hadoop/b"),
                     path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());
 {code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.





[jira] [Updated] (HADOOP-12037) Fix wrong classname in example configuration of hadoop-auth documentation

2015-05-27 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12037:
--
Attachment: HADOOP-12037.001.patch

 Fix wrong classname in example configuration of hadoop-auth documentation
 -

 Key: HADOOP-12037
 URL: https://issues.apache.org/jira/browse/HADOOP-12037
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Attachments: HADOOP-12037.001.patch








[jira] [Updated] (HADOOP-12037) Fix wrong classname in example configuration of hadoop-auth documentation

2015-05-27 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12037:
--
Status: Patch Available  (was: Open)

 Fix wrong classname in example configuration of hadoop-auth documentation
 -

 Key: HADOOP-12037
 URL: https://issues.apache.org/jira/browse/HADOOP-12037
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Attachments: HADOOP-12037.001.patch








[jira] [Updated] (HADOOP-11538) [Startup script ]Start-dfs.sh and start-yarn.sh does not work when we export JAVA_HOME Manually

2015-05-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11538:
--
Status: Patch Available  (was: Open)

 [Startup script ]Start-dfs.sh and start-yarn.sh does not work when we export 
 JAVA_HOME Manually
 ---

 Key: HADOOP-11538
 URL: https://issues.apache.org/jira/browse/HADOOP-11538
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jagadesh Kiran N
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11538.patch


 Scenario:
 ===
 Followed the documentation for installing a standalone cluster:
 http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
 export JAVA_HOME=/path/to/Java
 When we execute start-dfs.sh it runs slaves.sh, which iterates over the list
 of hostnames in the slaves file.
 slaves.sh does ssh to each machine and starts the server there, and JAVA_HOME
 is not set in those remote sessions.
  *Actual:* 
 We get an error that JAVA_HOME is not set.
  *Expected:* 
 JAVA_HOME should be passed to slaves.sh as an argument.
  





[jira] [Commented] (HADOOP-11538) [Startup script ]Start-dfs.sh and start-yarn.sh does not work when we export JAVA_HOME Manually

2015-05-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561033#comment-14561033
 ] 

Brahma Reddy Battula commented on HADOOP-11538:
---

[~aw] Attached initial patch. Kindly review.

 [Startup script ]Start-dfs.sh and start-yarn.sh does not work when we export 
 JAVA_HOME Manually
 ---

 Key: HADOOP-11538
 URL: https://issues.apache.org/jira/browse/HADOOP-11538
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jagadesh Kiran N
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11538.patch


 Scenario:
 ===
 Followed the documentation for installing a standalone cluster:
 http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
 export JAVA_HOME=/path/to/Java
 When we execute start-dfs.sh it runs slaves.sh, which iterates over the list
 of hostnames in the slaves file.
 slaves.sh does ssh to each machine and starts the server there, and JAVA_HOME
 is not set in those remote sessions.
  *Actual:* 
 We get an error that JAVA_HOME is not set.
  *Expected:* 
 JAVA_HOME should be passed to slaves.sh as an argument.
  





[jira] [Commented] (HADOOP-11975) Native code needs to be built to match the 32/64 bitness of the JVM

2015-05-27 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560838#comment-14560838
 ] 

Alan Burlison commented on HADOOP-11975:


Everything in the current version of JNIFlags.cmake is either specific to Linux 
(e.g. the ARM float ABI detection) or makes assumptions that only work on Linux 
+ gcc (e.g. the 32/64 bit detection). I suggest that JNIFlags.cmake becomes a 
wrapper around the standard CMake FindJNI.cmake module and defers the bulk of 
the JNI detection/configuration to that. Allen Wittenauer has logged a separate 
JIRA roughly along those lines; see HADOOP-12036.

 Native code needs to be built to match the 32/64 bitness of the JVM
 ---

 Key: HADOOP-11975
 URL: https://issues.apache.org/jira/browse/HADOOP-11975
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: Solaris
Reporter: Alan Burlison
Assignee: Alan Burlison

 When building with a 64-bit JVM on Solaris the following error occurs at the 
 link stage of building the native code:
  [exec] ld: fatal: file 
 /usr/jdk/instances/jdk1.8.0/jre/lib/amd64/server/libjvm.so: wrong ELF class: 
 ELFCLASS64
  [exec] collect2: error: ld returned 1 exit status
  [exec] make[2]: *** [target/usr/local/lib/libhadoop.so.1.0.0] Error 1
  [exec] make[1]: *** [CMakeFiles/hadoop.dir/all] Error 2
 The compilation flags in the makefiles need to explicitly state if 32 or 64 
 bit code is to be generated, to match the JVM.





[jira] [Updated] (HADOOP-11538) [Startup script ]Start-dfs.sh and start-yarn.sh does not work when we export JAVA_HOME Manually

2015-05-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11538:
--
Attachment: HADOOP-11538.patch

 [Startup script ]Start-dfs.sh and start-yarn.sh does not work when we export 
 JAVA_HOME Manually
 ---

 Key: HADOOP-11538
 URL: https://issues.apache.org/jira/browse/HADOOP-11538
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jagadesh Kiran N
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11538.patch


 Scenario:
 ===
 Followed the documentation for installing a standalone cluster:
 http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
 export JAVA_HOME=/path/to/Java
 When we execute start-dfs.sh it runs slaves.sh, which iterates over the list
 of hostnames in the slaves file.
 slaves.sh does ssh to each machine and starts the server there, and JAVA_HOME
 is not set in those remote sessions.
  *Actual:* 
 We get an error that JAVA_HOME is not set.
  *Expected:* 
 JAVA_HOME should be passed to slaves.sh as an argument.
  





[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560942#comment-14560942
 ] 

Hudson commented on HADOOP-11242:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #208 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/208/])
HADOOP-11242. Record the time of calling in tracing span of IPC server. 
Contributed by Masatake Iwasaki. (aajisaka: rev 
bb1816328a36ec3f8c6bd9fdb950d9a4ec8388c8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.





[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560944#comment-14560944
 ] 

Hudson commented on HADOOP-11969:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #208 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/208/])
HADOOP-11969. ThreadLocal initialization in several classes is not thread safe 
(Sean Busbey via Colin P. McCabe) (cmccabe: rev 
7dba7005b79994106321b0f86bc8f4ea51a3c185)
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MD5Hash.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSMDCFilter.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesInput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestDirHelper.java


 ThreadLocal initialization in several classes is not thread safe
 

 Key: HADOOP-11969
 URL: https://issues.apache.org/jira/browse/HADOOP-11969
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: thread-safety
 Fix For: 2.8.0

 Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
 HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch


 Right now, the thread-local factories for the encoder / decoder in Text are 
 not marked final. This means they end up with a static initializer that is 
 not guaranteed to have finished running before the members are visible.
 Under heavy contention, this means that during initialization some users will 
 get an NPE:
 {code}
 (2015-05-05 08:58:03.974 : solr_server_log.log) 
  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
   at org.apache.hadoop.io.Text.decode(Text.java:406)
   at org.apache.hadoop.io.Text.decode(Text.java:389)
   at org.apache.hadoop.io.Text.toString(Text.java:280)
   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
   at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
   at 
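The fix described above is to mark the factory fields final so they are safely published. A minimal illustrative sketch of that pattern follows; the class and field names are invented for this example and are not Hadoop's actual Text internals:

```java
// Illustrative sketch of the safe-publication pattern described above;
// SafeThreadLocalHolder and FACTORY are hypothetical names, not Hadoop code.
public class SafeThreadLocalHolder {

    // Declaring the field final means (per JLS final-field semantics) the
    // ThreadLocal is fully constructed before any other thread can observe
    // the reference, so racing readers can never see null here.
    private static final ThreadLocal<StringBuilder> FACTORY =
        new ThreadLocal<StringBuilder>() {
            @Override
            protected StringBuilder initialValue() {
                return new StringBuilder();
            }
        };

    public static StringBuilder get() {
        return FACTORY.get();
    }

    public static void main(String[] args) {
        System.out.println(get() != null); // prints "true"
    }
}
```

Without final, the field write and the object's construction may be reordered relative to other threads' reads, which is the race behind the NPE above.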

[jira] [Commented] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-05-27 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560856#comment-14560856
 ] 

Alan Burlison commented on HADOOP-12036:


I agree. As I'm already working on a solution for this as part of 
HADOOP-11985, I'm happy to take this issue if that's OK with you.

 Consolidate all of the cmake extensions in one directory
 

 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer

 Rather than have a half-dozen redefinitions, custom extensions, etc, we 
 should move them all to one location so that the cmake environment is 
 consistent between the various native components.





[jira] [Updated] (HADOOP-11952) Native compilation on Solaris fails on Yarn due to use of FTS

2015-05-27 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-11952:
---
External issue ID: 15778275 : SUNBT7152812 libc should provide fts 
interfaces for traversing file hierarchy

15778275 is the Solaris bug that has been logged to investigate adding the fts 
interfaces to Solaris

 Native compilation on Solaris fails on Yarn due to use of FTS
 -

 Key: HADOOP-11952
 URL: https://issues.apache.org/jira/browse/HADOOP-11952
 Project: Hadoop Common
  Issue Type: Sub-task
 Environment: Solaris 11.2
Reporter: Malcolm Kavalsky
Assignee: Alan Burlison
   Original Estimate: 24h
  Remaining Estimate: 24h

 Compiling the Yarn Node Manager results in an "fts not found" error. On 
 Solaris we have an alternative, ftw, with similar functionality.
 This is isolated to a single file, container-executor.c.
 Note that this will just fix the compilation error. A more serious issue is 
 that Solaris does not support cgroups as Linux does.





[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561138#comment-14561138
 ] 

Hudson commented on HADOOP-11242:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2156 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2156/])
HADOOP-11242. Record the time of calling in tracing span of IPC server. 
Contributed by Masatake Iwasaki. (aajisaka: rev 
bb1816328a36ec3f8c6bd9fdb950d9a4ec8388c8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.





[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561140#comment-14561140
 ] 

Hudson commented on HADOOP-11969:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2156 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2156/])
HADOOP-11969. ThreadLocal initialization in several classes is not thread safe 
(Sean Busbey via Colin P. McCabe) (cmccabe: rev 
7dba7005b79994106321b0f86bc8f4ea51a3c185)
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableInput.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSMDCFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesOutput.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordInput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesInput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MD5Hash.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestDirHelper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordOutput.java


 ThreadLocal initialization in several classes is not thread safe
 

 Key: HADOOP-11969
 URL: https://issues.apache.org/jira/browse/HADOOP-11969
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: thread-safety
 Fix For: 2.8.0

 Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
 HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch


 Right now, the thread-local factories for the encoder / decoder in Text are 
 not marked final. This means they end up with a static initializer that is 
 not guaranteed to have finished running before the members are visible.
 Under heavy contention, this means that during initialization some users will 
 get an NPE:
 {code}
 (2015-05-05 08:58:03.974 : solr_server_log.log) 
  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
   at org.apache.hadoop.io.Text.decode(Text.java:406)
   at org.apache.hadoop.io.Text.decode(Text.java:389)
   at org.apache.hadoop.io.Text.toString(Text.java:280)
   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
   at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
   at 

[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561058#comment-14561058
 ] 

Hudson commented on HADOOP-11242:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2138/])
HADOOP-11242. Record the time of calling in tracing span of IPC server. 
Contributed by Masatake Iwasaki. (aajisaka: rev 
bb1816328a36ec3f8c6bd9fdb950d9a4ec8388c8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 Current tracing span starts when the Call is put into callQueue. Recording 
 the time of calling is useful to debug.





[jira] [Commented] (HADOOP-11985) Improve Solaris support in Hadoop

2015-05-27 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561067#comment-14561067
 ] 

Alan Burlison commented on HADOOP-11985:


See also HADOOP-12036, "Consolidate all of the cmake extensions in one directory".

 Improve Solaris support in Hadoop
 -

 Key: HADOOP-11985
 URL: https://issues.apache.org/jira/browse/HADOOP-11985
 Project: Hadoop Common
  Issue Type: New Feature
  Components: build, conf
Affects Versions: 2.7.0
 Environment: Solaris x86, Solaris sparc
Reporter: Alan Burlison
Assignee: Alan Burlison
  Labels: solaris

 At present the Hadoop native components aren't fully supported on Solaris 
 primarily due to differences between Linux and Solaris. This top-level task 
 will be used to group together both existing and new issues related to this 
 work. A second goal is to improve Hadoop performance on Solaris wherever 
 possible.
 Steve Loughran suggested a top-level JIRA was the best way to manage the work.





[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561060#comment-14561060
 ] 

Hudson commented on HADOOP-11969:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2138/])
HADOOP-11969. ThreadLocal initialization in several classes is not thread safe 
(Sean Busbey via Colin P. McCabe) (cmccabe: rev 
7dba7005b79994106321b0f86bc8f4ea51a3c185)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestDirHelper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordOutput.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableInput.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordInput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesInput.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MD5Hash.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesOutput.java
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSMDCFilter.java


 ThreadLocal initialization in several classes is not thread safe
 

 Key: HADOOP-11969
 URL: https://issues.apache.org/jira/browse/HADOOP-11969
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: thread-safety
 Fix For: 2.8.0

 Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
 HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch


 Right now, the thread-local factories for the encoder / decoder in Text are 
 not marked final. This means they end up with a static initializer that is 
 not guaranteed to be finished running before the members are visible. 
 Under heavy contention, this means during initialization some users will get 
 an NPE:
 {code}
 (2015-05-05 08:58:03.974 : solr_server_log.log) 
  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
   at org.apache.hadoop.io.Text.decode(Text.java:406)
   at org.apache.hadoop.io.Text.decode(Text.java:389)
   at org.apache.hadoop.io.Text.toString(Text.java:280)
   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
   at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
   at 

[jira] [Commented] (HADOOP-11987) JNI build should use default cmake FindJNI.cmake

2015-05-27 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561053#comment-14561053
 ] 

Alan Burlison commented on HADOOP-11987:


In hadoop/hadoop-common-project/hadoop-common/src on non-Linux platforms 
FindJNI.cmake gets included twice: once by JNIFlags.cmake when that is included 
into the CMakeLists.txt, and then again explicitly from CMakeLists.txt. 
Obviously that's not correct. If we fix the issues in JNIFlags.cmake then the 
FIND_PACKAGE(JNI REQUIRED) can be removed from CMakeLists.txt.

You are correct that the 3 items in the bullet list above are still needed, 
but they are all Linux-specific, and whilst some parts are guarded by a 
Linux-specific block, some aren't, so that needs fixing as well.

Although there's also a JNIFlags.cmake in hadoop-mapreduce-client, it isn't 
completely identical to the one in hadoop-common. The hadoop-common one looks 
more up-to-date, so the one in hadoop-mapreduce-client should be removed and 
replaced by a reference to a single copy.

Also, the JNIFlags.cmake from hadoop-common-project/hadoop-common gets included 
in multiple other places:

* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* hadoop-tools/hadoop-pipes/src/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt

All of which suggests that the functionality in hadoop-common's JNIFlags.cmake 
needs to be generalised and moved to a common place in the build tree, as 
suggested in HADOOP-12036.

 JNI build should use default cmake FindJNI.cmake
 

 Key: HADOOP-11987
 URL: https://issues.apache.org/jira/browse/HADOOP-11987
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Minor

 From 
 http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201505.mbox/%3C55568DAC.1040303%40oracle.com%3E
 --
 Why does  hadoop-common-project/hadoop-common/src/CMakeLists.txt use 
 JNIFlags.cmake in the same directory to set things up for JNI 
 compilation rather than FindJNI.cmake, which comes as a standard cmake 
 module? The checks in JNIFlags.cmake make several assumptions that I 
 believe are only correct on Linux whereas I'd expect FindJNI.cmake to be 
 more platform-independent.
 --
 Just checked the repo of cmake and it turns out that FindJNI.cmake is
 available even before cmake 2.4. I think it makes sense to file a bug
 to replace it to the standard cmake module. Can you please file a jira
 for this?
 --
 This also applies to 
 hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/JNIFlags.cmake





[jira] [Commented] (HADOOP-11969) ThreadLocal initialization in several classes is not thread safe

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561076#comment-14561076
 ] 

Hudson commented on HADOOP-11969:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #198 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/198/])
HADOOP-11969. ThreadLocal initialization in several classes is not thread safe 
(Sean Busbey via Colin P. McCabe) (cmccabe: rev 
7dba7005b79994106321b0f86bc8f4ea51a3c185)
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSMDCFilter.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MD5Hash.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordOutput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/servlet/ServerWebApp.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableInput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/PipesPartitioner.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/Chain.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/BinaryRecordInput.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestDirHelper.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesWritableOutput.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordOutput.java
* 
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/TypedBytesRecordInput.java


 ThreadLocal initialization in several classes is not thread safe
 

 Key: HADOOP-11969
 URL: https://issues.apache.org/jira/browse/HADOOP-11969
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: thread-safety
 Fix For: 2.8.0

 Attachments: HADOOP-11969.1.patch, HADOOP-11969.2.patch, 
 HADOOP-11969.3.patch, HADOOP-11969.4.patch, HADOOP-11969.5.patch


 Right now, the thread-local factories for the encoder / decoder in Text are 
 not marked final. This means they end up with a static initializer that is 
 not guaranteed to be finished running before the members are visible. 
 Under heavy contention, this means during initialization some users will get 
 an NPE:
 {code}
 (2015-05-05 08:58:03.974 : solr_server_log.log) 
  org.apache.solr.common.SolrException; null:java.lang.NullPointerException
   at org.apache.hadoop.io.Text.decode(Text.java:406)
   at org.apache.hadoop.io.Text.decode(Text.java:389)
   at org.apache.hadoop.io.Text.toString(Text.java:280)
   at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:764)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildBaseHeader(DataTransferProtoUtil.java:81)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.buildClientHeader(DataTransferProtoUtil.java:71)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Sender.readBlock(Sender.java:101)
   at 
 org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:400)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663)
   at 
 org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1027)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:974)
   at 

[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561074#comment-14561074
 ] 

Hudson commented on HADOOP-11242:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #198 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/198/])
HADOOP-11242. Record the time of calling in tracing span of IPC server. 
Contributed by Masatake Iwasaki. (aajisaka: rev 
bb1816328a36ec3f8c6bd9fdb950d9a4ec8388c8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTracing.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11242-test.patch, HADOOP-11242.002.patch, 
 HADOOP-11242.003.patch, HADOOP-11242.1.patch, HADOOP-11242.1.patch, 
 ScreenShot-test.png


 The current tracing span starts when the Call is put into the callQueue. Recording 
 the time of calling is useful for debugging.





[jira] [Updated] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-05-27 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12036:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-11985

 Consolidate all of the cmake extensions in one directory
 

 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Assignee: Alan Burlison

 Rather than have a half-dozen redefinitions, custom extensions, etc, we 
 should move them all to one location so that the cmake environment is 
 consistent between the various native components.





[jira] [Assigned] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-05-27 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison reassigned HADOOP-12036:
--

Assignee: Alan Burlison

 Consolidate all of the cmake extensions in one directory
 

 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Alan Burlison

 Rather than have a half-dozen redefinitions, custom extensions, etc, we 
 should move them all to one location so that the cmake environment is 
 consistent between the various native components.





[jira] [Commented] (HADOOP-11997) CMake CMAKE_C_FLAGS are non-portable

2015-05-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561116#comment-14561116
 ] 

Allen Wittenauer commented on HADOOP-11997:
---

bq. Adding Solaris Studio compiler support is a long-term goal

Let me check and see if I still have my patch around that does some of that.

 CMake CMAKE_C_FLAGS are non-portable
 

 Key: HADOOP-11997
 URL: https://issues.apache.org/jira/browse/HADOOP-11997
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Critical

 hadoop-common-project/hadoop-common/src/CMakeLists.txt 
 (https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt#L110)
  contains the following unconditional assignments to CMAKE_C_FLAGS:
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -g -Wall -O2)
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -D_REENTRANT -D_GNU_SOURCE)
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -D_LARGEFILE_SOURCE 
 -D_FILE_OFFSET_BITS=64)
 There are several issues here:
 1. -D_GNU_SOURCE globally enables the use of all Linux-only extensions in 
 hadoop-common native source. This is probably a major contributor to the poor 
 cross-platform portability of Hadoop native code to non-Linux platforms as it 
 makes it easy for developers to use non-portable Linux features without 
 realising. Use of Linux-specific features should be correctly bracketed with 
 conditional macro blocks that provide an alternative for non-Linux platforms.
 2. -g -Wall -O2 turns on debugging for all builds. I believe the correct 
 mechanism is to set the CMAKE_BUILD_TYPE CMake variable; if it is still 
 necessary to override CFLAGS, it should probably be done conditionally, 
 dependent on the value of CMAKE_BUILD_TYPE.
 3. -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64: On Solaris these flags are 
 only needed for largefile support in ILP32 applications; LP64 applications 
 are largefile by default. I believe the same is true on Linux, so these flags 
 are harmless but redundant for 64-bit compilation.
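
The conditional setup argued for in points 1-3 could look roughly like the 
following CMake sketch. This is illustrative only, not the actual Hadoop 
change; in particular the Solaris feature-test macro (__EXTENSIONS__) and the 
choice of default build type are assumptions:

```cmake
# Let the caller choose the build type; only default it if unset, rather
# than unconditionally forcing -g -O2 into CMAKE_C_FLAGS.
if(NOT CMAKE_BUILD_TYPE)
    set(CMAKE_BUILD_TYPE RelWithDebInfo)
endif()

# Flags every platform needs.
add_definitions(-D_REENTRANT -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64)

# Platform-specific feature-test macros, instead of a global -D_GNU_SOURCE.
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
    add_definitions(-D_GNU_SOURCE)
elseif(CMAKE_SYSTEM_NAME STREQUAL "SunOS")
    add_definitions(-D__EXTENSIONS__)
endif()
```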





[jira] [Commented] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-05-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561120#comment-14561120
 ] 

Allen Wittenauer commented on HADOOP-12036:
---

Go for it.  I've been too wrapped up in fixing test-patch. 

Off the top of my head, I was thinking these should probably all live somewhere 
in either dev-support or hadoop-common.

 Consolidate all of the cmake extensions in one directory
 

 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer

 Rather than have a half-dozen redefinitions, custom extensions, etc, we 
 should move them all to one location so that the cmake environment is 
 consistent between the various native components.





[jira] [Commented] (HADOOP-11538) [Startup script ]Start-dfs.sh and start-yarn.sh does not work when we export JAVA_HOME Manually

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561046#comment-14561046
 ] 

Hadoop QA commented on HADOOP-11538:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   3m  1s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  1s | Site still builds. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   6m 26s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735602/HADOOP-11538.patch |
| Optional Tests | site |
| git revision | trunk / bb18163 |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6846/console |


This message was automatically generated.

 [Startup script ]Start-dfs.sh and start-yarn.sh does not work when we export 
 JAVA_HOME Manually
 ---

 Key: HADOOP-11538
 URL: https://issues.apache.org/jira/browse/HADOOP-11538
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jagadesh Kiran N
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11538.patch


 Scenario:
 ===
 Followed the document for installation of a standalone cluster:
 http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
 export JAVA_HOME=/path/to/Java
 When we execute start-dfs.sh, it will execute slaves.sh, where the list of 
 hostnames is mentioned.
 slaves.sh will ssh to each machine and start the server, where JAVA_HOME 
 will not be present.
  *Actual:* 
 We get an error saying JAVA_HOME is not present.
  *Expected:* 
 JAVA_HOME should be passed to slaves.sh as an argument.
  





[jira] [Commented] (HADOOP-11997) CMake CMAKE_C_FLAGS are non-portable

2015-05-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561281#comment-14561281
 ] 

Allen Wittenauer commented on HADOOP-11997:
---

bq. However the -D_GNU_SOURCE flag is toxic to portability and will just 
repeatedly cause portability breakage if it's left in so I'm intending to 
remove it and explicitly bracket all such Linux-specific code in the source 
with #define/#undef blocks. That will mean that people doing port work to other 
platforms will have an easier time of it in future.

+1 to this approach. It's been a huge problem over the years to have all of 
these wacky Linux-only functions hanging around in random places, especially 
when there is a perfectly fine POSIX equivalent.

 CMake CMAKE_C_FLAGS are non-portable
 

 Key: HADOOP-11997
 URL: https://issues.apache.org/jira/browse/HADOOP-11997
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Critical

 hadoop-common-project/hadoop-common/src/CMakeLists.txt 
 (https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt#L110)
  contains the following unconditional assignments to CMAKE_C_FLAGS:
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -g -Wall -O2)
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -D_REENTRANT -D_GNU_SOURCE)
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -D_LARGEFILE_SOURCE 
 -D_FILE_OFFSET_BITS=64)
 There are several issues here:
 1. -D_GNU_SOURCE globally enables the use of all Linux-only extensions in 
 hadoop-common native source. This is probably a major contributor to the poor 
 cross-platform portability of Hadoop native code to non-Linux platforms as it 
 makes it easy for developers to use non-portable Linux features without 
 realising. Use of Linux-specific features should be correctly bracketed with 
 conditional macro blocks that provide an alternative for non-Linux platforms.
 2. -g -Wall -O2 turns on debugging for all builds. I believe the correct 
 mechanism is to set the CMAKE_BUILD_TYPE CMake variable; if it is still 
 necessary to override CFLAGS, it should probably be done conditionally, 
 dependent on the value of CMAKE_BUILD_TYPE.
 3. -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64: On Solaris these flags are 
 only needed for largefile support in ILP32 applications; LP64 applications 
 are largefile by default. I believe the same is true on Linux, so these flags 
 are harmless but redundant for 64-bit compilation.





[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561312#comment-14561312
 ] 

Chris Nauroth commented on HADOOP-11934:


{{FileUtil#setPermission}} is a static method that's implemented entirely in 
terms of JDK classes like {{java.io.File}}.  It doesn't interact with a Hadoop 
{{FileSystem}}, so I don't expect it to trigger the Hadoop group lookup 
machinery.

I think using {{AclFileAttributeView}} implies that we'd need to reimplement 
the mapping of POSIX permissions onto NTFS ACLs.  As per the following quote, 
the special OWNER, GROUP and EVERYONE users that would map directly to POSIX 
permissions are only applicable if the file system also supports 
{{PosixFileAttributeView}}, which Windows doesn't.

bq. When both the AclFileAttributeView and the PosixFileAttributeView are 
supported then these special user identities may be included in ACL entries 
that are read or written.

This mapping is a non-trivial piece of logic.  We already have that logic down 
in the JNI layer inside libwinutils.c, functions {{GetWindowsDACLs}} and 
{{ChangeFileModeByMask}}.  I'm going to play around with this a bit more and 
come back with a recommendation for the simplest way this code can call into 
that logic.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 

[jira] [Updated] (HADOOP-11952) Native compilation on Solaris fails on Yarn due to use of FTS

2015-05-27 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-11952:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-11985)

 Native compilation on Solaris fails on Yarn due to use of FTS
 -

 Key: HADOOP-11952
 URL: https://issues.apache.org/jira/browse/HADOOP-11952
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Solaris 11.2
Reporter: Malcolm Kavalsky
Assignee: Alan Burlison
   Original Estimate: 24h
  Remaining Estimate: 24h

 Compiling the Yarn Node Manager fails with fts not found. On Solaris we 
 have an alternative, ftw, with similar functionality.
 This is isolated to a single file, container-executor.c.
 Note that this will just fix the compilation error. A more serious issue is 
 that Solaris does not support cgroups as Linux does.





[jira] [Updated] (HADOOP-11655) Native compilation fails on Solaris due to use of getgrouplist function.

2015-05-27 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-11655:
---
External issue ID: 15399670 - SUNBT6561960 deliver getgrouplist API (was: 
Make _getgroupsbymember public)

15399670 is the Solaris bugid for the addition of getgrouplist to Solaris

 Native compilation fails on Solaris due to use of getgrouplist function.
 

 Key: HADOOP-11655
 URL: https://issues.apache.org/jira/browse/HADOOP-11655
 Project: Hadoop Common
  Issue Type: Sub-task
 Environment: Solaris 11.2
Reporter: Malcolm Kavalsky
   Original Estimate: 168h
  Remaining Estimate: 168h

 getgrouplist() does not exist on Solaris, which prevents compilation of the 
 native libraries. 
 The easiest solution would be to port this function from Linux or FreeBSD to 
 Solaris and add it to the library when compiling for Solaris.





[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561533#comment-14561533
 ] 

Chris Nauroth commented on HADOOP-11934:


It looks like our best option is adding a bit of special-case logic for Windows 
in {{LocalJavaKeyStoreProvider}}.  In {{flush}}, we can check for Windows and 
call {{FileUtil#setPermission}}.  The {{Set<PosixFilePermission>}} would need 
to get converted by calling {{FsPermission#valueOf}}.  In 
{{stashOriginalFilePermissions}}, Windows would need to issue an external 
winutils call using {{Shell#getGetPermissionCommand}}.  The returned string can 
be parsed back to a {{Set<PosixFilePermission>}}.

After we implement HADOOP-11935 (native stat call), we can come back to some of 
this code and simplify.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561546#comment-14561546
 ] 

Larry McCay commented on HADOOP-11934:
--

[~cnauroth] - thanks for doing the leg work there!
I'll see what I can do with that great insight.


 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch



[jira] [Updated] (HADOOP-7139) Allow appending to existing SequenceFiles

2015-05-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-7139:
--
Release Note:   (was: Existing SequenceFiles can now be appended to)

 Allow appending to existing SequenceFiles
 -

 Key: HADOOP-7139
 URL: https://issues.apache.org/jira/browse/HADOOP-7139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 1.0.0
Reporter: Stephen Rose
Assignee: Stephen Rose
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-7139-kt.patch, HADOOP-7139.patch, 
 HADOOP-7139.patch, HADOOP-7139.patch, HADOOP-7139.patch

   Original Estimate: 2h
  Remaining Estimate: 2h





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10282) Create a FairCallQueue: a multi-level call queue which schedules incoming calls and multiplexes outgoing calls

2015-05-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10282:
---
Fix Version/s: 2.6.0

 Create a FairCallQueue: a multi-level call queue which schedules incoming 
 calls and multiplexes outgoing calls
 --

 Key: HADOOP-10282
 URL: https://issues.apache.org/jira/browse/HADOOP-10282
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Chris Li
Assignee: Chris Li
 Fix For: 2.6.0

 Attachments: HADOOP-10282.patch, HADOOP-10282.patch, 
 HADOOP-10282.patch, HADOOP-10282.patch


 The FairCallQueue ensures quality of service by altering the order of RPC 
 calls internally. 
 It consists of three parts:
 1. a Scheduler (`HistoryRpcScheduler` is provided) which provides a priority 
 number from 0 to N (0 being highest priority)
 2. a Multi-level queue (residing in `FairCallQueue`) which provides a way to 
 keep calls in priority order internally
 3. a Multiplexer (`WeightedRoundRobinMultiplexer` is provided) which provides 
 logic to control which queue to take from
 Currently the Mux and Scheduler are not pluggable, but they probably should 
 be (up for discussion).
 This is how it is used:
 // Production
 1. Call is created and given to the CallQueueManager
 2. CallQueueManager requests a `put(T call)` into the `FairCallQueue` which 
 implements `BlockingQueue`
 3. `FairCallQueue` asks its scheduler for a scheduling decision, which is an 
 integer e.g. 12
 4. `FairCallQueue` inserts Call into the 12th queue: 
 `queues.get(12).put(call)`
 // Consumption
 1. CallQueueManager requests `take()` or `poll()` on FairCallQueue
 2. `FairCallQueue` asks its multiplexer for which queue to draw from, which 
 will also be an integer e.g. 2
 3. `FairCallQueue` draws from this queue if it has an available call (or 
 tries other queues if it is empty)
 Additional information is available in the linked JIRAs regarding the 
 Scheduler and Multiplexer's roles.
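 The put/take flow above can be sketched in a few lines. The class and method 
 names below are invented for illustration (they are not Hadoop's actual API), 
 and the multiplexer here is a plain round robin rather than the weighted one 
 the patch provides:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch: a scheduler decision picks the sub-queue on put, and a
// round-robin multiplexer picks which sub-queue to drain on poll, falling
// through to the other queues when the selected one is empty.
public class MiniFairCallQueue<T> {
    private final List<BlockingQueue<T>> queues = new ArrayList<>();
    private int takeIndex = 0; // trivial (unweighted) round-robin multiplexer

    public MiniFairCallQueue(int levels, int capacityPerLevel) {
        for (int i = 0; i < levels; i++) {
            queues.add(new ArrayBlockingQueue<>(capacityPerLevel));
        }
    }

    // The caller supplies the scheduler's priority decision (0 = highest).
    public boolean put(T call, int priority) {
        return queues.get(priority).offer(call);
    }

    public T poll() {
        for (int i = 0; i < queues.size(); i++) {
            T call = queues.get((takeIndex + i) % queues.size()).poll();
            if (call != null) {
                takeIndex = (takeIndex + i + 1) % queues.size();
                return call;
            }
        }
        return null; // all sub-queues empty
    }
}
```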



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Open  (was: Patch Available)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch



[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Attachment: HADOOP-11934.012.patch

I've addressed each of the review items.
I can see how HADOOP-11935 could help with the translation required in 
stashOriginalFilePermissions.

I will try to follow up with a separate patch to simplify this once 
HADOOP-11935 is done.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch



[jira] [Commented] (HADOOP-7139) Allow appending to existing SequenceFiles

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561800#comment-14561800
 ] 

Hadoop QA commented on HADOOP-7139:
---

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12510845/HADOOP-7139-kt.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 4102e58 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6847/console |


This message was automatically generated.

 Allow appending to existing SequenceFiles
 -

 Key: HADOOP-7139
 URL: https://issues.apache.org/jira/browse/HADOOP-7139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 1.0.0
Reporter: Stephen Rose
Assignee: Stephen Rose
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-7139-kt.patch, HADOOP-7139.patch, 
 HADOOP-7139.patch, HADOOP-7139.patch, HADOOP-7139.patch

   Original Estimate: 2h
  Remaining Estimate: 2h





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Patch Available  (was: Open)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch



[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560477#comment-14560477
 ] 

Hadoop QA commented on HADOOP-11242:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 24s | The applied patch generated 1 
new checkstyle issue (total was 218, now 218). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 31s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 162m 13s | Tests passed in hadoop-hdfs. 
|
| | | 227m 17s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731539/HADOOP-11242.003.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6836/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6836/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6836/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6836/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6836/console |


This message was automatically generated.

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HADOOP-11242.002.patch, HADOOP-11242.003.patch, 
 HADOOP-11242.1.patch, HADOOP-11242.1.patch


 The current tracing span starts when the Call is put into the callQueue. 
 Recording the time at which the call is actually invoked is useful for debugging.
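 The distinction can be sketched without any tracing library: stamp the call 
 when it is enqueued and again when processing starts, so the span can report 
 queue wait separately. Names here are illustrative, not from the patch:

```java
// Sketch: record enqueue time vs. actual call time so queue wait is visible.
public class TimedCall {
    final long enqueuedAtNanos = System.nanoTime(); // stamped on enqueue

    long calledAtNanos; // stamped when the handler actually invokes the call

    void markCalled() {
        calledAtNanos = System.nanoTime();
    }

    long queueWaitNanos() {
        return calledAtNanos - enqueuedAtNanos;
    }
}
```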



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560517#comment-14560517
 ] 

Hadoop QA commented on HADOOP-11934:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 13s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 33s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 47s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  2s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 39s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 35s | Tests passed in 
hadoop-common. |
| | |  60m 24s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735523/HADOOP-11934-11.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6841/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6841/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6841/console |


This message was automatically generated.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 

[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560539#comment-14560539
 ] 

Hadoop QA commented on HADOOP-9613:
---

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 11s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 18 new or modified test files. |
| {color:red}-1{color} | javac |   7m 44s | The applied patch generated  5  
additional warning messages. |
| {color:green}+1{color} | javadoc |   9m 42s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   4m 47s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m 14s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   1m 46s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   7m 44s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 51s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | mapreduce tests |   9m 31s | Tests failed in 
hadoop-mapreduce-client-app. |
| {color:red}-1{color} | mapreduce tests |   5m 24s | Tests failed in 
hadoop-mapreduce-client-hs. |
| {color:red}-1{color} | yarn tests |   1m 54s | Tests failed in 
hadoop-yarn-common. |
| {color:red}-1{color} | yarn tests |   2m 52s | Tests failed in 
hadoop-yarn-server-applicationhistoryservice. |
| {color:red}-1{color} | yarn tests |   5m 58s | Tests failed in 
hadoop-yarn-server-nodemanager. |
| {color:red}-1{color} | yarn tests |  49m 16s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 147m 55s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.mapreduce.v2.app.webapp.TestAMWebServicesTasks |
|   | hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobConf |
|   | hadoop.mapreduce.v2.app.webapp.TestAMWebServices |
|   | hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobs |
|   | hadoop.mapreduce.v2.app.webapp.TestAMWebServicesAttempts |
|   | hadoop.mapreduce.v2.app.webapp.TestAMWebServicesAttempt |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesJobsQuery |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesTasks |
|   | hadoop.mapreduce.v2.hs.TestJobHistoryParsing |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesJobs |
|   | hadoop.mapreduce.v2.hs.webapp.dao.TestJobInfo |
|   | hadoop.mapreduce.v2.hs.TestJobHistoryEntities |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesAttempts |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesJobConf |
|   | hadoop.mapreduce.v2.hs.webapp.TestHsWebServices |
|   | hadoop.yarn.client.api.impl.TestTimelineClient |
|   | hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
|   | hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
|   | hadoop.yarn.server.nodemanager.webapp.TestNMWebServicesContainers |
|   | hadoop.yarn.server.nodemanager.webapp.TestNMWebServicesApps |
|   | hadoop.yarn.server.nodemanager.webapp.TestNMWebServices |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesFairScheduler |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels |
|   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokens |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
 |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735521/HADOOP-9613.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / cdbd66b |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6839/artifact/patchprocess/diffJavacWarnings.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6839/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6839/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| hadoop-mapreduce-client-hs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6839/artifact/patchprocess/testrun_hadoop-mapreduce-client-hs.txt
 |
| 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560464#comment-14560464
 ] 

Chris Nauroth commented on HADOOP-11934:


Thanks for addressing the feedback, Larry.  A few more notes:

# {{AbstractJavaKeyStoreProvider#bytesToChars}}: This is a minor nit.  The 
declaration and initialization of {{pass}} can be condensed to one line, i.e. 
{{String pass = ...}}.
# {{JavaKeyStoreProvider#initFileSystem}}: Please add the {{@Override}} 
annotation.
# {{LocalJavaKeyStoreProvider}}: The class JavaDoc mentions the jceks scheme. 
 Should that be changed to localjceks?
# {{LocalJavaKeyStoreProvider#flush}}: I'm sorry I didn't spot this earlier, 
but unfortunately, the JDK does not implement a mapping of POSIX permissions to 
NTFS ACLs for its {{Files#setPosixFilePermissions}} and 
{{Files#getPosixFilePermissions}} methods.  It just throws an 
{{UnsupportedOperationException}} if we try to run these methods on Windows.  
(See test failure below.)  Fortunately, we do implement that mapping in Hadoop! 
 :-)  To make this Windows-compatible, I think we're going to need to explore 
using {{org.apache.hadoop.fs.FileUtil#setPermission}} for the set operation.  
The get operation unfortunately is more awkward, involving a combination of 
{{org.apache.hadoop.fs.Stat}}, {{org.apache.hadoop.fs.FileUtil#execCommand}} 
and {{org.apache.hadoop.util.Shell#getGetPermissionCommand}}.  The high level 
flow for this is in {{org.apache.hadoop.fs.RawLocalFileSystem}}.  
Alternatively, maybe you can think of a simpler way to do a special case for 
Windows.  Let me know.

{code}
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.031 sec  
FAILURE! - in org.apache.hadoop.security.alias.TestCredentialProviderFactory
testLocalJksProvider(org.apache.hadoop.security.alias.TestCredentialProviderFactory)
  Time elapsed: 0.031 sec   ERROR!
java.lang.UnsupportedOperationException: null
at java.nio.file.Files.setPosixFilePermissions(Files.java:1991)
at 
org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider.flush(LocalJavaKeyStoreProvider.java:149)
at 
org.apache.hadoop.security.alias.TestCredentialProviderFactory.checkSpecificProvider(TestCredentialProviderFactory.java:148)
at 
org.apache.hadoop.security.alias.TestCredentialProviderFactory.testLocalJksProvider(TestCredentialProviderFactory.java:220)
{code}
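
For reference, a minimal non-Hadoop sketch of the try/fall-back shape 
described above (class and method names here are illustrative only; a real 
patch would call {{org.apache.hadoop.fs.FileUtil#setPermission}} in the 
fallback branch rather than the plain {{java.io.File}} calls used below):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

/**
 * Illustrative sketch: set owner-only read/write permissions, falling back
 * when the file system does not support POSIX permissions (e.g. NTFS,
 * where the JDK throws UnsupportedOperationException).
 */
public class PortablePermissions {
  public static void setOwnerReadWrite(Path p) throws IOException {
    Set<PosixFilePermission> perms =
        PosixFilePermissions.fromString("rw-------");
    try {
      Files.setPosixFilePermissions(p, perms);
    } catch (UnsupportedOperationException e) {
      // Non-POSIX file system: approximate rw------- with java.io.File.
      // Hadoop itself would use FileUtil#setPermission here instead.
      File f = p.toFile();
      f.setReadable(false, false);   // clear read for everyone
      f.setWritable(false, false);   // clear write for everyone
      f.setReadable(true, true);     // grant read to owner only
      f.setWritable(true, true);     // grant write to owner only
    }
  }
}
```

On a POSIX file system the NIO call succeeds directly; on Windows the 
fallback at least restricts access to the owner, though it cannot express the 
full POSIX mode, which is why the Hadoop-level mapping is preferable.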


 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 

[jira] [Commented] (HADOOP-12011) Allow to dump verbose information to ease debugging in raw erasure coders

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560505#comment-14560505
 ] 

Hadoop QA commented on HADOOP-12011:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 42s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 14s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  7s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 42s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 56s | Tests passed in 
hadoop-common. |
| | |  61m 16s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735526/HADOOP-12011-HDFS-7285-v4.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 1299357 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6840/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6840/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6840/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6840/console |


This message was automatically generated.

 Allow to dump verbose information to ease debugging in raw erasure coders
 -

 Key: HADOOP-12011
 URL: https://issues.apache.org/jira/browse/HADOOP-12011
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-12011-HDFS-7285-v1.patch, 
 HADOOP-12011-HDFS-7285-v3.patch, HADOOP-12011-HDFS-7285-v4.patch


 While working on native erasure coders, it was found useful to dump key 
 information like the encode/decode matrix, erasures, etc. for each 
 encode/decode call.





[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-27 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560521#comment-14560521
 ] 

Kengo Seki commented on HADOOP-12031:
-

Thanks [~busbey]. As you pointed out, I realized that 002.patch does not work 
on Python versions before 2.6.1, because the except ... as ... statement is 
only supported from 2.6.1. So I'm planning:

* If xmllint is available, use it
* If not, try to validate using Python. In that case, use sys.exc_info() 
instead of the except ... as ... statement, which makes the plugin work on 
Python 2.1+ (including 3.x).


 test-patch.sh should have an xml plugin
 ---

 Key: HADOOP-12031
 URL: https://issues.apache.org/jira/browse/HADOOP-12031
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Allen Wittenauer
Assignee: Kengo Seki
  Labels: newbie, test-patch
 Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch


 HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
 change.





[jira] [Commented] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14562338#comment-14562338
 ] 

Hadoop QA commented on HADOOP-11984:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6852/console in case of 
problems.

 Enable parallel JUnit tests in pre-commit.
 --

 Key: HADOOP-11984
 URL: https://issues.apache.org/jira/browse/HADOOP-11984
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, scripts, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
 HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
 HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
 HADOOP-11984.009.patch, HADOOP-11984.010.patch, HADOOP-11984.011.patch, 
 HADOOP-11984.013.patch


 HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
 for running JUnit tests in multiple concurrent processes.  This issue 
 proposes to activate that profile during pre-commit to speed up execution.





[jira] [Updated] (HADOOP-7139) Allow appending to existing SequenceFiles

2015-05-27 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru updated HADOOP-7139:

Status: Open  (was: Patch Available)

Cancelling the patch as it can no longer be applied on trunk. 

 Allow appending to existing SequenceFiles
 -

 Key: HADOOP-7139
 URL: https://issues.apache.org/jira/browse/HADOOP-7139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 1.0.0
Reporter: Stephen Rose
Assignee: Stephen Rose
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-7139-kt.patch, HADOOP-7139.patch, 
 HADOOP-7139.patch, HADOOP-7139.patch, HADOOP-7139.patch

   Original Estimate: 2h
  Remaining Estimate: 2h







[jira] [Updated] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-27 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12031:

Attachment: HADOOP-12031.003.patch

-03:

* use xmllint rather than python, if available
* make the python code compatible with 2.1+
* rename the variable $j in the for loop to $i

 test-patch.sh should have an xml plugin
 ---

 Key: HADOOP-12031
 URL: https://issues.apache.org/jira/browse/HADOOP-12031
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Allen Wittenauer
Assignee: Kengo Seki
  Labels: newbie, test-patch
 Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch, 
 HADOOP-12031.003.patch


 HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
 change.





[jira] [Updated] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-05-27 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-12038:
-
Status: Patch Available  (was: Open)

 SwiftNativeOutputStream should check whether a file exists or not before 
 deleting
 -

 Key: HADOOP-12038
 URL: https://issues.apache.org/jira/browse/HADOOP-12038
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Chen He
Assignee: Chen He
Priority: Minor
 Attachments: HADOOP-12038.000.patch


 15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
 /tmp/hadoop-root/output-3695386887711395289.tmp
 It should check whether the file exists or not before deleting. 





[jira] [Updated] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-05-27 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-12038:
-
Attachment: HADOOP-12038.000.patch

 SwiftNativeOutputStream should check whether a file exists or not before 
 deleting
 -

 Key: HADOOP-12038
 URL: https://issues.apache.org/jira/browse/HADOOP-12038
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Chen He
Assignee: Chen He
Priority: Minor
 Attachments: HADOOP-12038.000.patch


 15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
 /tmp/hadoop-root/output-3695386887711395289.tmp
 It should check whether the file exists or not before deleting. 





[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14562117#comment-14562117
 ] 

Allen Wittenauer commented on HADOOP-12031:
---

I guess there's nothing we can leverage that ships with the JVM, is there?
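
For what it's worth, the JDK does ship a SAX parser that can do a pure 
well-formedness check with no external tools. A minimal, illustrative sketch 
(the class and method names here are made up for the example and are not part 
of any patch on this issue):

```java
import java.io.IOException;
import java.io.StringReader;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

/**
 * Well-formedness check using only the SAX parser bundled with the JDK.
 * DefaultHandler's fatalError() rethrows parse errors, so any
 * malformed input surfaces as a SAXException.
 */
public class XmlWellFormed {
  public static boolean isWellFormed(String xml) {
    try {
      SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
      // Parse with a no-op handler; we only care whether parsing succeeds.
      parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler());
      return true;
    } catch (ParserConfigurationException | SAXException | IOException e) {
      return false;
    }
  }
}
```

From a shell-based test-patch plugin this would still mean compiling and 
invoking Java for each check, so the xmllint-with-Python-fallback approach may 
remain the pragmatic choice.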

 test-patch.sh should have an xml plugin
 ---

 Key: HADOOP-12031
 URL: https://issues.apache.org/jira/browse/HADOOP-12031
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Allen Wittenauer
Assignee: Kengo Seki
  Labels: newbie, test-patch
 Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch, 
 HADOOP-12031.003.patch


 HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
 change.





[jira] [Created] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-05-27 Thread Chen He (JIRA)
Chen He created HADOOP-12038:


 Summary: SwiftNativeOutputStream should check whether a file 
exists or not before deleting
 Key: HADOOP-12038
 URL: https://issues.apache.org/jira/browse/HADOOP-12038
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Assignee: Chen He
Priority: Minor


15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
/tmp/hadoop-root/output-3695386887711395289.tmp

It should check whether the file exists or not before deleting. 





[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14562243#comment-14562243
 ] 

Allen Wittenauer commented on HADOOP-12031:
---

OK, definitely a bug here.  It looks like xmlwellformed_postapply isn't 
checking to see if it actually needs to execute via verify_needed_test.

 test-patch.sh should have an xml plugin
 ---

 Key: HADOOP-12031
 URL: https://issues.apache.org/jira/browse/HADOOP-12031
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Allen Wittenauer
Assignee: Kengo Seki
  Labels: newbie, test-patch
 Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch, 
 HADOOP-12031.003.patch


 HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
 change.





[jira] [Commented] (HADOOP-11119) TrashPolicyDefault init pushes messages to command line

2015-05-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14562041#comment-14562041
 ] 

Allen Wittenauer commented on HADOOP-11119:
---

Looks like the emptier log interval gets printed twice now.

 TrashPolicyDefault init pushes messages to command line
 ---

 Key: HADOOP-11119
 URL: https://issues.apache.org/jira/browse/HADOOP-11119
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11119-002.patch, HADOOP-11119.patch


 During a fresh install of trunk:
 {code}
 aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop fs -put /etc/hosts /tmp
 aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop fs -rm /tmp/hosts
 14/09/23 13:05:46 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
 Deletion interval = 0 minutes, Emptier interval = 0 minutes.
 Deleted /tmp/hosts
 {code}
 The info message for the Namenode trash configuration isn't very useful.





[jira] [Commented] (HADOOP-9891) CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException

2015-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14562097#comment-14562097
 ] 

Hudson commented on HADOOP-9891:


FAILURE: Integrated in Hadoop-trunk-Commit #7911 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7911/])
HADOOP-9891. CLIMiniCluster instructions fail with MiniYarnCluster 
ClassNotFoundException (Darrell Taylor via aw) (aw: rev 
4d8fb8c19c04088cf8f8e9deecb571273adeaab5)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/CLIMiniCluster.md.vm


 CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException
 

 Key: HADOOP-9891
 URL: https://issues.apache.org/jira/browse/HADOOP-9891
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.1.1-beta
Reporter: Steve Loughran
Assignee: Darrell Taylor
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 3.0.0

 Attachments: HADOOP-9891.patch


 The instructions on how to start up a mini CLI cluster in 
 {{CLIMiniCluster.md}} don't work - it looks like {{MiniYarnCluster}} isn't on 
 the classpath





[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Patch Available  (was: Open)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
Priority: Blocker
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch, HADOOP-11934.013.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
  

[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Open  (was: Patch Available)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
Priority: Blocker
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 

[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Attachment: HADOOP-11934.013.patch

v13 addresses the one 81 char line...
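For readers following the patch series, the failure mode in HADOOP-11934 boils down to a mutual-initialization cycle: resolving the LDAP bind password consults a credential provider, the provider opens a FileSystem, and FileSystem initialization constructs the groups mapping again. The classes below are stand-ins rather than the real Hadoop ones, and the re-entrancy guard is only one illustrative way such a cycle can be broken:

```java
// Stand-ins for the HADOOP-11934 cycle. Without the guard, the three
// methods recurse into each other until the stack overflows.
public class InitCycle {
    static boolean resolvingPassword = false;   // guard breaking the cycle

    static String getPassword() {
        if (resolvingPassword) {
            return "";  // cycle detected: fall back instead of recursing
        }
        resolvingPassword = true;
        try {
            return openFileSystem();
        } finally {
            resolvingPassword = false;
        }
    }

    static String openFileSystem() {
        // FileSystem init needs the current user, which builds the groups
        // mapping, whose setConf() resolves the password again.
        return setConf();
    }

    static String setConf() {
        return getPassword();
    }

    public static void main(String[] args) {
        // Unwinds cleanly instead of throwing StackOverflowError.
        System.out.println("password resolved: '" + getPassword() + "'");
    }
}
```

The actual patches take a different route, but the essential point is the same: some link in the password-resolution chain must refuse to re-enter itself.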

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
Priority: Blocker
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch, HADOOP-11934.013.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 

[jira] [Commented] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562205#comment-14562205
 ] 

Hadoop QA commented on HADOOP-12038:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 45s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 22s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 40s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 14s | Tests passed in 
hadoop-openstack. |
| | |  35m 46s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735755/HADOOP-12038.000.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5450413 |
| hadoop-openstack test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6850/artifact/patchprocess/testrun_hadoop-openstack.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6850/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6850/console |


This message was automatically generated.

 SwiftNativeOutputStream should check whether a file exists or not before 
 deleting
 -

 Key: HADOOP-12038
 URL: https://issues.apache.org/jira/browse/HADOOP-12038
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Chen He
Assignee: Chen He
Priority: Minor
 Attachments: HADOOP-12038.000.patch


 15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
 /tmp/hadoop-root/output-3695386887711395289.tmp
 It should check whether the file exists or not before deleting. 
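The guard described above amounts to a small change around the delete call; the helper below is a minimal sketch, not the actual SwiftNativeOutputStream code:

```java
import java.io.File;
import java.io.IOException;

public class DeleteIfExists {
    // Delete a temporary file only if it is present, so a file that is
    // already gone does not produce a spurious "Could not delete" warning.
    static boolean deleteIfPresent(File tmp) {
        if (!tmp.exists()) {
            return true;            // nothing to do, not an error
        }
        boolean deleted = tmp.delete();
        if (!deleted) {
            System.err.println("WARN Could not delete " + tmp);
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("output-", ".tmp");
        System.out.println(deleteIfPresent(tmp));   // true: file deleted
        System.out.println(deleteIfPresent(tmp));   // true: already gone, no warning
    }
}
```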



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562249#comment-14562249
 ] 

Allen Wittenauer commented on HADOOP-12031:
---

Also: XML well-formedness is *really* long, especially in the JIRA table.  Can 
we abbreviate that and the test name to be just xml? 

 test-patch.sh should have an xml plugin
 ---

 Key: HADOOP-12031
 URL: https://issues.apache.org/jira/browse/HADOOP-12031
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Allen Wittenauer
Assignee: Kengo Seki
  Labels: newbie, test-patch
 Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch, 
 HADOOP-12031.003.patch


 HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
 change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562263#comment-14562263
 ] 

Hadoop QA commented on HADOOP-11934:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m 22s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 24s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m  6s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 41s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 58s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  24m 38s | Tests failed in 
hadoop-common. |
| | |  68m 53s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ipc.TestIPC |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.security.token.delegation.web.TestWebDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735764/HADOOP-11934.013.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5450413 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6851/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6851/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6851/console |


This message was automatically generated.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
Priority: Blocker
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch, HADOOP-11934.013.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 

[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562285#comment-14562285
 ] 

Sean Busbey commented on HADOOP-12031:
--

{quote}
I guess there's nothing we can leverage that ships with the JVM is there?
{quote}

interesting suggestion. there isn't any cli tool that comes out of the box 
AFAICT.

However, we could write a simple utility to do this using 
[DocumentBuilder|http://docs.oracle.com/javase/7/docs/api/javax/xml/parsers/DocumentBuilder.html]
 for well-formedness and when a Schema is specified [the validation 
package|http://docs.oracle.com/javase/7/docs/api/javax/xml/validation/package-summary.html]
 for more specifics. Looking at the docs, I think we could make it work for 
Java 5+.
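A minimal JVM-only well-formedness check along the lines Sean describes can be built on DocumentBuilder; the class and method names here are illustrative, not from any proposed patch:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

public class XmlWellFormed {
    // Returns true iff the string parses as well-formed XML.
    static boolean isWellFormed(String xml) {
        try {
            DocumentBuilder b =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
            b.parse(new ByteArrayInputStream(
                xml.getBytes(StandardCharsets.UTF_8)));
            return true;
        } catch (Exception e) {   // SAXException on malformed input
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isWellFormed("<a><b/></a>"));   // true
        System.out.println(isWellFormed("<a><b></a>"));    // false: mismatched tags
    }
}
```

Schema validation, when an XSD is available, would layer javax.xml.validation on top of the same parse.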

 test-patch.sh should have an xml plugin
 ---

 Key: HADOOP-12031
 URL: https://issues.apache.org/jira/browse/HADOOP-12031
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Allen Wittenauer
Assignee: Kengo Seki
  Labels: newbie, test-patch
 Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch, 
 HADOOP-12031.003.patch


 HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
 change.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11984) Enable parallel JUnit tests in pre-commit.

2015-05-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11984:
---
Attachment: HADOOP-11984.013.patch

Maybe this is a good reason to bring out my old frenemy, Javascript.  We can 
use the Ant {{script}} task to run Javascript through Rhino.  This executes 
in the JDK, so it removes a dependency on any particular external shell.

Here is patch v013 trying that approach.  I also added the new arguments to 
test-patch.sh as requested by Allen.


 Enable parallel JUnit tests in pre-commit.
 --

 Key: HADOOP-11984
 URL: https://issues.apache.org/jira/browse/HADOOP-11984
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, scripts, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-11984.001.patch, HADOOP-11984.002.patch, 
 HADOOP-11984.003.patch, HADOOP-11984.004.patch, HADOOP-11984.005.patch, 
 HADOOP-11984.006.patch, HADOOP-11984.007.patch, HADOOP-11984.008.patch, 
 HADOOP-11984.009.patch, HADOOP-11984.010.patch, HADOOP-11984.011.patch, 
 HADOOP-11984.013.patch


 HADOOP-9287 and related issues implemented the parallel-tests Maven profile 
 for running JUnit tests in multiple concurrent processes.  This issue 
 proposes to activate that profile during pre-commit to speed up execution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9891) CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException

2015-05-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9891:
-
   Resolution: Fixed
Fix Version/s: 3.0.0
 Release Note:   (was: Corrected CLIMiniCluster documentation to include a 
jar to make the YARN cluster work)
   Status: Resolved  (was: Patch Available)

+1 committed to trunk.

Thanks!

 CLIMiniCluster instructions fail with MiniYarnCluster ClassNotFoundException
 

 Key: HADOOP-9891
 URL: https://issues.apache.org/jira/browse/HADOOP-9891
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.1.1-beta
Reporter: Steve Loughran
Assignee: Darrell Taylor
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 3.0.0

 Attachments: HADOOP-9891.patch


 The instructions on how to start up a mini CLI cluster in 
 {{CLIMiniCluster.md}} don't work - it looks like {{MiniYarnCluster}} isn't on 
 the classpath



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11959) WASB should configure client side socket timeout in storage client blob request options

2015-05-27 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-11959:

Attachment: HADOOP-11959.patch

Attaching the patch.

The fix is to move to the latest Azure storage client SDK, which internally sets a reasonable socket timeout on the connection. This is the right fix, as it also automatically gives the client SDK a means to retry internally on timeout errors.

Storage client SDK release notes:
https://github.com/Azure/azure-storage-java/releases
_Changed the socket timeout to default to 5 minutes rather than infinite when 
neither service side timeout or maximum execution time are set._
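The underlying mechanism: an HTTP read with no socket timeout blocks indefinitely in socketRead0 (the top frame of the stack below), while a finite read timeout turns a stalled server into a catchable SocketTimeoutException. This sketch shows the idea with the plain JDK URL API rather than the Azure SDK's BlobRequestOptions:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ReadTimeoutDemo {
    public static void main(String[] args) throws Exception {
        // No network I/O happens until connect()/read is attempted.
        URL url = new URL("https://example.invalid/blob");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // 0 (the default) means block forever in socketRead0; a finite
        // value makes a hung read throw SocketTimeoutException instead.
        conn.setConnectTimeout(15_000);
        conn.setReadTimeout(300_000);   // five minutes, matching the SDK's new default
        System.out.println(conn.getReadTimeout());   // 300000
    }
}
```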


 WASB should configure client side socket timeout in storage client blob 
 request options
 ---

 Key: HADOOP-11959
 URL: https://issues.apache.org/jira/browse/HADOOP-11959
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-11959.patch


 On clusters/jobs where {{mapred.task.timeout}} is set to a larger value, we 
 noticed that tasks can sometimes get stuck on the below stack.
 {code}
 Thread 1: (state = IN_NATIVE)
 - java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[], int, 
 int, int) @bci=0 (Interpreted frame)
 - java.net.SocketInputStream.read(byte[], int, int, int) @bci=87, line=152 
 (Interpreted frame)
 - java.net.SocketInputStream.read(byte[], int, int) @bci=11, line=122 
 (Interpreted frame)
 - java.io.BufferedInputStream.fill() @bci=175, line=235 (Interpreted frame)
 - java.io.BufferedInputStream.read1(byte[], int, int) @bci=44, line=275 
 (Interpreted frame)
 - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
 (Interpreted frame)
 - sun.net.www.MeteredStream.read(byte[], int, int) @bci=16, line=134 
 (Interpreted frame)
 - java.io.FilterInputStream.read(byte[], int, int) @bci=7, line=133 
 (Interpreted frame)
 - sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(byte[], 
 int, int) @bci=4, line=3053 (Interpreted frame)
 - com.microsoft.azure.storage.core.NetworkInputStream.read(byte[], int, int) 
 @bci=7, line=49 (Interpreted frame)
 - 
 com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection,
  com.microsoft.azure.storage.blob.CloudBlob, com.microsoft.azure
 .storage.blob.CloudBlobClient, com.microsoft.azure.storage.OperationContext, 
 java.lang.Integer) @bci=204, line=1691 (Interpreted frame)
 - 
 com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection,
  java.lang.Object, java.lang.Object, com.microsoft.azure.storage
 .OperationContext, java.lang.Object) @bci=17, line=1613 (Interpreted frame)
 - 
 com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(java.lang.Object,
  java.lang.Object, com.microsoft.azure.storage.core.StorageRequest, com.mi
 crosoft.azure.storage.RetryPolicyFactory, 
 com.microsoft.azure.storage.OperationContext) @bci=352, line=148 (Interpreted 
 frame)
 - com.microsoft.azure.storage.blob.CloudBlob.downloadRangeInternal(long, 
 java.lang.Long, byte[], int, com.microsoft.azure.storage.AccessCondition, 
 com.microsof
 t.azure.storage.blob.BlobRequestOptions, 
 com.microsoft.azure.storage.OperationContext) @bci=131, line=1468 
 (Interpreted frame)
 - com.microsoft.azure.storage.blob.BlobInputStream.dispatchRead(int) @bci=31, 
 line=255 (Interpreted frame)
 - com.microsoft.azure.storage.blob.BlobInputStream.readInternal(byte[], int, 
 int) @bci=52, line=448 (Interpreted frame)
 - com.microsoft.azure.storage.blob.BlobInputStream.read(byte[], int, int) 
 @bci=28, line=420 (Interpreted frame)
 - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273 
 (Interpreted frame)
 - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
 (Interpreted frame)
 - java.io.DataInputStream.read(byte[], int, int) @bci=7, line=149 
 (Interpreted frame)
 - 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.read(byte[],
  int, int) @bci=10, line=734 (Interpreted frame)
 - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273 
 (Interpreted frame)
 - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
 (Interpreted frame)
 - java.io.DataInputStream.read(byte[]) @bci=8, line=100 (Interpreted frame)
 - org.apache.hadoop.util.LineReader.fillBuffer(java.io.InputStream, byte[], 
 boolean) @bci=2, line=180 (Interpreted frame)
 - 
 org.apache.hadoop.util.LineReader.readDefaultLine(org.apache.hadoop.io.Text, 
 int, int) @bci=64, line=216 (Compiled frame)
 - org.apache.hadoop.util.LineReader.readLine(org.apache.hadoop.io.Text, int, 
 int) @bci=19, line=174 (Interpreted frame)
 - 

[jira] [Updated] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-05-27 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-12038:
-
Affects Version/s: 2.7.0

 SwiftNativeOutputStream should check whether a file exists or not before 
 deleting
 -

 Key: HADOOP-12038
 URL: https://issues.apache.org/jira/browse/HADOOP-12038
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Chen He
Assignee: Chen He
Priority: Minor

 15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
 /tmp/hadoop-root/output-3695386887711395289.tmp
 It should check whether the file exists or not before deleting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Open  (was: Patch Available)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
Priority: Blocker
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch, HADOOP-11934.013.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11934:
-
Status: Patch Available  (was: Open)

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
Priority: Blocker
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch, HADOOP-11934.013.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
  
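The cycle in the stack trace above can be modeled in a few lines. This is an illustrative sketch only, not Hadoop code (all names are made up): resolving a password consults a credential provider, whose construction touches the FileSystem layer, whose security initialization resolves the password again, until the stack overflows.

```java
// Minimal model of the initialization cycle behind HADOOP-11934.
// Each method stands in for one layer seen in the stack trace.
public class InitCycleDemo {
    static int depth = 0;

    // Configuration.getPassword -> CredentialProviderFactory
    static String getPassword() {
        depth++;
        return initProvider();
    }

    // JavaKeyStoreProvider.<init> -> Path.getFileSystem -> FileSystem.get
    static String initProvider() {
        return initFileSystem();
    }

    // FileSystem cache key -> UGI -> Groups -> LdapGroupsMapping.setConf
    // -> getPassword again: the loop closes here.
    static String initFileSystem() {
        return getPassword();
    }

    public static void main(String[] args) {
        try {
            getPassword();
        } catch (StackOverflowError e) {
            // Java ends the "infinite" loop by overflowing the stack,
            // exactly as described in the issue report.
            System.out.println("stack overflowed after depth " + depth);
        }
    }
}
```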

[jira] [Commented] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-05-27 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14562246#comment-14562246
 ] 

Chen He commented on HADOOP-12038:
--

The change is too simple and may not need a unit test.

 SwiftNativeOutputStream should check whether a file exists or not before 
 deleting
 -

 Key: HADOOP-12038
 URL: https://issues.apache.org/jira/browse/HADOOP-12038
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Chen He
Assignee: Chen He
Priority: Minor
 Attachments: HADOOP-12038.000.patch


 15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
 /tmp/hadoop-root/output-3695386887711395289.tmp
 It should check whether the file exists or not before deleting. 
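The proposed fix amounts to guarding the delete with an existence check, so the warning only fires on a genuine deletion failure. A minimal sketch of that pattern follows; the names are illustrative, not the Swift connector's actual code.

```java
import java.io.File;

// Guarded-delete pattern: skip (and stay silent about) files that are
// already gone; warn only when an existing file cannot be removed.
public class GuardedDelete {
    static boolean deleteIfExists(File f) {
        if (!f.exists()) {
            return true;  // nothing to delete, no spurious warning
        }
        boolean deleted = f.delete();
        if (!deleted) {
            System.err.println("Could not delete " + f.getPath());
        }
        return deleted;
    }
}
```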



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14562266#comment-14562266
 ] 

Larry McCay commented on HADOOP-11934:
--

Those test failures are unrelated to this patch.
I'll resubmit the patch to run again.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
Priority: Blocker
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch, HADOOP-11934.013.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14562086#comment-14562086
 ] 

Hadoop QA commented on HADOOP-11934:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 41s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  4s | The applied patch generated  1 
new checkstyle issues (total was 15, now 2). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 40s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m  8s | Tests passed in 
hadoop-common. |
| | |  60m 24s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735679/HADOOP-11934.012.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 4d8fb8c |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6848/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6848/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6848/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6848/console |


This message was automatically generated.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
Priority: Blocker
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 

[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14562130#comment-14562130
 ] 

Hadoop QA commented on HADOOP-12031:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck |   0m  5s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | XML well-formedness |   0m  0s | The patch has no 
ill-formed XML file. |
| | |   0m 30s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735733/HADOOP-12031.003.patch 
|
| Optional Tests | shellcheck |
| git revision | trunk / 5450413 |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6849/console |


This message was automatically generated.

 test-patch.sh should have an xml plugin
 ---

 Key: HADOOP-12031
 URL: https://issues.apache.org/jira/browse/HADOOP-12031
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Allen Wittenauer
Assignee: Kengo Seki
  Labels: newbie, test-patch
 Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch, 
 HADOOP-12031.003.patch


 HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
 change.
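For reference, the underlying check such a plugin performs is plain XML well-formedness. The actual plugin is a shell script, so the following is only the idea, sketched with the JDK's built-in SAX parser:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Returns true iff the document parses as well-formed XML; any parse
// or configuration error is treated as a failed check.
public class XmlCheck {
    static boolean isWellFormed(String xml) {
        try {
            SAXParserFactory.newInstance().newSAXParser().parse(
                new InputSource(new ByteArrayInputStream(
                    xml.getBytes(StandardCharsets.UTF_8))),
                new DefaultHandler());
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```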





[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14562128#comment-14562128
 ] 

Hadoop QA commented on HADOOP-12031:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6849/console in case of 
problems.

 test-patch.sh should have an xml plugin
 ---

 Key: HADOOP-12031
 URL: https://issues.apache.org/jira/browse/HADOOP-12031
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Allen Wittenauer
Assignee: Kengo Seki
  Labels: newbie, test-patch
 Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch, 
 HADOOP-12031.003.patch


 HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
 change.





[jira] [Updated] (HADOOP-10642) Provide option to limit heap memory consumed by dynamic metrics2 metrics

2015-05-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10642:

Description: 
User sunweiei provided the following jmap output in HBase 0.96 deployment:
{code}
 num #instances #bytes  class name
--
   1:  14917882 3396492464  [C
   2:   1996994 2118021808  [B
   3:  43341650 1733666000  java.util.LinkedHashMap$Entry
   4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
   5:  14446577  924580928  
org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
{code}
Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
metrics2/lib/MetricsRegistry.java.
This scenario would arise when large number of regions are tracked through 
metrics2 dynamically.
Interns class doesn't provide API to remove entries in its internal Map.

One solution is to provide an option that allows skipping calls to 
Interns.info() in metrics2/lib/MetricsRegistry.java

  was:
User sunweiei provided the following jmap output in HBase 0.96 deployment:

{code}
 num #instances #bytes  class name
--
   1:  14917882 3396492464  [C
   2:   1996994 2118021808  [B
   3:  43341650 1733666000  java.util.LinkedHashMap$Entry
   4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
   5:  14446577  924580928  
org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
{code}
Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
metrics2/lib/MetricsRegistry.java.
This scenario would arise when large number of regions are tracked through 
metrics2 dynamically.
Interns class doesn't provide API to remove entries in its internal Map.

One solution is to provide an option that allows skipping calls to 
Interns.info() in metrics2/lib/MetricsRegistry.java


 Provide option to limit heap memory consumed by dynamic metrics2 metrics
 

 Key: HADOOP-10642
 URL: https://issues.apache.org/jira/browse/HADOOP-10642
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Ted Yu

 User sunweiei provided the following jmap output in HBase 0.96 deployment:
 {code}
  num #instances #bytes  class name
 --
1:  14917882 3396492464  [C
2:   1996994 2118021808  [B
3:  43341650 1733666000  java.util.LinkedHashMap$Entry
4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
5:  14446577  924580928  
 org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
 {code}
 Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
 due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
 metrics2/lib/MetricsRegistry.java.
 This scenario would arise when large number of regions are tracked through 
 metrics2 dynamically.
 Interns class doesn't provide API to remove entries in its internal Map.
 One solution is to provide an option that allows skipping calls to 
 Interns.info() in metrics2/lib/MetricsRegistry.java
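The growth pattern described above can be illustrated with a toy interning cache. This is a simplified model, not the actual Interns implementation: every dynamically named metric (e.g. one per HBase region) adds an entry that is never removed, so heap usage scales with the number of distinct metric names.

```java
import java.util.HashMap;
import java.util.Map;

// Toy interning cache with no eviction, modeling why heap grows
// without bound when metric names are generated dynamically.
public class InternCacheDemo {
    private static final Map<String, String> CACHE = new HashMap<>();

    // Returns the canonical instance for a name, caching it forever.
    static String intern(String name) {
        return CACHE.computeIfAbsent(name, n -> n);
    }

    static int cacheSize() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        // One entry per region-scoped metric name; none is evictable.
        for (int region = 0; region < 10_000; region++) {
            intern("region_" + region + "_readRequestCount");
        }
        System.out.println(cacheSize() + " interned entries, none evictable");
    }
}
```

An option to skip interning, as proposed, trades cheap object reuse for bounded memory when the name space is unbounded.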





[jira] [Commented] (HADOOP-10661) Ineffective user/passsword check in FTPFileSystem#initialize()

2015-05-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561975#comment-14561975
 ] 

Ted Yu commented on HADOOP-10661:
-

[~airbots]:
Mind updating the patch ?

Thanks

 Ineffective user/passsword check in FTPFileSystem#initialize()
 --

 Key: HADOOP-10661
 URL: https://issues.apache.org/jira/browse/HADOOP-10661
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10661.patch, HADOOP-10661.patch


 Here is related code:
 {code}
   userAndPassword = (conf.get("fs.ftp.user." + host, null) + ":" + conf
   .get("fs.ftp.password." + host, null));
   if (userAndPassword == null) {
 throw new IOException("Invalid user/passsword specified");
   }
 {code}
 The intention seems to be checking that username / password should not be 
 null.
 But due to the presence of colon, the above check is not effective.
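The reason the check can never fire is a Java language rule: string concatenation converts a null reference to the literal text "null", so the concatenated result is the non-null string "null:null". A standalone sketch of the failure (this is illustrative code, not the Hadoop source):

```java
// Shows why `userAndPassword == null` is dead code after concatenation:
// null operands become the four characters "null" in the result.
public class NullCheckDemo {
    // Mirrors the shape of the FTPFileSystem expression.
    static String join(String user, String password) {
        return user + ":" + password;
    }

    public static void main(String[] args) {
        String userAndPassword = join(null, null);
        System.out.println(userAndPassword);          // prints "null:null"
        System.out.println(userAndPassword == null);  // prints "false"
    }
}
```

The effective fix is to test each configuration value for null individually before concatenating them.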





[jira] [Commented] (HADOOP-10661) Ineffective user/passsword check in FTPFileSystem#initialize()

2015-05-27 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561976#comment-14561976
 ] 

Chen He commented on HADOOP-10661:
--

Sure, I will do it tonight. Thank you for reminding me, [~ted_yu].

 Ineffective user/passsword check in FTPFileSystem#initialize()
 --

 Key: HADOOP-10661
 URL: https://issues.apache.org/jira/browse/HADOOP-10661
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10661.patch, HADOOP-10661.patch


 Here is related code:
 {code}
   userAndPassword = (conf.get("fs.ftp.user." + host, null) + ":" + conf
   .get("fs.ftp.password." + host, null));
   if (userAndPassword == null) {
 throw new IOException("Invalid user/passsword specified");
   }
 {code}
 The intention seems to be checking that username / password should not be 
 null.
 But due to the presence of colon, the above check is not effective.





[jira] [Updated] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11934:
---
Priority: Blocker  (was: Major)
Target Version/s: 2.7.1
Hadoop Flags: Reviewed

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
Priority: Blocker
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at 
 org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at 
 org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at 
 org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at 
 org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at 
 org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at 
 org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at 
 org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at 
 org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at 
 org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at 
 org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 
 

[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14561842#comment-14561842
 ] 

Chris Nauroth commented on HADOOP-11934:


Thanks, Larry.  That will do it!  +1 for patch v012, pending Jenkins run.

 Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
 -

 Key: HADOOP-11934
 URL: https://issues.apache.org/jira/browse/HADOOP-11934
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Mike Yoder
Assignee: Larry McCay
 Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
 HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
 HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
 HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
 HADOOP-11934.012.patch


 I was attempting to use the LdapGroupsMapping code and the 
 JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
 fatal, issue.  The code goes into what ought to have been an infinite loop, 
 were it not for it overflowing the stack and Java ending the loop.  Here is a 
 snippet of the stack; my annotations are at the bottom.
 {noformat}
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
   at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
   at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
   at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
   at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
   at org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2753)
   at org.apache.hadoop.fs.FileSystem$Cache$Key.init(FileSystem.java:2745)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:88)
   at org.apache.hadoop.security.alias.JavaKeyStoreProvider.init(JavaKeyStoreProvider.java:65)
   at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
   at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
   at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
   at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
   at org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
   at org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
   at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
   at org.apache.hadoop.security.Groups.init(Groups.java:70)
   at org.apache.hadoop.security.Groups.init(Groups.java:66)
   at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
   at 

[jira] [Commented] (HADOOP-11952) Native compilation on Solaris fails on Yarn due to use of FTS

2015-05-27 Thread Malcolm Kavalsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14560566#comment-14560566
 ] 

Malcolm Kavalsky commented on HADOOP-11952:
-------------------------------------------

BTW, if we are working on YARN now, then there are some more native 
issues which need to be filed, also in the HDFS module.

On another note, my boss asked whether we have a timeline for when we 
can inform partners that Hadoop native runs on Solaris.
Which version of Hadoop are we targeting (my guess is 2.8), and when 
can we expect to have the bits in the community?
To this end, I think we also need to decide which bugs are show-stoppers, 
which are less critical, and how to prioritize them for release.

For example, on one extreme we could concentrate on releasing fixes that 
enable a 32-bit version fairly soon, versus a 64-bit version that may 
take longer. Also, targeting Solaris 11.2 versus Solaris 12 probably 
requires a bit more work.

What do you think?




 Native compilation on Solaris fails on Yarn due to use of FTS
 -------------------------------------------------------------

 Key: HADOOP-11952
 URL: https://issues.apache.org/jira/browse/HADOOP-11952
 Project: Hadoop Common
  Issue Type: Sub-task
 Environment: Solaris 11.2
Reporter: Malcolm Kavalsky
Assignee: Alan Burlison
   Original Estimate: 24h
  Remaining Estimate: 24h

 Compiling the YARN Node Manager fails with fts not found. On Solaris we 
 have an alternative, ftw, with similar functionality.
 This is isolated to a single file, container-executor.c.
 Note that this will just fix the compilation error. A more serious issue is 
 that Solaris does not support cgroups as Linux does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)