[jira] [Resolved] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-05-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-12749.
---
  Resolution: Fixed
   Fix Version/s: 2.8.0 (was: 2.9.0)
Target Version/s: 2.8.0

Backported to 2.8
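The behavior the issue title describes, an executor that surfaces otherwise-silent task failures, can be sketched as a small ThreadPoolExecutor subclass. This is an illustrative sketch, not the committed patch; the class name and log wording are assumptions. Tasks submitted via submit() capture their Throwable inside the returned Future, so afterExecute has to unwrap it, as the JDK javadoc for afterExecute shows.

```java
import java.util.concurrent.*;

// Illustrative sketch (hypothetical class name): a ThreadPoolExecutor that
// overrides afterExecute to log uncaught exceptions/errors from tasks.
class LoggingThreadPoolExecutor extends ThreadPoolExecutor {
    LoggingThreadPoolExecutor(int corePoolSize, int maxPoolSize) {
        super(corePoolSize, maxPoolSize, 60L, TimeUnit.SECONDS,
              new LinkedBlockingQueue<Runnable>());
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        // execute()d Runnables pass their Throwable directly; submit()ted
        // tasks capture it inside the returned FutureTask, so unwrap it.
        if (t == null && r instanceof Future<?> && ((Future<?>) r).isDone()) {
            try {
                ((Future<?>) r).get();
            } catch (CancellationException ce) {
                t = ce;
            } catch (ExecutionException ee) {
                t = ee.getCause();
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
        if (t != null) {
            // A real implementation would use the Hadoop logger here.
            System.err.println("Uncaught exception in task: " + t);
        }
    }
}
```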

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.8.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-05-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reopened HADOOP-12749:
---

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>








[jira] [Created] (HADOOP-12920) The static Block#toString method should not include information from derived classes

2016-03-11 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12920:
-

 Summary: The static Block#toString method should not include 
information from derived classes
 Key: HADOOP-12920
 URL: https://issues.apache.org/jira/browse/HADOOP-12920
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


The static {{Block#toString}} method should not include information from 
derived classes.  This was a regression introduced by HDFS-9350.  Thanks to 
[~cnauroth] for finding this issue.
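A minimal sketch of the underlying pitfall (hypothetical classes, not the actual HDFS Block hierarchy): when a static formatting helper delegates to an overridable instance method, virtual dispatch pulls derived-class state into the output, which is the kind of regression described here.

```java
// Hypothetical illustration of the bug pattern, not Hadoop's real classes.
class BaseBlock {
    final long id;
    BaseBlock(long id) { this.id = id; }

    // Overridable hook that subclasses extend with their own fields.
    void appendStringTo(StringBuilder sb) { sb.append("blk_").append(id); }

    // Buggy static helper: virtual dispatch leaks derived-class info.
    static String describe(BaseBlock b) {
        StringBuilder sb = new StringBuilder();
        b.appendStringTo(sb);
        return sb.toString();
    }

    // Fixed helper: formats only base-class fields, regardless of subtype.
    static String describeBaseOnly(BaseBlock b) {
        return "blk_" + b.id;
    }
}

class LocatedBlockSketch extends BaseBlock {
    LocatedBlockSketch(long id) { super(id); }
    @Override
    void appendStringTo(StringBuilder sb) {
        super.appendStringTo(sb);
        sb.append("@rack1");  // derived-class detail the static method leaks
    }
}
```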





[jira] [Created] (HADOOP-12714) Fix hadoop-mapreduce-client-nativetask unit test which fails when glibc is not buggy

2016-01-14 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12714:
-

 Summary: Fix hadoop-mapreduce-client-nativetask unit test which 
fails when glibc is not buggy
 Key: HADOOP-12714
 URL: https://issues.apache.org/jira/browse/HADOOP-12714
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Fix the hadoop-mapreduce-client-nativetask unit test, which fails when glibc is 
not buggy.  The test attempts to open a "glibc bug spill" file that doesn't 
exist unless glibc has the bug.





[jira] [Created] (HADOOP-12712) Fix some native build warnings

2016-01-14 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12712:
-

 Summary: Fix some native build warnings
 Key: HADOOP-12712
 URL: https://issues.apache.org/jira/browse/HADOOP-12712
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


Fix some native build warnings





[jira] [Created] (HADOOP-12653) Client.java can get "Address already in use" when using kerberos and attempting to bind to any port on the local IP address

2015-12-16 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12653:
-

 Summary: Client.java can get "Address already in use" when using 
kerberos and attempting to bind to any port on the local IP address
 Key: HADOOP-12653
 URL: https://issues.apache.org/jira/browse/HADOOP-12653
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Client.java can get "Address already in use" when using kerberos and attempting 
to bind to any port on the local IP address.  It appears to be caused by the 
host running out of ports in the ephemeral range.
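A sketch of the bind in question (class and method names are hypothetical, not Hadoop's actual Client.java code): port 0 asks the kernel for a free port in the ephemeral range, and once that range is exhausted the bind fails with "Address already in use".

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical sketch: bind a client socket to an ephemeral local port, the
// kind of bind the IPC client performs before connecting. When the ephemeral
// range is exhausted, bind() throws BindException ("Address already in use").
class EphemeralBindSketch {
    static int bindEphemeral() throws IOException {
        Socket s = new Socket();
        try {
            // Port 0: let the OS pick any free port in the ephemeral range.
            s.bind(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0));
            return ((InetSocketAddress) s.getLocalSocketAddress()).getPort();
        } finally {
            s.close();
        }
    }
}
```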





[jira] [Resolved] (HADOOP-12344) Improve validateSocketPathSecurity0 error message

2015-11-06 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-12344.
---
Resolution: Fixed

> Improve validateSocketPathSecurity0 error message
> -
>
> Key: HADOOP-12344
> URL: https://issues.apache.org/jira/browse/HADOOP-12344
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Casey Brotherton
>Assignee: Casey Brotherton
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-12344.001.patch, HADOOP-12344.002.patch, 
> HADOOP-12344.003.patch, HADOOP-12344.004.patch, HADOOP-12344.patch
>
>
> When a socket path does not have the correct permissions, an error is thrown.
> That error just has the failing component of the path and not the entire path 
> of the socket.
> The entire path of the socket could be printed out to allow for a direct 
> check of the permissions of the entire path.
> {code}
> java.io.IOException: the path component: '/' is world-writable.  Its 
> permissions are 0077.  Please fix this or select a different socket path.
>   at 
> org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native 
> Method)
>   at 
> org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:189)
> ...
> {code}
> The error message could also provide the socket path:
> {code}
> java.io.IOException: the path component: '/' is world-writable.  Its 
> permissions are 0077.  Please fix this or select a different socket path than 
> '/var/run/hdfs-sockets/dn'
> {code}





[jira] [Created] (HADOOP-12560) Fix sprintf warnings in {{DomainSocket.c}} introduced by HADOOP-12344

2015-11-06 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12560:
-

 Summary: Fix sprintf warnings in {{DomainSocket.c}} introduced by 
HADOOP-12344
 Key: HADOOP-12560
 URL: https://issues.apache.org/jira/browse/HADOOP-12560
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe


Fix sprintf warnings in {{DomainSocket.c}} introduced by HADOOP-12344

{code}
 [exec] 
op/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:488:10:
 warning: format ‘%ld’ expects argument of type ‘long int’, but argument 6 has 
type ‘long long int’ [-Wformat=]
 [exec]   check, path, mode, (long long)st.st_uid, (long 
long)st.st_gid, check);
 [exec]   ^
 [exec] 
/pool/home/alanbur/bigdata/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:488:10:
 warning: format ‘%ld’ expects argument of type ‘long int’, but argument 7 has 
type ‘long long int’ [-Wformat=]
 [exec] 
/pool/home/alanbur/bigdata/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:500:10:
 warning: format ‘%ld’ expects argument of type ‘long int’, but argument 6 has 
type ‘long long int’ [-Wformat=]
 [exec]   check, check);
 [exec]   ^
 [exec] 
/pool/home/alanbur/bigdata/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:500:10:
 warning: format ‘%ld’ expects argument of type ‘long int’, but argument 7 has 
type ‘long long int’ [-Wformat=]
 [exec] 
/pool/home/alanbur/bigdata/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:513:10:
 warning: format ‘%ld’ expects argument of type ‘long int’, but argument 6 has 
type ‘long long int’ [-Wformat=]
 [exec]   (long long)uid, check, (long long)uid, check);
 [exec]   ^
 [exec] 
/pool/home/alanbur/bigdata/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:513:10:
 warning: format ‘%ld’ expects argument of type ‘long int’, but argument 7 has 
type ‘long long int’ [-Wformat=]
 [exec] 
/pool/home/alanbur/bigdata/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:513:10:
 warning: format ‘%ld’ expects argument of type ‘long int’, but argument 8 has 
type ‘long long int’ [-Wformat=]
 [exec] 
/pool/home/alanbur/bigdata/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c:513:10:
 warning: format ‘%ld’ expects argument of type ‘long int’, but argument 10 has 
type ‘long long int’ [-Wformat=]
{code}





[jira] [Created] (HADOOP-12447) Clean up some htrace integration issues

2015-09-29 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12447:
-

 Summary: Clean up some htrace integration issues
 Key: HADOOP-12447
 URL: https://issues.apache.org/jira/browse/HADOOP-12447
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Clean up some htrace integration issues





[jira] [Resolved] (HADOOP-7824) Native IO uses wrong constants almost everywhere

2015-07-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-7824.
--
   Resolution: Fixed
Fix Version/s: 2.8.0

committed to 2.8, thanks!

 Native IO uses wrong constants almost everywhere
 

 Key: HADOOP-7824
 URL: https://issues.apache.org/jira/browse/HADOOP-7824
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.20.204.0, 0.20.205.0, 1.0.3, 0.23.0, 2.0.0-alpha, 3.0.0
 Environment: Mac OS X, Linux, Solaris, Windows, ... 
Reporter: Dmytro Shteflyuk
Assignee: Martin Walsh
  Labels: hadoop
 Fix For: 2.8.0

 Attachments: HADOOP-7824.001.patch, HADOOP-7824.002.patch, 
 HADOOP-7824.patch, HADOOP-7824.patch, hadoop-7824.txt


 Constants like O_CREAT, O_EXCL, etc. have different values on OS X and many 
 other operating systems.





[jira] [Reopened] (HADOOP-7824) Native IO uses wrong constants almost everywhere

2015-07-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reopened HADOOP-7824:
--

 Native IO uses wrong constants almost everywhere
 

 Key: HADOOP-7824
 URL: https://issues.apache.org/jira/browse/HADOOP-7824
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.20.204.0, 0.20.205.0, 1.0.3, 0.23.0, 2.0.0-alpha, 3.0.0
 Environment: Mac OS X, Linux, Solaris, Windows, ... 
Reporter: Dmytro Shteflyuk
Assignee: Martin Walsh
  Labels: hadoop
 Fix For: 2.8.0

 Attachments: HADOOP-7824.001.patch, HADOOP-7824.002.patch, 
 HADOOP-7824.patch, HADOOP-7824.patch, hadoop-7824.txt


 Constants like O_CREAT, O_EXCL, etc. have different values on OS X and many 
 other operating systems.





[jira] [Created] (HADOOP-12201) Add tracing to FileSystem#createFileSystem and Globber#glob

2015-07-07 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12201:
-

 Summary: Add tracing to FileSystem#createFileSystem and 
Globber#glob
 Key: HADOOP-12201
 URL: https://issues.apache.org/jira/browse/HADOOP-12201
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Add tracing to FileSystem#createFileSystem and Globber#glob





[jira] [Created] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12171:
-

 Summary: Shorten overly-long htrace span names for server
 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Shorten overly-long htrace span names for the server.  For example, 
{{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
{{ClientProtocol#create}} instead.
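The proposed shortening keeps only the simple class name and the method, joined by '#'. A sketch of that transform (class and method names are hypothetical):

```java
// Hypothetical sketch: shorten a fully qualified RPC method name such as
// "org.apache.hadoop.hdfs.protocol.ClientProtocol.create" to the proposed
// "ClientProtocol#create" form.
class SpanNameSketch {
    static String shorten(String fqMethod) {
        int methodDot = fqMethod.lastIndexOf('.');
        if (methodDot < 0) {
            return fqMethod;  // no package/class prefix to strip
        }
        int classDot = fqMethod.lastIndexOf('.', methodDot - 1);
        // Keep the simple class name and the method, joined by '#'.
        return fqMethod.substring(classDot + 1, methodDot) + "#"
            + fqMethod.substring(methodDot + 1);
    }
}
```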





[jira] [Created] (HADOOP-12124) Add HTrace support for FsShell

2015-06-26 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12124:
-

 Summary: Add HTrace support for FsShell
 Key: HADOOP-12124
 URL: https://issues.apache.org/jira/browse/HADOOP-12124
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Add HTrace support for FsShell





[jira] [Resolved] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-06-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-12036.
---
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0

 Consolidate all of the cmake extensions in one directory
 

 Key: HADOOP-12036
 URL: https://issues.apache.org/jira/browse/HADOOP-12036
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Assignee: Alan Burlison
 Fix For: 2.8.0

 Attachments: HADOOP-12036.001.patch, HADOOP-12036.002.patch, 
 HADOOP-12036.004.patch, HADOOP-12036.005.patch


 Rather than have a half-dozen redefinitions, custom extensions, etc, we 
 should move them all to one location so that the cmake environment is 
 consistent between the various native components.





[jira] [Resolved] (HADOOP-11997) CMake CMAKE_C_FLAGS are non-portable

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-11997.
---
Resolution: Duplicate

 CMake CMAKE_C_FLAGS are non-portable
 

 Key: HADOOP-11997
 URL: https://issues.apache.org/jira/browse/HADOOP-11997
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Critical

 hadoop-common-project/hadoop-common/src/CMakeLists.txt 
 (https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt#L110)
  contains the following unconditional assignments to CMAKE_C_FLAGS:
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -g -Wall -O2)
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -D_REENTRANT -D_GNU_SOURCE)
 set(CMAKE_C_FLAGS ${CMAKE_C_FLAGS} -D_LARGEFILE_SOURCE 
 -D_FILE_OFFSET_BITS=64)
 There are several issues here:
 1. -D_GNU_SOURCE globally enables the use of all Linux-only extensions in 
 hadoop-common native source. This is probably a major contributor to the poor 
 cross-platform portability of Hadoop native code to non-Linux platforms as it 
 makes it easy for developers to use non-portable Linux features without 
 realising. Use of Linux-specific features should be correctly bracketed with 
 conditional macro blocks that provide an alternative for non-Linux platforms.
 2. -g -Wall -O2 turns on debugging for all builds, I believe the correct 
 mechanism is to set the CMAKE_BUILD_TYPE CMake variable. If it is still 
 necessary to override CFLAGS it should probably be done conditionally 
 dependent on the value of CMAKE_BUILD_TYPE.
 3. -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 On Solaris these flags are 
 only needed for largefile support in ILP32 applications, LP64 applications 
 are largefile by default. I believe the same is true on Linux, so these flags 
 are harmless but redundant for 64-bit compilation.





[jira] [Resolved] (HADOOP-11975) Native code needs to be built to match the 32/64 bitness of the JVM

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-11975.
---
Resolution: Duplicate

 Native code needs to be built to match the 32/64 bitness of the JVM
 ---

 Key: HADOOP-11975
 URL: https://issues.apache.org/jira/browse/HADOOP-11975
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
 Environment: Solaris
Reporter: Alan Burlison
Assignee: Alan Burlison

 When building with a 64-bit JVM on Solaris the following error occurs at the 
 link stage of building the native code:
  [exec] ld: fatal: file 
 /usr/jdk/instances/jdk1.8.0/jre/lib/amd64/server/libjvm.so: wrong ELF class: 
 ELFCLASS64
  [exec] collect2: error: ld returned 1 exit status
  [exec] make[2]: *** [target/usr/local/lib/libhadoop.so.1.0.0] Error 1
  [exec] make[1]: *** [CMakeFiles/hadoop.dir/all] Error 2
 The compilation flags in the makefiles need to explicitly state if 32 or 64 
 bit code is to be generated, to match the JVM.





[jira] [Resolved] (HADOOP-11987) JNI build should use default cmake FindJNI.cmake

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-11987.
---
Resolution: Duplicate

 JNI build should use default cmake FindJNI.cmake
 

 Key: HADOOP-11987
 URL: https://issues.apache.org/jira/browse/HADOOP-11987
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
 Environment: All
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Minor

 From 
 http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201505.mbox/%3C55568DAC.1040303%40oracle.com%3E
 --
 Why does  hadoop-common-project/hadoop-common/src/CMakeLists.txt use 
 JNIFlags.cmake in the same directory to set things up for JNI 
 compilation rather than FindJNI.cmake, which comes as a standard cmake 
 module? The checks in JNIFlags.cmake make several assumptions that I 
 believe are only correct on Linux whereas I'd expect FindJNI.cmake to be 
 more platform-independent.
 --
 Just checked the repo of cmake and it turns out that FindJNI.cmake is
 available even before cmake 2.4. I think it makes sense to file a bug
 to replace it to the standard cmake module. Can you please file a jira
 for this?
 --
 This also applies to 
 hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/JNIFlags.cmake





[jira] [Resolved] (HADOOP-10116) fix inconsistent synchronization warnings in ZlibCompressor

2015-04-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10116.
---
Resolution: Duplicate

Yeah, let's close it.

 fix inconsistent synchronization warnings in ZlibCompressor
 -

 Key: HADOOP-10116
 URL: https://issues.apache.org/jira/browse/HADOOP-10116
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe

 Fix findbugs warnings in ZlibCompressor.  I believe these were introduced by 
 HADOOP-10047.
 {code}
 Code  Warning
 ISInconsistent synchronization of 
 org.apache.hadoop.io.compress.zlib.ZlibCompressor.keepUncompressedBuf; locked 
 57% of time
 ISInconsistent synchronization of 
 org.apache.hadoop.io.compress.zlib.ZlibCompressor.userBuf; locked 60% of time
 ISInconsistent synchronization of 
 org.apache.hadoop.io.compress.zlib.ZlibCompressor.userBufLen; locked 85% of 
 time
 ISInconsistent synchronization of 
 org.apache.hadoop.io.compress.zlib.ZlibDecompressor.userBuf; locked 60% of 
 time
 ISInconsistent synchronization of 
 org.apache.hadoop.io.compress.zlib.ZlibDecompressor.userBufLen; locked 77% of 
 time
 Dodgy Warnings
 Code  Warning
 DLS   Dead store to pos2 in 
 org.apache.hadoop.io.compress.zlib.ZlibCompressor.put(ByteBuffer, ByteBuffer)
 DLS   Dead store to pos2 in 
 org.apache.hadoop.io.compress.zlib.ZlibDecompressor.put(ByteBuffer, 
 ByteBuffer)
 {code}





[jira] [Created] (HADOOP-11714) Add more trace log4j messages to SpanReceiverHost

2015-03-13 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11714:
-

 Summary: Add more trace log4j messages to SpanReceiverHost
 Key: HADOOP-11714
 URL: https://issues.apache.org/jira/browse/HADOOP-11714
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tracing
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


Add more trace log4j messages to SpanReceiverHost





[jira] [Resolved] (HADOOP-11611) fix TestHTracedRESTReceiver unit test failures

2015-02-18 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-11611.
---
Resolution: Fixed

wrong project

 fix TestHTracedRESTReceiver unit test failures
 --

 Key: HADOOP-11611
 URL: https://issues.apache.org/jira/browse/HADOOP-11611
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.2
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Critical

 Fix some issues with HTracedRESTReceiver that are resulting in unit test 
 failures.
 There were two main issues:
 * a better way to launch htraced
 * fixes to the HTracedRESTReceiver logic





[jira] [Created] (HADOOP-11611) fix TestHTracedRESTReceiver unit test failures

2015-02-18 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11611:
-

 Summary: fix TestHTracedRESTReceiver unit test failures
 Key: HADOOP-11611
 URL: https://issues.apache.org/jira/browse/HADOOP-11611
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.2
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Critical


Fix some issues with HTracedRESTReceiver that are resulting in unit test 
failures.

There were two main issues:
* a better way to launch htraced
* fixes to the HTracedRESTReceiver logic





[jira] [Created] (HADOOP-11505) hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some cases

2015-01-22 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11505:
-

 Summary: hadoop-mapreduce-client-nativetask fails to use x86 
optimizations in some cases
 Key: HADOOP-11505
 URL: https://issues.apache.org/jira/browse/HADOOP-11505
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


hadoop-mapreduce-client-nativetask fails to use x86 optimizations in some 
cases.  Also, on some architectures other than x86 and ARM, the generated code 
is incorrect.  Thanks to Edward Nevill for finding this.





[jira] [Created] (HADOOP-11474) jenkins gives -1 overall even when nothing is wrong

2015-01-12 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11474:
-

 Summary: jenkins gives -1 overall even when nothing is wrong
 Key: HADOOP-11474
 URL: https://issues.apache.org/jira/browse/HADOOP-11474
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe








[jira] [Resolved] (HADOOP-11474) jenkins gives -1 overall even when nothing is wrong

2015-01-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-11474.
---
Resolution: Duplicate

 jenkins gives -1 overall even when nothing is wrong
 -

 Key: HADOOP-11474
 URL: https://issues.apache.org/jira/browse/HADOOP-11474
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe

 It looks like test-patch.sh is giving -1 overall even when nothing is wrong.





[jira] [Created] (HADOOP-11416) Move ChunkedArrayList into hadoop-common

2014-12-16 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11416:
-

 Summary: Move ChunkedArrayList into hadoop-common
 Key: HADOOP-11416
 URL: https://issues.apache.org/jira/browse/HADOOP-11416
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Move ChunkedArrayList into hadoop-common so that it can be used by classes in 
hadoop-common, not just hdfs





[jira] [Created] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11410:
-

 Summary: make the rpath of libhadoop.so configurable 
 Key: HADOOP-11410
 URL: https://issues.apache.org/jira/browse/HADOOP-11410
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


We should make the rpath of {{libhadoop.so}} configurable, so that we can use a 
different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily used 
to control where {{dlopen}} looks for shared libraries by default.





[jira] [Created] (HADOOP-11402) Negative user-to-group cache entries are never cleared for never-again-accessed users

2014-12-12 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11402:
-

 Summary: Negative user-to-group cache entries are never cleared 
for never-again-accessed users
 Key: HADOOP-11402
 URL: https://issues.apache.org/jira/browse/HADOOP-11402
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe


Negative user-to-group cache entries are never cleared for never-again-accessed 
users.  We should have a background thread that runs very infrequently and 
removes these expired entries.
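The suggested fix could look roughly like the sketch below (class and method names are hypothetical, not the eventual Hadoop change): each negative entry records an expiry time, and an infrequent background sweep drops entries whose TTL has passed even if the user is never looked up again.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a negative user-to-group cache with a background
// sweep. In a real fix, sweep() would run on an infrequent timer thread;
// here it is a plain method so the eviction logic is visible.
class NegativeUserCacheSketch {
    private final ConcurrentHashMap<String, Long> expiryByUser =
        new ConcurrentHashMap<>();
    private final long ttlMs;

    NegativeUserCacheSketch(long ttlMs) { this.ttlMs = ttlMs; }

    void markNegative(String user) {
        expiryByUser.put(user, System.currentTimeMillis() + ttlMs);
    }

    boolean isNegative(String user) {
        Long expiry = expiryByUser.get(user);
        return expiry != null && expiry > System.currentTimeMillis();
    }

    // The background sweep: removes expired entries unconditionally, so a
    // user that is never looked up again still gets evicted.
    void sweep() {
        long now = System.currentTimeMillis();
        expiryByUser.values().removeIf(expiry -> expiry <= now);
    }
}
```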





[jira] [Created] (HADOOP-11255) listLocatedStatus does not support cross-filesystem symlinks

2014-10-31 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11255:
-

 Summary: listLocatedStatus does not support cross-filesystem 
symlinks
 Key: HADOOP-11255
 URL: https://issues.apache.org/jira/browse/HADOOP-11255
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


listLocatedStatus does not properly support cross-filesystem symlinks.  This is 
because the two-argument form of {{FileSystem#listLocatedStatus}} is protected, 
so when one filesystem needs to call its counterpart in another filesystem, 
Java's access rules prevent the call.





[jira] [Created] (HADOOP-11197) Make sure the build fails if findbugs fails

2014-10-13 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11197:
-

 Summary: Make sure the build fails if findbugs fails
 Key: HADOOP-11197
 URL: https://issues.apache.org/jira/browse/HADOOP-11197
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Priority: Minor


test-patch.sh should complain about missing findbugs files.  HADOOP-11178 
demonstrates that the findbugs build can sometimes be incomplete currently 
without raising red flags.





[jira] [Created] (HADOOP-11186) documentation should talk about hadoop.htrace.spanreceiver.classes, not hadoop.trace.spanreceiver.classes

2014-10-09 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11186:
-

 Summary: documentation should talk about 
hadoop.htrace.spanreceiver.classes, not hadoop.trace.spanreceiver.classes
 Key: HADOOP-11186
 URL: https://issues.apache.org/jira/browse/HADOOP-11186
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


The documentation should talk about hadoop.htrace.spanreceiver.classes, not 
hadoop.trace.spanreceiver.classes (note the "h" in "htrace").





[jira] [Created] (HADOOP-11079) Hadoop tests should run with PerformanceAdvisory logging at maximum

2014-09-09 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11079:
-

 Summary: Hadoop tests should run with PerformanceAdvisory logging 
at maximum
 Key: HADOOP-11079
 URL: https://issues.apache.org/jira/browse/HADOOP-11079
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Priority: Minor


It would be nice if Hadoop tests would run with PerformanceAdvisory logging at 
maximum.  This would let us know at a glance that tests were or were not 
running with native libraries enabled, and other performance-enhancing features.





[jira] [Created] (HADOOP-11050) hconf.c: fix bug where we would sometimes not try to load multiple XML files from the same path

2014-09-02 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11050:
-

 Summary: hconf.c: fix bug where we would sometimes not try to load 
multiple XML files from the same path
 Key: HADOOP-11050
 URL: https://issues.apache.org/jira/browse/HADOOP-11050
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


hconf.c: fix bug where we would sometimes not try to load multiple XML files 
from the same path





[jira] [Created] (HADOOP-11051) implement ndfs_get_hosts

2014-09-02 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11051:
-

 Summary: implement ndfs_get_hosts
 Key: HADOOP-11051
 URL: https://issues.apache.org/jira/browse/HADOOP-11051
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Implement ndfs_get_hosts, the hadoop native client version of getHosts.





[jira] [Resolved] (HADOOP-10958) TestGlobPaths should do more tests of globbing by unprivileged users

2014-08-27 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10958.
---

Resolution: Duplicate

I rolled this fix into HADOOP-10957 at Daryn's request.

 TestGlobPaths should do more tests of globbing by unprivileged users
 

 Key: HADOOP-10958
 URL: https://issues.apache.org/jira/browse/HADOOP-10958
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Colin Patrick McCabe

 TestGlobPaths should do more tests of globbing by unprivileged users.  Right 
 now, most of the tests are of globbing by the superuser, but this tends to 
 hide permission exception issues such as HADOOP-10957.  We should keep a few 
 tests operating with privileged globs, but do most of them unprivileged.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10985) native client: split ndfs.c into meta, file, util, and permission

2014-08-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10985.
---

   Resolution: Fixed
Fix Version/s: HADOOP-10388

 native client: split ndfs.c into meta, file, util, and permission
 -

 Key: HADOOP-10985
 URL: https://issues.apache.org/jira/browse/HADOOP-10985
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HADOOP-10388

 Attachments: HADOOP-10985.001.patch


 Split ndfs.c into meta.c, file.c, util.c, and permission.c.





[jira] [Created] (HADOOP-10981) native client: parse Hadoop permission strings

2014-08-19 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10981:
-

 Summary: native client: parse Hadoop permission strings
 Key: HADOOP-10981
 URL: https://issues.apache.org/jira/browse/HADOOP-10981
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe


The native client should parse Hadoop permission strings like 
PermissionParser.java does.





[jira] [Created] (HADOOP-10985) native client: split ndfs.c into meta, file, util, and permission

2014-08-19 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10985:
-

 Summary: native client: split ndfs.c into meta, file, util, and 
permission
 Key: HADOOP-10985
 URL: https://issues.apache.org/jira/browse/HADOOP-10985
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Split ndfs.c into meta.c, file.c, util.c, and permission.c.





[jira] [Resolved] (HADOOP-10877) native client: implement hdfsMove and hdfsCopy

2014-08-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10877.
---

   Resolution: Fixed
Fix Version/s: HADOOP-10388

committed, thanks for the review

 native client: implement hdfsMove and hdfsCopy
 --

 Key: HADOOP-10877
 URL: https://issues.apache.org/jira/browse/HADOOP-10877
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HADOOP-10388

 Attachments: HADOOP-10877-pnative.001.patch, 
 HADOOP-10877-pnative.002.patch


 In the pure native client, we need to implement {{hdfsMove}} and 
 {{hdfsCopy}}.  These are basically recursive copy functions (in the Java 
 code, move is copy with a delete at the end).





[jira] [Resolved] (HADOOP-10725) Implement listStatus and getFileInfo in the native client

2014-08-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10725.
---

   Resolution: Fixed
Fix Version/s: HADOOP-10388

 Implement listStatus and getFileInfo in the native client
 -

 Key: HADOOP-10725
 URL: https://issues.apache.org/jira/browse/HADOOP-10725
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HADOOP-10388

 Attachments: HADOOP-10725-pnative.001.patch, 
 HADOOP-10725-pnative.002.patch, HADOOP-10725-pnative.003.patch, 
 HADOOP-10725-pnative.004.patch


 Implement listStatus and getFileInfo in the native client.





[jira] [Created] (HADOOP-10958) TestGlobPaths should do more tests of globbing by unprivileged users

2014-08-11 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10958:
-

 Summary: TestGlobPaths should do more tests of globbing by 
unprivileged users
 Key: HADOOP-10958
 URL: https://issues.apache.org/jira/browse/HADOOP-10958
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Colin Patrick McCabe


TestGlobPaths should do more tests of globbing by unprivileged users.  Right 
now, most of the tests are of globbing by the superuser, but this tends to hide 
permission exception issues such as HADOOP-10957.  We should keep a few tests 
operating with privileged globs, but do most of them unprivileged.





[jira] [Resolved] (HADOOP-10818) native client: refactor URI code to be clearer

2014-07-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10818.
---

  Resolution: Fixed
   Fix Version/s: HADOOP-10388
Target Version/s: HADOOP-10388

 native client: refactor URI code to be clearer
 --

 Key: HADOOP-10818
 URL: https://issues.apache.org/jira/browse/HADOOP-10818
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HADOOP-10388

 Attachments: HADOOP-10818-pnative.001.patch, 
 HADOOP-10818-pnative.002.patch


 Refactor the {{common/uri.c}} code to be a bit clearer.  We should just be 
 able to refer to user_info, auth, port, path, etc. fields in the structure, 
 rather than calling accessors.  {{hdfsBuilder}} should just have a connection 
 URI rather than separate fields for all these things.





[jira] [Created] (HADOOP-10877) native client: implement hdfsMove and hdfsCopy

2014-07-22 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10877:
-

 Summary: native client: implement hdfsMove and hdfsCopy
 Key: HADOOP-10877
 URL: https://issues.apache.org/jira/browse/HADOOP-10877
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


In the pure native client, we need to implement {{hdfsMove}} and {{hdfsCopy}}.  
These are basically recursive copy functions (in the Java code, move is copy 
with a delete at the end).





[jira] [Created] (HADOOP-10871) incorrect prototype in OpensslSecureRandom.c

2014-07-21 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10871:
-

 Summary: incorrect prototype in OpensslSecureRandom.c
 Key: HADOOP-10871
 URL: https://issues.apache.org/jira/browse/HADOOP-10871
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: util
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-10871-fs-enc.001.patch

There is an incorrect prototype in OpensslSecureRandom.c.

{code}
/home/cmccabe/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c:160:3: warning: call to function ‘openssl_rand_init’ without a real prototype [-Wunprototyped-calls]
{code}





[jira] [Resolved] (HADOOP-10870) Failed to load OpenSSL cipher error logs on systems with old openssl versions

2014-07-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10870.
---

  Resolution: Fixed
   Fix Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)
Target Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)

 Failed to load OpenSSL cipher error logs on systems with old openssl versions
 -

 Key: HADOOP-10870
 URL: https://issues.apache.org/jira/browse/HADOOP-10870
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Stephen Chu
Assignee: Colin Patrick McCabe
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10870-fs-enc.001.patch


 I built Hadoop from fs-encryption branch and deployed Hadoop (without 
 enabling any security confs) on a Centos 6.4 VM with an old version of 
 openssl.
 {code}
 [root@schu-enc hadoop-common]# rpm -qa | grep openssl
 openssl-1.0.0-27.el6_4.2.x86_64
 openssl-devel-1.0.0-27.el6_4.2.x86_64
 {code}
 When I try to do a simple hadoop fs -ls, I get
 {code}
 [hdfs@schu-enc hadoop-common]$ hadoop fs -ls
 2014-07-21 19:35:14,486 ERROR [main] crypto.OpensslCipher 
 (OpensslCipher.java:clinit(87)) - Failed to load OpenSSL Cipher.
 java.lang.UnsatisfiedLinkError: Cannot find AES-CTR support, is your version 
 of Openssl new enough?
   at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
   at 
 org.apache.hadoop.crypto.OpensslCipher.clinit(OpensslCipher.java:84)
   at 
 org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.init(OpensslAesCtrCryptoCodec.java:50)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
   at org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:55)
   at org.apache.hadoop.hdfs.DFSClient.init(DFSClient.java:591)
   at org.apache.hadoop.hdfs.DFSClient.init(DFSClient.java:561)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2590)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2624)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2606)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:228)
   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:211)
   at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
 2014-07-21 19:35:14,495 WARN  [main] crypto.CryptoCodec 
 (CryptoCodec.java:getInstance(66)) - Crypto codec 
 org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec is not available.
 {code}
 It would be an improvement to clean up/shorten this error log.
 hadoop checknative shows the error as well:
 {code}
 [hdfs@schu-enc ~]$ hadoop checknative
 2014-07-21 19:38:38,376 INFO  [main] bzip2.Bzip2Factory 
 (Bzip2Factory.java:isNativeBzip2Loaded(70)) - Successfully loaded & 
 initialized native-bzip2 library system-native
 2014-07-21 19:38:38,395 INFO  [main] zlib.ZlibFactory 
 (ZlibFactory.java:clinit(49)) - Successfully loaded & initialized 
 native-zlib library
 2014-07-21 19:38:38,411 ERROR [main] crypto.OpensslCipher 
 (OpensslCipher.java:clinit(87)) - Failed to load OpenSSL Cipher.
 java.lang.UnsatisfiedLinkError: Cannot find AES-CTR support, is your version 
 of Openssl new enough?
   at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
   at 
 org.apache.hadoop.crypto.OpensslCipher.clinit(OpensslCipher.java:84)
   at 
 

[jira] [Resolved] (HADOOP-10871) incorrect prototype in OpensslSecureRandom.c

2014-07-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10871.
---

   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   fs-encryption (HADOOP-10150 and HDFS-6134)

 incorrect prototype in OpensslSecureRandom.c
 

 Key: HADOOP-10871
 URL: https://issues.apache.org/jira/browse/HADOOP-10871
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: util
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10871-fs-enc.001.patch


 There is an incorrect prototype in OpensslSecureRandom.c.
 {code}
 /home/cmccabe/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c:160:3: warning: call to function ‘openssl_rand_init’ without a real prototype [-Wunprototyped-calls]
 {code}





[jira] [Resolved] (HADOOP-3983) compile-c++ should honor the jvm size in compiling the c++ code

2014-07-18 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-3983.
--

   Resolution: Fixed
Fix Version/s: 2.0.0-alpha

In the CMake build, we now use JVM_ARCH_DATA_MODEL to determine whether to 
build 32-bit or 64-bit libraries, so no, this isn't an issue any more.

 compile-c++ should honor the jvm size in compiling the c++ code
 ---

 Key: HADOOP-3983
 URL: https://issues.apache.org/jira/browse/HADOOP-3983
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Owen O'Malley
  Labels: newbie
 Fix For: 2.0.0-alpha


 The build scripts for compile-c++ and compile-c++ -examples should honor the 
 word size of the jvm, since it is in the platform name. Currently, the 
 platform names are Linux-amd64-64 or Linux-i386-32, but the C++ is always 
 compiled in the platform default size.





[jira] [Resolved] (HADOOP-10806) ndfs: need to implement umask, pass permission bits to hdfsCreateDirectory

2014-07-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10806.
---

   Resolution: Fixed
Fix Version/s: HADOOP-10388

 ndfs: need to implement umask, pass permission bits to hdfsCreateDirectory
 --

 Key: HADOOP-10806
 URL: https://issues.apache.org/jira/browse/HADOOP-10806
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HADOOP-10388

 Attachments: HADOOP-10806-pnative.001.patch, 
 HADOOP-10806-pnative.002.patch


 We need to pass in permission bits to {{hdfsCreateDirectory}}.  Also, we need 
 to read {{fs.permissions.umask-mode}} so that we know what to mask off of the 
 permission bits (umask is always implemented client-side)





[jira] [Created] (HADOOP-10818) native client: refactor URI code to be clearer

2014-07-11 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10818:
-

 Summary: native client: refactor URI code to be clearer
 Key: HADOOP-10818
 URL: https://issues.apache.org/jira/browse/HADOOP-10818
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Refactor the {{common/uri.c}} code to be a bit clearer.  We should just be able 
to refer to user_info, auth, port, path, etc. fields in the structure, rather 
than calling accessors.  {{hdfsBuilder}} should just have a connection URI 
rather than separate fields for all these things.





[jira] [Resolved] (HADOOP-10734) Implement high-performance secure random number sources

2014-07-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10734.
---

Resolution: Fixed

 Implement high-performance secure random number sources
 ---

 Key: HADOOP-10734
 URL: https://issues.apache.org/jira/browse/HADOOP-10734
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10734-fs-enc.004.patch, HADOOP-10734.1.patch, 
 HADOOP-10734.2.patch, HADOOP-10734.3.patch, HADOOP-10734.4.patch, 
 HADOOP-10734.5.patch, HADOOP-10734.patch


 This JIRA is to implement Secure random using JNI to OpenSSL, and 
 implementation should be thread-safe.
 Utilize RdRand to return random numbers from the hardware random number 
 generator. It is a TRNG (True Random Number Generator) with much higher 
 performance than {{java.security.SecureRandom}}. 
 https://wiki.openssl.org/index.php/Random_Numbers
 http://en.wikipedia.org/wiki/RdRand
 https://software.intel.com/en-us/articles/performance-impact-of-intel-secure-key-on-openssl





[jira] [Created] (HADOOP-10805) ndfs hdfsDelete should check the return boolean

2014-07-09 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10805:
-

 Summary: ndfs hdfsDelete should check the return boolean
 Key: HADOOP-10805
 URL: https://issues.apache.org/jira/browse/HADOOP-10805
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


The delete RPC to the NameNode returns a boolean.  We need to check this in the 
pure native client to ensure that the delete actually succeeded.





[jira] [Resolved] (HADOOP-10805) ndfs hdfsDelete should check the return boolean

2014-07-09 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10805.
---

  Resolution: Fixed
   Fix Version/s: HADOOP-10388
Target Version/s: HADOOP-10388

 ndfs hdfsDelete should check the return boolean
 ---

 Key: HADOOP-10805
 URL: https://issues.apache.org/jira/browse/HADOOP-10805
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HADOOP-10388

 Attachments: HADOOP-10805-pnative.001.patch


 The delete RPC to the NameNode returns a boolean.  We need to check this in 
 the pure native client to ensure that the delete actually succeeded.





[jira] [Resolved] (HADOOP-10785) UnsatisfiedLinkError in cryptocodec tests with OpensslCipher#initContext

2014-07-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10785.
---

Resolution: Duplicate

 UnsatisfiedLinkError in cryptocodec tests with OpensslCipher#initContext
 

 Key: HADOOP-10785
 URL: https://issues.apache.org/jira/browse/HADOOP-10785
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Fix For: 3.0.0


 {noformat}
 java.lang.UnsatisfiedLinkError: 
 org.apache.hadoop.crypto.OpensslCipher.initContext(II)J
 at org.apache.hadoop.crypto.OpensslCipher.initContext(Native Method)
 at 
 org.apache.hadoop.crypto.OpensslCipher.getInstance(OpensslCipher.java:90)
 at 
 org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:73)
 at 
 org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.createEncryptor(OpensslAesCtrCryptoCodec.java:53)
 at 
 org.apache.hadoop.crypto.CryptoOutputStream.init(CryptoOutputStream.java:95)
 at 
 org.apache.hadoop.crypto.CryptoOutputStream.init(CryptoOutputStream.java:79)
 {noformat}





[jira] [Resolved] (HADOOP-10693) Implementation of AES-CTR CryptoCodec using JNI to OpenSSL

2014-07-03 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10693.
---

Resolution: Fixed

committed to fs-encryption branch

 Implementation of AES-CTR CryptoCodec using JNI to OpenSSL
 --

 Key: HADOOP-10693
 URL: https://issues.apache.org/jira/browse/HADOOP-10693
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)

 Attachments: HADOOP-10693.1.patch, HADOOP-10693.2.patch, 
 HADOOP-10693.3.patch, HADOOP-10693.4.patch, HADOOP-10693.5.patch, 
 HADOOP-10693.6.patch, HADOOP-10693.7.patch, HADOOP-10693.8.patch, 
 HADOOP-10693.patch


 In HADOOP-10603, we have an implementation of an AES-CTR CryptoCodec using a 
 Java JCE provider. 
 To get high performance, the configured JCE provider should use native code 
 and AES-NI, but in JDK 6 and 7 the embedded Java provider doesn't support this.
  
 Since not all Hadoop users will use a provider like Diceros or be able to get 
 a signed certificate from Oracle to develop a custom provider, this JIRA will 
 add an implementation of an AES-CTR CryptoCodec using JNI to OpenSSL directly.





[jira] [Resolved] (HADOOP-10667) implement TCP connection reuse for native client

2014-06-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10667.
---

   Resolution: Fixed
Fix Version/s: HADOOP-10388

 implement TCP connection reuse for native client
 

 Key: HADOOP-10667
 URL: https://issues.apache.org/jira/browse/HADOOP-10667
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HADOOP-10388

 Attachments: HADOOP-10667-pnative.001.patch, 
 HADOOP-10667-pnative.002.patch, HADOOP-10667-pnative.003.patch, 
 HADOOP-10667-pnative.004.patch


 The HDFS / YARN native clients should re-use TCP connections to avoid the 
 overhead of the three-way handshake, similar to how the Java code does.





[jira] [Created] (HADOOP-10725) Implement listStatus and getFileInfo in the native client

2014-06-19 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10725:
-

 Summary: Implement listStatus and getFileInfo in the native client
 Key: HADOOP-10725
 URL: https://issues.apache.org/jira/browse/HADOOP-10725
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Implement listStatus and getFileInfo in the native client.





[jira] [Resolved] (HADOOP-10706) Fix initialization of hrpc_sync_ctx

2014-06-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10706.
---

  Resolution: Fixed
   Fix Version/s: HADOOP-10388
Target Version/s: HADOOP-10388

 Fix initialization of hrpc_sync_ctx
 ---

 Key: HADOOP-10706
 URL: https://issues.apache.org/jira/browse/HADOOP-10706
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Binglin Chang
Assignee: Binglin Chang
 Fix For: HADOOP-10388

 Attachments: HADOOP-10706.v1.patch


 1. 
 {code}
 memset(ctx, 0, sizeof(ctx));
  return ctx;
 {code}
 Doing this will always make the return value 0.
 2.
 hrpc_release_sync_ctx should be changed to hrpc_proxy_release_sync_ctx; all the 
 functions in this .h/.c file follow this rule.





[jira] [Resolved] (HADOOP-10640) Implement Namenode RPCs in HDFS native client

2014-06-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10640.
---

   Resolution: Fixed
Fix Version/s: HADOOP-10388

 Implement Namenode RPCs in HDFS native client
 -

 Key: HADOOP-10640
 URL: https://issues.apache.org/jira/browse/HADOOP-10640
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HADOOP-10388

 Attachments: HADOOP-10640-pnative.001.patch, 
 HADOOP-10640-pnative.002.patch, HADOOP-10640-pnative.003.patch, 
 HADOOP-10640-pnative.004.patch


 Implement the parts of libhdfs that just involve making RPCs to the Namenode, 
 such as mkdir, rename, etc.





[jira] [Resolved] (HADOOP-10447) Implement C code for parsing Hadoop / HDFS URIs

2014-06-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10447.
---

Resolution: Duplicate
  Assignee: Colin Patrick McCabe

 Implement C code for parsing Hadoop / HDFS URIs
 ---

 Key: HADOOP-10447
 URL: https://issues.apache.org/jira/browse/HADOOP-10447
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe

 We need some glue code for parsing Hadoop or HDFS URIs in C.  Probably we 
 should just put a small 'Path' wrapper around a URI parsing library like 
 http://uriparser.sourceforge.net/ (BSD licensed)





[jira] [Resolved] (HADOOP-10446) native code for reading Hadoop configuration XML files

2014-06-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10446.
---

  Resolution: Duplicate
   Fix Version/s: HADOOP-10388
Assignee: Colin Patrick McCabe
Target Version/s: HADOOP-10388

 native code for reading Hadoop configuration XML files
 --

 Key: HADOOP-10446
 URL: https://issues.apache.org/jira/browse/HADOOP-10446
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HADOOP-10388


 We need to have a way to read Hadoop configuration XML files in the native 
 HDFS and YARN clients.  This will allow those clients to discover the 
 locations of NameNodes, YARN daemons, and other configuration settings, etc. 
 etc.





[jira] [Created] (HADOOP-10667) implement TCP connection reuse for native client

2014-06-06 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10667:
-

 Summary: implement TCP connection reuse for native client
 Key: HADOOP-10667
 URL: https://issues.apache.org/jira/browse/HADOOP-10667
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


The HDFS / YARN native clients should re-use TCP connections to avoid the 
overhead of the three-way handshake, similar to how the Java code does.





[jira] [Resolved] (HADOOP-10624) Fix some minor typos and add more test cases for hadoop_err

2014-05-30 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10624.
---

   Resolution: Fixed
Fix Version/s: HADOOP-10388

committed, thanks!

 Fix some minor typos and add more test cases for hadoop_err
 ---

 Key: HADOOP-10624
 URL: https://issues.apache.org/jira/browse/HADOOP-10624
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-10388
Reporter: Wenwu Peng
Assignee: Wenwu Peng
 Fix For: HADOOP-10388

 Attachments: HADOOP-10624-pnative.001.patch, 
 HADOOP-10624-pnative.002.patch, HADOOP-10624-pnative.003.patch, 
 HADOOP-10624-pnative.004.patch


 Changes:
 1. Add more test cases to cover the methods hadoop_lerr_alloc and 
 hadoop_uverr_alloc.
 2. Fix typos as follows:
 1) Change hadoop_uverr_alloc(int cod to hadoop_uverr_alloc(int code in 
 hadoop_err.h
 2) Change OutOfMemory to OutOfMemoryException to be consistent with the other 
 exceptions in hadoop_err.c
 3) Change DBUG to DEBUG in messenger.c
 4) Change DBUG to DEBUG in reactor.c





[jira] [Created] (HADOOP-10640) Implement Namenode RPCs in HDFS native client

2014-05-29 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10640:
-

 Summary: Implement Namenode RPCs in HDFS native client
 Key: HADOOP-10640
 URL: https://issues.apache.org/jira/browse/HADOOP-10640
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Implement the parts of libhdfs that just involve making RPCs to the Namenode, 
such as mkdir, rename, etc.





[jira] [Resolved] (HADOOP-10592) Add unit test case for net in hadoop native client

2014-05-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10592.
---

   Resolution: Fixed
Fix Version/s: HADOOP-10388

 Add unit test case for net in hadoop native client 
 ---

 Key: HADOOP-10592
 URL: https://issues.apache.org/jira/browse/HADOOP-10592
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-10388
Reporter: Wenwu Peng
Assignee: Wenwu Peng
 Fix For: HADOOP-10388

 Attachments: HADOOP-10592-pnative.001.patch, 
 HADOOP-10592-pnative.002.patch, HADOOP-10592-pnative.003.patch, 
 HADOOP-10592-pnative.004.patch


 Add unit test case for net.c in hadoop native client 





[jira] [Created] (HADOOP-10587) Use a thread-local cache in TokenIdentifier#getBytes to avoid creating many DataOutputBuffer objects

2014-05-15 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10587:
-

 Summary: Use a thread-local cache in TokenIdentifier#getBytes to 
avoid creating many DataOutputBuffer objects
 Key: HADOOP-10587
 URL: https://issues.apache.org/jira/browse/HADOOP-10587
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-10587.001.patch

We can use a thread-local cache in TokenIdentifier#getBytes to avoid creating 
many DataOutputBuffer objects.  This will reduce our memory usage (for example, 
when loading edit logs), and help prevent OOMs.
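The idea in the description can be sketched as a per-thread reusable buffer. This is a minimal illustration with hypothetical class and method names, not the actual HADOOP-10587 patch; it uses a plain ByteArrayOutputStream in place of Hadoop's DataOutputBuffer.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ThreadLocalBufferCache {
    // One reusable buffer per thread, so repeated getBytes() calls do not
    // allocate a fresh buffer object each time.
    private static final ThreadLocal<ByteArrayOutputStream> BUFFER =
        ThreadLocal.withInitial(ByteArrayOutputStream::new);

    // Serializes a payload through the cached per-thread buffer.
    public static byte[] getBytes(byte[] payload) throws IOException {
        ByteArrayOutputStream buf = BUFFER.get();
        buf.reset();                       // reuse, don't reallocate
        DataOutputStream out = new DataOutputStream(buf);
        out.write(payload);
        out.flush();
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {1, 2, 3};
        byte[] first = getBytes(data);
        byte[] second = getBytes(data);    // second call reuses the buffer
        if (java.util.Arrays.equals(first, second)) {
            System.out.println("ok");
        }
    }
}
```

Because the buffer is thread-confined, no synchronization is needed; each thread pays the allocation cost once instead of once per call.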



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10573) fix hadoop native client CMakeLists.txt issue with older cmakes

2014-05-06 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10573.
---

Resolution: Fixed

 fix hadoop native client CMakeLists.txt issue with older cmakes
 ---

 Key: HADOOP-10573
 URL: https://issues.apache.org/jira/browse/HADOOP-10573
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Wenwu Peng
Assignee: Wenwu Peng
 Attachments: HADOOP-10573.1.patch, HADOOP-10573.2.patch


 In GeneratProtobufs.cmake, the variable CMAKE_CURRENT_LIST_DIR is used; it is 
 new in cmake version 2.8, so we should change cmake_minimum_required(VERSION 
 2.6) to (VERSION 2.8) in CMakeLists.txt



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10564) Add username to native RPCv9 client

2014-05-01 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10564:
-

 Summary: Add username to native RPCv9 client
 Key: HADOOP-10564
 URL: https://issues.apache.org/jira/browse/HADOOP-10564
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Add the ability for the native RPCv9 client to set a username when initiating a 
connection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10389) Native RPCv9 client

2014-04-30 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10389.
---

Resolution: Fixed

committed to branch

 Native RPCv9 client
 ---

 Key: HADOOP-10389
 URL: https://issues.apache.org/jira/browse/HADOOP-10389
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-10388
Reporter: Binglin Chang
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-10388.001.patch, HADOOP-10389.002.patch, 
 HADOOP-10389.004.patch, HADOOP-10389.005.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10443) limit symbol visibility in libhdfs-core.so and libyarn-core.so

2014-03-26 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10443:
-

 Summary: limit symbol visibility in libhdfs-core.so and 
libyarn-core.so
 Key: HADOOP-10443
 URL: https://issues.apache.org/jira/browse/HADOOP-10443
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe
Priority: Minor


We should avoid exposing all the symbols of libhdfs-core.so and 
libyarn-core.so.  Otherwise, they conflict with symbols used in the 
applications using the libraries.  This can be done with gcc's symbol 
visibility directives.

Also, we should probably link libuv and libprotobuf-c statically into our 
libraries, since most distributions don't yet include these libraries, and we 
don't want to have version issues there.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10444) add pom.xml infrastructure for hadoop-native-core

2014-03-26 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10444:
-

 Summary: add pom.xml infrastructure for hadoop-native-core
 Key: HADOOP-10444
 URL: https://issues.apache.org/jira/browse/HADOOP-10444
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


Add pom.xml infrastructure for hadoop-native-core, so that it builds under 
Maven.  We can look to how we integrated CMake into hadoop-hdfs-project and 
hadoop-common-project for inspiration here.  In the long term, it would be nice 
to use a Maven plugin here (see HADOOP-8887)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10445) Implement DataTransferProtocol in libhdfs-core.so

2014-03-26 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10445:
-

 Summary: Implement DataTransferProtocol in libhdfs-core.so
 Key: HADOOP-10445
 URL: https://issues.apache.org/jira/browse/HADOOP-10445
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


We need to implement DataTransferProtocol so that we can send and receive data 
to and from DataNodes.  This is a different protocol from Hadoop IPC, so it 
will require a slightly different code path.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10446) native code for reading Hadoop configuration XML files

2014-03-26 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10446:
-

 Summary: native code for reading Hadoop configuration XML files
 Key: HADOOP-10446
 URL: https://issues.apache.org/jira/browse/HADOOP-10446
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


We need to have a way to read Hadoop configuration XML files in the native HDFS 
and YARN clients.  This will allow those clients to discover the locations of 
NameNodes, YARN daemons, and other configuration settings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10447) Implement C code for parsing Hadoop / HDFS URIs

2014-03-26 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10447:
-

 Summary: Implement C code for parsing Hadoop / HDFS URIs
 Key: HADOOP-10447
 URL: https://issues.apache.org/jira/browse/HADOOP-10447
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


We need some glue code for parsing Hadoop or HDFS URIs in C.  Probably we 
should just put a small 'Path' wrapper around a URI parsing library like 
http://uriparser.sourceforge.net/ (BSD licensed)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10401) ShellBasedUnixGroupsMapping#getGroups does not always return primary group first

2014-03-10 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10401:
-

 Summary: ShellBasedUnixGroupsMapping#getGroups does not always 
return primary group first
 Key: HADOOP-10401
 URL: https://issues.apache.org/jira/browse/HADOOP-10401
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe


{{ShellBasedUnixGroupsMapping#getGroups}} does not always return the primary 
group first.  It should do this so that clients who expect it don't get the 
wrong result.  We should also document that the primary group is returned first 
in the API.  Note that {{JniBasedUnixGroupsMapping}} does return the primary 
group first.
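The contract described above can be sketched as a small reordering helper: whatever order the shell returns, the primary group is moved to index 0. The helper name is hypothetical; the real fix lives in ShellBasedUnixGroupsMapping.

```java
import java.util.ArrayList;
import java.util.List;

public class PrimaryGroupFirst {
    // Returns a copy of the group list with the primary group first,
    // preserving the relative order of the remaining groups.
    static List<String> primaryFirst(String primary, List<String> groups) {
        List<String> out = new ArrayList<>(groups);
        out.remove(primary);
        out.add(0, primary);   // primary group always at index 0
        return out;
    }

    public static void main(String[] args) {
        System.out.println(primaryFirst("staff",
            java.util.Arrays.asList("wheel", "staff", "admin")));
    }
}
```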



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10325) improve jenkins javadoc warnings from test-patch.sh

2014-02-04 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10325:
-

 Summary: improve jenkins javadoc warnings from test-patch.sh
 Key: HADOOP-10325
 URL: https://issues.apache.org/jira/browse/HADOOP-10325
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe


Currently test-patch.sh uses {{OK_JAVADOC_WARNINGS}} to know how many warnings 
trunk is expected to have.  However, this is a fragile and difficult-to-use 
system, since different build slaves may generate different numbers of warnings 
(based on compiler revision, etc.).  Also, programmers must remember to update 
{{OK_JAVADOC_WARNINGS}}, which they don't always do.  Finally, there is no easy 
way to find what the *new* javadoc warnings are in the huge pile of warnings.

We should change this to work the same way the javac warnings code does: to 
simply build with and without the patch and do a diff.  The diff should be 
saved for easy perusal.  We also should not complain about warnings being 
removed.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10198) DomainSocket: add support for socketpair

2014-01-02 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10198:
-

 Summary: DomainSocket: add support for socketpair
 Key: HADOOP-10198
 URL: https://issues.apache.org/jira/browse/HADOOP-10198
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-10198.001.patch

Add support for {{DomainSocket#socketpair}}.  This function uses the POSIX 
function of the same name to create two UNIX domain sockets which are connected 
to each other.  This will be useful for HDFS-5182.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10116) fix inconsistent synchronization warnings in ZlibCompressor

2013-11-19 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10116:
-

 Summary: fix inconsistent synchronization warnings in 
ZlibCompressor
 Key: HADOOP-10116
 URL: https://issues.apache.org/jira/browse/HADOOP-10116
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe


Fix findbugs warnings in ZlibCompressor.  I believe these were introduced by 
HADOOP-10047.

{code}
CodeWarning
IS  Inconsistent synchronization of 
org.apache.hadoop.io.compress.zlib.ZlibCompressor.keepUncompressedBuf; locked 
57% of time
IS  Inconsistent synchronization of 
org.apache.hadoop.io.compress.zlib.ZlibCompressor.userBuf; locked 60% of time
IS  Inconsistent synchronization of 
org.apache.hadoop.io.compress.zlib.ZlibCompressor.userBufLen; locked 85% of time
IS  Inconsistent synchronization of 
org.apache.hadoop.io.compress.zlib.ZlibDecompressor.userBuf; locked 60% of time
IS  Inconsistent synchronization of 
org.apache.hadoop.io.compress.zlib.ZlibDecompressor.userBufLen; locked 77% of 
time
Dodgy Warnings

CodeWarning
DLS Dead store to pos2 in 
org.apache.hadoop.io.compress.zlib.ZlibCompressor.put(ByteBuffer, ByteBuffer)
DLS Dead store to pos2 in 
org.apache.hadoop.io.compress.zlib.ZlibDecompressor.put(ByteBuffer, ByteBuffer)
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10109) Fix test failure in TestOfflineEditsViewer introduced by HADOOP-10052

2013-11-18 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10109:
-

 Summary: Fix test failure in TestOfflineEditsViewer introduced by 
HADOOP-10052
 Key: HADOOP-10109
 URL: https://issues.apache.org/jira/browse/HADOOP-10109
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.2.1
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: 
0001-HADOOP-10020-addendum.-Fix-TestOfflineEditsViewer.patch

Fix test failure in TestOfflineEditsViewer introduced by HADOOP-10052



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HADOOP-10109) Fix test failure in TestOfflineEditsViewer introduced by HADOOP-10052

2013-11-18 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10109.
---

   Resolution: Fixed
Fix Version/s: 2.2.1

 Fix test failure in TestOfflineEditsViewer introduced by HADOOP-10052
 -

 Key: HADOOP-10109
 URL: https://issues.apache.org/jira/browse/HADOOP-10109
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.2.1
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.2.1

 Attachments: 
 0001-HADOOP-10020-addendum.-Fix-TestOfflineEditsViewer.patch


 Fix test failure in TestOfflineEditsViewer introduced by HADOOP-10052



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10077) o.a.h.s.Groups should refresh in the background

2013-10-30 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10077:
-

 Summary: o.a.h.s.Groups should refresh in the background
 Key: HADOOP-10077
 URL: https://issues.apache.org/jira/browse/HADOOP-10077
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.1
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


{{org.apache.hadoop.security.Groups}} maintains a cache of mappings between 
user names and sets of associated group names.  Periodically, the entries in 
this cache expire and must be refetched from the operating system.

Currently, this is done in the context of whatever thread happens to try to 
access the group mapping information right after the time period expires.  
However, this is problematic, since that thread may be holding the 
{{FSNamesystem}} lock.  This means that if the {{GroupMappingServiceProvider}} 
is slow, the whole NameNode may grind to a halt until it finishes.  This can 
generate periodic load spikes or even NameNode failovers.

Instead, we should allow the refreshing of the group mappings to be done 
asynchronously in a background thread pool.
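The asynchronous-refresh idea can be sketched as a cache that serves the stale entry immediately and reloads in a background pool, so the caller never blocks on a slow provider. All names here are hypothetical stand-ins for org.apache.hadoop.security.Groups, not the actual patch.

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

public class BackgroundRefreshCache {
    private final ConcurrentHashMap<String, List<String>> cache =
        new ConcurrentHashMap<>();
    private final ExecutorService refresher = Executors.newFixedThreadPool(2);
    private final Function<String, List<String>> loader;

    public BackgroundRefreshCache(Function<String, List<String>> loader) {
        this.loader = loader;
    }

    // Returns the cached value immediately and refreshes asynchronously,
    // so a caller holding a hot lock never waits on the mapping provider.
    public List<String> getGroups(String user) {
        List<String> cached = cache.get(user);
        if (cached == null) {
            // First lookup has no choice but to load synchronously.
            cached = loader.apply(user);
            cache.put(user, cached);
        } else {
            refresher.submit(() -> cache.put(user, loader.apply(user)));
        }
        return cached;
    }

    public void shutdown() throws InterruptedException {
        refresher.shutdown();
        refresher.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        BackgroundRefreshCache c = new BackgroundRefreshCache(
            u -> java.util.Arrays.asList(u, "users"));
        System.out.println(c.getGroups("alice"));
        c.shutdown();
    }
}
```

A production version would also track per-entry expiry and coalesce duplicate refreshes for the same user; this sketch shows only the blocking-versus-background split.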



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10061) TestMetricsSystemImpl#testInitFirstVerifyStopInvokedImmediately failed

2013-10-21 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10061:
-

 Summary: 
TestMetricsSystemImpl#testInitFirstVerifyStopInvokedImmediately failed
 Key: HADOOP-10061
 URL: https://issues.apache.org/jira/browse/HADOOP-10061
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Reporter: Colin Patrick McCabe
Priority: Minor


TestMetricsSystemImpl#testInitFirstVerifyStopInvokedImmediately failed with 
Wanted at most 2 times but was 3

{code}
1 tests failed.
REGRESSION:  
org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirstVerifyStopInvokedImmediately

Error Message:
 Wanted at most 2 times but was 3

Stack Trace:
org.mockito.exceptions.base.MockitoAssertionError:
Wanted at most 2 times but was 3
at 
org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirstVerifyStopInvokedImmediately(TestMetricsSystemImpl.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HADOOP-10061) TestMetricsSystemImpl#testInitFirstVerifyStopInvokedImmediately failed

2013-10-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10061.
---

Resolution: Duplicate

resolving as duplicate

 TestMetricsSystemImpl#testInitFirstVerifyStopInvokedImmediately failed
 --

 Key: HADOOP-10061
 URL: https://issues.apache.org/jira/browse/HADOOP-10061
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.3.0
Reporter: Colin Patrick McCabe
Priority: Minor

 TestMetricsSystemImpl#testInitFirstVerifyStopInvokedImmediately failed with 
 Wanted at most 2 times but was 3
 {code}
 1 tests failed.
 REGRESSION:  
 org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirstVerifyStopInvokedImmediately
 Error Message:
  Wanted at most 2 times but was 3
 Stack Trace:
 org.mockito.exceptions.base.MockitoAssertionError:
 Wanted at most 2 times but was 3
 at 
 org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirstVerifyStopInvokedImmediately(TestMetricsSystemImpl.java:114)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Reopened] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-10-18 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reopened HADOOP-9652:
--

  Assignee: Colin Patrick McCabe  (was: Andrew Wang)

 RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
 -

 Key: HADOOP-9652
 URL: https://issues.apache.org/jira/browse/HADOOP-9652
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.3.0

 Attachments: 0001-temporarily-disable-HADOOP-9652.patch, 
 hadoop-9452-1.patch, hadoop-9652-2.patch, hadoop-9652-3.patch, 
 hadoop-9652-4.patch, hadoop-9652-5.patch, hadoop-9652-6.patch


 {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
 the symlink, but instead uses the owner and mode of the symlink target.  If 
 the target can't be found, it fills in bogus values (the empty string and 
 FsPermission.getDefault) for these.
 Symlinks have an owner distinct from the owner of the target they point to, 
 and getFileLinkStatus ought to expose this.
 In some operating systems, symlinks can have a permission other than 0777.  
 We ought to expose this in RawLocalFilesystem and other places, although we 
 don't necessarily have to support this behavior in HDFS.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10034) optimize same-filesystem symlinks by doing resolution server-side

2013-10-08 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10034:
-

 Summary: optimize same-filesystem symlinks by doing resolution 
server-side
 Key: HADOOP-10034
 URL: https://issues.apache.org/jira/browse/HADOOP-10034
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


We should optimize same-filesystem symlinks by doing resolution server-side 
rather than client side, as discussed on HADOOP-9780.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10020) disable symlinks temporarily

2013-10-03 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10020:
-

 Summary: disable symlinks temporarily
 Key: HADOOP-10020
 URL: https://issues.apache.org/jira/browse/HADOOP-10020
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.1.2-beta
Reporter: Colin Patrick McCabe


disable symlinks temporarily until we can make them production-ready in Hadoop 
2.3



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10021) distCp support for symlinks

2013-10-03 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10021:
-

 Summary: distCp support for symlinks
 Key: HADOOP-10021
 URL: https://issues.apache.org/jira/browse/HADOOP-10021
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.3.0
Reporter: Colin Patrick McCabe


Add support for symlinks to distCp.  We probably want something like rsync, 
where you can choose to copy symlinks as links, or copy what they refer to.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-9984) FileSystem#globStatus and FileSystem#listStatus should resolve symlinks by default

2013-09-20 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9984:


 Summary: FileSystem#globStatus and FileSystem#listStatus should 
resolve symlinks by default
 Key: HADOOP-9984
 URL: https://issues.apache.org/jira/browse/HADOOP-9984
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


During the process of adding symlink support to FileSystem, we realized that 
many existing HDFS clients would be broken by listStatus and globStatus 
returning symlinks.  One example is applications that assume that 
!FileStatus#isFile implies that the inode is a directory.  As we discussed in 
HADOOP-9972 and HADOOP-9912, we should default these APIs to returning resolved 
paths.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-7344) globStatus doesn't grok groupings with a slash

2013-09-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-7344.
--

 Tags:  
   Resolution: Fixed
Fix Version/s: 2.3.0
 Assignee: Colin Patrick McCabe

This was fixed by the globber rework in HADOOP-9817.

{code}
cmccabe@keter:~/hadoop4 /h/bin/hadoop fs -mkdir -p /a/b/c
cmccabe@keter:~/hadoop4 /h/bin/hadoop fs -ls '/{a,a/b}'
Found 1 items
drwxr-xr-x   - cmccabe supergroup  0 2013-09-20 15:20 /a/b
Found 1 items
drwxr-xr-x   - cmccabe supergroup  0 2013-09-20 15:20 /a/b/c
{code}

 globStatus doesn't grok groupings with a slash
 --

 Key: HADOOP-7344
 URL: https://issues.apache.org/jira/browse/HADOOP-7344
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0
Reporter: Daryn Sharp
Assignee: Colin Patrick McCabe
 Fix For: 2.3.0


 If a glob contains a grouping with a single item that contains a slash, ex. 
 {a/b}, then globStatus throws {{Illegal file pattern: Unclosed group near 
 index 2}} -- regardless of whether the path exists.  However, if the glob 
 set contains more than one item, ex. {a/b,c}, then it throws a 
 {{NullPointerException}} from {{FileSystem.java:1277}}.
 {code}
 1276: FileStatus[] files = globStatusInternal(new Path(filePattern), filter);
 1277: for (FileStatus file : files) {
 1278:   results.add(file);
 1279: }
 {code}
 The method {{globStatusInternal}} can return null, so the iterator fails with 
 the NPE.
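The missing null check behind the NPE can be sketched in isolation; globInternal here is a hypothetical stand-in for {{globStatusInternal}}, which returns null rather than an empty array when nothing matches.

```java
import java.util.ArrayList;
import java.util.List;

public class NullSafeGlob {
    // Stand-in: returns null when the pattern matches nothing,
    // mirroring globStatusInternal's contract.
    static String[] globInternal(String pattern) {
        return pattern.isEmpty() ? null : new String[] {pattern};
    }

    static List<String> glob(String pattern) {
        List<String> results = new ArrayList<>();
        String[] files = globInternal(pattern);
        if (files != null) {          // the guard the original loop lacked
            for (String f : files) {
                results.add(f);
            }
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(glob("").size());      // no match: empty, no NPE
        System.out.println(glob("/a/b").size());  // one match
    }
}
```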

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9972) new APIs for listStatus and globStatus to deal with symlinks

2013-09-17 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9972:


 Summary: new APIs for listStatus and globStatus to deal with 
symlinks
 Key: HADOOP-9972
 URL: https://issues.apache.org/jira/browse/HADOOP-9972
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Based on the discussion in HADOOP-9912, we need new APIs for FileSystem to deal 
with symlinks.  The issue is that code has been written which is incompatible 
with the existence of things which are not files or directories.  For example,
there is a lot of code out there that looks at FileStatus#isFile, and
if it returns false, assumes that what it is looking at is a
directory.  In the case of a symlink, this assumption is incorrect.

It seems reasonable to make the default behavior of {{FileSystem#listStatus}} 
and {{FileSystem#globStatus}} be fully resolving symlinks, and ignoring 
dangling ones.  This will prevent incompatibility with existing MR jobs and 
other HDFS users.  We should also add new versions of listStatus and globStatus 
that allow new, symlink-aware code to deal with symlinks as symlinks.
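The proposed default — fully resolve symlinks and drop dangling ones — can be illustrated with java.nio.file rather than the Hadoop API; this is only an analogy for the FileSystem#listStatus behavior discussed above.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ResolvedList {
    // Lists a directory, resolving symlinks and silently skipping
    // dangling ones, analogous to the proposed listStatus default.
    static List<Path> listResolved(Path dir) throws IOException {
        List<Path> out = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {
                Path real;
                try {
                    real = p.toRealPath();   // resolves symlinks
                } catch (NoSuchFileException dangling) {
                    continue;                // ignore dangling links
                }
                out.add(real);
            }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("resolved-list");
        Path file = Files.createFile(dir.resolve("data"));
        Files.createSymbolicLink(dir.resolve("good"), file);
        Files.createSymbolicLink(dir.resolve("bad"), dir.resolve("missing"));
        // "bad" is dangling and skipped; "data" and "good" both resolve.
        System.out.println(listResolved(dir).size());
    }
}
```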

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9895) NativeCodeLoader incorrectly returns

2013-08-21 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9895:


 Summary: NativeCodeLoader incorrectly returns 
 Key: HADOOP-9895
 URL: https://issues.apache.org/jira/browse/HADOOP-9895
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9676) make maximum RPC buffer size configurable

2013-06-28 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9676:


 Summary: make maximum RPC buffer size configurable
 Key: HADOOP-9676
 URL: https://issues.apache.org/jira/browse/HADOOP-9676
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


Currently the RPC server just allocates however much memory the client asks 
for, without validating.  It would be nice to make the maximum RPC buffer size 
configurable.  This would prevent a rogue client from bringing down the 
NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
would also make it easier to debug issues with super-large RPCs or malformed 
headers, since OOMs can be difficult for developers to reproduce.
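The validation described above amounts to checking the client-declared length against a configurable cap before allocating. The names and the 64 MB default below are hypothetical, not the values the patch actually chose.

```java
import java.io.IOException;

public class RpcLengthCheck {
    // Hypothetical default cap for illustration.
    static final int DEFAULT_MAX_RPC_LENGTH = 64 * 1024 * 1024;

    // Rejects the request before allocation, so a rogue 2 GB claim
    // cannot OOM the daemon.
    static byte[] allocateForRpc(int declaredLength, int maxLength)
            throws IOException {
        if (declaredLength < 0 || declaredLength > maxLength) {
            throw new IOException("RPC length " + declaredLength
                + " exceeds maximum " + maxLength);
        }
        return new byte[declaredLength];
    }

    public static void main(String[] args) throws IOException {
        System.out.println(allocateForRpc(1024, DEFAULT_MAX_RPC_LENGTH).length);
        try {
            allocateForRpc(Integer.MAX_VALUE, DEFAULT_MAX_RPC_LENGTH);
        } catch (IOException e) {
            System.out.println("rejected");
        }
    }
}
```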

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9673) NetworkTopology: when a node can't be added, print out its location for diagnostic purposes

2013-06-26 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9673:


 Summary: NetworkTopology: when a node can't be added, print out 
its location for diagnostic purposes
 Key: HADOOP-9673
 URL: https://issues.apache.org/jira/browse/HADOOP-9673
 Project: Hadoop Common
  Issue Type: Improvement
  Components: net
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
Priority: Trivial


It would be nice if NetworkTopology would print out the network location of 
a node if it couldn't be added.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9667) SequenceFile: Reset keys and values when syncing to a place before the header

2013-06-24 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9667:


 Summary: SequenceFile: Reset keys and values when syncing to a 
place before the header
 Key: HADOOP-9667
 URL: https://issues.apache.org/jira/browse/HADOOP-9667
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Priority: Minor


There seems to be a bug in the {{SequenceFile#sync}} function.  Thanks to 
Christopher Ng for this report:

{code}
/** Seek to the next sync mark past a given position.*/
public synchronized void sync(long position) throws IOException {
  if (position+SYNC_SIZE >= end) {
    seek(end);
    return;
  }

  if (position < headerEnd) {
    // seek directly to first record
    in.seek(headerEnd);   // should this not call seek (ie this.seek) instead?
    // note the sync marker seen in the header
    syncSeen = true;
    return;
  }
{code}

The problem is that when you sync to the start of a compressed file, 
noBufferedKeys and valuesDecompressed aren't reset, so a block read isn't 
triggered.  When you subsequently call next(), you're potentially getting 
keys from the buffer, which still contains keys from the previous position 
of the file.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9652) FileContext#getFileLinkStatus does not fill in the link owner and mode

2013-06-18 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9652:


 Summary: FileContext#getFileLinkStatus does not fill in the link 
owner and mode
 Key: HADOOP-9652
 URL: https://issues.apache.org/jira/browse/HADOOP-9652
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe


{{FileContext#getFileLinkStatus}} does not actually get the owner and mode of 
the symlink, but instead uses the owner and mode of the symlink target.  If the 
target can't be found, it fills in bogus values (the empty string and 
FsPermission.getDefault) for these.

Symlinks have an owner distinct from who created them, and getFileLinkStatus 
ought to expose this.

In some operating systems, symlinks can have a permission other than 0777.  We 
ought to expose this in RawLocalFilesystem and other places, although we don't 
necessarily have to support this behavior in HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9646) Inconsistent exception specifications in FileUtils#chmod

2013-06-14 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9646:


 Summary: Inconsistent exception specifications in FileUtils#chmod
 Key: HADOOP-9646
 URL: https://issues.apache.org/jira/browse/HADOOP-9646
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


There are two FileUtils#chmod methods:

{code}
public static int chmod(String filename, String perm
  ) throws IOException, InterruptedException;
public static int chmod(String filename, String perm, boolean recursive)
throws IOException;
{code}

The first one just calls the second one with {{recursive = false}}, but despite 
that it is declared as throwing {{InterruptedException}}, something the second 
one doesn't declare.

The new Java7 chmod API, which we will transition to once JDK6 support is 
dropped, does *not* throw {{InterruptedException}}

See 
[http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#setOwner(java.nio.file.Path,
 java.nio.file.attribute.UserPrincipal)]

So we should make these consistent by removing the {{InterruptedException}}
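For reference, the NIO.2 permission API mentioned above declares only IOException. A minimal wrapper (hypothetical helper name; the real FileUtil migration may differ) looks like this:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class NioChmod {
    // chmod via java.nio.file: throws IOException, never
    // InterruptedException, unlike the older shell-based path.
    static void chmod(Path path, String perm) throws IOException {
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString(perm);
        Files.setPosixFilePermissions(path, perms);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("nio-chmod", ".tmp");
        chmod(tmp, "rw-r-----");
        System.out.println(
            PosixFilePermissions.toString(Files.getPosixFilePermissions(tmp)));
        Files.delete(tmp);
    }
}
```

Note that PosixFilePermissions uses symbolic "rwxrwxrwx" strings rather than octal modes, so callers migrating from the shell-based chmod need a small translation step.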

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-9646) Inconsistent exception specifications in FileUtils#chmod

2013-06-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reopened HADOOP-9646:
--


 Inconsistent exception specifications in FileUtils#chmod
 

 Key: HADOOP-9646
 URL: https://issues.apache.org/jira/browse/HADOOP-9646
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HADOOP-9646.001.patch, HADOOP-9646.002.patch


 There are two FileUtils#chmod methods:
 {code}
 public static int chmod(String filename, String perm)
     throws IOException, InterruptedException;
 public static int chmod(String filename, String perm, boolean recursive)
     throws IOException;
 {code}
 The first one just calls the second one with {{recursive = false}}, but 
 despite that it is declared as throwing {{InterruptedException}}, something 
 the second one doesn't declare.
 The new Java 7 chmod API, which we will transition to once JDK6 support is 
 dropped, does *not* throw {{InterruptedException}}.
 See 
 [http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#setOwner(java.nio.file.Path,
  java.nio.file.attribute.UserPrincipal)]
 So we should make these consistent by removing the {{InterruptedException}} 
 declaration from the first method.



[jira] [Created] (HADOOP-9485) inconsistent defaults for hadoop.rpc.socket.factory.class.default

2013-04-18 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9485:


 Summary: inconsistent defaults for 
hadoop.rpc.socket.factory.class.default
 Key: HADOOP-9485
 URL: https://issues.apache.org/jira/browse/HADOOP-9485
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.0.5-beta
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


In {{core-default.xml}}, {{hadoop.rpc.socket.factory.class.default}} defaults 
to {{org.apache.hadoop.net.StandardSocketFactory}}.  However, in 
{{CommonConfigurationKeysPublic.java}}, there is no default for this key.  This 
is inconsistent (defaults in the code and defaults in the XML files should 
match).  It also leads to problems with {{RemoteBlockReader2}}, since the 
default {{SocketFactory}} creates a {{Socket}} without an associated channel.  
{{RemoteBlockReader2}} cannot use such a {{Socket}}.

This bug only really becomes apparent when you create a {{Configuration}} using 
the {{Configuration(loadDefaults=true)}} constructor.  Thanks to AB Srinivasan 
for his help in discovering this bug.
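The fix pattern the description implies (keep the code-side default in a named 
constant whose value matches {{core-default.xml}}) can be sketched without 
Hadoop's real {{Configuration}} class; everything below is illustrative, not 
actual Hadoop source:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfDefaultsSketch {
    // Hypothetical constants mirroring the pattern proposed in the issue:
    // the code-side default must match the value shipped in core-default.xml.
    static final String SOCKET_FACTORY_KEY =
        "hadoop.rpc.socket.factory.class.default";
    static final String SOCKET_FACTORY_DEFAULT =
        "org.apache.hadoop.net.StandardSocketFactory";

    // Stand-in for a Configuration that was built without the XML defaults.
    private final Map<String, String> props = new HashMap<>();

    // Lookup with a code-side fallback, so callers resolve the same factory
    // class whether or not the XML defaults were loaded.
    String get(String key, String defaultValue) {
        return props.getOrDefault(key, defaultValue);
    }

    public static void main(String[] args) {
        ConfDefaultsSketch conf = new ConfDefaultsSketch();
        // No XML was loaded, yet the lookup still yields the agreed default.
        System.out.println(conf.get(SOCKET_FACTORY_KEY, SOCKET_FACTORY_DEFAULT));
    }
}
```

Sharing one constant between the code path and the generated XML is what keeps 
the two sources of defaults from drifting apart again.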



[jira] [Created] (HADOOP-9329) document native build dependencies in BUILDING.txt

2013-02-22 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9329:


 Summary: document native build dependencies in BUILDING.txt
 Key: HADOOP-9329
 URL: https://issues.apache.org/jira/browse/HADOOP-9329
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.0.4-beta
Reporter: Colin Patrick McCabe
Priority: Trivial


{{BUILDING.txt}} describes {{-Pnative}}, but it does not specify what native 
libraries are needed for the build.  We should address this.



[jira] [Created] (HADOOP-9318) when exiting on a signal, print the signal name first

2013-02-19 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9318:


 Summary: when exiting on a signal, print the signal name first
 Key: HADOOP-9318
 URL: https://issues.apache.org/jira/browse/HADOOP-9318
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


On UNIX, it would be nice to know when a Hadoop daemon has exited on a signal.  
For example, if a daemon exited because the system administrator sent SIGTERM 
(i.e. {{killall java}}), the logs should say so.  Although some of this can be 
deduced from context and {{SHUTDOWN_MSG}}, it would be nice to have it be 
explicit.
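As a sketch of one way a JVM can learn the signal name, using the JDK-internal 
{{sun.misc.Signal}} API (the class name and log format below are made up for 
illustration, not Hadoop's eventual implementation):

```java
import sun.misc.Signal;
import sun.misc.SignalHandler;

public class SignalNameSketch {
    public static void main(String[] args) {
        // Handler that logs the signal's number and name; in a daemon this
        // would run just before the process shuts down.
        SignalHandler logging = s ->
            System.out.println("RECEIVED SIGNAL " + s.getNumber()
                + ": SIG" + s.getName());

        // Registering for SIGTERM covers "killall java"; SIGHUP and SIGINT
        // could be registered the same way.
        Signal.handle(new Signal("TERM"), logging);

        // Invoke the handler directly here to show the log line without
        // actually terminating this JVM.
        logging.handle(new Signal("TERM"));
    }
}
```

{{sun.misc.Signal}} is unsupported API, so a real daemon would guard this 
registration with reflection or a feature check rather than a hard dependency.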



[jira] [Created] (HADOOP-9286) documentation in NativeLibraries.apt.vm is out of date for branch-2 and later

2013-02-05 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-9286:


 Summary: documentation in NativeLibraries.apt.vm is out of date 
for branch-2 and later
 Key: HADOOP-9286
 URL: https://issues.apache.org/jira/browse/HADOOP-9286
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Priority: Minor


The documentation for the native libraries in branch-2 and later is out of 
date.  It refers to building with ant and autotools, which we no longer do in 
these branches (we use Maven and CMake instead).  The dependency list is also 
woefully out of date, and it talks about the libraries being mainly used on 
RHEL4, Ubuntu, and Gentoo.


