[jira] [Commented] (HADOOP-7487) DF should throw a more reasonable exception when mount cannot be determined

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587002#comment-13587002
 ] 

Hudson commented on HADOOP-7487:


Integrated in Hadoop-Yarn-trunk #139 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/139/])
HADOOP-7487. DF should throw a more reasonable exception when mount cannot 
be determined. Contributed by Andrew Wang. (Revision 1449992)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1449992
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DF.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDFVariations.java


 DF should throw a more reasonable exception when mount cannot be determined
 ---

 Key: HADOOP-7487
 URL: https://issues.apache.org/jira/browse/HADOOP-7487
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
  Labels: noob
 Fix For: 2.0.4-beta

 Attachments: hadoop-7487-1.patch, hadoop-7487-2.patch, 
 hadoop-7487-3.patch


 Currently, when using the DF class to determine the mount corresponding to a 
 given directory, it will throw the generic exception "Expecting a line not 
 the end of stream" if it can't determine the mount (for example, if the 
 directory doesn't exist).
 This error message should be improved in several ways:
 # If the dir to check doesn't exist, we can see that before even execing df, 
 and throw a better exception (or behave better by chopping path components 
 until it exists)
 # Rather than parsing the lines out of df's stdout, collect the whole output, 
 and then parse. So, if df returns a non-zero exit code, we can avoid trying 
 to parse the empty result
 # If there's a success exit code, and we still can't parse it (e.g. an 
 incompatible OS), we should include the unparseable line in the exception 
 message (a sketch of this approach follows below).
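A minimal sketch of the approach described above, assuming the existing 
{{org.apache.hadoop.util.Shell.ShellCommandExecutor}} helper and java.io 
imports; this is not the committed patch, and the method shape is illustrative:
{code}
// Sketch only: dirPath stands in for the directory DF was constructed with.
public String getMount(String dirPath) throws IOException {
  // 1. Fail early, before even execing df, if the directory doesn't exist.
  File dir = new File(dirPath);
  if (!dir.exists()) {
    throw new FileNotFoundException("Specified path " + dirPath
        + " does not exist");
  }
  // 2. Collect the whole output, then parse, so a non-zero exit code
  //    surfaces as an exception instead of an empty-stream parse error.
  ShellCommandExecutor shell = new ShellCommandExecutor(
      new String[] {"df", "-k", dir.getCanonicalPath()});
  shell.execute();  // throws an ExitCodeException on non-zero exit codes
  String[] lines = shell.getOutput().split("\n");
  // 3. df succeeded, but the output is still unparseable (e.g. an
  //    incompatible OS): include the offending output in the message.
  if (lines.length < 2 || lines[1].split("\\s+").length < 6) {
    throw new IOException("Could not parse df output; expected a mount "
        + "point column, got: " + shell.getOutput());
  }
  return lines[1].split("\\s+")[5];  // the "Mounted on" column
}
{code}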

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9323) Typos in API documentation

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587003#comment-13587003
 ] 

Hudson commented on HADOOP-9323:


Integrated in Hadoop-Yarn-trunk #139 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/139/])
HADOOP-9323. Fix typos in API documentation. Contributed by Suresh 
Srinivas. (Revision 1449977)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1449977
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PositionedReadable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/BytesWritable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/RecordOutput.java


 Typos in API documentation
 --

 Key: HADOOP-9323
 URL: https://issues.apache.org/jira/browse/HADOOP-9323
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, fs, io, record
Affects Versions: 2.0.3-alpha
Reporter: Hao Zhong
Assignee: Suresh Srinivas
Priority: Minor
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9323.patch


 Some typos are as follows:
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/ChecksumFileSystem.html
 basice -> basic
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html
 sytem -> system
 http://hadoop.apache.org/docs/current/api/index.html?org/apache/hadoop/fs/RawLocalFileSystem.html
 http://hadoop.apache.org/docs/current/api/index.html?org/apache/hadoop/fs/FilterFileSystem.html
 inital -> initial
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/TrashPolicy.html
 paramater -> parameter
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/PositionedReadable.html
 equalt -> equal
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/BytesWritable.html
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/record/Buffer.html
 seqeunce -> sequence
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Text.html
 instatiation -> instantiation
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/record/RecordOutput.html
 alll -> all
 Please revise the documentation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8569) CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587004#comment-13587004
 ] 

Hudson commented on HADOOP-8569:


Integrated in Hadoop-Yarn-trunk #139 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/139/])
HADOOP-8569. CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE. 
Contributed by Colin Patrick McCabe. (Revision 1449922)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1449922
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/CMakeLists.txt


 CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE
 

 Key: HADOOP-8569
 URL: https://issues.apache.org/jira/browse/HADOOP-8569
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.4-beta

 Attachments: HADOOP-8569.001.patch, HADOOP-8569.003.patch


 In the native code, we should define _GNU_SOURCE and _LARGEFILE_SOURCE so 
 that all of the functions on Linux are available.
 _LARGEFILE enables fseeko and ftello; _GNU_SOURCE enables a variety of 
 Linux-specific functions from glibc, including sync_file_range.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---

Status: Open  (was: Patch Available)

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.0.3-alpha, 1.1.1
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528-2.patch, HADOOP-9528-3.patch, 
 HADOOP-9528-4.patch, HADOOP-9528-5.patch, HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)
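A hypothetical example of the kind of stricter assertion such tests could add, 
relying on the {{fs}} field and {{path()}} helper that the contract test base 
already provides (illustrative only, not taken from the attached patches):
{code}
// Illustrative sketch: pin down one implicit contract assumption, namely
// that renaming a nonexistent path must not succeed. (Some filesystems
// throw FileNotFoundException instead; a real test would accept that too.)
public void testRenameNonExistentPath() throws Exception {
  Path missing = path("/test/hadoop/nonexistent");
  Path target = path("/test/hadoop/target");
  assertFalse("rename of a nonexistent path should fail",
      fs.rename(missing, target));
}
{code}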

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---

Attachment: HADOOP-9528-6.patch

This patch incorporates the HADOOP-9261 s3n rename changes.

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528-2.patch, HADOOP-9528-3.patch, 
 HADOOP-9528-4.patch, HADOOP-9528-5.patch, HADOOP-9528-6.patch, 
 HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-02-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9258:
---

Status: Patch Available  (was: Open)

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.0.3-alpha, 1.1.1
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528-2.patch, HADOOP-9528-3.patch, 
 HADOOP-9528-4.patch, HADOOP-9528-5.patch, HADOOP-9528-6.patch, 
 HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9331) Hadoop crypto codec framework and crypto codec implementations

2013-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587035#comment-13587035
 ] 

Steve Loughran commented on HADOOP-9331:


# Can you convert the doc to a PDF?
# How is this going to impact export rules for Hadoop?

 Hadoop crypto codec framework and crypto codec implementations
 --

 Key: HADOOP-9331
 URL: https://issues.apache.org/jira/browse/HADOOP-9331
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Jerry Chen
 Attachments: Hadoop Crypto Design.docx

   Original Estimate: 504h
  Remaining Estimate: 504h

 For use cases that deal with sensitive data, we often need to encrypt data to 
 be stored safely at rest. Hadoop Common provides a codec framework for 
 compression algorithms, so we start there. However, because encryption 
 algorithms require some additional configuration and methods for key 
 management, we introduce a crypto codec framework that builds on the 
 compression codec framework. It cleanly distinguishes crypto algorithms from 
 compression algorithms, but shares common interfaces between them where 
 possible, and also carries extended interfaces where necessary to satisfy 
 those needs. We also introduce a generic Key type, and supporting utility 
 methods and classes, as a necessary abstraction for dealing with both Java 
 crypto keys and PGP keys.
 The task for this feature breaks into two parts:
 1. The crypto codec framework, based on the compression codec framework, 
 which can be shared by all crypto codec implementations.
 2. The codec implementations, such as AES, RC4 and others.
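A rough sketch of the shape such a framework might take; the names below are 
inferred from the description above, not taken from the attached design 
document:
{code}
// Hypothetical shape only: a crypto codec that plugs into the existing
// compression codec framework, plus the generic Key abstraction.
public interface CryptoCodec extends CompressionCodec {
  /** Supply the key used by streams subsequently created by this codec. */
  void setCryptoContext(Key key);
}

/** Generic key abstraction covering both Java crypto keys and PGP keys. */
class Key {
  private final String keyType;   // e.g. "AES", "RC4", "PGP"
  private final byte[] rawKey;
  Key(String keyType, byte[] rawKey) {
    this.keyType = keyType;
    this.rawKey = rawKey.clone();
  }
  String getKeyType() { return keyType; }
  byte[] getRawKey()  { return rawKey.clone(); }
}
{code}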

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9326) maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-common: There are test failures.

2013-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587038#comment-13587038
 ] 

Steve Loughran commented on HADOOP-9326:


Well, I just ran the trunk tests on an OS X box and didn't get any failures, so 
it's something odd about your machine. Which means, unfortunately, you are the 
only person who can track it down. Sorry.

Try doing (in {{hadoop-common-project/hadoop-common}}) a clean test of only 
one test that is failing, then look at what is happening to cause the failure:
{code}
mvn clean test -Dtest=TestFileUtil#testFailFullyDelete
{code}

 maven-surefire-plugin:2.12.3:test (default-test) on project hadoop-common: 
 There are test failures.
 -

 Key: HADOOP-9326
 URL: https://issues.apache.org/jira/browse/HADOOP-9326
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, test
 Environment: For information, I checked Hadoop out with Git and I run it on 
 Mac OS 
Reporter: JLASSI Aymen
   Original Estimate: 336h
  Remaining Estimate: 336h

 I'd like to compile Hadoop from source code. When I launch the test step, I 
 get the description below; when I skip the test step and go straight to the 
 package step, I have the same problem, with the same description of the bug:
 Results :
 Failed tests:   testFailFullyDelete(org.apache.hadoop.fs.TestFileUtil): The 
 directory xSubDir *should* not have been deleted. expected:<true> but 
 was:<false>
   testFailFullyDeleteContents(org.apache.hadoop.fs.TestFileUtil): The 
 directory xSubDir *should* not have been deleted. expected:<true> but 
 was:<false>
   
 testListStatusThrowsExceptionForUnreadableDir(org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem):
  Should throw IOException
   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block4197707426846287299.tmp - FAILED!
   
 testROBufferDirAndRWBufferDir[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE2 in 
 build/test/temp/RELATIVE1/block138767728739012230.tmp - FAILED!
   testRWBufferDirBecomesRO[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE3 in 
 build/test/temp/RELATIVE4/block4888615109050601773.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block4663369813226761504.tmp
  - FAILED!
   
 testROBufferDirAndRWBufferDir[1](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE2
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1/block2846944239985650460.tmp
  - FAILED!
   testRWBufferDirBecomesRO[1](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE3
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE4/block4367331619344952181.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block5687619346377173125.tmp
  - FAILED!
   
 testROBufferDirAndRWBufferDir[2](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for 
 file:/Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED2
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1/block2235209534902942511.tmp
  - FAILED!
   testRWBufferDirBecomesRO[2](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for 
 file:/Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED3
  in 
 /Users/aymenjlassi/Desktop/hadoop_source/releaseGit/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED4/block6994640486900109274.tmp
  - FAILED!
   testReportChecksumFailure(org.apache.hadoop.fs.TestLocalFileSystem)
   
 

[jira] [Commented] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-02-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587063#comment-13587063
 ] 

Hadoop QA commented on HADOOP-9258:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12570974/HADOOP-9528-6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2231//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2231//console

This message is automatically generated.

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528-2.patch, HADOOP-9528-3.patch, 
 HADOOP-9528-4.patch, HADOOP-9528-5.patch, HADOOP-9528-6.patch, 
 HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9258) Add stricter tests to FileSystemContractTestBase

2013-02-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587074#comment-13587074
 ] 

Steve Loughran commented on HADOOP-9258:


The test that is still failing depends on HADOOP-9265.

 Add stricter tests to FileSystemContractTestBase
 

 Key: HADOOP-9258
 URL: https://issues.apache.org/jira/browse/HADOOP-9258
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9528-2.patch, HADOOP-9528-3.patch, 
 HADOOP-9528-4.patch, HADOOP-9528-5.patch, HADOOP-9528-6.patch, 
 HADOOP-9528.patch


 The File System Contract contains implicit assumptions that aren't checked in 
 the contract test base. Add more tests to define the contract's assumptions 
 more rigorously for those filesystems that are tested by this (not Local, BTW)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7487) DF should throw a more reasonable exception when mount cannot be determined

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587083#comment-13587083
 ] 

Hudson commented on HADOOP-7487:


Integrated in Hadoop-Hdfs-trunk #1328 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1328/])
HADOOP-7487. DF should throw a more reasonable exception when mount cannot 
be determined. Contributed by Andrew Wang. (Revision 1449992)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1449992
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DF.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDFVariations.java


 DF should throw a more reasonable exception when mount cannot be determined
 ---

 Key: HADOOP-7487
 URL: https://issues.apache.org/jira/browse/HADOOP-7487
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
  Labels: noob
 Fix For: 2.0.4-beta

 Attachments: hadoop-7487-1.patch, hadoop-7487-2.patch, 
 hadoop-7487-3.patch


 Currently, when using the DF class to determine the mount corresponding to a 
 given directory, it will throw the generic exception "Expecting a line not 
 the end of stream" if it can't determine the mount (for example, if the 
 directory doesn't exist).
 This error message should be improved in several ways:
 # If the dir to check doesn't exist, we can see that before even execing df, 
 and throw a better exception (or behave better by chopping path components 
 until it exists)
 # Rather than parsing the lines out of df's stdout, collect the whole output, 
 and then parse. So, if df returns a non-zero exit code, we can avoid trying 
 to parse the empty result
 # If there's a success exit code, and we still can't parse it (e.g. an 
 incompatible OS), we should include the unparseable line in the exception 
 message.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9323) Typos in API documentation

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587084#comment-13587084
 ] 

Hudson commented on HADOOP-9323:


Integrated in Hadoop-Hdfs-trunk #1328 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1328/])
HADOOP-9323. Fix typos in API documentation. Contributed by Suresh 
Srinivas. (Revision 1449977)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1449977
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PositionedReadable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/BytesWritable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/RecordOutput.java


 Typos in API documentation
 --

 Key: HADOOP-9323
 URL: https://issues.apache.org/jira/browse/HADOOP-9323
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, fs, io, record
Affects Versions: 2.0.3-alpha
Reporter: Hao Zhong
Assignee: Suresh Srinivas
Priority: Minor
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9323.patch


 Some typos are as follows:
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/ChecksumFileSystem.html
 basice -> basic
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html
 sytem -> system
 http://hadoop.apache.org/docs/current/api/index.html?org/apache/hadoop/fs/RawLocalFileSystem.html
 http://hadoop.apache.org/docs/current/api/index.html?org/apache/hadoop/fs/FilterFileSystem.html
 inital -> initial
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/TrashPolicy.html
 paramater -> parameter
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/PositionedReadable.html
 equalt -> equal
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/BytesWritable.html
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/record/Buffer.html
 seqeunce -> sequence
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Text.html
 instatiation -> instantiation
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/record/RecordOutput.html
 alll -> all
 Please revise the documentation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8569) CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587085#comment-13587085
 ] 

Hudson commented on HADOOP-8569:


Integrated in Hadoop-Hdfs-trunk #1328 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1328/])
HADOOP-8569. CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE. 
Contributed by Colin Patrick McCabe. (Revision 1449922)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1449922
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/CMakeLists.txt


 CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE
 

 Key: HADOOP-8569
 URL: https://issues.apache.org/jira/browse/HADOOP-8569
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.4-beta

 Attachments: HADOOP-8569.001.patch, HADOOP-8569.003.patch


 In the native code, we should define _GNU_SOURCE and _LARGEFILE_SOURCE so 
 that all of the functions on Linux are available.
 _LARGEFILE enables fseeko and ftello; _GNU_SOURCE enables a variety of 
 Linux-specific functions from glibc, including sync_file_range.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9323) Typos in API documentation

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587142#comment-13587142
 ] 

Hudson commented on HADOOP-9323:


Integrated in Hadoop-Mapreduce-trunk #1356 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1356/])
HADOOP-9323. Fix typos in API documentation. Contributed by Suresh 
Srinivas. (Revision 1449977)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1449977
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/PositionedReadable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicy.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/BytesWritable.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/RecordOutput.java


 Typos in API documentation
 --

 Key: HADOOP-9323
 URL: https://issues.apache.org/jira/browse/HADOOP-9323
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, fs, io, record
Affects Versions: 2.0.3-alpha
Reporter: Hao Zhong
Assignee: Suresh Srinivas
Priority: Minor
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9323.patch


 Some typos are as follows:
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/ChecksumFileSystem.html
 basice -> basic
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html
 sytem -> system
 http://hadoop.apache.org/docs/current/api/index.html?org/apache/hadoop/fs/RawLocalFileSystem.html
 http://hadoop.apache.org/docs/current/api/index.html?org/apache/hadoop/fs/FilterFileSystem.html
 inital -> initial
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/TrashPolicy.html
 paramater -> parameter
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/PositionedReadable.html
 equalt -> equal
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/BytesWritable.html
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/record/Buffer.html
 seqeunce -> sequence
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Text.html
 instatiation -> instantiation
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/record/RecordOutput.html
 alll -> all
 Please revise the documentation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7487) DF should throw a more reasonable exception when mount cannot be determined

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587141#comment-13587141
 ] 

Hudson commented on HADOOP-7487:


Integrated in Hadoop-Mapreduce-trunk #1356 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1356/])
HADOOP-7487. DF should throw a more reasonable exception when mount cannot 
be determined. Contributed by Andrew Wang. (Revision 1449992)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1449992
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DF.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDFVariations.java


 DF should throw a more reasonable exception when mount cannot be determined
 ---

 Key: HADOOP-7487
 URL: https://issues.apache.org/jira/browse/HADOOP-7487
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 3.0.0, 2.0.3-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
  Labels: noob
 Fix For: 2.0.4-beta

 Attachments: hadoop-7487-1.patch, hadoop-7487-2.patch, 
 hadoop-7487-3.patch


 Currently, when using the DF class to determine the mount corresponding to a 
 given directory, it will throw the generic exception "Expecting a line not 
 the end of stream" if it can't determine the mount (for example, if the 
 directory doesn't exist).
 This error message should be improved in several ways:
 # If the dir to check doesn't exist, we can see that before even execing df, 
 and throw a better exception (or behave better by chopping path components 
 until it exists)
 # Rather than parsing the lines out of df's stdout, collect the whole output, 
 and then parse. So, if df returns a non-zero exit code, we can avoid trying 
 to parse the empty result
 # If there's a success exit code, and we still can't parse it (e.g. an 
 incompatible OS), we should include the unparseable line in the exception 
 message.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8569) CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587143#comment-13587143
 ] 

Hudson commented on HADOOP-8569:


Integrated in Hadoop-Mapreduce-trunk #1356 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1356/])
HADOOP-8569. CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE. 
Contributed by Colin Patrick McCabe. (Revision 1449922)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1449922
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/CMakeLists.txt


 CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE
 

 Key: HADOOP-8569
 URL: https://issues.apache.org/jira/browse/HADOOP-8569
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.4-beta

 Attachments: HADOOP-8569.001.patch, HADOOP-8569.003.patch


 In the native code, we should define _GNU_SOURCE and _LARGEFILE_SOURCE so 
 that all of the functions on Linux are available.
 _LARGEFILE enables fseeko and ftello; _GNU_SOURCE enables a variety of 
 Linux-specific functions from glibc, including sync_file_range.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8029) org.apache.hadoop.io.nativeio.NativeIO.posixFadviseIfPossible does not handle EINVAL

2013-02-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8029:
-

Attachment: HADOOP-8029.001.patch

This is a patch for branch-1. The issue has already been resolved upstream in 
trunk.

 org.apache.hadoop.io.nativeio.NativeIO.posixFadviseIfPossible does not handle 
 EINVAL
 

 Key: HADOOP-8029
 URL: https://issues.apache.org/jira/browse/HADOOP-8029
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.20.205.0
 Environment: Debian Wheezy 64-bit 
 uname -a = Linux desktop 3.1.0-1-amd64 #1 SMP Tue Jan 10 05:01:58 UTC 2012 
 x86_64 GNU/Linux 
 cat /etc/issue = Debian GNU/Linux wheezy/sid \n \l 
 /etc/apt/sources.list =  
 deb http://ftp.us.debian.org/debian/ wheezy main contrib non-free 
 deb-src http://ftp.us.debian.org/debian/ wheezy main contrib non-free 
 deb http://security.debian.org/ wheezy/updates main contrib non-free 
 deb-src http://security.debian.org/ wheezy/updates main contrib non-free 
 deb http://archive.cloudera.com/debian squeeze-cdh3 contrib 
 deb-src http://archive.cloudera.com/debian squeeze-cdh3 contrib 
 Hadoop-specific configuration (disabled permissions, pseudo-distributed mode, 
 replication set to 1, from my own blog post here: http://j.mp/tsVBR4)
Reporter: Tim Mattison
 Attachments: HADOOP-8029.001.patch

   Original Estimate: 4h
  Remaining Estimate: 4h

 When Hadoop's directories reside on tmpfs in Debian Wheezy (and possibly all 
 Linux 3.1 distros) in an installation that is using the native libraries, 
 fadvise returns EINVAL when trying to run a MapReduce job.  Since EINVAL 
 isn't handled, all MapReduce jobs report "Map output lost, rescheduling: 
 getMapOutput".
 A full stack trace for this issue looks like this:
 [exec] 12/02/03 09:50:58 INFO mapred.JobClient: Task Id : 
 attempt_201202030949_0001_m_00_0, Status : FAILED
 [exec] Map output lost, rescheduling: 
 getMapOutput(attempt_201202030949_0001_m_00_0,0) failed :
 [exec] EINVAL: Invalid argument
 [exec] at org.apache.hadoop.io.nativeio.NativeIO.posix_fadvise(Native Method)
 [exec] at 
 org.apache.hadoop.io.nativeio.NativeIO.posixFadviseIfPossible(NativeIO.java:177)
 [exec] at 
 org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:4026)
 [exec] at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
 [exec] at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 [exec] at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:829)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 [exec] at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 Some logic will need to be implemented to handle EINVAL to properly support 
 all file systems.
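One possible shape for that logic, sketched under the assumption that the 
native layer raises {{NativeIOException}} with an errno (as trunk's 
{{org.apache.hadoop.io.nativeio}} does); fd, offset and len stand in for the 
caller's values:
{code}
// Sketch: treat EINVAL from posix_fadvise as "advice not supported on this
// filesystem" (e.g. tmpfs) instead of failing the whole map output fetch.
try {
  NativeIO.posixFadviseIfPossible(fd, offset, len,
      NativeIO.POSIX_FADV_DONTNEED);
} catch (NativeIOException e) {
  if (e.getErrno() != Errno.EINVAL) {
    throw e;  // real failures still propagate
  }
  // EINVAL here just means the filesystem rejected the advice; ignore it.
}
{code}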

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9336) Allow UGI of current connection to be queried

2013-02-26 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-9336:
---

 Summary: Allow UGI of current connection to be queried
 Key: HADOOP-9336
 URL: https://issues.apache.org/jira/browse/HADOOP-9336
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical


Querying {{UGI.getCurrentUser}} is synch'ed and inefficient for short-lived RPC 
requests.  Since the connection already contains the UGI, there should be a 
means to query it directly and avoid a call to {{UGI.getCurrentUser}}.
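A minimal sketch of what such an accessor might look like on the IPC 
{{Server}}; {{CurCall}} and the per-connection {{user}} field are assumptions 
about the server internals, not a committed API:
{code}
// Hypothetical accessor: return the UGI of the connection whose call this
// handler thread is currently serving, bypassing the synchronized
// UGI.getCurrentUser() lookup entirely.
public static UserGroupInformation getRemoteUser() {
  Call call = CurCall.get();  // thread-local: the RPC call being served
  return (call != null) ? call.connection.user : null;
}
{code}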

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9336) Allow UGI of current connection to be queried

2013-02-26 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587500#comment-13587500
 ] 

Alejandro Abdelnur commented on HADOOP-9336:


Daryn, how (if at all) will this affect RPC calls done within the context of a doAs()?

 Allow UGI of current connection to be queried
 -

 Key: HADOOP-9336
 URL: https://issues.apache.org/jira/browse/HADOOP-9336
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical

 Querying {{UGI.getCurrentUser}} is synch'ed and inefficient for short-lived 
 RPC requests.  Since the connection already contains the UGI, there should be 
 a means to query it directly and avoid a call to {{UGI.getCurrentUser}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8029) org.apache.hadoop.io.nativeio.NativeIO.posixFadviseIfPossible does not handle EINVAL

2013-02-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587556#comment-13587556
 ] 

Suresh Srinivas commented on HADOOP-8029:
-

bq. The issue has already been resolved upstream in trunk.
Do you know which JIRA addresses this?

 org.apache.hadoop.io.nativeio.NativeIO.posixFadviseIfPossible does not handle 
 EINVAL
 

 Key: HADOOP-8029
 URL: https://issues.apache.org/jira/browse/HADOOP-8029
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.20.205.0
 Environment: Debian Wheezy 64-bit 
 uname -a = Linux desktop 3.1.0-1-amd64 #1 SMP Tue Jan 10 05:01:58 UTC 2012 
 x86_64 GNU/Linux 
 cat /etc/issue = Debian GNU/Linux wheezy/sid \n \l 
 /etc/apt/sources.list =  
 deb http://ftp.us.debian.org/debian/ wheezy main contrib non-free 
 deb-src http://ftp.us.debian.org/debian/ wheezy main contrib non-free 
 deb http://security.debian.org/ wheezy/updates main contrib non-free 
 deb-src http://security.debian.org/ wheezy/updates main contrib non-free 
 deb http://archive.cloudera.com/debian squeeze-cdh3 contrib 
 deb-src http://archive.cloudera.com/debian squeeze-cdh3 contrib 
 Hadoop-specific configuration (disabled permissions, pseudo-distributed mode, 
 replication set to 1, from my own blog post here: http://j.mp/tsVBR4)
Reporter: Tim Mattison
 Attachments: HADOOP-8029.001.patch

   Original Estimate: 4h
  Remaining Estimate: 4h

 When Hadoop's directories reside on tmpfs in Debian Wheezy (and possibly all 
 Linux 3.1 distros) in an installation that is using the native libraries, 
 fadvise returns EINVAL when trying to run a MapReduce job.  Since EINVAL 
 isn't handled, all MapReduce jobs report "Map output lost, rescheduling: 
 getMapOutput".
 A full stack trace for this issue looks like this:
 [exec] 12/02/03 09:50:58 INFO mapred.JobClient: Task Id : 
 attempt_201202030949_0001_m_00_0, Status : FAILED
 [exec] Map output lost, rescheduling: 
 getMapOutput(attempt_201202030949_0001_m_00_0,0) failed :
 [exec] EINVAL: Invalid argument
 [exec] at org.apache.hadoop.io.nativeio.NativeIO.posix_fadvise(Native Method)
 [exec] at 
 org.apache.hadoop.io.nativeio.NativeIO.posixFadviseIfPossible(NativeIO.java:177)
 [exec] at 
 org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:4026)
 [exec] at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
 [exec] at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 [exec] at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:829)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 [exec] at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 Some logic will need to be implemented to handle EINVAL to properly support 
 all file systems.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2013-02-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587574#comment-13587574
 ] 

Suresh Srinivas commented on HADOOP-9151:
-

Given the discussions from HADOOP-9163 - 
https://issues.apache.org/jira/browse/HADOOP-9163?focusedCommentId=13581535&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13581535,
 I think we should move forward with this.

Todd and Eli, please confirm.

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8029) org.apache.hadoop.io.nativeio.NativeIO.posixFadviseIfPossible does not handle EINVAL

2013-02-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587591#comment-13587591
 ] 

Colin Patrick McCabe commented on HADOOP-8029:
--

Check 
{{hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java}}
 in trunk. You'll see:
{code}
  try {
NativeIO.posixFadviseIfPossible(fd, getPosition(), getCount(),
NativeIO.POSIX_FADV_DONTNEED);
  } catch (Throwable t) {
LOG.warn("Failed to manage OS cache for " + identifier, t);
  }
{code}

In other words, we are catching any possible exception from fadvise, including 
EINVAL.

Similarly, 
{{hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedChunkedFile.java}}
 has this:
{code}
  try {
NativeIO.posixFadviseIfPossible(fd, getStartOffset(), getEndOffset()
- getStartOffset(), NativeIO.POSIX_FADV_DONTNEED);
  } catch (Throwable t) {
LOG.warn("Failed to manage OS cache for " + identifier, t);
  }
{code}

Those are the only uses of posixFadviseIfPossible in 
{{hadoop-mapreduce-project}} in trunk.

It looks like MAPREDUCE-3289 added that code to trunk in its current form.

 org.apache.hadoop.io.nativeio.NativeIO.posixFadviseIfPossible does not handle 
 EINVAL
 

 Key: HADOOP-8029
 URL: https://issues.apache.org/jira/browse/HADOOP-8029
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.20.205.0
 Environment: Debian Wheezy 64-bit 
 uname -a = Linux desktop 3.1.0-1-amd64 #1 SMP Tue Jan 10 05:01:58 UTC 2012 
 x86_64 GNU/Linux 
 cat /etc/issue = Debian GNU/Linux wheezy/sid \n \l 
 /etc/apt/sources.list =  
 deb http://ftp.us.debian.org/debian/ wheezy main contrib non-free 
 deb-src http://ftp.us.debian.org/debian/ wheezy main contrib non-free 
 deb http://security.debian.org/ wheezy/updates main contrib non-free 
 deb-src http://security.debian.org/ wheezy/updates main contrib non-free 
 deb http://archive.cloudera.com/debian squeeze-cdh3 contrib 
 deb-src http://archive.cloudera.com/debian squeeze-cdh3 contrib 
 Hadoop-specific configuration (disabled permissions, pseudo-distributed mode, 
 replication set to 1, from my own blog post here: http://j.mp/tsVBR4)
Reporter: Tim Mattison
 Attachments: HADOOP-8029.001.patch

   Original Estimate: 4h
  Remaining Estimate: 4h

 When Hadoop's directories reside on tmpfs in Debian Wheezy (and possibly all 
 Linux 3.1 distros) in an installation that is using the native libraries, 
 fadvise returns EINVAL when trying to run a MapReduce job.  Since EINVAL 
 isn't handled, all MapReduce jobs report "Map output lost, rescheduling: 
 getMapOutput".
 A full stack trace for this issue looks like this:
 [exec] 12/02/03 09:50:58 INFO mapred.JobClient: Task Id : 
 attempt_201202030949_0001_m_00_0, Status : FAILED
 [exec] Map output lost, rescheduling: 
 getMapOutput(attempt_201202030949_0001_m_00_0,0) failed :
 [exec] EINVAL: Invalid argument
 [exec] at org.apache.hadoop.io.nativeio.NativeIO.posix_fadvise(Native Method)
 [exec] at 
 org.apache.hadoop.io.nativeio.NativeIO.posixFadviseIfPossible(NativeIO.java:177)
 [exec] at 
 org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:4026)
 [exec] at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
 [exec] at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 [exec] at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:829)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 [exec] at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 [exec] at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 Some logic will need to be implemented to handle EINVAL to properly support 
 all file systems.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9334) Update netty version

2013-02-26 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9334:


Assignee: nkeywal

 Update netty version
 

 Key: HADOOP-9334
 URL: https://issues.apache.org/jira/browse/HADOOP-9334
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.0.4-beta
Reporter: nkeywal
Assignee: nkeywal
Priority: Minor
 Fix For: 3.0.0

 Attachments: 9334.branch2.v1.patch, 9334.trunk.v1.patch


 There are newer versions available. HBase, for example, depends on 3.5.9.
 The latest 3.5 is 3.5.11, and there is 3.6.3 as well.
 While there is no point in trying to have exactly the same version, things 
 are more comfortable if the gap in versions is minimal, as the dependency is 
 client-side as well (i.e. HBase has to choose a version anyway).
 Attached is a patch for branch-2.
 I haven't executed the unit tests, but HBase works OK with Hadoop on Netty 
 3.5.9.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9334) Update netty version

2013-02-26 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9334:


   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   2.0.4-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch. I committed it to trunk and branch-2.

 Update netty version
 

 Key: HADOOP-9334
 URL: https://issues.apache.org/jira/browse/HADOOP-9334
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.0.4-beta
Reporter: nkeywal
Assignee: nkeywal
Priority: Minor
 Fix For: 2.0.4-beta

 Attachments: 9334.branch2.v1.patch, 9334.trunk.v1.patch


 There are newer versions available. HBase, for example, depends on 3.5.9.
 The latest 3.5 is 3.5.11, and there is 3.6.3 as well.
 While there is no point in trying to have exactly the same version, things 
 are more comfortable if the gap in versions is minimal, as the dependency is 
 client-side as well (i.e. HBase has to choose a version anyway).
 Attached is a patch for branch-2.
 I haven't executed the unit tests, but HBase works OK with Hadoop on Netty 
 3.5.9.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9334) Update netty version

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587638#comment-13587638
 ] 

Hudson commented on HADOOP-9334:


Integrated in Hadoop-trunk-Commit #3385 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3385/])
HADOOP-9334. Upgrade netty version. Contributed by Nicolas Liochon. 
(Revision 1450463)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1450463
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
* /hadoop/common/trunk/hadoop-mapreduce-project/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/pom.xml


 Update netty version
 

 Key: HADOOP-9334
 URL: https://issues.apache.org/jira/browse/HADOOP-9334
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.0.4-beta
Reporter: nkeywal
Assignee: nkeywal
Priority: Minor
 Fix For: 2.0.4-beta

 Attachments: 9334.branch2.v1.patch, 9334.trunk.v1.patch


 There are newer versions available. HBase, for example, depends on 3.5.9.
 The latest 3.5 is 3.5.11, and there is 3.6.3 as well.
 While there is no point in trying to have exactly the same version, things 
 are more comfortable if the gap in versions is minimal, as the dependency is 
 client-side as well (i.e. HBase has to choose a version anyway).
 Attached is a patch for branch-2.
 I haven't executed the unit tests, but HBase works OK with Hadoop on Netty 
 3.5.9.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9151) Include RPC error info in RpcResponseHeader instead of sending it separately

2013-02-26 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587672#comment-13587672
 ] 

Todd Lipcon commented on HADOOP-9151:
-

Sure, I will remove my -1, given that 2.0.3 already broke compatibility 
(again). Hopefully at some point the rest of the community will come to 
understand that continuing to break wire compatibility after a .0 release 
isn't acceptable.

 Include RPC error info in RpcResponseHeader instead of sending it separately
 

 Key: HADOOP-9151
 URL: https://issues.apache.org/jira/browse/HADOOP-9151
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9151.patch




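For readers following the compatibility discussion: the change folds the RPC error details into the response header message itself. A hypothetical sketch of the shape of such a header; the field names are illustrative, not the committed schema:
{noformat}
// Illustrative only -- not the committed .proto definition.
enum RpcStatusProto { SUCCESS = 0; ERROR = 1; FATAL = 2; }

message RpcResponseHeaderProto {
  optional uint32 callId = 1;             // which call this responds to
  optional RpcStatusProto status = 2;     // SUCCESS / ERROR / FATAL
  optional string exceptionClassName = 3; // error info carried inline...
  optional string errorMsg = 4;           // ...instead of a separate message
}
{noformat}
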
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9117) replace protoc ant plugin exec with a maven plugin

2013-02-26 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587717#comment-13587717
 ] 

Alejandro Abdelnur commented on HADOOP-9117:


Nicholas, can you please try the following (without modifying the POMs):

* clean build: mvn clean test -DskipTests
* then open the project with Eclipse and work
* compile changes (not a full rebuild)

Does that work?

Thx

 replace protoc ant plugin exec with a maven plugin
 --

 Key: HADOOP-9117
 URL: https://issues.apache.org/jira/browse/HADOOP-9117
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch, 
 HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch


 The protoc compiler is currently invoked using the ant plugin exec task. 
 There is a bug in that task: it does not consume STDOUT or STDERR 
 appropriately, which sometimes makes the build stop (you need to press 
 Enter to continue).

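The underlying pitfall is the classic one when exec'ing a child process from Java: if nothing drains the child's stdout/stderr pipes, the child blocks once a pipe buffer fills. A minimal Java sketch of the usual remedy, draining both streams on separate threads (illustrative, not the plugin's actual code):
{noformat}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class ExecDrain {
  // Runs a command and drains stdout/stderr concurrently so the child
  // process never blocks on a full pipe buffer.
  public static int run(String... cmd) throws IOException, InterruptedException {
    Process p = new ProcessBuilder(cmd).start();
    Thread out = drain(p.getInputStream(), "stdout-drain");
    Thread err = drain(p.getErrorStream(), "stderr-drain");
    int rc = p.waitFor();
    out.join();
    err.join();
    return rc;
  }

  private static Thread drain(final InputStream in, String name) {
    Thread t = new Thread(new Runnable() {
      public void run() {
        try {
          BufferedReader r = new BufferedReader(new InputStreamReader(in));
          String line;
          while ((line = r.readLine()) != null) {
            System.out.println(line); // forward child output to our console
          }
        } catch (IOException ignored) {
          // the stream closes when the child exits
        }
      }
    }, name);
    t.start();
    return t;
  }
}
{noformat}
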
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9117) replace protoc ant plugin exec with a maven plugin

2013-02-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587731#comment-13587731
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-9117:


I tried a full clean and rebuild multiple times, but that did not fix the problem. 
 I even tried removing the ~/.m2 directory.

 replace protoc ant plugin exec with a maven plugin
 --

 Key: HADOOP-9117
 URL: https://issues.apache.org/jira/browse/HADOOP-9117
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-beta

 Attachments: HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch, 
 HADOOP-9117.patch, HADOOP-9117.patch, HADOOP-9117.patch


 The protoc compiler is currently invoked using the ant plugin exec task. 
 There is a bug in that task: it does not consume STDOUT or STDERR 
 appropriately, which sometimes makes the build stop (you need to press 
 Enter to continue).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-02-26 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587799#comment-13587799
 ] 

Arpit Gupta commented on HADOOP-9253:
-

@Alejandro

Another thing we could do is capture the ulimit info after the head cmd. That 
way users can still get to see the info. Let me know and I can generate a 
new patch.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 0.23.7, 2.0.4-beta

 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 The output of ulimit -a is helpful when debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-02-26 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587803#comment-13587803
 ] 

Alejandro Abdelnur commented on HADOOP-9253:


Arpit, the info showing up in the logs is fine; showing up in the terminal is 
not.

 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 0.23.7, 2.0.4-beta

 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 The output of ulimit -a is helpful when debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9253) Capture ulimit info in the logs at service start time

2013-02-26 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587806#comment-13587806
 ] 

Arpit Gupta commented on HADOOP-9253:
-

Right, if the head cmd was run before the ulimit info was captured, then it 
will only be in the log and not in the terminal.

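To make the behavior being discussed concrete, here is a hypothetical sketch of the relevant lines in a daemon start script (not the committed patch); capturing ulimit after the head command keeps it in the log file without echoing it to the terminal:
{noformat}
# hypothetical hadoop-daemon.sh excerpt (illustrative only)
head -30 "$log"                            # first log lines shown in terminal
echo "ulimit -a for user $USER" >> "$log"  # appended after head runs,
ulimit -a >> "$log" 2>&1                   # so it lands in the log only
{noformat}
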
 Capture ulimit info in the logs at service start time
 -

 Key: HADOOP-9253
 URL: https://issues.apache.org/jira/browse/HADOOP-9253
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 0.23.7, 2.0.4-beta

 Attachments: HADOOP-9253.branch-1.patch, HADOOP-9253.branch-1.patch, 
 HADOOP-9253.branch-1.patch, HADOOP-9253.patch, HADOOP-9253.patch


 The output of ulimit -a is helpful when debugging issues on the system.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8917) add LOCALE.US to toLowerCase in SecurityUtil.replacePattern

2013-02-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587847#comment-13587847
 ] 

Suresh Srinivas commented on HADOOP-8917:
-

+1 for the patches. I will commit them shortly.

 add LOCALE.US to toLowerCase in SecurityUtil.replacePattern
 ---

 Key: HADOOP-8917
 URL: https://issues.apache.org/jira/browse/HADOOP-8917
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8917.branch-1.patch, HADOOP-8917.patch


 Webhdfs and fsck use Locale.US in toLowerCase when getting the Kerberos 
 principal. We should do the same in replacePattern, as this method is used 
 when service principals log in.
 See 
 https://issues.apache.org/jira/browse/HADOOP-8878?focusedCommentId=13472245&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13472245
  for more details.

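For background, the reason Locale.US matters is the classic Turkish-locale pitfall: with a Turkish default locale, toLowerCase() maps 'I' to the dotless 'ı', corrupting principal names. A minimal illustration:
{noformat}
import java.util.Locale;

public class LocaleDemo {
  public static void main(String[] args) {
    String principal = "HTTP/HOST.EXAMPLE.COM";
    // Turkish locale: 'I' lower-cases to dotless 'ı', breaking the name.
    System.out.println(principal.toLowerCase(new Locale("tr", "TR")));
    // Locale.US gives the expected ASCII behavior regardless of the
    // JVM's default locale.
    System.out.println(principal.toLowerCase(Locale.US));
  }
}
{noformat}
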
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8917) add LOCALE.US to toLowerCase in SecurityUtil.replacePattern

2013-02-26 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8917:


   Resolution: Fixed
Fix Version/s: 2.0.4-beta
   1.2.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk, branch-2 and branch-1. Thank you Arpit!

 add LOCALE.US to toLowerCase in SecurityUtil.replacePattern
 ---

 Key: HADOOP-8917
 URL: https://issues.apache.org/jira/browse/HADOOP-8917
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 2.0.4-beta

 Attachments: HADOOP-8917.branch-1.patch, HADOOP-8917.patch


 Webhdfs and fsck use Locale.US in toLowerCase when getting the Kerberos 
 principal. We should do the same in replacePattern, as this method is used 
 when service principals log in.
 See 
 https://issues.apache.org/jira/browse/HADOOP-8878?focusedCommentId=13472245&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13472245
  for more details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8917) add LOCALE.US to toLowerCase in SecurityUtil.replacePattern

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587860#comment-13587860
 ] 

Hudson commented on HADOOP-8917:


Integrated in Hadoop-trunk-Commit #3386 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3386/])
HADOOP-8917. add LOCALE.US to toLowerCase in SecurityUtil.replacePattern. 
Contributed by Arpit Agarwal. (Revision 1450571)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1450571
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java


 add LOCALE.US to toLowerCase in SecurityUtil.replacePattern
 ---

 Key: HADOOP-8917
 URL: https://issues.apache.org/jira/browse/HADOOP-8917
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 2.0.4-beta

 Attachments: HADOOP-8917.branch-1.patch, HADOOP-8917.patch


 Webhdfs and fsck use Locale.US in toLowerCase when getting the Kerberos 
 principal. We should do the same in replacePattern, as this method is used 
 when service principals log in.
 See 
 https://issues.apache.org/jira/browse/HADOOP-8878?focusedCommentId=13472245&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13472245
  for more details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9301) hadoop client servlet/jsp/jetty/tomcat JARs creating conflicts in Oozie HttpFS

2013-02-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587866#comment-13587866
 ] 

Suresh Srinivas commented on HADOOP-9301:
-

Any idea which change caused this issue? If you know, can you link it to 
this JIRA?

 hadoop client servlet/jsp/jetty/tomcat JARs creating conflicts in Oozie  
 HttpFS
 

 Key: HADOOP-9301
 URL: https://issues.apache.org/jira/browse/HADOOP-9301
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.0.4-beta


 Here's how to reproduce:
 {noformat}
 $ cd hadoop-client
 $ mvn dependency:tree | egrep 'jsp|jetty'
 [INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26.cloudera.2:compile
 [INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26.cloudera.2:compile
 [INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:compile
 {noformat}
 This has the potential to completely screw up clients like Oozie, etc. – 
 hence a blocker.
 It seems that while common excludes those JARs, they are sneaking in via 
 hdfs; we need to exclude them too.

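The usual shape of such a fix is a Maven exclusion on the offending transitive path. A hypothetical sketch against the hdfs dependency (illustrative, not the committed patch):
{noformat}
<!-- hypothetical hadoop-client pom.xml excerpt (illustrative only) -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>jetty</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>jetty-util</artifactId>
    </exclusion>
    <exclusion>
      <groupId>javax.servlet.jsp</groupId>
      <artifactId>jsp-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{noformat}
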
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-02-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587868#comment-13587868
 ] 

Suresh Srinivas commented on HADOOP-9299:
-

If you know, please link the jira that caused this issue.

 kerberos name resolution is kicking in even when kerberos is not configured
 ---

 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Priority: Blocker

 Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
 from the RC0 2.0.3-alpha tarball:
 {noformat}
 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
 [TRANSIENT], ErrorCode [JA009], Message [JA009: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
 at 
 org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
 at 
 org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
 Caused by: 
 org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
 No rules applied to yarn/localhost@LOCALREALM
 at 
 org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
 ... 12 more
 ]
 {noformat}
 This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
 is a Hadoop issue rather than an Oozie one is that when I hack 
 /etc/krb5.conf to be:
 {noformat}
 [libdefaults]
ticket_lifetime = 600
default_realm = LOCALHOST
default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
 [realms]
LOCALHOST = {
kdc = localhost:88
default_domain = .local
}
 [domain_realm]
.local = LOCALHOST
 [logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log
 {noformat}
 the issue goes away. 
 Now, once again -- Kerberos auth is NOT configured for Hadoop, hence it 
 should NOT pay attention to /etc/krb5.conf to begin with.

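For reference, the name resolution that fails above is driven by the hadoop.security.auth_to_local rules in core-site.xml. A typical rule set that would map yarn/localhost@LOCALREALM to the short name yarn looks like this (illustrative only):
{noformat}
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](.*@LOCALREALM)s/@.*//
    DEFAULT
  </value>
</property>
{noformat}
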
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8917) add LOCALE.US to toLowerCase in SecurityUtil.replacePattern

2013-02-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587873#comment-13587873
 ] 

Hudson commented on HADOOP-8917:


Integrated in Hadoop-trunk-Commit #3387 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3387/])
HADOOP-8917. Changed "contributed by" from Arpit Agarwal to Arpit Gupta. 
(Revision 1450575)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1450575
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 add LOCALE.US to toLowerCase in SecurityUtil.replacePattern
 ---

 Key: HADOOP-8917
 URL: https://issues.apache.org/jira/browse/HADOOP-8917
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 2.0.4-beta

 Attachments: HADOOP-8917.branch-1.patch, HADOOP-8917.patch


 Webhdfs and fsck use Locale.US in toLowerCase when getting the Kerberos 
 principal. We should do the same in replacePattern, as this method is used 
 when service principals log in.
 See 
 https://issues.apache.org/jira/browse/HADOOP-8878?focusedCommentId=13472245&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13472245
  for more details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9325) KerberosAuthenticationHandler and AuthenticationFilter should be able to reference Hadoop configurations

2013-02-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587875#comment-13587875
 ] 

Kai Zheng commented on HADOOP-9325:
---

Hi Alejandro,

Thanks for your suggestion. I did some investigation based on your hint, 
and found:
1. In org.apache.hadoop.http.HttpServer there seem to be two ways to specify 
the configurations needed by KerberosAuthenticationHandler:
  1) via HttpServer#initSpnego(Configuration conf, String usernameConfKey, 
String keytabConfKey);
  2) via configuring an AuthenticationFilterInitializer.
Perhaps method 2) is what you meant. By adding properties with the prefix 
hadoop.http.authentication, the FilterInitializer can pass those values to 
AuthenticationFilter and then on to the Kerberos handler.
But if no FilterInitializer is specified, then method 1) is what is relied 
on. However, that way only kerberos.principal and kerberos.keytab can be 
configured, which does not cover the kerberos.name.rules mentioned above.

So in this JIRA, in my view, we have two things to fix:
1) Add hadoop.http.authentication.kerberos.name.rules to the doc, as you 
mentioned;
2) Allow HttpServer#initSpnego(...) to specify the name rules.

BTW, I reported this issue because I didn't know how to specify the 
kerberos name rules in Oozie. In fact it's already supported, and it is 
possible to do so using an approach similar to method 1) for HttpServer in 
Hadoop (done in org.apache.oozie.servlet.AuthFilter).

Would you check this again? With your confirmation, I will go that way 
and provide the fix.

Thanks
Kai

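For concreteness, the FilterInitializer route described above is driven by core-site.xml properties along these lines (a sketch; the exact rules key name is part of what this thread is debating):
{noformat}
<property>
  <name>hadoop.http.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.name.rules</name>
  <value>DEFAULT</value>
</property>
{noformat}
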
 KerberosAuthenticationHandler and AuthenticationFilter should be able to 
 reference Hadoop configurations
 

 Key: HADOOP-9325
 URL: https://issues.apache.org/jira/browse/HADOOP-9325
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng

 In KerberosAuthenticationHandler SPNEGO activities, KerberosName is used to 
 get the short name for the client principal, which in some Kerberos 
 authentication situations needs to reference translation rules defined in a 
 Hadoop configuration file like core-site.xml, 
 as follows:
   <property>
     <name>hadoop.security.auth_to_local</name>
     <value>...</value>
   </property>
 Note, this is an issue only if the default rule can't meet the requirement 
 and custom rules need to be defined.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9331) Hadoop crypto codec framework and crypto codec implementations

2013-02-26 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated HADOOP-9331:
---

Attachment: Hadoop Crypto Design.pdf

Update the design document in PDF format.

 Hadoop crypto codec framework and crypto codec implementations
 --

 Key: HADOOP-9331
 URL: https://issues.apache.org/jira/browse/HADOOP-9331
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Jerry Chen
 Attachments: Hadoop Crypto Design.pdf

   Original Estimate: 504h
  Remaining Estimate: 504h

 For use cases that deal with sensitive data, we often need to encrypt data to 
 be stored safely at rest. Hadoop common provides a codec framework for 
 compression algorithms. We start here. However because encryption algorithms 
 require some additional configuration and methods for key management, we 
 introduce a crypto codec framework that builds on the compression codec 
 framework. It cleanly distinguishes crypto algorithms from compression 
 algorithms, but shares common interfaces between them where possible, and 
 also carries extended interfaces where necessary to satisfy those needs. We 
 also introduce a generic Key type, and supporting utility methods and 
 classes, as a necessary abstraction for dealing with both Java crypto keys 
 and PGP keys.
 The task for this feature breaks into two parts:
 1. The crypto codec framework, which is based on the compression codec 
 framework and can be shared by all crypto codec implementations.
 2. The codec implementations such as AES, RC4 and others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9331) Hadoop crypto codec framework and crypto codec implementations

2013-02-26 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated HADOOP-9331:
---

Attachment: (was: Hadoop Crypto Design.docx)

 Hadoop crypto codec framework and crypto codec implementations
 --

 Key: HADOOP-9331
 URL: https://issues.apache.org/jira/browse/HADOOP-9331
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Jerry Chen
 Attachments: Hadoop Crypto Design.pdf

   Original Estimate: 504h
  Remaining Estimate: 504h

 For use cases that deal with sensitive data, we often need to encrypt data to 
 be stored safely at rest. Hadoop common provides a codec framework for 
 compression algorithms. We start here. However because encryption algorithms 
 require some additional configuration and methods for key management, we 
 introduce a crypto codec framework that builds on the compression codec 
 framework. It cleanly distinguishes crypto algorithms from compression 
 algorithms, but shares common interfaces between them where possible, and 
 also carries extended interfaces where necessary to satisfy those needs. We 
 also introduce a generic Key type, and supporting utility methods and 
 classes, as a necessary abstraction for dealing with both Java crypto keys 
 and PGP keys.
 The task for this feature breaks into two parts:
 1. The crypto codec framework, which is based on the compression codec 
 framework and can be shared by all crypto codec implementations.
 2. The codec implementations such as AES, RC4 and others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9333) Hadoop crypto codec framework based on compression codec

2013-02-26 Thread Jerry Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13587914#comment-13587914
 ] 

Jerry Chen commented on HADOOP-9333:


[~tucu00], the crypto codec provides a high-level abstraction for a codec 
that needs a crypto context. For the case where we compress before 
encrypting, a compression codec can be configured as part of the crypto 
codec configuration, and the crypto codec implementation handles the 
compression using the specified compression codec before doing the 
encryption. The decryption process is the reverse.

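A hypothetical sketch of the compress-then-encrypt composition described above; all names and interfaces here are illustrative, not the proposed API:
{noformat}
import java.io.IOException;
import java.io.OutputStream;

public class CompressThenEncrypt {

  /** Minimal stand-in for a codec's ability to wrap an output stream. */
  public interface StreamWrapper {
    OutputStream wrap(OutputStream out) throws IOException;
  }

  // Writes flow: user data -> compressor -> encryptor -> raw stream, so
  // the cipher always sees already-compressed bytes. Reading back
  // reverses the order: decrypt first, then decompress.
  public static OutputStream openForWrite(OutputStream raw,
      StreamWrapper compressor, StreamWrapper encryptor) throws IOException {
    return compressor.wrap(encryptor.wrap(raw));
  }
}
{noformat}
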
 Hadoop crypto codec framework based on compression codec
 

 Key: HADOOP-9333
 URL: https://issues.apache.org/jira/browse/HADOOP-9333
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 3.0.0
Reporter: Jerry Chen
 Attachments: HADOOP-9333.patch, HADOOP-9333.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 The work defined here is to extend the compression codec framework into an 
 encryption framework for handling encryption- and decryption-specific 
 requirements in Hadoop.
 The targets of the encryption framework are:
 1. Establish a common abstraction at the API level that can be shared by 
 all crypto codec implementations as well as by users of the API. 
 2. Provide a foundation for other components in Hadoop, such as MapReduce 
 or HBase, to support encryption features.
 The design document is available in HADOOP-9331.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9331) Hadoop crypto codec framework and crypto codec implementations

2013-02-26 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated HADOOP-9331:
---

Description: 
For use cases that deal with sensitive data, we often need to encrypt data to 
be stored safely at rest. Hadoop common provides a codec framework for 
compression algorithms. We start here. However because encryption algorithms 
require some additional configuration and methods for key management, we 
introduce a crypto codec framework that builds on the compression codec 
framework. It cleanly distinguishes crypto algorithms from compression 
algorithms, but shares common interfaces between them where possible, and also 
carries extended interfaces where necessary to satisfy those needs. We also 
introduce a generic Key type, and supporting utility methods and classes, as a 
necessary abstraction for dealing with both Java crypto keys and PGP keys.

The task for this feature breaks into two parts:
1. The crypto codec framework, which is based on the compression codec 
framework and can be shared by all crypto codec implementations.
2. The codec implementations such as AES and others.

  was:
For use cases that deal with sensitive data, we often need to encrypt data to 
be stored safely at rest. Hadoop common provides a codec framework for 
compression algorithms. We start here. However because encryption algorithms 
require some additional configuration and methods for key management, we 
introduce a crypto codec framework that builds on the compression codec 
framework. It cleanly distinguishes crypto algorithms from compression 
algorithms, but shares common interfaces between them where possible, and also 
carries extended interfaces where necessary to satisfy those needs. We also 
introduce a generic Key type, and supporting utility methods and classes, as a 
necessary abstraction for dealing with both Java crypto keys and PGP keys.

The task for this feature breaks into two parts:
1. The crypto codec framework that based on compression codec which can be 
shared by all crypto codec implementations.
2. The codec implementations such as AES, RC4 and others.


 Hadoop crypto codec framework and crypto codec implementations
 --

 Key: HADOOP-9331
 URL: https://issues.apache.org/jira/browse/HADOOP-9331
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Jerry Chen
 Attachments: Hadoop Crypto Design.pdf

   Original Estimate: 504h
  Remaining Estimate: 504h

 For use cases that deal with sensitive data, we often need to encrypt data to 
 be stored safely at rest. Hadoop common provides a codec framework for 
 compression algorithms. We start here. However because encryption algorithms 
 require some additional configuration and methods for key management, we 
 introduce a crypto codec framework that builds on the compression codec 
 framework. It cleanly distinguishes crypto algorithms from compression 
 algorithms, but shares common interfaces between them where possible, and 
 also carries extended interfaces where necessary to satisfy those needs. We 
 also introduce a generic Key type, and supporting utility methods and 
 classes, as a necessary abstraction for dealing with both Java crypto keys 
 and PGP keys.
 The task for this feature breaks into two parts:
 1. The crypto codec framework, which is based on the compression codec 
 framework and can be shared by all crypto codec implementations.
 2. The codec implementations such as AES and others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9136) Support per-server IPC configuration

2013-02-26 Thread Abhishek Kapoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13588016#comment-13588016
 ] 

Abhishek Kapoor commented on HADOOP-9136:
-

Or we can use a configuration XML for IPC-server-specific elements.

 Support per-server IPC configuration
 

 Key: HADOOP-9136
 URL: https://issues.apache.org/jira/browse/HADOOP-9136
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Kihwal Lee

 Currently different IPC servers in Hadoop use the same config variable 
 names, starting with ipc.server. This makes it difficult and confusing to 
 maintain configuration for different IPC servers.

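One common pattern for this kind of per-server override is to scope the existing keys by the server's listener port, falling back to the global ipc.server.* values when a scoped key is absent. A hypothetical example (the scoping scheme is illustrative, not a committed design):
{noformat}
<!-- hypothetical core-site.xml excerpt (illustrative only) -->
<property>
  <name>ipc.8020.server.listen.queue.size</name>
  <value>256</value>
</property>
<property>
  <name>ipc.8020.server.read.threadpool.size</name>
  <value>5</value>
</property>
{noformat}
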
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira